An Alibaba research unit has developed an AI model that creates short videos based on written prompts.

Called ModelScope, the text-to-video system is available to try out online.

  • The open-source system generates two-second videos from a user's text prompt, though its creations are far from perfect and can be unsettling (a minimal usage sketch follows this list).
  • In addition, some of the data it was trained on included images and videos taken from Shutterstock, causing the stock photo site's logo to appear in some of its generations. 
  • ModelScope comes from DAMO Vision Intelligence Lab, a research unit of Chinese e-commerce giant Alibaba.
  • In September, Meta announced Make-A-Video, which generates brief video clips from text prompts as well as from images or existing videos. It's not available to the public yet.
  • Runway AI has also introduced what it describes as the "first publicly available text-to-video model on the market." The company, which helped create Stable Diffusion, says its Gen-2 model creates original videos that are three seconds long.
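
The article doesn't show how the open-source release is actually run, but the model weights were published and can be loaded with standard tooling. The sketch below is a rough illustration using the Hugging Face diffusers library; the model ID (damo-vilab/text-to-video-ms-1.7b), prompt, and frame count are assumptions for illustration, not details confirmed by the piece.

```python
# Minimal sketch: generating a short clip with the ModelScope text-to-video
# model via Hugging Face diffusers. Model ID, prompt, and frame count are
# assumptions, not details from the article above.
import torch
from diffusers import DiffusionPipeline
from diffusers.utils import export_to_video

pipe = DiffusionPipeline.from_pretrained(
    "damo-vilab/text-to-video-ms-1.7b",  # assumed Hugging Face model ID
    torch_dtype=torch.float16,
    variant="fp16",
)
pipe = pipe.to("cuda")

prompt = "An astronaut riding a horse on the moon"
# On recent diffusers versions the output is batched; use .frames[0] there.
video_frames = pipe(prompt, num_inference_steps=25, num_frames=16).frames
video_path = export_to_video(video_frames, output_video_path="modelscope_sample.mp4")
print(f"Saved clip to {video_path}")
```

At the default export frame rate, 16 frames works out to roughly the two-second clips described above; longer prompts or higher frame counts increase generation time and memory use.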
