Applied AI research firm Runway AI has introduced a new AI system that creates original video clips from text prompts.
The Gen-2 software synthesizes short videos solely from user-provided text descriptions, and Runway describes it as the "first publicly available text-to-video model on the market."
- Runway's Gen-2 model is an upgrade of its existing Gen-1 model, which came out in February.
- That previously released model could take pre-existing videos and edit their structures via text or image prompts — for example, by altering the color of a car in a video.
- Gen-2 doesn't require a source video; a text prompt alone is enough to generate original clips that are 3 seconds long.
- The web-based platform's video clips are also higher fidelity. Like Gen-1, it can still use pre-existing images or videos as a base.
- Gen-2 will initially launch on Discord, with a waitlist expected on the Runway website.
- Runway is best known as the co-creator of the popular Stable Diffusion generative AI model.
- The New York startup is focused on synthetic media and video automation.
- Runway announced a $50M Series C funding round led by new investor Felicis in December 2022.
- At that time, the startup was valued at $500M, up from $200M in December 2021.