Runway researchers have presented a video diffusion model that enables users to edit videos based on visual or textual descriptions of the desired output.
Users can generate new videos from existing ones using words and images. The developers write:
Our model is trained jointly on images and videos, which also exposes explicit control of temporal consistency through a novel guidance method. Our experiments demonstrate a wide variety of successes: fine-grained control over output characteristics, customization based on a few reference images, and a strong user preference for results produced by our model.
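The "guidance method" mentioned above builds on classifier-free guidance, a standard technique in diffusion models: the model's conditional and unconditional noise predictions are combined, with a scale factor controlling how strongly the output follows the conditioning (text or image). A minimal sketch of that general idea, using NumPy arrays as stand-in noise predictions (this is the generic formulation, not Runway's exact method, whose details are in their paper):

```python
import numpy as np

def classifier_free_guidance(eps_uncond, eps_cond, guidance_scale):
    """Combine unconditional and conditional noise predictions.

    guidance_scale = 1.0 reproduces the conditional prediction;
    larger values push the sample harder toward the conditioning
    (e.g. a text prompt), at the cost of diversity.
    """
    return eps_uncond + guidance_scale * (eps_cond - eps_uncond)

# Toy stand-ins for per-frame noise predictions from a denoiser.
eps_uncond = np.zeros(4)
eps_cond = np.ones(4)

# scale 1.0 -> purely conditional; scale 3.0 -> amplified conditioning
mild = classifier_free_guidance(eps_uncond, eps_cond, 1.0)
strong = classifier_free_guidance(eps_uncond, eps_cond, 3.0)
```

At each denoising step the guided prediction replaces the raw conditional one; in a video model an analogous scale can trade off adherence to the edit prompt against frame-to-frame temporal consistency.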
There are five possible modes:

1. Stylization: transfer the style of an image or prompt to every frame of a video
2. Storyboard: turn mockups into fully stylized and animated renders
3. Mask: isolate subjects in a video and modify them with text prompts
4. Render: turn untextured renders into realistic outputs
5. Customization: fine-tune the model on specific subjects for higher-fidelity results
Recently, Karen X. Cheng, an award-winning director whose videos have over 500 million views, tested the tool and published her results.