AI video generation has come a long way in a short time, going from 2-second clips full of morphing and distortion to clips that look almost indistinguishable from real footage. Runway is the latest player in the space to release its next-generation model.
Gen-3 was first revealed two weeks ago, and after some initial testing by creative partners, it’s now available to everyone, at least in its text-to-video version. Image-to-video is coming soon.
Each generation produces a 10- to 11-second photorealistic clip with impressively accurate motion, including depictions of human actions that fit the scenario and environment.
From my initial testing, it’s as good as Sora at some tasks, and unlike OpenAI’s video model, it’s widely available to everyone. It’s also better than Luma Labs’ Dream Machine at motion understanding, though without an image-to-video model it falls short on consistency.
What is it like working with Gen-3?
I’ve been playing with it since launch and have created more than a dozen clips while refining my prompting process. “Less is more” and “be descriptive” are my main takeaways, though Runway has produced a handy guide to prompting Gen-3.
You’ll want to get your prompts right early on, as each Gen-3 generation costs between $1 and $2.40 per 10-second clip. The cheapest option is topping up credits, which cost $10 per 1,000. In contrast, Luma Labs’ basic plan works out at 20 cents per generation.
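To put those numbers in context, here’s a quick back-of-the-envelope calculation (a minimal Python sketch; the 100–240 credit range per clip is inferred from the quoted $1–$2.40 price, not an official Runway rate):

```python
# Rough cost comparison: Runway Gen-3 credit top-ups vs. Luma's basic plan.
# Assumption: top-ups cost $10 per 1,000 credits (from the article), so one
# credit is worth one cent. The credits-per-clip figures are inferred from
# the quoted $1-$2.40 price per 10-second generation.

PRICE_PER_CREDIT = 10 / 1000  # $0.01 per credit

def gen3_cost(credits_per_clip: int) -> float:
    """Dollar cost of one 10-second Gen-3 generation."""
    return credits_per_clip * PRICE_PER_CREDIT

print(f"Cheapest Gen-3 clip:  ${gen3_cost(100):.2f}")  # $1.00
print(f"Priciest Gen-3 clip:  ${gen3_cost(240):.2f}")  # $2.40
print("Luma basic plan clip: $0.20")                   # for comparison
```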
In terms of actually using the video generator, it works exactly like Gen-2. You give it your prompt and wait for it to make the video. You can also use lip sync, which is now integrated into the same interface as video creation, to animate the entire video.
I’ve come up with five prompts that worked particularly well and shared them below. Until image-to-video launches, you’ll have to be very descriptive if you want a distinctive look, but Runway Gen-3’s image quality is impressive. You also only get 500 characters per prompt.
1. Cyber city race
This was one of the last prompts I created, built up through refinement. It’s relatively short, but because of the specific description of the movement and style, Runway executed it exactly as I expected.
Prompt: “Hyperspeed POV: Racing through a neon-lit cyberpunk city, data streams and holograms blur past as we zoom into a digital sphere of rotating code.”
2. Diver
The first version of this had some weird motion blur over the eyes and elongated fingers that needed correcting. Otherwise, it was an impressive and realistic render. The motion blur issue came from the part of the prompt suggesting sunlight was coming through; the prompt was too complicated.
Prompt: “Slow motion tracking shot: A scuba diver explores a vibrant coral reef teeming with colorful fish. Shafts of sunlight penetrate the crystal clear water, creating a dreamlike atmosphere. The camera pans alongside the diver as they encounter a curious sea turtle.”
3. A street view
This is not only one of my favorite videos from Runway Gen-3 Alpha, but one of my favorites from any AI video tool I’ve used over the past year or so. It didn’t follow the prompt exactly, but it captures the sky changing over the course of the day.
Prompt: “Hyperspeed timelapse: The camera climbs from street level to a rooftop, showing the transformation of a city from day to night. Neon signs flicker to life, traffic turns into streams of light, and skyscrapers glow against the darkening sky. The final frame reveals a gorgeous cityscape under a starry night.”
4. The bear
I massively over-wrote this prompt. It was supposed to show the bear becoming more animated towards the end, but I asked it to do too much in 10 seconds.
Prompt: “SLOW MOTION CLOSE-UP TO WIDE ANGLE: A worn, old teddy bear sits motionless on a child’s bed in a dimly lit room. Golden sunlight gradually filters through lace curtains, softly illuminating the bear. As the warm light touches its fur, the bear’s glassy eyes suddenly blink. The camera pulls back as the bear slowly sits up, its movements becoming more fluid and alive.”
I refined the prompt to: “Slow motion from close-up to wide angle: A vintage teddy bear on a child’s bed comes to life as golden sunlight filters through lace curtains, the camera pulling back to reveal the bear sitting up and becoming animated.”
This gave better movement than the original, although it created some artifacts on the bear’s face and still didn’t make it sit up.
5. The old farmer
This was the first prompt I tried with Runway Gen-3 Alpha. It’s too complex and descriptive, as I was trying to replicate something I had created using image-to-video on the Luma Labs Dream Machine. It wasn’t the same, but it was very well done.
Prompt: “Weathered, sun-worn farmer, 70s, surveys scorched fields. Leathery skin, silver beard, squinting eyes beneath a dusty hat. Threadbare shirt, patched overalls. Calloused hands grip a fence post. Golden light illuminates lines of worry, determination. The camera zooms in on the steely gaze. The barren land stretches out; the distant ruins draw closer. Makeshift irrigation, reinforced fences visible. The old man reaches into his hat, discovers hidden technology. The device hums; hope rises.”