Runway launches AI video generation model Gen-4: a coherent film from a single photo!

Runway, a company specializing in video generation, has officially launched its next-generation model family, Gen-4. The launch arrives with a story that one director had been nurturing for more than ten years but had never been able to realize with AI.

Until now, creating a video like this with AI meant overcoming a series of challenges: consistent character appearance, subtle emotional expression, realistic physics, narrative continuity, and a unified overall style.

It took the arrival of Runway Gen-4, the newest model from the veteran AI video generation company and billed as the first to achieve "world consistency" (users can create coherent worlds with consistent environments, objects, locations, and characters), for that director's dream to come to life.

Runway Gen-4 is now available to all paid subscribers and enterprise customers, and the Runway team says references for character, location, and object consistency will be available soon.

The official website shows that Gen-4's core highlights include:

  • World Consistency: Maintains consistent characters, scenes, and objects across multiple shots without additional fine-tuning.
  • Reference Images: Generates consistent characters or objects under different lighting and in different scenes from a single reference image.
  • Scene Coverage: Recreates and captures a scene from any angle, given only a reference image and a description.
  • Physical Effects: Simulates real-world physics to render realistic lighting, shadows, and motion.
  • Video Quality: Strong prompt comprehension and world-building ability.
  • Generative Visual Effects: Fast, controllable video effects that blend seamlessly with live-action footage and traditional VFX.

Runway co-founder and CEO Cristóbal Valenzuela Barrera said in a post on X:

Our next-generation family of AI models for media generation and world consistency is here. Welcome to Gen-4. What makes this model special is that we built it from the ground up for a single goal: to tell great stories.

As mentioned at the outset, Gen-4's biggest highlight is "world consistency": the ability to generate characters, scenes, and objects accurately across multiple shots while keeping their visual characteristics consistent.

Users simply set the overall style and look, and the model maintains a coherent world while preserving the distinctive style, atmosphere, and cinematic texture of each shot. All of this happens without fine-tuning or additional training.

By combining visual references with text instructions, Gen-4 generates images and videos that stay highly consistent in style, subject, and scene, dramatically simplifying professional content creation. Users can currently generate 5- and 10-second 720p video clips.
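
For a sense of what this image-plus-text workflow can look like programmatically, here is a minimal Python sketch. The endpoint URL, environment variable, model name, and field names (promptImage, promptText, duration, ratio) are assumptions made for illustration, loosely modeled on how a video-generation REST API might be called; consult Runway's official documentation for the real interface.

```python
# Minimal sketch of an image-plus-text-to-video request.
# Endpoint, headers, and field names are ASSUMPTIONS for illustration only.
import os
import time
import requests

API_URL = "https://api.example-runway-host.com/v1/image_to_video"  # hypothetical endpoint
API_KEY = os.environ["RUNWAY_API_KEY"]  # hypothetical environment variable

def generate_clip(image_url: str, prompt: str, seconds: int = 5) -> dict:
    """Submit one reference image plus a text prompt; return the task record."""
    resp = requests.post(
        API_URL,
        headers={"Authorization": f"Bearer {API_KEY}"},
        json={
            "model": "gen4",           # assumed model identifier
            "promptImage": image_url,  # the visual reference
            "promptText": prompt,      # the textual instruction
            "duration": seconds,       # article: 5- or 10-second clips
            "ratio": "1280:720",       # article: 720p output
        },
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()

def wait_for_result(task_url: str, poll_seconds: int = 10) -> dict:
    """Poll a (hypothetical) task URL until generation succeeds or fails."""
    while True:
        task = requests.get(
            task_url,
            headers={"Authorization": f"Bearer {API_KEY}"},
            timeout=30,
        ).json()
        if task.get("status") in ("SUCCEEDED", "FAILED"):
            return task
        time.sleep(poll_seconds)
```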

To show Gen-4's potential, the Runway team has crafted a series of short films.

The opening shot sets the tone, feel, and atmosphere of The Lonely Little Flame for the entire short. One scene features a skunk searching for something, and with Gen-4 the creator can guide the subject directly through the scene.

They set two key markers to precisely control the skunk's movement path: it first moves to one side of the scene and then turns back, successfully creating a dynamic sense of searching.

"Like all great animation, you can see the richness of expression in character design and scene movement," explains a team member, "The same character remains consistent across scenes and lighting conditions, while being able to express different emotions and movements."

To make this film, a member of the Runway team generated hundreds of individual video clips over the course of several hours, then edited them into one coherent cut. Sound effects were added separately.
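
The "hundreds of clips, edited into one cut" workflow is easy to picture as a small batch step. The sketch below covers only the stitching: it assumes the generated clips have already been downloaded into a clips/ folder with made-up file names, and joins them with ffmpeg's concat demuxer; the sound pass would be layered on afterwards, as the team describes.

```python
# Sketch: stitch a folder of generated clips into one rough cut with
# ffmpeg's concat demuxer (no re-encoding). Folder and file names are
# assumptions for illustration.
import subprocess
from pathlib import Path

clips_dir = Path("clips")                     # hypothetical folder of Gen-4 outputs
clips = sorted(clips_dir.glob("shot_*.mp4"))  # e.g. shot_000.mp4, shot_001.mp4, ...

# The concat demuxer resolves paths relative to the list file's location.
list_file = clips_dir / "list.txt"
list_file.write_text("".join(f"file '{c.name}'\n" for c in clips))

# Join the clips in order without re-encoding; audio is added separately.
subprocess.run(
    ["ffmpeg", "-f", "concat", "-safe", "0",
     "-i", str(list_file), "-c", "copy", "rough_cut.mp4"],
    check=True,
)
```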

In an interview with Bloomberg, Runway co-founder and CEO Cristóbal Valenzuela Barrera said the whole process took a few days.

While traditional visual effects production often requires time-consuming modeling, rendering and post-production tweaking, Runway Gen-4 introduces Generative Visual Effects (GVFX) technology that dramatically shortens the process with AI-driven generative capabilities.

The core of GVFX is its efficiency and adaptability.

Users need only provide simple visual references or text descriptions, such as character movements, scene ambience, or a specific effect, and Gen-4 can generate high-quality video clips in a short time.

A specific use case is the 'wooden toy' scenario that the Runway team showed in their demo.

A Runway team member took out a wooden toy, photographed it with a cell phone, and imported the photo into Gen-4 as a reference, uploading a previously captured New York City street scene as the backdrop. With the simple description "Wooden toy leaning against the sidewalk on a New York street," Gen-4 quickly generated four images.

Pick one of these images and add an animated effect of pedestrians walking past the toy. "You can put this toy anywhere: in the mountains, in the desert, basically wherever you want."
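
As a rough illustration of that photo-plus-description loop, the sketch below sends two reference photos (the toy and the street backdrop) and a short prompt to a hypothetical image-generation endpoint and requests four candidates. Every URL, field name, and model identifier here is an assumption made for the example, not Runway's documented API.

```python
# Sketch of the wooden-toy workflow: two reference photos plus a short
# prompt, four candidate images back. Endpoint and field names are
# ASSUMPTIONS for illustration, not Runway's documented API.
import os
import requests

API_URL = "https://api.example-runway-host.com/v1/text_to_image"  # hypothetical endpoint
HEADERS = {"Authorization": f"Bearer {os.environ['RUNWAY_API_KEY']}"}  # hypothetical env var

payload = {
    "model": "gen4_image",  # assumed model identifier
    "promptText": "Wooden toy leaning against the sidewalk on a New York street",
    "referenceImages": [    # assumed field: subject photo plus backdrop photo
        {"uri": "https://example.com/wooden_toy.jpg", "tag": "toy"},
        {"uri": "https://example.com/nyc_street.jpg", "tag": "backdrop"},
    ],
    "numImages": 4,         # the demo returned four candidate images
}

response = requests.post(API_URL, headers=HEADERS, json=payload, timeout=30)
response.raise_for_status()

# In practice you would poll the returned task until the images are ready,
# pick the best candidate, and feed it back into image-to-video to add the
# pedestrian animation described above.
print(response.json())
```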

The Herd is a heart-wrenching short film about a young man being chased through a herd of cattle at night. With only Gen-4 and a few simple image references, the Runway team constructed each shot of the character and the foggy cattle scene.

Runway also used its Act-One technology to further enhance the expressiveness and coherence of the footage.

In this short film, the production team highlighted two technical achievements: the character's reflection visible in a cow's eye, and the realistic physics of flames spreading across the grass.

This example shows how Gen-4 can maintain consistent characters, objects, and environments across multiple scenes. Creators can start with a well-designed character, establish the atmosphere and look, and then generate entirely new images with variations for different shots and perspectives.

Gen-4's understanding of the real physical world has reached new heights.

Combining real photographs of different areas of New York with real photographs of animals, the New York short film clearly demonstrates Gen-4's understanding of physics, the weight of animals, how they move across surfaces, and the way they interact with their environment.

Complex creative works often start with a simple idea.

Video creation can snowball in much the same way: the Runway team cited a music video that began as an ordinary image of a monkey and eventually grew into a full music video, rich in content and tightly paced.

The Runway CEO made a pointed assertion last year: "AI is becoming infrastructure, much like electricity or the Internet. Calling yourself an AI company today is like calling yourself an Internet company in 2024. It doesn't make sense because everyone uses it: every company uses the Internet, and every company will use AI."

Just as the electricity revolution was ultimately not about power plants but about how lights, TVs, and refrigerators changed lives, Runway, in his view, is not an AI company but a media and entertainment company.

Runway has previously generated sets for the US TV series House of David, as well as commercials for Puma.

The film and television industry has long been a key target for AI video generation tools. In September last year, Runway entered a partnership with the well-known studio Lionsgate, the first major film company to sign a direct agreement with an AI video model provider.

Runway will leverage Lionsgate's library of more than 20,000 film and television titles, including acclaimed films such as The Hunger Games, to build a customized AI video production and editing model. The model will be used for storyboarding, background creation and special effects.

A good fountain pen doesn't make a writer think about the physics of ink flow, and a great AI creation tool shouldn't force a director to think about algorithmic details.

Valenzuela also noted that the company trained the model with more reference to film-industry terminology than before, so that writing prompts feels more natural for the filmmakers using it.

We will follow up with hands-on testing of Gen-4, but however it performs this time around, the trend is indisputable: generative AI video tools are disrupting the film and TV industry as we know it.

DreamWorks co-founder Jeffrey Katzenberg has even said that AI could eliminate 90% of the jobs in animated film production.

Many steps of the traditional animation pipeline, such as in-between frame drawing, background design, and color touch-ups, may be greatly simplified or replaced by AI. At the same time, new specialized positions are emerging: future production rosters may include roles such as AI prompt engineer, visual development director, and AI-human collaboration choreographer.

Basic generative rendering of video is roughly the level of today's AI video technology. In promoting Gen-4, Runway instead emphasizes AI's ability to tell real stories and to produce content that is beautiful, entertaining, and emotionally resonant.

Perhaps it's only when the tools become simple enough that creators can really focus on what's important - telling stories that touch people's hearts.

How do I use Gen-4?

The good news is that Gen-4's image-to-video feature has now been rolled out to all Runway paid-plan users and enterprise customers, and the highly anticipated References feature will also be online soon.

Trial address: https://app.runwayml.com/login

For more technical details and background on Gen-4's development, you can visit the official release page: http://runwayml.com/research/introducing-runway-gen-4
