Another big breakthrough in AI video: one person and one camera can now capture cinematic-quality animated performances
Act-One is integrated into Runway's video generation model, Gen-3 Alpha. Users can record videos of themselves or others with a phone or camera, then use Act-One to transfer the recorded subject's facial expressions onto AI-generated characters.
According to Runway's official blog, the company began gradually rolling out Act-One to existing users yesterday and plans to make it available to all users soon.


I. Act-One simplifies the complex traditional 3D animation pipeline and accurately captures actors' micro-expressions
Since the first text-to-video models were introduced in late 2022, AI video technology has made significant advances in realism, resolution, fidelity, prompt adherence (i.e., how well the AI-generated video matches the user-provided descriptions or examples), and generation volume.
However, an ongoing challenge for many AI video creators has been achieving realistic, controllable facial expressions in AI-generated characters; most existing solutions are quite limited in this regard.
Act-One now offers a solution to this dilemma and marks a major step forward in using generative models to reproduce live-action and animated content.
All users with a Runway account can try the new feature to create videos with the Gen-3 Alpha video generation model, although it is currently only available to users with sufficient credits.

Launched earlier this year, Gen-3 Alpha supports a variety of input methods, including text-to-video, image-to-video, and video-to-video. Users can describe a scene, upload an image or video, or combine these elements, and Gen-3 Alpha generates an entirely new video from the input.
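To make those three input modes concrete, here is a minimal sketch of what a programmatic call could look like. The endpoint URL, field names, and model identifier below are assumptions for illustration only; Act-One and Gen-3 Alpha ship through Runway's web app, and any real API may look quite different.

```python
# A minimal sketch of Gen-3 Alpha's three input modes, assuming a
# hypothetical REST endpoint and field names -- illustrative only.
import requests

API_URL = "https://api.runwayml.example/v1/generate"  # hypothetical endpoint
API_KEY = "YOUR_API_KEY"  # placeholder credential

def generate_video(prompt=None, image_path=None, video_path=None):
    """Request a generation from text, an image, a video, or a mix."""
    data = {"model": "gen3-alpha"}  # hypothetical model identifier
    files = {}
    if prompt:
        data["prompt"] = prompt                  # text-to-video
    if image_path:
        files["image"] = open(image_path, "rb")  # image-to-video
    if video_path:
        files["video"] = open(video_path, "rb")  # video-to-video
    try:
        resp = requests.post(API_URL, data=data, files=files,
                             headers={"Authorization": f"Bearer {API_KEY}"})
        resp.raise_for_status()
        return resp.content  # generated video bytes
    finally:
        for f in files.values():
            f.close()

# Example: combine a text description with a reference image.
clip = generate_video(prompt="a rainy neon street at night",
                      image_path="reference.png")
```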
Although Act-One's availability is still limited, it has already drawn high praise from a number of AI video creators. Additionally, Runway recently announced a partnership with Lionsgate, a leading Hollywood studio, to develop customized AI video generation models based on Lionsgate's library of more than 20,000 titles.
Traditional facial animation is often a complex and time-consuming process involving motion capture equipment, manual facial rigging, and multiple reference shots. The goal of these techniques is to translate an actor's performance into a 3D model suitable for the animation pipeline.
Those interested in filmmaking may have seen the intricacies of this process on set or in behind-the-scenes footage of effects-heavy motion capture films such as the Lord of the Rings series and Avatar. In Rise of the Planet of the Apes, for example, actors were covered in ping-pong-ball markers, had tracking dots applied to their faces, and wore head-mounted camera rigs.

▲The actor's face is covered in markers and partially obscured by a head-mounted rig. (Source: YouTube)
According to VentureBeat, it was this need to accurately model complex facial expressions that prompted director David Fincher and his team to develop a new 3D modeling process for The Curious Case of Benjamin Button, which went on to win an Oscar for its visual effects.
The main difficulty with traditional 3D motion capture is preserving the emotion and nuanced expressions of the reference performance in the digital character. To overcome this, many AI startups have worked in recent years to reduce the equipment needed for accurate motion capture. For example, Move AI, a 3D motion capture app that raised a $10 million seed round last year, has introduced a single-device capture feature that lets users capture full-body and larger-scale movement with just a smartphone camera or digital camera.
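The general idea behind single-camera, markerless capture can be illustrated with off-the-shelf tools. The sketch below uses Google's MediaPipe Face Mesh to track facial landmarks from an ordinary phone video; this is not Move AI's or Runway's actual pipeline, just a small demonstration of how a plain video can yield per-frame facial geometry without markers or rigs.

```python
# Illustrative only: markerless facial landmark tracking from a single
# video using MediaPipe Face Mesh -- the general idea behind
# single-camera capture, not any vendor's actual pipeline.
import cv2
import mediapipe as mp

face_mesh = mp.solutions.face_mesh.FaceMesh(
    static_image_mode=False,  # treat the input as a video stream
    max_num_faces=1,
    refine_landmarks=True,    # finer landmarks around the eyes and lips
)

cap = cv2.VideoCapture("actor_performance.mp4")  # any phone or webcam clip
frames_landmarks = []
while True:
    ok, frame = cap.read()
    if not ok:
        break
    # MediaPipe expects RGB; OpenCV decodes frames as BGR.
    results = face_mesh.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
    if results.multi_face_landmarks:
        # 478 normalized (x, y, z) points per frame with refined landmarks.
        pts = [(lm.x, lm.y, lm.z)
               for lm in results.multi_face_landmarks[0].landmark]
        frames_landmarks.append(pts)
cap.release()
print(f"tracked {len(frames_landmarks)} frames of facial landmarks")
```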
In contrast, Act-One makes this complex process far simpler. It focuses on facial expression modeling, allowing users to accurately capture an actor's performance, including eye movement, micro-expressions, and subtle rhythms, with a simple camera setup. Creators can then animate characters in a variety of styles and designs without motion capture equipment or character rigging.

▲Animate the generated character with a simple video of the actor performing. (Source: Runway)
As Runway states on its X account, "Act-One is able to transform performances from a single input video into a myriad of different character designs and multiple styles."
Act-One works with a wide range of reference images, preserving realistic facial expressions and accurately translating performances onto characters with different proportions. This versatility opens up new possibilities for creative character design and animation.

▲Actors can be captured with a simple home video camera and their performances mapped onto generated characters; voice alteration effects can also be added. (Source: Runway)
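Put together, the Act-One workflow described above amounts to pairing a driving performance video with a character reference image. The sketch below shows what such a call could look like; the endpoint and field names are hypothetical, since Act-One itself ships inside Runway's web interface.

```python
# Hypothetical sketch of an Act-One-style expression transfer: a driving
# performance video plus a character reference image. The endpoint and
# field names are assumptions for illustration.
import requests

API_URL = "https://api.runwayml.example/v1/act-one"  # hypothetical endpoint
API_KEY = "YOUR_API_KEY"  # placeholder credential

def animate_character(driving_video: str, character_image: str) -> bytes:
    """Map the actor's facial performance onto a generated character."""
    with open(driving_video, "rb") as vid, open(character_image, "rb") as img:
        resp = requests.post(
            API_URL,
            headers={"Authorization": f"Bearer {API_KEY}"},
            files={"driving_video": vid, "character_image": img},
        )
    resp.raise_for_status()
    return resp.content  # the animated character clip

clip = animate_character("actor_take01.mp4", "character_design.png")
```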
II. Act-One's other advantages: cinematic realism across camera angles, broader narrative capabilities, and protection of public figures' rights
One of Act-One's great strengths is its ability to deliver cinematic, lifelike output across a variety of camera angles and focal lengths while maintaining high-fidelity facial animation from every angle. This flexibility strengthens creators' ability to tell emotionally resonant stories through character performance, something that previously required expensive equipment and a complex multi-step workflow.
Runway has previously supported video-to-video AI conversion, allowing users to upload their own videos and have them "redesigned" by Gen-3 Alpha or earlier Runway video models such as Gen-2. The new Act-One feature is optimized specifically for face mapping and expression effects; in an interview with VentureBeat, Runway co-founder and CEO Cristóbal Valenzuela cited consistency and performance as Act-One's standout qualities.

▲Capture live action and output realistic movie characters. (Source: Runway)
Additionally, Runway has been exploring how Act-One can generate multi-turn, expressive dialogue scenes, something that has been very challenging for generative video models in the past.
Users can now create narrative content with nothing more than a consumer camera and a single actor reading and performing different roles from a script; the model generates a distinct output for each role. This capability promises to change how narrative content is made, especially in independent filmmaking and digital media, where high-end production resources are often lacking.
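A one-actor, multi-role session could then be scripted as a simple loop over takes, as sketched below. This reuses the same hypothetical endpoint and field names as the earlier sketch; the role names and file paths are invented for illustration.

```python
# Sketch of the one-actor, multi-role workflow: each take of the actor
# reading a different role drives a different character design. The
# endpoint and field names are hypothetical, as in the earlier sketch.
import requests

API_URL = "https://api.runwayml.example/v1/act-one"  # hypothetical endpoint
API_KEY = "YOUR_API_KEY"  # placeholder credential

# One take per role: (actor's driving video, character reference image).
takes = {
    "detective": ("take_detective.mp4", "detective_design.png"),
    "witness":   ("take_witness.mp4",   "witness_design.png"),
}

for role, (video_path, image_path) in takes.items():
    with open(video_path, "rb") as vid, open(image_path, "rb") as img:
        resp = requests.post(
            API_URL,
            headers={"Authorization": f"Bearer {API_KEY}"},
            files={"driving_video": vid, "character_image": img},
        )
    resp.raise_for_status()
    with open(f"{role}_scene.mp4", "wb") as out:
        out.write(resp.content)  # one generated performance per role
```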
The industry's approach to generative models has shifted, Valenzuela said publicly on X: people are past questioning whether a generative model can produce consistent video. A good model has become the new baseline; what matters is how you use the model, how you think about its applications and use cases, and what you ultimately build.
▲Multi-camera dialogue scenes edited from a single actor and camera setup, driving the performances of two uniquely generated characters. (Source: Runway)
Act-One, like all of Runway's releases, builds on the company's foundation of safely generated media and ships with a comprehensive set of content moderation and safety precautions. These include detecting and blocking attempts to generate content containing public figures, technical measures to verify that end users are authorized to use the voices they create with custom voice tools, and continuous monitoring to detect and mitigate potential misuse of the tool and platform.
Conclusion: Act-One breaks through barriers in facial performance capture and pushes AI video creativity to new heights
Act-One's breakthrough in AI facial performance capture should help Runway stand out against a growing field of competitors, including AI video startup Luma AI, Hailuo, the AI video generator from Chinese AI startup MiniMax, Kuaishou's AI video model Kling, and Mochi 1, the open-source video generation model that AI video startup Genmo launched just yesterday.
By lowering the technical barriers of traditional character animation, Runway promises to inspire new creativity in digital media. With Act-One, complex animation techniques become more accessible, and as the feature rolls out more widely, we may see many artists, filmmakers, and other creators use this new tool to bring their ideas to life.