
What's FacePoke?
FacePoke is an open-source, real-time facial editing tool developed by Julian Bilcke at Hugging Face. Built on LivePortrait technology and deep learning models (including convolutional neural networks), it lets users edit facial expressions and head orientation in still images in real time through simple drag-and-drop operations: the eyes (opening, closing, gaze direction), eyebrows, and mouth, as well as head movements such as tilting up and down or turning from side to side. The tool has attracted widespread attention in the digital art, content creation, and AI enthusiast communities.
FacePoke Features
- Real-time editing and animation: FacePoke provides a real-time facial expression editor. Users adjust expressions by dragging facial feature points and can add animation effects such as blinking and mouth movements to make static images more vivid and lifelike.
- High-resolution output: The tool supports high-quality image processing, making it suitable for professional content creation and digital art that demand high precision.
- Facial landmark assist: Users can optionally display facial landmarks for more precise, control-point-based editing.
- Open source: FacePoke is hosted on GitHub, where developers are free to modify and extend its functionality.
- Cross-platform deployment: Built on LivePortrait, it supports local and Docker deployment on Linux, as well as online use.
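As a rough sketch of the local Docker route mentioned above (the image name, port mapping, and build steps below are illustrative assumptions, not taken from the project's README; check the repository for the actual instructions):

```shell
# Clone the repository (URL from the project's GitHub page)
git clone https://github.com/jbilcke-hf/FacePoke
cd FacePoke

# Build and run with Docker; the image tag and port are assumed here
docker build -t facepoke .
docker run -it -p 8080:8080 facepoke
```

After the container starts, the editor would typically be reachable in a browser at the mapped local port.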
FacePoke Application Scenarios
- Art projects: FacePoke can animate classic portraits, adding new modes of expression to artistic creation.
- Video production: It can be combined with video generation tools to produce more vivid content. In film and TV post-production, for example, an actor's expression or head pose can be adjusted quickly to fit different shots, saving time and cost.
- Social content: Users can make social media content more engaging by editing facial expressions and creating funny animations or GIFs.
- Personal photo editing: Users can easily fix unsatisfying photos, for example by adjusting head pose, improving an expression, or opening closed eyes.
- Virtual avatars and game development: Developers can make virtual avatars more interactive and realistic, improving the user experience.
How to use FacePoke
Using FacePoke is straightforward; the basic steps are:
- Upload a picture: Choose a clear, front-facing portrait photo and upload it.
- Edit the expression: Change the expression by dragging the various feature points on the face; head orientation can be adjusted the same way.
- Add animation: Optionally add eye and mouth dynamics for a more lively and emotionally expressive image.
- Export and share: Export the edited result and share it on social platforms or other channels.
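The drag-and-drop editing step above can be illustrated with a minimal sketch: a dragged control point (say, an eyelid landmark) moves fully, and its offset is blended into nearby landmarks with a distance-based falloff. The landmark coordinates and the linear falloff radius here are illustrative assumptions, not FacePoke's actual implementation.

```python
import math

def apply_drag(landmarks, point_index, dx, dy, radius=30.0):
    """Move the dragged landmark and blend the offset into its neighbours.

    landmarks: list of (x, y) tuples; point_index: which landmark is dragged;
    dx, dy: drag offset in pixels; radius: falloff radius (assumed value).
    """
    px, py = landmarks[point_index]
    moved = []
    for (x, y) in landmarks:
        dist = math.hypot(x - px, y - py)
        # Linear falloff: the dragged point gets weight 1, points beyond
        # `radius` are unaffected.
        weight = max(0.0, 1.0 - dist / radius)
        moved.append((x + dx * weight, y + dy * weight))
    return moved

# Two illustrative eyelid landmarks 10 px apart; drag the first down by 4 px.
eyelid = [(100.0, 50.0), (110.0, 50.0)]
edited = apply_drag(eyelid, 0, 0.0, 4.0)
```

The dragged point moves the full 4 px, while its neighbour 10 px away moves about two-thirds of that, which is what makes the edit feel smooth rather than tearing the face apart.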
FacePoke project address and online demo
- Project address: The GitHub repository is at https://github.com/jbilcke-hf/FacePoke, where developers can get the project code and freely modify and extend it.
- Online demo: Users can try FacePoke's real-time facial expression editing at https://huggingface.co/spaces/jbilcke-hf/FacePoke without downloading or installing anything.
Related tools

An open-source framework for multi-agent collaboration that simulates the workflow of a software company to achieve efficient collaboration and automation with GPT models on complex tasks.

BLOOM
An open-source multilingual large language model developed by over 1,000 researchers from more than 60 countries and 250 institutions. With 176B parameters and trained on the ROOTS corpus, it supports 46 natural languages and 13 programming languages and aims to advance research on and use of large language models by academia and smaller companies.

OmniGen
A unified image-generation diffusion model that natively supports multiple image-generation tasks with high flexibility and scalability.

AlphaDrive
An autonomous-driving framework that combines vision-language models and reinforcement learning, with strong planning, reasoning, and multimodal capabilities for handling complex and rare traffic scenarios.

Tongyi Qianwen Qwen1.5
A family of large language models from Alibaba with parameter scales from 0.5B to 72B, supporting multilingual processing and long-text comprehension and excelling in several benchmark tests.

Kolors
Kuaishou has open-sourced a text-to-image generation model called Kolors, which has a deep understanding of both English and Chinese and can generate high-quality, photorealistic images.

R1-Omni
Alibaba's open-source multimodal large language model, which uses RLVR technology to perform emotion recognition and provide an interpretable reasoning process across multiple scenarios.

ChatAnyone
A real-time portrait video generation tool developed by Alibaba's DAMO Academy that achieves highly realistic, style-controllable, and efficient real-time portrait video generation through a hierarchical motion diffusion model, suitable for video chat, virtual anchoring, and digital entertainment scenarios.