Tencent Hunyuan 3D-Omni and 3D-Part Released and Open-Sourced: 3D Generation Enters the Era of Precise Controllability
On September 26, Tencent announced that the Hunyuan 3D generative model family has added new members: Hunyuan 3D-Omni and Hunyuan 3D-Part, both released as open source. This is also Tencent Hunyuan's first release of new-generation models for controllable 3D generation. These breakthroughs make AI 3D modeling more practical and accelerate the adoption of 3D generative models in real production pipelines such as gaming, 3D printing, and AR/VR.
As the industry's first 3D generation framework to support multi-conditional control, Hunyuan 3D-Omni breaks through the limitations of image-only input, accepting multiple input modalities for fine-grained control over object geometry, topology, and pose. Hunyuan 3D-Part enables flexible, controllable part splitting and generation, making it as easy to decompose and generate 3D models as it is to play with Lego.
The inference code and weights of Hunyuan 3D-Omni and Hunyuan 3D-Part are fully open-sourced and free to use, facilitating academic research and industrial deployment and supporting the community's exploration of controllable 3D generation.
Hunyuan 3D-Omni: The "ControlNet" of 3D, with Multi-Conditional Control in One Place
In recent years, generative models based on native 3D representations (e.g., point clouds and voxels) have emerged rapidly. However, current mainstream methods rely mainly on image input, which is susceptible to single-view occlusion and lighting interference, leading to insufficient geometric accuracy. They also struggle to finely tune scale, pose, and detail, and cannot adapt to multimodal inputs, limiting their usefulness in complex scenes.
Built on the open-source Hunyuan 3D 2.1 model, Hunyuan 3D-Omni acts like the "ControlNet of the 3D world". With a lightweight unified control encoder and a progressive, difficulty-aware training strategy, Hunyuan 3D-Omni can incorporate up to four types of control conditions, significantly improving the controllability and quality of generation:
- Skeleton: adding skeleton data alongside a single-image input allows precise adjustment of a generated character asset's pose, ideal for animation or virtual-character design;
- Point cloud: injecting a complete object point cloud, or a partial point cloud projected from a depth map, removes the visual ambiguity of a single image and enhances geometric detail, making 3D models more realistic and reliable;
- Bounding box: allows fine-tuning of the generated asset's aspect ratio so that results align with expectations;
- Voxels: precise control over an object's structure so that the generated 3D asset meets geometric-detail requirements.
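The four control conditions above can be supplied individually or combined freely. A minimal Python sketch of the idea, using a hypothetical conditioning container (illustrative only, not the actual Hunyuan 3D-Omni API):

```python
from dataclasses import dataclass
from typing import Optional

# Hypothetical sketch: class and field names are illustrative, not the
# real Hunyuan3D-Omni interface. The point is that each optional signal
# (skeleton, point cloud, bounding box, voxels) rides alongside the
# single input image and can be mixed with the others at will.

@dataclass
class OmniConditions:
    image: str                          # path to the single input image
    skeleton: Optional[list] = None     # joint positions for pose control
    point_cloud: Optional[list] = None  # full or depth-projected points
    bbox: Optional[tuple] = None        # (w, h, d) aspect-ratio control
    voxels: Optional[list] = None       # coarse occupancy grid

    def active(self) -> list:
        """Return which of the four control signals are present."""
        names = ["skeleton", "point_cloud", "bbox", "voxels"]
        return [n for n in names if getattr(self, n) is not None]

# Combining conditions, e.g. skeleton + point cloud on one image:
cond = OmniConditions(
    image="character.png",
    skeleton=[(0.0, 1.6, 0.0), (0.0, 1.2, 0.0)],  # toy joint list
    point_cloud=[(0.1, 0.2, 0.3)],
)
print(cond.active())  # ['skeleton', 'point_cloud']
```

In the real model, each present condition would be routed through the unified control encoder; the sketch only shows how optional signals combine into a single conditioning payload.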

Skeleton control of the character's pose

Point-cloud control supplements 3D information

Bounding boxes control different scales

Bounding-box control solves the "paper-thin" geometry produced from a single image

Voxels control the structure of objects
These control conditions can be flexibly combined and support input sources such as depth cameras, LiDAR, or reconstruction models. Community developers can also easily extend the open-source model with more creative conditions, such as additional character-pose controls.

Hunyuan 3D-Omni marks a key step in the shift of 3D generation from "image-dominated" to "multimodal and controllable". Its multimodal fusion not only improves the controllability and robustness of generation but also paves the way for downstream applications. Imagine a virtual reality project in which you use skeletal signals to control a character's dynamic pose, then layer in point-cloud details to make the model more realistic, all iterated rapidly on local hardware without expensive equipment.
Hunyuan 3D-Part: A New Paradigm for Part Generation, Making 3D Models as "Detachable" as Lego
Complementing the precise generation of Hunyuan 3D-Omni, Hunyuan 3D-Part focuses on solving the "disassembly problem" of 3D generation.
Traditional algorithms tend to output inseparable "all-in-one" models, but in practice detachable models suit far more scenarios: in game production, splitting a car model into a body and independent wheels makes it easier to bind the rolling logic; in 3D printing, printing component by component like building blocks avoids the risk of large parts deforming.
Hunyuan 3D-Part is composed of P3-SAM, the industry's first native 3D segmentation model, and X-Part, an industrial-grade part generation model. Together they achieve high-precision, controllable part-based 3D generation for the first time, support automatic generation of 50+ parts, and produce models with high geometric quality, editability, and reasonable structure, making them easier to edit, produce, and apply.

Part segmentation results from P3-SAM

Part generation results from X-Part
After generating the overall mesh with a Hunyuan 3D 2.5 or 3.0 model, users obtain semantic features and bounding boxes through P3-SAM for automatic, accurate part segmentation; X-Part then takes the baton, decomposing the overall mesh into independent parts and outputting high-fidelity, structurally consistent part geometry while maintaining flexibility and controllability.
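The two-stage handoff described above can be sketched as follows; the function names and data shapes are illustrative stand-ins, not the real P3-SAM or X-Part interfaces:

```python
# Hypothetical sketch of the segment-then-generate flow: stage 1 labels
# regions of the whole mesh, stage 2 turns each region into its own part.

def p3_sam_segment(mesh_vertices):
    """Stage 1 (stand-in for P3-SAM): group vertices by part label and
    compute a simple per-part extent along the x-axis."""
    parts = {}
    for v in mesh_vertices:
        label = v["part"]  # in reality, predicted from native 3D features
        parts.setdefault(label, []).append(v["pos"])
    bboxes = {
        label: (min(p[0] for p in pts), max(p[0] for p in pts))
        for label, pts in parts.items()
    }
    return parts, bboxes

def x_part_generate(parts, bboxes):
    """Stage 2 (stand-in for X-Part): emit one independent component
    per segmented region."""
    return [{"label": label, "bbox": bboxes[label], "n_points": len(pts)}
            for label, pts in parts.items()]

# Toy "mesh": a car body and one wheel
mesh = [
    {"part": "body",  "pos": (0.0, 0.5, 0.0)},
    {"part": "body",  "pos": (2.0, 0.5, 0.0)},
    {"part": "wheel", "pos": (0.3, 0.1, 0.4)},
]
parts, bboxes = p3_sam_segment(mesh)
components = x_part_generate(parts, bboxes)
print([c["label"] for c in components])  # ['body', 'wheel']
```

The design choice mirrored here is the clean split of responsibilities: segmentation produces labels and boxes, and generation consumes them, which is what keeps the pipeline flexible and controllable.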

Overall workflow of Hunyuan3D-Part part splitting
In benchmarks such as PartObj-Tiny, PartObj-Tiny-WT, and PartNetE, the segmentation and generation results of Hunyuan3D-Part significantly outperform existing work, reflecting its lead in accuracy and quality.

Comparison of X-Part part generation results with open-source work
The models are also live on Hunyuan 3D Studio and can be used for free through Tencent's Hunyuan 3D Creation Engine.
Fully Embracing Open Source to Accelerate Adoption Across Industries
Over the past year, Tencent's Hunyuan large models have iterated rapidly, releasing more than 30 new models and fully embracing open source: language, image, video, and 3D generation models have been open-sourced across all modalities and multiple sizes, repeatedly topping the HuggingFace trending model list. The Hunyuan 3D series is the world's most popular family of open-source 3D models, with over 2.6 million community downloads.
Hunyuan 3D 3.0, just released at the 2025 Tencent Global Digital Ecosystem Conference, generates models with 3x higher modeling precision and geometric resolution up to 1536³, supporting ultra-high-definition modeling with 3.6 billion voxels, overcoming the challenge of sculpting facial detail and significantly enhancing detail expression. For 3D designers, game developers, modelers, and similar users, Tencent also launched Hunyuan 3D Studio, a professional-grade AI workbench that integrates the entire 3D production process through AI for more controllable and efficient 3D creation.
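As a quick arithmetic check on the figures above, a 1536³ voxel grid does indeed contain roughly 3.6 billion cells:

```python
# Sanity-check the quoted resolution: 1536^3 voxels ≈ 3.6 billion
resolution = 1536
voxels = resolution ** 3
print(voxels)                   # 3623878656
print(round(voxels / 1e9, 1))  # 3.6
```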
With ultra-high-definition modeling and high-quality generation, Tencent Hunyuan is accelerating the application of 3D technology across industries. Leading 3D-printing manufacturers such as Bambu Lab and Creality have integrated Tencent Hunyuan 3D models, significantly improving modeling efficiency. Lovart, the world's first design agent, also prefers Tencent Hunyuan 3D for 3D generation tasks, expanding innovative applications in the design field.
Hunyuan 3D-Omni:
Code:https://github.com/Tencent-Hunyuan/Hunyuan3D-Omni
Weights:https://huggingface.co/tencent/Hunyuan3D-Omni
Technical report:https://arxiv.org/pdf/2509.21245
Hunyuan 3D-Part: