
What's Chitu?
Chitu ("Red Rabbit") is a large model inference engine jointly developed by the Institute of High Performance Computing at Tsinghua University and Qingcheng Jizhi, a Tsinghua-affiliated startup. The engine is designed to address the hardware dependency and high cost that currently hinder AI model deployment. Through innovation in the underlying technology, Chitu runs FP8-precision models natively on non-NVIDIA Hopper-architecture GPUs and a variety of domestic chips, significantly lowering the threshold and cost for enterprises to deploy AI models. Chitu also supports full-scenario scalability and adapts to a wide range of domestic and foreign chips, providing strong support for the popularization and application of AI technology.
Chitu's Technical Characteristics
- Hardware compatibility:
  - The Chitu inference engine is the first to run FP8-precision models natively on non-NVIDIA Hopper-architecture GPUs and a variety of domestic chips (a dequantization sketch follows this list).
  - Breaking the FP8 model's dependence on NVIDIA Hopper-architecture hardware (e.g., H100/H200) opens new opportunities for the wide adoption and ecosystem building of domestic AI chips.
- Performance optimization:
  - In tests on an A800 cluster, the Chitu engine achieved a 3.15x improvement in inference speed while using 50% fewer GPUs, significantly reducing hardware costs for organizations while increasing performance output (see the arithmetic check after this list).
  - Chitu's intelligent optimization technology adapts quickly to different chip architectures, so domestic manufacturers can focus on hardware improvements instead of redeveloping software.
- Full-scenario scalability:
  - The Chitu engine aims to cover large model deployment requirements across the full range of scenarios, from pure-CPU setups to large-scale clusters.
  - It adapts to a variety of NVIDIA GPUs and domestic chips, providing scalable solutions.
- Long-term stable operation:
  - The Chitu engine can be used in real production environments and is stable enough to carry concurrent business traffic.
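How FP8 checkpoints are made to run on hardware without Hopper's native FP8 tensor cores is not spelled out here. As an illustration only, the sketch below shows one common fallback approach (not necessarily Chitu's implementation): keep the weights in FP8 with a per-tensor scale, and dequantize to BF16 just before the matrix multiply. All function names are hypothetical, and the code assumes PyTorch 2.1+ with the torch.float8_e4m3fn dtype.

```python
# Minimal, illustrative sketch (not Chitu's actual implementation) of running
# FP8-stored weights on hardware without native FP8 GEMM support: store the
# weights in 8-bit form plus a per-tensor scale, dequantize to BF16 on the fly.
import torch

def quantize_to_fp8_e4m3(w: torch.Tensor):
    """Quantize a FP32/BF16 weight tensor to FP8 (E4M3) plus a per-tensor scale."""
    fp8_max = 448.0                        # max representable magnitude of E4M3
    scale = w.abs().max().clamp(min=1e-12) / fp8_max
    w_fp8 = (w / scale).to(torch.float8_e4m3fn)
    return w_fp8, scale

def fp8_linear(x: torch.Tensor, w_fp8: torch.Tensor, scale: torch.Tensor):
    """Linear layer that dequantizes FP8 weights to BF16 before the matmul.

    On GPUs with native FP8 tensor cores the matmul could run in FP8 directly;
    on other hardware, dequantizing to BF16 keeps the model runnable.
    """
    w_bf16 = w_fp8.to(torch.bfloat16) * scale.to(torch.bfloat16)
    return x.to(torch.bfloat16) @ w_bf16.t()

if __name__ == "__main__":
    w = torch.randn(1024, 4096)            # stand-in for a checkpoint weight
    x = torch.randn(2, 4096)               # a small batch of activations
    w_fp8, scale = quantize_to_fp8_e4m3(w)
    y = fp8_linear(x, w_fp8, scale)
    print(y.shape, y.dtype)                # torch.Size([2, 1024]) torch.bfloat16
```

The point of such a scheme is that the memory footprint stays at FP8 size while the compute path uses whatever precision the local hardware supports natively.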
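For a sense of what the two A800 figures imply when taken together, a back-of-the-envelope check is sketched below; it assumes both numbers refer to the same workload, and the per-GPU figure is derived here rather than reported in the source.

```python
# Back-of-the-envelope check of the reported A800 results, assuming the 3.15x
# speedup and the 50% GPU reduction refer to the same workload.
speedup = 3.15        # reported end-to-end inference speedup
gpu_fraction = 0.50   # GPUs used relative to the baseline deployment
per_gpu_gain = speedup / gpu_fraction
print(f"Implied per-GPU throughput gain: ~{per_gpu_gain:.1f}x")  # ~6.3x
```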
Chitu Application Scenarios
- Finance: The efficient performance and hardware compatibility of the Chitu inference engine make it well suited to financial-industry scenarios such as risk assessment and fraud detection.
- Medical care: In the medical field, the Chitu engine can be used for medical image analysis, disease diagnosis, and similar tasks, improving the accuracy and efficiency of medical services.
- Other industries: Beyond these, the Chitu engine can be widely applied in education, intelligent manufacturing, smart cities, and many other areas, promoting the popularization and application of AI technology.
Chitu Open Source and Ecosystem Building
- Open source address: The Chitu large model inference engine has been open sourced on GitHub at https://github.com/thu-pacman/chitu (a usage sketch follows this list).
- Ecosystem building: Qingcheng Jizhi has partnered with MuXi, Suyuan, and other vendors to launch "out-of-the-box" inference all-in-one machines, further simplifying AI adoption for enterprises. The Chitu team has also worked with a number of domestic chip makers to open code contribution channels and shorten the hardware adaptation cycle.
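The repository README is the authoritative reference for launching the engine. Purely as an illustration, the sketch below assumes a locally deployed service that exposes an OpenAI-compatible chat completions endpoint; the URL, port, and model name are hypothetical placeholders, not Chitu's documented interface.

```python
# Hypothetical client sketch: querying a locally deployed inference service.
# The endpoint path, port, and model name below are placeholders; check the
# chitu repository README for the actual serving command and API surface.
import requests

resp = requests.post(
    "http://localhost:8000/v1/chat/completions",   # placeholder address
    json={
        "model": "deepseek-r1",                     # placeholder model name
        "messages": [{"role": "user", "content": "Hello, Chitu!"}],
        "max_tokens": 64,
    },
    timeout=60,
)
resp.raise_for_status()
print(resp.json()["choices"][0]["message"]["content"])
```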
Significance and Impact of Chitu
- Promoting the development of domestic AI chips: The launch of the Chitu inference engine breaks the monopoly of NVIDIA and other foreign vendors in the AI chip field and marks a breakthrough for the widespread adoption and ecosystem building of domestic AI chips.
- Reducing enterprise deployment costs: Through underlying technical innovation and intelligent optimization, the Chitu engine significantly lowers the threshold and cost for enterprises to deploy AI models while improving performance output.
- Accelerating the spread of AI technology: The Chitu engine's full-scenario scalability and long-term stable operation capability enable it to be widely used in multiple fields, promoting the popularization and application of AI technology.