
What's Chitu?
Chitu ("Red Rabbit") is a large model inference engine jointly developed by Tsinghua University's Institute for High Performance Computing and the Tsinghua-affiliated startup Qingcheng Jizhi. The engine is designed to address the hardware dependency and high cost that currently hinder AI model deployment. Through innovation at the underlying technical level, Chitu runs FP8-precision models natively on non-NVIDIA-Hopper GPUs and on various types of domestic chips, significantly lowering the threshold and cost of deploying AI models for enterprises. Chitu also supports full-scenario scalability and adapts to a variety of domestic and foreign chips, providing strong support for the popularization and application of AI technology.
Chitu's Technical Characteristics
- Hardware compatibility:
  - Chitu is the first large model inference engine to run FP8-precision models natively on non-NVIDIA-Hopper GPUs and on a range of domestic chips (a minimal sketch of the underlying idea follows after this list).
  - Breaking FP8 models' hardware dependence on NVIDIA's Hopper architecture (e.g., H100/H200) opens new opportunities for the wide adoption and ecosystem building of domestic AI chips.
- Performance optimization:
  - In tests on an A800 cluster, the Chitu engine delivered a 3.15x improvement in inference speed while using 50% fewer GPUs (roughly a 6.3x gain in per-GPU output), significantly reducing hardware costs for organizations while increasing performance.
  - Chitu's intelligent optimization technology adapts quickly to different chip architectures, so domestic chip vendors do not need to redevelop the software stack and can focus on hardware upgrades.
- Full-scenario scalability:
  - The Chitu engine aims to cover large model deployment needs across the full range of scenarios, from pure-CPU setups to large-scale GPU clusters.
  - It adapts to a variety of NVIDIA GPUs as well as various domestic chips, providing scalable deployment solutions.
- Long-term stable operation:
  - The Chitu engine can be used in real production environments and is stable enough to carry concurrent business traffic.
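The FP8 point above rests on a general software technique rather than Hopper-specific hardware: keep the checkpoint in FP8 with per-block scales and dequantize to BF16 on the fly before the matrix multiply. The sketch below is a minimal PyTorch illustration of that idea only; it is not Chitu's actual kernel code, and the block size, scale layout, and function names are assumptions.

```python
# Minimal PyTorch sketch (not Chitu's kernels) of serving FP8-quantized weights
# on GPUs without native FP8 tensor cores: store weights as float8_e4m3 with
# per-block scales, dequantize to BF16 on the fly, and use an ordinary BF16 GEMM.

import torch

BLOCK = 128  # assumed block-quantization granularity (dims must be multiples of BLOCK)


def quantize_fp8_block(w: torch.Tensor):
    """Quantize a (out, in) FP32 weight matrix to FP8 E4M3 with per-block scales."""
    o, i = w.shape
    blocks = w.reshape(o // BLOCK, BLOCK, i // BLOCK, BLOCK)
    scales = blocks.abs().amax(dim=(1, 3)) / 448.0  # 448 = max finite E4M3 value
    s_full = scales.repeat_interleave(BLOCK, 0).repeat_interleave(BLOCK, 1)
    return (w / s_full).to(torch.float8_e4m3fn), scales


def fp8_linear(x: torch.Tensor, w_fp8: torch.Tensor, scales: torch.Tensor):
    """y = x @ W^T, dequantizing W from FP8 to BF16 on the fly."""
    s_full = scales.repeat_interleave(BLOCK, 0).repeat_interleave(BLOCK, 1)
    w_bf16 = w_fp8.to(torch.bfloat16) * s_full.to(torch.bfloat16)
    return x @ w_bf16.t()


if __name__ == "__main__":
    torch.manual_seed(0)
    w_fp8, scales = quantize_fp8_block(torch.randn(256, 512))
    x = torch.randn(4, 512, dtype=torch.bfloat16)
    print(fp8_linear(x, w_fp8, scales).shape)  # torch.Size([4, 256])
```

In production engines this dequantize-then-GEMM path is typically fused into custom kernels so the full BF16 weight matrix is never materialized; the sketch keeps the two steps separate for clarity.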
Chitu Application Scenarios
- Finance: Chitu's efficient performance and hardware compatibility make it well suited to financial-industry scenarios such as risk assessment and fraud detection.
- Healthcare: In the medical field, the Chitu engine can be used for medical image analysis, disease diagnosis, and similar tasks, improving the accuracy and efficiency of medical services.
- Other industries: The Chitu engine can also be widely applied in education, intelligent manufacturing, smart cities, and other fields, promoting the popularization and application of AI technology.
Chitu Open Source and Ecosystem Building
- Open-source address: The Chitu large model inference engine has been open sourced on GitHub at https://github.com/thu-pacman/chitu (a hedged client sketch follows below).
- Ecosystem building: Qingcheng Jizhi has partnered with MuXi, Suyuan, and other vendors to launch "out-of-the-box" inference all-in-one machines, further simplifying AI adoption for enterprises. The Chitu team has also worked with a number of domestic chip makers, opening code contribution channels to shorten the hardware adaptation cycle.
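As a rough, hedged illustration of how a locally deployed inference service such as Chitu might be queried, the snippet below assumes the service exposes an OpenAI-style chat-completions HTTP endpoint. The URL, port, model name, and payload fields are assumptions for illustration, not Chitu's documented interface; consult the repository's README for the actual deployment commands and API.

```python
# Hypothetical client sketch: assumes a locally deployed Chitu service exposing
# an OpenAI-style /v1/chat/completions endpoint on port 21002. Endpoint, port,
# and field names are illustrative assumptions; see
# https://github.com/thu-pacman/chitu for the real interface.

import requests

resp = requests.post(
    "http://localhost:21002/v1/chat/completions",  # assumed endpoint
    json={
        "model": "deepseek-r1",  # assumed model identifier
        "messages": [{"role": "user", "content": "Briefly explain FP8 inference."}],
        "max_tokens": 256,
    },
    timeout=60,
)
resp.raise_for_status()
print(resp.json()["choices"][0]["message"]["content"])
```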
Significance and Impact of Chitu
- Promoting the development of domestic AI chips: The launch of the Chitu inference engine breaks the monopoly of NVIDIA and other foreign vendors in the AI chip field and marks a breakthrough for the wide adoption and ecosystem building of domestic AI chips.
- Reducing enterprise deployment costs: Through low-level technical innovation and intelligent optimization, the Chitu engine significantly lowers the threshold and cost of deploying AI models for enterprises while improving performance.
- Accelerating the spread of AI technology: The Chitu engine's full-scenario scalability and long-term operational stability allow it to be used widely across many fields, promoting the popularization and application of AI technology.