Chitu

A Tsinghua University team and Qingcheng Jizhi have jointly launched an open-source large-model inference engine that aims to deliver efficient model inference across chip architectures through low-level technical innovation and to promote the broad application of AI technology.

Language: en | Collection time: 2025-03-15

What's Chitu?

Chitu ("Red Rabbit" in Chinese) is a large-model inference engine jointly developed by the Institute of High Performance Computing at Tsinghua University and Qingcheng Jizhi, a Tsinghua-based startup. The engine is designed to address the hardware dependence and high cost that currently constrain AI model deployment. Through innovations in the underlying technology, Chitu runs FP8-precision models natively on GPUs outside the NVIDIA Hopper architecture and on a range of domestic chips, which significantly lowers the threshold and cost of deploying AI models for enterprises. Chitu also supports full-scenario scalability and adapts to a variety of domestic and foreign chips, providing strong support for the popularization and application of AI technology.
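To make the FP8 claim concrete: one generic way to run FP8-quantized weights on hardware that lacks native FP8 tensor cores is to keep the weights in FP8 for storage and dequantize them to BF16 on the fly before an ordinary matmul. The sketch below illustrates that idea in PyTorch (2.1+); it is not Chitu's actual implementation, and the function name, per-tensor scaling scheme, and shapes are assumptions made for the example.

```python
# Illustrative sketch only (not Chitu's code): serve FP8-quantized weights on
# hardware without native FP8 support by dequantizing to BF16 before the GEMM.
import torch  # requires PyTorch >= 2.1 for the float8_e4m3fn dtype

def fp8_linear(x_bf16: torch.Tensor, w_fp8: torch.Tensor, scale: torch.Tensor) -> torch.Tensor:
    """Hypothetical FP8 linear layer: dequantize, then run a plain BF16 matmul."""
    w_bf16 = w_fp8.to(torch.bfloat16) * scale   # dequantize on the fly
    return x_bf16 @ w_bf16.t()                  # ordinary BF16 GEMM, runs on any backend

if __name__ == "__main__":
    torch.manual_seed(0)
    w = torch.randn(128, 256, dtype=torch.bfloat16)       # original BF16 weights
    scale = w.abs().max().float() / 448.0                  # 448 = max normal value of e4m3
    w_fp8 = (w.float() / scale).to(torch.float8_e4m3fn)    # quantize once; half the bytes of BF16
    x = torch.randn(4, 256, dtype=torch.bfloat16)
    y = fp8_linear(x, w_fp8, scale.to(torch.bfloat16))
    print(y.shape)  # torch.Size([4, 128])
```

Storing weights in FP8 halves weight memory relative to BF16 even when the matmul itself runs in BF16, so engines targeting non-Hopper hardware typically trade a small dequantization overhead for that memory saving.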

Chitu's Technical Characteristics

  1. Hardware compatibility:

    • Chitu is the first inference engine to run FP8-precision models natively on GPUs outside the NVIDIA Hopper architecture and on a variety of domestic chips.
    • By removing the dependence of FP8-precision models on NVIDIA's Hopper architecture (e.g., H100/H200), it opens new opportunities for the broad adoption and ecosystem building of domestic AI chips.
  2. Performance optimization:

    • In tests on an A800 cluster, the Chitu engine delivered a 3.15x improvement in inference speed while using 50% fewer GPUs, significantly cutting hardware costs while raising performance output (a back-of-the-envelope reading of these figures follows this list).
    • Chitu's intelligent optimization technology adapts quickly to different chip architectures, so domestic manufacturers do not need to duplicate software development and can focus on hardware upgrades.
  3. Full-scenario scalability:

    • The Chitu engine aims to cover the full range of large-model deployment scenarios, from pure-CPU environments to large-scale clusters.
    • It adapts to a variety of NVIDIA GPUs and domestic chips, providing scalable solutions.
  4. Long-term stable operation:

    • The Chitu engine can be used in real production environments and is stable enough to carry concurrent business traffic.
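As a quick sanity check on the A800 numbers above (illustrative arithmetic only, derived from the figures reported in this article), a 3.15x speedup achieved with only half the GPUs corresponds to roughly 6.3x throughput per GPU relative to the baseline:

```python
# Illustrative arithmetic based on the reported A800 results:
# 3.15x faster inference on 50% of the GPUs -> ~6.3x throughput per GPU.
baseline_speed, baseline_gpus = 1.0, 1.0
chitu_speed, chitu_gpus = 3.15, 0.5      # both relative to the baseline deployment
per_gpu_gain = (chitu_speed / chitu_gpus) / (baseline_speed / baseline_gpus)
print(f"Throughput per GPU: {per_gpu_gain:.2f}x the baseline")  # 6.30x
```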

Chitu Application Scenarios

  1. Finance: The Chitu engine's efficiency and hardware compatibility make it well suited to financial-industry scenarios such as risk assessment and fraud detection.
  2. Healthcare: In the medical field, the Chitu engine can support tasks such as medical image analysis and disease diagnosis, improving the accuracy and efficiency of medical services.
  3. Other industries: The Chitu engine can also be applied in education, intelligent manufacturing, smart cities, and many other areas, promoting the popularization and application of AI technology.

Chitu Open Source and Ecosystem Building

  1. Open-source address: The Chitu large-model inference engine has been open-sourced on GitHub at https://github.com/thu-pacman/chitu.
  2. Ecosystem building: Qingcheng Jizhi has partnered with MuXi, Suyuan, and other vendors to launch "out-of-the-box" inference appliances, further simplifying AI deployment for enterprises. Meanwhile, the Chitu team has worked with several domestic chip makers to open code-contribution channels and shorten the hardware adaptation cycle.

Significance and Impact of Chitu

  1. Promoting the development of domestic AI chips: The launch of the Chitu inference engine breaks the monopoly of NVIDIA and other foreign vendors in the AI chip field and marks a breakthrough for the wide adoption and ecosystem building of domestic AI chips.
  2. Reducing enterprise deployment costs: Through low-level technical innovation and intelligent optimization, the Chitu engine significantly lowers the threshold and cost of deploying AI models for enterprises while improving performance output.
  3. Accelerating the spread of AI technology: The Chitu engine's full-scenario scalability and long-term stability allow it to be used widely across fields, promoting the popularization and application of AI technology.
