Xiaomi MiMo

Xiaomi's open-source 7-billion-parameter reasoning model which, despite its small size, outperforms models such as OpenAI o1-mini in mathematical reasoning and code competitions.

What is Xiaomi MiMo

Xiaomi MiMo is Xiaomi's first large language model designed specifically for reasoning, developed by the company's newly formed LLM-Core team and officially open-sourced on April 30, 2025. The model focuses on improving reasoning ability: through multi-dimensional innovations in both the pre-training and post-training stages, it surpasses OpenAI's closed-source reasoning model o1-mini and Alibaba Qwen's much larger open-source reasoning model QwQ-32B-Preview with only 7 billion parameters. Its key technical breakthroughs include synthesizing roughly 200B tokens of reasoning-dense data, a three-stage incremental training strategy, and algorithmic optimizations such as Test Difficulty Driven Reward and Easy Data Re-Sampling.

In addition, the MiMo team designed the Seamless Rollout system, which accelerates reinforcement-learning training by 2.29× and validation by 1.96×, significantly improving R&D efficiency. This result demonstrates Xiaomi's technical strength in AI reasoning and gives the industry a lightweight, high-performance reasoning solution.
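
The article does not include engine code, but the core idea of a rollout system like this is to decouple sample generation from policy updates so that neither the inference nor the training side sits idle. The toy producer/consumer sketch below illustrates that general pattern only; the queue-based design and function names are illustrative assumptions, not Xiaomi's implementation.

```python
import queue
import threading

rollout_queue: "queue.Queue[dict | None]" = queue.Queue(maxsize=64)

def generate_rollout(prompt_id: int) -> dict:
    # Placeholder: a real system would sample completions from the
    # current policy and score them with a reward function.
    return {"prompt_id": prompt_id, "tokens": [], "reward": 0.0}

def rollout_worker(prompt_ids: list[int]) -> None:
    # Producer: keeps generating while the trainer consumes, so the
    # trainer never blocks waiting for a full batch of fresh rollouts.
    for pid in prompt_ids:
        rollout_queue.put(generate_rollout(pid))
    rollout_queue.put(None)  # sentinel: generation finished

def train_loop() -> None:
    # Consumer: pulls finished rollouts as soon as they are ready.
    while (batch := rollout_queue.get()) is not None:
        pass  # a train_step(batch) policy update would go here

producer = threading.Thread(target=rollout_worker, args=(list(range(256)),))
producer.start()
train_loop()
producer.join()
```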


Xiaomi MiMo Technical Architecture

  1. Model size and efficiency
    • Parameter scale: only 7 billion parameters (7B), far fewer than mainstream large models (e.g., a reported 1.8 trillion for GPT-4, or 32 billion for QwQ-32B), with high performance achieved instead through algorithmic optimization.
    • Reasoning efficiency: on tasks such as mathematical reasoning and code generation, MiMo needs far fewer resources and responds faster than much larger models, making it suitable for on-device deployment (e.g., phones, IoT devices).
  2. Data and Training Strategies
    • Reasoning data synthesis: a reasoning-dense corpus of about 200 billion tokens (200B) covering math, code, and logical-reasoning scenarios, synthesized to ensure the model "knows a lot".
    • Three-stage training:
      • Pre-training: learns basic language abilities from large-scale general text data.
      • Mid-training: introduces the synthetic reasoning data to strengthen the model's understanding of complex logic.
      • Post-training: uses reinforcement learning (RL), combined with human feedback (RLHF), to optimize performance on specific tasks.
    • Algorithmic optimization (both techniques are sketched in code after this list):
      • Test Difficulty Driven Reward (TDDR): dynamically allocates reward according to the difficulty of test questions, alleviating the "sparse rewards on hard problems" issue and improving the model's ability to crack difficult cases.
      • Easy Data Re-Sampling (EDRS): re-samples easy data to balance the distribution of training data and avoid biasing the model.
  3. Training Framework and Acceleration
    • Seamless Rollout system: accelerates RL training by 2.29× and validation by 1.96× through parallelization, significantly shortening the R&D cycle.
    • Mixed-precision training: combines FP16 and BF16 formats to reduce GPU memory usage while preserving accuracy (a standard autocast pattern is sketched below).
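
The article ships no reference code, but the Test Difficulty Driven Reward idea can be sketched as weighting each test case by an estimated difficulty (here, how rarely it is passed), so a solution that clears only the hard tests still receives a dense, informative reward instead of an all-or-nothing score. The function name and the pass-rate-based difficulty estimate below are illustrative assumptions.

```python
def test_difficulty_driven_reward(passed: list[bool],
                                  pass_rates: list[float]) -> float:
    """Toy difficulty-weighted reward for code-generation RL.

    passed[i]     -- whether the policy's program passed test case i
    pass_rates[i] -- fraction of reference attempts that pass test i;
                     rarely passed tests are treated as harder.
    """
    # Illustrative difficulty estimate: lower pass rate => higher weight.
    weights = [1.0 - r for r in pass_rates]
    total = sum(weights)
    if total == 0.0:
        return 0.0
    # Partial credit by difficulty, instead of a sparse 0/1 reward
    # that only fires when the entire suite passes.
    return sum(w for w, ok in zip(weights, passed) if ok) / total


# Passing only the two hardest of three tests still earns most of the reward:
# (0.8 + 0.9) / (0.1 + 0.8 + 0.9) ≈ 0.94
print(test_difficulty_driven_reward([False, True, True], [0.9, 0.2, 0.1]))
```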
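
Easy Data Re-Sampling can likewise be sketched as keeping a pool of already-solved ("easy") problems and mixing a small share of them back into each RL batch, so batches are not dominated by unsolved problems that yield zero reward. The pool layout and the 10% mixing ratio are assumptions for illustration.

```python
import random

def build_batch(hard_pool: list[str], easy_pool: list[str],
                batch_size: int, easy_fraction: float = 0.1) -> list[str]:
    # Re-sampling a slice of easy (previously solved) problems keeps
    # some non-zero-reward samples in every batch, which stabilizes
    # the policy-gradient signal on otherwise hard data.
    n_easy = min(int(batch_size * easy_fraction), len(easy_pool))
    batch = random.sample(easy_pool, n_easy)
    batch += random.sample(hard_pool, batch_size - n_easy)
    random.shuffle(batch)
    return batch
```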
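
For the mixed-precision point, a standard PyTorch pattern (not specific to MiMo) runs the forward pass under bfloat16 autocast while the parameters and optimizer state stay in full precision:

```python
import torch

model = torch.nn.Linear(4096, 4096).cuda()   # stand-in for the 7B model
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-5)

x = torch.randn(8, 4096, device="cuda")
target = torch.randn(8, 4096, device="cuda")

# Activations are computed in bfloat16 to cut GPU memory use;
# master weights and optimizer state remain FP32 for stability.
with torch.autocast(device_type="cuda", dtype=torch.bfloat16):
    loss = torch.nn.functional.mse_loss(model(x), target)

loss.backward()
optimizer.step()
optimizer.zero_grad()
```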

Xiaomi MiMo Performance

  1. Mathematical reasoning
    • In the AIME 2024-2025 math-competition benchmarks, MiMo achieves a higher solve rate than OpenAI o1-mini (a closed-source reasoning model) and QwQ-32B-Preview (Alibaba Qwen's 32-billion-parameter open-source model), especially on harder algebra, geometry, and number-theory problems.
    • Example: it solved problems on "generalizations of Fermat's little theorem" and "solving higher-order differential equations" with complete, logically coherent reasoning steps.
  2. Code Generation Capabilities
    • In the LiveCodeBench v5 code-competition evaluation, MiMo's code pass rate and execution efficiency exceed those of the comparison models, particularly on algorithmic problems (e.g., LeetCode Hard difficulty) and engineering-style code (e.g., API design, system architecture).
    • Example: it quickly generates a "Rust-based distributed lock implementation" or "TensorFlow model-quantization code" with detailed comments.
  3. Resource Usage Comparison
    • In the same hardware environment, MiMo's inference latency is 40% lower than o1-mini's and its GPU memory usage is 60% lower, making it suitable for edge-computing scenarios.

Xiaomi MiMo Application Scenarios

  1. Xiaomi smart devices
    • Phones: integrated into Xiaomi HyperOS to enhance the Xiao AI assistant's math-tutoring and code-debugging features and enable "offline reasoning".
    • IoT devices: deployed in smart-home hubs to support automatic generation of complex logic rules (e.g., "dynamically adjust the air-conditioning policy based on weather and power consumption").
  2. Developer Tools
    • The MiMo DevTools plug-in helps developers generate high-quality code and debug complex logic, lowering the barrier to development.
    • Example: auto-completion of "Rust-based blockchain smart contracts" and "Android dynamic permission-management code".
  3. Education and Corporate Services
    • Education: provides automatic problem solving and step-by-step explanations for online-education platforms, with support for personalized learning-path planning.
    • Enterprise services: helps financial and research institutions with data analysis, model optimization, and similar tasks to improve efficiency.

Xiaomi MiMo Project Address

GitHub repository: https://github.com/XiaomiMiMo/MiMo
Hugging Face models: https://huggingface.co/XiaomiMiMo
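
A minimal load-and-generate sketch with Hugging Face transformers follows; the checkpoint id is the RL-tuned model published under the XiaomiMiMo organization, but check the repository for the officially recommended inference settings and sampling parameters.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "XiaomiMiMo/MiMo-7B-RL"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype="auto",
    device_map="auto",
    trust_remote_code=True,  # may be required for the custom architecture
)

prompt = "Prove that the sum of the first n odd numbers is n^2."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=512)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```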
