
What is the Wenxin Big Model X1
Wenxin Big Model X1 is a deep-thinking model officially released by Baidu on March 16, 2025. As Baidu's next-generation base large model, X1 benchmarks its performance against the industry-leading DeepSeek-R1, with stronger understanding, planning, reflection, and evolution capabilities, plus support for multimodality. X1 employs key technologies such as progressive reinforcement learning, end-to-end training based on chains of thought and action, and a diversified, unified reward system. Through the joint optimization of PaddlePaddle and Wenxin, it achieves extreme tuning of the whole chain from compression and inference to service deployment, dramatically lowering inference cost.
The launch of X1 further enriches Baidu's Wenxin Big Model product line, provides stronger tool support for AI developers, and signifies Baidu's continued innovation and leadership in the field of artificial intelligence.
Wenxin Big Model X1 Main Features
- Deep reflection: Specializes in complex problems, maintains long chains of thought, and is capable of logical reasoning and planning.
- Multimodal support: Understands and generates images, supporting multimodal content processing.
- Tool calling: Invokes a variety of tools to generate code, charts, and more, extending its range of applications.
- Chinese knowledge Q&A: Accurately answers questions in Chinese-language knowledge domains.
- Literary creation: Polishes text, enhances linguistic expressiveness, and incorporates multiple narrative perspectives.
Wenxin Big Model X1 Core Technology
- Progressive reinforcement learning training method: An innovative application of progressive reinforcement learning that comprehensively improves the model's performance in scenarios such as creation, search, tool calling, and reasoning.
- End-to-end training based on chains of thought and action: For deep search, tool calling, and similar scenarios, the model is trained end-to-end on result feedback, significantly improving training effectiveness.
- Diversified, unified reward system: A unified reward system incorporating multiple types of reward mechanisms provides more robust feedback for model training.
- Joint optimization of PaddlePaddle and Wenxin: Joint optimization of PaddlePaddle and Wenxin achieves extreme tuning of the whole chain from compression and inference to service deployment, dramatically reducing inference cost.
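The "diversified, unified reward system" above can be pictured as several reward signals collapsed into one scalar that the reinforcement-learning trainer optimizes. The sketch below is purely illustrative: the component rewards, the `<think>` tag convention, and the weights are all hypothetical placeholders, since the real system's signals are not public.

```python
# Hypothetical reward components for an RL reward system — a minimal
# sketch, not Baidu's actual implementation.

def correctness_reward(response: str, reference: str) -> float:
    """Rule-based check: 1.0 if the reference answer appears in the response."""
    return 1.0 if reference in response else 0.0

def format_reward(response: str) -> float:
    """Rewards responses that expose their reasoning in a <think> block."""
    return 1.0 if "<think>" in response and "</think>" in response else 0.0

def unified_reward(response: str, reference: str,
                   weights: dict[str, float]) -> float:
    """Single scalar fed to the RL optimizer: a weighted sum of components."""
    return (weights["correctness"] * correctness_reward(response, reference)
            + weights["format"] * format_reward(response))

r = unified_reward("<think>2 + 2 = 4</think> The answer is 4.", "4",
                   {"correctness": 0.8, "format": 0.2})
print(r)  # → 1.0
```

A unified scalar like this lets one training loop serve many task types (writing, search, tool use) by swapping or re-weighting components rather than changing the optimizer.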
Wenxin Big Model X1 Usage Scenarios
- Chinese knowledge Q&A: The X1 model performs well in Chinese knowledge Q&A and can accurately answer users' questions.
- Literary creation: The X1 model specializes in literary creation, polishing text, elevating linguistic expressiveness, and skillfully weaving multiple perspectives into narratives.
- Logical reasoning: The X1 model has powerful logical reasoning capabilities for planning and decision-making tasks in complex scenarios.
- Tool calling: The X1 model can invoke a variety of tools, such as code and chart generation, to meet users' needs across different domains.
Wenxin Big Model X1 Pricing
Wenxin Big Model X1 is available online on the official Wenxin Yiyan (ERNIE Bot) website, where users can try it for free after logging in. Enterprises and developers can also call the new model on Baidu Intelligent Cloud's Qianfan large model platform.
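Qianfan exposes an OpenAI-compatible chat-completions HTTP endpoint, so a call to X1 can be sketched with only the standard library. The endpoint path, model id, and key handling below are assumptions to check against the current Qianfan documentation, not verified values.

```python
import json
import urllib.request

# Assumed values — verify against the Qianfan console and docs before use.
API_URL = "https://qianfan.baidubce.com/v2/chat/completions"
MODEL_ID = "ernie-x1"              # placeholder model id
API_KEY = "YOUR_QIANFAN_API_KEY"   # issued in the Qianfan console

def build_request(prompt: str) -> urllib.request.Request:
    """Build an OpenAI-style chat-completions request for the X1 model."""
    body = json.dumps({
        "model": MODEL_ID,
        "messages": [{"role": "user", "content": prompt}],
    }).encode("utf-8")
    return urllib.request.Request(
        API_URL,
        data=body,
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {API_KEY}",
        },
    )

req = build_request("Explain chain-of-thought reasoning in one sentence.")
# Actually sending the request needs a valid key, so it is left commented out:
# with urllib.request.urlopen(req) as resp:
#     reply = json.loads(resp.read())
#     print(reply["choices"][0]["message"]["content"])
```

Because the request shape follows the OpenAI convention, existing OpenAI-compatible client libraries could be pointed at the same base URL instead of hand-building requests.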
The X1 model is priced at RMB 0.002 per thousand tokens for input and RMB 0.008 per thousand tokens for output, roughly half the call price of DeepSeek-R1.
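Per-thousand-token pricing makes the cost of a call a simple weighted sum. A small helper, using the prices quoted above (the token counts in the example are made up):

```python
# Quoted X1 prices, per thousand tokens (input vs. output are billed differently).
INPUT_PRICE = 0.002
OUTPUT_PRICE = 0.008

def call_cost(input_tokens: int, output_tokens: int) -> float:
    """Cost of one call, in the same currency unit as the per-1k prices."""
    return (input_tokens / 1000 * INPUT_PRICE
            + output_tokens / 1000 * OUTPUT_PRICE)

# e.g. a call with 3,000 input tokens and 1,000 output tokens:
print(round(call_cost(3000, 1000), 4))  # → 0.014
```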
Wenxin Big Model X1 Recommended Reasons
- Superior performance: The X1 model performs strongly across multiple tests, benchmarks against the industry-leading DeepSeek-R1, and holds a price advantage.
- Multimodal support: The X1 model can understand and generate images, meeting user needs in multimodal content processing.
- Rich tool calling: The X1 model can invoke a variety of tools, such as code and chart generation, which extends its range of applications and improves productivity.
- Free trial: Users can try the X1 model for free on the Wenxin Yiyan website, lowering the barrier to entry and making it easy to evaluate the model's performance.
Relevant Navigation

A large language model developed by Tencent, featuring powerful Chinese writing capabilities, logical reasoning in complex contexts, and reliable task execution.

Pangu LM
An industry-leading, ultra-large-scale pre-trained model developed by Huawei, with powerful natural language processing, visual, and multimodal capabilities, widely applicable across industry scenarios.

WebLI-100B
A 100-billion-example vision-language dataset released by Google DeepMind, designed to enhance the cultural diversity and multilingualism of AI models.

Yan model
The first non-Transformer-architecture general-purpose natural language model, developed by RockAI, offering high performance, low cost, multimodal processing capability, and secure private deployment.

Xiaomi MiMo
Xiaomi's open-source 7-billion-parameter reasoning model, which outperforms models such as OpenAI o1-mini in mathematical reasoning and code competitions despite its small parameter count.

BaiChuan LM
A large language model from Baichuan Intelligence integrating intent understanding, information retrieval, and reinforcement learning, committed to providing natural and efficient intelligent services; its APIs are open and some of its models are open-sourced.

Qwen3-Max-Preview
Alibaba's trillion-parameter flagship large model, supporting ultra-long context, multilingual understanding, and powerful reasoning and programming capabilities, built for complex tasks and enterprise-grade applications.

Seed-OSS
ByteDance's open-source 36-billion-parameter long-context large language model, supporting 512K tokens and a controllable thinking budget; it excels at reasoning, code, and agent tasks and is freely available for commercial use under the Apache-2.0 license.
