Google Gemini 2.0 Series of AI Models Released, Taking Programming and Reasoning Performance to the Next Level


On February 5, Google announced that its latest suite of AI models, Gemini 2.0, is officially available to all users. According to Google, it is the company's "most powerful" suite of AI models to date.

Previously, in December last year, Google opened up only some of the functionality to developers and trusted testers, and integrated some features into Google's core products. Now fully open, all new models are available to developers through Google AI Studio and the Gemini API.

The Gemini 2.0 suite includes three submodels for different application scenarios:

Gemini 2.0 Flash: Billed as the "workhorse model" for high-volume, high-frequency tasks; it is now generally available.

Gemini 2.0 Pro: Focused on programming performance and described as Google's best coding model to date. It supports an input capacity of 2 million tokens, letting it analyze and process large amounts of information at once, and has now been released.

Gemini 2.0 Flash-Lite: Google calls it "the most cost-effective model to date," beating 1.5 Flash on both cost and speed, with a 1-million-token context window and multimodal input; it is now in public preview.


Model Characteristics

Gemini 2.0 Flash offers a comprehensive set of features, including native tool usage, a 1-million-token context window, and multimodal input. Currently, it supports text output, while image and audio output capabilities and a multimodal Live API are scheduled for general availability in the coming months.

Gemini 2.0 Flash-Lite is cost-optimized for large-scale text output use cases.


Model Performance

The Gemini 2.0 models achieve significant performance gains over Gemini 1.5 across a variety of benchmarks.


Like earlier models, Gemini 2.0 Flash defaults to a concise style, which makes it easier to use and keeps costs down. It can also be prompted to adopt a more verbose style for better results in chat-oriented use cases.

Gemini 2.0 Pricing

Google continues to reduce costs with Gemini 2.0 Flash and 2.0 Flash-Lite. Both models now have a single price per input type, removing the distinction Gemini 1.5 Flash made between short- and long-context requests. This means that, on top of their performance improvements, both 2.0 Flash and Flash-Lite can be less expensive than Gemini 1.5 Flash for mixed-context workloads.


Note: For Gemini models, one token is equivalent to approximately 4 characters, and 100 tokens correspond to roughly 60-80 English words.
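The rule of thumb above can be written as a quick estimator. This is only a sketch of the approximation stated in the note; real token counts come from the model's tokenizer and will differ somewhat.

```python
def tokens_from_chars(num_chars: int) -> int:
    """Approximate token count from character count (1 token ~= 4 chars)."""
    return num_chars // 4


def words_from_tokens(num_tokens: int) -> tuple[float, float]:
    """Approximate English word range (60-80 words per 100 tokens)."""
    return (num_tokens * 0.6, num_tokens * 0.8)


# A 400-character prompt is roughly 100 tokens, i.e. about 60-80 words.
approx_tokens = tokens_from_chars(400)
approx_words = words_from_tokens(approx_tokens)
```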

A side-by-side comparison of API prices: input prices for Gemini 2.0 Flash and Flash-Lite are $0.10 and $0.075 per 1 million tokens, respectively. In Google's own words, generating a one-line caption for each of 40,000 unique images with the Gemini 2.0 Flash-Lite model costs about $1.
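These numbers are easy to sanity-check. The prices below are the ones quoted above; the per-image token budget is derived from Google's $1 / 40,000-image example, not a figure Google itself states.

```python
def cost_usd(tokens: int, price_per_million: float) -> float:
    """Cost in USD for a given number of input tokens."""
    return tokens / 1_000_000 * price_per_million


FLASH_INPUT = 0.10        # $ per 1M input tokens, Gemini 2.0 Flash
FLASH_LITE_INPUT = 0.075  # $ per 1M input tokens, Gemini 2.0 Flash-Lite

# Google's example: ~$1 buys one caption for each of 40,000 images
# on Flash-Lite. $1 of input at $0.075/M is ~13.3M tokens, which
# works out to roughly 333 input tokens per image.
budget_tokens = 1.0 / FLASH_LITE_INPUT * 1_000_000
tokens_per_image = budget_tokens / 40_000
```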

With a cache hit, the prices drop to $0.025 per 1 million tokens (audio excluded) and $0.01875 per 1 million tokens, respectively.

Under the same conditions, OpenAI's price-performance model, gpt-4o-mini, goes no lower than $0.075 per 1 million tokens.

DeepSeek-V3, currently an even more cost-effective yet powerful model, costs only $0.014 per 1 million tokens on a cache hit. However, DeepSeek has announced that starting February 8, the price will quintuple to $0.07 per 1 million tokens.
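Pulling the cached input prices quoted above together makes the comparison, and the announced DeepSeek price hike, easy to check. All figures are dollars per 1 million cached input tokens, exactly as cited in this article.

```python
# $ per 1M cached input tokens, as quoted above
cached_input_price = {
    "gemini-2.0-flash": 0.025,
    "gemini-2.0-flash-lite": 0.01875,
    "gpt-4o-mini": 0.075,     # OpenAI's budget model, same conditions
    "deepseek-v3": 0.014,     # promotional price before February 8
}

# DeepSeek-V3 is currently the cheapest of the four.
cheapest = min(cached_input_price, key=cached_input_price.get)

# The announced hike quintuples DeepSeek's price to $0.07.
deepseek_after_hike = cached_input_price["deepseek-v3"] * 5
```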
