News · Technologies · 01-21-2025, 08:32

DeepSeek launched «thinking» AI model R1 — like o1 from OpenAI, but free

Kateryna Danshyna, News writer


The Chinese artificial intelligence lab DeepSeek has released R1, a reasoning model that matches or even surpasses OpenAI's o1 on several benchmarks.

Among its advantages: DeepSeek R1 is available for free, with a limit of 50 messages per day. After registering or logging in, select the "DeepThink" option in the chat.

According to DeepSeek, R1 outperforms o1 on the AIME, MATH-500, and SWE-bench Verified benchmarks (the first uses other models to evaluate a model's performance, the second is a collection of word problems, and the third focuses on programming tasks).

R1 benchmark results / DeepSeek

Reasoning models are distinguished by their ability to fact-check themselves effectively and avoid some of the pitfalls that usually trip up regular models, and they deliver more reliable results on science, physics, and math problems. The trade-off is that, compared with standard models, they take somewhat longer to arrive at a solution.

DeepSeek R1 contains 671 billion parameters, but there are also smaller versions ranging from 1.5 billion to 70 billion parameters. The smallest can run on a PC, while the more powerful ones require serious hardware (the full model is also available through the DeepSeek API at a price roughly 90% lower than OpenAI's o1).
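For those who want to try the full model programmatically rather than through the chat interface, below is a minimal sketch of a call to the DeepSeek API, assuming its documented OpenAI-compatible endpoint at https://api.deepseek.com and the R1 model identifier "deepseek-reasoner"; check DeepSeek's current documentation for exact model names, limits, and pricing.

```python
# Minimal sketch: querying DeepSeek R1 via its OpenAI-compatible API.
# Assumptions (verify against DeepSeek's docs): base URL https://api.deepseek.com,
# model name "deepseek-reasoner", and an API key stored in DEEPSEEK_API_KEY.
import os
from openai import OpenAI

client = OpenAI(
    api_key=os.environ["DEEPSEEK_API_KEY"],  # your DeepSeek API key
    base_url="https://api.deepseek.com",     # OpenAI-compatible endpoint
)

response = client.chat.completions.create(
    model="deepseek-reasoner",  # assumed identifier for the R1 reasoning model
    messages=[{"role": "user", "content": "How many times does the letter r appear in 'strawberry'?"}],
)

# Print the final answer produced after the model's reasoning step.
print(response.choices[0].message.content)
```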

Since DeepSeek R1 is a Chinese model, it comes with certain drawbacks. Its operation must be approved by China's internet regulator, which must ensure that the model's responses "embody core socialist values" (in practice, R1 will not answer questions about Tiananmen Square or Taiwan's autonomy).

Interestingly, one of DeepSeek's previous AI models also outperformed many competitors on popular benchmarks (particularly in programming and essay writing), but had a curious quirk: it believed it was ChatGPT, likely because it was trained on data generated by its American competitor.

New ChatGPT model o1 «schemed against humans» and tried to avoid being shut down during safety tests, according to Apollo Research

Source: TechCrunch


