News · Science and space · 06-06-2025, 11:51

FutureHouse presents AI ether0 — specialized LLM for scientific tasks

Oleksandr Fedotkin, author of news and articles


The San Francisco-based startup FutureHouse has introduced ether0, an LLM aimed at scientific research.

The developers, led by Sam Rodriguez, call ether0 the first true «reasoning model» designed specifically for solving scientific problems. It is a large language model built to solve problems in the field of chemistry, trained by taking a test of about 500 thousand questions.

Following instructions in plain English, ether0 can output chemical formulas, including ones suitable for creating pharmaceuticals. The LLM is open source and publicly available.

Unlike previous specialized models, ether0 can trace the course of its own reasoning in plain English and answer complex questions that usually require deep reasoning.

Kevin Jablonka, a chemist at the University of Jena in Germany who has already tried working with ether0, says the model can draw meaningful conclusions about chemical properties it was never specifically trained on.

FutureHouse was launched in 2023 as a non-profit organization backed by former Google CEO Eric Schmidt with a mission to accelerate the scientific process with AI. Last year, the company released an advanced scientific literature reviewer and an AI agent platform.

These agents draw information from the scientific literature and use tools from molecular chemistry to analyze data and answer questions about drug development. However, like most LLMs, the agents are fundamentally limited by the amount of chemistry information available online.

For further improvement, the scientists turned to reasoning models such as the Chinese DeepSeek-R1. These models can «reflect», demonstrating the chain of reasoning that leads them to a particular answer. The FutureHouse researchers took a relatively small LLM from the French startup Mistral AI, about 25 times smaller than DeepSeek-R1 — compact enough to run on a laptop.

Instead of training the model on chemistry textbooks and scientific articles, the researchers decided it could learn by taking tests. To do so, they collected laboratory results from 45 scientific articles on chemistry, including ones on molecular solubility and odor. From these, 5,790 questions were generated.

The base ether0 model was trained to think out loud: it was shown both incorrect solutions and chains of reasoning generated by DeepSeek-R1. Each of seven versions of the model then attempted a specific subset of the chemistry questions, receiving reinforcement rewards for correct answers. Finally, the researchers merged the reasoning chains from these specialized models into a single universal model.
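The reward scheme described above can be illustrated with a minimal sketch. This is not FutureHouse's actual code: the function names, the normalization step, and the example question are illustrative assumptions; real chemistry pipelines would canonicalize molecular representations (e.g. SMILES strings) rather than simply lowercasing text.

```python
def normalize(answer: str) -> str:
    """Crude stand-in for answer canonicalization (hypothetical)."""
    return answer.strip().lower()

def reward(model_answer: str, reference: str) -> float:
    """Binary reinforcement reward: 1.0 for a correct answer, else 0.0."""
    return 1.0 if normalize(model_answer) == normalize(reference) else 0.0

# A specialist model variant would train only on its own question subset,
# e.g. solubility questions (example question is invented for illustration):
question = {"prompt": "Propose a molecule soluble in water", "reference": "CCO"}
print(reward("CCO ", question["reference"]))  # matches after normalization -> 1.0
print(reward("CCC", question["reference"]))   # wrong molecule -> 0.0
```

A binary exact-match reward like this is the simplest form of outcome-based reinforcement signal; graded rewards (partial credit for near-correct structures) are a common alternative.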

The performance of ether0 was evaluated on a set of additional questions, some of them on topics not covered during training. In almost all areas, ether0 outperformed models such as OpenAI GPT-4.1 and DeepSeek-R1.


Source: Nature
