
Musk's DOGE used "someone else's AI" to vet officials: Meta's Llama 2, not Grok

Published by Margarita Yuzyak

The DOGE team, headed by Elon Musk, did not analyze federal employees' responses with its own Grok model. Instead, it "borrowed" Meta's Llama 2.

Wired, citing internal documents, reports on how reactions to the controversial ultimatum email were processed. That was the message in which federal employees were told to either commit to the new policy or resign. DOGE deployed and ran Llama 2 locally so the data would not leave the network. The model sorted responses to the "Fork in the Road" letter, which was structured much like the one sent to X (Twitter) employees: staff at the social network were offered a choice between supporting new return-to-office rules or quitting.

According to the report, it was Llama 2 that analyzed the responses to assess who stayed and who quit. The process ran locally, but that was not enough to dispel employees' privacy concerns, especially given earlier reports that DOGE had used AI to seek out government officials hostile to Trump.

Why not Grok?

The analysis took place in January, when Grok was available only as a proprietary, closed system, not in an open format. It emerged only recently that Microsoft will host xAI's Grok 3 on Azure, so DOGE is likely to rely on Musk's own tools more often in the future.

However, this is exactly what worries lawmakers. In April, more than 40 members of Congress wrote to the Director of the Office of Management and Budget demanding an investigation into DOGE's actions. The letter cited Musk's potential conflicts of interest, the risk of data leaks, and the opaque use of artificial intelligence models.

Other DOGE experiments with AI

In addition to Llama 2, DOGE experimented with a number of other tools: the GSAi chatbot (based on Anthropic and Meta models), AutoRIF (a system that could assist with mass layoffs), and Grok-2 as an internal assistant.

A separate part of the lawmakers' indignation concerned a new type of email that employees began receiving after "Fork in the Road": they were asked to send up to five bullet points about their achievements every week. Civil servants worried that these emails were also being fed into AI systems and might contain sensitive data. While there is no evidence that Llama 2 was used to analyze the new emails, some employees believe the code may simply have been reused.

Llama 2 is no stranger to controversy

Last fall, it emerged that the Chinese military had used Llama 2 as the basis for its own AI model. Meta called this "unauthorized" use of an outdated version and subsequently opened access to its models for US national security programs. It was precisely Llama 2's openness that allowed the government to use it without Meta's explicit consent.

Given all of the above, officials hold mixed opinions. The leadership of the Office of Management and Budget, for example, seems loyal to DOGE. But lawmakers see the use of AI in personnel analysis without transparency or safeguards as a potential disaster: generative models often make mistakes and carry biases, so the technology is simply not ready for high-stakes decisions.

Source: Ars Technica