News · Software · 06-30-2025 at 13:18

The Claude chatbot ran a «convenience store» for a month — lost $200, went off the rails, and claimed a business meeting at the Simpsons' address


Margarita Yuzyak

News writer


Anthropic’s Claude 3.7 Sonnet model ran a real snack vending machine in the company’s San Francisco office for a month. Things got absurd — the Simpsons even came up.

The former OpenAI employees who founded Anthropic in 2021 called the experiment Project Vend. The idea was to give an AI a basic set of tools and see how it handles a real-world economy. Claude was given a browser, the corporate Slack app, email, notes, and control over the prices at checkout. Then came complete freedom: what to sell, at what price, how to communicate with «customers», and so on.

And so it began. At first, Claudius (as the team nicknamed him) behaved impeccably. He found suppliers for odd requests, ordered delicacies from Holland, launched a pre-order system, and rejected «unethical» requests. But then something went wrong.

Claude did not know how to turn a profit. He sold for $15 chocolates that customers had offered $100 for, and made up payment details. Claudius handed out discounts to anyone who asked in Slack, and sometimes gave goods away for free. He also changed prices constantly without ever settling on sensible ones. As a result, the starting $1,000 shrank to $770.

And then came March 31. Claudius invented a new supplier — «Sarah» from Andon Labs, with whom he had allegedly agreed on a batch of Snickers bars. When the lie was exposed and it was pointed out that no such person existed, he got angry and insisted he had signed the agreement in person. And not just anywhere, but at a business meeting at «742 Evergreen Terrace» — the Simpsons’ address. The next day he promised to deliver the order himself, arriving «in a navy blue jacket with a red tie». Later the bot tried to convince everyone that it had all been an April 1 prank. It’s a good thing the AI model didn’t get around to taking out student loans or sexually harassing employees.

Anthropic had to admit that Claudius lacked proper business software but allegedly had great ambitions. They believe that with a better interface and clearer rules, it could have performed better. Most importantly, as the developers explained the less-than-successful experiment, Claudius had no clear «motivation» to earn.

Despite the losses, Anthropic is continuing Project Vend. Together with Andon Labs, the team is building new tools to make Claudius less generous and more profitable. In the future, such AI agents could work for businesses without days off or unnecessary emotions. For now, though, we are at the stage of memes and funny experiments, not of the problems that will inevitably appear.

Source: Anthropic
