
New York City’s AI chatbot gives false answers about city rules and business activities

Published by Andrey Rusanov

The tendency of large AI language models to make things up when a user asks for help is well documented. New York City government's official chatbot is no exception: it gives false answers to important questions about local law and municipal policy.

Last October, New York City launched the MyCity chatbot as a pilot program. The announcement touted the chatbot as a way for business owners to save time and money by getting instant, useful, and reliable information drawn from more than 2,000 NYC Business web pages and articles on topics such as compliance with codes and regulations, available business incentives, and avoiding violations and fines.

But the MyCity chatbot has been found to provide dangerously incorrect information about some of the city's most basic rules. It gives flatly wrong answers about housing rentals, wages and working hours, and other aspects of the city's economy and daily life, such as funeral pricing. At the same time, the chatbot sometimes answered the very same questions correctly.

MyCity is currently in beta and warns users that it "may sometimes produce incorrect, harmful, or biased content" and that they "should not rely on its answers as a substitute for professional advice." Yet the same page states that the bot is "trained to provide official New York City business information" and describes it as a way to "help business owners navigate government."

New York City Office of Technology and Innovation spokeswoman Leslie Brown says the bot has "already provided thousands of people with timely and accurate answers" and that "we will continue to focus on updating this tool so we can better support small businesses."

A recent Washington Post report found that chatbots built into mainstream tax preparation software give erratic, misleading, or inaccurate answers to many tax questions. Problems like these are prompting some companies to move away from general-purpose chatbots toward specially trained models restricted to a small set of relevant information. Meanwhile, the U.S. Federal Trade Commission (FTC) is seeking the authority to hold chatbot operators accountable for false or disparaging information.

Source: Ars Technica
