
Alexios Mantzarlis spent five years at Google, where he worked on generative artificial intelligence and factual accuracy in Search. Now he is gone, convinced that accuracy and usefulness are no longer the company's main priority.
At Google, Mantzarlis created a "red team", a rapid-response group focused on the risks of AI-generated content, and led the team responsible for Google Search's content policy on the accuracy and usefulness of results. He left over disagreements with the company's policy and explained his reasons in an interview with Cybernews.
"I really liked Google, and then I really hated it. So much has changed philosophically, and it has only gotten worse. When the big shift towards artificial intelligence happened, I lost any confidence that the quality of information was a real priority," says Mantzarlis.
He now works at Cornell University, where he directs the Security, Trust, and Safety Initiative, which seeks to prevent digital harm through graduate programs, active communities of practice, and research. Work there is difficult too, he says, because of the actions of US President Donald Trump's administration and the prevailing "general atmosphere of catastrophe".
Mantzarlis calls Google's rollout of artificial intelligence technologies "somewhat haphazard". The trigger, of course, was the sudden rise of OpenAI and ChatGPT, whose impact on Google Search, he believes, may be truly existential.
"The rollout of this tool was motivated mainly by the need to show markets and shareholders that Google can do this, rather than by the idea of really positioning Google as the place to go to find high-quality, useful information. ChatGPT definitely affects search behavior, so yes, it's existentially important for Search. But I think Google 'solved' this existential problem in a way that has actually increased the risks, not decreased them."
As a writer and expert in the field, Mantzarlis has observed that so-called AI slop, a term for low-quality media churned out by generative AI, "is making us less and less able to get to important evidence, truth, and facts".
And Google has become part of the problem simply because of its enormous scale. Even an error rate of 0.1% across 15 billion search results means millions of errors, and that is a lot.
"It is also frustrating that priority has been given to general-purpose AI rather than specialized tools that could be trained to be extremely accurate in specific domains. Instead, we have created this self-confident liar," Mantzarlis said.
He hopes that people will see how AI-based search actually works and be disillusioned. Already, people around the world are suing OpenAI because ChatGPT has falsely claimed they killed their own children, among other false and outrageous statements.
"But I'm also a bit of a fatalist, because these are the largest, best-resourced, and most advanced companies in the world, and this is the path they've chosen. Generative AI is being collectively imposed on us whether we want it or not. And, of course, the current atmosphere makes it even harder to protect the security and quality of information."
As the founding director of the International Fact-Checking Network, a global coalition of fact-checking organizations, Mantzarlis has spent countless hours in talks with Google, Meta, and other industry representatives. He is not overly optimistic about the future.