If you think the Internet is not what it used to be, you’re right. More than one recent study has shown that most traffic no longer comes from people: we officially lost control of the global network back in 2024.
Bots (programs that interact with websites in an automated manner) have been the norm for a long time, but last year traffic from them exceeded human traffic for the first time. This growth is due in no small part to the rise of generative artificial intelligence (AI) and large language models (LLMs), which have made it much easier to create bots — and not all of them are harmless tools for monitoring and other helpful tasks.
AI bots are getting smarter and more dangerous every year. They are capable of stealing passwords, attacking websites, and manipulating social media. While you’re trying to solve another tricky CAPTCHA, cybercriminals are using generative artificial intelligence to outsmart security systems. And they’re doing it on a massive scale.
According to the 2025 Imperva Bad Bot Report by Thales, automated web traffic exceeded human traffic in 2024 for the first time in a decade, reaching 51% of all traffic. Only 14% of total traffic came from benign bots; the remaining 37% was generated by so-called «bad» bots — automated programs with malicious intent that pose a serious threat to businesses and users.
As automated traffic increases, security teams have to adapt their approach to protecting applications against the threat of bots, which are gaining new advantages every day.
The share of «bad» bot traffic has grown for the sixth year in a row: in 2023 it stood at 32%, while human traffic fell to 50.4%. The travel (41% of traffic from «bad» bots), retail (59%), and financial services (45%) sectors are particularly vulnerable.
In the travel industry, for example, bots create fake airline reservations, which leads to artificially inflated prices, and in retail, they buy up limited edition goods to resell them at higher prices.
Thanks to the rapid development of generative AI and large language models, bot creation has become accessible even to unskilled people. Well-known tools such as ChatGPT, Gemini, and Claude can be used to develop bots that mimic real human actions — from mouse movements to clicks — making detection increasingly difficult.
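One common countermeasure is server-side behavioral analysis. The sketch below is a minimal illustration, not anything from the report: scripted input tends to arrive at near-constant intervals, while human timing is noisy, so the coefficient of variation of inter-event gaps can serve as a crude signal (the threshold is an illustrative assumption).

```python
import statistics

def looks_automated(event_times, cv_threshold=0.15):
    """Flag a session whose input events (clicks, key presses) arrive at
    suspiciously regular intervals. Human timing is noisy; scripted timing
    tends to be near-constant. Threshold is illustrative, not tuned."""
    if len(event_times) < 5:
        return False  # not enough data to judge
    gaps = [b - a for a, b in zip(event_times, event_times[1:])]
    mean_gap = statistics.mean(gaps)
    if mean_gap == 0:
        return True  # events fired simultaneously: almost certainly scripted
    cv = statistics.stdev(gaps) / mean_gap  # coefficient of variation
    return cv < cv_threshold

# A script clicking exactly every 200 ms vs. a human with jittery timing
bot_session   = [0.0, 0.2, 0.4, 0.6, 0.8, 1.0]
human_session = [0.0, 0.31, 0.52, 1.10, 1.37, 2.05]
print(looks_automated(bot_session))    # True
print(looks_automated(human_session))  # False
```

Real bot-management products combine dozens of such signals (timing, mouse trajectories, browser fingerprints); a single heuristic like this is easy for a sophisticated bot to evade.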
In 2024, Imperva blocked about 2 million AI-driven cyberattacks daily, most of them aimed at social media, data theft, and manipulation of APIs (the backbone of modern digital services). In all, 31% of the attacks Imperva detected and mitigated were automated threats, and 25% of those were sophisticated malicious bots aimed at undermining business logic.
OWASP Automated Threats is a catalog of 21 automated attack types that use bots and scripts to exploit web application vulnerabilities at scale, bypass security controls, and disrupt businesses across industries.
Overall, 44% of «bad» bot attacks targeted APIs, which attackers used to gain unauthorized access to confidential data, commit payment fraud, or simply steal accounts.
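A standard first line of defense against automated API abuse is per-client rate limiting. Below is a minimal token-bucket sketch; the rate and capacity values are illustrative assumptions, and production systems typically enforce this at the gateway or load-balancer level rather than in application code:

```python
import time

class TokenBucket:
    """Minimal token-bucket rate limiter: each request spends one token,
    tokens refill at a fixed rate, and bursts are capped by the bucket size."""
    def __init__(self, rate, capacity):
        self.rate = rate          # tokens added per second
        self.capacity = capacity  # maximum burst size
        self.tokens = capacity
        self.last = time.monotonic()

    def allow(self):
        now = time.monotonic()
        # Refill proportionally to elapsed time, never above capacity
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False

bucket = TokenBucket(rate=5, capacity=10)      # 5 req/s, bursts up to 10
results = [bucket.allow() for _ in range(15)]  # a burst of 15 rapid calls
print(results.count(True))  # ~10: the burst is clipped at the bucket size
```

Keeping one bucket per API key or client IP turns this into a basic defense against the bulk data extraction and payment-fraud probing described above.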
In 2024, sophisticated and moderately complex bot attacks accounted for 55% of all bot attacks. Simple, massive bot attacks also grew, reaching 45% (up from 40% in 2023), because readily available AI-based automation tools let attackers with little technical expertise launch bot attacks at scale.

ChatGPT, ByteSpider Bot, ClaudeBot, Google Gemini, Perplexity AI, Cohere AI, and Apple Bot were the most active. ByteSpider Bot was responsible for 54% of all AI-driven attacks, followed by AppleBot with 26%; ClaudeBot accounted for 13%, and ChatGPT User Bot for 6%.
In contrast, «good» bots, such as AI crawlers, SEO tools, and security software, serve legitimate purposes. For example, OpenAI’s GPTBot increased its activity by 12% in 2024 as it collected training data and linked language models to real-time data, while the popularity of Google’s AI crawler grew by an impressive 62%, driven by demand for intelligent data collection.
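Site operators who want to see how much of their traffic comes from these named crawlers can scan access logs for the crawlers’ user-agent tokens. A hedged sketch: the token list below is illustrative, and operators should verify the exact user-agent strings in each vendor’s published documentation.

```python
from collections import Counter

# User-agent tokens of several well-known AI crawlers (illustrative list;
# check each vendor's documentation for the authoritative strings).
AI_CRAWLERS = ["GPTBot", "ClaudeBot", "Bytespider", "Applebot", "PerplexityBot"]

def count_ai_crawlers(log_lines):
    """Tally requests from known AI crawlers in a web server access log."""
    hits = Counter()
    for line in log_lines:
        for token in AI_CRAWLERS:
            if token.lower() in line.lower():
                hits[token] += 1
    return hits

log = [
    '1.2.3.4 - - "GET / HTTP/1.1" 200 "Mozilla/5.0 ... GPTBot/1.0"',
    '5.6.7.8 - - "GET /api HTTP/1.1" 200 "Mozilla/5.0 ... Bytespider"',
    '9.9.9.9 - - "GET /blog HTTP/1.1" 200 "Mozilla/5.0 (Windows NT 10.0)"',
]
print(count_ai_crawlers(log))  # Counter({'GPTBot': 1, 'Bytespider': 1})
```

Note that user-agent strings are trivially spoofed, so this only measures crawlers that identify themselves honestly; «bad» bots generally do not.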
Interesting things came to light in 2023 after the Chinese balloon incident (a Chinese balloon entered the airspace of the United States and Canada and was later shot down over the Atlantic Ocean). Espionage, of course, became a major topic of diplomatic dispute between the United States and China, and a flame war also broke out on social media.
Researchers from Carnegie Mellon University reported that 35% of geolocated users in the United States and 64% in China demonstrated bot-like behavior and actively tried to influence public opinion on the X platform.
Experts tracked nearly 1.2 million tweets posted by more than 120,000 users of the former Twitter between January 31 and February 22, 2023. All of the tweets contained the hashtags #chineseballoon and #weatherballoon and discussed the controversial aerial object. The tweets were geolocated using the X platform’s location feature and classified with an algorithm called BotHunter. Interestingly, among the 42% of accounts whose location could not be determined, only 58% appeared to be human.
Bots generate provocative or emotional posts or comments to stir up debate and manipulate the objective perception of real users.
A 2021 survey by Kasada found that most of the firms surveyed (83%) had experienced at least one bot attack during the year. 77% of companies lost 6% or more of their revenue to bot attacks, and 39% reported losses of more than 10%. Overall, 80% of organizations noted that malicious bots are becoming more sophisticated and harder to detect with existing security tools, while only 31% of companies were confident in their ability to detect «zero-day» bots (i.e., those never encountered before).
Each bot attack costs one in four companies an average of $500,000, and 77% of companies spend $250,000 or more just to maintain anti-bot solutions.
The largest share of bot attacks globally in 2024 came from the United States (53%), the United Kingdom (6%), and Brazil (6%). Ukraine and Russia are among the top 10 most attacked countries in EMEA (Europe, the Middle East, and Africa) amid the ongoing war. Interestingly, some bot attacks were motivated by hacktivism.
The biggest problem is that «bad» bots cost businesses billions of dollars annually through attacks on websites, APIs, and applications. They not only steal passwords and data, but also create fake traffic that distorts company analytics.
Bots extract huge amounts of data through APIs that provide access to confidential or private information. The method is popular because it lets attackers collect massive volumes of data automatically: comprehensive information about users, products, and internal metrics, all with little or no resistance. The collected data not only fuels further criminal activity but can also be used for competitive research.
By attacking financial transaction processing points, attackers manipulate payment flows. This type of attack includes exploiting vulnerabilities in checkout systems to make unauthorized transactions or abusing promotions and discounts. The immediate financial loss, combined with the loss of customer trust, makes payment fraud extremely attractive to bot operators.
Account takeover (ATO) attacks use previously stolen or guessed credentials to hijack user accounts, giving attackers access to sensitive personal and financial information.
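Credential-stuffing campaigns behind ATO have a recognizable footprint: many failed logins against many distinct accounts from one source in a short window. A minimal detection sketch, with purely illustrative thresholds:

```python
from collections import defaultdict, deque

class StuffingDetector:
    """Flag IPs that generate many failed logins across many distinct
    accounts within a short window — the typical footprint of credential
    stuffing. All thresholds are illustrative, not recommended values."""
    def __init__(self, window=60.0, max_failures=10, max_accounts=5):
        self.window = window              # sliding window in seconds
        self.max_failures = max_failures  # failed attempts per IP
        self.max_accounts = max_accounts  # distinct usernames per IP
        self.failures = defaultdict(deque)  # ip -> deque of (time, username)

    def record_failure(self, ip, username, now):
        q = self.failures[ip]
        q.append((now, username))
        # Drop events that have aged out of the window
        while q and now - q[0][0] > self.window:
            q.popleft()
        accounts = {user for _, user in q}
        return len(q) >= self.max_failures or len(accounts) >= self.max_accounts

det = StuffingDetector()
suspicious = False
for i in range(6):  # one IP probing six different accounts within seconds
    suspicious = det.record_failure("203.0.113.7", f"user{i}", now=float(i))
print(suspicious)  # True: six distinct accounts exceeds the threshold
```

A flagged IP would then be challenged (CAPTCHA, multi-factor prompt) or blocked; attackers counter by rotating through residential proxies, which is why real defenses also correlate device fingerprints and known-breached credential lists.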
Scalping attacks use bots to instantly buy up or reserve large quantities of popular goods or services. This disrupts fair consumer access, distorts market dynamics, and allows goods to be resold at inflated prices.
Fighting bots is a complex task that falls mostly on the shoulders (and budgets) of the companies that run web applications. Ordinary users, however, can protect themselves by following a few simple steps to minimize the risk of hacking and manipulation: