
Prove you’re not a robot: how bots quietly took over the Internet in 2024

Published by Tetiana Nechet

If you think that the Internet is not what it used to be, you’re right. More than one recent study has shown that most traffic no longer comes from people. We officially lost control of the global network back in 2024.

Bots (programs that interact with websites in an automated manner) have long been the norm, but last year traffic from them exceeded human traffic for the first time. This growth is due in no small part to the rise of generative artificial intelligence (AI) and large language models (LLMs), which have made it much easier to create bots, and not all of them are harmless tools for monitoring or other assistance.

The war of the bots: the «Decepticons» are rapidly winning

AI bots are getting smarter and more dangerous every year. They are capable of stealing passwords, attacking websites, and manipulating social media. While you’re trying to solve another tricky CAPTCHA, cybercriminals are using generative artificial intelligence to outsmart security systems. And they’re doing it on a massive scale.

According to Thales’s 2025 Imperva Bad Bot Report, automated web traffic exceeded human traffic in 2024 for the first time in a decade, reaching 51% of all traffic, of which only 14% came from harmless bots. The remaining 37% is accounted for by so-called «bad» bots: automated programs with malicious intentions that pose a serious threat to businesses and users.

As automated traffic increases, security teams have to adapt their approach to protecting applications against the threat of bots, which are gaining new advantages every day.

The share of «bad» bot traffic has grown for the sixth year in a row: in 2023 it was 32%, while human traffic decreased to 50.4%. The travel (41% of its traffic from «bad» bots), retail (59%), and financial services (45%) sectors are particularly vulnerable.

In the travel industry, for example, bots create fake airline reservations, which leads to artificially inflated prices, and in retail, they buy up limited edition goods to resell them at higher prices.

Thanks to the rapid development of generative AI and large language models, bot creation has become accessible even to unskilled people. Well-known tools such as ChatGPT, Gemini, and Claude make it possible to develop bots that mimic real human actions, from mouse movements to clicks, making detection harder and harder.
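To illustrate why mimicking human behavior matters, here is a minimal, hypothetical server-side heuristic (not taken from the report) that flags sessions whose inter-event timing is suspiciously regular. Real detection systems combine many such weak signals; the function name and threshold below are illustrative assumptions.

```python
import statistics

def looks_automated(event_times, cv_threshold=0.15):
    """Flag a session whose inter-event timing is suspiciously regular.

    Humans click and type with irregular rhythm; naive bots fire events
    at near-constant intervals. A very low coefficient of variation
    (stdev / mean) of the gaps between events is one weak bot signal.
    """
    if len(event_times) < 3:
        return False  # not enough data to judge
    gaps = [b - a for a, b in zip(event_times, event_times[1:])]
    mean = statistics.mean(gaps)
    if mean <= 0:
        return True  # zero or negative gaps: replayed or scripted events
    cv = statistics.stdev(gaps) / mean
    return cv < cv_threshold

# A scripted bot clicking every 200 ms vs. a human's ragged timing:
bot_session = [0.0, 0.2, 0.4, 0.6, 0.8, 1.0]
human_session = [0.0, 0.31, 1.12, 1.45, 2.9, 3.2]
```

A bot that randomizes its delays defeats this single check, which is exactly why modern anti-bot systems layer dozens of behavioral and network signals together.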

In 2024, Imperva blocked about 2 million AI-based cyberattacks daily, most of which targeted social media platforms and focused on data theft and manipulation of APIs (the backbone of modern digital services). Overall, 31% of all attacks detected and mitigated by Imperva were automated threats, and 25% of those were sophisticated malicious bots aimed at undermining business logic.

OWASP’s 21 Automated Threats is a catalog of automated cyberattacks that use bots and scripts to exploit web application vulnerabilities at scale, bypass security controls, and disrupt businesses across industries.

In total, 44% of «bad» bot attacks targeted APIs, which attackers used to gain unauthorized access to confidential data, commit payment fraud, or simply steal accounts.

In 2024, sophisticated and medium-complexity bot attacks accounted for 55% of all bot attacks. Simple, massive bot attacks also increased, accounting for 45% (up from 40% in 2023), because the availability of simple AI-based automation tools makes it easy for attackers with less technical expertise to launch large-scale bot attacks.

ChatGPT, ByteSpider Bot, ClaudeBot, Google Gemini, Perplexity AI, Cohere AI, and Apple Bot were the most active. ByteSpider Bot was responsible for 54% of all AI-driven attacks, followed by AppleBot with 26%, ClaudeBot with 13%, and ChatGPT User Bot with 6%.

ByteSpider’s dominance in AI attacks is due to its widespread recognition as a legitimate web crawler. The perfect undercover agent. Cybercriminals often disguise their malicious bots as web crawlers to avoid detection and bypass security systems.

In contrast, «good» bots, such as AI crawlers, SEO tools, and security software, serve legitimate functions. For example, OpenAI’s GPT Bot increased its activity by 12% in 2024 as it collected training data and linked language models to real-time data. Meanwhile, activity from Google’s AI crawler grew by an impressive 62%, driven by demand for intelligent data collection.

Social media is a favorite target of the bad guys

Interesting things came to light in 2023 after the incident with the Chinese balloon, which entered the airspace of the United States and Canada and was later shot down over the Atlantic Ocean. The alleged espionage became a major topic of diplomatic dispute between the United States and China, and a war of words also unfolded on social media.

Researchers from Carnegie Mellon University reported that 35% of geolocated users in the United States and 64% in China demonstrated bot-like behavior and actively tried to influence public opinion on the X platform.

Experts tracked nearly 1.2 million tweets posted by more than 120,000 users of the former Twitter between January 31 and February 22, 2023. All of these tweets contained the hashtags #chineseballoon and #weatherballoon and discussed the controversial aerial object. The tweets were geolocated using the X platform’s location feature and classified using an algorithm called BotHunter. Interestingly, among the 42% of accounts whose location could not be determined, only 58% were human.

Bots generate provocative or emotional posts or comments to stir up debate and manipulate the objective perception of real users.

Another malicious use of bots is «scraping»: collecting the personal data of social media users to craft convincing phishing attacks. Researchers from Arkose Labs found that scraping bot activity increased by 432% from the first to the second quarter of 2023, making it a very serious threat to privacy. For example, a bot can steal data from a famous person’s account, after which the attackers place a malicious link on the page, for example, to a fake donation site.

Companies lose hundreds of thousands of dollars

Back in 2021, the company Kasada found that most of the firms it surveyed (83%) experienced at least one bot attack during the year. 77% of companies lost 6% or more of their revenue to bot attacks, and 39% reported losses of more than 10%. Overall, 80% of organizations noted that malicious bots are becoming more complex and harder to detect with existing security tools. At the same time, only 31% of companies are confident in their ability to detect «zero-day» bots (those that have not been encountered before).

Each bot attack costs one in four companies an average of $500,000. Another 77% of companies spend $250,000 or more just to maintain anti-bot solutions.

The largest number of bot attacks globally in 2024 came from the United States (53%), the United Kingdom (6%), and Brazil (6%). Ukraine and Russia are among the top 10 most attacked countries in EMEA (Europe, the Middle East, and Africa) amid the ongoing war. Interestingly, some bot attacks were motivated by hacktivism.

Here are the methods and tactics used by bots to evade detection in 2024:

  • Faking browser identity and attributes (for example, Chrome or Firefox). This is a simple but effective way to bypass basic security measures. More advanced bots also spoof other browser attributes — such as headers and JavaScript execution — to avoid detection by sophisticated anti-bot systems. The most popular browsers among bots were Chrome, Mobile Safari, Mobile Chrome, and Microsoft Edge.
  • Residential proxies. Attackers use the IP addresses of real users to blend in with normal traffic. Residential proxies allow malicious traffic to be routed through real users’ devices, making detection based on IP reputation more difficult. Although their use has decreased slightly, from 26% of bot attacks in 2023 to 21% in 2024, this tactic remains popular due to its effectiveness.
  • Privacy tools. Services like iCloud Private Relay mask users’ identity, making it difficult to separate real traffic from automated bot activity.
  • API misuse. Malicious bots use open or unsecured APIs to steal data, automate attacks, and bypass user-interface security controls.
  • Mobile application hacking. Bots attack outdated mobile applications that do not require mandatory updates. This makes them vulnerable to reverse engineering, credential brute-force attacks (login and password guessing), and unauthorized changes.
  • CAPTCHA bypass. For example, AI-powered bots can pass CAPTCHAs with high accuracy, which has made this traditional security mechanism less effective.
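To make the first tactic on this list concrete, here is a minimal sketch of one consistency check a defender might run; the function and threshold logic are illustrative assumptions, not Imperva’s actual method. It relies on the fact that real Chromium-based browsers send `Sec-CH-UA` client-hint headers alongside the User-Agent, while a naive script that only fakes the User-Agent string usually does not.

```python
def ua_header_mismatch(headers):
    """Rough server-side signal: a client claiming to be Chrome should
    also send Chromium client-hint headers (sec-ch-ua). Header names are
    assumed lower-cased, as most web frameworks normalize them. This is
    only one weak signal; advanced bots spoof client hints as well."""
    ua = headers.get("user-agent", "").lower()
    claims_chrome = "chrome" in ua and "edg" not in ua
    sends_client_hints = "sec-ch-ua" in headers
    return claims_chrome and not sends_client_hints

# A headless script setting only a Chrome User-Agent string:
bot_headers = {"user-agent": "Mozilla/5.0 ... Chrome/120.0 Safari/537.36"}
# A real Chrome request also carries client hints:
real_headers = {
    "user-agent": "Mozilla/5.0 ... Chrome/120.0 Safari/537.36",
    "sec-ch-ua": '"Chromium";v="120", "Google Chrome";v="120"',
}
```

Because each such check is cheap to evade on its own, anti-bot vendors score many header, network, and behavioral signals together rather than relying on any single mismatch.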

Huge losses for businesses and users

The biggest problem is that «bad» bots cost businesses billions of dollars annually through attacks on websites, APIs, and applications. They not only steal passwords and data but also create fake traffic that distorts company analytics. A report from the company Lunio showed that advertisers lost more than $71 billion in 2023 to fake traffic generated by bots. Bots also complicate the lives of ordinary users: they snap up cheap airline and concert tickets first, create artificial shortages of goods, and imitate real people on social media to spread misinformation.

Data collection (scraping): 31% of API attacks

Bots extract huge amounts of data through APIs that provide access to confidential or private information. This method is popular because it lets attackers collect massive amounts of data automatically: comprehensive information about users, products, and internal metrics, all with little or no human intervention. The data collected not only facilitates further criminal activity but can also be used to research competitors.
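One common mitigation for this kind of automated harvesting (a generic technique, not one prescribed by the report) is per-client rate limiting on the API. A minimal token-bucket sketch, with illustrative parameters:

```python
import time

class TokenBucket:
    """Per-client token bucket: each request costs one token; tokens
    refill at `rate` per second up to `capacity`. Sustained scraping
    drains the bucket and gets rejected, while short human bursts pass."""

    def __init__(self, rate, capacity):
        self.rate = rate          # tokens refilled per second
        self.capacity = capacity  # maximum burst size
        self.tokens = capacity
        self.last = time.monotonic()

    def allow(self):
        now = time.monotonic()
        # Refill proportionally to the time elapsed since the last call.
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False

bucket = TokenBucket(rate=1.0, capacity=5)
# Five rapid calls fit in the burst allowance; the sixth is throttled:
results = [bucket.allow() for _ in range(6)]
```

In production this state would be keyed per client (IP, API key, or session) and usually kept in a shared store such as Redis; scrapers using residential proxies rotate identities precisely to dodge limits like this.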

Financial fraud: 26%

By attacking financial transaction processing points, attackers manipulate payment processes. This type of attack includes exploiting vulnerabilities in checkout systems to make unauthorized transactions or abusing promotions and discounts. The immediate financial loss, combined with the loss of customer trust, makes payment fraud extremely attractive to bad bots.

Account hijacking: 12%

Account takeover (ATO) operations use previously stolen or guessed credentials, after which attackers gain access to sensitive personal and financial information.

Scalping: 11%

Scalping attacks use bots to instantly buy up or reserve large quantities of popular goods or services. This disrupts fair access for consumers, distorts market dynamics, and allows goods to be resold at inflated prices.

Is it possible to destroy an army of bots?

Fighting bots is a complex task that mostly falls on the shoulders (and finances) of web application management companies. Ordinary users should take care of themselves and follow these simple steps to minimize hacking and manipulation:

  • Do not use simple passwords or reuse the same password across sites.
  • Update your router firmware and antivirus software regularly.
  • Avoid suspicious VPN services.
  • Check the sources of information on social media to avoid becoming a victim of manipulation.