PromptLocker: the first "AI ransomware" turned out to be a research experiment / Depositphotos
The PromptLocker malware, which was considered the world’s first ransomware created using artificial intelligence, turned out to be not a real attack at all, but a research project at New York University.
On August 26, ESET announced that it had detected the first known sample of ransomware with integrated artificial intelligence, which it named PromptLocker. However, this turned out not to be the case: the code was created by researchers at New York University's Tandon School of Engineering.
The university explained that PromptLocker is actually part of an experiment called Ransomware 3.0, conducted by a team from the Tandon School of Engineering. A representative of the school told the publication that a sample of the experimental code had been uploaded to the VirusTotal malware-analysis platform. That is where ESET specialists discovered it and mistook it for a real threat.
According to ESET, the program used Lua scripts generated from strictly defined instructions. These scripts allowed the malware to scan the file system, analyze file contents, steal selected data, and perform encryption. At the same time, the sample implemented no destructive capabilities, a logical step given that it was a controlled experiment.
Nevertheless, the malicious code did function. New York University confirmed that its AI-based simulation system was able to go through all four classic stages of a ransomware attack: mapping the system, identifying valuable files, stealing or encrypting data, and generating a ransom note. Moreover, it managed this on various types of systems, from personal computers and corporate servers to industrial controllers.
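The first two stages described above, mapping a system and flagging potentially valuable files, amount to an ordinary directory walk plus extension filtering. A minimal, harmless Python sketch of just those two stages (the extension list is an illustrative assumption, and nothing here reads, modifies, or exfiltrates data):

```python
import os

# Extensions a scanner might treat as "valuable" (illustrative assumption,
# not taken from the Ransomware 3.0 paper)
TARGET_EXTENSIONS = {".docx", ".xlsx", ".pdf", ".sql", ".key"}

def map_filesystem(root):
    """Stage 1: enumerate every file under `root`."""
    found = []
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            found.append(os.path.join(dirpath, name))
    return found

def identify_valuable(paths):
    """Stage 2: flag files whose extension suggests sensitive content."""
    return [p for p in paths if os.path.splitext(p)[1].lower() in TARGET_EXTENSIONS]

if __name__ == "__main__":
    all_files = map_filesystem(".")
    flagged = identify_valuable(all_files)
    print(f"mapped {len(all_files)} files, {len(flagged)} flagged as valuable")
```

The point of the experiment was that an LLM can compose logic of roughly this shape on demand; the subsequent stages (exfiltration, encryption, ransom note) are deliberately omitted here.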
Should you be concerned? Yes, but with an important caveat: there is a big difference between an academic proof-of-concept demonstration and a real attack carried out by malicious actors. Still, such research can serve as a roadmap for cybercriminals, since it demonstrates not only the principle of operation but also the real cost of implementing it.
New York University researchers noted that the economic side of this experiment is particularly interesting. Traditional ransomware campaigns require experienced teams, custom code, and significant infrastructure investments. In the case of Ransomware 3.0, the entire attack consumed about 23 thousand AI tokens, which is only $0.70 in value if you use commercial APIs with flagship models.
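The $0.70 figure is easy to sanity-check. Assuming a flagship-model API rate of about $30 per million tokens (the exact rate is an assumption here, not a figure from the paper), the arithmetic works out as follows:

```python
# Back-of-the-envelope cost check for the Ransomware 3.0 experiment.
# The per-token rate is an assumption; the article reports only the totals.
TOKENS_USED = 23_000          # ~23 thousand AI tokens per full attack run
PRICE_PER_MILLION = 30.0      # assumed flagship-model rate, USD per 1M tokens

cost = TOKENS_USED * PRICE_PER_MILLION / 1_000_000
print(f"estimated cost: ${cost:.2f}")  # ≈ $0.69, in line with the reported ~$0.70
```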
Moreover, the researchers emphasized that open-source AI models eliminate even these costs entirely. Cybercriminals could therefore mount such attacks at essentially no cost, achieving an investment-to-result ratio that far exceeds the efficiency of any legitimate investment in AI development.
However, this is still only a hypothetical scenario. The research looks convincing, but it is too early to say that cybercriminals will integrate AI into their attacks at scale. We may have to wait for the cybersecurity industry to confirm in practice whether artificial intelligence becomes the driving force behind a new wave of attacks.
The New York University research paper, titled "Ransomware 3.0: Self-Composing and LLM-Orchestrated," is freely available in the public domain.
Source: tomshardware