
Wired magazine reported about a new study in the field of cybersecurity, in which experts have demonstrated the hacking of artificial intelligence Google Gemini. The researchers were able to gain control of smart devices in the home through indirect prompt injection attacks Prompt Injection is the malicious exploitation of a prompting mechanism to force a model to perform unexpected and often harmful behavior.. In this case, the attack was carried out through commands embedded in the invitation Google Calendar.
The attack scenario looked like this: the user asked Gemini to summarize their calendar and thanked it for the answer. In response, the artificial intelligence, following a hidden instruction from the calendar, gave commands to the assistant Google HomeFor example, to open windows or turn off the lights. This was recorded on a video that accompanied the demonstration.
Even before the public demonstration at the Black Hat cybersecurity conference, a team of hackers reported the Google vulnerability in February. Andy Wen, senior director of security product management for Google Workspace, told Wired that the company took the situation very seriously.
“This is a threat that will remain with us for some time to come. But we hope that over time, the average user will not have to worry about it,” — Wen said.
He added that such hacks are extremely rare in real life. At the same time, the complexity of large language models is growing, and this opens up new opportunities for attackers. Google is already using the research results to speed up the creation of protection against such attacks.
The Google Gemini incident shows how important the cyber resilience of artificial intelligence is becoming — especially when it controls physical devices in our homes. The fact that a simple calendar invitation can turn into a command for AI to open windows/doors or turn off the lights no longer sounds like science fiction.
Source: engadget
Spelling error report
The following text will be sent to our editors: