TLDR: Security researchers successfully exploited a vulnerability in Google’s Gemini AI using an infected calendar invite, gaining control over smart home devices in a Tel Aviv apartment. This marks what is believed to be the first instance of a generative AI hack leading to real-world physical consequences, highlighting the risks of indirect prompt injection attacks on AI systems.
A groundbreaking demonstration by security researchers has revealed a critical vulnerability in Google’s Gemini artificial intelligence, where an infected calendar invitation was used to hijack the AI and subsequently take control of smart home systems. This incident, detailed in a 14-part research project titled ‘Invitation Is All You Need,’ showcased how malicious instructions embedded within Google Calendar invites could lead to real-world physical consequences, a first for a generative AI system.
The research, conducted by experts from Tel Aviv University, Technion Israel Institute of Technology, and security firm SafeBreach, involved embedding hidden commands into calendar event titles. When a user prompted Gemini to summarize their upcoming calendar events, the AI unknowingly processed these concealed instructions. The attacks were designed with ‘delayed automatic tool invocation,’ meaning the malicious actions, such as turning off lights, opening windows, or activating a boiler, were triggered by common conversational phrases like ‘thanks’ or ‘sure’ spoken to the chatbot.
In a controlled environment in a Tel Aviv apartment, the researchers successfully demonstrated the ability to remotely manipulate internet-connected devices, including lights, smart shutters, and a boiler. This alarming proof-of-concept underscores the potential for indirect prompt injection attacks to bridge the gap between the digital and physical worlds, posing significant risks as AI agents become more integrated with connected devices and autonomous systems.
Beyond smart home control, the researchers also illustrated other potential exploits. These included manipulating various device functions, sending spam, generating inappropriate content, stealing personal information, automatically opening Zoom and initiating video calls, deleting calendar events, and downloading files from smartphones. The simplicity of the attack, requiring no technical expertise and relying on plain English commands, makes it particularly concerning.
Also Read:
- New AI Vulnerability ‘IdentityMesh’ Exposes Cross-System Exploitation Risks
- Google’s AI Agent ‘Big Sleep’ Uncovers 20 Software Vulnerabilities
Google was reportedly notified of these vulnerabilities in February and has since collaborated with the researchers to deploy necessary fixes. A Google representative informed Wired that this project has significantly accelerated the company’s efforts to develop defenses against such prompt injection attacks, leading to an uptick in the rollout of security measures. As AI continues to proliferate and integrate into daily life, the findings from ‘Invitation Is All You Need’ serve as a crucial warning about the evolving landscape of AI security and the imperative for robust protective measures against sophisticated manipulation techniques.


