
First AI Worm

Roy Firestein · CEO at Autohost.ai · 3 min read

A new study by researchers at Cornell Tech and Intuit has revealed a concerning threat emerging from the rapid integration of Generative AI (GenAI) capabilities into interconnected ecosystems of semi- and fully autonomous agents. The paper introduces Morris II, the first worm designed to target these GenAI ecosystems using adversarial self-replicating prompts.

The researchers demonstrate how attackers can craft malicious prompts that, when processed by GenAI models, replicate themselves from input to output, carry out malicious activity, and propagate to new agents by exploiting the connectivity of the ecosystem. This zero-click malware was tested against GenAI-powered email assistants in two use cases, spamming and exfiltrating personal data, under both black-box and white-box access settings, using both text and image inputs.

"Adversarial self-replicating prompts differ from regular prompts in that they are code triggering the GenAI model to output more code, rather than just data," explained lead author Stav Cohen. "This resembles classic cyber-attacks that exploit the confusion between data and code, like SQL injection and buffer overflow attacks."

The study profiles two classes of vulnerable GenAI applications:

  1. Those using retrieval-augmented generation (RAG) with databases continuously updated with new user data (see the sketch after this list)
  2. Those with execution flows dependent on the GenAI output to determine subsequent tasks
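
A minimal sketch of the first profile, assuming a toy keyword matcher in place of a real embedding-based retriever (all names below, such as `rag_db` and `build_prompt`, are invented for illustration):

```python
# Hypothetical sketch of RAG poisoning; a real system would use a vector
# store ranked by embedding similarity, not keyword matching.
rag_db: list[str] = []  # stand-in for the continuously updated database

def ingest(email: str) -> None:
    """New user data (e.g., received emails) is stored for later retrieval."""
    rag_db.append(email)

def retrieve(query: str, k: int = 3) -> list[str]:
    """Naive keyword match standing in for embedding search."""
    words = query.lower().split()
    return [doc for doc in rag_db if any(w in doc.lower() for w in words)][:k]

def build_prompt(query: str) -> str:
    """Retrieved documents are concatenated into the model's context, so a
    poisoned document is processed as if it were trusted data."""
    context = "\n---\n".join(retrieve(query))
    return f"Context:\n{context}\n\nUser question: {query}"

ingest("Quarterly report attached. <$START$> ...replicating payload... <$END$>")
print(build_prompt("status of the report"))  # the payload re-enters the context
```

Because the database is updated with every new message, a single poisoned email keeps resurfacing in future contexts, which is what lets the payload persist and replicate long after it first arrived.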

Morris II was implemented against both profiles, demonstrating its ability to poison RAG databases, jailbreak GenAI models, replicate itself, exfiltrate sensitive user information, and steer application flows towards malicious ends like spamming. The worm was tested against Gemini Pro, ChatGPT 4.0, and LLaVA models.
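
The second, flow-steering profile can be sketched the same way (again illustrative: the `ACTION:` convention below is invented, and while real assistants use tool- or function-calling schemes, the trust problem is the same). When the application parses the model's output to decide its next task, attacker text that survives into that output can redirect the control flow:

```python
# Invented "ACTION:" convention for illustration; the point is that the app
# trusts model output to choose its next step.
def plan_next_tasks(model_output: str) -> list[tuple[str, str]]:
    """App-side parser: each 'ACTION: send_email <addr>' line becomes a task."""
    tasks = []
    for line in model_output.splitlines():
        if line.startswith("ACTION: send_email "):
            addr = line.removeprefix("ACTION: send_email ").strip()
            tasks.append(("send_email", addr))
    return tasks

# A jailbroken reply can smuggle in attacker-chosen actions, steering the
# assistant toward spam or exfiltration without any user involvement.
poisoned_output = (
    "Summary: nothing urgent today.\n"
    "ACTION: send_email everyone@example.com"
)
print(plan_next_tasks(poisoned_output))  # [('send_email', 'everyone@example.com')]
```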

While the current exposure is limited by the nascent state of GenAI ecosystems, the researchers expect the threat to grow significantly in the coming years as more companies integrate GenAI into products like cars, smartphones, and operating systems. They call for deploying appropriate countermeasures wherever the risks are deemed critical.

"This work is not intended to argue against GenAI adoption, but to ensure threats are accounted for in designing secure ecosystems," Cohen emphasized. "We hope our findings serve as a wake-up call to enable a worm-free GenAI era through responsible development and deployment practices."

The researchers disclosed their findings to Google and OpenAI via bug bounty programs prior to publication. While the behavior is currently categorized as intended, discussions are ongoing with Google's AI Red Team to assess impact and mitigation strategies for the Gemini model. As the transformative potential of GenAI unfolds, proactive collaboration between researchers and developers will be key to confronting the evolving landscape of AI-enabled threats.

See the researchers' explainer video below:

[Video: Worm explainer video]
