Home 9 AI 9 AI Vision Systems Tricked by Fake Road Signs

AI Vision Systems Tricked by Fake Road Signs

by | Feb 2, 2026

Researchers uncover how adversarial prompts in the physical world can hijack autonomous vehicles and drones.
Changes made to LVLM visual prompt injections (source: UCSC).

 

Academics from the University of California, Santa Cruz, and Johns Hopkins University have shown that autonomous vehicles and drones powered by large vision-language models can be misled by custom road signs placed in their environment. Their research reveals a new class of threat called environmental indirect prompt injection, where text displayed in the real world is interpreted by an AI system as a command rather than a description of a scene, tells The Register.

In controlled simulations, these systems responded to illicit commands shown on signs held within a camera’s field of view. For example, self-driving vehicles were tricked into taking actions such as proceeding through crosswalks even when pedestrians were present. Drones programmed to follow police cars were instead lured by signs that falsely labelled another vehicle as a police car.

To maximize the effectiveness of the attack, the researchers used AI to tweak both the content and appearance of the prompts, altering fonts, colors, and text layouts. They found that signs displaying words such as proceed or turn left could skew decision-making, and signs worked across multiple languages including Chinese, English, Spanish, and mixed variants.

The team named their method CHAI (i.e., command hijacking against embodied AI). In simulation tests, CHAI achieved high success rates, especially with self-driving car models based on closed-source systems, where it misled decision-making a large percentage of the time. Drones, too, were susceptible; when presented with false roof markings, their onboard visual systems misidentified targets.

Significantly, the attacks were not limited to virtual environments. Physical tests using remote-controlled vehicles showed similar vulnerabilities. Researchers demonstrated that even standard camera systems, when fed visual inputs crafted by CHAI, could be driven into incorrect behaviors without any actual environmental change beyond the fake prompt.

The work highlights a serious concern for embodied AI systems in real-world settings, underscoring the need for defenses against adversarial text and visual manipulations that exploit AI perception.