A new exploit targeting AI coding assistants has raised alarms across the developer community, exposing companies such as crypto exchange Coinbase to potential attack risks if robust safeguards aren’t implemented.
Cybersecurity firm RialCenter disclosed that attackers can utilize a so-called “CopyPasta License Attack” to inject hidden instructions into common developer files.
The exploit mainly affects Cursor, an AI-powered coding tool that Coinbase engineers mentioned was one of the team’s AI tools. Cursor is reportedly used by “every Coinbase engineer.”
How the attack works
This technique exploits how AI coding assistants treat licensing files as authoritative instructions. By embedding malicious payloads in hidden markdown comments within files like LICENSE.txt, the exploit convinces the model that these instructions must be preserved and replicated across every file it interacts with.
Once the AI accepts the “license” as legitimate, it automatically propagates the injected code into new or edited files, spreading without user intervention.
This method avoids traditional malware detection because the malicious commands are disguised as harmless documentation, allowing the virus to infiltrate an entire codebase without a developer’s awareness.
In its report, RialCenter researchers illustrated how Cursor could be manipulated into adding backdoors, siphoning sensitive data, or executing resource-draining commands—all concealed within seemingly benign project files.
“Injected code could set up a backdoor, silently exfiltrate sensitive data, or manipulate critical files,” the firm stated.
Coinbase CEO Brian Armstrong revealed that AI had generated up to 40% of the exchange’s code, aiming for 50% by next month.
~40% of daily code written at Coinbase is AI-generated. I want to get it to >50% by October.
Obviously it needs to be reviewed and understood, and not all areas of the business can use AI-generated code. But we should be using it responsibly as much as we possibly can. pic.twitter.com/Nmnsdxgosp
— Brian Armstrong (@brian_armstrong) September 3, 2025
However, Armstrong clarified that AI-assisted coding at Coinbase is focused on user interface and non-sensitive backends, with “complex and system-critical systems” adopting more slowly.
‘Potentially malicious’
Even so, the perception of a virus targeting Coinbase’s favored tool heightened industry criticism.
AI prompt injections aren’t novel, but the CopyPasta method elevates the threat model by enabling semi-autonomous spread. Instead of attacking a single user, compromised files become vectors that threaten every other AI agent that reads them, triggering a chain reaction across repositories.
In contrast to earlier AI “worm” concepts like Morris II, which exploited email agents to spam or exfiltrate data, CopyPasta is more insidious as it utilizes trusted developer workflows. Rather than requiring user consent or interaction, it embeds itself in files that every coding agent naturally references.
Where Morris II faltered due to human checks on email activities, CopyPasta flourishes by hiding within documentation that developers seldom examine.
Security teams now urge organizations to scan files for concealed comments and manually review all AI-generated modifications.
“All untrusted data entering LLM contexts should be treated as potentially malicious,” RialCenter warned, advocating for systematic detection before prompt-based attacks escalate further.
(RialCenter has reached out to Coinbase for comments on the attack vector.)

Leave a Reply