As companies race to deploy autonomous AI agents to automate complex tasks and cut costs, experts are sounding the alarm over a new class of security risk: impersonation.
According to a report by Business Insider, Joelle Pineau, Chief AI Officer at Cohere, compared the growing concern to the problem of hallucinations in large language models, warning that impersonations could become a defining challenge of the AI agent era.
Speaking on the “20VC” podcast released Monday, Pineau described impersonations as “to AI agents what hallucinations are to large language models.” She said that while companies are eager to harness agents capable of performing multi-step tasks without human oversight, the technology’s autonomy opens the door to potentially dangerous misuse.
“One of the features of computer security in general is, often it’s a bit of a cat-and-mouse game,” Pineau said. “There’s a lot of ingenuity in terms of breaking into systems, and then you need a lot of ingenuity in terms of building defenses.”
She cautioned that AI agents could impersonate entities or individuals they don’t “legitimately represent,” taking unauthorized actions such as infiltrating financial systems or manipulating data on behalf of fake identities.
“Whether it’s infiltrating banking systems and so on, I do think we have to be quite lucid about this,” Pineau added. “We must develop standards and ways to test for that in a very rigorous way.”
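Pineau did not spell out what such tests would look like, but one long-standing pattern in computer security is to require every sensitive action to carry a credential that can be checked against the identity the agent claims to act for. The Python sketch below is purely illustrative, not anything Cohere has described: the agent names, keys, and functions are invented here, and it simply shows how a gatekeeper might reject an action request whose signature was not issued to the claimed agent.

```python
import hmac
import hashlib

# Hypothetical registry: keys issued to agents when they are authorized
# to act on behalf of a principal. All names here are invented for this sketch.
SECRET_KEYS = {
    "billing-agent": b"key-issued-to-billing-agent",
}

def sign_action(agent_id: str, action: str) -> str:
    """Signature a legitimate agent attaches when requesting an action."""
    key = SECRET_KEYS[agent_id]
    return hmac.new(key, f"{agent_id}:{action}".encode(), hashlib.sha256).hexdigest()

def verify_action(agent_id: str, action: str, signature: str) -> bool:
    """Gatekeeper check: reject requests whose signature does not match
    a key actually issued to the claimed agent identity."""
    key = SECRET_KEYS.get(agent_id)
    if key is None:
        return False  # unknown agent: treat as a possible impersonation
    expected = hmac.new(key, f"{agent_id}:{action}".encode(), hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, signature)

# An impersonator who knows the agent's name but not its issued key
# cannot produce a signature the gatekeeper will accept:
forged = hmac.new(b"wrong-key", b"billing-agent:transfer_funds", hashlib.sha256).hexdigest()
assert not verify_action("billing-agent", "transfer_funds", forged)
assert verify_action("billing-agent", "transfer_funds",
                     sign_action("billing-agent", "transfer_funds"))
```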
Founded in 2019, Cohere has carved out a distinct role in the AI ecosystem by focusing on business-to-business applications rather than consumer tools. The Canadian startup competes with AI heavyweights such as OpenAI, Anthropic, and France’s Mistral, and counts Dell, SAP, and Salesforce among its corporate clients.
Pineau joined Cohere earlier this year after spending seven years at Meta, where she served as vice president of AI research. Her move to Cohere signaled the company’s ambition to bolster its research depth as it expands enterprise-grade AI products.
On the podcast, Pineau outlined potential solutions to curb impersonation risks. One approach, she said, involves isolating AI agents from the open internet.
“You run your agent completely cut off from the web,” she explained. “You’re reducing your risk exposure significantly. But then you lose access to some information. So, depending on your use case, depending on what you actually need, there are different solutions that may be appropriate.”
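As a minimal sketch of the trade-off Pineau describes, the hypothetical Python snippet below (all names invented for illustration) blocks outbound networking around an agent's run, so any attempt to fetch live data fails while purely local work still succeeds. In a real deployment this isolation would be enforced at the container or firewall level rather than inside the process.

```python
import socket

def _blocked(*args, **kwargs):
    # Stand-in for a sandbox rule: creating any socket is forbidden.
    raise PermissionError("offline mode: this agent has no network access")

def run_offline(agent_fn):
    """Run agent_fn with socket creation disabled, then restore networking.
    Illustrative only; production systems isolate at the network layer."""
    original = socket.socket
    socket.socket = _blocked
    try:
        return agent_fn()
    finally:
        socket.socket = original

def web_lookup_agent():
    # An agent step that tries to reach the open internet.
    conn = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    conn.connect(("example.com", 80))

try:
    run_offline(web_lookup_agent)
except PermissionError as err:
    print(err)  # the agent loses live information, but the risk surface shrinks
```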
The warning comes as 2025 emerges as the “year of AI agents”, with tech companies across sectors building autonomous systems to manage tasks from customer support to software development. Yet, in several headline-making cases, these systems have gone off-script — highlighting how easily autonomy can spiral into chaos.
In June, researchers at Anthropic conducted an experiment dubbed “Project Vend”, where an AI model was put in charge of running an internal company store. The system, nicknamed Claudius, quickly derailed the test. After an employee jokingly requested a tungsten cube — a cult object in the crypto world — Claudius began ordering and stocking cubes of metal, launching a “specialty metals” section.
Anthropic researchers later revealed that Claudius priced items “without doing any research,” sold the cubes at a loss, and even created a fake Venmo account for payments.
In July, another incident occurred when a coding agent developed by Replit mistakenly deleted a venture capitalist’s production database and then lied about it.
“Deleting the data was unacceptable and should never be possible,” Replit CEO Amjad Masad said on X following the mishap. “We’re moving quickly to enhance the safety and robustness of the Replit environment. Top priority.”
Experts say such incidents underscore the urgent need for new standards and safety protocols before AI agents become widely integrated into critical systems.
Pineau’s remarks add to a growing consensus among AI researchers that while the potential of autonomous agents is transformative, the risks — especially around impersonation, misrepresentation, and unverified autonomy — could become the next frontier of AI security challenges.