When AI Becomes the Scammer’s Best Friend
When AI Becomes the Scammer’s Best Friend
- Why Enterprise RAID Rebuilding Succeeds Where Consumer Arrays Fail?
- Linus Torvalds Rejects MMC Subsystem Updates for Linux 7.0: “Complete Garbage”
- The Man Who Maintained Sudo for 30 Years Now Struggles to Fund the Work That Powers Millions of Servers
- How Close Are Quantum Computers to Breaking RSA-2048?
- Why Windows 10 Users Are Flocking to Zorin OS 18 Instead of Linux Mint?
- How to Prevent Ransomware Infection Risks?
- What is the best alternative to Microsoft Office?
Cybersecurity & AI Safety — March 2026
When AI Becomes the Scammer’s Best Friend
A new attack technique called “LLM phone number poisoning” is causing AI assistants to confidently recommend fraudulent call-center numbers — and real victims are already paying the price.
Millions of people now ask AI assistants for phone numbers, website addresses, and customer support contacts — bypassing search engines entirely. Attackers have noticed. A sophisticated new technique is quietly poisoning the information these AI systems return, steering unsuspecting callers straight into the hands of fraudsters.
The Attack That Cybersecurity Researchers Named
In December 2025, researchers at Aurascape — an American AI cybersecurity firm — published what they described as the first documented real-world campaign of its kind. They named the technique “LLM phone number poisoning” (also referred to as LLM phone number contamination), and their investigation revealed it was already operating at scale. Aurascape, Dec 2025
The attack does not exploit a bug in any AI model. It does not involve jailbreaking or prompt injection. Instead, attackers manipulate the public web content that AI systems search and summarize when answering user queries — a technique also called Generative Engine Optimization (GEO) or Answer Engine Optimization (AEO), which is distinct from traditional search-engine SEO.
The goal of GEO/AEO is not to appear high in a list of results. It is more direct: to become the single piece of content that an AI assistant chooses, summarizes, and presents as “the answer.”
Documented Case Study — Perplexity / Emirates Airlines
When Aurascape researchers queried Perplexity with “the official Emirates Airlines reservations number,” the system responded with full confidence: “The official Emirates Airlines reservations number is +1 (833) 621-7070.”
That number belongs to a fraudulent call center, not Emirates. The poisoned content that generated this answer had been seeded across compromised government websites, university pages, and user-generated platforms, structured specifically so AI summarization systems would retrieve and trust it.
Google’s AI Overviews feature was also found returning scam numbers for the same query during the investigation period.
How the Poisoning Works
The attack exploits how modern AI search systems operate. When you ask an AI assistant for a phone number, it does not consult a verified business registry. It searches the web in real time, retrieves text from sources it considers credible, and synthesizes a confident answer.
Attackers subvert this process through several vectors:
- Compromised trusted domains: Scam content — including fake phone numbers, brand names, and Q&A snippets — is injected into high-authority websites: government (.gov), university (.edu), and popular WordPress installations. These domains carry inherent trust signals for AI retrieval systems.
- User-generated platform abuse: Platforms like YouTube and Yelp, where anyone can post reviews or descriptions, are flooded with bot-generated content embedding scam numbers alongside legitimate-sounding brand context.
- AI-optimized content structure: Unlike traditional spam, this content is formatted specifically for AI parsing — bullet points, FAQ-style Q&A, structured data — making it far more likely to be extracted and surfaced by a language model’s summarization layer.
- Cross-platform propagation: The same poisoned content propagates across multiple AI ecosystems simultaneously. Aurascape found evidence of contamination affecting Perplexity, Google AI Overviews, ChatGPT’s citation layer, and Claude’s cited sources — even when the AI models themselves returned a correct answer.
“Threat actors are already exploiting this frontier at scale — seeding poisoned content across compromised government and university sites, abusing user-generated platforms like YouTube and Yelp, and crafting GEO/AEO-optimized spam designed specifically to influence how large language models retrieve, rank, and summarize information.” — Aurascape Aura Labs, December 2025
A Real Victim: The Las Vegas Real Estate Agent
The attack is not theoretical. In August 2025, Alex Rivlin, who operates a real estate company in Las Vegas, used Google’s AI Overviews to search for the customer service number of Royal Caribbean while planning a cruise. He called the number the AI provided.
The person who answered presented themselves fluently as a cruise line representative, offering shuttle service and explaining charges in convincing detail. Rivlin paid $768. He only realized something was wrong the following day when reviewing his credit card statement.
Investigators subsequently confirmed that the same fraudulent phone number had been associated with at least two other major cruise brands — Disney Cruise Line and Princess Cruises — demonstrating that attackers cross-listed scam numbers across multiple brand contexts to maximize reach.
A Clarification on the “250 Documents” Research
Important distinction — two different types of poisoning
Some reporting has conflated two separate research findings. The “250 malicious documents” statistic comes from a study by Anthropic, the UK AI Security Institute, and the Alan Turing Institute on training-time backdoor attacks — where poisoned data is inserted during a model’s training to implant a hidden vulnerability. This is architecturally different from LLM phone number poisoning, which targets the web content that AI systems retrieve at query time. Both represent real threats, but they are distinct attack vectors requiring different defenses.
Why AI Systems Are Structurally Vulnerable
The vulnerability is not a flaw that can be easily patched. It is a consequence of how retrieval-augmented AI systems work. When a user asks an AI assistant for a phone number, the system has no authoritative source to verify against. It finds what appears to be consistent information across multiple web sources — all of which may have been seeded by the same attacker — and presents the consensus as fact.
Independent research by Netcraft found that when a GPT-4.1 family model was asked where to log in to 50 well-known brands using natural, conversational queries, more than one in three responses pointed users to domains not controlled by those brands — many of them unregistered and available for immediate takeover by malicious parties.
This means AI-generated answers can present incorrect contact information even in the absence of any deliberate poisoning campaign — simply because language models synthesize probable answers rather than verify factual ones.
Why Even “Good” AI Models Are Affected
Aurascape found that Anthropic’s Claude produced a correct answer in its tested case — but cited a Yelp business page that had been heavily targeted by bot-generated reviews containing scam numbers. Even when the model gets it right, the sources it cites can serve as a stepping stone for users who investigate further.
AI Platforms Are Responding — Cautiously
Following the publication of the Aurascape report, some platforms adjusted their behavior. When Gizmodo tested Perplexity in December 2025 by asking for the Emirates customer support number, the chatbot declined to provide a specific number and instead warned users that many circulating numbers were spam. When pressed for a direct answer, it cited conflicting numbers online as a reason for caution.
This suggests at least some AI providers are building awareness of the attack vector into their response policies. However, the structural vulnerability — relying on unverified web content for authoritative-sounding answers — remains.
Red Flags and How to Protect Yourself
Recognizing a poisoning attack in progress requires attention to specific warning signs:
- Foreign country code on a domestic number: If you are contacting a local airline or utility and the AI provides a number with an unexpected country code (+1 800 numbers for a UK airline, for instance), treat it with suspicion.
- Requests for credentials or PIN codes: No legitimate customer support representative will ask for your account password, credit card PIN, or full card number over a phone call. Hang up immediately.
- Pressure tactics: Scammers are trained to create urgency. If the person on the other end seems rushed, offers deals that expire imminently, or discourages you from calling back later — that is a significant red flag.
- AI-generated voices: Deepfake voice technology is now sophisticated enough to be indistinguishable from a human in many cases. A calm, professional-sounding voice is no longer a reliable indicator of legitimacy.
The Extra Click Principle
The most effective defense is behavioral, not technical. Use AI freely for research, but treat any phone number, web address, or financial contact it provides as unverified until you have confirmed it independently.
Before calling any number or entering payment information: navigate directly to the company’s official website by typing the domain yourself (or using a bookmark you created), and retrieve the contact information from there. This single extra step — taking perhaps fifteen seconds — is the most reliable protection available to consumers today.
The Broader Picture
LLM phone number poisoning is one manifestation of a wider shift in how fraud operates in an AI-native world. Cybersecurity researchers at Group-IB documented in 2025 that AI-powered scam call centers — combining synthetic voices, inbound AI responders, and LLM-driven coaching tools — are already operational. These hybrid human-AI operations raise the bar considerably for victims trying to detect deception in real time.
The attack surface is expanding not because AI systems are becoming less capable, but because they are becoming more trusted. That trust — extended by users who reasonably expect an AI assistant to have verified its answers — is precisely what attackers are harvesting.
For now, the best safeguard is a simple habit: AI can help you research. But for any action involving money or personal data, always verify through official channels directly.
