Meta projected late last year that around 10% of its total annual revenue, roughly $16 billion, would come from advertisements linked to scams and banned goods, according to internal company documents obtained by Reuters.
The cache of internal records, some dating back to 2021, shows that for at least three years, Meta failed to contain an avalanche of fraudulent promotions across Facebook, Instagram, and WhatsApp. The deceptive ads included fraudulent e-commerce and investment schemes, illegal online casinos, and sales of prohibited medical products — all targeted at billions of users across its platforms.
One December 2024 document revealed that Meta’s platforms displayed an estimated 15 billion “higher risk” scam advertisements every day. These are ads flagged internally as likely fraudulent. Another document from the same period estimated that such ads alone brought in about $7 billion in annualized revenue.
Even when its automated systems detect suspicious behavior, Meta bans an advertiser only when those systems are at least 95% certain it is committing fraud. If the probability falls below that threshold, the company instead imposes higher ad rates, effectively charging suspected scammers more while allowing them to continue advertising. The internal justification for this approach, according to the documents, was to “dissuade” bad actors while minimizing losses from overzealous enforcement.
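In practice, that policy amounts to a confidence threshold with penalty pricing below it. The sketch that follows is purely illustrative: the documents describe only the 95% cutoff and the practice of charging suspected scammers more, so the names (fraud_probability, penalty_multiplier) and the specific values below the cutoff are hypothetical.

```python
# Purely illustrative sketch of the threshold policy described in the documents.
# Only the 95% ban cutoff and the "charge suspected scammers higher rates"
# behavior come from the reporting; every name and number below is hypothetical.

BAN_THRESHOLD = 0.95        # ban only when the system is at least 95% certain of fraud
SUSPICION_THRESHOLD = 0.50  # hypothetical lower bound for "suspected but unproven"

def enforcement_action(fraud_probability: float,
                       base_ad_rate: float,
                       penalty_multiplier: float = 1.5) -> dict:
    """Decide what a threshold-plus-penalty policy would do with one advertiser."""
    if fraud_probability >= BAN_THRESHOLD:
        return {"action": "ban", "ad_rate": None}
    if fraud_probability >= SUSPICION_THRESHOLD:
        # Below the ban threshold: keep the advertiser but charge more.
        return {"action": "penalty_pricing",
                "ad_rate": base_ad_rate * penalty_multiplier}
    return {"action": "allow", "ad_rate": base_ad_rate}

# An advertiser flagged at 80% certainty keeps advertising, just at a higher rate.
print(enforcement_action(0.80, base_ad_rate=1.00))
```

The trade-off the documents describe is visible in the middle branch: a suspected but unproven scammer is never blocked outright, only made less profitable.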
Meta’s ad-personalization algorithm, which tailors content to user behavior, exacerbates the problem. Users who click on a scam ad are shown more of them, as the algorithm assumes interest.
The internal documents, drawn from Meta’s finance, lobbying, engineering, and safety divisions, reveal a company attempting to measure and contain abuse while hesitating to implement changes that might harm its multibillion-dollar ad business.
Sandeep Abraham, a former Meta safety investigator and fraud examiner, said the revelations highlight the vacuum of oversight in digital advertising.
“If regulators wouldn’t tolerate banks profiting from fraud, they shouldn’t tolerate it in tech,” he said.
Meta spokesperson Andy Stone disputed the interpretation of the leaked files, arguing they offered “a selective view that distorts Meta’s approach to fraud and scams.” He said the internal projection that 10.1% of 2024 revenue came from prohibited ads was “rough and overly-inclusive,” and that later analysis found “many of these ads weren’t violating at all.”
“The assessment was done to validate our planned integrity investments — including in combatting frauds and scams — which we did,” Stone said. “We aggressively fight fraud and scams because people on our platforms don’t want this content, legitimate advertisers don’t want it, and we don’t want it either.”
Stone added that Meta reduced user reports of scam ads globally by 58% over the past 18 months and removed more than 134 million pieces of scam-related content so far in 2025.
However, internal presentations paint a more conflicted picture. A May 2025 safety division report estimated that Meta’s platforms were involved in roughly one-third of all successful scams in the United States. Another internal review concluded, “It is easier to advertise scams on Meta platforms than Google,” though it did not specify why.
The revelations come as regulators intensify scrutiny of Meta’s ad operations. The U.S. Securities and Exchange Commission (SEC) is investigating the company for hosting financial scam ads, while Britain’s Financial Conduct Authority reported last year that Meta’s platforms accounted for 54% of all payment-related scam losses in 2023 — more than double the combined total for other social networks.
Meta’s recent SEC filings acknowledge that efforts to address illicit ads “adversely affect our revenue,” and that further compliance measures could materially impact future earnings.
The internal documents also reveal the company’s efforts to balance reputation risks against revenue losses. A February 2025 report said Meta’s teams were restricted from taking actions that would cut more than 0.15% of the company’s total revenue — about $135 million out of the $90 billion generated in the first half of 2025 — when tackling fraudulent advertisers.
Meta’s ongoing challenges coincide with its broader transformation into an AI-driven enterprise. The company plans to spend as much as $72 billion this year, largely on artificial intelligence infrastructure. CEO Mark Zuckerberg has reassured investors that Meta’s ad business can fund that spending.
“We have the capital from our business to do this,” he said in July, citing plans to build a data center in Ohio as large as New York’s Central Park.
Internally, Meta has outlined targets to reduce the share of revenue from scams and illegal ads from 10.1% in 2024 to 7.3% by the end of 2025, and to 5.8% by 2027. However, the documents suggest these targets are driven by “regulatory urgency” rather than by ethical considerations.
Even with heightened internal goals, the scale of Meta’s exposure to scams remains staggering. A December 2024 presentation estimated that users encounter 22 billion “organic” scam attempts (those not involving paid ads) each day, in addition to the 15 billion scam ads shown daily. Examples include fake job listings, fraudulent Marketplace offers, and phony crypto promotions impersonating public figures.
While Meta insists it is investing heavily to improve detection, its internal strategy continues to weigh fraud prevention against profitability — a calculation that, according to its own documents, still favors the latter.