Meta's internal documents reveal that approximately 10% of the company's 2024 revenue, around US$16 billion, came from fraudulent and prohibited advertisements, raising concerns over Meta's effectiveness in combating scams. The social media giant is accused of profiting by charging higher rates for suspicious ads rather than blocking them outright.
Reuters cited internal files showing that Meta platforms, including Facebook, Instagram, and WhatsApp, deliver an average of 15 billion suspicious ads daily to users. These ads promote illegal gambling, fake investment schemes, and banned drug sales. Meta estimates these high-risk scam ads alone generate about US$7 billion annually.
Meta employs a detection system that blocks advertiser accounts only when it is at least 95% certain of fraud. Below that threshold, the system instead charges the advertiser higher fees, a penalty meant to deter scam buyers that in practice increases Meta's revenue. Coupled with Meta's personalized ad algorithms, a user who clicks on one scam ad may then be targeted with even more scam content.
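The two-tier policy described in the reporting can be illustrated with a minimal sketch. Only the 95% blocking threshold comes from the documents; the 0.5 "suspicious" cutoff, the 1.5x surcharge multiplier, and the function names are hypothetical placeholders, not Meta's actual values or code.

```python
BLOCK_THRESHOLD = 0.95  # reported bar: block only at >= 95% fraud certainty


def handle_ad(fraud_confidence: float, base_rate: float) -> dict:
    """Decide what to do with an ad given a model's fraud confidence score.

    Illustrative only: below the blocking threshold, suspicious ads are
    still served but at a surcharged rate, mirroring the reported policy.
    """
    if fraud_confidence >= BLOCK_THRESHOLD:
        return {"action": "block", "rate": 0.0}
    if fraud_confidence >= 0.5:  # hypothetical "suspicious" band
        # Deterrent surcharge; the 1.5x multiplier is an assumption.
        return {"action": "serve", "rate": base_rate * 1.5}
    return {"action": "serve", "rate": base_rate}


print(handle_ad(0.96, 10.0))  # blocked outright
print(handle_ad(0.80, 10.0))  # served, but at a higher rate
```

The sketch makes the reported incentive problem concrete: every ad in the middle band is both flagged as likely fraud and billed at a premium, so the deterrent doubles as a revenue stream.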
Some documents indicate Meta intends to reduce scam ads, yet records also show executives worry that stricter reviews and rapid removal could hurt revenue forecasts. Internal policies limit anti-scam actions to causing no more than a 0.15% revenue loss, with review managers instructing teams to proceed cautiously.
A Meta spokesperson denied tolerating scams, stating that 134 million scam ads have been removed over the past 18 months, resulting in a 58% drop in user complaints.
In July, Meta held an anti-scam forum in Taiwan where Simon Milner, vice president of public policy for APAC, said that since March 2024, Meta has taken down over 4.3 million scam ads targeting Taiwanese users.
Milner outlined three key anti-scam strategies: strengthening platform defenses, dismantling scam networks, and partnering with organizations like MyGoPen and the Taiwan Digital Security Development Association to develop AI assistants based on the Llama model to identify scams.
However, AI can both combat and facilitate scams. OpenAI CEO Sam Altman warned at a Federal Reserve conference in July that advancing AI synthesis technology enables realistic voice and video impersonations, posing significant risks to financial sector voice authentication.
Article edited by Jack Wu