TikTok is making strides in content moderation, yet it still grapples with the persistent issue of scams and AI-generated misinformation. Recent figures show that a significant share of harmful content is viewed by users before it is removed, raising concerns about safety on the platform and suggesting that more robust measures are needed to protect users from these threats.
Scam Content Removal Rate
In the first quarter of 2025, TikTok reported that only 44.4% of scam content was removed before it received any views. This figure means that more than half of scam videos were seen by users before being taken down, highlighting a critical gap in the platform's moderation efforts.
Challenges in Detecting AI-Generated Media
The platform's ability to detect AI-generated media designed to mislead users is also under scrutiny, with a reported catch rate of just 46.6%. These numbers suggest that while TikTok's automated moderation is proficient at identifying traditional policy violations, it is still adapting to the more nuanced and rapidly changing tactics employed by scammers. Addressing these gaps will be essential to ensuring a safer environment for users as the platform evolves.