A recent incident in which Mai Fujimoto fell victim to a deepfake scam has drawn attention to the risks of fraud powered by artificial intelligence.
Mai Fujimoto's Incident
On June 20, Fujimoto, known online as Miss Bitcoin, described how she was tricked by a deepfake during a video call. The Telegram account of her acquaintance had already been compromised, so she had no reason to suspect she was not speaking with the real person. "For about 10 minutes in the online meeting, I saw her face but had no clue it was a deepfake," she shared. Citing connectivity issues, the impersonator then sent a link that would supposedly fix the problem. By clicking it, Fujimoto unknowingly installed malware that compromised her Telegram and MetaMask accounts.
Rise of AI-Powered Fraud
Fujimoto's case is not isolated. A recent report by Bitget found that deepfake technology was involved in nearly 40% of all high-value crypto frauds in 2024, accounting for losses of $4.6 billion. Criminals are using AI to create convincing fake videos of public figures and to simulate customer service interactions. A separate report from Chainalysis noted that criminals are increasingly using AI tools to bypass KYC checks and automate fraud at scale.
Warnings and Recommendations
Leading security experts emphasize the need for heightened awareness of AI-driven fraud. Binance co-founder Changpeng Zhao urged users never to install software from unofficial links, even when they appear to come from trusted contacts. He also stressed the importance of verifying identity and trust through multiple independent channels.
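One practical habit that complements Zhao's advice is verifying a downloaded installer against the checksum published on the vendor's official website before running it. The sketch below shows the general idea in Python; the file path and checksum value are purely illustrative, and the expected digest should always be obtained from an official source over a separate, trusted channel.

```python
import hashlib
import hmac

def sha256_of_file(path: str) -> str:
    """Compute the SHA-256 hex digest of a file, reading it in chunks."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            digest.update(chunk)
    return digest.hexdigest()

def verify_download(path: str, expected_hex: str) -> bool:
    """Return True only if the file's SHA-256 matches the published checksum.

    hmac.compare_digest is used for a constant-time comparison.
    """
    return hmac.compare_digest(sha256_of_file(path), expected_hex.lower())

# Illustrative usage (path and checksum are hypothetical):
# if not verify_download("telegram-setup.exe", "ab12...ef"):
#     raise SystemExit("Checksum mismatch - do not run this installer.")
```

A checksum match does not prove a file is safe, but a mismatch is a clear signal that the download is not the file the vendor published.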
The case of Mai Fujimoto underscores the need for caution in the digital landscape. Deepfake technology poses a serious risk in the cryptocurrency world, and users must stay vigilant and keep their security practices up to date.