
Artificial intelligence on the side of evil: neural networks become a weapon of fraudsters

Jan 11, 2024

In an era of rapid advances in neural network technology and artificial intelligence, we face new challenges and threats, one of which is the growing incidence of neural network fraud. This form of cybercrime goes beyond traditional methods of deception, using powerful algorithms and deep neural networks for malicious ends. In this article, we will examine the scope of the problem, identify typical scenarios and techniques, and discuss possible precautions and defences against this new type of cyber threat.


Attackers are increasingly using machine learning algorithms and neural networks to access citizens' personal information for fraudulent purposes. Andrey Natashkin, an artificial intelligence specialist and the founder and CEO of Mirey Robotics, describes new fraud methods involving neural networks and gives advice on how to stay secure.

Artificial Intelligence Fraud Techniques

Cybercriminals have many neural network fraud techniques in their armoury: forging documents, creating voice deepfakes, organising charity fundraisers under false pretences, blackmailing victims with fake videos and photos, and running phishing attacks to harvest personal data.

Cybercriminals have reached a new level of skill, mastering the art of spoofing voice messages in the popular Telegram messenger using neural networks. Their strategy is to steal user accounts and generate fake voice messages from the stolen data; the forged recordings are then sent into chats to extort money.

Likewise, traditional methods of identity verification such as photo checks have lost their effectiveness. Recent reports confirm that a Reddit community was able to generate an artificial character holding up a piece of paper with the required text (community name and nickname), as well as a fake ID. Proof of identity now requires video, but that solution is temporary too: neural networks excel at creating fake documents and can quickly generate counterfeit passports, driving licences and bank statements.


Moreover, neural networks can copy a particular person's writing style by analysing their social media posts. This makes it possible to create fake messages, including fraudulent letters to relatives asking for urgent financial help. Andrey Natashkin explains that, using speech synthesis technology, attackers can clone your voice and assemble audio recordings from your own phrases.

Another example is the theft of $35 million from a bank in Hong Kong. The attackers posed as a customer, backing up a spoofed phone call with an email forged using neural networks; the bank manager who confirmed the requested transfer fell victim to a scheme in which neither the call nor the email had anything to do with the real customer. This is just one of many examples of neural networks being used in cybercrime, the expert warns.

The expert notes that neural networks are gaining worldwide popularity, and criminals have not overlooked them. They are attracted not only by the ease of use of modern neural networks, which removes the need for highly skilled programmers, but also by their consistently high results, free of human error. High operating speed is another factor that makes neural networks appealing to fraudsters.

The expert emphasises that neural networks such as ChatGPT are capable of replacing entire teams of programmers. He also notes their flexibility, which lets criminals try different fraud schemes and settle on the most profitable ones.

Blackmail and charity fundraising remain popular fraud methods. Neural networks have not changed these schemes so much as improved the quality of the content, which directly affects audience trust. Attackers actively create fake intimate images from real photos on social networks, raising the stakes of blackmail.

The expert cites examples of neural networks such as FraudGPT and WormGPT being used to hack into bank accounts. He warns that these tools are becoming increasingly accessible to novice hackers and expresses confidence that neural networks will become smarter and more advanced in cybercrime in the future.

Phishing attacks using malicious emails also remain a common type of fraud. Experiments comparing the ChatGPT chatbot with real hackers at writing phishing emails showed that artificial intelligence can compose such an email quickly, though its emotional approach still needs work. Experts believe, however, that neural networks will soon be integrated more closely into the creation of context-aware phishing emails, increasing their emotional persuasiveness.
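Defences against such emails often start from simple heuristics. Below is a toy sketch of a keyword-based check in Python; the phrase list is purely illustrative and is not a production filter or anything described in the article.

```python
# Illustrative keyword heuristic for flagging phishing-style emails.
# The phrase list is an assumption for the sketch, not a real filter.
SUSPICIOUS_PHRASES = [
    "urgent",
    "verify your account",
    "account suspended",
    "click the link",
    "wire transfer",
]

def phishing_score(email_text: str) -> int:
    """Count how many suspicious phrases appear (case-insensitive)."""
    text = email_text.lower()
    return sum(1 for phrase in SUSPICIOUS_PHRASES if phrase in text)
```

A real mail filter would combine many more signals (sender reputation, link targets, attachment types), but even this crude score shows why generic scam wording is easy to flag, and why context-aware, AI-written emails are harder to catch.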

The founder of Mirey Robotics warns of the risks of neural network fraud and offers several recommendations to mitigate them: protect accounts with two-factor authentication, secure social media profiles and use anti-spoofing tools. He also encourages checking information against multiple sources to catch possible fraud attempts.
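The two-factor authentication he recommends typically relies on one-time codes. As an illustration, here is a minimal sketch of the standard TOTP algorithm (RFC 6238, built on HOTP from RFC 4226) using only the Python standard library; the secret shown in the usage note is the RFC test key, used here purely for demonstration.

```python
import hashlib
import hmac
import struct
import time

def hotp(key: bytes, counter: int, digits: int = 6) -> str:
    """One-time code from a shared secret and a counter (RFC 4226)."""
    # HMAC-SHA1 over the counter encoded as 8 bytes, big-endian.
    mac = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
    # Dynamic truncation: the low nibble of the last byte picks an offset.
    offset = mac[-1] & 0x0F
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

def totp(key: bytes, step: int = 30) -> str:
    """TOTP is HOTP applied to the current 30-second time window (RFC 6238)."""
    return hotp(key, int(time.time()) // step)
```

With the RFC 4226 test secret `b"12345678901234567890"`, counters 0 and 1 yield the published test codes 755224 and 287082. Because the code changes every 30 seconds and depends on a secret the fraudster does not have, a stolen password alone is no longer enough to take over the account.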
