
How Cybercriminals Misuse and Abuse AI & ML: Trend Micro Report

A new report jointly developed by Europol, the United Nations Interregional Crime and Justice Research Institute (UNICRI), and Trend Micro, examining current and predicted criminal uses of artificial intelligence (AI), was released recently.

The research paper, “Malicious Uses and Abuses of Artificial Intelligence,” provides law enforcement agencies, policymakers, and other organizations with information on existing and potential attacks that leverage AI, along with recommendations on how to mitigate these risks.

The use of both AI and ML in business is widespread: 37% of businesses and organizations had already integrated AI in some form within their systems and processes by 2020. With tools powered by these technologies, enterprises can better predict customers’ buying behaviors, contributing to increased revenue.

How Cybercriminals Are Abusing AI and ML

The features that make AI and ML systems integral to businesses, such as providing automated predictions by analyzing large volumes of data and discovering the patterns that emerge, are the very same features that cybercriminals misuse and abuse for ill gain.

Deepfakes:

Deepfakes have great potential to be used for nefarious purposes, distorting reality for large numbers of people.

A combination of “deep learning” and “fake media,” deepfakes are perfectly suited for use in future disinformation campaigns because they are difficult to immediately differentiate from legitimate content, even with the use of technological solutions. Because of the wide use of the internet and social media, deepfakes can reach millions of individuals in different parts of the world at unprecedented speeds.

AI-Supported Password Guessing:

Cybercriminals are employing ML to improve algorithms for guessing users’ passwords. With neural networks and generative adversarial networks (GANs), they can analyze vast password datasets and generate password variations that fit the statistical distribution of real-world passwords. In the future, this will lead to more accurate and targeted password guesses and higher chances of profit.
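
To make the “fit the statistical distribution” idea concrete, here is a minimal sketch of the same principle at toy scale: a character-level Markov chain learns which characters tend to follow which in a password list, then samples new candidates that resemble the training data. This is not code from the report; research tools such as PassGAN apply GANs to the same end at far larger scale, and the corpus below is a tiny illustrative stand-in rather than real leaked data.

import random
from collections import Counter, defaultdict

# Tiny illustrative corpus standing in for a large leaked-password dataset.
CORPUS = ["password1", "passw0rd!", "sunshine99", "dragon2020", "qwerty123"]

START, END = "^", "$"  # sentinel markers for string boundaries

# Count character bigrams: how often each character follows another.
transitions = defaultdict(Counter)
for pw in CORPUS:
    chars = [START] + list(pw) + [END]
    for cur, nxt in zip(chars, chars[1:]):
        transitions[cur][nxt] += 1

def sample_candidate(max_len=16):
    """Sample one password-like string from the learned distribution."""
    out, cur = [], START
    while len(out) < max_len:
        choices, weights = zip(*transitions[cur].items())
        cur = random.choices(choices, weights=weights)[0]
        if cur == END:
            break
        out.append(cur)
    return "".join(out)

random.seed(1)
for _ in range(5):
    print(sample_candidate())  # prints blends of the corpus, e.g. passw0rd99-style strings

Notably, the same kind of modeling also powers defensive password-strength estimators, which score a password by how probable it looks under such a distribution.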

Human Impersonation on Social Networking Platforms:

Cybercriminals are also abusing AI to imitate human behavior. For example, they have successfully duped bot detection systems on streaming and social platforms such as Spotify by mimicking human-like usage patterns. Through this AI-supported impersonation, cybercriminals can monetize the malicious system by generating fraudulent streams and traffic for a specific artist.

An AI-supported Spotify bot on a forum called nulled[.]to claims to have the capability to mimic several Spotify users simultaneously. To avoid detection, it makes use of multiple proxies. This bot increases streaming counts (and subsequently, monetization) for specific songs. To further evade detection, it also creates playlists with other songs that follow human-like musical tastes rather than playlists with random songs, as the latter might hint at bot-like behavior.
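
The last point, that playlists of random songs can hint at bot-like behavior, also suggests how the defensive side might respond. The toy sketch below is purely illustrative and not from the report: it flags playlists whose genre mix is unusually scattered by measuring the Shannon entropy of their genre labels, with hypothetical sample data and a hypothetical threshold.

import math
from collections import Counter

def genre_entropy(genres):
    """Shannon entropy (in bits) of a playlist's genre labels."""
    total = len(genres)
    return -sum((c / total) * math.log2(c / total) for c in Counter(genres).values())

# A human-curated playlist tends to cluster around a few related tastes...
human_like = ["indie", "indie", "folk", "indie", "folk", "dream-pop"]
# ...while randomly assembled filler is spread thinly across many genres.
random_fill = ["metal", "salsa", "trap", "ambient", "k-pop", "polka"]

FLAG_THRESHOLD = 2.0  # hypothetical cutoff; a real system would tune this on data

for name, playlist in [("human_like", human_like), ("random_fill", random_fill)]:
    h = genre_entropy(playlist)
    print(name, round(h, 2), "flagged" if h > FLAG_THRESHOLD else "ok")

Real detection systems combine many such signals (timing, device, and network patterns among them), which is exactly why the bot described above goes to the trouble of curating human-like playlists.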

AI-Supported Hacking:

Cybercriminals are also weaponizing AI frameworks to hack vulnerable hosts. For instance, the researchers saw a Torum user who expressed interest in DeepExploit, an ML-enabled penetration testing tool. The same user also wanted to know how to make DeepExploit interface with Metasploit, a penetration testing framework used for information gathering and for crafting and testing exploits.
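
For context on what interfacing with Metasploit involves: DeepExploit drives Metasploit through the framework’s RPC service (msfrpcd) rather than through the interactive console. The sketch below shows that integration point using the third-party pymetasploit3 client; the address, port, and password are placeholders for a lab-only setup, and such automation is only appropriate for authorized testing of systems you control.

# Prerequisite on the Metasploit host (lab only): msfrpcd -P s3cr3t -p 55553
# Client side: pip install pymetasploit3
from pymetasploit3.msfrpc import MsfRpcClient

# Connect to the RPC service; credentials and address are placeholders.
client = MsfRpcClient("s3cr3t", server="127.0.0.1", port=55553, ssl=True)

# Enumerate a few of the exploit modules Metasploit exposes over RPC;
# a tool like DeepExploit selects and launches these programmatically.
for module_name in client.modules.exploits[:5]:
    print(module_name)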

(Image Courtesy: www.images.idgesg.net) 
