NetworkTigers reviews AI cybercrimes and how they are evolving.
Advancements in artificial intelligence are accelerating rapidly. Viral platforms that generate images from simple text prompts and large language models (LLMs) that can do everything from answering questions to writing code are becoming more capable by the day. Major tech companies such as Google and Microsoft are pouring billions of dollars into their proprietary AI programs or scooping up those developed by others, signaling that we are likely on the verge of a technological paradigm shift.
Cybercriminals, ever agile and resourceful, have also integrated AI into their operations. They use this cutting-edge technology to attack more efficiently, leaving security proponents scrambling to keep up.
The days of the down-on-his-luck Nigerian prince in need of investment assistance are in the rearview; today's criminals launch campaigns capable of misleading even the most seasoned internet user. The following social engineering techniques, despite their age, remain effective and prevalent thanks to the adoption of AI by threat actors adept at freshening up old tricks to target new victims.
AI-enhanced phishing attacks
Anyone who has opened an email account has been the target of a phishing attack, hopefully an unsuccessful one. These phony messages purport to be from a trusted bank, financial institution, retailer, or doctor and have historically been full of typos, poor grammar, and sloppy formatting. While such messages fooled some recipients, savvy internet users could usually determine quickly whether an incoming message was legitimate.
Thanks to LLMs, however, scammers can craft messages in their victims’ native languages that don’t reveal their true intentions so easily. Today’s phishing campaigns use professional writing styles and characteristics targeted specifically to their intended victims, meaning it can now take some serious detective work to separate a fake email from a real one.
AI platforms can also incorporate current events and newsworthy information into their language models, allowing cybercriminals to build highly convincing texts and emails that contain headlines or news items pertinent to their marks without spending the time doing their own research.
Surgical spear phishing attacks
Spear phishing attacks, designed to target a specific person, have also become easier to craft and more effective due to the implementation of AI.
Traditionally, a spear phishing campaign would require a criminal to manually create messaging using information lifted from the victim’s social media sites or pulled from data breaches. This is a time-consuming process.
With AI, criminals can automate much of this work by using the technology to gather the data required to create the ruse and then using it once more to generate the actual messaging.
For example, an LLM can analyze someone’s social media posts to understand how the person speaks, uses punctuation, and formats their communications. It can then create text that mimics their tone.
Armed with an AI chatbot that can correspond in the same way as a high-ranking official at a company, criminals can then send messages to lower-level employees asking for everything from private login credentials to gift card purchases.
With AI’s ability to chat in the same way as a trusted boss, coworker, or colleague, many victims don’t know they’ve been fooled until long after they have fulfilled the criminal’s request.
AI-enhanced vishing scams
Vishing, a term for phishing attacks carried out through voice messages and phone calls, has been greatly enhanced by AI.
In the same way that LLMs can analyze and mimic text, an AI can analyze recordings of someone speaking and craft audio of them making statements they never have.
Because of the amount of publicly available content featuring them speaking and emoting, celebrities and politicians are prime targets for this kind of scam.
On the day of the 2024 New Hampshire presidential primary, a robocall featuring a fake Joe Biden was sent to voters in which the "President" urged them not to turn out at the polls. The mastermind behind the calls insists that he staged the stunt to raise awareness of the danger AI poses to elections and to highlight how unprepared the US government is to deal with such deception. Whether his claimed intent is true or not, his fake Joe Biden call surely proved his alleged point.
Deepfake video
AI can also be used to create ever more convincing fake video content. However, the era of the video deepfake has yet to fully arrive, as barriers to entry still keep it from becoming fully mainstream. Once more, those in the public eye are the easiest to fake due to the amount of footage available for analysis.
Security experts warn that LLMs, vishing, and deepfake video content will eventually be used in tandem to generate fraudulent messaging that challenges the general public's ability to tell genuine media from malicious fakes.
“Absolutely, it will improve and will be used in attacks,” warns Gerald Auger, consultant and adjunct professor at The Citadel, in an interview with SecurityWeek. “Deepfake video of a CFO coupled with the deepfake audio capabilities (especially if there is enough corpus of audio sampling — think of the CFO for a Fortune 50 company that has done public speaking) will be enough to generate compelling content to trick financial analysts into moving money to threat actor controlled accounts.”
Fighting fire with fire
As AI cybercrimes become more nuanced and the barrier to entry continues to lower, security firms and IT administrators are leveraging the technology to level the playing field with their adversaries.
AI-enhanced network defenses can analyze system behaviors at a level more granular than previously possible. Armed with these insights, AI can determine if a minute instance of abnormal activity indicates an emerging threat and act accordingly.
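As a rough illustration of the behavioral analysis described above, the sketch below trains an unsupervised anomaly detector on baseline activity and flags deviations from it. This is a minimal example in Python using scikit-learn's IsolationForest; the features (transfer volume, login hour, failed logins) and the synthetic data are hypothetical stand-ins for real telemetry, not a production detection pipeline.

import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)

# Simulated baseline of "normal" events (hypothetical features):
# [data transferred (MB), hour of login, failed login attempts]
normal = np.column_stack([
    rng.normal(50, 10, 1000),   # typical transfer volumes
    rng.normal(13, 2, 1000),    # logins clustered around business hours
    rng.poisson(0.2, 1000),     # the occasional mistyped password
])

# Train on baseline behavior; contamination is the assumed anomaly rate
detector = IsolationForest(contamination=0.01, random_state=42)
detector.fit(normal)

# A new event: a large transfer at 3 a.m. after repeated failed logins
event = np.array([[900.0, 3.0, 8.0]])
print(detector.predict(event))            # -1 means flagged as anomalous
print(detector.decision_function(event))  # more negative = more suspicious

Commercial tools operate on far richer telemetry than this, but the principle is the same: model what normal looks like, then treat statistically isolated behavior as a signal worth investigating.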
This advanced degree of automation has the potential to let systems adapt almost immediately to an intrusion that might otherwise have gone unnoticed and been allowed to take root. The flip side is the inevitable race between security teams and threat actors jostling for superiority via dueling AIs.
While the science fiction scenarios of autonomous military robots and world-ending software programs that have become entertainment mainstays aren’t on the radar just yet, the exponential increase in the capabilities and adoption of AI is fully expected to continue intensifying the battles fought in cyberspace.