December 24, 2024

Generative AI and cybersecurity: a double-edged sword

NetworkTigers discusses the pros and cons of generative AI in cybersecurity.

It’s impossible to keep up with the tech world without seeing AI enhancements advertised on everything from TVs to kitchen appliances. While many feel that most of these features amount to advertising hype, generative AI is alive and well in cybersecurity and computing, transforming how many people work, collaborate, code, and problem-solve on a daily basis.

As with most major technological advancements, generative AI unlocks double-edged possibilities. AI can offer IT administrators access to tools and automation that help them anticipate cyberattacks and respond to them in record time. However, it also provides cybercriminals and hackers with the same advancements, which they can weaponize to create new threats, ranging from novel malware to more sophisticated phishing scams.

Here are just a few of the ways that the advent of generative AI has impacted cybersecurity for better and for worse:

How generative AI bolsters cybersecurity

Enhanced threat detection

A properly tuned generative AI model can analyze tremendous amounts of data at an unprecedented rate. This allows administrators to breathe easily, knowing that disruptions, abnormal traffic patterns, or oddities can be quickly identified, analyzed, and addressed.
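To make the idea concrete, here is a minimal sketch of the statistical baseline such monitoring builds on: flagging traffic samples that deviate sharply from the norm. The traffic numbers are invented for the example, and production AI models learn far richer patterns than a simple z-score.

```python
from statistics import mean, stdev

def flag_anomalies(samples, threshold=2.5):
    """Return values whose z-score (distance from the mean,
    in standard deviations) exceeds the threshold."""
    mu = mean(samples)
    sigma = stdev(samples)
    if sigma == 0:
        return []  # perfectly flat traffic: nothing to flag
    return [x for x in samples if abs(x - mu) / sigma > threshold]

# Hypothetical requests-per-minute counts; the spike stands out.
traffic = [120, 115, 130, 125, 118, 122, 950, 121, 119]
print(flag_anomalies(traffic))  # → [950]
```

The value of AI-driven detection is doing this kind of outlier analysis across millions of signals at once, and on patterns far subtler than a single traffic spike.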

Generative AI models can also make IT teams more agile, helping them fix vulnerabilities before the bad guys get their claws in. By using real-time data to identify bugs, flaws, zero-days, and other issues that require immediate attention, teams can update or patch systems the moment a flaw is discovered.

AI-enhanced automation allows these processes to occur with minimal, if any, human intervention. Teams can devote their time to tasks that require human communication and intuition while a large amount of security scanning, responding, and filtering occurs in the background.

Superior training simulations

Generative AI models are good at responding to input in unpredictable or “creative” ways. This makes them ideal for building training simulations that push IT teams out of their comfort zones. Because these models can generate flexible, amorphous attacks that respond immediately to mitigation efforts, security teams can experience the challenges of an actual threat without suffering any real damage.

These robust training sessions can better prepare teams and the systems they maintain for real threats, allowing them to improve their response skills and assess the fitness of the tools and products they use to stay secure. 

Improved phishing and scam detection

AI models can scan message content to identify patterns or discrepancies indicating an attempted phishing attack.

As phishing scammers have become better at their jobs, it has become more challenging to tell a real email from a fake one. With the implementation of AI scanning, details that may escape the naked eye can be flagged as suspicious.
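A rough illustration of pattern-based message scoring, in Python: the phrases, weights, and URL check below are invented for the example, while real AI scanners learn thousands of such signals from large datasets rather than relying on a hand-written list.

```python
import re

# Hand-picked heuristic signals (illustrative only).
SUSPICIOUS_PHRASES = [
    "verify your account",
    "urgent action required",
    "password expired",
]

def phishing_score(message: str) -> int:
    """Add up simple suspicion signals found in a message."""
    score = 0
    lowered = message.lower()
    for phrase in SUSPICIOUS_PHRASES:
        if phrase in lowered:
            score += 2
    # Links pointing at a raw IP address instead of a domain name
    if re.search(r"http://\d{1,3}(\.\d{1,3}){3}", lowered):
        score += 3
    return score

email = "Urgent action required: verify your account at http://192.168.4.7/login"
print(phishing_score(email))  # → 7
```

An AI model effectively replaces the hand-written list with learned features, which is why it can catch discrepancies that escape both simple filters and the naked eye.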

Web browsing can also be hazardous. Browser developers regularly unveil new security features that use AI to assess a website’s legitimacy. Determining at a glance whether an e-commerce site is real or a spoof could prevent thousands of scam transactions.

Fewer software bugs upon release

Zero-day flaws are vulnerabilities that developers missed during a product’s development, leaving them open for criminals to exploit. When exploited, these bugs force software companies to scramble to deliver a patch or remediation to close the gap and secure their customers’ systems.

AI models allow developers to scan code and test various penetration scenarios at a speed that was impossible just a few years ago. By identifying and fixing exploitable flaws during development, software vendors can release products with greater confidence in their security. Fewer zero-day bugs mean fewer compromises.
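At its simplest, automated code scanning looks something like the sketch below. The patterns and descriptions are illustrative placeholders, and AI-assisted scanners go far beyond regex matching, but the input and output are the same: source code in, a list of flagged locations out.

```python
import re

# Toy rules flagging patterns commonly linked to injection flaws.
RISKY_PATTERNS = {
    r"\beval\(": "use of eval() on dynamic input",
    r"\bos\.system\(": "shell command built from strings",
    r"SELECT .* \+ ": "SQL query assembled by concatenation",
}

def scan_source(source: str):
    """Return (line number, description) pairs for risky lines."""
    findings = []
    for lineno, line in enumerate(source.splitlines(), start=1):
        for pattern, description in RISKY_PATTERNS.items():
            if re.search(pattern, line):
                findings.append((lineno, description))
    return findings

sample = (
    'query = "SELECT * FROM users WHERE id = " + user_id\n'
    "result = eval(user_input)"
)
print(scan_source(sample))
```

Where rule-based tools stop at known patterns, AI models can reason about unfamiliar code paths and simulate attacker behavior, which is what makes pre-release testing at scale possible.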

How generative AI helps cybercriminals

Advanced phishing attacks

The days of obviously fraudulent phishing emails and messages are receding further into the rearview. Generative AI models allow criminals to create text in different languages largely free of misspellings or grammatical errors that would have revealed a fake in previous years. 

AI can also create malicious websites that are nearly impossible to tell from legitimate ones, making it easy to trick even savvy web users into submitting their personal data or payment information to sites maintained by criminals. 

Deepfakes

While the threat of deepfakes has seemingly been on the horizon for years, generative AI is finally reaching the point where its ability to mimic people’s voices and faces is officially dangerous. 

A high-profile incident in February of 2024 saw a finance worker at a multinational firm hand over $25 million to scammers after engaging with a deepfake version of the company’s chief financial officer over a video call. The victim was confident that he was talking to the actual individual, helped by the fact that there were other people, some of whom he recognized, in the meeting as well.

It turned out, however, that no one on the call was real. 

Deepfake technology has disastrous potential in criminal hands. It has been used to create everything from fake celebrity endorsements for crypto scams to content featuring political leaders that can undermine civilian trust in their government.

AI-assisted malware creation

Since the advent of computing, criminals and hackers have been creating and deploying malware and viruses. In the past, however, such activities required expertise, and malware took time to develop, test, and refine.

With the power of generative AI, today’s cybercriminals can develop malware variants quickly and in large numbers. These variants can be honed to achieve the same goals but through different avenues, meaning that a developer can create malware capable of attacking a target at its weakest point.

Criminals can test these variants by running the malware against a defense system and watching how the battle unfolds. Generative AI can then analyze the results and modify the malware to be more effective, allowing attackers to adjust their tactics in real time in response to a target’s defenses. These adaptive malware attacks can be extremely challenging to stave off.

Generative AI has also significantly lowered the barrier to entry for cybercrime. What previously required coding knowledge and savvy can now be achieved by individuals with little to no computing experience. This means that attacks are not only more sophisticated but also more frequent. 

Malicious AI models

Publicly available generative AI models have guardrails designed to prevent people from using them for harmful or criminal purposes. While individuals who know how to “trick” the AI can bypass those limitations, criminals intent on using large language models more efficiently can turn to platforms specifically designed for them.

These “dark LLMs” tend to specialize in certain areas of criminal activity. Some may be designed to expedite malware testing, while others may be tailored to bot creation or fraud.

About NetworkTigers


NetworkTigers is the leader in the secondary market for Grade A, seller-refurbished networking equipment. Founded in January 1996 as Andover Consulting Group, which built and re-architected data centers for Fortune 500 firms, NetworkTigers provides consulting and network equipment to global governmental agencies, Fortune 2000, and healthcare companies. www.networktigers.com.

Ben Walker
Ben Walker is a freelance research-based technical writer. He has worked as a content QA analyst for AT&T and Pernod Ricard.
