
The implications of AI for cybersecurity

NetworkTigers debates the possible implications of AI for cybersecurity.

With security experts worldwide predicting what 2023 may have in store for the cyber landscape, all agree that artificial intelligence (AI) and machine learning will play a critical role in how threat actors stage attacks and how organizations and administrators defend against them.

What exactly is AI and machine learning?

AI is the capability of a computer to solve problems and make decisions by simulating aspects of human thought, most commonly through a technique referred to as "machine learning."

Machine learning, as defined by IBM, “focuses on the use of data and algorithms to imitate the way that humans learn, gradually improving its accuracy.”
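
To make that definition concrete, here is a minimal, illustrative sketch in Python using the open-source scikit-learn library. The dataset is synthetic and the model choice is arbitrary; nothing here comes from IBM or from this article. It simply shows a model whose accuracy tends to improve as it is given more data to learn from:

```python
# Minimal illustration of "learning from data": a classifier's accuracy
# tends to improve as it sees more training examples.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

# Synthetic data standing in for real-world observations
X, y = make_classification(n_samples=5000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=0
)

for n in (50, 500, 4000):                      # progressively larger training sets
    model = LogisticRegression(max_iter=1000)
    model.fit(X_train[:n], y_train[:n])        # "learn" from the first n examples
    acc = accuracy_score(y_test, model.predict(X_test))
    print(f"trained on {n:>4} examples -> test accuracy {acc:.2f}")
```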

Why is AI controversial?

Historically, the term “AI” brought to mind science fiction stories around rogue computers that become sentient or machines that revolt against their human creators. While a scenario of that nature feels a little less like a fantasy every day, our current engagements with AI are far less cinematic.

We are already accustomed to a degree of machine learning through social media in the form of algorithmically displayed content. Streaming platforms such as Netflix also employ machine learning to suggest what entertainment we may enjoy based on our previously viewed shows and movies. These algorithms are designed to adjust dynamically to our habits without human intervention. They observe our actions, take note of how we respond to what they show us and serve up similar content to keep us engaged with the platform.
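
As a rough sketch of the underlying idea (the titles and genre weights below are invented, and real platforms rely on far richer behavioral signals and more sophisticated models), a content-based recommender can be expressed in a few lines of Python:

```python
# Toy content-based recommender: suggest titles most similar to what a
# viewer has already watched. Real platforms use far richer signals,
# but the basic loop -- observe, score, rank, repeat -- is the same.
import numpy as np

# Hypothetical catalog: each title described by genre weights
# [action, comedy, documentary, sci-fi]
catalog = {
    "Space Saga":     np.array([0.7, 0.0, 0.0, 0.9]),
    "Stand-up Hour":  np.array([0.0, 1.0, 0.0, 0.0]),
    "Ocean Planet":   np.array([0.1, 0.0, 1.0, 0.2]),
    "Robot Uprising": np.array([0.9, 0.1, 0.0, 0.8]),
}

watched = ["Space Saga"]                                   # viewing history so far
profile = np.mean([catalog[t] for t in watched], axis=0)   # the viewer's taste profile

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-9))

# Rank unwatched titles by similarity to the viewer's profile
scores = {t: cosine(profile, v) for t, v in catalog.items() if t not in watched}
for title, score in sorted(scores.items(), key=lambda kv: -kv[1]):
    print(f"{title:15s} {score:.2f}")
```

The point is the feedback loop: the more a viewer watches, the sharper the profile becomes, and the more tailored the next round of suggestions.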

There are ethical arguments to be made about this type of business model when it comes to social engineering, paid advertising, the spread of misinformation and the fact that violent content tends to circulate most readily. Social media platforms can also use the power of their algorithms to influence what we see in intentionally inorganic ways that serve the company more than the individuals who use their apps.

AI is also poised to disrupt employment across all sectors. ChatGPT is an AI chatbot, trained on vast swaths of internet data, that can answer questions, compose content on request and even write functional code. From freelance writers and developers to search engine giant Google, ChatGPT is seen as an existential threat to those who make a living using their human brains to generate original content.

AI art generators like Midjourney behave similarly, using the web's wealth of data to create startlingly high-quality images based on little more than a prompt from a user. They can accommodate requests based on style (oil painting, '70s photograph, etc.) and even create images that convincingly copy an established artist's style.

Because AI content generators pull their data from copyrighted works, sometimes even "accidentally" including a rights holder's watermark in their visual output, a debate is boiling as to whether such assimilation constitutes copyright infringement. In the meantime, however, this has not dissuaded major publishers from illustrating their articles with AI-generated content that effectively cuts the artist out of the deal. This is much to the dismay of creators who have honed their craft only to see paying clients opt for cheaper computer-generated material that may, in fact, still include aspects of their previously published work.

From medical imaging to cargo hauling, AI's potential to displace workers across nearly every sector leaves many wondering what their role will be in the coming years, especially with ChatGPT recently having passed the US Medical Licensing Exam and the Bar Exam.

How threat actors can harness AI

While AI content creation and its impact on employment have entered public discourse, the use of AI by criminal enterprises and threat actors has been largely left out of the discussion. Netflix using an algorithm to suggest movies seems largely innocuous, but the same technology used to predict and counter an individual's behavior in the context of a hack or social engineering scam could result in cyberattacks that are nearly impossible to dodge.

Weaponized chatbots

Bots like ChatGPT can already create content in the voice of people whose mannerisms are entrenched in popular culture. Want a sugar cookie recipe written up in the voice of Barack Obama addressing the United Nations? Within five seconds, you’ll have it.

Aside from the fact that hackers are already using ChatGPT to write malware more efficiently, chatbots can be used to communicate with victims more convincingly in their native language, avoiding the poor grammar that is often a telltale sign of a scam. Advancements in natural inflection and response are also being developed to create convincing fake personas on dating sites and other platforms where people may be persuaded to make purchases or send money to someone who is, in fact, just a carefully curated automation.

This same AI technology could be fed a diet of a specific person’s mannerisms and used to create spear phishing attacks subtle enough to trick even the savviest internet user into believing that they are texting with their boss or family member.

Deepfakes

Deepfake technology is a form of machine learning that can create convincing video content of a person after scanning images of their face to build a three-dimensional interpretation of how they look with various expressions. This interpretation can then be tracked to a live actor’s face as they emote and speak, resulting in what looks like the deepfaked person performing said actions. 

This technology is already being applied extensively to filmmaking. Disney has invested heavily in its proprietary deepfake algorithms, using them to de-age actors and even bring an '80s-era Mark Hamill to the screen as Luke Skywalker in "The Mandalorian."

Amazingly, a YouTuber took issue with Disney’s original Skywalker deepfake and created a version that was so superior that they were hired to work on future episodes. While this story is interesting because it shows how a talented, determined artist can eclipse the efforts of a multi-billion dollar global entertainment empire from their desktop computer, it also highlights the danger within reach of hackers.

Criminals will surely use deepfake technology for everything from fraudulent videos of workplace superiors requesting login credentials to clips of political leaders making inflammatory statements or engaging in controversial behavior. We are entering an era in which it will become more and more difficult to discern fact from fiction, and it is this very level of universal uncertainty that bad actors, some state-sponsored, will be able to capitalize on via social engineering schemes that employ deepfakes.

Currently, a deepfake’s ability to create a realistic facsimile depends on the quality and quantity of photographs it is trained on, making celebrities ideal candidates due to the amount of material available. Even those who have never heard the term before are likely familiar with the comedic social media accounts that feature digitally impersonated versions of actors like Tom Cruise and Keanu Reeves performing mundane daily tasks. 

As technology advances, however, it will certainly be able to do more with less. This means that it may eventually only take a handful of photographs for a threat actor to assemble a deepfake realistic enough to do serious damage.

Deepfaked audio is also within reach. To once again cite Disney, the voice of Darth Vader in its recent series "Obi-Wan Kenobi" was generated entirely by AI company Respeecher. James Earl Jones provided none of his iconic voice work for the character, whose lines were produced by a model trained on decades of the actor's recordings. Soon, we may not even be able to fully trust a voice call.

Staging and executing dynamic attacks

In a battle as old as computers themselves, criminals and developers have been playing leapfrog, each side discovering something about the other and then responding accordingly. A new exploit results in developers releasing a software update to fix the bug. Conversely, every new software version sees hackers poking and prodding for unnoticed weaknesses. 

AI is predicted to end this turn-based scenario as security firms and criminals alike employ dynamic programs that can predict an adversary's moves, react in real time to incoming attacks and swoop in for the kill the moment a weakness is revealed. The days of patch downloads and emails encouraging users to install the latest OS version will seem old-fashioned as AIs duel in cyberspace, trading thousands of blows a second and even self-patching before an administrator knows the network is under siege.

While that scenario has yet to fully unfold, malware that can evolve to bypass detection and remain hidden within systems is already a major concern for security developers. Standard, static defensive measures simply won't be up to the task. They will have to be supplemented, or replaced entirely, by an infrastructure with the brains needed to actively hunt down evasive threats.
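
One way to picture what "actively hunting" evasive threats can mean in practice: instead of matching known signatures, a detector is trained on a baseline of normal behavior and flags whatever deviates from it. The sketch below uses Python and scikit-learn's IsolationForest; the telemetry features and numbers are invented purely for illustration and do not describe any particular vendor's product:

```python
# Sketch of behavior-based detection, the kind of approach that can flag
# malware which mutates past static signatures. An anomaly detector is fit
# on "normal" activity and then scores new events. Feature choice here is
# purely illustrative.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)

# Baseline telemetry: [outbound connections/min, CPU %, files touched/min]
normal = rng.normal(loc=[5, 20, 10], scale=[2, 5, 3], size=(1000, 3))

detector = IsolationForest(contamination=0.01, random_state=42).fit(normal)

new_events = np.array([
    [6, 22, 11],     # looks like routine activity
    [90, 85, 400],   # sudden exfiltration-like burst
])
for event, verdict in zip(new_events, detector.predict(new_events)):
    label = "anomalous" if verdict == -1 else "normal"
    print(event, "->", label)
```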

How can we defend against malicious AI usage?

It’s plain to see that we are on the verge of an arms race around AI’s use in cyberspace. Thankfully, run-of-the-mill criminals simply don’t have access to the best minds in Silicon Valley when it comes to creating proprietary technology. This means that attacks in the near future will likely only leverage familiar, widely available tools similar to ChatGPT. 

However, just as we've witnessed a YouTuber take on Disney and beat them at their own game, tech advances are continually leveling the playing field. Additionally, state-sponsored hacking enterprises in countries like Russia and China can focus their resources on developing competitive tools or, as is often the case, simply steal them from others via conventional espionage and data exfiltration.

Ultimately, organizations would do well to begin integrating AI into business operations wherever possible while maintaining essential cybersecurity best practices such as regular staff training on current threats, mandatory multifactor authentication and adoption of a zero trust model. As more antivirus and cloud-based security providers build AI into their offerings, we can expect the shift to happen organically, as long as administrators keep their defenses regularly updated.

An uncertain future …

AI's role in cybersecurity may seem fraught, but Monica Oravcova, COO and co-founder of cybersecurity firm Naoris Protocol, feels that AI's integration could very well be a net positive for the cyber landscape, as long as those on the right side of it act quickly to set the stage.

Regulation, as noted by Oravcova, moves at a glacial pace compared to technological advancement and market adoption. Therefore, it is essential that organizations set themselves up to battle evolving threats while also maintaining an ethical implementation of their own usage of AI as it relates to their users and customer data and privacy. 

Whether such a degree of faith ought to be placed in corporate entities that are designed to prioritize growth over societal wellbeing, and that have thus far proven less than stellar at keeping customer and user data out of the hands of criminals, is another debate entirely. What is certain, however, is that AI's integration into our daily lives is no longer looming in the future; it is here now and in for the long haul.

Derek Walborn
Derek Walborn is a freelance research-based technical writer. He has worked as a content QA analyst for AT&T and Pernod Ricard.
