NetworkTigers on AI hacking. What is it and is it a significant threat to cybersecurity?
AI, or artificial intelligence, can sift through massive amounts of data at rates that outpace anything a human can do. It can spot patterns, aggregate data, and even replicate what it learns convincingly enough to produce essays, artwork, code, and more that can pass for the real deal. How can humans compete against AI hacking? What is it, and is it already a risk for you and your home or business network?
Understanding AI hacking
Artificial intelligence is currently defined as “machines that respond to stimulation consistent with traditional responses from humans, given the human capacity for contemplation, judgment, and intention.” AI can make decisions in ways that we associate with human intentionality and the ability to learn and absorb. As AI becomes more advanced, so does its ability to create and develop threats. For this reason, understanding the danger of AI hacking means continuing to follow developments in AI programming.
Many companies already use AI to spot patterns in how we access data. For instance, some AI is used in analytics and advertising to learn what kinds of movies we like to watch, what types of products we want to buy, and when we are likeliest to hit “proceed to checkout” on an online shopping cart. (The peak purchase time, by the way, is currently 8 to 9 pm, Monday through Thursday.) AI is also a robust, though arguably underutilized, line of defense against existing cybersecurity threats. Machine learning can identify TTPs (the tactics, techniques, and procedures that make up an attacker's playbook), and from there AI can be used to build detection capabilities.
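To make the idea concrete, here is a minimal sketch of the baseline-and-deviation logic that underlies many machine-learning detection pipelines. The event counts and threshold are hypothetical illustrations, not a production detector:

```python
from statistics import mean, stdev

# Hypothetical daily counts of failed logins for one account (normal behavior).
baseline = [3, 5, 4, 6, 5, 4, 3, 5, 4, 6]

def is_anomalous(observed, history, z_threshold=3.0):
    """Flag an observation that deviates sharply from the learned baseline."""
    mu, sigma = mean(history), stdev(history)
    z = (observed - mu) / sigma  # how many standard deviations above normal
    return z > z_threshold

print(is_anomalous(40, baseline))  # a burst of failed logins stands out: True
print(is_anomalous(5, baseline))   # typical activity does not: False
```

Real detection systems learn far richer features than a single count, but the principle is the same: model normal behavior, then flag what falls outside it.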
AI hacks are already out there
Unfortunately, AI is a tool that can be harnessed by bad actors, even though the technology itself has no intent to cause harm. AI-generated phishing emails have already been shown to be opened at a higher rate than those written by humans. Similarly, AI can create more sophisticated deepfake data that can poison data wells, even preventing AI-led cybersecurity processes from functioning properly. AI can build malware that avoids detection by lurking for extended periods within existing systems. It can also weaponize the very penetration testing tools designed to protect against infiltration.
Some researchers are already attempting to stay one step ahead of AI hacking. However, in doing so, they may develop software that can be used against their interests. For instance, researchers at the Stevens Institute of Technology in Hoboken, New Jersey, have been working to develop more complex GANs (generative adversarial networks). This dual-network neural technology may be able both to solve CAPTCHA challenges and to generate systems of likely passwords. Unlike traditional password-guessing tools, these GANs learn from lists of previously leaked passwords and generate convincing guesses. Because neural networks can produce candidate passwords indefinitely and absorb likely combinations from previous leaks, each new leak makes future hacks more likely.
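A full GAN is beyond a short sketch, but the core idea, learning the statistics of leaked passwords in order to generate convincing new guesses, can be illustrated with a much simpler character-level Markov model. This is a stand-in for the GAN approach, and the toy "leaked" password list is entirely hypothetical:

```python
import random
from collections import defaultdict

# Toy stand-in for a leaked-password corpus (hypothetical examples).
LEAKED = ["password1", "letmein22", "dragon99", "sunshine7", "password99"]

def train(corpus, order=2):
    """Count which character tends to follow each `order`-length prefix."""
    model = defaultdict(list)
    for pw in corpus:
        padded = "^" * order + pw + "$"  # ^ marks the start, $ the end
        for i in range(len(padded) - order):
            model[padded[i:i + order]].append(padded[i + order])
    return model

def generate(model, order=2, max_len=16, rng=random):
    """Sample one candidate password from the learned statistics."""
    out, state = [], "^" * order
    while len(out) < max_len:
        nxt = rng.choice(model[state])
        if nxt == "$":
            break
        out.append(nxt)
        state = state[1:] + nxt
    return "".join(out)

model = train(LEAKED)
guesses = [generate(model) for _ in range(5)]
```

Even this crude model produces guesses that mimic real password habits (common words plus trailing digits), which is why guessers trained on leaked corpora are so much more effective than blind brute force.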
Human vs. machine
95% of cybersecurity breaches can be traced back to user error. From falling for phishing attempts to using unsecured home networks and clicking on unknown email attachments, humans are the weakest link in ensuring network stability and data security. AI-based hacks that exploit these habits will be more sophisticated than their human-crafted counterparts.
Surprisingly, those same human inconsistencies may be the best defense against AI hacking. Machine learning works based on understanding and recreating patterns. Humans, however, often do not move in predictable ways. Instead, we make choices sometimes based on random elements or outside of a logical assessment of our goals or best interests. These inconsistencies have proven difficult, although not impossible, for AI to master.
Fields with a high rate of human collaboration, decision-making, and compromise, as well as simple luck, are expected to be some of the most insulated against AI hacking in the foreseeable future.
On the other hand, financial institutions, with their predictable patterns and easily aggregated data, may be most at risk.
AI evolution and employee displacement
One often-overlooked dimension of AI hacking is the real threat of employee displacement. Experts predict that AI will create up to 97 million jobs and generate $15.7 trillion for the economy by 2030 if it continues to be implemented at the current rate. However, these predicted positions do not offset the expected toll of the 375 million jobs AI will make obsolete. And those who lose their jobs (approximately 1 billion people worldwide) should not expect them to be replaced quickly with better or higher-paying options.
All this is to say: the hacks that businesses face may become more sophisticated thanks to AI, but the person instigating them may be all too familiar. As AI becomes more valuable and influential, more jobs, particularly in the tech sector, may become obsolete. AI hacking may not look like the machines turning against us, but like a recently laid-off tech employee using this powerful technology to hack more efficiently.
The most significant risk may be the one we know
These collaborative human-AI attacks may be the most concerning scenario. Many hackers worldwide are already highly skilled, tech-literate young people without better opportunities for steady employment. With AI hacking at their command, the threat businesses and individuals face may seem more sophisticated, but it still comes from a human source. While AI may identify vulnerabilities in a system at a rate that is difficult to combat, an experienced hacker behind the screen can apply human judgment to maximize the financial payoff and respond to the human unpredictability that AI may not yet be equipped to handle.
Which threat is more potent – the AI or the person behind it? Let us know your feelings on this world-altering technology below.