NetworkTiger’s list of 2023 cybersecurity predictions. It’s only going to get worse.
It’s a new year, and cybersecurity researchers and think tanks worldwide are predicting what we may be up against in 2023. While many cybersecurity challenges are expected based on trends witnessed year after year, this year’s forecasts are interesting due to the passage of federal cybersecurity laws and advancements in artificial intelligence.
Ransomware and malware: the eternal threat
Ransomware and malware will continue to plague organizations large and small, inflicting financial and operational damage. However, experts are theorizing that the haphazard methodology of past attacks may give way to more sophisticated, highly targeted campaigns that home in on multinational organizations, municipalities and infrastructure.
Continued government emphasis on protecting critical infrastructure
A further bolstering of cybersecurity protections for critical infrastructure is predicted to be a key focus in 2023 and beyond. 2022’s passing of the Cyber Incident Reporting for Critical Infrastructure Act and the Better Cybercrime Metrics Act signal the federal government’s intentions to take cybercrime seriously regarding national security and threat tracking. As these bills roll out and see real-world implementation, we can naturally expect amendments and further laws to follow.
Multi-factor attacks are predicted to evolve
Multi-factor authentication, once touted as something of a silver bullet against hacks, has seen its effectiveness wane as threat actors have devised ways to circumvent it or socially engineer users into weakening its purpose. As more users and organizations have adopted MFA, hackers have kept pace and modified their methods to accommodate it. Depending on how much MFA attacks evolve in the coming months, the end of 2023 may even see MFA derided as an antiquated means of protection.
Deepfakes hit primetime
Deepfake technology, as a quick perusal of YouTube or TikTok can confirm, is becoming achievable even for those with limited technological know-how. With so many people already susceptible to being phished by a convincing email or text message, the advancement of deepfake technology will make separating fact from fiction nearly impossible in some scenarios. Fraudulent videos of coworkers, superiors or family members asking for sensitive data will make scam text messages seem quaint.
Deepfakes of politicians making incendiary statements will also have tremendous implications concerning government stability and social unrest. State actors releasing purported video of a leader calling followers to arms, especially in underdeveloped countries where significant portions of the population may not be familiar with cutting-edge technology, could have disastrous destabilizing consequences. In a world where comically absurd misinformation can already garner devout belief, the implications of realistic deepfakes are frightening.
Battling artificial intelligences
2022 was the year that artificial intelligence went mainstream. Controversial content generators like DALL-E, MidJourney and ChatGPT demonstrated the technology's exponential advancement and its potential to disrupt almost every field of expertise, opening up an entirely new universe of potential threats.
On a basic level, threat actors will utilize AI to communicate with victims in the victims' own language. One of the most obvious "tells" of a scam is poor grammar. Intelligent translators will make this hiccup a thing of the past as scammers leverage them to add nuance to their communications and erase typos and incorrect verbiage.
Some experts predict that 2023 will be the first year we witness dueling automated systems battling in cyberspace at speeds human programmers cannot achieve. With hacking collectives and state-sponsored organizations using AI to create and launch attacks against systems protected with AI-directed security, we may be on the cusp of an IT arms race with victories going to the “smartest” technology.
Sounding less like science fiction by the day, researchers wonder whether an ill-advised user might accidentally create a destructive AI that unleashes widespread disruption and destruction. It's a scenario that only James Cameron may have seen coming.
Employers are predicted to get snoopier
Much of the planet’s workforce breathed a collective sigh of relief with the adoption of remote work, as it signaled an end to hovering bosses checking in to ensure that employees were living up to their standard of “busy.”
That respite may be short-lived, however, as employers are adopting so-called "productivity surveillance" tools to gain insight into who is and is not at their desk toiling, no matter where said desk may be located. This Orwellian practice, which essentially mandates that employees install spyware on their devices to ensure compliance, is already in use at a whopping 78% of employers and shows no sign of slowing down as companies struggle to manage workers they cannot physically monitor.
Mandated security features
As the wheels of government slowly grind in the direction of cybersecurity prioritization, it's reasonable to assume that regulatory bodies will set their sights on the private sector and mandate that companies' offerings meet minimum security requirements before they can legally be sold. Products that store and process user data will likely need to meet government guidelines, holding manufacturers accountable for security holes in their offerings.
The Metaverse will give criminals a new playground
While Mark Zuckerberg’s investment in the Metaverse has yet to attract many users, let alone change the internet landscape, an alternate virtual landscape will undoubtedly break through soon. As people migrate into it and incorporate it into their lives, criminals will surely follow and devise new ways to scam users, steal data, sell illicit goods and otherwise wreak havoc.
At this point, it seems unlikely that Meta will lead the charge into this new frontier. Security experts have their fingers crossed that whoever does will take a more preemptive and holistic approach to user data and privacy than Facebook has.
Privacy concerns are predicted to escalate
Social networks, surveillance cameras and our plethora of connected devices have coalesced to create a world in which privacy is challenging, if not impossible, to achieve.
Individuals who believe themselves to be “off the grid” could potentially still be tracked, located and observed through facial recognition algorithms capable of scanning publicly accessible social media accounts for photos and videos that they unintentionally appear in.
The public will likely demand better protection for their personal data, even though 70% of all countries already have privacy protection legislation in place and the American Data Privacy and Protection Act has been introduced in the US. However, it is hard to imagine how laws and legislation will put the genie back in the bottle when there is a constant tug-of-war between protecting privacy and seeming to care very little for it.