AI and Cybersecurity: Protecting Digital Landscapes with Artificial Intelligence

Written by Scott Wilson

digital eye ai

Artificial intelligence stands to bring enormous changes to many different parts of many different industries, our world, and our daily lives.

But there may be no field being turned on its head as quickly and dramatically as cybersecurity.

Malevolent, impervious, or just plain misunderstood AI systems are a mainstay of science fiction from 2001: A Space Odyssey to Ex Machina. Society is primed to see the risk and downside in AI when it comes to cybersecurity.

The reality is more complex. Artificial intelligence does present some new and sobering threats to computer security. But AI also offers some exciting new avenues for cybersecurity professionals to throw a new layer of protection over the digital landscape.

Exploring the Advantages AI Offers Cybersecurity Professionals

cybersecurity network protection in the officeThe broad capabilities AI tools offer make for many applications in the digital security landscape. Like every other area of AI research, new ideas and tools are being tried out all the time. What seems hot today could be a blind alley. What looks crazy and impossible may be the future.

AI is already playing a significant role in cybersecurity, however. In the inevitable tit-for-tat that comes with cyber defense, any kind of edge for either side can be a game-changer. As cybercriminals come to the court with new AI-generated attacks, cybersecurity professionals are leveraging some of the most powerful abilities in artificial intelligence to blunt the threats.

In a field that is facing serious hiring shortfalls, AI may also help cybersecurity teams bridge the gap. A 2022 ISC Study found a global shortfall of more than 3 million positions in cybersecurity. With a serious lack of available expertise, the efficiency that comes with automated tools may allow the professionals who are available to do more and act more effectively in their roles.

Automation Offers an Edge in Speed to Cybersecurity Defenses

digital networkOne of the major advantages any computerized system has over humans is speed. Any electronic system boasts reaction times that a human can never match. When you’re reacting to a massive cyberattack, that’s a huge benefit.

The flaw has been that while machines bring plenty of speed to the table, they lack judgment. Conventional intrusion detection and response systems are programmed with elaborate but inflexible signatures to recognize and react to attacks.

That’s all well and good when the cyberattack is a known technique that has been cataloged and described. But against a novel or zero-day attack, these offer very little defense. Where they are programmed with looser parameters, they risk catching legitimate traffic; where too restrictive, they spot nothing. Only expert human assessment can determine the difference in such events… but it takes time.

AI has the potential to bring the reaction time of computers to bear on the problem, combined with the reasoning and analytical skills of a human expert.

AI can make informed calls about intrusion attempts in milliseconds, but that’s not its only advantage. It also has a span of consideration that no mere human consciousness could track with.

For example, signals analysis of internet traffic is a time-honored technique to detect malicious uses. But it involves thousands of bits per second flowing through hundreds of connections coming from systems all around the world. Even today, human analysts lean heavily on software tools to pick apart packets and assess the flow of information.

But with an unblinking electronic eye, AI sight algorithms may surface suspicious traffic long before it would be spotted by human cybersecurity teams. With the ability to see and compare and spot trends in vast amounts of data in near real time, these tools can leave cybercriminals with nowhere to hide.

Analyzing Vulnerabilities and Predicting Threats Keeps Cybersecurity Teams Ahead of the Hackers

computer hacker in hoodieBig data is both one of the reasons that modern AI has become as capable as it is, and one of the first areas where AI tools really made an impact. By using machine learning (ML) algorithms to clean, assess, and analyze very large and disparate stacks of information, insights can be generated that humans could take decades to isolate, if ever.

There are few fields generating as much data as cybersecurity today. From the logs of thousands of IoT devices, to security cameras, to IDS systems, every second of every day more data points are flowing into the stores of cybersecurity teams.

AI algorithms are sifting through and making sense of that information to identify attack patterns, utilization trends, and even to make predictions about future vulnerabilities. By establishing deep familiarity with normal utilization, these systems quickly identify and flag, or even respond to anomalous behavior.

These types of applications can even extend into the real world. Facial recognition and behavioral analysis can identify bad actors who are making physical penetration attempts before they even get near a terminal.

System Hardening and Risk Management Creates Resilient Networks Where Cyberattacks Wither and Die

cyber attack on screenAI’s flexibility also makes it the ideal tool for developing dynamic responses to cyberattacks. Beyond the actual intrusion detection and response tools, AI can build resilience into cyber systems. AI operations platforms can play a big role in cybersecurity even if they weren’t expressly designed for the task.

In 2022, for instance, telecommunications tech company Ericsson released their Service Continuity AI app suite to deliver hardening and self-healing capabilities to digital networks. By letting AI evaluate key failure points and automatically route traffic past problems, they claim reductions of more than 30 percent in critical incidents and a 60 percent reduction in performance issues overall.

This is an area of cybersecurity that doesn’t see as much attention as the flashier roles of tracking hackers and fighting off cyberattacks. But it has a much greater span of protection. AI-driven network resilience can guard systems against natural disasters and accidents as much as intentional breaches.

In the AI Arms Race, Cybersecurity Professionals Are Facing Serious Dark Side Competition

lone hackerThere’s a dark side to AI in cybersecurity as well. For all that AI and ML tools can do to revolutionize monitoring and response to cyberattacks, they can also offer attackers unfathomable power.

The rapid rate at which ML can adjust parameters and rewrite algorithms can be used just as easily to launch rapid and repeated attacks along many vectors. ML-driven cyberattacks can also flood the zone, creating polymorphic code that evades any kind of signature detection system. The proof-of-concept BlackMamba key logger released by Hyas Infosec in 2023 used generative AI to create malicious code that went entirely undetected by industry-standard endpoint detection and response software.

Even the most basic and apparently unrelated AI can present a security threat. Overseas organizations launching large-scale phishing attacks once suffered from the drawback of having to craft compelling email messages outside their native language. The poor grammar and stilted word choices of various Nigerian princes was a major tell that kept victims from biting.

But the carefully honed words of a ChatGPT-generated message slide like honey into the inboxes of unwary recipients. While ChatGPT and other commercial tools theoretically have safeguards against such uses, it’s proven trivial to bypass them. In any case, a black hat generative tool called WormGPT without such restrictions is already in the wild. Better ones are very likely already in development.

On the other side of the table, there have already been documented cases of scammers creating spoof AI sites to look like ChatGPT. Their hope is that users who imagine they are conversing with a machine may be more likely to reveal personal details that can be used in identity theft, blackmail, and a host of other crimes. In other cases, they try to dupe users into downloading malware.

Artificial Intelligence Itself Can Be Vulnerable to Hacking Attempts

ai deep learning

Of course, AI itself represents both a valuable target and a new attack vector for hackers. One of the most interesting aspects of cybersecurity in the AI age may be finding ways to get artificial intelligence to guard itself.

Almost as soon as ChatGPT was released, security researchers and cybercriminals alike began attempts at bypassing various safeguards in the system. While many researchers focused on breaking the house training to get the system to produce misinformation or evidence of unacceptable biases, the criminals were looking for something else: access to personal information.

With interactions rooted in what appear to be common conversations, it’s not always even clear what constitutes a cyberattack on AI systems.

The massive datasets that generative AI is trained on are generally built from public information. But that won’t continue to be the case for many special-purpose generative transformers. It’s easy for private details to slip into very large datasets that are beyond the scope of human review, too. For that matter, poisoning the datasets that go into generative models represents and new and worrisome avenue of attack.

On top of that, people tend to divulge more than they should when they believe they are confiding in a machine. If the machine can be made to repeat back conversations that were supposed to be confidential, hackers may get a bonanza of personal data.

So tricking AI to cough up secrets through careful prompt engineering is already a thing. And cybersecurity professionals will also have to pay careful attention to how AI is built on legacy code that can introduce vulnerabilities. The biggest ChatGPT hack to date came not through the AI code itself but an open-source library used to store chat history.

How Can You Secure a Machine That Thinks for Itself?

3 laws of robotics

The idea that an artificial intelligence might be open to new and unusual attacks due to the very fact that they have been given reasoning ability isn’t a new one. In fact, you can go right back to the inventor of the Three Laws of Robotics for some of the first thoughts on hacking the artificial brain.

Isaac Asimov’s thinking on artificial intelligence and his stories about robots and AI have shaped or inspired many people working in the field today. While such ideas were firmly in the realm of science fiction when Asimov was working on them, he nonetheless foresaw some of the logical and social obstacles facing AI in the real world.

While the Three Laws were an attempt to put forward a model to render intelligent machines safe for humans and society, Asimov also understood that humans might be a threat to machines. Unlike a human brain, the positronic brains of his robots could be manipulated and jammed simply by interacting with them in the right sequence. Posing illogical questions or a sequence of unsolvable problems could crash the robot.

Once again, his vision may be coming to life. At least one enterprising Redditor asked ChatGPT to give a prompt that would crash it… which it did, and promptly was disconnected from the session as the prompt was fed back to it.

AI itself will have to be carefully constructed and built to protect against such vulnerabilities before it can be effectively used to guard other systems.

So Far, AI Offers a Winning Hand to Cybersecurity Teams

deep learning meditating robotOf course, the silver lining in these dark clouds is that for every malicious use that AI can be put to, it can also serve as a red team tool for detecting vulnerability in systems.

So far, the arms race is on the side of the angels. There are three factors playing in their favor.

First, generative AI requires massive computer power to train and develop. That’s out of the reach of most black hat groups so far.

Second, it takes massive datasets to learn from. In cyberattacks, the logs and traffic will always accumulate more heavily on the defender’s side. So cybersecurity teams today are running with a home field advantage—they know what patterns constitute expected and normal behavior in their systems, and they are better positioned to spot and squash anomalous attacks.

Finally, AI development is spearheaded today by highly trained engineers and researchers. Frequently holding master’s and even doctoral degrees in artificial intelligence, they have a level of expertise that the average black-hat hacker will find tough to match.

But the field of AI is shifting fast and generative tools might not always be at the front of the pack. Any breakthrough could render current advantages null and void. AI engineers will have to continually focus on cybersecurity implications in their work to keep the field safe and one step ahead of the bad guys.