Laura

03/01/2023

AI Malware? Three Ways AI Will Change the Threat Landscape

The release of ChatGPT by OpenAI was one of the biggest stories of December 2022. It may also be an early sign of how future threats will evolve.

Quick recap: OpenAI, an artificial intelligence research lab previously funded by Elon Musk, released a free public test version of ChatGPT, its latest natural language AI model, on November 30, 2022. Within seven days, the ChatGPT chatbot had over one million users.

The ability of ChatGPT to answer natural language questions with (mostly) relevant outputs blew the internet away. ChatGPT hints at a future where AI might do everything from drafting legal contracts to answering real-time customer queries and challenging Google Search itself. 

However, alongside all the positive hype, there were some disturbing undertones. A big one: how might threat actors harness AI tools like ChatGPT to carry out cyber-attacks and undermine organisations’ defences?

Here are three ways we think AI malware and other trends could change the threat landscape in the near future, plus how companies can leverage AI in their own security operations.

AI-Powered Phishing

Most of the more than 255 million phishing attacks recorded in 2022 were caught by filters or were so obviously fake that no one clicked on them. But the ones that make it through to inboxes and get clicked on tend to have something in common: they are highly targeted.

The danger of more widely accessible AI content generation tools is that they will make it far easier for threat actors to craft convincing, targeted phishing emails at scale.

Infosec blogger Richard Osgood recently posted a disturbing use case in which AI generated a convincing phishing email and automated the creation of a fake landing page designed to steal credentials.

Osgood prompted ChatGPT to write an email offering an employee of a fictional company “a chance to win a gift card”. Through multiple iterations, he then got the AI to write the source code for a survey page that required employees to log in. After a few tweaks, the code checked out and produced a working web page.

It takes little thought to imagine how this capability could soon result in a new wave of targeted phishing attempts. 

Next-Level Shadow IT Risk

Aside from the obvious risks of threat actors using AI, there will also be new risks from employees using AI. 

One of the most popular capabilities of ChatGPT and other tools like GitHub Copilot is their ability to automate some or all of the code-writing process. Based on incomplete code or, in the case of ChatGPT, natural language prompts, they can autocomplete sections of code or even entire programs.

This capability will likely be a game changer for programmers and will allow many more people to develop and deploy custom software. However, it comes with a significant downside: AI-generated code may be easier to exploit than code written by humans.

In one study conducted by a team of Stanford University researchers, participants who wrote code with help from an AI assistant (built on OpenAI’s Codex, the model behind GitHub Copilot) produced programs with more exploitable vulnerabilities than those written by a purely human control group. A separate New York University study that ran GitHub Copilot through 89 security-relevant coding scenarios found that around 40% of the programs it generated contained vulnerabilities.
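
To make the risk concrete, here is a small, hypothetical Python sketch of the kind of flaw AI coding assistants are often observed to reproduce: building an SQL query through string formatting, which opens the door to SQL injection, alongside the safer parameterised alternative. The function, table, and column names are illustrative and not taken from either study.

import sqlite3

def find_user_vulnerable(conn: sqlite3.Connection, username: str):
    # Anti-pattern assistants frequently suggest: the user-supplied value is
    # interpolated directly into the SQL string, so an input such as
    # "' OR '1'='1" changes the meaning of the query (SQL injection).
    query = f"SELECT id, email FROM users WHERE username = '{username}'"
    return conn.execute(query).fetchall()

def find_user_safe(conn: sqlite3.Connection, username: str):
    # Safer version: the value is passed as a bound parameter, so the database
    # driver treats it as data rather than as part of the SQL statement.
    query = "SELECT id, email FROM users WHERE username = ?"
    return conn.execute(query, (username,)).fetchall()

Both functions return the same rows for honest input; only the second stays safe when the input is hostile.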

Partly because of concerns like these, Stack Overflow has already banned answers generated by ChatGPT, citing the high rate of plausible-looking but incorrect responses.

However, the danger for security teams is that employees are unlikely to heed such warnings. If using AI to build applications or finish bits of code makes people’s jobs more manageable, it will more than likely see widespread use.

There is a real danger that AI will add to the already enormous shadow IT problem organisations face and weaken their security.

AI Malware 

AI will make it easier for unskilled cybercriminals to create exploits and dramatically increase the speed with which skilled cybercriminals work. 

The same day ChatGPT launched, computer security researcher Brendan Dolan-Gavitt posted on Twitter about his experience asking ChatGPT to create a buffer overflow exploit.

With a relatively straightforward prompt, framed as a capture-the-flag challenge to get around ChatGPT’s misuse filters, Dolan-Gavitt received working exploit code and instructions for executing it in a specified environment. Scary stuff.

Slightly reassuringly, the exploit code was off on the initial try (it used the wrong number of characters). However, after more prompting, the model produced a working exploit.

The potential consequences of this kind of capability are far-reaching. And exploit generation is not where the destructive potential of AI ends.

One of the most effective uses for ChatGPT is explaining what a piece of code does in simple language. As a result, AI could become a powerful tool for threat reconnaissance. 

Armed with an AI tool like ChatGPT, a hacker could instantly check application source code for vulnerabilities and develop new custom exploits in real time.
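
To give a sense of how low the barrier is, here is a minimal sketch of asking a model to explain a snippet of code using the pre-1.0 OpenAI Python client; the model name, prompt, and parameters are assumptions for illustration, and the same technique serves defenders reviewing their own code just as well as attackers performing reconnaissance.

import openai  # assumes the pre-1.0 openai package (pip install "openai<1")

openai.api_key = "YOUR_API_KEY"  # placeholder credential

snippet = """
strcpy(buffer, user_input);  /* copies untrusted input into a fixed-size buffer */
"""

# Ask the model to describe what the code does and flag anything risky.
response = openai.Completion.create(
    model="text-davinci-003",  # assumed model; any capable completion model would do
    prompt="Explain in plain English what this C code does and note any security issues:\n" + snippet,
    max_tokens=200,
    temperature=0,
)

print(response["choices"][0]["text"].strip())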

With zero-day threats already breaking records, an endless wave of AI-powered exploits is a terrifying prospect.  

Using AI for Defence with SenseOn

It might be a cliché to say you should “fight fire with fire,” but as AI enters the threat landscape, it is critical to leverage AI as a defensive tool against AI-enhanced cybercrime.

Fortunately, defensive AI is already a reality.

Within its consolidated cyber defence system, SenseOn uses a patented AI system called “AI Triangulation.” This system uses data from across an organisation’s entire digital estate to detect cyber threats an order of magnitude faster than is typically possible.

Our AI takes input from various environments, including servers, endpoints, and network traffic, and spots threats early, using the MITRE ATT&CK framework for real-world threat intelligence and detection.

Leveraging AI also allows organisations that use SenseOn’s threat detection and incident response platform to give time back to their security teams. SenseOn’s AI and machine learning capabilities mean it can learn what an organisation’s “normal” state looks like and sort false alarms from real threats in real time.
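
As a toy illustration of the general idea of baselining “normal” behaviour (this is not SenseOn’s actual implementation, and the feature names are invented), an anomaly detector can be trained on historical activity and then used to flag events that deviate from it:

import numpy as np
from sklearn.ensemble import IsolationForest

# Hypothetical features per event: bytes sent, distinct destinations, logins per hour.
rng = np.random.default_rng(0)
normal_activity = rng.normal(loc=[5000, 3, 2], scale=[500, 1, 1], size=(1000, 3))

# Learn what "normal" looks like from historical data.
model = IsolationForest(contamination=0.01, random_state=0).fit(normal_activity)

# Score new events: -1 means anomalous, 1 means consistent with the baseline.
new_events = np.array([
    [5200, 3, 2],       # ordinary activity
    [900000, 40, 25],   # unusual spike worth investigating
])
print(model.predict(new_events))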

Automating routine investigations into false alerts, a constant drain on human analysts, reduces stress and frees up time better spent on the complex security tasks that humans do best.

Contact us to learn more.