Estimated reading time: 6 minutes
The intersection of ChatGPT and cybersecurity continues to dominate the IT news cycle, and with good reason: this technology and others like it (AI-based generative language models) promise to be highly disruptive to a large number of industries. It also has serious potential to change established routines in IT data protection and security, both negatively and positively. We'll examine both sides in this article.
Reference framework: The Cyber Kill Chain
To examine the topic of ChatGPT and cybersecurity from all angles, we'll employ an established framework for understanding cyberattacks: the Cyber Kill Chain, developed by Lockheed Martin. It encompasses the following phases:
Reconnaissance: researching the organisation and identifying potential targets as well as vulnerabilities.
Weaponisation: creating or adapting the malware that will be used to carry out the attack.
Delivery: getting the malware into the network. The most common route is a phishing email.
Exploitation: the malware begins doing its damage within the network.
Installation: attackers install backdoor entry points to ensure they can re-enter the system.
Command and control: cybercriminals now have full access to the environment and can obtain data as a legitimate user would.
Actions on objective(s): hackers delete, steal, encrypt or hold data for ransom — or a combination of these.
ChatGPT and cybersecurity: Negative effects
As the saying goes, let’s start with the bad news first.
The expanding presence of AI in general – not only ChatGPT – is expected to make it easier for threat actors to carry out attacks. While cyberattacks have traditionally been carried out by teams, AI can now take over many of the tasks involved. This means that cybercriminal teams will accomplish much more in the same amount of time. It also means individuals can now more feasibly carry out attacks on their own.
Here’s how AI could influence each phase of the Kill Chain:
Reconnaissance:
Attackers can use AI to automate and thus speed up the process of collecting information about their targets. In addition, machine learning could simplify the testing process, making it easier to establish which systems and networks are vulnerable.
Weaponisation:
As of yet, no high-profile malware attacks are known to have been carried out using ChatGPT or other AI platforms. However, it is expected that as these technologies advance, cybercriminals will use them to create malware tailored to the target that can also more easily bypass existing defences. ChatGPT has already proved itself to be a very powerful tool for assisting software developers.
Delivery:
This is perhaps the most obvious way in which the majority of end users will encounter this topic. Until now, successful malware campaigns have required a certain level of sophistication: not only the technical skill to carry out such an attack, but also the linguistic ability to write as a native speaker of the target's language would.
The latter component has long informed one of the top pieces of advice for identifying phishing attempts: even with the advent of machine translation, spelling and grammar that seemed a bit “off” were a useful red flag telling the reader not to take any action prompted in the message. AI, and specifically ChatGPT with its generative capabilities, will now help hackers create messages that are 100% grammatically correct.
Exploitation:
Once a vulnerability has been identified, AI-powered technology could be employed to generate code specifically crafted to exploit it. ChatGPT stands out among language generation models for its ability to write code. This saves hackers a massive amount of time and leaves them more opportunities to carry out further attacks.
Installation:
Another case in which hackers can use AI to their advantage: by automating the discovery and creation of backdoors, they can maintain access to the attacked system. Think Cobalt Strike, but easier to use.
What this boils down to is a lower barrier to entry in terms of the skill needed to carry out an attack. The number of attacks is thus expected to increase, given that threat actors will require only basic technical knowledge to install malware in a system.
Command and Control (C2):
AI can detect traffic patterns to help establish norms within the network. C2 efforts can then operate within these norms so as not to trigger any detection systems in place. Furthermore, machine learning can help the malware continuously alter its behaviour to evade detection mechanisms.
Actions on objective(s):
With all the AI-assisted efforts described above, cybercriminals are in a position to carry out attacks faster, with fewer people and with less overall skill.
ChatGPT and cybersecurity: Positive effects
Reconnaissance/Weaponisation:
Similarly to the way hackers might use AI to identify which targets are most suitable for attack, SOCs can employ it to examine traffic in the network and establish any patterns that might point to reconnaissance or weaponisation efforts. Measures to shut down this process can then begin.
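To make this concrete, here is a minimal sketch of that kind of traffic analysis using an unsupervised anomaly detector. The flow features and the simulated baseline are illustrative assumptions, not a description of any particular product:

```python
# A minimal sketch of traffic anomaly detection with an unsupervised model.
# The flow features and the simulated "normal" baseline are illustrative
# assumptions, not a description of any particular product.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)

# Simulate a baseline of normal flows:
# columns are bytes_sent, bytes_received, duration_seconds, distinct_ports.
baseline_flows = np.column_stack([
    rng.normal(1200, 200, 500),
    rng.normal(15000, 3000, 500),
    rng.normal(2.0, 0.5, 500),
    rng.integers(1, 3, 500).astype(float),
])

model = IsolationForest(contamination=0.01, random_state=42).fit(baseline_flows)

# A host suddenly contacting 250 ports with tiny payloads looks nothing
# like the baseline, which is typical of reconnaissance scanning.
scan_like_flow = [[400.0, 120.0, 0.2, 250.0]]
print(model.predict(scan_like_flow))  # [-1] = anomalous, worth an analyst's look
```

In practice, a model like this would be retrained regularly and its alerts fed into the SOC's triage queue rather than acted on blindly.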
Furthermore, AI can also quickly cross-reference multiple intelligence feeds and translate the result into digestible output. This lessens the manual research workload for cybersecurity professionals and allows them to spend more time acting on the alerts that require intervention. In fact, Microsoft has recently released a tool to do just that: Security Copilot.
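As a rough illustration of that cross-referencing, the following sketch counts how many feeds report each indicator of compromise. The feed names and indicators are made up; real feeds would be fetched from providers' APIs:

```python
# A minimal sketch of cross-referencing indicators of compromise (IOCs)
# across feeds. Feed names and indicators are invented for illustration.
from collections import Counter

feeds = {
    "feed_a": {"203.0.113.7", "198.51.100.23", "evil.example.com"},
    "feed_b": {"203.0.113.7", "malware.example.net"},
    "feed_c": {"203.0.113.7", "198.51.100.23"},
}

# Count how many independent feeds report each indicator.
sightings = Counter(ioc for iocs in feeds.values() for ioc in iocs)

# Surface indicators confirmed by two or more feeds first, so analysts
# spend their time on the most credible alerts.
for ioc, count in sightings.most_common():
    if count >= 2:
        print(f"{ioc}: reported by {count} feeds")
```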
Delivery:
Whilst modern phishing attacks may become more sophisticated (as described in the “negative effects” section above), detection services rely on signals beyond language, such as sender authentication results and link destinations, to detect and flag whether an email is suspicious.
Similarly to modern behaviour analytics, AI is likely to be used in the delivery stage of an attack to map out abnormalities in activity.
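Here is a hedged sketch of what those "signals beyond language" might look like in practice. The specific checks and weights are illustrative assumptions, not any vendor's actual ruleset:

```python
# A minimal sketch of scoring an email on signals other than its language.
# The checks and weights are illustrative assumptions, not a real ruleset.
import re

def phishing_score(headers: dict, body: str) -> int:
    score = 0
    auth = headers.get("Authentication-Results", "")
    # Failed sender authentication is a strong technical signal, no matter
    # how flawlessly the message itself is written.
    if "spf=fail" in auth:
        score += 3
    if "dkim=fail" in auth:
        score += 3
    # Links whose visible text claims one destination but point to another.
    for url, text in re.findall(r'<a href="([^"]+)">([^<]+)</a>', body):
        if text.startswith("http") and not url.startswith(text):
            score += 2
    return score

headers = {"Authentication-Results": "spf=fail; dkim=fail"}
body = '<a href="http://198.51.100.9/login">https://bank.example.com</a>'
print(phishing_score(headers, body))  # 8, well above any sensible threshold
```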
Exploitation:
Here, a SOC can use AI to establish which systems within a network are most vulnerable and/or prone to exploitation at any given time, and prioritise their protection accordingly.
By continuously gathering and assessing multiple threat intelligence variables, AI-driven defences can provide a better overall picture than relying solely on the knowledge of individuals looking for exploitation possibilities.
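The idea can be illustrated with a toy prioritisation score. The asset list and the formula below are assumptions for the sake of example; a real system would draw on live vulnerability scans and threat intelligence:

```python
# A minimal sketch of risk-based prioritisation. The asset list and the
# scoring formula are illustrative assumptions.
assets = [
    # (hostname, highest CVSS among open CVEs, internet-facing, days unpatched)
    ("web-frontend-01", 9.8, True, 14),
    ("hr-fileserver", 7.5, False, 30),
    ("dev-sandbox", 5.3, False, 90),
]

def risk(cvss: float, internet_facing: bool, days_unpatched: int) -> float:
    # Exposure multiplies severity; staleness adds steadily growing pressure.
    return cvss * (2.0 if internet_facing else 1.0) + days_unpatched / 30

for name, cvss, exposed, days in sorted(
    assets, key=lambda a: risk(a[1], a[2], a[3]), reverse=True
):
    print(f"{name}: risk score {risk(cvss, exposed, days):.1f}")
```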
Installation:
Security solutions that rely on AI can quickly establish the presence of malware and stop any progress it has managed to make. Furthermore, AI-based automation could even remove any malware that hackers have managed to install.
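As a toy stand-in for the far richer models real endpoint products use, the following sketch trains a classifier on two simple static file features. The features, training data and labels are invented for illustration only:

```python
# A minimal sketch of a classifier over simple static file features, standing
# in for the much richer models real endpoint products use. Training data and
# labels below are toy assumptions.
import math
from sklearn.ensemble import RandomForestClassifier

def entropy(data: bytes) -> float:
    """Shannon entropy in bits per byte; packed or encrypted payloads sit near 8."""
    if not data:
        return 0.0
    total = len(data)
    counts = (data.count(b) for b in range(256))
    return -sum(c / total * math.log2(c / total) for c in counts if c)

# Toy training set: [file size in KB, entropy] -> 1 = malicious, 0 = benign
X = [[120, 7.90], [340, 7.80], [80, 7.95], [150, 4.20], [600, 5.10], [90, 3.80]]
y = [1, 1, 1, 0, 0, 0]
clf = RandomForestClassifier(random_state=0).fit(X, y)

suspect = bytes(range(256)) * 4  # a high-entropy blob as a stand-in sample
features = [[len(suspect) / 1024, entropy(suspect)]]
print(clf.predict(features))  # [1] = flag for quarantine and review
```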
Command and Control:
Detection and prevention of C2 infrastructure is the key task for AI-powered security here.
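One well-known heuristic in this space is spotting “beaconing”: an implant calling home at suspiciously regular intervals. The sketch below is a simplified illustration of that single signal; real systems combine many such signals with learned models, and the threshold here is an assumption:

```python
# A minimal sketch of one C2 detection heuristic: flagging "beaconing",
# i.e. connections that recur at suspiciously regular intervals.
import statistics

def looks_like_beacon(timestamps: list[float]) -> bool:
    """Flag a connection series whose inter-arrival times are near-constant."""
    if len(timestamps) < 5:
        return False
    intervals = [b - a for a, b in zip(timestamps, timestamps[1:])]
    # Human-driven traffic is bursty; an implant checking in every N seconds
    # produces intervals with very low relative variation.
    return statistics.pstdev(intervals) / statistics.mean(intervals) < 0.1

# A host phoning home almost exactly every 60 seconds:
call_home_times = [0.0, 60.1, 119.9, 180.2, 240.0, 299.8]
print(looks_like_beacon(call_home_times))  # True
```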
Actions on Objectives:
A combination of the efforts outlined in the previous phases makes it possible for security teams to identify and defuse the attacker's efforts and thus hinder their ability to act on objectives.
Hopefully it’s become clear that many of the same applications of AI that can advance cybercriminality can also stop it — or at least mitigate the damage it causes.
The important thing to remember here is that, as with many technologies, it’s not the technology itself that poses a problem, but rather what it’s being used for.
The security landscape is changing fast, but your organisation doesn't have to navigate this new territory on its own. Get in touch with us for assistance in developing your security posture by clicking the button below.