There has never been a scarcity of warnings about the risks of artificial intelligence. Movies, TV shows, and tech sites have long warned that AI may do more harm than good. This dual nature of AI is perhaps best illustrated by its relationship with cybersecurity.
Artificial intelligence can help address cyber threats, but it can also become inimical to cybersecurity. These two have an unusual dynamic that merits scrutiny in view of the expanding use cases of AI and the continuously evolving threat landscape.
It wasn’t long ago that ChatGPT was introduced to the public, and it quickly drew critical comments from the cybersecurity community. Cyber defense pundits noted that ChatGPT could become a tool for threat actors, used to quickly write malicious software for various kinds of attacks.
The creators of ChatGPT responded to the criticism and introduced safeguards to prevent the AI tool from readily aiding malware generation. With the release of GPT-4, there was an expectation that these safeguards had been bolstered. However, a study by one security firm shows that GPT-4 can still be manipulated into producing malicious code.
The study reveals that GPT-4 initially refused to generate code when asked to build malware. However, once the word “malware” was removed from the request, the chatbot complied.
GPT-4 is also reportedly capable of producing a PHP reverse shell, a script that gives an attacker remote access to a device. With that access, a threat actor can abuse the device’s functions to download and install malware.
Essentially, AI is helping threat actors accelerate their attacks and reach more potential victims. Cyber threats have already been rapidly growing and increasingly aggressive without artificial intelligence in the picture. With AI, ChatGPT in particular, cybercriminals are able to speed up the writing of malicious software.
Additionally, AI makes it possible even for those with limited technical know-how to churn out malicious code or applications and undertake cybercriminal activities. They can skip learning how to code altogether, since AI can write the malware for them.
Just because artificial intelligence is used as a tool by cybercriminals does not mean it is allied with the enemy. Cybersecurity teams can enlist it to bolster cyber threat detection, mitigation, and prevention.
Many leading cybersecurity platforms now employ AI or machine learning to correlate and contextualize security alerts and other information. Doing so helps tame the information overload that leaves security teams wading through a deluge of alerts and notifications.
AI can analyze all relevant security information to come up with a priority queue. This queue ensures that the most urgent and crucial concerns are addressed promptly, not buried under tons of irrelevant alerts, false positives, and low-priority notifications.
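To make the priority-queue idea concrete, here is a minimal sketch of how alert triage might be scored and ordered. The severity weights, context signals, and field names are hypothetical illustrations, not the scoring model of any particular platform, and real systems typically learn such weights from data rather than hard-coding them.

```python
import heapq

# Hypothetical severity weights; real platforms learn these from data.
SEVERITY = {"critical": 100, "high": 75, "medium": 40, "low": 10}

def score(alert):
    """Combine base severity with simple context signals into one priority score."""
    s = SEVERITY.get(alert["severity"], 0)
    if alert.get("asset_critical"):            # alert touches a crown-jewel asset
        s += 30
    s += 10 * alert.get("correlated_events", 0)  # part of a larger attack pattern
    if alert.get("known_false_positive"):      # demote noise so it doesn't bury real issues
        s -= 60
    return s

def prioritize(alerts):
    """Return alerts ordered from most to least urgent using a max-priority queue."""
    heap = [(-score(a), i, a) for i, a in enumerate(alerts)]
    heapq.heapify(heap)
    return [heapq.heappop(heap)[2] for _ in range(len(heap))]

alerts = [
    {"id": "A1", "severity": "low"},
    {"id": "A2", "severity": "high", "asset_critical": True},
    {"id": "A3", "severity": "medium", "known_false_positive": True},
    {"id": "A4", "severity": "critical", "correlated_events": 3},
]
print([a["id"] for a in prioritize(alerts)])  # → ['A4', 'A2', 'A1', 'A3']
```

The point of the sketch is the ordering discipline: correlated, high-impact alerts float to the top of the queue while likely false positives sink, which is what keeps urgent concerns from being buried.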
Additionally, AI is useful when it comes to attack simulation. One of the best tools of modern cybersecurity is the simulation of breaches and attacks to test the integrity of the cyber defenses of an organization and determine the areas that require fixing or improvement. Artificial intelligence can help automate many tasks in the security validation process, ensure that every security control is tested, and ascertain that all possible attack surfaces are examined.
Cybersecurity researchers from the Pacific Northwest National Laboratory (PNNL) of the United States Department of Energy explored the possibility of developing an attack simulation environment. This simulation platform is designed to make it possible to evaluate multi-stage attack situations with unique or novel kinds of threats. It enables the comparison of the effectiveness of various AI-based cyber defense methods.
The team’s research paper shows that deep reinforcement learning (DRL) is effective at stopping attacks from achieving their goals, with a success rate as high as 95 percent. DRL handled sophisticated cyber attacks well in a complex simulation setting, demonstrating autonomous AI’s potential for establishing a proactive cyber defense.
The team tested their idea of an autonomous cyber defense framework using OpenAI Gym and the MITRE ATT&CK framework. They developed a bespoke, controlled simulation environment that launched various attack tactics and techniques. The simulations represent all stages of an attack, starting with reconnaissance, proceeding through attack execution and persistence, and continuing with evasion, command and control, data gathering, and exfiltration. The team found that DRL performs well in preventing attacks from reaching their endgame.
DRL is a form of AI that brings together the advantages of deep learning and reinforcement learning to allow agents to autonomously come up with decisions based on unstructured input data without the need for manual configurations. In other words, it enables the creation of an autonomous cyber defense framework that can keep up with the changes in cyber threats.
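The reinforcement-learning half of that idea can be illustrated with a toy example. The sketch below uses tabular Q-learning, the simpler ancestor of DRL (which replaces the table with a deep neural network), on a hypothetical four-stage attack chain loosely inspired by the stages above. The environment, stage names, and rewards are invented for illustration and are not PNNL's actual simulation platform.

```python
import random

random.seed(0)

# Hypothetical toy attack chain: the defender observes the stage an intrusion
# has reached and chooses to keep monitoring or to block it.
STAGES = ["recon", "execution", "persistence", "exfiltration"]
MONITOR, BLOCK = 0, 1

def step(stage, action):
    """Return (next_stage, reward, done) for the toy environment."""
    if action == BLOCK:
        return None, -1.0, True      # blocking stops the attack at a small cost
    if stage == len(STAGES) - 1:
        return None, -10.0, True     # attacker exfiltrates data: large penalty
    return stage + 1, 0.0, False     # monitoring lets the attack advance

# Tabular Q-learning; DRL would replace this table with a deep network.
Q = [[0.0, 0.0] for _ in STAGES]
alpha, gamma, epsilon = 0.5, 0.9, 0.2

for _ in range(2000):
    stage, done = 0, False
    while not done:
        # Epsilon-greedy: mostly exploit the best known action, sometimes explore.
        if random.random() < epsilon:
            action = random.randrange(2)
        else:
            action = max((MONITOR, BLOCK), key=lambda a: Q[stage][a])
        nxt, reward, done = step(stage, action)
        target = reward if done else reward + gamma * max(Q[nxt])
        Q[stage][action] += alpha * (target - Q[stage][action])
        stage = nxt

policy = ["block" if Q[s][BLOCK] > Q[s][MONITOR] else "monitor"
          for s in range(len(STAGES))]
print(dict(zip(STAGES, policy)))
```

With these rewards the agent learns, without any hand-written rules, to keep watching the intrusion for intelligence and block just before exfiltration, which is the "autonomous decisions from raw experience" property that makes DRL attractive for cyber defense.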
Deep reinforcement learning, however, is not aimed at replacing human cybersecurity experts. While it may help create an autonomous cybersecurity system, it does not supplant human decision-making. Instead, it serves as a tool for cybersecurity experts to make more informed, insight-driven decisions. It helps analysts develop plans or frameworks for sequential decision-making as they deal with threats or attacks, and teams can harness it to anticipate threats and take preemptive steps to stop attacks.
Artificial intelligence is, without a doubt, morally neutral in itself: it is its user that makes it good or bad. Threat actors can use it to expeditiously produce malware and discover vulnerabilities. At the same time, cybersecurity experts can utilize it to improve their defenses. It can analyze mountains of security data to ensure that the most urgent alerts are promptly addressed, and it helps build powerful attack simulations that support cybersecurity professionals in their decision-making and in anticipating attacks.
There is no dilemma here. Nobody can stop the development of AI technology, and it makes little sense to try. Those who care about cybersecurity can instead bring AI to their side by taking advantage of new cybersecurity solutions that incorporate it.