AI and Cybersecurity: Balancing Innovation with Caution



By Aaron Bugal, Field CTO APJ, Sophos

Undoubtedly one of the most influential technologies of recent decades, the ascent of artificial intelligence has produced a mixture of reactions from individuals, organisations, and countries. Eyes widen as we explore its potential, concerns grow as it threatens jobs, and conversations take place at a global level on how it should be regulated. For cybersecurity professionals, however, artificial intelligence presents a double-edged sword.

Although AI has shown the ability to enhance cybersecurity solutions with its pattern recognition, summarisation, and assistance capabilities, it also opens the door for threat actors to harness the technology in far more sinister ways. So, in a world where we are in a constant race to out-innovate cybercriminals, what impact will AI have, especially as the technology itself continues to evolve?

New technologies mean new threats

Cybercriminals have proven they shouldn’t be underestimated. They are continually updating their tactics, strategies, and tools to breach businesses, and AI only strengthens their arsenal.

AI has commonly been used to help threat actors better imitate real people – altering voices, pictures, and messages to carry out convincing phishing attacks.

Beyond mimicking human behaviour, cybercriminals have begun to experiment with AI at a more technical level. Malicious GPTs have been advertised on cybercriminal marketplaces, with functions such as automated penetration testing or malware development. However, much like legitimate industries and businesses, cybercriminals still show some hesitance when it comes to implementing the technology into their operations, with threat actors mainly exploring generative AI in the context of experimentation and proof-of-concepts.

This does not mean organisations should see it as a sign to slow down, as artificial intelligence will inevitably become a regular feature of cyberattacks. Instead, businesses should be evaluating whether they are using the technology in a secure and optimal way within their cybersecurity setup.

AI adoption is not about being first, but being smart

Businesses of all sizes are examining how AI can be used, with Sophos finding 98 per cent of organisations are using it within their cybersecurity infrastructure in at least some capacity. Further to this, 65 per cent of organisations use cybersecurity solutions that include generative AI capabilities, and 73 per cent use solutions that include deep learning models.

While AI adoption in cybersecurity can bring many advantages, it also introduces a number of risks if approached incorrectly. A poorly implemented AI model can inadvertently introduce considerable cybersecurity risks of its own: if it isn't provided with the right inputs, it cannot produce adequate outcomes. Organisations are alert to this risk, with the vast majority (89 per cent) of cybersecurity professionals saying they are concerned about how potential flaws in cybersecurity tools' generative AI capabilities could harm their organisation, and 43 per cent saying they are extremely concerned.

This alertness must also extend to AI implemented in non-cybersecurity tools, as emerging technologies carry risks in their infancy. Agentic AI, for example, has become highly topical, but will a technology that learns from humans be able to adequately defend itself from cyber threats? At its current level of maturity, AI should be approached as serving a single, well-defined purpose; expecting an individual system or 'AI agent' to do everything with minimal human oversight invites risk.

Therefore, an organisation's artificial intelligence advances – both within its cybersecurity infrastructure and across its entire technology stack – must be made with guardrails up and thorough oversight.

Fighting fire with fire without getting burnt

In an ongoing race against cybercriminals, artificial intelligence will act as a multiplier for innovation on both sides. For businesses, avoiding the risks of AI within cybersecurity systems is possible when implementation is approached with care. This can be achieved through:

  • Inquiring about vendors' AI capabilities: AI requires transparency. Asking cybersecurity vendors how their AI models are trained, what AI expertise their teams have, and what their rollout process is for deploying AI capabilities will help paint a clearer picture of AI development best practices.
  • Setting strict parameters for AI investment: AI investment cannot be rushed, so it is important to assess whether AI provides the best solution for current cybersecurity challenges, prioritise specific AI investments, and measure the impact of AI once it is implemented into cybersecurity infrastructure.
  • Remaining human-first in AI adoption: Organisations should never take a set-and-forget approach to cybersecurity, and this is even more the case when AI is involved. Ultimately, cybersecurity is a human responsibility, and AI should be used as an accelerant to support cybersecurity professionals, not as a replacement.

Artificial intelligence will be a mainstay within organisations for many years to come, and cybersecurity is no different. With such high stakes, however, it is vital that AI is used correctly, or it will work against its intended purpose, giving cybercriminals the leg up over organisations in this ongoing battle. It is not about implementing a range of AI capabilities to expand your cybersecurity infrastructure, but about implementing the right capabilities that address your cybersecurity needs.