Veracity Blog

The impact of AI in cyber security

The UK’s National Cyber Security Centre (NCSC) has evaluated and addressed the potential threats and risks associated with Artificial Intelligence (AI) in cyber security. 

Following the Bletchley Park AI Safety Summit in 2023, the UK and a number of other countries signed a declaration that, for the good of all, AI should be designed, developed, deployed and used in a manner that is safe, human-centric, trustworthy and responsible.

The NCSC released guidelines for providers of systems that use any form of AI, including AI in cyber security, in the hope that they will help providers build AI systems that function as intended, are available when needed, and work without revealing sensitive data to unauthorised parties.

What is AI in cyber security? 

AI in cyber security uses AI to analyse and collect event and cyber threat data across multiple sources, turning it into actions which can be used by security professionals for further investigation, response and reporting. 

Here at Veracity Trust Network, we use a form of AI called Machine Learning (ML) to combat the ever-growing threat of malicious bots and cybercriminals deploying AI as a means of accessing personal data and hijacking business websites for financial gain.

ML uses algorithms to identify patterns and trends in data, and then uses those to make predictions and decisions based on what it has “learnt”. It can be used to build predictive models, classify data and recognise patterns, and is an essential component of many AI applications.

One of the main advantages of ML, and of gradient boosting specifically, is its ability to improve accuracy and precision across a variety of tasks.

This makes it particularly effective in cyber security deployment as models can process vast amounts of data and identify patterns that might be overlooked by humans.  

By identifying trends, correlations and anomalies, ML helps businesses and organisations with fraud detection. In cyber security, ML algorithms analyse network traffic patterns to identify unusual activities indicative of cyber-attacks.
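As a simplified illustration of the kind of supervised detection described above, a gradient-boosted classifier can be trained to separate ordinary traffic from bot-like traffic. The features, thresholds and synthetic data here are hypothetical for the sake of the sketch, not Veracity's actual model:

```python
# Sketch: gradient-boosted classification of network traffic.
# Features (requests per minute, session length) are illustrative only.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)
n = 500

# Synthetic data: "human" sessions are slower and longer than "bot" sessions.
human = np.column_stack([
    rng.normal(20, 5, n),    # requests per minute
    rng.normal(300, 60, n),  # session length in seconds
])
bot = np.column_stack([
    rng.normal(90, 10, n),
    rng.normal(40, 15, n),
])
X = np.vstack([human, bot])
y = np.array([0] * n + [1] * n)  # 0 = human, 1 = bot

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0, stratify=y)

model = GradientBoostingClassifier().fit(X_train, y_train)
accuracy = model.score(X_test, y_test)
```

In practice the value comes from feeding the model many more signals than a human analyst could weigh at once, which is exactly the pattern-spotting advantage described above.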

The history of AI in cyber security 

AI has been used by the security community since at least the late 1980s.  

An article on Microsoft’s Security blog notes that the following key technology advancements took place: 

  • In the beginning, security teams used rules-based systems that triggered alerts based on parameters they defined. 
  • Starting in the early 2000s, advances in machine learning, a subset of AI that analyses and learns from large data sets, have allowed operations teams to understand typical traffic patterns and user actions across an organisation, and to identify and respond when something unusual happens.
  • The most recent improvement in AI is generative AI, which creates new content based on the structure of existing data. People interact with these systems using natural language, allowing security professionals to dive deep into very specific questions without using query language. 

Back in 2016, researchers were already discussing the potential of AI and ML techniques and their application in cyber security.

In an article in The Guardian, Duncan Hodges, then a lecturer at the Centre for Cyber Security and Information Systems at Cranfield University, said: “AI is an approach that turns problems around. Rather than creating something to solve a problem, we create something that learns how to (solve a problem).” 

The article also points out the potential growth of deep learning and the need to make sure the values on which computers base their decisions are ethical.

Threats from AI to cyber security 

NCSC Assessment (NCSC-A) is the authoritative voice on the cyber threat to the UK. It recently published an assessment focusing on how AI will impact the efficacy of cyber operations and the implications for the cyber threat over the next two years.

Among the key judgements found were the following of concern: 

  • All types of cyber threat actor – state and non-state, skilled and less skilled – are already using AI, to varying degrees; 
  • More sophisticated uses of AI in cyber operations are highly likely to be restricted to threat actors with access to quality training data, significant expertise (in both AI and cyber), and resources. More advanced uses are unlikely to be realised before 2025; 
  • AI will almost certainly make cyber-attacks against the UK more impactful because threat actors will be able to analyse exfiltrated data faster and more effectively, and use it to train AI models; 
  • AI lowers the barrier for novice cyber criminals, hackers-for-hire and hacktivists to carry out effective access and information gathering operations. This enhanced access will likely contribute to the global ransomware threat over the next two years. 

The assessment does not address the cyber security threat to AI tools, nor the cyber security risks of incorporating them into system architecture. However, it also makes clear that AI will be used as a means of combating rising threats:

“The impact of AI on the cyber threat will be offset by the use of AI to enhance cyber security resilience through detection and improved security by design. More work is required to understand the extent to which AI developments in cyber security will limit the threat impact.” 

The FBI also issued a warning last December which revealed that cyber criminals were using GenAI to create documents and images and sharing them with victims to trick them into thinking they were speaking to a real person, rather than a criminal. 

Ways to deal with the threat of GenAI 

The UK Government has also published a set of guidelines for the ethical use of AI. Called the Code of Practice for the Cyber Security of AI, it is a two-part intervention that will be used to help create a global standard in the European Telecommunications Standards Institute (ETSI) setting baseline security requirements.

Generative AI tools are proliferating and are already being used to impersonate, clone and deceive people and systems.

The USA’s National Security Agency issued introductory guidance on Content Credentials in January this year, which is endorsed by the NCSC and other international cyber security partners.

In the document, it advises the use of content provenance solutions, which aim to establish the lineage of media, including its source and editing history over time. Content Credentials allow metadata to be attached to media content during export from software, or even at creation on hardware, making it easier to trace its source and legitimacy.

To facilitate global accessibility, Content Credentials have built-in functionality to work offline, such as the ability to copy certificates to an enclave. The C2PA technical specification describes the format of Content Credentials and the assertions that they contain about a media item’s provenance. 
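The core idea behind provenance records of this kind can be sketched in a few lines. This is not the C2PA format itself, just a simplified illustration of binding metadata and an edit history to a cryptographic hash of the media bytes, so that any later modification invalidates the record (the field names and helper functions are hypothetical):

```python
# Simplified sketch of content provenance (NOT the real C2PA format):
# bind source metadata to a hash of the media bytes, so that editing
# the content breaks the recorded credential.
import hashlib

def make_credential(media_bytes: bytes, source: str, actions: list) -> dict:
    """Build a hypothetical provenance record for a piece of media."""
    return {
        "content_hash": hashlib.sha256(media_bytes).hexdigest(),
        "source": source,
        "edit_history": actions,
    }

def verify_credential(media_bytes: bytes, credential: dict) -> bool:
    """Check the media still matches the hash stored in its credential."""
    return hashlib.sha256(media_bytes).hexdigest() == credential["content_hash"]

original = b"example image bytes"
cred = make_credential(original, source="camera-01", actions=["captured"])
```

Real Content Credentials add cryptographic signatures and a standardised manifest format on top of this basic hash-binding idea, which is what makes them verifiable offline.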

Effective development of watermarking and provenance standards could raise the bar for criminals and state actors seeking to exploit inauthentic data or media in their cyber-attacks. 

The NCSC will be exploring this topic in more detail, as improving the integrity of online information is a major part of making the UK a safer place to live and work online. 

Find out how Veracity’s Web Threat Protection can stop criminals gaining access to your vital business data.  

Our 14-day assessment reveals both the volume of bot traffic and highlights the effectiveness of our AI-enabled bot-blocking capabilities. Unlike traditional approaches, Veracity defends against malicious bots that can mimic human behaviour.  

Stay ahead of potential threats, safeguard your online assets, and avoid devastating cyber-attacks. 

https://veracitytrustnetwork.com/pricing-enterprise-getintouch/  

Award-winning malicious bot protection.

Cyber Award Winner 2021

AI-Enabled Data Solution of the Year – DataIQ Awards 2023 Finalist

Tech Innovation of the Year Winner – Leeds Digital Festival Awards

Cyber Security Company of the Year – UK Business Tech Awards 2023 Finalist

Best Use of AI – Tech Awards 2023 – Highly Commended

UK’s Most Innovative Cyber SME 2024 – Runner Up