The potential for AI in the UK
With machine learning embedded across industry – and the disruptive potential of quantum computing on the horizon – where are we with AI in the UK?
We have seen huge breakthroughs in the use and application of artificial intelligence in recent years, but there are also major concerns about its future impact – witness the 2023 SAG-AFTRA and Writers Guild of America strikes, which were partly about securing rules for how AI-generated content can be used.
With machine learning or other types of AI involved in pretty much every industry in some form or another, there are a number of key factors to take into account when discussing the future of AI in the UK.
Google’s impact on AI in the UK
Google’s Economic Impact Report for 2023 set out to understand the potential impact of AI on the UK’s economy.
Compiled by Public First, the report revealed how Google’s tools — such as Search, Maps, Workspace, Cloud, Play and Android — will create an estimated £118 billion in economic value in the UK in 2023.
It also highlights how, under the right conditions, AI-powered innovation could generate more than £400 billion in economic value for the UK economy by 2030.
Debbie Weinstein, Vice President of Google and Managing Director of Google UK & Ireland, said: “AI is the most profound technology that humanity is working on today. It’s a critical part of solving big societal challenges, from tackling climate change to developing new personalised medicines.
“That’s why it’s so inspiring that some of the early seeds of this extraordinary technology were sown right here in the UK. From British mathematician Alan Turing in the 1950s, to the team at Google DeepMind’s work on protein folding today.”
Driving growth through digital technology
The UK is home to world-leading academic institutions, an enviable startup ecosystem and millions of businesses, large and small, using digital to drive growth.
Turing’s landmark research paper on “machines that think” was published in 1950 and The Alan Turing Institute, headquartered in the British Library in London, was created as the UK’s national institute for data science in 2015.
Since 2017, as a result of a government recommendation, artificial intelligence has also been part of its remit: one of national leadership, providing focus on UK priorities for the public good and supporting the UK’s ambition to be a global leader in data science and AI.
One of the key challenges in the development of data science and AI is that the field continually evolves in unpredictable ways. The Turing Institute hopes that by providing an end-to-end, interdisciplinary pathway in data science and AI – one that enables impact at scale and drives major progress against societal challenges – it can help address that unpredictability.
The UK Government has been proactive in the field of AI, consulting on a White Paper – A pro-innovation approach to AI regulation – earlier this year.
A National AI Strategy document was also published in 2021 which set out steps for how the UK will begin its transition to an AI-enabled economy, the role of research and development in AI growth, and the governance structures that will be required.
The strategy document highlights the UK’s position as a global superpower in AI, adding that it “… is well placed to lead the world over the next decade as a genuine research and innovation powerhouse, a hive of global talent and a progressive regulatory and business environment.”
Many of the UK’s successes in AI were supported by the 2017 Industrial Strategy, which set out its vision to make the UK a global centre for AI innovation. In April 2018, the Government and the UK’s AI ecosystem agreed a near £1 billion AI Sector Deal to boost the UK’s global position as a leader in developing AI technologies.
AI as a threat?
AI has been officially classed as a security threat to the UK for the first time following the publication of the National Risk Register (NRR) 2023.
The Rt Hon Oliver Dowden, Deputy Prime Minister, Chancellor of the Duchy of Lancaster and Secretary of State for the Cabinet Office, wrote in the foreword: “Technologies such as artificial intelligence (AI) are transforming our world – bringing with them opportunities, but also a number of risks.
“This is the most comprehensive risk assessment we’ve ever published, so that Government, and our partners, can put robust plans in place and be ready for anything.”
The NRR describes AI as a “chronic risk”, meaning it poses a threat over the long term, as opposed to an acute one such as a terror attack. The Government also raised its assessment of cyber-attacks from limited impact to moderate impact.
The UK Government has committed to hosting the first global summit on AI Safety which will bring together key countries, leading tech companies and researchers to agree safety measures to evaluate and monitor risks from AI.
Dan Lohrmann, an internationally recognised cybersecurity leader, writing for CSO Online, said: “It’s almost impossible to keep up with the growing list of generative AI tools released and updated in 2023. What I’m concerned about is not the variety, productivity gains or other numerous benefits of GenAI tools. Rather, it’s whether these new tools now serve as a type of Trojan Horse for enterprises.
“Are end-users taking matters into their own hands by using these apps and ignoring policies and procedures on the acceptable use of non-approved apps in the process? I believe the answer for many organisations is yes.”
According to new research from BlackBerry, three-quarters of global businesses are currently implementing or considering bans on ChatGPT and other generative AI applications within the workplace, with risks to data security, privacy, and corporate reputation driving decisions to act.
BlackBerry’s findings draw from a survey of 2,000 IT decision makers across North America (USA and Canada), Europe (UK, France, Germany, and the Netherlands), Japan, and Australia.
Founded in 1984 as Research In Motion (RIM), BlackBerry is now a leader in cybersecurity, helping businesses, government agencies, and safety-critical institutions of all sizes secure the Internet of Things (IoT).
AI at Veracity Trust Network
Veracity Trust Network currently uses machine learning within its bot detection system. At its core is a rules-based algorithm which, although it works well, is static: it cannot adapt to new advances in malicious bot technology unless the algorithm itself is changed.
According to Veracity’s Head of Data, Reuben Sodhi, this is a problem because bots are growing more sophisticated across the cybersecurity industry as a whole – both good bots and bad.
“AI is a dynamic technology and can change its behaviour by being given new data to learn from. Veracity Trust Network is currently building an AI algorithm that classifies traffic as either human or malicious bot,” he added.
This will then be used as part of the Web Protection and Ad Threat Protection services offered by the Veracity Trust Network and the AI will be able to change its behaviour over time to keep up with advances in malicious bot behaviour.
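The contrast described above – a static rule that must be rewritten by hand versus a classifier that adapts by retraining on fresh data – can be sketched in a few lines. Note this is an illustrative sketch only: the feature names (request rate, mouse movement), thresholds and model choice are hypothetical assumptions for the example, not Veracity Trust Network’s actual detection logic.

```python
import math

# Hypothetical static rule: flags traffic exceeding fixed thresholds.
# Changing its behaviour means editing the rule itself.
def rule_based_is_bot(requests_per_min, mouse_moves):
    return requests_per_min > 100 and mouse_moves == 0

class AdaptiveBotClassifier:
    """Tiny logistic-regression classifier trained by gradient descent.
    Retraining on newly labelled traffic lets it track new bot behaviour
    without anyone rewriting the decision logic by hand."""

    def __init__(self, n_features, lr=0.1):
        self.w = [0.0] * n_features  # one weight per traffic feature
        self.b = 0.0                 # bias term
        self.lr = lr                 # learning rate

    def predict_proba(self, x):
        # Weighted sum of features squashed to a 0..1 bot probability.
        z = sum(wi * xi for wi, xi in zip(self.w, x)) + self.b
        return 1.0 / (1.0 + math.exp(-z))

    def fit(self, X, y, epochs=200):
        # Stochastic gradient descent on labelled traffic samples
        # (label 1 = malicious bot, 0 = human).
        for _ in range(epochs):
            for x, target in zip(X, y):
                err = self.predict_proba(x) - target
                self.w = [wi - self.lr * err * xi
                          for wi, xi in zip(self.w, x)]
                self.b -= self.lr * err

    def is_bot(self, x):
        return self.predict_proba(x) > 0.5

# Toy training set: [scaled request rate, mouse-movement score].
# High request rate with no mouse activity is labelled as bot here.
X = [[2.0, 0.0], [1.5, 0.0], [0.1, 1.0], [0.2, 1.0]]
y = [1, 1, 0, 0]
clf = AdaptiveBotClassifier(n_features=2)
clf.fit(X, y)
```

When bot behaviour shifts, the static rule needs a developer to change it, while the classifier only needs a fresh batch of labelled traffic passed to `fit` – which is the adaptability the article attributes to the AI approach.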
Veracity Trust Network safeguards organisations from the threat of bot attacks, through its deep tech machine-learning solutions which address Security, Fraud and Ad Tech.
The award-winning technology* is applicable to any business operating a website: it blocks a wide range of bot attacks, preserving website performance while optimising infrastructure costs and security resources.
Start protecting your website and ad spend from bot attacks by booking a call now.
*Digital City Awards 2022: Innovation of the Year, Best Business Awards 2022: Best Innovation, Best Martech Innovation at Prolific North Tech Awards 2021, B2B Marketing Expo Innovation Award for Best Marketing Tool 2021, and the Tech Nation Rising Stars 3.0 Cyber Award 2021, as well as holding Verified by TAG status.