Veracity Blog

What happens now after the UK AI Summit?

One of the biggest takeaways from the UK AI Summit was the signing of a declaration establishing a shared understanding of the opportunities and risks posed by frontier AI. 

Almost 30 countries, including the UK, the US, Australia and China, along with the European Union, signed the Bletchley Declaration, which recognises the need for governments to work together to meet the most significant AI challenges. 

The Declaration fulfils key summit objectives: establishing shared agreement and responsibility on the risks and opportunities of frontier AI, and a forward process for international collaboration on frontier AI safety and research, particularly through greater scientific collaboration. 

As part of that forward process, the Republic of Korea has agreed to co-host a virtual mini-summit on AI within the next six months. France will then host the next in-person summit a year from now. 

UK Prime Minister Rishi Sunak said: “This is a landmark achievement that sees the world’s greatest AI powers agree on the urgency behind understanding the risks of AI – helping ensure the long-term future of our children and grandchildren.” 

In addition, countries and companies developing frontier AI have agreed a ground-breaking plan on AI safety testing, and the “Godfather of AI”, Yoshua Bengio, is to lead the delivery of a “State of the Science” report which will help build an understanding of the capabilities and risks posed by frontier AI. 

Professor Bengio is a Turing Award-winning AI academic and member of the UN’s Scientific Advisory Board. His report will provide a scientific assessment of existing research on the risks and capabilities of frontier AI, and set out the priority areas for further research to inform future work on AI safety. 

What is Frontier AI? 

The UK Government, at the AI Summit, described frontier AI as: “Highly capable general-purpose AI models that can perform a wide variety of tasks and match or exceed the capabilities present in today’s most advanced models.” 

This includes large language models (LLMs) such as those underlying ChatGPT, Claude and Bard, but is not restricted to only this type of generative AI. 

Frontier AI can perform a wide variety of tasks, is being augmented with tools to enhance its capabilities, and is being increasingly integrated into systems that can have a wide impact on the economy and society.  

“Although these models still have major limitations such as their factuality and reliability, their current capabilities are impressive, may be greater than we have been able to assess, and have appeared faster than we expected,” adds a report prepared for the AI Summit. 

Already frontier AI can, with varying degrees of success and reliability: 

  • Converse fluently and at length, drawing on extensive information contained in training data. 
  • Write long sequences of well-functioning code from natural language instructions, including making new apps. 
  • Score highly on high-school and undergraduate examinations in many subjects. 
  • Generate plausible news articles. 
  • Creatively combine ideas together from very different domains. 
  • Explain why novel sophisticated jokes are funny. 
  • Translate between multiple languages. 
  • Direct the activities of robots via reasoning, planning and movement control. 
  • Analyse data by plotting graphs and calculating key quantities. 
  • Answer questions about images that require common-sense reasoning. 
  • Solve maths problems from high-school competitions. 
  • Summarise lengthy documents. 

This image (below) from an article by Max Roser at Our World In Data – The brief history of artificial intelligence: The world has changed fast – what might be next? – shows how quickly frontier AI has developed in creating human images from learned data.  

None of the “people” shown in this image actually exist. 

Many AI experts believe there is a real chance that human-level artificial intelligence will be developed within the coming decades, and some think it will arrive much sooner. 

What are the risks of frontier AI? 

For the first time, senior government representatives from leading AI nations, and major AI organisations, have agreed a plan for safety testing of frontier AI models.  

The plan involves testing models – both pre- and post-deployment – and includes a role for governments in testing, particularly for critical national security, safety and societal harms. 

A new global hub, based in the UK and tasked with testing the safety of emerging types of AI, has also been backed. 

In the Bletchley Declaration, it’s stated: “We are especially concerned by such risks in domains such as cybersecurity and biotechnology, as well as where frontier AI systems may amplify risks such as disinformation.  

“There is potential for serious, even catastrophic, harm, either deliberate or unintentional, stemming from the most significant capabilities of these AI models.” 

Back in August, AI was officially classed as a national security threat to the UK for the first time following the publication of the National Risk Register (NRR) 2023. 

The extensive document details the various threats that could have a significant impact on the UK’s safety, security, or critical systems at a national level. The latest version describes AI as a “chronic risk”, meaning it poses a threat over the long-term, as opposed to an acute one such as a terror attack.  

The UK government also raised cyber-attacks from limited impact to moderate impact in the 2023 NRR. 

About Veracity Trust Network  


Founded in 2016, Veracity was formed with one intention: to fight the rise of malicious bot activity. 

Our technology began life as a tool to intelligently detect click fraud and save money for businesses using online advertising. Once it became clear that our AI-powered detection engine could do even more, and protect people from legitimately dangerous bot attacks and compromised data, we developed Veracity Web Threat Protection. 

Elegantly designed to mitigate everything from data theft attempts to advertising click fraud, our engine solves problems for multiple business functions, from security and finance to marketing, data analysis, customer experience and reputation management.  

It is award-winning technology* applicable to any business operating a website, and works to block a wide range of bot attacks, preserving website performance while optimising infrastructure costs and security resources. 

Start protecting your website and ad spend from bot attacks by booking a call now:   

https://veracitytrustnetwork.com/talk-to-us  

*Winner, ‘Tech Innovation of the Year 2023’, Leeds Digital Festival Awards; Highly Commended, ‘Best Use of AI 2023’, Prolific North Tech Awards; Shortlisted, ‘Cyber Security Company of the Year 2023’, UK Business Tech Awards; Winner, ‘Best Innovation 2022’, Best Business Awards; Shortlisted, ‘Innovation in Cyber 2022’, The National Cyber Awards; Shortlisted, ‘Emerging Technology of the Year 2022’, UK IT Industry Awards; as well as holding Verified by TAG status.
