Marketers need to be prepared for the EU’s AI Act

The European Union’s AI Act has now come into force, four years after it was first proposed – and marketers need to be prepared for it. 

Although the AI Act will not directly impact UK businesses’ domestic operations, the legislation will still govern how they operate in the EU – especially those that wish to trade within it. 

In an article on Verdict, Steve Lester, CTO of business services consultancy Paragon, said the EU AI Act will reshape how businesses operate, especially for UK companies engaging with the EU market. 

“Compliance with the Act’s requirements is not optional for businesses; it applies to any AI systems that affect EU citizens or markets,” he added. 

Businesses in the UK will have to clearly disclose when AI is being used and ensure that they are adhering to the Act’s guidelines in terms of targeting and personalisation. 

“The prohibitions on practices like biometric categorisation require a re-evaluation of existing AI strategies to align with ethical standards,” he added. 

Lester also believes UK companies should embark on a thorough audit of their AI systems, as well as investing in staff training on AI ethics. 

What’s happening with AI in the UK? 

The UK AI (Regulation) Bill, proposed by Lord Chris Holmes as a Private Member’s Bill, failed to make it through Parliament before the recent General Election.  

However, King Charles confirmed that legislation to regulate artificial intelligence would be coming under the UK’s new Labour government when he gave the King’s Speech at the State Opening of Parliament. 

“The government will seek to establish the appropriate legislation to place requirements on those working to develop the most powerful artificial intelligence models,” said King Charles, echoing previous statements from the Labour Party. 

The new government has already announced the expansion of The Department for Science, Innovation and Technology (DSIT), with Secretary of State Peter Kyle releasing a statement that said transforming public services and fuelling economic growth through science and technology would be the defining mission of a revamped department. 

In addition to driving forward a modern digital government, the Secretary of State will lead DSIT to accelerate innovation, investment and productivity through world-class science and research across the economy, as well as ensure technologies are safely developed and deployed across the country, with the benefits more widely shared. 

What are the key features of the AI Act? 

The AI Act prioritises safety, transparency, traceability, and ensuring AI models are non-discriminatory and environmentally friendly. 

Taking a risk-based approach, it classifies each application into three categories: unacceptable risk; high risk; and limited, minimal, or no risk. 

It places protections around the use of AI systems deemed by EU authorities to pose a potential threat to citizens’ privacy or safety, and lays down the following rules: 

  1. harmonised rules for the placing on the market, the putting into service, and the use of AI systems in the Union; 
  2. prohibitions of certain AI practices; 
  3. specific requirements for high-risk AI systems and obligations for operators of such systems; 
  4. harmonised transparency rules for certain AI systems; 
  5. harmonised rules for the placing on the market of general-purpose AI models; 
  6. rules on market monitoring, market surveillance, governance and enforcement; 
  7. measures to support innovation, with a particular focus on SMEs, including start-ups. 

The regulation is mainly concerned with systems that gather and process personally sensitive information. “Such high-risk AI systems include for example AI systems used for recruitment, or to assess whether somebody is entitled to get a loan, or to run autonomous robots,” the Commission said. 

How will this affect marketers? 

In practice, the new AI Act probably won’t have a noticeable effect on the daily lives of most marketing professionals, as it is aimed primarily at developers and AI providers. However, marketers do need to be aware of the new rules as “deployers” of AI if they use the functionality of systems like GPT-4o. 

Thomas Regnier, a spokesperson for the European Commission, the executive branch of the EU, said: “It’s very important to keep in mind that the obligations of the AI Act apply [mostly] to the AI providers.” 

“For marketing companies and all the other citizens, we want them to be able to use all the potential benefits of AI, but in the end the obligations to comply with the legislation is not really for the ones using these AI systems – it’s for the ones placing them on the market.” 

OpenAI, one of the biggest providers of large language models (LLMs) and other generative AI products, published a primer on its website which stated: “Importantly, the AI Act differentiates between providers and deployers of AI systems. Providers are entities, like OpenAI, that develop an AI system or a general-purpose AI model.” 

However, the company’s statement also added: “Although the majority of obligations under the AI Act fall on providers rather than deployers, it’s important to note that a deployer that integrates an AI model into their own AI system can become a provider under the Act, such as by using their own trademark on an AI system or modifying the AI system in ways that weren’t intended by the provider.”  

This is where marketers need to pay attention when using AI in any way for their ad campaigns and designs. The majority of the changes will revolve around transparency and copyright compliance, but there are some notable requirements, including disclosures for AI-driven interactions and content creation, particularly when creating “deepfakes”. 

Deepfakes are videos, pictures, or audio clips made with artificial intelligence to look like they are real. 

The AI Act explicitly states that deployers using AI to generate deepfake audio, images or video “shall disclose that the content has been artificially generated or manipulated”. 
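
As an illustration of what that disclosure could look like in practice, the sketch below stamps a plain-text notice onto an AI-generated image before it is published. This is a minimal example rather than a compliance tool; the file names, wording and placement are hypothetical assumptions, and it uses the Pillow imaging library for Python.

```python
# Minimal sketch: overlay a visible AI-disclosure notice on a generated image.
# Assumes the Pillow library is installed (pip install Pillow); file names are hypothetical.
from PIL import Image, ImageDraw

def add_ai_disclosure(input_path: str, output_path: str,
                      notice: str = "This image was generated or manipulated using AI") -> None:
    """Draw a plain-text disclosure notice near the bottom-left corner of an image."""
    image = Image.open(input_path).convert("RGB")
    draw = ImageDraw.Draw(image)
    x, y = 10, image.height - 20  # small margin from the bottom-left corner
    draw.text((x, y), notice, fill="white")
    image.save(output_path)

# Hypothetical usage on an AI-generated campaign asset.
add_ai_disclosure("campaign_banner_ai.png", "campaign_banner_ai_labelled.png")
```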

Businesses using any form of AI should carry out an audit to determine which risk category they fall into and then establish compliance guidelines for their use.
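
As a starting point for that kind of audit, the sketch below shows one way a team might keep a simple inventory of its AI use cases alongside the Act’s broad risk tiers. The systems, roles and tier assignments here are hypothetical examples for illustration, not legal classifications.

```python
# Minimal sketch: an inventory of AI use cases mapped to provisional risk tiers.
# Entries and tier assignments are hypothetical; real classification needs legal review.
from dataclasses import dataclass
from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "unacceptable risk"
    HIGH = "high risk"
    LIMITED_MINIMAL = "limited, minimal, or no risk"

@dataclass
class AIUseCase:
    name: str               # what the system does
    provider: str           # who places the system on the market
    deployer_role: str      # how the business uses it
    assumed_tier: RiskTier  # provisional tier pending legal review

inventory = [
    AIUseCase("Ad copy generation", "third-party LLM provider",
              "drafting marketing content", RiskTier.LIMITED_MINIMAL),
    AIUseCase("CV screening assistant", "in-house",
              "recruitment shortlisting", RiskTier.HIGH),
]

for use_case in inventory:
    print(f"{use_case.name}: {use_case.assumed_tier.value}")
```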

Also quoted in an article on Verdict, Jacob Beswick, director of AI governance solutions at AI company Dataiku, said there are a number of steps UK businesses should take over the next 18 months to ensure they are fully prepared.  

“As one of the most comprehensive pieces of AI regulation to be passed to date, preparing for compliance is both a step into the unknown as well as an interesting bellwether as to what might be to come in terms of AI-specific regulatory obligations across the globe,” Beswick said. 

He added: “Determining exposure to future compliance obligations will enable businesses to begin taking action to mitigate the risk of non-compliance and avoid disruptions to business operations, whether through fines or pulling operational systems from the market.” 

The full legislation for the AI Act can be found here: 

https://artificialintelligenceact.eu/ai-act-explorer 

 
