AI in cybercrime is a growing threat
AI has become a powerful new weapon for scammers, and many people are falling for sophisticated schemes built on stolen identity data.
A PwC report, produced in collaboration with Stop Scams UK and published last December, found that identifying the scale of AI use by scammers is difficult. Voice cloning, in particular, has become a growing concern for the authorities when it comes to ransom scams.
An article in the Financial Times gives details of one man who thought his daughter had been kidnapped after he received a “phone call from her” saying she was going to be hurt if he didn’t give the kidnappers money.
It was only when he received a legitimate text from his daughter as he was heading to the bank to withdraw the money that he realised it was a scam.
“Scammers are already very successful, and it could be that they just don’t need to use this type of tech, or it could be that they are using AI and we just aren’t able to distinguish when it has been used,” said Alex West, one of the authors of the report.
And, according to TSB’s head of fraud risk Steve Cornwell, the rising sophistication of voice technology and AI is a major worry for banks.
He added: “If you think of the way Generative AI is coming along, how long [is it] before that AI solution could have a real-time conversation with you [using] a synthetic voice?”
Data from Cifas, the UK’s leading fraud prevention service, also gives cause for concern.
Statistics from 2022 show identity fraud rose by nearly a quarter, while reports of AI tools being used to try to fool banks’ systems increased by 84 per cent. In its latest Fraudscape report, Cifas CEO Mike Haley said: “The fraud trends we identified in 2023 continue into 2024, with an increased risk of identity theft, first party fraud and internal fraud.”
Identity fraud remains the dominant case type recorded in the National Fraud Database, accounting for 64 per cent of filings (237,642 cases).
“We’re seeing an increased use of deepfake images, videos and audio being used during application processes, along with synthetic identities being identified as a result of ‘liveness’ checks that are now being carried out at the application stage,” said Stephen Dalton, director of intelligence at Cifas.
Natalie Kelly, chief risk officer for Visa Europe, said there was a growing number of criminal-focused systems, such as WormGPT, FraudGPT and DarkBART, adding: “It can be hard to tell the authentic from the artificial these days.”
UK government pledges to tackle online fraud
Back in November 2023 the UK government and some of the world’s biggest tech companies agreed to a series of pledges to protect the public from online fraud.
With fraud being the most common crime in the UK, the government joined forces with leading tech companies – Amazon, eBay, Facebook, Google, Instagram, LinkedIn, Match Group, Microsoft, Snapchat, TikTok, X (Twitter) and YouTube – to develop and commit to the Online Fraud Charter, the first agreement of its kind in the world.
As part of the agreement, all parties are committed to bringing in a raft of measures to help protect people from fraud and scam content when using their sites.
Actions include verifying new advertisers and promptly removing any fraudulent content. There will also be increased levels of verification on peer-to-peer marketplaces, and people using online dating services will have the opportunity to prove they are who they say they are.
The charter will also be supported by tough action to crack down on illegal adverts and ads for age-restricted products, such as alcohol or gambling, being seen by children.
An action plan agreed by the Online Advertising Taskforce was also published, setting out the steps both industry and the government are taking to tackle harms and increase protection for children.
It includes developing an evidence base, improving information sharing and promoting industry-wide best practice.
US Treasury issues warning to financial sector
Earlier this year the US Treasury Department issued a report, Managing Artificial Intelligence-Specific Cybersecurity Risks in the Financial Services Sector, aimed at helping banks and other financial institutions address the emerging AI threat.
The 52-page report covers cybersecurity and fraud protection; fraud threats; the regulatory landscape; and major challenges and opportunities. It also includes a section on best practices:
- Include AI risk management within your organisation’s broader enterprise risk management program;
- Develop and implement an AI risk management framework tailored specifically for your organisation and its use cases;
- Add AI-specific questions to your vendor risk-management questionnaire and processes to assess AI risks and issues, such as:
  - AI technology integration;
  - data privacy;
  - data retention policies;
  - AI model validation;
  - AI model maintenance.
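As an illustration only, the vendor questionnaire items above could be captured as a simple checklist structure with a completeness check. This is a hypothetical sketch; the question IDs and wording are assumptions, not taken from the Treasury report:

```python
# Hypothetical AI vendor risk questionnaire encoded as a data structure.
# Question IDs and wording are illustrative, not the Treasury's own text.
AI_VENDOR_QUESTIONS = {
    "integration": "How is the AI technology integrated into your product?",
    "data_privacy": "What personal data does the model process, and under what controls?",
    "data_retention": "What are your retention and deletion policies for training data?",
    "model_validation": "How is the model validated before deployment?",
    "model_maintenance": "How are model updates, retraining and drift monitored?",
}

def unanswered(responses: dict) -> list:
    """Return the question IDs the vendor has not answered."""
    return [q for q in AI_VENDOR_QUESTIONS if not responses.get(q, "").strip()]

# Example: a vendor submission missing two answers.
responses = {
    "integration": "Embedded via REST API",
    "data_privacy": "PII pseudonymised at ingestion",
    "model_validation": "Quarterly back-testing",
}
print(unanswered(responses))  # -> ['data_retention', 'model_maintenance']
```

A structure like this makes it straightforward to flag incomplete vendor submissions automatically before a risk review.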
“Artificial intelligence is redefining cybersecurity and fraud in the financial services sector, and the Biden Administration is committed to working with financial institutions to utilize emerging technologies while safeguarding against threats to operational resiliency and financial stability,” said Under Secretary for Domestic Finance Nellie Liang.
“Treasury’s AI report builds on our successful public-private partnership for secure cloud adoption and lays out a clear vision for how financial institutions can safely map out their business lines and disrupt rapidly evolving AI-driven fraud.”
AI is reliant on data
Unlike many other technologies, AI’s advancement depends on data. In most cases, the quality and quantity of the data used to train, test and refine an AI model, including models used for cybersecurity and fraud detection, directly affect its eventual accuracy and effectiveness.
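To make that dependence concrete, here is a minimal, self-contained sketch on synthetic data: a toy nearest-centroid “fraud score” classifier whose class estimates are learned entirely from labelled training data, so the amount of data directly shapes the model. All names and numbers are illustrative assumptions, not drawn from any of the reports above:

```python
# Illustrative sketch on synthetic data: a nearest-centroid classifier for
# "fraud" vs "genuine" transactions. The learned centroids, and hence the
# model's accuracy, depend on how much labelled training data is available.
import random

random.seed(0)

def sample(label, n):
    # Synthetic 1-D "risk scores": genuine near 1.0, fraudulent near 3.0.
    centre = 1.0 if label == "genuine" else 3.0
    return [(random.gauss(centre, 1.0), label) for _ in range(n)]

def train(data):
    # Estimate one centroid per class from the labelled training data.
    cents = {}
    for lab in ("genuine", "fraud"):
        vals = [x for x, l in data if l == lab]
        cents[lab] = sum(vals) / len(vals)
    return cents

def accuracy(cents, test):
    # Classify each point by its nearest centroid and score the result.
    correct = sum(1 for x, lab in test
                  if min(cents, key=lambda l: abs(x - cents[l])) == lab)
    return correct / len(test)

test_set = sample("genuine", 500) + sample("fraud", 500)
for n in (5, 500):
    model = train(sample("genuine", n) + sample("fraud", n))
    print(n, round(accuracy(model, test_set), 2))
```

With only a handful of labelled examples the centroid estimates are noisy; with hundreds they settle near the true class means, which is the point the paragraph above makes about data quantity and quality.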
This means data has also become a money-making commodity for scammers and criminal gangs, who can use stolen data to commit identity fraud or threaten to release it unless businesses pay a ransom.
In May this year Cifas launched its Fraud Pledges 2024 – a set of proposals calling on the government to commit to reforms and prevention strategies that better protect communities and UK businesses from fraud.
The reforms challenge the government to ‘do more’ and provide greater counter-fraud assurances.
The Cifas Fraud Pledges are:
- Provide cross-government leadership in the response to fraud;
- Improve the policing response to fraud;
- Enhance support to victims of fraud;
- Make the criminal justice response fit for tackling 21st Century fraud;
- Require social media and online platforms to join the multi-sector response to fraud.
CEO Mike Haley said: “The Government’s 2023 Fraud Strategy was a good starting point, however, more needs to be done to tackle the epidemic of fraud. Our pledges set out the next generation of fraud reforms for a future government.”
Veracity Trust Network
Our patented, AI-powered solutions work to stop malicious attacks, data theft, ad click fraud and more.
Data breaches cost businesses an average of $160k to fix. Global ad fraud costs businesses over $60bn each year.
Get peace of mind for all aspects of your digital business with the complete Veracity Bot Protection Suite. Talk to us now: