An article appeared in the Cyber Security section of ft.com: ‘Fears grow of deepfake scams following Progress hack’. It is a slightly alarming headline that reads a little like the opening of a dystopian novel – and the article itself fell off the front page of ft.com reasonably quickly. Blink and you missed it. But its content is of real concern, because it deals with the kinds of data and operational security that every business needs to be aware of. 

The crux of the article is a discussion of the potential aftermath of the hack of Progress Corp, a software company based in Massachusetts, USA. The attackers exploited a ‘back door’ in Progress Corp’s software in a sophisticated attack to steal sensitive customer information from a range of companies – including British Airways, Shell and PwC. While it was initially expected that the hackers would seek to extort the breached organisations, it is now feared that this pristine and validated data could be used for identity theft scams.  

With the advancements in AI and deepfake software, such identity theft scams could be far more lucrative than traditional extortion demands.  

This fear is all the greater because of the types of data stolen in the Progress Corp hack: names, dates of birth, home addresses, driving licences (including photo ID), health and pension information and partial Social Security numbers of millions of Americans. This is exactly the information that could be used to create deepfake video ‘selfies’ of the kind that US state and federal agencies currently use for identity verification.   

What are ‘deepfakes’? 

Deepfakes are audio, video or still image files that are created using a form of AI called deep learning. The fact that this technology utilises real images and real audio to create a representation of a fictional event (or person) gave rise to the term ‘deepfake’.  

In a report published by the Guardian in January 2020, the AI firm Deeptrace was quoted as having located 15,000 deepfake videos online, 96% of which were pornographic in nature. So the technology has been around for a few years now. However, it has migrated far from the darker corners of the internet and into spaces where it is potentially hidden in plain sight. In 2018, Jordan Peele created the infamous video of former US President Barack Obama to demonstrate how easy it was to create a passable deepfake of a high-profile person – albeit one making completely outrageous statements. And deepfakes are getting ever more difficult to spot, as noted in this 2019 report from Bloomberg. 

How to spot deepfakes? 

First, it is important to acknowledge that deepfake technology exists, and that it is growing in both volume and sophistication. Generally, awareness of the threat and vigilance are the two key weapons that people can use against deepfakes – the deep learning AI technology is sophisticated, but it cannot think like a human. 

So, when we talk about ‘vigilance’, what does this mean in personal terms? There are several ways you can seek to ensure you can spot a deepfake if you come across one online: 

  1. Be critical when selecting information sources – the term ‘fake news’ is commonplace these days, but it is not without validity; check your sources and verify if need be 
  2. Be cautious about the type of information that is shared online about you 
  3. Run the ‘real or bot?’ test if communicating online via a chat facility – does the language and rhythm of text being used ‘ring true’ to it being a real person on the other end? 

As for businesses, ensure that you have real, authentic and (if possible) in-person communications with business contacts, colleagues, and clients. Make sure that you know the real person – how they sound, their body language, etc. The more you know the real person, the greater the chance you will recognise if they are being impersonated. 

What is being done to regulate deepfakes? 

So far, there is little concrete regulation around the generation or use of deepfake technology. Interestingly, China enacted legislation in December 2022 governing the creation and use of deepfakes (as detailed in this article by the South China Morning Post; for details on the regulations as proposed in January 2022, see this article on Reuters.com).  

With this legislation, China is protecting individuals from being impersonated online, by requiring that any individual depicted in a deepfake must consent to the use of their likeness.  

In contrast, in the largely unregulated US, a Wall Street Journal reporter, Joanna Stern, recently created an AI ‘twin’ and put it through a series of challenges – including creating a TikTok, making video calls to friends and family members, and attempting to get past her bank’s biometric login measures. The key element missing from the AI twin’s interactions was emotion in the tone of voice used. This experiment alone suggests that governments need to be looking closely at guardrails or regulations.  

Why is this important to SMEs in the UK? 

The most important aspect of all of this is awareness of the current situation as regards both business data security and the capabilities of AI technology.  

The Progress Corp hack has put data security under the microscope. It is crucial that businesses – regardless of their size – are aware of their data security measures and of relevant changes or advancements in technology.  

It is also vital that businesses are aware of current data security regulations and stay on top of regulatory and legislative changes as they occur. This is something that SMEs may not have the internal capacity to do, which is where having a trusted supplier with current knowledge of, and expertise in, data security – like Farringford Legal – is so important.  

What can SMEs do to mitigate their risk?   

There are several steps that SMEs can take to protect themselves from falling victim to either a hack or a deepfake scam. 

To prevent hacks, have appropriate data security protocols and systems in place to ensure that your company’s data is collected and stored safely. If you are unsure what your business should have in place, or whether what you currently have is appropriate, get in touch with Farringford Legal and our data security consultants will be happy to discuss your business needs. 

And to guard against deepfake scams – be aware and vigilant. If something about a communication feels a bit ‘off’, chances are there is something to be concerned about.