Deepfakes and legal implications: Seeing is not believing
A uniquely challenging issue
With advances in technology and media software, the use and publication of 'deepfakes' by content creators has increased significantly in the past few years. Deepfakes raise a number of important ethical and legal questions about the manner in which a person is depicted. While deepfakes can be used legitimately, for instance in comedy as parody or pastiche, certain unscrupulous content creators have put them to more nefarious purposes. This article examines the use of deepfakes to date and the legal rights of the person being depicted.
What are deepfakes?
The term 'deepfake' combines 'deep learning' and 'fake'. Deepfakes are a form of synthetic media in which audio or video content is manipulated in such a way that it becomes indistinguishable from real media. Synthetic media is a catch-all term for the artificial production, manipulation and modification of data and media by automated means, especially through the use of artificial intelligence algorithms.
Media manipulation is not a new phenomenon and has been used throughout history with varying degrees of effectiveness. For example, it has featured in political propaganda as well as in blockbuster films that use special effects.
Challenges
Although forms of synthetic media have existed for a long time, modern technological advancements present several new challenges:
- Deepfakes have become extremely easy to create. The democratisation of the internet, combined with vast improvements in artificial intelligence algorithms, allows non-experts to manipulate and doctor media and then use social media and the internet to disseminate that media rapidly and at scale.
- Current detection techniques are inadequate to combat the problems that deepfakes present.
- Although the underlying technology behind deepfakes can have positive commercial uses, the consequences could be severe if it is used nefariously.
- There is currently no coherent legislative framework in place either at the UK or EU level to effectively regulate deepfakes to protect individuals or organisations.
How does it all work?
Deepfakes are made using deep learning algorithms. Deep learning is an AI technique that mimics the way the human brain processes data and, because it learns by example, is able to learn without human supervision.
More specifically, synthetic media and deepfakes rely on Generative Adversarial Networks (GANs), in which two deep neural networks compete to produce the highest-quality fakes. The system is made up of three components:
a) real-world data;
b) a discriminator; and
c) a generator.
The discriminator network is trained on true, real-world data and assesses whether the generator is producing real or fake content. The generator typically creates text, images or video. It begins with random data and, as its name suggests, generates progressively better samples in an attempt to convince the discriminator that each sample is genuine real-world data.
Initially, the generator network's attempts will be completely off the mark: its outputs are incomprehensible text, static or noise. Over time, however, its performance improves as both the discriminator and generator components of the network are iteratively refined. The two components compete with one another, ultimately producing fakes that are extremely similar to the real thing.
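To make this interplay concrete, the sketch below implements a minimal GAN training loop in PyTorch. It is an illustrative toy only: the 'real-world data' is a simple one-dimensional Gaussian distribution rather than images or audio, and the layer sizes, learning rates and iteration count are arbitrary assumptions chosen for brevity.

```python
# Minimal GAN training loop (illustrative sketch, not a deepfake system).
# The "real-world data" here is a toy 1-D Gaussian; all hyperparameters
# are arbitrary choices made for demonstration purposes.
import torch
import torch.nn as nn

latent_dim, data_dim, batch = 16, 1, 64

# Generator: turns random noise into a fake sample.
G = nn.Sequential(nn.Linear(latent_dim, 32), nn.ReLU(), nn.Linear(32, data_dim))
# Discriminator: outputs the probability that a sample is real.
D = nn.Sequential(nn.Linear(data_dim, 32), nn.ReLU(), nn.Linear(32, 1), nn.Sigmoid())

opt_G = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_D = torch.optim.Adam(D.parameters(), lr=1e-3)
bce = nn.BCELoss()

for step in range(2000):
    real = torch.randn(batch, data_dim) * 0.5 + 2.0   # toy "real" data: N(2, 0.5)
    fake = G(torch.randn(batch, latent_dim))

    # 1) Train the discriminator to label real data 1 and fakes 0.
    opt_D.zero_grad()
    d_loss = (bce(D(real), torch.ones(batch, 1))
              + bce(D(fake.detach()), torch.zeros(batch, 1)))
    d_loss.backward()
    opt_D.step()

    # 2) Train the generator to make the discriminator output 1 for its fakes.
    opt_G.zero_grad()
    g_loss = bce(D(fake), torch.ones(batch, 1))
    g_loss.backward()
    opt_G.step()
```

In a real deepfake pipeline the adversarial structure is the same, but the two networks are large convolutional models and the real-world data is a corpus of images, video frames or audio recordings of the person being imitated.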
Where are we seeing deepfakes?
The use of deepfakes is becoming increasingly widespread. In July 2019, Sensity, a visual threat intelligence company, identified 14,678 deepfakes online. Eleven months later, in June 2020, that number had jumped to 49,081, more than a threefold increase. In other words, the number of deepfakes found online is roughly doubling every six months, a rate of growth that is exponential.
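As a quick sanity check on that claim, the implied doubling time can be derived directly from Sensity's two data points. The short Python calculation below is our own illustration, not taken from the Sensity report:

```python
# Doubling time implied by Sensity's figures:
# 14,678 deepfakes in July 2019 vs 49,081 in June 2020 (11 months later).
import math

start, end, months = 14_678, 49_081, 11
growth = end / start                                 # ~3.34x over 11 months
doubling = months * math.log(2) / math.log(growth)   # ~6.3 months
print(f"growth factor: {growth:.2f}x, doubling time: {doubling:.1f} months")
```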
A proliferation of deepfakes may lead people to cast doubt on videos that are real by making it easier for someone in a compromising video to claim that the video was a deepfake. This phenomenon has been dubbed 'the liar's dividend'. As the public becomes more familiar with the concept of deepfakes, they will become more sceptical of videos in general, and it will become more plausible to dismiss authentic videos as fake.
The AI technology used to develop deepfakes and synthetic media is relatively nascent; however, it has already developed to the point that it can generate highly convincing fake still images, and video and audio manipulation capabilities are not far behind. There have been several notable examples of deepfakes that have garnered mainstream attention, such as a public address from 'President Obama' and Jim Carrey taking on Jack Nicholson's role in The Shining.
The Legal Position
The law has not yet caught up with the technology in this space, and existing regulatory gaps must be filled. At the same time, there are positive real-world applications for deepfake technology. Deepfakes and synthetic media are set to have a positive impact on a number of commercial uses, such as in the banking industry, where AI chatbots will serve as lifelike customer service agents, removing the need for human interaction. In the accessibility space, the technology is expected to help disabled individuals augment themselves and regain agency and independence. For example, people with ALS can record their voice before they lose the ability to speak and then use AI technology to digitally recreate it in the future.
UK Perspective:
Copyright
There is currently no comprehensive legislative framework aimed specifically at tackling deepfakes in the UK. However, there are a number of existing avenues of legal action that may be drawn upon. For instance, a harmed individual may attempt to have deepfakes removed from media platforms by obtaining an injunction pursuant to a copyright infringement claim. Such a claim may be difficult to assert given the number of different rights holders involved, and will depend on the specific images and/or content being used in the deepfake and whether their use amounts to an act of 'copying' (in whole or in substantial part) of the copyright work being asserted.
In addition, depending on the context, deepfakes may fall under one of the fair dealing exceptions in the Copyright, Designs and Patents Act 1988 (CDPA) (e.g. parody; reporting current events).
At present, it therefore seems that the UK copyright framework is not appropriately set up to deal with deepfakes. That being said, regulators and lawmakers are making efforts to address the issue. For instance, the World Intellectual Property Organisation (WIPO) recently published its 'Revised Issues Paper on Intellectual Property Policy and Artificial Intelligence'. The paper questioned whether the copyright system is an appropriate vehicle for regulating deepfakes, or whether a new audiovisual framework is required. WIPO also questioned who the copyright in a deepfake should belong to and whether there should be a system of equitable remuneration for persons whose likenesses and 'performances' are used in one.
Passing-Off
Image rights are not formally recognised in the UK; however, English case law has developed to offer protection where an individual's image is commercially misappropriated.
In Fenty v Arcadia Group, UK high street retailer Topshop featured singer Rihanna on one of its t-shirts, which it made available for purchase in its UK stores. However, the singer had no connection with the company and had not consented to the use of her image on the t-shirt. Consequently, Rihanna pursued a 'passing off' claim before the UK High Court. In its judgment, the Court held that a substantial number of purchasers would be confused or deceived into thinking that the t-shirt had been endorsed by Rihanna, would have bought it for that reason, and that this would be damaging to her goodwill. The UK Court of Appeal upheld the High Court's decision, unanimously dismissing the appeal. Despite Rihanna's success in the case, the Court made it clear that the use of a person's image on a garment is not in itself passing off and that, under English law, a celebrity does not have a general right to control the use of their image.
In light of the above, successful reliance on a passing off claim is not guaranteed for public figures and is likely to be wholly unworkable for individuals who are not in the public eye or whose image has not previously been commercialised. Such limitations may prove to be an obstacle in the context of deepfakes where the individual being depicted is not a celebrity, or where their image is not being used to endorse or promote a commercial product or service.
Defamation
Alternatively, if it can be shown that the deepfake has caused or is likely to cause serious reputational harm, a harmed individual could rely on defamation legislation. The Defamation Act 2013 codified and consolidated large parts of the existing case law and statute in this area and, notably, established a new threshold for bringing a defamation claim. Under this threshold, a harmed individual must show that a deepfake has caused, or is likely to cause, serious reputational harm in order for it to be considered defamatory. In the 2019 case Lachaux v Independent Print Ltd & Ors, the Supreme Court held that the Defamation Act 2013 did in fact raise the threshold of seriousness required to bring a claim, and that fulfilment of the 'serious harm' test must be determined by reference to the actual facts about the impact of the offending statement.
Although this higher threshold was designed to discourage frivolous claims, it may also have the inadvertent effect of limiting the remedies available to victims of deepfake abuse. What constitutes 'serious reputational harm' in the context of deepfakes remains unclear, and the new threshold may adversely affect the prospects of those seeking a remedy under the Act.
Online Harms Bill
Despite the limited remedies currently available, the UK Government is in the process of enacting legislative change in this area: it is seeking to introduce an 'Online Harms Bill', which follows an initial white paper of the same name.
In practical terms, the Online Harms Bill would seek to protect freedom of expression and the freedom of the press, and implement additional protection measures for children to restrict their exposure to inappropriate or harmful content online. Somewhat controversially, the UK Government has also proposed introducing a new statutory duty of care, to be enforced by a new independent regulator. This will require companies to take reasonable and proportionate action to combat harmful online content. The new regulator will be tasked with creating 'codes of practice' offering guidance on how these online harms are to be dealt with. Under the legislation, the regulator will be able to issue fines (in proportion to the offender's revenue). Further sanctions include disrupting the business of the offender, which may involve asking third-party companies to stop providing services to the non-compliant platform. The regulator's action of last resort would be blocking access through internet service providers (ISPs).
The main criticism levied against the bill is that the term 'online harms' is overly broad and could potentially cover anything from hate crimes to the sale of illegal goods. While it is clear that the legislation will give protection to internet users, it is seemingly more of a broad-based attempt to address a range of harms rather than a precise instrument that offers redress to specific problems.
Although the 2019 white paper acknowledged the existence of deepfakes, it did not expand on them further, and while the broad nature of the Online Harms Bill will undoubtedly increase safety for internet users, it will likely not do enough to address the specific issues presented by deepfakes.
The Online Harms Bill, initially conceived in 2017, was set to become law in 2021 but has been delayed and is now not due to come into effect until 2023/2024.
EU Perspective:
At this stage, much like in the UK, there are no European laws in place that offer specific redress to the problems presented by deepfakes. However, a broader effort to tackle disinformation in Europe more generally is underway and this includes deepfakes and synthetic media.
As part of its attempts to limit disinformation online, the European Commission in 2018 outlined a 'Code of Practice on Disinformation'. The Code is a self-regulatory framework that sets various standards its signatories must adhere to, including transparency in political advertising, the closure of fake accounts and the demonetisation of purveyors of disinformation. Signatories to the Code include Facebook, Google and Twitter. The Commission has also made proposals that would see EU citizens become more media literate and has called for the creation of an independent European network of fact-checkers to stimulate quality journalism and better understand the methods and origins of disinformation.
In a sign of recognition of the specific problem that deepfakes present, and a potential willingness to tackle them in future, the European Parliament has noted that AI can be used to manipulate media and recommended that the Commission use its ethical framework to impose an obligation for all deepfake material to state that it is not an original.
Much like the Online Harms Bill, these measures at the EU level are welcome in that they recognise and start to address the problems presented by deepfakes. While these actions represent progress, further steps must be taken to ensure that individuals who suffer at the hands of bad actors have concrete legislative frameworks to rely on when seeking a remedy. Given the expected rise in the use of synthetic media online, it is not clear that current UK and EU legislation, or the planned measures, go far enough in protecting individuals who are harmed as a result of deepfake-related abuse.
Solutions
Deepfakes are a uniquely challenging issue. Future legislation such as the Online Harms Bill looks set to be too broad to tackle deepfakes and their complex ethical issues with precision. There are clear gaps in the regulation of online platforms, and issues such as deepfakes shine a bright light on them. With no specific laws in place to combat the issue, what other solutions can be sought to limit the negative impacts of synthetic media?
Some recommendations include the creation of a new 'Office for Digital Society' that would regulate online content, data and privacy. A centralised regulator would help to bring together existing regulators and limit the number of regulatory holes that currently exist. Any meaningful regulation would also require regulators to be given the power and resources needed to effect change.
Future deepfake-centric legislation, both in the UK and the EU, should outline which uses of deepfakes are acceptable and which are not. This would give social media companies clear guidelines along which to police content on their platforms. Such legislation should also allow internet platforms to share information about deepfakes with one another, making it easier for platforms to warn each other of malicious content and likely limiting the infiltration of synthetic media into the mainstream. Alongside legislative change, governments must be prepared to invest in forensic media techniques that make it easier to detect deepfakes.
Both the UK and the EU need to be prepared to actively and meaningfully legislate in this area. More generalised attempts to curb disinformation and protect internet users are welcome. However, as the growing epidemic of misinformation is showing no signs of slowing down, deepfakes and synthetic media are set to become the most potent vehicle for misinformation dissemination. As such, large scale, timely and precise legislative action is likely to be the most effective way to limit the harmful effects of synthetic media.
Karim Vellani, TMT Group Trainee, contributed to the writing of this article.