Published August 10, 2023

In a ground-breaking US lawsuit, authors Mona Awad and Paul Tremblay have sued OpenAI for copyright infringement, accusing the AI company of training its language model, ChatGPT, on the authors’ copyrighted works without permission. This legal action not only propels us into uncharted territory in intellectual property law and AI ethics, but also sparks questions about how a similar case would play out in the UK.

How could this play out in the UK?

Compared to the US, the UK’s copyright laws are stricter and do not include a ‘fair use’ defence. This US legal doctrine allows the use of copyrighted material without permission, provided the use is transformative and does not impinge on the market for the original work.

On the other hand, UK law has ‘fair dealing’, which is relatively narrow and typically only allows unlicensed use in certain circumstances, such as criticism, review and news reporting. Therefore, if authors were to mount a similar claim in the UK, the absence of a fair use doctrine could assist their argument that the AI infringed their copyright. Nonetheless, this case could be an ideological battle as much as a legal one – OpenAI could argue that ChatGPT does not ‘copy’ text but instead mimics the way in which humans process information.

However, copyright infringement is just one side of the coin. Proving damages — the losses suffered as a result of the alleged infringement — is equally essential for a successful lawsuit. In the US case, Awad and Tremblay may find it challenging to demonstrate tangible harm. After all, ChatGPT might work in exactly the same way even if it had not ‘ingested’ the books, especially if we consider that ChatGPT may have been trained on third parties’ discussion of the books rather than the original text. The authors would face the same issue under English law, since quantifying financial losses or the dilution of a work’s value could be difficult, especially when generative AI models like ChatGPT are typically trained on hundreds of thousands of data sets.

Beyond the traditional economic measures, authors in the UK could consider relying on infringement of their moral rights, particularly the right to object to derogatory treatment of their work. In other words, they could argue that the AI’s unauthorised use of their work might alter its meaning, potentially damaging their reputation or the work’s artistic value. Until it is tested in court, it is difficult to predict how this argument would be viewed in such an unprecedented context.

Potential Violations and their Implications

All these issues bring into focus the UK Government’s principles guiding AI regulation. The principles are:

  • Safety, security and robustness;
  • Appropriate transparency and explainability;
  • Fairness;
  • Accountability and governance; and
  • Contestability and redress.

They are intended to create an environment where AI development is responsible, ethical, and beneficial to society. The principle of ‘appropriate transparency and explainability’, for instance, could be violated if AI companies do not disclose the extent and nature of the data used to train their models.

These principles will not be incorporated into statute in the initial stages of the new framework and will only serve as guidelines for good practice. However, the UK Government may eventually pursue the enactment of a statutory duty for regulators to “have due regard” to them. As such, demonstrating a violation of these principles might not necessarily lead to legal victory but could drive home the ethical implications of the case, prompting reform in AI practices and regulation.

Looking Forward

The existential threat posed by generative AI to creative industries including published authors could be said to mirror the threat posed to the music industry in the early 2000s by peer-to-peer music sharing in the form of websites such as Napster. Napster itself was found to be illegal and had to be shut down, but ultimately that led to the streaming model we have today. A similar model may take shape in the face of AI developments, whereby published authors could rely on a licensing principle to have their content used by AI for training purposes. The owners of copyright works may receive credit, but if the parallel with the music industry plays out, the companies running the generative AI programmes are likely to be those who stand to gain the most.

If you have any query about these issues, do feel free to contact Dennis Lee (dennislee@bdbpitmans.com) or Ludovico Lugnani (ludovicolugnani@bdbpitmans.com).

Written by Dennis Lee, Partner, BDB Pitmans
