In a ground-breaking US lawsuit, authors Mona Awad and Paul Tremblay have sued OpenAI for copyright infringement, accusing the AI company of ‘training’ its language model, ChatGPT, on the authors’ copyrighted works without permission. This legal action not only propels us into uncharted territory in intellectual property law and AI ethics, but also sparks questions about how a similar case would play out in the UK.
How could this play out in the UK?
Compared to the US, the UK’s copyright laws are stricter and do not include a ‘fair use’ defence. This US legal doctrine allows the use of copyrighted material without permission, provided the use is transformative and does not impinge on the market for the original work.
On the other hand, UK law has ‘fair dealing’, which is relatively narrow and typically only permits unlicensed use in certain circumstances, such as criticism, review and news reporting. Therefore, if authors were to mount a similar claim in the UK, the absence of a fair use doctrine could assist their argument that the AI infringed their copyright. Nonetheless, this case could be an ideological battle as much as a legal one – OpenAI could argue that ChatGPT does not ‘copy’ text but instead mimics the way in which humans process information.
However, copyright infringement is just one side of the coin. Proving damages — the losses suffered as a result of the alleged infringement — is equally essential for a successful lawsuit. In the US case, Awad and Tremblay may find it challenging to demonstrate tangible harm. After all, ChatGPT might work in exactly the same way even if it had not ‘ingested’ the books, particularly if it was trained on third parties’ discussion of the books rather than the original text. The authors would face the same issue under English law: quantifying financial losses or any dilution of their work’s value could be difficult, especially when generative AI models like ChatGPT are typically trained on hundreds of thousands of data sets.
Beyond the traditional economic measures, authors in the UK could consider relying on infringement of their moral rights, particularly the right to object to derogatory treatment of their work. In other words, they could argue that the AI’s unauthorised use of their work might alter its meaning, potentially damaging their reputation or the work’s artistic value. Until such an argument is tested in court, it is difficult to predict how it would be viewed in this unprecedented context.
Potential Violations and Their Implications
All these issues bring into focus the UK Government’s principles guiding AI regulation. The principles are:
- Safety, security and robustness;
- Appropriate transparency and explainability;
- Fairness;
- Accountability and governance; and
- Contestability and redress.
The principles are intended to create an environment in which AI development is responsible, ethical and beneficial to society. The principle of ‘appropriate transparency and explainability’, for instance, could be violated if AI companies do not disclose the extent and nature of the data used to train their models.
These principles will not be incorporated into statute in the initial stages of the new framework and will only serve as guidelines for good practice. However, the UK Government may eventually pursue the enactment of a statutory duty for regulators to “have due regard” to them. As such, demonstrating a violation of these principles might not necessarily lead to legal victory but could drive home the ethical implications of the case, prompting reform in AI practices and regulation.
Looking Forward
The existential threat posed by generative AI to creative industries, including published authors, could be said to mirror the threat posed to the music industry in the early 2000s by peer-to-peer file-sharing services such as Napster. Napster itself was found liable for copyright infringement and was shut down, but that episode ultimately paved the way for the streaming model we have today. A similar model may take shape in response to AI developments, whereby published authors could license their content to AI companies for training purposes. The owners of copyright works may receive credit, but if the parallel with the music industry plays out, the companies running the generative AI programmes are likely to be those who stand to gain the most.
If you have any queries about these issues, please feel free to contact Dennis Lee (dennislee@bdbpitmans.com) or Ludovico Lugnani (ludovicolugnani@bdbpitmans.com).
Written by Dennis Lee
Partner, BDB Pitmans