Published November 8, 2023
  • Intellectual property law will struggle to keep up with fast-paced AI developments
  • Companies urging professional advisors not to input any of their confidential data into AI amid fears of data leaks

The surge in new artificial intelligence (AI) legal cases seeking to decide whether infringements arise when copyrighted data is used to train AI systems suggests that intellectual property law will struggle to keep pace with the speed of AI development, says leading intellectual property law firm Mathys & Squire.

Companies are increasingly concerned about both the risks of intellectual property infringement and confidentiality issues when using AI. Some are even urging their advisors, such as law firms and professional services firms, not to input any of their information into AI systems such as large language models (LLMs) amid fears of data leaks.

Lack of clarity on AI-related copyright breaches

Mathys & Squire says that, as happened with the growth of the internet, both the courts and legislators will struggle to provide businesses with clear direction on the rapidly developing laws relating to AI.

A clear infringement occurs if a generative AI system directly reproduces copyrighted material. It is far less clear, however, whether infringement arises when the AI system producing an output has merely been trained on that copyrighted material.

LLMs are trained on massive amounts of data so that the model can automatically generate outputs, and that training data may include copyrighted material. This creates uncertainty for owners of copyrighted material who want to protect their intellectual property rights.

Andrew White, Partner at Mathys & Squire, says, “AI is creating a multitude of new challenges and questions in relation to intellectual property. Many legal cases are ongoing as individuals seek clarification on what their rights and responsibilities are. Companies with copyrighted material and AI companies themselves are in urgent need of clarification.”

Recent legal cases regarding AI copyright disputes include:

  • Getty Images’ case against Stability AI, developer of the text-to-image model Stable Diffusion, which claims that more than 12 million images were copied from its database without permission
  • A group of visual artists’ claim against Stability AI, Midjourney, and DeviantArt, which accuses the companies of training AI systems with their artworks without their permission
  • A lawsuit by a group of US authors, including Pulitzer Prize winner Michael Chabon, against OpenAI, accusing the company of misusing their writing to train ChatGPT
  • Universal Music Group’s case against AI startup Anthropic, whose chatbot Claude is said to have generated copyrighted lyrics without Universal’s permission
  • A group of programmers’ lawsuit against coding platform GitHub, in which they accuse the platform of enabling an AI system to use their licensed code snippets without providing credit.

Some tech firms have already taken steps to provide clarity on the issue. Microsoft said it will take legal responsibility if customers are sued for copyright breaches while using its AI Copilot platform. Google has also reassured users of its AI tools on its Cloud and Workspace platforms that it will defend them from copyright claims.

Companies fearful of AI data leaks

AI also presents serious potential confidentiality risks to companies. Because data is stored in the cloud, leaks can occur even when companies use their own private AI systems intended to safeguard against information being shared outside the organization.

Italy temporarily banned ChatGPT over data protection concerns in April 2023 before reversing the ban later that month. Apple has restricted its employees from using AI tools over fears that confidential data could be leaked to outside sources.

Andrew White adds, “It is important to have safeguards in place to ensure that confidential information is not input into these AI systems. Given the enormous risks, companies have made formal requests to their professional advisors, such as law firms and management consultants, not to enter any of their information into AI systems.”

High-profile examples show that entering confidential information into AI can have serious consequences. These include:

  • Samsung workers unknowingly leaked confidential data while using ChatGPT to fix problems with their source code; the data was retained by the service, and such inputs can be used to train the underlying model
  • Microsoft’s own AI research team accidentally leaked 38 terabytes of confidential data onto its GitHub page, where it was publishing a tranche of public training data.


Mathys & Squire
