ChatGPT: What are the legal risks?

30th March 2023

ChatGPT, OpenAI’s latest artificial intelligence (AI) language model, has hit the headlines over the last few months. Within seconds, ChatGPT creates human-like text responses to prompts and queries. As well as being entertaining, ChatGPT also has huge potential in allowing organisations to incorporate a degree of automation in the provision of services.

While the technology appears incredibly useful, users should thoroughly review responses to ensure they are factually correct. At the launch of Google’s rival model, Bard, an obvious factual error wiped approximately £82 billion off the market value of Google’s parent company. This should remind users to exercise care and be aware of the potential consequences of relying on such AI: not only may inaccurate responses be produced, but their use could also expose individuals and organisations to legal liability.

Who owns the content?

According to OpenAI’s terms and conditions, users of ChatGPT own all ‘input’ and, subject to compliance with those terms, OpenAI assigns the ‘output’ generated by ChatGPT to the user. This means that compliance with applicable laws, including UK intellectual property laws, falls on the user. Although the output may be assigned to the user, this does not prevent OpenAI from using both the input and the output (together, the ‘content’) to develop and improve its services. It also means that similar outputs may be produced for other users who ask similar questions; however, users may opt out of allowing OpenAI to use their content to improve its services.

Copyright law in the UK prevents others from using works without the author’s permission, regardless of whether those works are publicly available. ChatGPT has been trained on very large datasets, including existing articles, literature, quotes and websites, so when a user prompts a response, the output may incorporate materials that are subject to copyright. OpenAI’s terms make clear that its services are provided ‘as is’ and that OpenAI disclaims warranties relating to non-infringement, leaving the user, as owner of the output, responsible for compliance with the law and potentially liable for third-party infringement.

Data protection and confidentiality

Organisations should ensure compliance with data protection laws and company privacy policies before permitting ChatGPT to be used within the workplace. If the intention is to use ChatGPT to process personal data, data protection law must be complied with. Sharing sensitive data with ChatGPT may also have significant consequences for organisations if that information is confidential: an inadvertent disclosure may amount to a breach of contract, exposing the organisation to liability and damages. A recent glitch in ChatGPT allowed users to see the titles of other users’ conversations and may also have exposed payment information, highlighting the security risks.

There are also risks involved in the sub-processing of personal data. For example, prompting ChatGPT to generate a response using personal data disclosed to it will, in effect, appoint OpenAI as a ‘sub-processor’ of that personal data. In practice, data processing addendums (DPAs) often prohibit sub-processing, or at least require the data controller’s consent, leaving data processors potentially in breach of contract. Where sub-processing is permitted under a DPA, users will need to enter into a sub-processing agreement with OpenAI. Commercially speaking, sub-processing agreements require extensive negotiation and more often than not result in the controller’s liabilities being shifted to the processor.

Recent proposed changes to data protection laws could mean that organisations are able to use a higher degree of automation in their data processing. To mitigate risks, organisations should consider implementing policies on the use of AI in the workplace that place restrictions on its use or require users to synthesise or anonymise data before sharing it with ChatGPT or similar language models.
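The anonymisation step suggested above could be implemented as a simple pre-processing filter that strips obvious personal identifiers from prompts before they leave the organisation. The following is a minimal, hypothetical Python sketch: the patterns, placeholders and function name are illustrative assumptions, not part of any OpenAI tooling, and a real policy would need far broader coverage (names, addresses, customer references and so on).

```python
import re

# Illustrative patterns only (an assumption for this sketch); a real
# anonymisation policy would need much broader coverage.
PATTERNS = {
    "[EMAIL]": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "[PHONE]": re.compile(r"\b0\d{3}\s?\d{3}\s?\d{4}\b"),  # simple UK-style number
}


def redact(prompt: str) -> str:
    """Replace personal identifiers with placeholders before the
    prompt is sent to an external language model."""
    for placeholder, pattern in PATTERNS.items():
        prompt = pattern.sub(placeholder, prompt)
    return prompt


print(redact("Contact Jane at jane.doe@example.com or 0161 496 0000."))
# → Contact Jane at [EMAIL] or [PHONE].
```

A filter like this would sit between staff and the AI service, so that only the redacted text is ever disclosed; it reduces, but does not eliminate, the risk of personal data reaching the model.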

Ensure your business is compliant

AI is evolving fast, and the legal framework around it will take time to catch up. If personal data or confidential information is shared with ChatGPT, could it be retained and used to inform responses to subsequent requests indefinitely? If copyrighted information or images are used to create new content, how are the copyright owners protected? Even if the user is liable, once this information is entered into an AI model it may be impossible to retrieve or delete.

Organisations should remain vigilant and alert to any potential legal risks and ensure compliance with applicable laws and company policies. Responses produced by AI should be thoroughly reviewed for inaccuracies and risks of infringement should always be assessed.
