
AI: key legal risks for charities

23 July 2025


Artificial intelligence (“AI”) tools are increasingly attractive to charities keen to stretch limited resources. From using chatbots to engage with donors to drafting funding bids and triaging beneficiary enquiries, the promise of efficiency is compelling.

Deployment of AI is not without risk, however, and there are a number of legal and ethical pitfalls to navigate. Trustees and senior managers remain subject to the same duties when deploying AI as with any other technology, and should take precautions accordingly.

Data

Data fed into, or generated by, an AI tool might be retained by the provider and used to refine its model, as Samsung discovered when employees pasted confidential source code into ChatGPT. Remember also that the UK GDPR applies to any personal data you upload. Charities act as “controllers” under the legislation, meaning you must identify a lawful basis for using AI and provide a clear privacy notice explaining how the data will be processed. Failure to do so could expose the charity to regulatory action, fines and, perhaps more damagingly, reputational harm among donors and beneficiaries.

Confidential information

Many charities handle sensitive data, including health records, survivor testimonies and details of financial hardship. Uploading such content to an online AI tool may amount to an unauthorised disclosure if the provider stores data overseas or allows staff to review prompts for moderation purposes. Contractual due diligence should cover data localisation, encryption standards and incident-response times. A non-disclosure agreement alone is insufficient: trustees must be satisfied that technical and organisational measures meet UK GDPR standards. Charities are also not immune from hacking; in a recent DSIT survey, 30% of charities reported experiencing a cyber security breach or attack in the previous 12 months. Make sure appropriate cybersecurity measures are in place.

Intellectual property

Training data scraped from the internet often incorporates copyright-protected works. Unless the AI provider has obtained licences or relies on a statutory exception, your use of outputs that “substantially reproduce” those works risks infringing another party’s IP. While the law is still evolving in this area, trustees should ensure that AI supplier contracts include robust IP indemnities and, where possible, restrict use of the tool to internal purposes. Equally, if a volunteer uploads the charity’s own copyrighted materials (training manuals, impact reports, donor lists) into a public AI tool, the charity may lose control over subsequent reproduction.

Bias

AI is only as fair as the data on which it is trained. Historic bias within datasets can result in discriminatory outputs. For example, a grant-assessment algorithm might disadvantage certain ethnic groups if they are under-represented in the underlying dataset. The Charity Commission expects trustees to act in the best interests of beneficiaries, and the Equality Act 2010 makes organisations vicariously liable for discriminatory practices, even unintentional ones. Deploying opaque “black-box” systems without understanding how decisions are reached could breach that duty. Charities must tackle this head-on.

Accuracy

There have been many instances of AI producing false results, or “hallucinations”. The High Court recently warned lawyers about the risks of using AI without checking its output, after yet another court submission cited case law hallucinated by AI. Ensure there is meaningful human oversight of anything an AI tool produces.

Human impact

Of course, the human impact should not be ignored. AI tools that replace administrative roles could lead to staff redundancies or diminished volunteer engagement. Trustees must comply with employment law obligations. They should also engage with staff to protect morale, and consider redeployment and training to upskill people where possible.

What next?

There are practical steps boards and trustees can take to protect their charities:

  1. Consider paying for AI. If the product is free, you’re the product: if you want to ensure your data is not retained and used by the AI tool, you may have to pay to use it.
  2. Implement an AI usage policy identifying who can use which tool, for what and how. Remind people never to input personal or highly confidential data into public models.
  3. Undertake a Data Protection Impact Assessment if you will be using AI tools to process personal data, so you understand what data will be used, the lawful basis for processing it and how it will be protected.
  4. Appoint an “AI champion” trustee or sub-committee to help keep your governance up to date as this fast-moving landscape develops.
  5. Train staff and volunteers on how to use AI so that they are on board with the changes.
  6. Read the terms! Check the AI tool’s terms of use so you understand what protections, if any, the provider is giving you, then adapt your usage accordingly.

