Veterinary practices adopting artificial intelligence (“AI”) tools have a significant opportunity to change the way they work. Advantages include operational efficiencies in note-taking, staff rostering and automated medicine inventory management. AI also offers enhanced diagnostics and treatment, remote monitoring through wearable devices, personalised treatment plans and predictive disease analytics.
You must ensure you remain compliant with law and regulation, and there are several factors to consider here. Data protection laws such as the UK GDPR require lawful data processing, transparency and breach notification. Data privacy and security are major concerns, especially given the sensitivity of client and staff information and the potential for vendor breaches. Undertake a Data Protection Impact Assessment (DPIA) before an AI tool processes personal data.
There are also equality law and ethical issues. These include bias in AI outputs, a lack of transparency and the challenge of obtaining informed client consent for the use of AI tools. Using AI in recruitment or employee management could create employment law issues or breach anti-discrimination rules.
While AI tools can help with diagnosis, risks to clinical safety, animal wellbeing and legal liability can arise from overreliance on AI or from inaccurate outputs. Practices must maintain proper clinician oversight and ensure it is clear who is responsible for decisions. They must also comply with professional standards for clinician oversight and maintain accurate records. This could entail recording the prompts used and the AI tool's output in the patient record, alongside the clinician's decision.
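To illustrate how such record-keeping might be structured, the minimal sketch below shows one possible shape for an AI-assisted entry in a patient record. The field names, tool name and example values are illustrative assumptions, not a prescribed format; adapt them to your practice management system.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class AIAssistedRecordEntry:
    """Illustrative structure for logging AI involvement in a clinical record.

    Field names are hypothetical; adapt to your own systems and standards.
    """
    patient_id: str            # internal patient reference
    clinician: str             # clinician responsible for the final decision
    ai_tool: str               # name and version of the AI tool used
    prompt: str                # prompt or input given to the tool
    ai_output: str             # output returned by the tool
    clinician_decision: str    # what the clinician decided, and why
    output_accepted: bool      # whether the AI suggestion was followed
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

# Example entry: the clinician reviews the AI suggestion and records the outcome.
entry = AIAssistedRecordEntry(
    patient_id="P-1042",
    clinician="A. Vet MRCVS",
    ai_tool="ExampleScribe v2.1",  # hypothetical tool name
    prompt="Summarise consultation notes for a 6-year-old Labrador with lameness.",
    ai_output="Suspected cranial cruciate ligament injury; recommend radiographs.",
    clinician_decision="Agreed with suggestion; radiographs booked after examination.",
    output_accepted=True,
)
print(entry)
```

Keeping the prompt, the output and the clinician's decision together in one entry makes it straightforward to demonstrate later who made the final call and on what basis.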
Even once these risks are addressed, practices need to assess the operational risks of the AI tool itself. Practices should follow good industry practice when implementing AI, drawing on RCVS guidance where available. It is important to manage updates to the AI tool and to train staff in its strengths, weaknesses and correct use, including its limitations, privacy requirements and record-keeping obligations.
Practices should assess whether the AI tool complies with cybersecurity frameworks (e.g. Cyber Essentials, ISO 27001, SOC 2), which are increasingly relevant to AI implementations.
AI regulation for vet practices is evolving. The EU AI Act imposes transparency and risk-management obligations on many AI tools, with stricter rules for high-risk uses. Until the UK introduces its own AI-specific regulation, AI vendors are likely to follow the EU rules rather than treat the UK as a separate market.
To manage AI risks, practices should establish clear governance, including an AI policy, risk register and designated AI lead. All AI use cases should be catalogued, and human oversight checkpoints defined. Practices should run pilots to validate AI tools before full rollout. You should prohibit staff from entering identifiable data into public AI tools without approval. Monitor AI tool accuracy, review performance regularly, and maintain an incident response plan in case anything goes wrong.
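As one way of cataloguing AI use cases and their oversight checkpoints, a simple register could look like the sketch below. The fields, tool names and example entries are assumptions for illustration only; a spreadsheet maintained by the designated AI lead would serve the same purpose.

```python
from dataclasses import dataclass

@dataclass
class AIUseCase:
    """One row in a hypothetical AI use-case register."""
    name: str                  # what the tool is used for
    tool: str                  # vendor or product name
    risk_level: str            # e.g. "low", "medium", "high"
    personal_data: bool        # does it process identifiable client or staff data?
    oversight_checkpoint: str  # where a human reviews the output
    owner: str                 # designated AI lead or responsible staff member

register = [
    AIUseCase(
        name="Consultation note-taking",
        tool="ExampleScribe",  # hypothetical vendor name
        risk_level="medium",
        personal_data=True,
        oversight_checkpoint="Clinician reviews and signs off every note",
        owner="Practice AI lead",
    ),
    AIUseCase(
        name="Medicine inventory forecasting",
        tool="ExampleStock",  # hypothetical vendor name
        risk_level="low",
        personal_data=False,
        oversight_checkpoint="Practice manager approves reorder suggestions",
        owner="Practice manager",
    ),
]

# Flag entries that warrant a DPIA and closer monitoring.
for uc in register:
    if uc.personal_data or uc.risk_level == "high":
        print(f"Review required: {uc.name} ({uc.tool})")
```

Whatever form the register takes, the point is that every use case, its risk level and its human checkpoint are written down and reviewed regularly rather than held informally.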
Vendor due diligence is essential too. Assess the AI's clinical validation, intended use, update frequency, security measures and incident history. Clarify data roles, flows and retention, and request bias-testing summaries. Insist on audit rights in contracts, or at least obtain access to third-party assessments of the tool.
Don't just sign the contract: remember, the standard vendor contract won't favour the vet practice! Make sure your contracts include data processing clauses, breach notification obligations and timelines, and restrictions on vendor use of identifiable data. Agree service levels, change-control procedures for functionality, and clear terms for intellectual property, liability and insurance.
As a general rule, treat AI as decision support rather than a decision-maker: require clinician verification, set rules for flagging outputs that need review, and record decisions. Provide your clients with clear notices about AI use and obtain their consent where appropriate. Minimise and de-identify personal data, and implement strong security measures such as multi-factor authentication and regular backups.
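As a simple illustration of data minimisation, the sketch below replaces client-identifying details with placeholders before text is passed to an external AI tool. The function name and patterns are assumptions for illustration; real de-identification needs to be far more thorough and reviewed against your data protection obligations.

```python
import re

def pseudonymise(text: str, known_names: list[str]) -> tuple[str, dict[str, str]]:
    """Replace known client names and obvious contact details with placeholders.

    A deliberately basic sketch, not a complete de-identification solution.
    """
    mapping: dict[str, str] = {}
    for i, name in enumerate(known_names, start=1):
        placeholder = f"[CLIENT_{i}]"
        mapping[placeholder] = name
        text = text.replace(name, placeholder)

    # Mask UK-style mobile numbers and email addresses with simple patterns.
    text = re.sub(r"\b(?:\+44\s?7|07)\d{3}\s?\d{6}\b", "[PHONE]", text)
    text = re.sub(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b", "[EMAIL]", text)
    return text, mapping

note = "Owner Jane Smith (07123 456789, jane@example.com) reports vomiting since Tuesday."
clean, key = pseudonymise(note, known_names=["Jane Smith"])
print(clean)  # Owner [CLIENT_1] ([PHONE], [EMAIL]) reports vomiting since Tuesday.
print(key)    # {'[CLIENT_1]': 'Jane Smith'}
```

Keeping the mapping of placeholders to real identities inside the practice, rather than sending it to the vendor, supports the data minimisation principle discussed above.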
Our Technology and Innovation team can help your practice adopt AI safely and confidently by providing tailored plans, AI usage policies, contract advice and staff training materials.