The productivity race: why the tortoise may be the winner on smarter innovation
22 September 2025
The race to deploy new tech is relentless, but speed doesn’t always deliver results.
A slower, more deliberate approach that incorporates foresight into both the design and implementation phases is more likely to boost productivity, avoid costly mistakes and build trust.
The speed trap
Hyped marketing and peer pressure push businesses towards generative AI, robotic process automation and hyperscale clouds. Vendors promise instant savings, yet private sector productivity still dipped 0.6% in 2024.
Rushed adoption hides three risks: data misuse that erodes compliance and reputation, intellectual property (IP) pitfalls that blunt competitive edge and accountability gaps that amplify losses.
Guardrails and discipline may seem like brakes on growth – but projects that respect these constraints need fewer fixes and keep stakeholders on side. The tortoise may look slow, but it navigates safely while the hare runs into avoidable risks.
1. Data misuse: building trust
All personal data processing must comply with UK GDPR. Business and personal data are often intermingled, so it’s simpler to adopt compliant practices by default and introduce appropriate internal policies. This includes collecting only necessary data, anonymising or pseudonymising where possible, encrypting, restricting access and maintaining breach procedures and records.
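To make “compliant by default” concrete, here is a minimal sketch, assuming a simple customer record feeding a hypothetical analytics pipeline, of pseudonymisation and data minimisation in Python; the field names and key handling are illustrative only, not a prescribed approach.

```python
# Illustrative sketch only: pseudonymise a customer record before it
# reaches an analytics pipeline. Field names are hypothetical.
import hmac
import hashlib

# The key must be held separately from the dataset and access-controlled.
SECRET_KEY = b"replace-with-a-key-held-outside-the-dataset"

def pseudonymise(record: dict) -> dict:
    """Replace the direct identifier with a keyed hash and drop
    fields the analysis does not need (data minimisation)."""
    token = hmac.new(SECRET_KEY, record["email"].encode(), hashlib.sha256).hexdigest()
    return {
        "customer_token": token,  # re-identifiable only via the key
        "purchase_total": record["purchase_total"],
        # name, address and email are deliberately not carried forward
    }

print(pseudonymise({"email": "jane@example.com", "name": "Jane", "purchase_total": 42.0}))
```

Note that pseudonymised data is still personal data under UK GDPR, because anyone holding the key can re-identify individuals; keeping the key separate and restricted is what makes the technique worthwhile.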
The new Data (Use and Access) Act 2025 introduces a broader concept of “legitimate interests” and eases restrictions on automated decision-making. Businesses can now deploy AI in HR, credit scoring and customer service, provided there’s human oversight, transparency, the ability for individuals to contest decisions and safeguards against bias and discrimination.
If personal data is transferred outside the UK, you must adopt an appropriate transfer mechanism, such as the UK’s International Data Transfer Agreement, and carry out a transfer risk assessment. This isn’t just a paperwork exercise; non-compliance can lead to fines or even a ban on further processing.
2. Protecting intellectual property
Generative AI models are trained on third-party content, which has triggered lawsuits over unauthorised use of IP and data. Some claims have favoured the provider and others the rights owner. Anthropic recently agreed to pay $1.5 billion to settle claims over its unauthorised use of books to train Claude, its popular AI tool.
On the other hand, the tech can create new IP rights. Copyright protects software, database rights cover structured collections of data and confidentiality preserves know-how. Patent protection might be available for computer-implemented inventions that deliver a technical contribution.
Ownership questions arise around improvements, overlaps with pre-existing IP and third-party infringement. Does the vendor, business user or third party own it? What if AI causes infringement? Open-source software is widely used but carries risk – courts have awarded compensation both where software is used beyond its standard licence and where the code itself infringes earlier code.
Take time to clarify who owns the rights at each stage of use. Contracts with vendors, employees, subcontractors and collaborators should define IP rights. Make it clear whether automated code generation is permitted and whether the provider has the relevant consents. Consider warranties, indemnities, code or data escrow, continuity plans and whether the provider has insurance.
3. Accountability gaps and liability
Implementing technology is fast and easy; doing it properly, with accountability, takes longer. The problem worsens when businesses fail to endorse useful technology, or proactively refuse to adopt it. This can lead to “Shadow IT”, where staff adopt unauthorised tools that make their jobs easier, leaving the business exposed to risks it can’t control.
Three governance practices are key here:
- Publish clear rules on which technologies can be used, explaining why excluded tools are off limits to reduce Shadow IT, and addressing the data misuse and IP risks above.
- Appoint an empowered owner for each system to approve changes, pause rollouts and authorise go-live only after stability and bias tests pass.
- Maintain a decision log that ties risk assessments to design choices and technology implementations, showing why certain trade-offs were made (a minimal entry is sketched below).
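By way of illustration, a decision log doesn’t need heavyweight tooling. The Python sketch below, with hypothetical field names, identifiers and file path, appends one machine-readable entry that links a risk assessment to the design choice it justified.

```python
# Illustrative sketch: one entry in a machine-readable decision log.
# All field names and identifiers are hypothetical.
import json
from datetime import date

entry = {
    "date": date.today().isoformat(),
    "system": "invoice-triage-model",
    "decision": "go-live approved for internal invoices only",
    "risk_assessment": "DPIA-2025-014",
    "trade_off": "accepted lower recall in exchange for an explainable model",
    "owner": "Head of Finance Systems",
}

# Append-only JSON Lines file: one dated, auditable record per decision.
with open("decision_log.jsonl", "a") as log:
    log.write(json.dumps(entry) + "\n")
```

An append-only file like this gives the empowered system owner a simple, dated audit trail to point to after an incident.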
These practices are inexpensive, reduce exposure to risk and strengthen post-incident defences.
It’s also important to add protections into your contracts with vendors, such as service level agreements, performance warranties and indemnities against breaches. Adjust limitation of liability clauses to reflect the risk: high enough to deter corner-cutting, but not so onerous that providers walk away.
Internal controls are essential: validate models, monitor for drift, act on alerts and document each step to prove reasonable care. You must also address product-liability exposure and algorithmic bias under equality law.
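As a rough sketch of what drift monitoring can mean in practice, the Python fragment below compares recent model inputs against a validation-time baseline; the baseline figure, threshold and feature are all hypothetical, and a production system would use a formal statistical test and proper alerting rather than a print statement.

```python
# Illustrative sketch: a crude drift check against the input
# distribution the model was validated on. Numbers are hypothetical.
from statistics import mean

BASELINE_MEAN = 1250.0   # mean invoice value when the model was validated
DRIFT_TOLERANCE = 0.20   # flag a review if the mean shifts more than 20%

def check_drift(recent_values: list[float]) -> bool:
    """Compare recent inputs against the validation baseline and
    flag the system owner when they have drifted."""
    shift = abs(mean(recent_values) - BASELINE_MEAN) / BASELINE_MEAN
    drifted = shift > DRIFT_TOLERANCE
    print(f"mean shift {shift:.1%} -> {'ALERT: review model' if drifted else 'ok'}")
    return drifted

check_drift([1510.0, 1620.0, 1480.0, 1705.0])
```

Recording the outcome of each check, alongside the decision log above, is one way to document the steps taken and demonstrate reasonable care.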
Future considerations
AI and cloud regulation is evolving. The EU AI Act already shapes global practice; systems placed on, or affecting, the EU market must comply, and this reaches UK businesses too, even after Brexit.
The US and UK have so far avoided comprehensive AI legislation, fearing it would hamper development and leave them behind in the AI race, but momentum for regulation is growing.
You should include change control clauses so contracts can adapt to new laws, and require vendors to maintain certifications and to disclose material changes to sub-processors or their security posture. Align technology adoption with your existing governance and risk frameworks to avoid ad hoc processes.
Resilience matters too. Draft continuity plans for tech outages and data corruption. Negotiate data portability, termination and exit rights and maintain appropriate cyber insurance that reflects actual risk.
Conclusion: slow down to speed up
Productivity isn’t just about fast tech adoption; it comes from smart, secure implementation, which takes several steps. Map your AI use cases and classify their legal risk, undertake Data Protection Impact Assessments and identify data controller/processor roles.
Negotiate your vendor contracts: not just availability SLAs and security, but also data processing clauses, ownership of IP, indemnities and, of course, liability.
Bear in mind that the law in this area is at an early stage and will change. Implement an AI policy and train your staff on how and when to use it.
Keep a “human in the loop” and watch out for bias or hallucinations in the outputs. Prepare incident response plans in case something goes wrong.
The tortoise wins not by dawdling but by choosing the safest path, bypassing obstacles the hare hasn’t seen.