
Employment in the age of automation

16 February 2026


As job automation accelerates and AI becomes our everyday reality, the UK workforce finds itself at a crossroads. What began as a push for improved productivity and reduced labour costs is transforming the modern workplace into one that is full of opportunity for some while provoking deep uncertainty for others.

Naturally, automation is reducing the need for certain workers, particularly those in routine roles, and many businesses may be considering how to lawfully reduce headcount or restructure. If redundancies are unavoidable, employers must follow a fair, reasonable and meaningful consultation process (including collective consultation where 20 or more dismissals are proposed), explore redeployment and alternatives to dismissal and, if dismissals are necessary, terminate contracts lawfully and pay any statutory or contractual redundancy entitlements.

Alternatively, employers may offer voluntary schemes where employees agree to leave their roles in exchange for a severance payment, usually documented in a settlement agreement under which the employee relinquishes their right to bring employment-related claims.

Amid the rapid spread of automation, the distinctive value of human skills, such as craft, judgement and people-focused expertise, has never been more apparent. As a result, some demand for human labour remains. In the short term, it is also not always cost-effective for employers to upskill or reskill their workforce and implement new AI systems, robotics or machinery.

AI and HR

AI and automation are no longer futuristic concepts; they’re transforming workplaces today. From recruitment algorithms to predictive analytics in performance management, AI promises efficiency and cost savings. But alongside these benefits come significant operational challenges for employers and HR professionals.

Many UK organisations already use AI in HR processes – from automated CV screening to employee engagement platforms – to reduce manual workload, identify performance trends and risks, and tailor training. However, algorithms trained on historical data can perpetuate inequality, exposing employers to bias and discrimination risks under the Equality Act 2010. Making decisions solely through automated means can also breach privacy laws: under the UK GDPR, solely automated decision-making that has legal or similarly significant effects on individuals is restricted, and decisions made without a human element may conflict with ACAS Guidance and Codes of Practice. Further, whilst the Employment Rights Act 2025 (ERA 2025) does not contain specific provisions on AI, HR teams must ensure that their decisions (relating to recruitment, performance management and/or grievances and disciplinaries) withstand heightened legal and procedural scrutiny, as the ERA 2025 introduces a stricter duty for employers to prevent harassment, strengthened trade union rights and stronger protection against unfair dismissal.

It remains to be seen whether the existing legal framework adequately protects employees from the misuse of AI tools in the workplace. To support the shift from human-centric labour to job automation, it’s essential that the UK government provides educational tools to equip the future workforce with baseline technical skills and sets out legal and ethical standards for automation in the workplace.

Automation in the Tribunal system 

AI has also entered the Employment Tribunal. Under the ERA 2025, the government intends to extend time limits for employment tribunal claims from three to six months, which increases the window for employees to challenge AI-influenced decisions. Beyond that, parties are increasingly using AI to prepare claims, but this is not without risk: AI tools have been found to cite fake cases and inaccurate information. This gives users false confidence that they're producing professional-quality documents when, in reality, the submissions are inaccurate or speculative, causing delays and increased costs in proceedings.

Although not an Employment Tribunal case, Harber v Revenue and Customs Commissioners illustrates the risks of relying on unverified AI content. The appellant, acting in person, cited fictitious cases generated by AI. The Tribunal described this as "a serious and important issue" due to the time and costs consequences and the potential impact on public confidence in the UK's judicial system. While AI tools may appear efficient and cost-effective for case preparation during litigation, particularly for unrepresented appellants or claimants, this case shows that information generated by AI is not always reliable and cannot replace the expertise of qualified lawyers.

In December 2023, judicial guidance was published on the use of AI, stating that parties may need to confirm they have independently verified the accuracy of any AI-generated case citations or submissions. This is proving to be necessary despite advancements in the technology.

In the case of Mr S Murly-Cleves v University Hospitals Sussex NHS Foundation Trust, a London Employment Tribunal ordered an NHS employee to pay £18,000 to his employer after finding he’d altered his medical report using content believed to have been generated by AI. This case highlights that evidence must not be dishonestly altered; using AI to amend or create evidence undermines the integrity of legal proceedings and can result in hefty financial penalties.

AI governance in the EU

While the UK has taken a principles-based, sector-specific approach, the EU has opted for overarching and comprehensive legislation.

The EU AI Act was published in the Official Journal of the European Union and came into force on 1 August 2024. It applies to AI systems generally, rather than creating rules for specific sectors, and its provisions will be implemented across the EU in stages by August 2026. The Act provides a legal framework for the development, supply and use of AI systems in the EU, and some UK businesses may also be bound by its requirements.

It includes consistent rules for the supply and use of AI across its member states and prohibits certain practices, such as emotion-recognition systems in workplaces or educational institutions (except for medical or safety reasons) to prevent the violation of fundamental rights. The Act also reinforces transparency requirements for certain AI systems that interact directly with humans.

On 19 November 2025, the European Commission published the “Digital Omnibus on AI”, which proposes amendments to the EU AI Act to streamline and simplify the implementation of the AI Act across the EU, whilst easing compliance obligations and encouraging the use of AI.

Spain’s approach

Out of the EU member states, Spain has been at the forefront of AI regulation. It created the Spanish Agency for AI Supervision (AESIA) in 2023 to oversee ethical use, promote transparency and ensure compliance with national and EU regulations.

On 11 March 2025, Spain began aligning its domestic law with the EU AI Act by issuing a draft bill on the good use and governance of AI. This aims to ensure AI systems are used ethically, transparently and for the benefit of society. Until this legislation is implemented, the updated National AI Strategy 2024 provides guidance on AI use in Spain.

An example of Spain's proactive stance in the age of automation is its introduction of the Rider Law, which made it mandatory for algorithmic decisions affecting workers to be transparent.

This reform requires employers to inform workers’ representatives about the parameters, rules and instructions on which algorithms or AI-based systems rely when they affect working conditions and access to employment. This gives workers insight into how AI impacts their roles.

Conclusion

There appears to be a regulatory lag in the UK compared with neighbouring countries, as there is no dedicated AI-specific legislation in place to safeguard employees from automation and technological change in the workplace. This gap persists despite the recent scrutiny and reform of UK employment law via the Employment Rights Act 2025.

Both employment law and privacy laws must evolve to keep pace with an increasingly digital society. AI and automation are here to stay. For HR leaders in particular, the question isn’t whether to adopt these technologies, but how to do so responsibly and strategically. This means:

  • Creating AI policies
  • Training staff on legal and ethical implications
  • Investing in training and reskilling initiatives to future-proof the workforce
  • Staying ahead of regulatory developments in the UK and EU

By embracing AI responsibly, HR teams can improve productivity while safeguarding fairness, compliance and trust.
