AI in employment: time saver or ticking claim timebomb?

17th March 2022

Artificial intelligence (AI) lurks behind many web-based processes, and its use is growing as we embrace new ways of working and interacting, from video call software to remote/hybrid working.

AI wins 

One simple example is that of Vodafone – the company has more than 100,000 graduates applying for 1,000 vacancies annually; that’s a lot of filtering.

Vodafone’s HR team is reported to have contracted with HireVue, using AI technology to make its recruitment process more efficient while removing the element of human bias. This has apparently resulted in Vodafone cutting its average recruitment period from 23 days to just 11, lowering candidate dropout rates by an impressive 30% and delivering cost savings. Remarkable outcomes in anyone’s book.

The technology is also game changing in the clinical/healthcare sector, with IBM’s ‘cognitive supercomputer’ apparently having diagnosed a patient with a rare type of leukaemia which doctors (even after months of study) could not. 

AI challenges 

However, while many companies are happy to use AI technology (for example, in the form of facial recognition software), other businesses (such as Amazon and Axon) have reportedly discontinued or banned its use. While AI technology clearly brings with it many advantages, it also involves real risks, particularly where it is used to make decisions about people in the workplace and within recruitment and redundancy processes. 

Employers and recruiters are increasingly using AI technology in the following ways: 

  • to manage mundane tasks (such as reducing the number of CVs that have to be reviewed manually), to eliminate human bias within recruitment, redundancy and appraisal processes (by using AI during remote interviews to gauge vocal and facial cues/flags) and to ensure slicker onboarding of new recruits
  • to advertise job roles on social media platforms
  • to collect data and carry out day-to-day surveillance of employees/workers
  • to verify employees’/workers’ identities when they access software.

A key issue is the algorithm(s) used, how well they have been developed and how expertly they are applied. Poorly tested functionality could give the employers who rely on it a commercial and financial headache. 

Take the recent cases against Uber (Raja v Uber and Majang v Uber), in which the Independent Workers’ Union of Great Britain and the App Drivers and Couriers Union are supporting Uber drivers with their employment tribunal claims. The drivers allege that they have been affected by the inability of Uber’s AI facial recognition system to identify them.

Uber’s system uses facial recognition software which requires drivers to take a ‘real time’ photograph of themselves for authentication before they can use the app and, in turn, access work. 

The drivers allege that the failure of the software to recognise their faces is because of their race and, as such, they are claiming indirect racial discrimination.  

Unsurprisingly, bias is an ongoing and highly challenging issue within the data-based technology industry. AI software is typically trained on historical data. That past data is likely to contain inequalities, disparities and biases and, unfortunately, the algorithms used within AI can therefore reinforce, rather than eliminate, them. 
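To illustrate how that reinforcement can happen (using entirely made-up data, not anything drawn from the cases discussed here), the short Python sketch below builds a toy shortlisting ‘model’ from simulated historical decisions that disadvantaged one group of candidates; the model then carries the same disadvantage forward when scoring new candidates:

```python
# Illustrative sketch only: synthetic "historical" hiring data, not real figures.
import random
from collections import defaultdict

random.seed(0)

# Hypothetical past decisions: (years_experience, group, was_shortlisted).
# Group "B" candidates were historically shortlisted less often at the same
# experience level -- this is the bias baked into the training data.
history = []
for _ in range(10_000):
    group = random.choice(["A", "B"])
    experience = random.randint(0, 10)
    chance = experience / 10 - (0.3 if group == "B" else 0.0)
    history.append((experience, group, random.random() < chance))

# "Training": learn the historical shortlisting rate for each (experience, group).
counts = defaultdict(lambda: [0, 0])
for experience, group, shortlisted in history:
    counts[(experience, group)][0] += shortlisted
    counts[(experience, group)][1] += 1

def predicted_rate(experience, group):
    shortlisted, total = counts[(experience, group)]
    return shortlisted / total if total else 0.0

# Two equally experienced new candidates: the model reproduces the old bias.
print("Group A, 5 years' experience:", round(predicted_rate(5, "A"), 2))
print("Group B, 5 years' experience:", round(predicted_rate(5, "B"), 2))
```

The point is not the code itself but the pattern: if past decisions were skewed, a system built to match those decisions will be skewed in the same way unless it is deliberately tested and corrected.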

Areas of risk 

AI is increasingly involved in a wide range of employment law issues, so the decisions employers make in these areas are likely to be enormously important. Whilst indirect race discrimination claims are likely in the driver scenario above, this technology also has the potential to open up these further legal cans of worms:

  • Unfair dismissal claims – where AI is used as part of a dismissal process (whether on grounds of redundancy or not), who was the decision maker? Was a fair process followed? Did the technology eliminate any human bias or did it, in fact, exacerbate it? Fair processes, with clear decision makers and clear reasoning, are key to carrying out fair dismissals and avoiding unfair dismissal claims.
  • Disability discrimination claims – where AI is used during interview processes to gauge vocal and/or facial cues, what adjustments are made for those individuals who may have disabilities (whether obvious or hidden) and who, as a result of these disabilities, may have impaired speech and/or facial reactions? Aside from the potential disability discrimination claim angle, use of this technology (without adjustments) may discourage disabled applicants from applying for roles.
  • UK GDPR/data protection claims – the technology brings with it potential data protection claims. The UK General Data Protection Regulation, Retained Regulation (EU) 2016/679 (UK GDPR), provides (at Article 22) that an individual should not be subject to a decision based solely on automated processing (for example, where the use of artificial intelligence is not followed up by a human ‘sense check’ or other input) which produces legal effects concerning them or similarly significantly affects them.
  • Right to privacy claims – might the technology breach workers’ rights to privacy under Article 8 of the European Convention on Human Rights (ECHR), as given effect in UK law by the Human Rights Act 1998? For instance, if an employer monitors an employee’s computer/keyboard activity, their ‘active’ engagement pattern(s) and/or their time spent logged into a software system, does that infringe their privacy?

There are inevitable pros and cons to the integration and use of AI. Time will tell if, in the world of employment and recruitment, it is a time saver or a ticking claim timebomb.
