A major test case involving the General Data Protection Regulation (GDPR) is being brought by Uber drivers against the company.
The App Drivers and Couriers Union (ADCU) has started the action against Uber, claiming that an artificial intelligence algorithm has allowed decisions with legal or similarly significant effects on its drivers to be made based solely on automated processing. The case tests the provisions of Article 22 of the GDPR, and may well inform the Information Commissioner’s Office’s stance on the ethical use of AI and the protection that should be afforded to data subjects who fall victim to automated decision making.
What does the GDPR have to say on automated processing?
Article 22 prohibits decisions based solely on automated processing, including profiling, which produce legal effects concerning the data subject or similarly significantly affect them, unless the decision is:
- necessary to enter into or perform a contract between the parties
- authorised by [European] Union or Member State law to which the controller (in this case, Uber) is subject and which also lays down suitable measures to safeguard the data subject’s rights, freedoms and legitimate interests
- based on the data subject’s explicit consent.
Where the decision making is necessary for a contract, or is based on the data subject’s explicit consent, the controller must implement suitable measures to safeguard the data subject’s rights, including at least the right to obtain human intervention, to express his or her point of view and to contest the decision.
The ADCU claims that since 2018 more than 1,000 Uber drivers have reported being wrongly accused of fraudulent activity by the company’s artificial intelligence systems, leading to their accounts being terminated without explanation.
In many cases this has meant that drivers have lost their livelihoods as a result of the automated decision making. Further, if the driver is based in London, Uber reports them to Transport for London (TfL), and they are given 14 days to justify why they should be allowed to keep their licence.
In the majority of cases the drivers are unaware of the reason Uber terminated their account, and so lack the information needed to represent themselves properly to TfL, or to express their point of view and contest the decision.
Uber claims that the algorithm in question is used only to identify suspicious activity, which is then reviewed by human managers before any decision is made on whether to terminate the driver’s account.