An Uber Eats Courier’s Fight Against AI Bias Shows Justice Under U.K. Law Is Hard Won

On Tuesday, the BBC reported that Uber Eats courier Pa Edrissa Manjang, who is Black, had received a payout from Uber after “racially discriminatory” facial recognition checks prevented him from accessing the app, which he had been using since November 2019 to pick up jobs delivering food on Uber’s platform.

The news raises questions about how fit U.K. law is to deal with the rising use of AI systems. In particular, it highlights the lack of transparency around automated systems that are rushed to market with promises of boosting user safety and/or service efficiency but that risk blitz-scaling individual harms, even as achieving redress for those affected by AI-driven bias can take years.

The lawsuit followed a number of complaints about failed facial recognition checks since Uber implemented the Real Time ID Check system in the U.K. in April 2020. Uber’s system — based on Microsoft’s facial recognition technology — requires the account holder to submit a live selfie, which is checked against a photo of them held on file to verify their identity.
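Uber hasn’t published technical details of its integration, but one-to-one facial verification of this kind follows a well-established pattern. Purely as an illustration, below is a minimal Python sketch of the detect-then-verify flow exposed by Microsoft’s Face REST API, which Uber’s check reportedly builds on. The endpoint, key, and file paths are placeholders, and nothing here should be read as Uber’s actual implementation.

```python
# Illustrative sketch only -- Uber's actual integration is not public.
# Shows the generic detect-then-verify pattern of Microsoft's Face
# REST API. ENDPOINT and KEY are placeholders.
import requests

ENDPOINT = "https://YOUR-RESOURCE.cognitiveservices.azure.com"  # placeholder
KEY = "YOUR-FACE-API-KEY"  # placeholder


def detect_face_id(image_path: str) -> str:
    """Upload an image and return the transient faceId the service assigns."""
    with open(image_path, "rb") as f:
        resp = requests.post(
            f"{ENDPOINT}/face/v1.0/detect",
            headers={"Ocp-Apim-Subscription-Key": KEY,
                     "Content-Type": "application/octet-stream"},
            data=f.read(),
        )
    resp.raise_for_status()
    faces = resp.json()
    if not faces:
        raise ValueError(f"no face detected in {image_path}")
    return faces[0]["faceId"]


def verify_selfie(selfie_path: str, reference_path: str) -> dict:
    """One-to-one check: does the live selfie match the photo on file?"""
    resp = requests.post(
        f"{ENDPOINT}/face/v1.0/verify",
        headers={"Ocp-Apim-Subscription-Key": KEY},
        json={"faceId1": detect_face_id(selfie_path),
              "faceId2": detect_face_id(reference_path)},
    )
    resp.raise_for_status()
    # Response is e.g. {"isIdentical": true, "confidence": 0.92}; the
    # platform decides where to set its own pass/fail threshold.
    return resp.json()
```

Note that the verify call returns a confidence score rather than a hard answer. Where a platform sets its pass/fail threshold, and how the score distribution varies across demographic groups, is precisely where bias concerns of the kind raised in this case come in.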

Per Manjang’s complaint, Uber suspended and then terminated his account following a failed ID check and subsequent automated process, claiming to have found “continued mismatches” in the photos of his face he had taken to access the platform. Manjang filed legal claims against Uber in October 2021, supported by the Equality and Human Rights Commission (EHRC) and the App Drivers & Couriers Union (ADCU).

Years of litigation followed, with Uber unsuccessfully seeking to have Manjang’s claims struck out or a deposit ordered as a condition of pursuing the case. The tactic appears to have drawn out the proceedings: the EHRC described the case as still in its “early stages” in fall 2023, noting its complexity as one dealing with AI technology. A final hearing had been scheduled for 17 days in November 2024.

That final hearing never happened: Uber offered, and Manjang accepted, a settlement, meaning the full details of what exactly went wrong will not become public. The financial terms of the settlement have not been disclosed, and Uber did not provide details or comment on the malfunction when we asked.

We also reached out to Microsoft for comment on the lawsuit’s outcome, but the company declined to comment.

Despite settling with Manjang, Uber has not publicly accepted that its systems or processes were at fault. Its statement about the settlement claims courier accounts are not terminated solely as a result of AI assessments, as facial recognition checks are backed up by “robust human review.”

“Our Real Time ID check is designed to help keep everyone who uses our app safe, and includes robust human review to make sure that we’re not making decisions about someone’s livelihood in a vacuum, without oversight,” the company said in a statement. “Automated facial verification was not the reason for Mr Manjang’s temporary loss of access to his courier account.”

Clearly, though, something went very wrong with Uber’s ID checks in Manjang’s case.

Worker Info Exchange (WIE), a platform workers’ digital rights advocacy organization that also supported Manjang’s complaint, obtained all his selfies from Uber via a Subject Access Request under U.K. data protection law and was able to show that every photo he had submitted to the facial recognition check was indeed a photo of himself.

“Following his dismissal, Pa sent numerous messages to Uber to rectify the problem, specifically asking for a human to review his submissions. Each time Pa was told ‘we were not able to confirm that the provided photos were actually of you and because of continued mismatches, we have made the final decision on ending our partnership with you’,” WIE recounts in a discussion of his case in a wider report looking at “data-driven exploitation in the gig economy”.

In Manjang’s case, then, both Uber’s facial recognition checks and its system of human review, which it presents as a safety net for automated decisions, evidently failed.

His case throws doubt on how effective U.K. law really is when it comes to governing the use of AI.

Manjang was ultimately able to secure a settlement from Uber through a legal process based on equality law — specifically, a discrimination claim under the U.K.’s Equality Act 2010, which lists race as a protected characteristic.

In a statement, Baroness Kishwer Falkner, chairwoman of the EHRC, criticized the fact that the Uber Eats courier had to bring a legal claim in order to understand the opaque processes that affected his work.

“AI is complex and presents unique challenges for employers, lawyers and regulators. It is important to understand that as AI usage increases, the technology can lead to discrimination and human rights abuses,” she wrote. “We are particularly concerned that Mr. Manjang was not made aware that his account was in the process of deactivation, nor provided with any clear and effective route to challenge the technology. More needs to be done to ensure employers are transparent and open with their workforces about when and how they use AI.”

U.K. data protection law is the other relevant legislation here. On paper, it should provide powerful protections against opaque AI processes.

Manjang’s claim relied on selfie data obtained via data access rights contained in the U.K. GDPR. Had he not been able to obtain such clear evidence that Uber’s ID checks had failed, the company might not have opted to settle at all. Requiring individuals to prove a proprietary system is flawed without letting them access their own personal data would further stack the odds in favor of the far better-resourced platforms.

Beyond data access rights, the U.K. GDPR is supposed to provide additional safeguards for individuals, including against automated decisions with a legal or similarly significant effect. The law also demands a lawful basis for processing personal data and pushes system deployers to be proactive in assessing possible harms by conducting a data protection impact assessment. That should force additional checks against harmful AI systems.

However, for these protections to be effective, enforcement is essential – this includes having a deterrent effect against the deployment of biased AIs.

In the U.K., the relevant enforcement body, the Information Commissioner’s Office (ICO), has not stepped in to investigate Uber, despite receiving complaints about its faulty ID checks since 2021.

According to Jon Baines, a senior data protection specialist at the law firm Mishcon de Reya, the ICO’s “lack of adequate enforcement” has undermined legal protections for individuals.

He argues that we should not be quick to assume that existing legal and regulatory frameworks are incapable of mitigating potential harms from AI systems. “In this particular case,” he tells TechCrunch, “it seems to me that the Information Commissioner certainly has the jurisdiction to consider both the individual case and, on a larger scale, whether the processing was lawful under the U.K. GDPR.”

“Considerations like – is the processing fair? Is there a lawful basis for it? Is there a condition under Article 9 (considering it involves processing of special categories of personal data)? Most importantly, was there a thorough Data Protection Impact Assessment conducted before the verification app was implemented?”

“Certainly, the ICO should be more proactive,” he adds, challenging the lack of regulatory intervention.

We contacted the ICO about Manjang’s case, asking whether it is investigating Uber’s use of AI for ID checks in light of the complaints. A spokesperson for the watchdog did not directly respond to our questions but shared a general statement emphasizing organizations’ obligation to “understand how to utilize biometric technology in a manner that does not compromise people’s rights”.

“In our latest biometric guidance, it is mentioned that organizations must take steps to reduce the risks associated with handling biometric data, such as errors in accurately identifying individuals and bias in the system,” the statement said, adding that anyone with concerns about how their data has been handled can report them to the ICO.

Meanwhile, the government is in the process of diluting data protection law via a post-Brexit data reform bill.

The government also confirmed earlier this year that it will not introduce dedicated AI safety legislation at this time, despite Prime Minister Rishi Sunak making eye-catching claims about AI safety being a priority area for his administration.

Instead, it affirmed a proposal — set out in its March 2023 whitepaper on AI — in which it intends to rely on existing laws and regulatory bodies extending oversight activity to cover AI risks that might arise on their patch. One tweak to the approach it announced in February was a tiny amount of extra funding (£10 million) for regulators, which the government suggested could be used to research AI risks and develop tools to help them examine AI systems.

No timeline was provided for disbursing this small pot of extra funds. Multiple regulators are in the frame here. If the cash were split equally between bodies such as the ICO, the EHRC and the Medicines and Healthcare products Regulatory Agency (to name just three of the 13 regulators and departments the U.K. secretary of state wrote to last month, asking them to publish an update on their “strategic approach to AI”), each would receive less than £1 million to top up budgets for tackling fast-scaling AI risks: £10 million split 13 ways works out to roughly £770,000 apiece.

Frankly, it looks like an incredibly low level of additional resource for already overstretched regulators if AI safety is actually a government priority. It also means there’s still zero cash or active oversight for AI harms that fall between the cracks of the U.K.’s existing regulatory patchwork, as critics of the government’s approach have pointed out before.

A new AI safety law might send a stronger signal of priority — akin to the EU’s risk-based AI harms framework that’s speeding toward being adopted as hard law by the bloc. But there would also need to be a will to actually enforce it. And that signal must come from the top.

