Absurd AI-powered worker surveillance: the latest from our London casework

Are you a worker who’s been accused of fraud or suspicious activity by the platform you work for? Do you have a proper understanding of how they reached this conclusion? 

Your data protection rights may have been violated. We can support you in challenging your dismissal by making a data access request on your behalf. Get in touch at 


At Worker Info Exchange, we've seen a huge volume of cases in London from workers who have been summarily blocked from their accounts and fired from the platforms they work for. These cases reveal flawed, error-ridden algorithms at play behind the scenes of apps like Uber, Deliveroo, Just Eat, and Free Now, whose practices in London set the stage for their operations in other major cities. These algorithms and AI systems monitor criteria such as driver cancellation rates, on-the-job activity patterns including frequent locations, valid ID and insurance documents, and vehicle movement down to the metre, among many others. They are supposedly there to detect fraudulent activity, but in reality they maintain a far-reaching system of worker surveillance, performance management, and control. Workers are not told in advance what rates they must maintain; it's not clear whether they must accept 10 rides out of every 20 offered, or consistently meet their estimated arrival times for delivery 90% of the time. Moreover, even after they are dismissed, workers are not given a proper explanation as to why. They are told only that they have not met the standards required and, worse, accused of suspicious activity or fraud.

We have been working on such cases since 2018: we've taken Uber and Ola to court over unfair dismissals, and we've written about our investigations uncovering the absurd algorithms in place, such as one which fired a Just Eat worker for moving 3 metres away from a restaurant. You'd think we'd be used to the murky practices of companies like Uber and Deliveroo. Yet some cases still make us roll our eyes and laugh. Here are the latest absurd cases we've seen from London -


1 – In March this year, Uber Eats blocked a delivery worker from their account for uploading a ‘fraudulent insurance document’. 

It turns out the document in question had simply been uploaded incorrectly, with a small section of the PDF cut off at the side. Although this did not affect the content of the document, which clearly stated the verifiable policy number of the insurance, Uber's AI-powered document checker marked it as fraudulent, and no human review was triggered until our intervention.

The worker remained blocked from their account, unable to earn, for two months. 


2 – Delivery company, Stuart, responded in May THIS year to a data request from October 2023.

We were informed that, due to a change in Stuart's internal systems for dealing with data requests, several email addresses were filtered from view, including ours. Consequently, our data request from October did not come to their attention, and the worker who had been deactivated was for months denied their GDPR rights to access their data and challenge any decision made by automated means. This also raises many questions about Stuart's management systems and security of processing, and whether they are able to properly meet their data responsibilities towards workers under Article 32 of GDPR.

Similarly, Uber have reactivated workers after our interventions, not by accepting our challenge to their accusations, but because they no longer hold the relevant records. Despite the original dismissals occurring on grounds of ‘suspicious activity’ and ‘fraud detection’, they now casually and shamelessly reactivate these allegedly risky fraudsters without so much as an apology or any recognition of the impact that an accusation of fraud and loss of the ability to earn can have.


3 – Uber Eats worker is blocked TWICE in four months for the same reason, despite the company acknowledging our evidence the first time.

In February this year, this worker was deactivated for failing the third-party ID authentication checks run by Ubble, an AI-powered video identity verification system. After multiple uploads via the Uber app, and an intervention by WIE providing evidence of our own successful checks of the documents, the worker was reactivated in March. Just TWO months later, the same worker was deactivated again for the same reason! Both times, he was told the decision was permanent. Despite having already accepted the validity of the documents, Uber Eats still subjects this worker to constant scrutiny, with the threat of dismissal at any moment hanging over his head, and offered only an email months later stating that the block was due to a ‘technical error’.

What these cases, and the many others we investigate, have in common is the accusation of fraud and the wafer-thin justification behind it. Workers are at the whim of a flawed AI-driven system that treats them as disposable and scrutinises their day-to-day activity both on and off the clock, feeding data into an obscure fraud-detection calculation that can fire them at any instant.

Platform apps describe their systems as security and risk management. In reality they are worker management systems, evaluating worker performance and issuing punishments or rewards just as an employer would. We will continue to investigate these fraud-detection systems through our data requests, to expose their malpractice and the employment relationship they try to keep hidden.

