Managed by Bots
Data-Driven Exploitation in the Gig Economy


Summary
 

Employment in the so-called gig economy has boomed in recent years, with the TUC reporting that 4.4 million people in the UK now work in the sector at least once per week. Large digital platforms have disrupted traditional players, particularly in the taxi, private hire and logistics sectors, with a business model of digitally mediated work and flexible labour terms.

 

The sector has been an employment rights battleground as platforms sought to misclassify workers as independent contractors so as to avoid employer obligations, as well as tax and national insurance contributions. Engaging a huge workforce on completely flexible terms has allowed platforms to scale rapidly and build competitive advantage from an excess supply of unpaid and underpaid workers who wait for work, depressing their own wages in the process.

 

A 2018 New School study of New York City drivers found that only 58% of a driver’s time at work is utilised serving passengers. The rest of the time is spent waiting, unpaid, yet providing valuable immediacy to the platform. As the Employment Tribunal ruling in Aslam v Uber put it: “Being available is an essential part of the service drivers render to Uber.” The ruling went on to quote Milton to illustrate the point: “They also serve who only stand and wait.”

 

In the UK, Uber has chosen to cherry-pick the recent Supreme Court ruling, refusing to pay for waiting time. At the same time, our report shows that drivers are surveilled and subjected to algorithmic control even during this waiting time. Profiling used for automated work allocation determines how long or short the wait will be for individual drivers. And where there is management control, there is an employment relationship which attracts rights for workers.

 

As case law has developed and platforms matured, employers have become more adept at hiding management control in automated algorithmic processes. The employment misclassification problem continues, but the mask rarely slips. To sustain the rights already hard won, and to further secure the right to employment status in one form or another, workers need to evidence management control.

 

The current situation for precarious workers in the gig economy presents a dual challenge. Employment law and the institutions of enforcement have been slow to tackle the abuses of platform employers. Data protection law offers tools to protect the rights of individuals, but there has not yet been adequate legal protection for digital rights at work, whether for individuals or for the collective as represented by their trade unions.

 

For these reasons, Worker Info Exchange was set up in 2019 as a digital rights NGO dedicated to research and advocacy on digital rights for workers and their trade unions. This report is an account of our experience so far in helping workers exercise their digital rights in the UK and the Netherlands, as well as in other territories with a European-based data controller, including Australia.

 

Our aim is to develop a data trust to help disparate and distributed workforces come together to aggregate their personal data at work and, with a common understanding, begin the process of building real collective bargaining power. We believe worker data trusts and greater algorithmic transparency can go a long way towards correcting the balance so workers can have a fairer deal.

 

However, just as gig economy platforms have resisted their responsibilities under employment law, our experience shows their compliance with data protection law has been poor. We have processed more than 500 subject access requests over the last eight months on behalf of workers at Amazon Flex, Bolt, Deliveroo, Free Now, Just Eat, Ola and Uber. 

The persistent and widespread lack of compliance with data protection laws has hindered worker access to data and yielded almost no meaningful algorithmic transparency over critical worker management functions such as recruitment, performance management, work allocation and dismissals. The obfuscation and general lack of compliance has prevented us from reaching scale with a worker data trust. Instead, we have had to turn to strategic litigation across international boundaries to help workers once again secure their workplace rights. 

 

On the other hand, driven by increasing pressure from transport regulators such as Transport for London and by the maturation of technology, we have seen the widespread proliferation and disproportionate use of worker surveillance in the name of fraud prevention. In our opinion, the management of ‘fraud’ is often conflated with performance management rather than the detection of actual criminal fraud. An example of this is where worker fraud probability scores are inappropriately used in automated work allocation decisions by a number of apps.

 

In the UK, these already weak digital rights for workers will be fatally compromised if the government’s proposals on GDPR divergence are passed into law. The proposals would give employers more discretion in how, or whether, to respond to data access requests, and allow them to charge a fee for doing so. There is also a proposal to strip out the current Article 22 protections, which give workers the right to know how they have been subjected to automated decision making and its likely effects, the right to challenge such decisions and the right to give their point of view.

 

The government also plans to reduce the obligation on employers to prepare data protection impact assessments (DPIAs) before the processing of highly sensitive personal data, which gig employers routinely carry out for facial recognition identity checks, location tracking and anti-fraud surveillance. This would be a hammer blow for precarious workers who have long been denied basic employment rights and who could now be robbed of the means to hold rogue employers properly to account.


Given the threats to and shortcomings in GDPR implementation, many jurisdictions, such as the EU and some US states, are currently considering greater employment rights protections for gig workers to address the issues arising from algorithmic management. In the UK, the TUC have published an AI Manifesto, proposing a series of amendments to employment and data protection law to promote greater transparency and equality in digitally mediated work. We strongly support the call for greater digital rights protections.

 
 

© 2021 Worker Info Exchange

Introduction
 

The past year has marked a turning point for gig platform workers in the realisation of their employment and digital rights. The practice of digitally mediated work has led to a convergence of employment and data protection rights and the increasing litigation and advocacy activity by workers has been yielding results in these domains. Across Europe, courts have passed several significant judgments recognising the exploitative role of algorithmic management practices by gig platforms while also condemning the lack of fairness and transparency in such automated systems. 

 

In Italy, the Bologna court ruled that Deliveroo’s rating system had discriminated against workers, while the data protection authority, Garante, served two GDPR fines to Deliveroo and Glovo for their failure to adequately disclose the workings of their job allocation and performance management algorithms. Spain passed the first legislation attempting to regulate AI in the area of employment, establishing both worker status for gig workers and the right to be informed about the rules and parameters of the algorithms they are subject to, unleashing a torrent of complaints. The legislation resulted from yet another court case against Glovo that ended up in the Spanish Supreme Court.

 

Along with these high-profile decisions, the UK Supreme Court also concluded this year that Uber drivers were party to a transportation service that is “very tightly defined and controlled by Uber” betraying a clear employment relationship, which the company claimed did not exist in its endeavour to (mis)classify the workers as independent contractors. Significantly, evidence of this relationship comes from the data driven systems rideshare platforms use to manage their workforces. Some of the issues highlighted by the UK Supreme Court related to the management of drivers through the algorithmic monitoring of job acceptance rates, route choices, driving behaviour and customer ratings. However, even though there is greater recognition of algorithmic management, the recent gains in the courts do not fully protect workers against its harms. The limb (b) worker status given to Uber drivers as a result of the Supreme Court decision is an intermediary status between contractor and employee, and still falls short of shielding them from unfair dismissals, for example.

 

Our experience suggests that these algorithmic management tools, with the addition of intensifying surveillance practices, continuously scrutinising workers for potential fraud or wrongdoing, are resulting in a deeply exploitative working environment. We are seeing an inordinate number of automated dismissals across the entire gig industry, many of which we believe to be unlawful according to Article 22 of the General Data Protection Regulation (GDPR). Article 22 does provide workers with some limited protections against the adverse effects of automated decision making and profiling, through the right to obtain human intervention and contest the decision. Article 15 of the GDPR guarantees the right to be informed about the existence of such automated decision making and to be provided with meaningful information about the logic of processing.

 

Taking these rights as a basis, Worker Info Exchange was set up with the mission of supporting gig workers in navigating this complex and under-regulated space. The goal and remit of our work is to test whether these GDPR instruments can be utilised to address unfair employment practices and expand the scope of the data made available to individuals in their capacity as workers. In other words, our ambition is to use data access as a method of building collective worker power and testing mechanisms of redress in a digitally mediated labour market.

 

When the employment relationship between the gig platform and the worker is executed through extensive data collection and analysis, employment rights become inextricably linked with the exercise of data rights. Gig platforms assert control over workers by maintaining an informational asymmetry, and data access can provide a means of exposing the power (im)balance generated by the informational gap between gig platforms and their workers. Getting access to personal data can allow workers to make independent evaluations about their working conditions and answer questions concerning their pay calculations, the quality and quantity of work offered, as well as challenging the grounds for adverse performance management including suspension and dismissal.
 

Our goal in facilitating data access is to create collective stores of data to develop a greater understanding of working conditions and consequently bargaining power. In recent years, a number of noteworthy initiatives have emerged operating with similar aims but using different methodologies for retrieving data. Some projects in this field run their own data collection and analytics on earnings and performance to assess the fairness of labour conditions (for example, Driver’s Seat Coop and WeClock, among others). These all present unique insights into the gig economy and should be thought of as constituting a continuum of data practice. We have approached this issue by demanding that platforms share the data that workers are legally entitled to; however, this has introduced additional obstacles to the larger goal of collectivising data. We took this route because we wished to set standards and precedents in data protection law, but also because we believe there are certain types of information that can only be obtained by requesting the data directly from the platforms.

 

We have found, particularly in the case of surveillance fuelled allegations of irregular activity and fraud, that it is necessary to have the data held by the companies to understand and contest the accusations. Data access can help us unearth the inconsistencies in the narratives advanced by platform companies and help shift the burden of proof from the workers back on to the platforms. From this perspective, the endeavour of demanding platform data has proven extremely successful in resolving numerous employment disputes. The simple demonstration of platforms' refusal to provide personal data has reversed several license revocations (enforced by TfL) in court and thus become an additional tool in the exercise of employment rights.

 

This constitutes the other branch of activity for Worker Info Exchange; as we are frustrated in our attempts to gain clarity and transparency over the complex systems determining workplace conditions, we frequently need to resort to litigation and turn to courts for decisions in the emergent field of digital labour rights. The artificial ‘data crisis’ the gig platforms have created is in many ways an attempt to exhaust and deplete the resources of precarious workers and unions alike by drawing disputes into courts where they can be prolonged and the accountability for corporate misconduct delayed. 

 

In line with these strands of activity, this report is written in three parts: The first section explores different facets of algorithmic management and its harms, with associated case studies. The second section deals with our process in utilising Data Subject Access Requests (DSARs) while the third offers an overview of the GDPR related cases we have taken forward in Amsterdam as well as the licensing cases we are supporting in London. We hope this report will demonstrate the current state of play in the exercise of rights at the intersection of data and labour and reveal the cumulative effects of repeated non-compliance by gig platforms.

"Platform companies are operating in a lawless space where they believe they can make the rules. Unfortunately this isn't a game; virtual realities have harsh consequences for gig workers in real life. What's encouraging is that workers themselves are not waiting for laws, policymakers or even allies in the human rights movement to rescue them. Gig workers are organizing and using their collective voice to demand new protections that are fit for purpose in a digitizing economy."

Bama Athreya, Fellow, Open Society Foundations

 


Part I: Misclassification 2.0 
Controlled by Algorithms

 

In the six-year battle for worker rights in the UK’s gig economy, Uber argued that it was merely the agent of the self-employed driver, doing nothing more than passively booking work orders and collecting payment. To advance this fiction, gig platforms set up elaborate contracts that make it appear as though the driver and passenger are transacting directly with each other, when in fact all passenger information is closely shielded by the companies. Uber, for example, generates a notional invoice on behalf of the driver to every passenger they carry. The invoice cites only the passenger’s first name and is never actually sent to the customer.

 

These misclassification techniques, commonly used across the gig economy, enable platforms to avoid employer legal responsibilities such as basic worker rights protections and national insurance contributions. In the UK it has also enabled platform companies to avoid value added sales tax (VAT). But earlier this year, the Supreme Court affirmed the right of the lower courts to discard artificial contracts and to determine the true nature of the employment relationship based on the evidence of a management relationship of control over workers.

 

As platform companies conclude that using misleading contracts is no longer viable as a method of employment misclassification, they will be tempted to double down on process automation for the concealment of management control. Algorithmic control becomes misclassification 2.0. Indeed, there is ample evidence that this is already happening. Gig platforms are more determined than ever to pursue misclassification strategies so that they can continue to control the workforce while avoiding the risk that drivers might graduate from ‘worker’ status with limited rights to employee status with substantially more rights. 

So what is algorithmic control and what are the specific risks for gig workers? In the ride-share and delivery industries specifically, the means of algorithmic management of greatest concern to us include the following:

The management decisions above are mostly automated or semi-automated with limited human intervention. Business models of the gig economy rely on mass automation of management decisions and workplace supervision. While some employers are reticent on this point, Deliveroo has been quite forthright about it in their rider privacy policy:
 

“Given the volume of deliveries we deal with, we use automated systems to make the automated decisions described above as they provide a more accurate, fair and efficient way of identifying suspected fraud, preventing repeated breaches of your Supplier Agreement and limiting the negative impact on our service. Human checks would simply not be possible in the timeframes and given the volumes of deliveries that we deal with.”


Surveillance
 

Intrusive surveillance for the stated purpose of security and identification. This encompasses the use of fraud detection and facial recognition technologies. We are aware that surveillance is conducted even when the worker has not logged in to make themselves available for work. It also includes surveilling the worker’s use of the app as a consumer.

Work Allocation
 

Uber had until very recently insisted that work allocation is decided solely on the proximity of drivers and passengers to each other, but now states that past behaviour and preferences are factored in. Ola uses driver profiles, which include ‘earning profile’ and ‘fraud probability’ scores, in automated decision making for work allocation.


Performance Management

Assessment of work performance includes, but is not limited to, the monitoring of driving behaviour and ETAs, customer ratings, job acceptance and completion rates, interaction with support staff, and availability.


Pricing
 

Closely related to work allocation is automated price setting. Perhaps the best-known method is Uber’s so-called ‘surge’ or ‘dynamic pricing’, which purports to clear market demand with real-time, local price fluctuations.

Surveillance Arms Race
 

We have been seeing a surveillance arms race in the gig economy since Uber introduced its so-called Hybrid Real Time Identification System during 2020. Just one day before Transport for London (TfL) announced its decision to refuse renewal of Uber’s license in November 2019, Uber offered to introduce this surveillance system, which incorporates facial recognition with GPS monitoring.
 

This was in response to TfL’s complaint that 21 drivers (out of 90,000 analysed over several years) had been detected engaging in account sharing, which allowed potentially unlicensed and uninsured drivers to illegally offer their services on the app. The activity was made possible by resetting the GPS location of the device to appear outside the UK, where it is possible for drivers to upload their own photos. This gap was quickly closed by Uber, and the activity detected was vanishingly small compared to the scale of Uber’s operation. The introduction of facial recognition technology by the industry has been entirely disproportionate to the perceived risk. Nevertheless, the requirement for real-time identification went on to become a condition of Uber’s license renewal at Westminster Magistrates Court in September 2020.
 

In the case of Uber, both the platform’s management and TfL failed to ensure that appropriate safeguards were put in place to protect the rights and freedoms of drivers, despite TfL having reviewed the data protection impact assessment for the technology in March 2020. According to TfL reports, 94% of private hire vehicle (PHV) drivers are from black and ethnic minority backgrounds, and the introduction of this technology, which is well recognised for its low accuracy rates within these groups, has proven disastrous for vulnerable workers already in precarious employment.
 

Bolt has since announced that it was investing €150 million in AI driver anti-fraud detection systems including facial recognition. Deliveroo announced that they too would introduce facial recognition identity checks. Ola Cabs has also rolled out facial recognition identification as a feature of its Guardian system, incorporating machine learning which they claim enables them to “continuously learn and evolve from millions of data points every single day, to improve risk signalling and instant resolution.”  
 

Free Now, a Daimler and BMW joint venture, also closely surveils drivers as part of their fraud prevention programme. Indeed, in documents filed with the High Court in a judicial review of TfL’s decision to grant them a license in London, Free Now disclosed that TfL has made monthly reports of driver dismissals for various reasons (including ‘fraudulent activity’) a condition of their recent license renewal. But the description of the data processed for the purpose of fraud prevention raises more questions than Free Now’s privacy policy answers.

In this document, Free Now states that they use a ‘random forest’ algorithm to produce a fraud score which they use to “prioritise the dispatched journeys accordingly. This ensures a fair and risk minimised dispatchment.” Free Now disputed their use of this fraud detection system when we inquired about it in June 2021, claiming that this section of the privacy policy was outdated (please see the company case study in part II of the report). However, the reference to this system remained in the policy despite an update made in September 2021. We shared our report with Free Now in November and highlighted this discrepancy. Free Now has since removed the description of the ‘random forest’ algorithm, but continues to use GPS location data for fraud prevention purposes.
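To make the mechanism the privacy policy described concrete, the following is a minimal, purely schematic sketch of how an ensemble ‘fraud score’ could gate dispatch priority. Every feature name, threshold and driver record below is invented for illustration; Free Now has never disclosed its actual model, features or weights.

```python
# Schematic sketch: an ensemble of simple decision rules ('trees') votes on
# each driver; the mean vote is a 'fraud score'; lower scores get dispatch
# priority. All features and thresholds here are hypothetical.

def make_stump(feature, threshold):
    """One 'tree': votes 1 (suspicious) if the feature exceeds the threshold."""
    return lambda signals: 1 if signals[feature] > threshold else 0

# A 'forest' is an ensemble of such trees; the score is the mean vote.
FOREST = [
    make_stump("gps_resets_per_week", 2),
    make_stump("device_switches_per_day", 3),
    make_stump("cancelled_after_accept_rate", 0.4),
]

def fraud_score(signals):
    return sum(tree(signals) for tree in FOREST) / len(FOREST)

def dispatch_order(drivers):
    """Lower fraud score -> higher dispatch priority (the 'risk minimised
    dispatchment' the policy referred to)."""
    return sorted(drivers, key=lambda d: fraud_score(d["signals"]))

drivers = [
    {"id": "A", "signals": {"gps_resets_per_week": 0,
                            "device_switches_per_day": 1,
                            "cancelled_after_accept_rate": 0.1}},
    {"id": "B", "signals": {"gps_resets_per_week": 5,
                            "device_switches_per_day": 4,
                            "cancelled_after_accept_rate": 0.6}},
]
print([d["id"] for d in dispatch_order(drivers)])  # driver A is dispatched first
```

Even in this toy form, the worker-facing harm is visible: the score silently reorders access to work, with no notice to the driver and no way to contest the inputs.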

 

What is particularly concerning about the use of these systems is that they conflate fraud management with performance management. The fact that such ‘fraud’ indicators are used as variables for work allocation, and that the behaviours generating them are allowed to continue on the platform, demonstrates that these are not instances of criminal fraud but mechanisms of control, which assess how well workers are performing against the opaque metrics set by companies. We suggest that any ‘fraud’ terminology used in these contexts also functions as part of the misclassification game, designed to conceal the employment relationship.

 

Surveillance Case Study I: Facial Recognition

In April 2020, Uber introduced a Real Time ID (RTID) system in the UK which uses a combination of facial verification and location checking technologies to authenticate drivers' identities and prevent them from sharing access to their accounts. The RTID system incorporates Microsoft’s FACE API, facial recognition software, and requires drivers and couriers to periodically take real-time selfies to continue using the Uber app. The photo is then checked against the driver’s account profile picture (and in some jurisdictions, against public databases to “prevent identity borrowing or to verify users’ identities.”)
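To illustrate why checks of this kind can fail genuine workers, here is a minimal sketch of threshold-based face verification: an embedding of the live selfie is compared against the stored reference photo and accepted only above a similarity cutoff. The embeddings, the cosine-similarity comparison and the threshold value are all illustrative assumptions, not details of Uber’s or Microsoft’s actual system.

```python
# Sketch of threshold-based face verification. Real systems derive the
# embeddings from a learned face encoder; the vectors and cutoff below
# are invented to show the failure mode, not to model any real product.
import math

def cosine_similarity(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

THRESHOLD = 0.8  # illustrative; where this is set drives the false-reject rate

def verify(reference_embedding, selfie_embedding):
    return cosine_similarity(reference_embedding, selfie_embedding) >= THRESHOLD

reference = [0.9, 0.1, 0.4]
good_selfie = [0.85, 0.15, 0.38]  # same person, similar conditions
poor_conditions = [0.2, 0.9, 0.1]  # same person, badly degraded embedding

print(verify(reference, good_selfie))      # accepted
print(verify(reference, poor_conditions))  # rejected: a genuine user fails
```

The second case is the one that matters for workers: when lighting, camera quality or encoder bias degrades the embedding, a genuine worker is rejected, and without human review that rejection can cascade into dismissal.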

Pa Edrissa Manjang had been working with Uber for about a year when he was deactivated due to a selfie verification failure. While Uber drivers and couriers routinely provide selfies, these are not stored on the workers’ phones and they cannot retain the evidence of their submissions. Pa was not given any warnings or notified of any issues until his dismissal; the Real Time ID verification system appeared to approve all of his photographs with a green check. 

 

Following his dismissal, Pa sent numerous messages to Uber to rectify the problem, specifically asking for a human to review his submissions. Each time Pa was told “we were not able to confirm that the provided photos were actually of you and because of continued mismatches, we have made the final decision on ending our partnership with you.” We obtained the selfies in question through a subject access request, which revealed that all of the photos Pa submitted were in fact of him. This was the first instance in which we succeeded in obtaining the selfies submitted by a courier or driver. It is unclear why this request succeeded when many before it failed.


We also wrote to Microsoft earlier in the year to raise our concerns regarding Uber’s unregulated use of FACE API across its platform. In response, Microsoft stressed that all parties involved in the deployment of such technologies have responsibilities which include: "incorporating meaningful human review to detect and resolve cases of misidentification or other failure" and "to provide support to people who believe their results were incorrect; and to identify and address fluctuations in accuracy due to variation in conditions." Pa’s case suggests that these crucial checks have not been implemented in the processing of RTID images. 

When asked about the human review process and the facial recognition issues outlined in this case study, Uber claimed that all human reviewers take a test developed by cognitive psychologists to qualify as reviewers, and go through additional training in the form of weekly coaching and quality audits. Uber also stated that they had conducted internal fairness tests on their ‘facial verification’ technology and “found no evidence that the technology is flagging people with darker skin complexions more often, nor that it is creating longer waiting times due to additional human review.”

 

Pa is now bringing a case against Uber to challenge its racially discriminatory facial recognition deployment, represented by Bates Wells, with support from the Equality and Human Rights Commission, the App Drivers and Couriers Union and Worker Info Exchange.

 

Surveillance Case Study II: Geolocation Checks 
 

While the use of flawed facial recognition systems is undoubtedly problematic, we have also seen many drivers dismissed following false accusations from Uber of fraudulent account sharing, triggered by two devices being detected in two locations at the same time. In all the cases we have analysed, we found that the driver had installed the app on two devices for convenience, but with only one of the devices logged in for work.
 

Just before 8 pm on September 11, 2020, Aweso Mowlana was working for Uber in South London. He was a 4.95-star rated driver who had conducted more than 11,500 trips in over five years working for Uber. Aweso had just dropped off a passenger near Elephant and Castle when he logged off for a short break. Like many drivers, Aweso had installed the app on a second device, an iPhone. This particular evening he had left the iPhone at home and was working with his other phone, a Samsung.
 

At 8:02 pm Aweso attempted to log back into the Uber app to make himself available for his next job. Before he was allowed to log back in, he was prompted to provide a selfie as part of Uber’s Real Time Identity Check (RTID). His photo matched Uber’s reference photo, so he successfully completed the log-in procedure and continued his shift. But unknown to him, Uber’s systems had detected and/or pinged his second phone. Earlier that day, his son had picked up the iPhone by mistake and taken it with him to his girlfriend’s house in Uxbridge. Uber later said they requested an RTID check from this device at 8:03 pm, but by this time Aweso was already online in South London. Uber claims the response to the ID check was sent from the iPhone at around 11:55 pm that evening.
 

The next day, Uber informed him that his account had been ‘flagged for suspicious application activity’ and would be suspended while ‘a specialised team reviews this.’ Some time later, Uber permanently dismissed Aweso by text, saying it had ‘found evidence indicating fraudulent activity’ on his account. Uber alleged that he had shared access to his account and in doing so had breached its terms and conditions. The following month, Transport for London revoked Aweso’s license on the grounds that, given his dismissal from Uber, he could no longer be found ‘fit and proper’ to hold a public license.
 

Worker Info Exchange assisted Aweso in making a subject access request and analysing the data received. One file, called ‘Driver Detailed Device Data’, records at least some of the data streaming from devices to Uber in real time. From this file we could see as many as 230 rows of data per minute being recorded by Uber from devices. The data Uber collected from Aweso’s devices included geolocation, battery level, speed, course heading and IMEI number, among other fields. The data showed that the device in Uxbridge had never been logged in for work that day: a field entitled ‘driver_online’ showed the iPhone as ‘FALSE’ at all times, including the time it was recorded in Uxbridge. This is proof that the device was not being shared for work with others, as alleged by Uber and Transport for London. Uber failed to provide access to the personal data processed in both RTID checks, including the photos collected. The ‘Detailed Device Data’ shows no record of any further activity from the iPhone after 8:03:43 pm, and no data evidence of device activity at 11:55 pm, when Uber said it received a response to the earlier ID check.
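The kind of check we performed on the ‘Driver Detailed Device Data’ file can be illustrated with a short script. This is a minimal sketch only: the column names (`device_model`, `driver_online`) and the sample rows are assumptions modelled on the fields described above, not Uber’s actual export schema.

```python
import csv
import io

def device_ever_online(rows, device_model):
    """Return True if any row for the given device shows 'driver_online' as TRUE.

    A device that is never TRUE was never logged in for work, which is the
    core of the evidence in Aweso's case.
    """
    return any(
        row["device_model"] == device_model and row["driver_online"] == "TRUE"
        for row in rows
    )

# Hypothetical sample rows standing in for the real subject-access export.
sample = io.StringIO(
    "timestamp,device_model,driver_online\n"
    "2020-09-11T20:02:10,Samsung,TRUE\n"
    "2020-09-11T20:03:43,iPhone,FALSE\n"
)
rows = list(csv.DictReader(sample))

print(device_ever_online(rows, "iPhone"))   # the iPhone never went online
print(device_ever_online(rows, "Samsung"))  # the Samsung was working
```

On the real file, which logs hundreds of rows per minute, the same one-line scan over the ‘driver_online’ field is enough to show whether a second device was ever used for work.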

Cases like Pa’s and Aweso’s were prevalent over the past year and made up a significant volume of the casework handled by Worker Info Exchange and the App Drivers & Couriers Union. In London, Transport for London tended to immediately revoke the licenses of drivers reported to have failed Uber’s RTID checks, despite the obvious problems with the system. There are often reasonable explanations for multiple device use that are automatically classified as fraud. Uber’s own privacy policy indicates that data is collected whenever a device has the app open in the background or foreground, even if the driver is not online and ready to accept fares. It is worth noting that under Article 6 of the proposed new EU Directive on platform work, the surveillance and collection of any personal data while the platform worker is not offering or performing platform work (as in this case) would be banned.

 

In more than a dozen cases where we supported drivers appealing their revocations at the Magistrates’ Court, every appeal was upheld and TfL was ordered to reinstate the licenses. Uber commented that in cases like these, “human reviewers may still decide to deactivate the account, even if the driver has passed the photo verification.”

 

Worker Info Exchange, Big Brother Watch and the App Drivers & Couriers Union wrote a joint letter to the Mayor of London to raise our concerns about Transport for London’s reliance on flawed evidence from Uber in making revocation decisions, and demanded that, as Chair of Transport for London’s board, he order a review of all such wrongful revocations. To date, neither the Mayor nor TfL has responded.

 
 

Opaque Performance Management
 

The opacity of platforms inhibits understanding of how algorithmic control might be integrated across the span of critical processes and over time. For example, workers have not been provided the transparency they are legally entitled to in order to understand how performance profiling links to the quality and quantity of the work offered, as well as the expected yields for such work. 

In the case of Ola, we have some knowledge of the data categories they collect and process in their work allocation systems - such as fraud probability scores, earning profiles, booking acceptance and cancellation history, among others - however this does not reveal the different weightings applied to these variables, nor the logic of processing.

Uber has long maintained that its matching system is determined solely by location, despite its own “Partner-Driver” interface suggesting otherwise. Uber’s Pro programme (into which drivers are automatically enrolled, incentivising them to meet performance goals in exchange for benefits and rewards) informs drivers in vague language that “higher confirmation rates mean shorter waiting times for customers and shorter pick-up times for all drivers”, loosely alluding to the fact that declining jobs may result in fewer job offers. 

Uber has only recently offered more transparency on the matching system through an update to their privacy policy which states, “Users can be matched based on availability, proximity, and other factors such as likelihood to accept a trip based on their past behavior or preferences.” We made a freedom of information request to TfL, inquiring about what updates Uber had provided on its matching system, as it is obliged to do when making changes to its operating model. This returned no results, further highlighting the obfuscation of its algorithmic management practices and the absence of regulatory oversight. However, despite this recently updated description of the matching system, Uber strongly contested the use of past behaviour or preferences for work allocation in a statement they provided about our report: “To be clear and specific: Uber does not use individual behavior or performance when matching drivers with riders. It is based on location together with road and traffic conditions, rather than based on who they are, how they behave or perform.”  

These uncertainties about the variables determining work allocation also raise important questions about the quality of jobs offered to drivers. Are drivers with high job acceptance rates offered trips of greater length and duration, resulting in higher pay, on the basis of similar profiling? In recent years, Uber has replaced time- and distance-based pricing for customers with a fixed pricing model in which an upfront price is accepted at the start of a trip. Uber states, “upfront pricing is dynamic, which means the price is worked out in real time to help balance supply and demand.” How these systems interact, and whether and how algorithmic pricing is brought together with work allocation, is a sensitive issue about which little is still known. Even if this is not the intention of platforms, how can we be reassured that past driver preferences for higher- or lower-yielding work won’t result in drivers being offered more of the same, producing auction-type bidding for differently priced trips?

 

With the inconsistent narratives provided on these systems, and the concerns already raised about the discriminatory outcomes of using dynamic pricing systems on passengers, the prospect that drivers could also be subject to such pricing mechanisms is an issue that requires close inspection. There are serious ethical issues here if operators are offering lower prices to vulnerable workers based on profiling which indirectly predicts their willingness to accept work at different price points. In response to this question, Uber again denied any connection between user profiling and individual driver pay or passenger pricing. In their statement, Uber said: “suggestions that Uber offers variable pricing based on user-profiling is completely unfounded and factually incorrect.”

In the UK, such practices appear to run contrary to the provisions of Section 1 of the Employment Rights Act 1996, which entitles workers to receive from their employer a clear written statement of the terms and conditions of their work, including rates of pay. 

Case Study: Algorithmic Control
 
Uber routinely sends drivers messages when they are flagged by its fraud detection systems, warning that they may lose their job if they continue whatever behaviour is triggering the system. The messages contain a non-exhaustive list of potential triggers, but do not give a reason specific to the individual driver being accused of fraud. When Alexandru received the second and final of these messages, knowing another flag would result in dismissal, he decided to call the driver support team to find out why he was triggering the anti-fraud system and what he could do to avoid it. On the call, Alexandru and the support agent discussed a variety of situations that may have caused his trips to appear irregular, revealing how limited an ability support teams have to decipher the indications made by the system. Three months after this call, Uber sent an apology message stating that the warnings had been sent in error.

 

[Audio: Alexandru's Call with Uber Support, 39:14]

 

While the conversation is enlightening in terms of the dilemmas drivers face when company policy and passenger demands diverge, of particular interest to us was the discussion (25 minutes into the call) of a detour Alexandru took due to roadworks, as well as his low job acceptance rate, as potential causes of his detection by the anti-fraud system. Following the Supreme Court ruling earlier this year that classified Uber drivers as workers, Uber claimed to have made significant changes to its platform, such as offering transparency of price and destination and removing punitive measures for refusing jobs, in a bid to argue that the ruling did not apply to current Uber drivers. 
 

Alexandru’s experience on the platform r