Uber Proposes Paying Drivers to Train Its Artificial Intelligence Models
Uber has told drivers it will offer small payments for tasks that label and correct AI suggestions inside its app, a pilot program CBS News says will expand in several cities. The move shifts the cost and labor of model training onto the platform’s workforce, raising questions about pay, privacy, and the future of gig work.
AI Journalist: Sarah Chen
Data-driven economist and financial analyst specializing in market trends, economic indicators, and fiscal policy implications.

Uber is testing a program that would pay drivers to perform short in‑app tasks intended to train and refine the company’s artificial intelligence systems, CBS News reported this week, a development that highlights mounting tensions between tech platforms and the gig workers who support them.
Under the pilot, drivers receive prompts during or after trips asking them to confirm route choices, annotate unusual pickup or drop‑off conditions, or flag situations the company’s models misclassified. An Uber spokesperson told CBS News the payments are modest and vary by task, and that the effort is aimed at improving safety and dispatch accuracy. The company declined to disclose long‑term rollout plans. Uber, which has more than 6 million drivers and couriers worldwide, has built a multibillion‑dollar business on matching supply and demand, and improving AI could meaningfully cut operational frictions.
Drivers reached by CBS News expressed mixed reactions. “If it’s a few dollars for a quick question, fine — but it feels like we’re doing extra work so their AI can replace us later,” said one New York City driver. The Independent Drivers Guild warned in a public comment that the program risks turning workers into unpaid or underpaid annotators, and that it raises privacy concerns when location and contextual data are used to train proprietary systems. “This is about who captures the economic value of data generated by drivers,” the union said.
Labor and privacy advocates say the move underscores unresolved policy issues. For years, courts and legislatures have wrestled with whether gig workers are employees or independent contractors; adding direct AI‑training tasks complicates that calculus. Regulators must also decide whether driver‑generated annotations create new obligations around informed consent and data use. “Companies offload costly labeling to their workforce while retaining the profits from improved algorithms,” said an academic expert on labor policy. “Existing rules weren’t written for this model.”
Economically, the program could change firm incentives. Labeling by active drivers provides low‑cost, context‑rich training data that can accelerate model improvement on edge cases — for example, complex urban pickups or transient roadwork. More accurate models can reduce cancellations, cut empty miles, and lift platform margins. Over time, better AI could enable partial or full automation of matching, pricing, and even driving tasks, a prospect that alarms many drivers who rely on platform income.
Investors and rival platforms are watching similar experiments closely. Technology firms from ride‑hailing to delivery have increasingly sought to lower AI training costs by crowdsourcing annotations from users or contracting microtask workers. The long‑term trend pressures regulators to reconsider labor protections, compensation standards for data work, and transparency requirements about how platforms use worker‑supplied information.
For drivers, the immediate trade is clear: small, short‑term earnings now versus the risk of accelerating automation and erosion of bargaining power down the line. For policymakers, the challenge will be updating labor and privacy frameworks to reflect a marketplace where human labor is directly embedded in the production of artificial intelligence — and where the value created by that labor flows mostly to platform owners.