Green - Labour market
Require human review of workplace AI
Require assessments, worker consultation and human review for AI hiring, monitoring and discipline.
Last updated: May 2026.
Regulatory baseline
The central case moves beyond the UK white paper’s principles-led approach by creating workplace-specific duties and human review rights.
- Public bodies must audit AI systems.
- Employers face compliance duties.
- Productivity delay is the main risk.
Core trade-offs
Workers gain protection from opaque automated decisions. Employers lose some speed and automation savings. If drawn too broadly, the policy could slow useful AI adoption.
- Workers gain contestability.
- Employers face compliance costs.
- AI productivity may slow.
Illustrative fiscal impact
+GBP 0.2bn to +GBP 4.0bn. Central estimate: +GBP 1.0bn.
- Positive numbers mean public-finance pressure; negative numbers mean Exchequer savings.
- Gross costs are separated from tax, NI and benefit offsets.
- Private business costs are not automatically fiscal costs.
- Behavioural responses widen the range materially.
- This is not an official costing.
Economic impact by 2027-28
- Jobs: May protect some workers, but slower AI adoption can preserve inefficient tasks.
- Wages: Protects against unfair wage and discipline decisions, not a general pay rise.
- Prices: Compliance costs may pass through in AI-intensive services.
- GDP / productivity: Likely negative if rules delay low-risk AI productivity gains.
Assessment
The policy is easier to justify for hiring, discipline and surveillance than for all workplace AI. A broad human-review rule could protect workers while slowing productivity-enhancing adoption.
Confidence: Low. Compliance cost and lost-productivity channels are not officially costed.
Main risks
- Overbreadth: Covering low-risk AI could delay useful productivity tools.
- Regulatory capacity: Existing regulators may lack technical resources.
- Box-ticking: Human review may become formal rather than meaningful.
Safeguards
- Limit hard duties to high-risk uses.
- Fund technical regulator capacity.
- Require audit trails, not blanket bans.
Academic evidence
Acemoglu and Restrepo, Journal of Political Economy, 2020
Robots and Jobs: Evidence from US Labor Markets
Automation can displace tasks and workers even when it raises output in some firms.
Supports caution where AI rules trade worker protection against productivity.
Acemoglu, Autor, Hazell and Restrepo, NBER, 2022
Artificial Intelligence and Jobs: Evidence from Online Vacancies
AI exposure is already visible in vacancy patterns and skill demand, not merely a matter of future speculation.
Relevant to worker protections around AI deployment.
UK government evidence
Department for Science, Innovation and Technology, 2023
A pro-innovation approach to AI regulation
The UK AI white paper relies on principles and existing regulators rather than a single AI regulator.
Defines the baseline for stronger workplace AI law.
House of Commons Library, 2023
Artificial intelligence and employment law
Commons Library identifies employment-law issues around automated decision-making, transparency and contestability.
Supports worker-risk channels for AI protections.
Sources
- PolicyLens illustrative scenario methodology for "Require human review of workplace AI" (internal) - PolicyLens, 2026
- A pro-innovation approach to AI regulation (UK government white paper) - Department for Science, Innovation and Technology, 2023
- Artificial intelligence and employment law (parliamentary briefing) - House of Commons Library, 2023
- Using AI in the workplace (international evidence) - OECD, 2024
- Robots and Jobs: Evidence from US Labor Markets (academic article) - Acemoglu and Restrepo, Journal of Political Economy, 2020
- Artificial Intelligence and Jobs: Evidence from Online Vacancies (academic article) - Acemoglu, Autor, Hazell and Restrepo, NBER, 2022
- Workers' Charter 2026 (party policy source) - Green Party of England and Wales, 2026
PolicyLens estimates are illustrative and not official costings.