Academia

Algorithms for the People: Democracy in the Age of AI

Artificial intelligence and machine learning are reshaping our world. Police forces use them to decide where to send officers, judges to decide whom to release on bail, welfare agencies to decide which children are at risk of abuse, and Facebook and Google to rank content and distribute ads. In these spheres, and many others, powerful prediction tools are changing how decisions are made, narrowing opportunities for the exercise of judgment, empathy, and creativity. In Algorithms for the People, I flip the narrative about how we govern these technologies. Instead of examining the impact of technology on democracy, I explore how to put democracy at the heart of AI governance.

Drawing on my experience as a research fellow at Harvard University, a visiting research scientist on Facebook’s Responsible AI team, and a policy advisor to the UK’s Labour Party, I aim to get under the hood of predictive technologies, offering an accessible account of how they work, why they matter, and how to regulate the institutions that build and use them.

I argue that prediction is political: human choices about how to design and use predictive tools shape their effects. Approaching predictive technologies through the lens of political theory casts new light on how democracies should govern political choices made outside the sphere of representative politics. Showing the connection between technology regulation and democratic reform, I argue that we must go beyond conventional theorizing of AI ethics to wrestle with fundamental moral and political questions about how the governance of technology can support the flourishing of democracy.

Utilities for democracy: Why and how the algorithmic infrastructure of Facebook and Google must be regulated

This paper provides a framework for understanding why internet platforms matter for democracy and how they should be regulated. We describe the two most powerful internet platforms, Facebook and Google, as new public utilities — utilities for democracy. Facebook and Google use algorithms to rank and order vast quantities of content and information, shaping how we consume news and access information, communicate with and feel about one another, debate fundamental questions of the common good, and make collective decisions. Facebook and Google are private companies whose algorithms have become part of the infrastructure of our public sphere, and as such, they should be regulated as public utilities. Private powers that shape the fundamental terms of citizens’ common life should be held accountable to the public good. We show how regulating Facebook and Google as public utilities would offer opportunities for regulatory innovation, experimenting with new mechanisms of decision-making that draw on the collective judgment of citizens, reforming sclerotic institutions of representation, and constructing new regulatory authorities to inform the governance of algorithms. Platform regulation is an opportunity to forge democratic unity by experimenting with different ways of asserting public power.

Machine Learning and the Politics of Equal Treatment

Approaches to non-discrimination are generally informed by two principles: striving for equality of treatment (avoiding direct discrimination) and advancing various notions of equality of outcome (avoiding indirect discrimination). We consider how and why tensions arise between these principles when building and using machine learning, and consider practical steps to resolving them. We argue that approaches to non-discrimination that prioritize narrow interpretations of equal treatment which require blindness to difference may constrain how machine learning can be deployed to advance equality of outcome. When accurate and unbiased models predict outcomes that are unevenly distributed across racial groups, using those models to advance racial justice will often require deliberately taking race into account. If policy makers wish to ensure the widespread deployment of machine learning reduces, rather than compounds, existing inequalities, a shift in the policy regime for regulating decision-making may be required. We argue for the imposition of positive duties that require institutions to consider how best to advance equality of outcomes in a defined set of contexts and permit the use of protected characteristics to achieve that goal. While machine learning offers significant possibilities for advancing racial justice and outcome-based equality, harnessing those possibilities will require a shift in how equal treatment is interpreted and how non-discrimination laws protect it.

Explanation < Justification: GDPR and the Perils of Privacy

The European Union’s General Data Protection Regulation (GDPR) is the most comprehensive legislation yet enacted to govern algorithmic decision-making. Its reception has been dominated by a debate about whether it contains an individual right to an explanation of algorithmic decision-making. We argue that this debate is misguided in both the concepts it invokes and in its broader vision of accountability in modern democracies. It is justification that should guide approaches to governing algorithmic decision-making, not simply explanation. The form of justification – who is justifying what to whom – should determine the appropriate form of explanation. This suggests a sharper focus on systemic accountability, rather than technical explanations of models to isolated, rights-bearing individuals. We argue that the debate about the governance of algorithmic decision-making is hampered by its excessive focus on privacy. Moving beyond the privacy frame allows us to focus on institutions rather than individuals and on decision-making systems rather than the inner workings of algorithms. Future regulatory provisions should develop mechanisms within modern democracies to secure systemic accountability over time in the governance of algorithmic decision-making systems.

Governing Data: Non-Discrimination and Non-Domination in Decision-Making

The present is a critical moment in the governance of data-driven decision-making. Across the world, the debate about the governance of data and technology companies is gaining pace. In India, the government has begun to develop a ‘data marketplace’ for private sector institutions to stimulate innovation. India is also on the cusp of adopting a comprehensive legislative framework to govern data-driven decision-making. The draft Data Protection Bill, built on the conceptual foundations of the Srikrishna Report, will be the first significant legislation to draw on the data fiduciary concept developed by American lawyers. At such a moment, it is important to pause to ask fundamental questions about what the governance of data should aim to achieve. This paper goes beyond familiar concerns about privacy to compare two ideas that might underpin such a governance framework: discrimination, which I associate with American law and politics; and domination, which I associate with Indian law and politics. The U.S. places on institutions only the negative requirement not to discriminate in their decision-making procedures. This limited goal does not address the ways in which, without intent or malpractice, decision-making procedures can compound and entrench patterns of inequality.