Utilities for democracy: Why and how the algorithmic infrastructure of Facebook and Google must be regulated

This paper provides a framework for understanding why internet platforms matter for democracy and how they should be regulated. We describe the two most powerful internet platforms, Facebook and Google, as new public utilities — utilities for democracy. Facebook and Google use algorithms to rank and order vast quantities of content and information, shaping how we consume news and access information, communicate with and feel about one another, debate fundamental questions of the common good, and make collective decisions. Facebook and Google are private companies whose algorithms have become part of the infrastructure of our public sphere, and as such, they should be regulated as public utilities. Private powers that shape the fundamental terms of citizens’ common life should be held accountable to the public good. We show how regulating Facebook and Google as public utilities would offer opportunities for regulatory innovation: experimenting with new mechanisms of decision-making that draw on the collective judgement of citizens, reforming sclerotic institutions of representation, and constructing new regulatory authorities to inform the governance of algorithms. Platform regulation is an opportunity to forge democratic unity by experimenting with different ways of asserting public power.

Explanation < Justification: GDPR and the Perils of Privacy

The European Union’s General Data Protection Regulation (GDPR) is the most comprehensive legislation yet enacted to govern algorithmic decision-making. Its reception has been dominated by a debate about whether it contains an individual right to an explanation of algorithmic decision-making. We argue that this debate is misguided both in the concepts it invokes and in its broader vision of accountability in modern democracies. It is justification that should guide approaches to governing algorithmic decision-making, not simply explanation. The form of justification – who is justifying what to whom – should determine the appropriate form of explanation. This suggests a sharper focus on systemic accountability, rather than technical explanations of models to isolated, rights-bearing individuals. We argue that the debate about the governance of algorithmic decision-making is hampered by its excessive focus on privacy. Moving beyond the privacy frame allows us to focus on institutions rather than individuals, and on decision-making systems rather than the inner workings of algorithms. Future regulatory provisions should develop mechanisms within modern democracies to secure systemic accountability over time in the governance of algorithmic decision-making systems.

Machine Learning and the Politics of Equal Treatment

Approaches to non-discrimination are generally informed by two principles: striving for equality of treatment (avoiding direct discrimination) and advancing various notions of equality of outcome (avoiding indirect discrimination). We consider how and why tensions arise between these principles when building and using machine learning, and propose practical steps toward resolving them. We argue that approaches to non-discrimination that prioritize narrow interpretations of equal treatment, which require blindness to difference, may constrain how machine learning can be deployed to advance equality of outcome. When accurate and unbiased models predict outcomes that are unevenly distributed across racial groups, using those models to advance racial justice will often require deliberately taking race into account. If policy makers wish to ensure that the widespread deployment of machine learning reduces, rather than compounds, existing inequalities, a shift in the policy regime for regulating decision-making may be required. We argue for the imposition of positive duties that require institutions to consider how best to advance equality of outcomes in a defined set of contexts and permit the use of protected characteristics to achieve that goal. While machine learning offers significant possibilities for advancing racial justice and outcome-based equality, harnessing those possibilities will require a shift in how equal treatment is interpreted and how non-discrimination laws protect it.


Governing Data: Non-Discrimination and Non-Domination in Decision-Making

The present is a critical moment in the governance of data-driven decision-making. Across the world, the debate about the governance of data and technology companies is gaining pace. In India, the government has begun to develop a ‘data marketplace’ for private sector institutions to stimulate innovation. India is also on the cusp of adopting a comprehensive legislative framework to govern data-driven decision-making. The draft Data Protection Bill, built on the conceptual foundations of the Srikrishna Report, will be the first significant legislation to draw on the data fiduciary concept developed by American lawyers. At such a moment, it is important to pause and ask fundamental questions about what the governance of data should aim to achieve. This paper goes beyond familiar concerns about privacy to compare two ideas that might underpin such a governance framework: discrimination, which I associate with American law and politics; and domination, which I associate with Indian law and politics. The U.S. places on institutions only the negative requirement not to discriminate in their decision-making procedures. This limited goal does not address the ways in which, without intent or malpractice, decision-making procedures can compound and entrench patterns of inequality.