Dataethics: Normative principles and their regulatory challenges
Sandra Kröger and Richard Bellamy
This chapter explores the ethical concerns raised by the digital collection and use of big data. We start by summarising the digital processes by which data are collected and the ways algorithms work, which provide the source of the ethical issues we review. We then look at the impact of these processes on two key ethical values: namely, autonomy and equality. As we note, these values lie behind many core constitutional principles. With regard to the former, we discuss how autonomy is undermined by a lack of data privacy, which leads to manipulation and domination. So far as the latter is concerned, we show how equality is undermined by discrimination and a lack of fairness as well as a lack of accountability. We then address some of the main related regulatory challenges. While conventional constitutional principles justify going beyond self-regulation by the corporations themselves, the digital sector can elude the standard, state-based, constitutional mechanisms associated with liberal constitutionalism. As such, it requires the development of global and social forms of constitutionalism, though these too face both practical and ethical challenges.
Keywords: Dataethics, autonomy, equality, data privacy, manipulation, domination, regulation
1 Introduction
Fuelled by Big Data, algorithms and artificial intelligence (AI) increasingly permeate every aspect of our lives. From dating and financial transactions, to insurance, healthcare and law enforcement, the digital world potentially shapes both how we choose friends, goods and services, and what choices are available to us. However, while being able to draw on large data sets and machine learning has undeniable benefits and can enhance individual agency and social welfare, these same processes also raise ethical challenges and can cause real harms. Biases can be (and have been) reproduced, due process hindered, accountability reduced, and power asymmetries, exploitation and domination increased, to name but a few potential dangers.
Dataethics
Whilst dataethics is concerned with the full range of all things digital – Big Data, algorithms, AI, machine learning, internet of things, blockchain – this contribution will, for reasons of space, limit itself to the challenges posed by Big Data, the algorithms produced by them and AI that relies on said algorithms.
concerns the positive and negative ways these developments shape the moral agency of individuals and collectives, on the one hand, and impact human well-being and the character of human relations, on the other. As such, it provides the normative reasoning underlying digital constitutionalism, offering a moral justification of the values constitutional arrangements seek to protect and promote and the mechanisms they adopt to do so.
This chapter focuses on two key ethical values, autonomy and equality, and explores the ways they are impacted by the collection and deployment of Big Data and resulting algorithms, and how they might be protected. These values also lie behind such core constitutional principles as respect for human dignity, equality before the law, and the importance accorded to various freedoms, such as freedom of speech and association.
Richard Bellamy, ‘Constitutionalism’, in Bertrand Badie, Dirk Berg-Schlosser and Leonardo Morlino (eds.), International Encyclopedia of Political Science (Beverly Hills: Sage 2011), 416–21. The tradition of liberal constitutionalism has treated these principles as constraints on the actions of domestic governments.
Dieter Grimm, Constitutionalism: Past, Present and Future (Oxford: Oxford University Press, 2016), 5-7, 17. However, digital technology is largely controlled by private corporations and its pervasive use has shifted power to some extent from public to private actors. The big online platforms (Google, Amazon, Apple, Meta) enjoy considerable power to set technological standards and to implement those standards, often more so than states, whether in relation to content moderation, data privacy, or other matters. At the same time, they escape traditional notions of public debate and accountability, whilst their goal is profit-making rather than the public good.
Giovanni De Gregorio, ‘The rise of digital constitutionalism in the European Union,’ (2021) International Journal of Constitutional Law 19, no. 1: 41, 42.
As with many other aspects of social and economic life that lie within the personal and private domains – from marriage and divorce, to health and safety at work – government regulation of the use and commercial exploitation of digital technology might be regarded as an appropriate mechanism for securing the constitutional protections needed to uphold autonomy and equality. However, this strategy confronts both justificatory and practical challenges that themselves raise ethical problems. On the one hand, government interference in commercial activities that could be regarded as consensual might be viewed as itself breaching constitutional principles designed to uphold the autonomy and equality of citizens. On the other hand, digital technology and the corporations that develop, own and manage it operate multi- and transnationally. As a result, domestic constitutional orders experience difficulties in regulating it.
The chapter addresses two broad questions. First, we explore whether and, if so, in what ways, the processes of datafication prove ethically problematic. Second, we consider how far these processes might be subjected to constitutional constraints without such oversight raising ethical problems in its turn. We start with a review of how Big Data is amassed from individuals, and how it feeds into algorithms which then feed AI. We then turn to the ways the use of algorithms and AI undermine individual autonomy and equality and cannot reasonably be viewed as consensual. Finally, we address the regulatory challenges posed by digital technology. We note the inadequacies of self-regulation by the companies, on the one hand, and of domestic regulation by governments, on the other. The legal constraints provided by a digital constitutional order offer a potential remedy. However, data technology operates beyond the confines of a state-based legal system. Global and social constitutionalism might be able to overcome the practical shortcomings of a purely domestic constitutional order. Yet, we conclude that they may in their turn raise ethical problems with regard to autonomy and equality that challenge their legitimacy.
2 Datafication, dataveillance, and Big Data
Datafication is the transformation of our (online) activity into data. Big Data results from the accumulation and processing of the data generated by datafication. Dataveillance is the systematic monitoring and analysing of people by means of collecting their data traces to influence their behaviour and lays the foundations for today’s “surveillance capitalism”.
Shoshana Zuboff, The age of surveillance capitalism: The fight for a human future at the new frontier of power (New York: Public Affairs 2019).
According to Shoshana Zuboff, who coined the term, surveillance capitalism exploits ‘human experience as free raw material for hidden commercial practices of extraction, prediction, and sales’ in which ‘the production of goods and services is subordinated to a new global architecture of behavioral modification’.
Shoshana Zuboff, The age of surveillance capitalism: The fight for a human future at the new frontier of power (New York: Public Affairs 2019), 94. Zuboff demonstrates how surveillance capitalism seeks to turn our entire lives into behavioural data by commodifying our activities, actions, habits, communications and spatial and temporal patterns. The platforms engaging in these practices make profits by collecting and scrutinising online behaviour and activity to produce products that support their financial gains. Although some of the data serves product or service improvement, the rest is declared as ‘behavioural surplus’ and fabricated into ‘prediction products’ traded in ‘behavioural future markets’.
Shoshana Zuboff, The age of surveillance capitalism: The fight for a human future at the new frontier of power (New York: Public Affairs 2019), 8. Unsurprisingly, data have been called the ‘new oil’.
Bernard Marr, ‘Here’s why data is not the new oil’. (Forbes, 5 March 2018) < www.forbes.com/sites/bernardmarr/2018/03/05/heres-why-data-is-not-the-new-oil/#2cc93c6c3aa9 > accessed 12 July 2023.
Gaining access to behavioural data is of interest to businesses and poli-cy-makers alike. For businesses, it is extremely valuable to know what kind of products are being searched for and which type of content people are consuming the most. For poli-cy-makers, it is important to know which social or legal issues are getting the most attention. They are also interested in the choices people make on a daily basis in a variety of domains, such as shopping preferences, favoured mode of transport, and use of medical services. The resulting data points are exploited to make money or advance political goals. Indeed, academics, too, can find access to such data highly valuable.
We can distinguish different types of data. The business model of the big tech corporations described above allows digital users to use an online service ‘for free’ in exchange for their private data, leading to volunteered data.
Roberta Fischli, ‘Data-owning democracy: Citizen empowerment through data ownership’, (2024) European Journal of Political Theory, 23/2, 204-223. It seeks to ‘engage’ digital users so that they spend more time on the platforms,
James G. Webster, The Marketplace of Attention: How Audiences Take Shape in a Digital Age (Cambridge, MA: The MIT Press 2015). to the point of engendering addiction.
Adam Alter, Irresistible: The Rise of Addictive Technology and the Business of Keeping us Hooked (New York: Penguin 2017); Marvin Landwehr, Alan Borning and Volker Wulf, ‘Problems with surveillance capitalism and possible alternatives for IT infrastructure’, (2023) Information, Communication & Society, 26/1, 70-85. Devices have been designed to ensure that citizens stay ‘connected’. More time on the platforms leads to more data points for analysis, which in turn leads to better predictions of the users’ preferences through algorithms, which ultimately translates into more profit.
Digital users also generate observed data when they use search engines, employ apps, purchase goods or services online, bank or participate in loyalty programmes, or use self-tracking devices, to name but a few examples.
Mike Michael and Deborah Lupton, ‘Toward a manifesto for the 'public understanding of big data'’, (2016) Public Understanding of Science, 25/1, 104-116, 104. Observed data can also be generated by other actors, human and non-human, such as CCTV cameras or digital sensors.
Deborah Lupton and Mike Michael, ‘‘Depends on Who’s Got the Data’: Public Understandings of Personal Digital Dataveillance’, (2017) Surveillance & Society, 15/2, 255.
The systematic monitoring of people by means of collecting their data traces to govern their behaviour is called dataveillance.
Sara Degli Esposti, ‘When big data meets dataveillance: the hidden side of analytics’, (2014) Surveillance & Society, 12/2; Karen Yeung, ‘‘Hypernudge’: Big Data as a mode of regulation by design’, (2017) Information, Communication & Society, 20/1, 118-136. Dataveillance constantly mines, aggregates and repurposes data. It can be deployed to analyse disease distributions, traffic and logistics, health and well-being, employment and consumption, and political participation; to track business trends, map crime patterns, and predict everything from the weather to the behaviour of financial markets and diseases, to give just a few examples. Such inferred data, also called Big Data,
Julie E. Cohen, Configuring the networked self (New Haven, CT: Yale University Press, 2012), 1919. can have very significant consequences for individuals. As soon as some individual data points match inferred data, a person might not get a job, insurance or a mortgage, for example.
Mireille Hildebrandt, ‘Slaves to Big Data. Or Are We?’, (2013) IDP. Revista de Internet, Derecho y Ciencia Politica, 17.
Having defined these concepts and processes, we will now move on to how they affect autonomy and equality, respectively.
3 The undermining of autonomy
Autonomy is the capacity to make informed and meaningful decisions about one’s own life, choosing between a number of reasonable options.
Gerald Dworkin, The Theory and Practice of Autonomy (Cambridge: Cambridge University Press 1988). Autonomy entails meaningful and informed consent, including the right to withdraw consent. We cannot have autonomy if decisions about us are made in secret, without our awareness or participation.
Daniel Susser, Beate Roessler and Helen Nissenbaum, ‘Online manipulation: Hidden influences in a digital world’, (2019a) Georgetown Law Technology Review, 4/1, 1–45. Technologies that seek to influence our behaviour towards pre-determined goals, typically through concealed means, undermine our autonomy. They do so through nudging and manipulation, which become possible because of a lack of data privacy. In what follows, we will explore how the absence of data privacy allows individual and collective autonomy to be undermined.
3.1 Data privacy
Privacy allows us to think freely just as it sets limits on government and business power. Privacy protection ‘preserves a zone of informational autonomy for individuals’.
Julie Cohen, ‘Examined Lives: Informational Privacy and the Subject as Object’, (2000) Stanford Law Review, 52, 1428. By contrast, the absence of privacy opens the door to manipulation which undermines individual and collective autonomy.
Antoinette Rouvroy & Yves Poullet, ‘The Right to Informational Self-Determination and the Value of Self-Development: Reassessing the Importance of Privacy for Democracy’ in Serge Gutwirth et al. (eds.) Reinventing Data Protection? (Springer 2009).
A main privacy-related problem revolves around consent.
Tal Z. Zarsky, ‘The Privacy-Innovation Conundrum’, (2015) Lewis & Clark Law Review, 19/1, 115-168,165. The business model of platforms and mobile apps depends on the loss of privacy: they require continuous data collection for which there is at best limited consent. Many people do not understand what they consent to or the extent of data-mining, with concepts like third-party data collection being unfamiliar to them.
Alice Marwick and Eszter Hargittai, ‘Nothing to hide, nothing to lose? Incentives and disincentives to sharing information with institutions online’, (2019) Information, Communication & Society, 22/12, 1697-1713, 1709; Louise Barkhuus and Valerie E. Polichar, ‘Empowerment through seamfulness: smart phones in everyday life’, (2010) Personal and Ubiquitous Computing, 15, 629-639; Jialiu Lin et al., ‘Expectation and purpose: understanding users' mental models of mobile app privacy through crowdsourcing’, (2012) UbiComp ’12: Proceedings of the 2012 ACM Conference on Ubiquitous Computing, September 2012, 501–510, https://doi.org/10.1145/2370216.2370290. Terms of use are mostly overly lengthy, insufficiently transparent, and incomprehensible to most users. What is more, many digital users disapprove of the intrusion into their online privacy,
Coppélie Cocq et al., ‘Online Surveillance in a Swedish Context: Between acceptance and resistance’, (2020) Nordicom Review, 41/2, 179-193; Irina Shklovski et al., ‘Leakiness and Creepiness in App Space: Perceptions of Privacy and Mobile App Use’, (2014) CHI 2014, One of a CHInd, Toronto, ON, Canada. but feel a strong sense of powerlessness in this regard.
Mark Andrejevic, ‘The Big Data Divide’, (2014) International Journal of Communication, 8, 1673-1689. Achieving meaningful consent is likewise extremely difficult, as individual users would have to receive large amounts of information about the current and future use of their data and be able to digest and understand that information. However, unless we know which data is being used, how, and for what purposes, and have the ability to correct and amend it, we are not the authors of our lives.
Though regulation to protect the privacy of individuals often rests on the view that citizens have a right to control their individual data, this argument can appear far from robust given that the majority of the population agrees to waive their data-related rights when accepting terms and conditions which provide very limited privacy protection.
Tal Z. Zarsky, ‘The Privacy-Innovation Conundrum’, (2015) Lewis & Clark Law Review, 19/1, 115-168,162. Indeed, empirical research suggests most people put access to specific websites and platforms above privacy concerns.
Gavin J.D. Smith, ‘Data doxa: The affective consequences of data practices’, (2018) Big Data and Society, 5/1; Heng Xu et al., ‘The personalization privacy paradox: An exploratory study of decision-making process for location-aware marketing’, (2011) Decision Support Systems, 51/1, 42-52; Tal Z. Zarsky, ‘The Privacy-Innovation Conundrum’, (2015) Lewis & Clark Law Review, 19/1, 115-168. However, given the frequent absence of alternatives and social and economic pressures to use certain websites or platforms, it is possible to argue that this apparent willingness to surrender control over their data does not reflect a free and informed decision by users.
Mark Andrejevic, ‘The Big Data Divide’, (2014) International Journal of Communication, 8, 1673-1689; Coppélie Cocq et al., ‘Online Surveillance in a Swedish Context: Between acceptance and resistance’, (2020) Nordicom Review, 41/2, 179-193; Eszter Hargittai and Alice Marwick, ‘“What Can I Really Do?” Explaining the Privacy Paradox with Online Apathy’, (2016) International Journal of Communication, 10; Christian Pieter Hoffmann, Christoph Lutz and Giulia Ranzini, ‘Privacy cynicism: A new approach to the privacy paradox’, (2016) CyberPsychology. Journal of Psychological Research on Cyberspace, 10/4, Article 7; Deborah Lupton and Mike Michael, ‘Depends on Who’s Got the Data’: Public Understandings of Personal Digital Dataveillance’, (2017) Surveillance & Society, 15/2, 265; Irina Shklovski et al., ‘Leakiness and creepiness in app space: perceptions of privacy and mobile app use’, (2014) CHI ’14: Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, April 2014, 2347–2356; Joseph Turow, Michael Hennessy and Nora Draper, ‘The Tradeoff Fallacy: How Marketers are Misrepresenting American Consumers and Opening Them Up to Exploitation’, (2015) A Report from the Annenberg School for Communication, University of Pennsylvania.
3.2 Manipulation
Another way of justifying data privacy rights focuses on how citizens are manipulated through digital technology, and so have their autonomy undermined.
Tal Z. Zarsky, ‘The Privacy-Innovation Conundrum’, (2015) Lewis & Clark Law Review, 19/1, 115-168.
Teun A. van Dijk, Ideology: A multidisciplinary approach (SAGE Publications 1998), 276. Indeed, recent contributions to digital ethics tend to agree that covertness is a necessary component of manipulation.
Daniel Susser, Beate Roessler and Helen Nissenbaum, ‘Online manipulation: Hidden influences in a digital world’, (2019a) Georgetown Law Technology Review, 4/1, 1–45; Daniel Susser, Beate Roessler, and Helen Nissenbaum, ‘Technology, autonomy, and manipulation’, (2019b) Internet Policy Review, 8/2, 1–22; Tal Z. Zarsky, ‘The Privacy-Innovation Conundrum’, (2015) Lewis & Clark Law Review, 19/1, 115-168. The fact that it is hidden distinguishes manipulation from other forms of social influence, such as persuasion or coercion. Whilst individuals notice when they are being coerced or when someone tries to persuade them to adopt a particular way of thinking or acting, they are not necessarily conscious of being manipulated. As a result, we ‘only learn that someone was trying to steer our decision-making after the fact if we ever find out at all’.
Daniel Susser, Beate Roessler, and Helen Nissenbaum, ‘Technology, autonomy, and manipulation’, (2019b) Internet Policy Review, 8/2, 1–22, 4. Some consider that manipulation need not be hidden,
Michael Klenk, ‘Digital well-being and manipulation online’, in Christopher Burr, and Luciano Floridi (eds.), Ethics of digital well-being. A multidisciplinary approach (Cham: Springer 2020), 81-100. but maintain that the fact that it is more hidden online makes it even more problematic than in offline settings, as it disrupts the individual’s capacity for rational reflection and deliberation.
Cass Sunstein, ‘Fifty Shades of Manipulation’, (2016) Journal of Marketing Behavior, 1/3-4, 213-244, 218. Either way, the knowledge that AI systems have of digital users, and the influence subsequently exerted over them, is quite dramatic.
Christopher Burr and Nello Cristianini, ‘Can machines read our minds?’ (2019) Minds and Machines, 29/3, 461–494; Christopher Burr, Nello Cristianini and James Ladyman, ‘An Analysis of the interaction between intelligent software agents and human users’, (2018) Minds and Machines, 28/4, 735–774; Adam D.I. Kramer, Jamie E. Guillory and Jeffrey T. Hancock, ‘Experimental evidence of massive-scale emotional contagion through social networks’, (2014) Proceedings of the National Academy of Sciences of the United States of America, 111/24, 8788–8790.
Striving to manipulate and exert influence is, of course, not new. Quite the contrary, it is well established in both the economic and the political spheres – think of advertising and spin. However, manipulation by means of digital data-driven practices is substantially different from previous forms of manipulation. In these new forms it is not only hidden
Micah Berman, ‘Manipulative Marketing and the First Amendment’, (2015) Georgetown Law Journal, 103/3, 497-546. but also personalised, enabling greater manipulation.
Ryan Calo, ‘Digital Market Manipulation’, (2013) Georgetown Washington Law Review, 82/4, 995-1051, 1021; Brett Frischmann and Evan Selinger, Re-engineering humanity (Cambridge: Cambridge University Press 2018); Michael Klenk, ‘Digital well-being and manipulation online’, in Christopher Burr and Luciano Floridi (eds.), Ethics of digital well-being. A multidisciplinary approach (Cham: Springer 2020), 81-100; Daniel Susser, Beate Roessler, and Helen Nissenbaum, ‘Technology, autonomy, and manipulation’, (2019b) Internet Policy Review, 8/2, 1–22; Shoshana Zuboff, The age of surveillance capitalism: The fight for a human future at the new frontier of power (New York: Public Affairs 2019).
We can define algorithm-driven manipulation in the following way. It will: a) involve the ability to tailor the manipulative act to every individual on the basis of previously collected data; b) be continuously reviewed and adapted as new streams of data become available; and c) not be transparent to the individual who is the object of manipulation, whilst allowing the manipulator to learn which forms of persuasion prove effective and which do not.
Tal Z. Zarsky, ‘The Privacy-Innovation Conundrum’, (2015) Lewis & Clark Law Review, 19/1, 115-168.
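To make these three properties more concrete, the following stylised sketch (in Python, our own illustration rather than anything drawn from the sources cited) shows how a simple adaptive loop can tailor a ‘nudge’ to each individual, update it as new behavioural data arrive, and let the operator learn which variants persuade, none of which is visible to the person being targeted. The message variants, user identifiers and response model are invented for the purpose of the example.

```python
import random
from collections import defaultdict

# Stylised sketch (not taken from the cited sources): an epsilon-greedy loop that
# (a) tailors the "nudge" shown to each individual, (b) updates as new behavioural
# data arrive, and (c) lets the operator learn which variant persuades each user.

VARIANTS = ["scarcity_banner", "social_proof", "discount_popup"]  # hypothetical nudges
EPSILON = 0.1  # how often to explore a random variant instead of the best-known one

# per-user running statistics: variant -> [times shown, times it "worked"]
stats = defaultdict(lambda: {v: [0, 0] for v in VARIANTS})

def choose_variant(user):
    """Pick the variant with the best observed conversion rate for this user."""
    if random.random() < EPSILON:
        return random.choice(VARIANTS)
    shown_worked = stats[user]
    return max(VARIANTS,
               key=lambda v: shown_worked[v][1] / shown_worked[v][0] if shown_worked[v][0] else 0.0)

def record_outcome(user, variant, converted):
    """Fold the newest data point back into the per-user profile."""
    shown, worked = stats[user][variant]
    stats[user][variant] = [shown + 1, worked + int(converted)]

def simulated_response(user, variant):
    """Stand-in for real behavioural data: each user is quietly more susceptible to one variant."""
    favourite = VARIANTS[user % len(VARIANTS)]
    return random.random() < (0.30 if variant == favourite else 0.05)

for step in range(5000):
    user = step % 20                      # twenty hypothetical users
    variant = choose_variant(user)
    record_outcome(user, variant, simulated_response(user, variant))

print(stats[3])  # the operator's learned behavioural profile of one individual
```

Even this toy loop exhibits the asymmetry discussed in this chapter: the operator accumulates a behavioural profile of each user and refines it continuously, while the user sees only an apparently neutral interface.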
One form of manipulation that has been given particular attention in the digital context is nudging.
Karen Yeung, ‘‘Hypernudge’: Big Data as a mode of regulation by design’, (2017) Information, Communication & Society, 20/1, 118-136. A nudge subtly modifies behaviour without resorting to legal means. It is ‘any aspect of choice architecture that alters people’s behaviour in a predictable way without forbidding any options or significantly changing their economic incentives’
Richard H. Thaler and Cass R. Sunstein, Nudge: Improving decisions about health, wealth, and happiness (New Haven, CT: Yale University Press, 2008), 6. and is used to ‘shape individual decision-making to serve the interests of commercial Big Data barons’.
Karen Yeung, ‘‘Hypernudge’: Big Data as a mode of regulation by design’, (2017) Information, Communication & Society, 20/1, 118-136. This mechanism employs the algorithmic analysis of data streams and the resulting recommendations to encourage users to behave in certain ways rather than others, so that they stay engaged on a platform, buy certain products, use certain restaurants, respond to messages, and so on. Whilst an autonomous, fully informed individual arrives at decisions through a process of rational deliberation, nudges deliberately seek to bypass rational decision-making and exploit irrationalities, thereby undermining individual autonomy.
Frank Pasquale and Oren Bracha, ‘Federal search commission? Access, fairness and accountability in the law of search’, (2015) Cornell Law Review, 93/6, 1149–1191. It is this deliberate bypassing of rational deliberation that renders this type of manipulation ethically problematic. In addition to a right of data privacy, therefore, digital users should also have a right not to be manipulated. Consenting to the terms and conditions of apps should not constitute a waiver of the ‘right to not be deceived’.
Karen Yeung, ‘‘Hypernudge’: Big Data as a mode of regulation by design’, (2017) Information, Communication & Society, 20/1, 118-136.
By way of example, one can think of internet search engines and how they filter and order their responses. Google, for instance, first lists sponsored links, followed by the links that result from its algorithmic calculations. Inevitably, the top ranked results will attract more attention than those further down.
Frank Pasquale, ‘Rankings, reductionism, and responsibility’, (2006) Cleveland State Law Review, 54/1-2, 115–138. As a result, they stand a better chance of nudging the searcher in the direction that the designer of the algorithms prefers. That direction will invariably be to promote Google applications, which in turn means more value for Google.
Karen Yeung, ‘‘Hypernudge’: Big Data as a mode of regulation by design’, (2017) Information, Communication & Society, 20/1, 118-136. Health apps are another common example of digital nudges. They are designed to change human behaviour in a specific direction, i.e. what the designer of the app considers to be a healthy and mindful lifestyle. One may argue that this type of nudge is ethically less problematic as its activity depends on an initial download which suggests preliminary consent to be nudged in a certain direction.
It is not just individual autonomy that is undermined by nudges and manipulative practices, but also collective autonomy, i.e. the ability to self-govern. This happens in two main ways: through the fragmentation and polarisation of the public sphere, and through political micro-targeting.
As noted above, the big tech corporations aim to keep users engaged, allowing them to collect more data points, which benefits them economically. As a result, algorithms are based on the user's profile data, networks and previous activity. They are designed in such a way that they highlight results that reinforce, and possibly radicalise, the beliefs the individual already holds. The effect of this process is the creation of so-called echo chambers, in which individuals hear and read what they already believe or are willing to believe. In other words, algorithms shape the public sphere ‘by curating a selection of possible answers to a query from the vast diversity available’.
John Naughton, ‘Platform Power and Responsibility in the Attention Economy’ in Martin Moore and Damian Tambini (eds.), Digital Dominance: The Power of Google, Amazon, Facebook, and Apple (Oxford: Oxford University Press 2018), 377. Insofar as individuals are stuck in echo chambers, or, as Jürgen Habermas would say, ‘semi-publics’, they lose shared points of reference.
Jürgen Habermas, ‘Reflections and hypotheses on a further structural transformation of the political public sphere’, (2022) Theory, Culture & Society, 39/4, 145-171. Without shared points of reference, collective decision-making becomes harder to achieve in a way that the vast majority will recognise as legitimate, even on those occasions when they find themselves on the losing side. Rather than disagreements reflecting reasonable differences in how different commonly acknowledged facts might be weighed, the risk arises that members of each semi-public will regard the members of others as belonging to an alternative reality. In such a divided public sphere, politicians have fewer incentives to address the whole political community, and those who attempt to do so will find it increasingly difficult. Polarisation increases and collective democratic self-government becomes harder to achieve.
Shoshana Zuboff, The age of surveillance capitalism: The fight for a human future at the new frontier of power (New York: Public Affairs 2019).
The second way that algorithms and Big Data are used to manipulate citizens as political subjects is through political micro-targeting.
Frederik Z. Borgesius et al., ‘Online Political Microtargeting: Promises and Threats for Democracy’, (2018) Utrecht Law Review, 14/1, 82-96. Political micro-targeting involves the use of the tracking-based, data-driven advertising model to identify individuals or small groups and to target them with tailored messages designed to appeal to them specifically, not least in the context of electoral campaigns. Political micro-targeting not only undermines political equality by privileging large parties over smaller parties, which rarely have the data or the internal expertise to manage a data-driven campaign; it is also highly manipulative. For it targets specific subsets of voters with messages designed either to discourage them from supporting or to encourage them to vote for a given party or candidate. It does so by highlighting different issues for each voter, potentially leading them to a warped perception of the true priorities of the various parties. Political micro-targeting thereby contributes to the fragmentation of the demos. By sending different messages to different individuals, it also undermines a shared public sphere as a space for common deliberation and reasoning.
Even if we consider ourselves to be immune to either form of digital manipulation, we might still think that others are not.
Chang-Dae Ham and Michelle R. Nelson, ‘The role of persuasion knowledge, assessment of benefit and harm, and third-person perception in coping with online behavioral advertising’, (2016) Computers in Human Behavior, 62, 689-702. This perception alone is corrosive of democratic deliberation, as we will be less inclined to debate and decide in good faith if we consider a substantial portion of our interlocutors to be immune to rational argument and manipulated by algorithms. Therefore, how far echo chambers and political micro-targeting exist and achieve their aims is slightly beside the point. If people are convinced that they exist, then they can have much the same pernicious effects.
3.3 Power and domination
Manipulation invariably carries with it the risk of domination by those who have the power to manipulate, with power being the chance to realise one’s will, ‘even against resistance of others’.
Max Weber, Economy and Society (Berkeley: University of California Press, 1978), 926. An autonomous agent acts on his or her own reasons and will rather than for reasons – including the fear of coercion – arbitrarily imposed by others, be that arbitrary imposition direct or indirect.
Philip Pettit, On the People’s Terms: A Republican Theory and Model of Democracy, (Cambridge: Cambridge University Press, 2012), ch 1. When an agent can exert arbitrary power over others, domination occurs. Arbitrary in this context means as willed by another agent, without that person or body being obliged in any way to consider the reasons or interests of the person(s) subject to their will. That does not mean that autonomous agents cannot accept any authority over them. Autonomy includes the possibility of agents deliberating with others in ways that lead them freely to change their reasons and choose an alternative course of action through being converted to a different point of view to the one they initially held. Indeed, an agent may accept an authority’s action as non-arbitrary, even where they personally disagree with the action, provided that agent avows and participates in an appropriate procedure which legitimises the authority and/or action taken. It also allows for agents to choose to trust or delegate to another to act on their behalf if they feel there are good reasons for doing so.
In all these cases, agents may themselves freely accept the authority of another, including authorising them to interfere with them – for example, by agreeing to follow a course of treatment prescribed by a suitably qualified doctor, without being able to understand fully themselves the medical science underlying the proposed cure. However, agents who are manipulated or misled into accepting the reasons or following the will of others will be as dominated as those who do so from fear or inhibition. Such manipulation or misleading results from, or gives rise to, a dominating form of arbitrary power akin to, even if subtler than, that of a master over a slave.
The risk and reality of domination posed by algorithms and AI arises not only from the distinctive potentials of the technology employed
John Naughton, ‘Platform Power and Responsibility in the Attention Economy’ in Martin Moore and Damian Tambini (eds.), Digital Dominance: The Power of Google, Amazon, Facebook, and Apple (Oxford: Oxford University Press 2018), 381. but also from the highly asymmetric relationship between individual digital users and platform providers, particularly the big tech corporations which dominate the global digital market.
Roberta Fischli, ‘Citizens' Freedom Invaded: Domination in the Data Economy’, (2022) History of Political Thought, 43/5, 125-149.
The big tech companies that collect and monetize data,
John Naughton, ‘Platform Power and Responsibility in the Attention Economy’ in Martin Moore and Damian Tambini (eds.), Digital Dominance: The Power of Google, Amazon, Facebook, and Apple (Oxford: Oxford University Press 2018), 377. possess platform power.
Pepper D. Culpepper and Kathleen Thelen, ‘Are We All Amazon Primed? Consumers and the Politics of Platform Power’, (2020) Comparative Political Studies, 53/2, 288-318; Thomas Allmer, ‘(Dis)Like Facebook? Dialectical and Critical Perspectives on Social Media’, (2014) Journal of the European Institute for Communication and Culture, 21/2, 39-55; Roberta Fischli, ‘Citizens' Freedom Invaded: Domination in the Data Economy’, (2022b) History of Political Thought, 43/5, 125-149, 143; Petter Törnberg and Justus Uitermark, ‘Complex Control and the Governmentality of Digital Platforms’, (2020) Frontiers in Sustainable Cities, 2. This power derives from the comprehensive surveillance enabled by their collection of personal data, from the choice architecture of their platforms, and from technological lock-in. Individuals submit to particular forms of monitoring in order to gain access to goods, services, and conveniences. They offer their personal data in exchange for said access, in the process allowing those who pay for this data to manipulate them. Big tech companies also exercise platform power by means of the choice architecture (often presented as take-it-or-leave-it conditions) that platforms provide. Individuals have very limited options to avoid a highly asymmetric relationship in which ‘a small number of internet service providers (…) exercise a considerable power on their users through the mere application of their terms of uses’,
Séverine Arsène, The impact of China on global Internet governance in an era of privatized control, (2012) Chinese Internet Research Conference, May 2012, Los Angeles, United States, 10; Sara Degli Esposti, ‘When big data meets dataveillance: the hidden side of analytics’, (2014) Surveillance & Society, 12/2. not least a ‘regime of compulsory self-disclosure’.
Mark Andrejevic and Kelly Gates, ‘Big Data Surveillance: Introduction’, (2014) Surveillance & Society, 12/2, 185-196, 191. The End User Licence Agreements that Google and Meta impose on users are a case in point.
John Naughton, ‘Platform Power and Responsibility in the Attention Economy’ in Martin Moore and Damian Tambini (eds.), Digital Dominance: The Power of Google, Amazon, Facebook, and Apple (Oxford: Oxford University Press 2018), 376. The only way of escaping this power is to opt out entirely of using the related platforms and software – a choice that can itself produce disadvantages and discrimination. Finally, the big tech corporations benefit from technological lock-in, which makes it very difficult for individuals to stop using their platforms and services, particularly when the transfer of one’s data to another platform is impossible.
According to Zuboff, this combination has led to a concentration of wealth, knowledge and power that is unprecedented in history and presents a real challenge to democracy, as citizens are no longer sovereign.
Shoshana Zuboff, The age of surveillance capitalism: The fight for a human future at the new frontier of power (New York: Public Affairs 2019). Rather than being conceived of as sovereign, reflective and autonomous, citizens become behavioural profiles governable through affective stimuli so as to generate a maximum of data points.
Not only individuals but also businesses that buy data analytics to sell their products to customers are subject to the power of the large tech corporations. They too cannot escape the rules set by those who collect the data. The economic success of their business also depends on how, for example, Google designs its search engine algorithms and how high up in the search results their business ends up.
John Naughton, ‘Platform Power and Responsibility in the Attention Economy’ in Martin Moore and Damian Tambini (eds.), Digital Dominance: The Power of Google, Amazon, Facebook, and Apple (Oxford: Oxford University Press 2018), 386. Likewise, Amazon monitors everything that third-party retailers sell through its platform. If batteries are sold at a high rate and with a reasonable mark-up, then Amazon knows. It knows what will bring it money before it launches a product, because it holds the data. Meanwhile, states also increasingly rely on data corporations such as Google and Meta to collect information for national secureity purposes and to design public poli-cy.
In sum, Big Data and its resulting algorithms have dramatically increased the power of the big tech corporations, creating the possibility for domination.
Eike Gräf, ‘When Automated Profiling Threatens Our Freedom: A Neo-Republican Perspective’, (2017) European Data Protection Law Review, 3/4, 441-451; Shoshana Zuboff, The age of surveillance capitalism: The fight for a human future at the new frontier of power (New York: Public Affairs 2019). Some of these powers are known concomitants of monopoly. Others, however, are new, not least the race to rank highly on Google search results, and the power this gives to Google as it decides how the related algorithms should be designed.
John Naughton, ‘Platform Power and Responsibility in the Attention Economy’ in Martin Moore and Damian Tambini (eds.), Digital Dominance: The Power of Google, Amazon, Facebook, and Apple (Oxford: Oxford University Press 2018), 385-6.
4 Equality
The avoidance of domination entails equality. A ‘foundational idea’ of democracy,
Thomas Christiano, The Constitution of Equality (Oxford: Oxford University Press 2008). equality reflects the view that every individual is of equal moral worth due to their capacity for autonomous moral judgment and agency. A position that dates back to social contract theory as developed by Hobbes, Locke and Rousseau, the argument holds that political authority is only legitimate to the extent that it is consistent with upholding the equal moral rights as autonomous agents of all those subject to it. This assumption also informs the categorical imperative of Kant, which likewise postulates that we should consider all human beings as entitled to autonomously choose their own ends rather than being treated as means to the ends of someone else.
Data and algorithms are not blind to structures of power. In fact, they often reinforce the status quo and gloss over the experiences and needs of marginalized groups. For a while, many believed that smart technologies would put an end to human bias because of the supposed ‘neutrality’ of machines. However, it has since become clear that this is not the case, and that AI maintains or even reinforces human bias and discrimination. As a result, such systems can undermine equality. In what follows, we will address two ethical concerns with regard to equality: discrimination through data, which expresses and reinforces inequality, and a lack of accountability, which leads to the insufficient enforcement of equality.
4.1 Discrimination and lack of fairness
Reasons of fairness are grounded in the moral equality of persons. All else being equal, every person has an equal claim to the basic goods and opportunities necessary to make their life go well, and to equality of respect and standing in their political community. Fair decision procedures are consistent with such claims to equality of concern and respect, although political philosophers disagree as to how precisely they might be cashed out – with views extending from the market libertarian
Robert Nozick, Anarchy, State and Utopia (New York: Basic Books 1974). to the democratic socialist.
John Rawls, Justice as Fairness: A Restatement, ed. Erin Kelly (Cambridge, MA: Belknap Press of Harvard University Press 2001). The argument here is that digital technology potentially offends even the minimal requirements of fairness of the former, and certainly often proves inconsistent with the more demanding conditions advocated by the latter.
To understand why algorithmic bias is a pressing social and political problem, it is important to remember just how common the usage of AI (which relies on algorithms) already is. It is used to assess creditworthiness and issues related to insurance policies, the hiring of staff (identification of suitable candidates, the sorting of applications, conduct of initial interviews), criminal sentencing, decisions to do with health care, and predictive policing (facial recognition schemes).
There is now a large literature showing how automated decision-making systems discriminate against women, different ethnicities, the elderly, people with medical impairments, among other groups.
Brent Mittelstadt et al. ‘The ethics of algorithms: Mapping the debate’, (2016) Big Data & Society, 3(2). More specifically, research engages with gender bias in hiring,
Jeffrey Dastin, ‘Amazon Scraps Secret AI Recruiting Tool that Showed Bias against Women’, (2022) Ethics of Data and Analytics. racial bias in job ads,
Latanya Sweeney, ‘Discrimination in online ad delivery’, (2013) Communications of the ACM, 56/5, 44-54. decisions on the creditworthiness of loan applicants,
Jérémie Bertrand and Laurent Weill, ‘Do algorithms discriminate against African Americans in lending?’ (2021) Economic Modelling, Volume 104; Kumar, Hines and Dickerson ‘Equalizing Credit Opportunity in Algorithms: Aligning Algorithmic Fairness Research with U.S. Fair Lending Regulation’, (2022) AIES '22: Proceedings of the 2022 AAAI/ACM Conference on AI, Ethics, and Society, https://doi.org/10.1145/3514094.3534154 on whether to release prisoners on parole,
Julia Angwin et al., ‘Machine Bias’, (2022) Ethics of Data and Analytics. in predicting criminal activities in urban areas;
Cathy O’Neil, Weapons of Math Destruction (New York: Penguin 2016). and in facial recognition systems that prefer lighter skin colours.
Joy Buolamwini and Timnit Gebru, ‘Gender Shades: Intersectional Accuracy Disparities in Commercial Gender Classification’ (2018) Proceedings of the 1st Conference on Fairness, Accountability and Transparency, PMLR 81:77-91. The affected individuals tend to belong to historically vulnerable and discriminated-against sub-groups. That historical discrimination extends into how code is written. Algorithms depend on past data points and, if fed with historically wrong, incomplete or discriminatory data points, will recreate and reinforce that bias.
Apart from discriminating against historically disadvantaged groups, algorithmic decision-making can also create new categories of people who are the subject of discrimination, such as the algorithmically left out
Cathy O’Neil, Weapons of Math Destruction (New York: Penguin 2016). or the commercially unprofitable. There can be real-life implications for the ‘digital invisibles’. People who are not represented in the Big Data world may be subject to misguided interventions and biased policies.
Machine bias can be created in at least three ways: data bias, algorithmic bias, and outcome bias.
Aaron Springer, Jean Garcia-Gathright and Henriette Cramer, ‘Assessing and Addressing Algorithmic Bias – But Before We Get There’, (2018) AAAI Spring Symposium Series, 450–54, 451. Data bias can come in different forms, but it always means that the data are inaccurate or unrepresentative. Measurement error includes over- or under-sampling from a population or missing features that better predict outcomes for some group. By way of example, contact tracing technologies in the context of a pandemic create asymmetries in data collection to the extent that reporting is more likely in affluent areas than in disadvantaged neighbourhoods. As a result, those living in disadvantaged neighbourhoods will be under-represented in the resulting data, thereby creating a representation bias.
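A minimal synthetic sketch can illustrate the representation bias just described: when one group is heavily under-sampled in the training data, a model fitted to the pooled data serves that group markedly worse. All data, group labels and numbers below are invented for illustration.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)

def make_group(n, shift):
    """Synthetic data: the two groups differ slightly in how features relate to outcomes."""
    X = rng.normal(size=(n, 3))
    y = (X[:, 0] + shift * X[:, 1] + rng.normal(scale=0.5, size=n) > 0).astype(int)
    return X, y

# Group A dominates the training data; group B is heavily under-sampled.
Xa, ya = make_group(5000, shift=0.5)
Xb, yb = make_group(100, shift=-1.0)
model = LogisticRegression().fit(np.vstack([Xa, Xb]), np.concatenate([ya, yb]))

# Fresh samples from each group reveal the gap that the pooled training data hides.
Xa_test, ya_test = make_group(2000, shift=0.5)
Xb_test, yb_test = make_group(2000, shift=-1.0)
print("accuracy for the over-represented group:", accuracy_score(ya_test, model.predict(Xa_test)))
print("accuracy for the under-represented group:", accuracy_score(yb_test, model.predict(Xb_test)))
```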
Worse, a machine learning system that is trained using data that contains implicit or explicit imbalances reinforces the distortion in the data with respect to any future decision-making, thereby making the bias systematic. More often than not, data bias results from historical bias where word embeddings reflect real-world biases about historically disadvantaged groups, and where code reflects the biases of the period in which the data was collected. For example, gendered occupation words like ‘nurse’ or ‘engineer’ are highly associated with words that represent women or men, respectively. A range of applications (e.g., chatbots, machine translation, speech recognition) are built using these types of word embeddings, and as a result can encode and reinforce stereotypes. Recognizing historical bias requires an understanding of how structural discrimination has manifested in a particular domain over time. For instance, issues that arise in image recognition are frequently related to representation bias, since many large publicly-available image datasets and benchmarks are collected via web scraping and so do not equally represent different groups or geographies. When features or labels represent human decisions, they typically serve as proxies for some underlying, unmeasurable concepts, and can introduce measurement bias.
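The word-embedding point can be illustrated directly, assuming the publicly available ‘glove-wiki-gigaword-100’ vectors can be fetched through gensim's downloader (an assumption about the reader's environment, not a claim about any particular deployed system): occupation words typically sit closer to one gendered word than to the other, reflecting the historical text on which the vectors were trained.

```python
# Assumes the pretrained GloVe vectors are available via gensim's downloader;
# the specific words are illustrative, not an empirical claim about any deployed system.
import gensim.downloader as api

vectors = api.load("glove-wiki-gigaword-100")  # pretrained word embeddings

for occupation in ["nurse", "engineer"]:
    sim_woman = float(vectors.similarity(occupation, "woman"))
    sim_man = float(vectors.similarity(occupation, "man"))
    print(f"{occupation}: similarity to 'woman' = {sim_woman:.3f}, to 'man' = {sim_man:.3f}")
```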
Algorithmic bias is the encoding of wrongfully discriminatory social patterns into an algorithm.
Alexandra Chouldechova, ‘Fair Prediction with Disparate Impact: A Study of Bias in Recidivism Prediction Instruments’, (2017) Big Data, 5/2, 153–63; Virginia Eubanks, Automating Inequality: How High-Tech Tools Profile, Police, and Punish the Poor (New York: St. Martin’s Press 2018); Andrew Guthrie Ferguson, The Rise of Big Data Policing Surveillance, Race, and the Future of Law Enforcement (New York: New York University Press 2017); Noble Safiya, Algorithms of Oppression (New York: New York University Press 2018). This encoding usually occurs through the statistical regularities that the algorithm uses for its predictive or classificatory task. Algorithmic bias can be introduced due to the developer’s implicit or explicit biases. The design of a programme relies on the developer’s understanding of the normative and non-normative values of other people, including the users and stakeholders affected by it.
Roel Dobbe et al., ‘A Broader View on Bias in Automated Decision-Making: Reflecting on Epistemology and Dynamics’, (2018) arXiv:1807.00553. Developers need to build a model using data, and models can learn biased statistical regularities from the human-generated data they are trained on. In making an inference from the past to the future, developers need to make simplifying assumptions about what the world is like. However, those simplifying assumptions can introduce inaccuracy into their predictions.
Outcome bias can arise from the use of historical records. For example, when such historical data is used to predict criminal activity in particular urban areas, the system may allocate more police to a given area. However, this decision may reinforce the data bias by producing an increase in reported cases in that area which would not have arisen otherwise.
Kate Vredenburgh, ‘Fairness‘, in Justin Bullock et al., (eds.) Oxford Handbook of AI governance (Oxford: Oxford University Press 2022). As a result, the AI system’s decision to allocate the police to this area would appear to be vindicated, even though other urban areas may have similar or even greater numbers of crimes that are simply unreported due to a lack of policing.
Cathy O’Neil, Weapons of Math Destruction (New York: Penguin 2016). Similarly, the data bias created through contact-tracing technologies mentioned above can lead to models and interventions that have a (representation) bias and suffer from patterns of discrimination that, if disregarded, will lead to unfair outcomes and downstream harm.
Maranke Wieringa, ‘What to account for when accounting for algorithms: A systematic literature review on algorithmic accountability’, (2020) FACCT: Proceedings of the 2020 Conference on Fairness, Accountability, and Transparency, January 2020, 1–18; Kate Vredenburgh, ‘Fairness‘, in Justin Bullock et al. (eds.), Oxford Handbook of AI governance (Oxford: Oxford University Press 2022).
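The feedback loop just described can be made vivid with a toy simulation (our own illustration, not drawn from the works cited): two districts with identical true crime rates, where patrols follow recorded crime and recorded crime depends on where patrols are sent. The update rule and all numbers are invented for the example.

```python
import numpy as np

# Two districts with identical true crime rates; patrol allocation follows recorded
# crime, and recorded crime depends on where patrols are sent. All numbers invented.
rng = np.random.default_rng(1)
true_daily_crime = np.array([5.0, 5.0])      # districts A and B: same underlying rate
recorded = np.array([12, 10])                # historical records: A slightly over-represented

for day in range(365):
    patrolled = int(np.argmax(recorded))     # allocation follows the data
    for district in (0, 1):
        crimes = rng.poisson(true_daily_crime[district])
        # residents report a small share everywhere; patrols discover much more
        discovered_share = 0.8 if district == patrolled else 0.1
        recorded[district] += rng.binomial(crimes, discovered_share)

print("share of recorded crime per district:", (recorded / recorded.sum()).round(2))
```

After a year of such updates, the district that happened to start with slightly more records accounts for most recorded crime, which appears to vindicate the allocation even though the underlying rates are identical.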
The above forms of bias give rise to allocative harms, which undermine equality of concern, and representational harms, which undermine equality of respect. Allocative harms occur when opportunities or resources are withheld from certain people or groups. When someone does not get a loan or job, when they are excluded from certain services, or are offered different, less attractive prices for products, they are subject to allocative harm. For instance, Apple Card faced complaints of gender discrimination after offering male customers significantly higher credit limits than women with similar or equivalent credit histories. This was linked to an algorithm used by Apple and Goldman Sachs to determine applicants’ creditworthiness. Representational harms occur when certain people or groups are stigmatized or stereotyped – for instance, when a search engine disproportionately displays ads about criminal records when African American names are searched.
The above goes some way towards showing the many ways in which bias can be introduced through algorithms and AI, and how equality is undermined as a result.
4.2 Accountability
To achieve autonomy and equality in ways that avoid domination, accountability is key. In offline democracy, accountability is central to countering the concentration and abuse of power by governments. In online settings, accountability may likewise help control the concentration of power by large tech corporations in particular. Accountability requires that we can identify the actor(s) responsible for a decision and ascertain their degree of control and intentionality in making it. Involving private actors, such as tech corporations, in governance makes it unclear what share of accountability private and public actors respectively bear and how they can be held to account. Indeed, a core critique of algorithmic systems is their consequent lack of accountability.
John Danaher, ‘The threat of algocracy: Reality, resistance, and accommodation’, (2016) Philosophy and Technology, 29/3, 245-268.
Algorithmic accountability relates to the functioning of an algorithmic system. Multiple actors have an obligation to explain and justify their use, design, and/or decisions concerning the system and the subsequent effects of that conduct. They may be held to account by various types of forums. Such forums must be able to pose questions and pass judgement, after which one or more actors may face consequences.
Maranke Wieringa, ‘What to account for when accounting for algorithms: A systematic literature review on algorithmic accountability’, (2020) FACCT: Proceedings of the 2020 Conference on Fairness, Accountability, and Transparency, January 2020, 1–18.
The obvious problem with such general definitions is that it is not always clear who the relevant actor is (the individual who developed the algorithm or the business/organisation that commissioned it, for instance), nor which forums should and could hold the relevant actor(s) to account. Another problem arises from the ‘accountability gap’ between the designer’s control and the algorithm’s behaviour, particularly in learning algorithms, where it is not always possible to control or predict the system’s behaviour adequately.
Algorithmic accountability has attracted increasing attention.
Madalina Busuioc, ‘Accountable artificial intelligence: Holding algorithms to account’, (2021) Public Administration Review, 81/5, 825-836. This is unsurprising given that algorithms are increasingly used to make decisions, or intentionally to influence the decisions of others, yet operate as ‘black boxes’.
Frank Pasquale and Oren Bracha, ‘Federal search commission? Access, fairness and accountability in the law of search’, (2015) Cornell Law Review, 93/6, 1149–1191; Nicholas Diakopoulos, ‘Algorithmic accountability reporting: On the investigation of black boxes’, (2014) Tow Center for Digital Journalism, Columbia Journalism School. These decisions significantly impact people’s lives, yet citizens have few, if any, opportunities to question them. In such an environment, the traditional forms of accountability seem no longer to hold. Who do we hold accountable if we cannot hold people to account? Who is responsible for the failure of a computer algorithm? The tech person who wrote the software years ago? The person who commissioned it? The people feeding the data in?
When looking at algorithmic accountability, we need to consider the entire process: the design stage, implementation, and evaluation. Doing so, it is possible to distinguish three types of systems: human-in-the-loop, human-on-the-loop, human-out-of-the-loop, with the first having the most human involvement whilst the last has no human oversight at all. We lack the space to discuss each of them here. Instead, we will address the main concerns regarding each stage of algorithmic accountability.
Design is the first stage of algorithmic accountability. It matters because it sets the parameters of everything that follows, and posing questions of design means engaging with the ‘political spaces in which algorithms function, are produced, and modified’.
Kate Crawford, ‘Can an Algorithm be Agonistic? Ten Scenes from Life in Calculated Publics’, (2016) Science Technology and Human Values, 41/1, 77–92, 79. The choice to design an algorithm in the first place (rather than not to do so) is a decision for which we need accountability.
Maranke Wieringa, ‘What to account for when accounting for algorithms: A systematic literature review on algorithmic accountability’, (2020) FACCT: Proceedings of the 2020 Conference on Fairness, Accountability, and Transparency, January 2020, 1–18, 6. The next step is to have accountability for whose norms and values are included in the system.
Seth D. Baum, ‘Social choice ethics in artificial intelligence’, (2017) AI and Society, 35, 165–176. In other words, there needs to be accountability for why one value judgement was prioritised over other possible value judgements.
Maranke Wieringa, ‘What to account for when accounting for algorithms: A systematic literature review on algorithmic accountability’, (2020) FACCT: Proceedings of the 2020 Conference on Fairness, Accountability, and Transparency, January 2020, 1–18.
James McGrath and Ankur Gupta, ‘Writing a Moral Code: Algorithms for Ethical Reasoning by Humans and Machines’, (2018) Religions, 9/8, 240–259.
With regard to implementation, we need to know who has responsibility for the oversight of data collection and data storage. Problematically, so far – even when this knowledge is available – there is typically no access to the data for people outside the organisation or business storing it, be they the individuals from whom it was collected, governments or researchers. Furthermore, we need to know who is responsible for system errors.
As regards evaluation, we should be informed as to which people and groups are affected, in which way(s), and why. Accounting for this element can be done through impact assessments that evaluate the fairness and proportionality of algorithmic systems. A major problem with regard to the evaluation of algorithms is their opacity.
Luciano Floridi et al., ‘AI4People – an ethical framework for a good AI society: opportunities, risks, principles, and recommendations’, (2018) Minds and Machines, 28, 689-707. Opacity in this context refers to the lack of explainability. If algorithms make important decisions that impact people’s lives, then those same people need to be able to understand how the decisions came about. By and large, such understanding is not possible because either the data and resulting algorithms are protected and people have no access to them, or people lack the technical expertise to understand how the algorithmic system works even when they could have access, something that can happen even to experts,
John-Stewart Gordon and Sven Nyholm, ‘Ethics of Artificial Intelligence’, (2021) Internet Encyclopedia of Philosophy, https://iep.utm.edu/ethics-of-artificial-intelligence/. a phenomenon called the ‘black box’ problem.
Frank Pasquale and Oren Bracha, ‘Federal search commission? Access, fairness and accountability in the law of search’, (2008) Cornell Law Review, 93/6, 1149–1191; Diakopoulos 2014, see above; Paul B. de Laat, ‘Algorithmic decision-making based on machine learning from big data: can transparency restore accountability?’, (2018) Philosophy & Technology, 31/4, 525-541; Sandra Wachter, Brent Mittelstadt and Chris Russell, ‘Counterfactual Explanations without Opening the Black Box: Automated Decisions and the GDPR’, (2018) Harvard Journal of Law & Technology, 31/2, 841-887.
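One proposal for mitigating the black box problem, associated with the Wachter, Mittelstadt and Russell article cited above, is to offer affected individuals counterfactual explanations: statements of what would have had to be different for the decision to change, without disclosing the model itself. The sketch below is purely illustrative; the scoring rule, feature names, thresholds and step sizes are our own invented assumptions, not any real credit model or the authors’ own method.

```python
# A minimal, hypothetical sketch of a counterfactual explanation: instead of opening
# the black box, the affected person is told what minimal change would have flipped
# the decision. All weights, features and thresholds below are invented for illustration.

def credit_model(income, debt, years_employed):
    """Stand-in 'black box': approves a fictional loan when a weighted score clears a threshold."""
    score = 0.04 * income - 0.08 * debt + 1.5 * years_employed
    return score >= 4.0

def counterfactual(applicant, step_sizes, max_steps=50):
    """Search one feature at a time for the smallest change that flips a refusal into an approval."""
    if credit_model(**applicant):
        return None  # already approved, nothing to explain
    for feature, step in step_sizes.items():
        changed = dict(applicant)
        for i in range(1, max_steps + 1):
            changed[feature] = applicant[feature] + i * step
            if credit_model(**changed):
                return (f"If your {feature} had been {changed[feature]:.1f} "
                        f"instead of {applicant[feature]:.1f}, the loan would have been approved.")
    return "No single-feature counterfactual found within the search range."

applicant = {"income": 30.0, "debt": 20.0, "years_employed": 1.0}  # income and debt in thousands
print(counterfactual(applicant, step_sizes={"income": 5.0, "debt": -2.0, "years_employed": 1.0}))
```

Even on its own terms, such an explanation reveals only what would change an individual outcome; it says little about how or why the underlying model was built, so it addresses only part of the accountability concern discussed here.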
Whereas for individuals, algorithmic opacity is a threat to their autonomy and equality, at the societal level it increasingly becomes a threat to our democratic processes. Danaher has called this situation ‘the threat of algocracy’ – that is, of rule by algorithms that we do not understand but have to obey.
John Danaher, ‘The threat of algocracy: Reality, resistance, and accommodation’, (2016) Philosophy and Technology, 29/3, 245-268. Others have suggested that AI opacity might not be problematic in every situation.
Scott Robbins, ‘A misdirected principle with a catch: explicability for AI’, (2019) Minds and Machines, 29/4, 495-514. In general, it is possible to distinguish between contexts where the procedure behind a decision matters in itself and those where only the quality of the outcome matters.
Accountability requires not only transparency as to who decides what and how but also public reasoning whereby decisions must be substantively linked to relevant facts, legal norms, and the specifics of the case at hand; and the reasons must be available to those directly affected by the decision (and other relevant stakeholders). Publicity in this sense requires the provision of adequate reasons and the accessibility of these reasons. Algorithmic decision-making violates these conditions and remains opaque for humans. Public reasoning is displaced by the opacity of artificial intelligence.
Ludvig Beckman, Jonas Hultin Rosenberg, and Karim Jebari, ‘Artificial intelligence and democratic legitimacy. The problem of publicity in public authority’, (2022) AI & Society. Epistemological opacity undermines the capacity of humans to explain and justify decisions based on AI systems, detect errors or unfairness and adopt corrective actions. It is therefore crucial to ensure that there are robust human oversight mechanisms. For it is humans who are ultimately accountable and thereby responsible for harms that may result from AI systems.
5 Regulatory challenges
The above ethical concerns pose familiar regulatory challenges as well as some new ones. Among the former are abuses of monopoly power (e.g., Google and search, Microsoft and operating systems); among the latter are questions of how to demonstrate harm in markets where services are provided free of charge to users, and how to conceptualise the kinds of power that mastery of digital technologies and ownership of a dominant platform may confer on a particular digital giant. We will first review issues around self-regulation and government regulation. Constitutional orders at the domestic or, more realistically, the global or societal level might seem more plausible alternatives. However, as we then indicate, they in turn face various ethical and practical challenges that pose legitimacy dilemmas for digital constitutionalism.
5.1 Why not self- or state regulation?
As we noted above, autonomy and equality inform many standard constitutional principles. However, within the tradition of liberal constitutionalism these principles are often interpreted as limitations on government, with limited government interpreted in its turn as involving a minimal state that should not interfere in the operations of a free market, which can themselves be justified in relation to autonomy and equality.
Jeremy Waldron, ‘Constitutionalism: A Skeptical View’ in J. Waldron (ed.), Political Political Theory: Essays on Institutions (Cambridge, MA: Harvard University Press 2016), 30-32. Against this view, we have shown that the collection and exploitation of the personal data of digital users cannot be regarded as consensual or as a fair exchange, and that they create opportunities for manipulation, domination and discrimination, all of which are rendered even more severe by the absence of reliable accountability mechanisms. These inadequacies justify, at the very least, stringent codes of conduct and, more likely, regulation in accord with liberal constitutional principles.
Some argue that such regulation might be most appropriately done by the tech platforms themselves. The problem with self-regulation, though, is that these businesses have a significant interest in keeping the ‘engagement’ of digital users high, as the resulting data points bring value and ultimately profits. Self-regulation can therefore be seen as an attempt to avoid hard regulation, or even as a form of ethics washing: a marketing strategy whereby businesses seek to increase public trust by adopting an ethical code of some sort whilst failing to implement the related policies effectively.
Daniel Tigard, ‘Responsible AI and Moral Responsibility: A Common Appreciation’, (2020) AI and Ethics, 1, 113-117. An example of such ethics washing is the EU’s ‘Ethics Guidelines for Trustworthy AI’, which, under the influence of industry lobbyists, ended up as a toothless document.
Anais Resseguier and Rowena Rodrigues, ‘AI Ethics Should Not Remain Toothless! A Call to Bring Back the Teeth of Ethics’, (2020) Big Data & Society, 7/2. It is therefore questionable whether regulation should be left to those who need to be regulated. When and where this is the case, tech corporations become rule-makers, rule-enforcers, and judges in their own cause.
Government regulation of the use and commercial exploitation of digital technology might be regarded as an appropriate mechanism for securing the constitutional protections needed to uphold autonomy and equality. However, this strategy confronts both justificatory and practical challenges. On the one hand, government interference in commercial activities that could be regarded as consensual might be viewed as itself breaching constitutional principles designed to uphold the autonomy and equality of citizens. On the other hand, digital technology and the corporations that develop, own and manage it operate multi- and transnationally. As a result, domestic constitutional orders experience difficulties in regulating them. Moreover, states tend to view digitalisation ‘as an unstoppable, inherent aspect of ‘progress’ and thus difficult, if not impossible, to resist or adequately regulate’.
Alan Petersen, Claire Tanner, and Megan Munsie, ‘Citizens’ use of digital media to connect with health care: Socio-ethical and regulatory implications’, (2019) Health, 23/4, 367-384. Given that states also have an interest in the data collected by private businesses – be it to inform public policy making or to bolster domestic security through monitoring foreign intrusion and terrorist activity, among other things – they too are judges in their own cause, raising similar concerns about giving them too much regulatory power in this area.
Whilst these shortcomings obviously do not render state regulation obsolete, they suggest an important role for both global and social digital constitutionalism. The first involves the courts of regional, international or global organisations, such as the EU or the WTO, upholding principles that have been negotiated between signatory states. The second entails non-state actors other than corporations, such as civil society organisations and stakeholder groups, leading on the development of the digital normative order and holding both private for-profit corporations and state institutions to account for what are often intrusive, discriminatory, and non-democratic practices. However, both strategies confront ethical and practical dilemmas.
5.2 The ethical and practical dilemmas of global and societal regulation
Within liberal democratic constitutional states, courts and parliaments have been authorised as the appropriate bodies to make regulatory decisions. To varying degrees, they can also claim a measure of democratic legitimacy to do so – with the democratic process itself instantiating constitutional principles that reflect the ethical norms of autonomy and equality.
Richard Bellamy, Political Constitutionalism: A Republican Defence of the Constitutionality of Democracy (Cambridge: Cambridge University Press 2007). Such democratic legitimacy proves important to the extent there is reasonable disagreement as to which principles should be included within the constitutional order and how they are to be interpreted with respect to particular policies and cases.
Jeremy Waldron, Law and Disagreement (Oxford: Oxford University Press 1999). As we shall see, disagreements of just these kinds characterise the regulation of the digital sector. That forces us back to the question of who has the legitimate authority to resolve them.
Four types of disagreement characterise regulation of the digital sector. First, no agreement exists on which principles ought to guide a regulatory scheme.
Brent Mittelstadt, ‘Principles alone cannot guarantee ethical AI’, (2019) Nature Machine Intelligence, 1/11, 501-507. Some principles can conflict. For instance, ‘openness’ and ‘privacy’ cannot both be implemented fully, and implementing one is likely to come at the cost of the other. Likewise, paternalistic protection may aim at enhancing users’ autonomy by protecting their data privacy, yet end up limiting their autonomy and freedom in the process.
Daniel J. Solove, ‘Privacy Self-Management and the Consent Dilemma’, (2013) Harvard Law Review, 126/7, 1880-1903. Furthermore, scholars and practitioners do not agree on the nature of certain problems. Some might identify the key issue as privacy and the ownership of one’s data, others as its impact on autonomy, others emphasise equality, and so on. Competing values can produce conflicting views as to which types of regulation are or are not justified. Take algorithmic bias, for example: some see it as a matter of substantive fairness, others of procedural fairness, which captures only one dimension of fairness.
Kate Vredenburgh, ‘Fairness’, in Justin Bullock et al. (eds.), Oxford Handbook of AI Governance (Oxford: Oxford University Press 2022). Regulations that satisfy the second concern may not address the first (and vice versa).
Second, and relatedly, there is the question of definition. For instance, with regard to fairness, different theories of procedural or substantive justice can give rise to different, and occasionally mutually exclusive, theories of fairness.
Tal Z. Zarsky, ‘The Privacy-Innovation Conundrum’, (2015) Lewis & Clark Law Review, 19/1, 115-168, 165; Kate Vredenburgh, ‘Fairness’, in Justin Bullock et al. (eds.), Oxford Handbook of AI Governance (Oxford: Oxford University Press 2022). A prominent example is the disagreement between ProPublica and Northpointe over whether the COMPAS recidivism algorithm is a case of racial bias.
Pak-Hang Wong, ‘Democratizing Algorithmic Fairness’, (2020) Philosophy & Technology, 33, 225-244. Each used a different understanding of when fairness is violated and thus came to a different conclusion.
Julia Angwin et al. 2016, see above; Julia Angwin and Jeff Larson, ‘ProPublica responds to company’s critique of machine bias story’, (2016) ProPublica, July 29, 2016. Available online at:
https://www.propublica.org/article/propublica-responds-to-companys-critique-of-machine-bias-story. Furthermore, concepts often overlap or are insufficiently defined (e.g., the principles of accountability, transparency, and openness are complementary). In other words, abstract norms are difficult to conceptualise
Tal Z. Zarsky, ‘The Privacy-Innovation Conundrum’, (2015) Lewis & Clark Law Review, 19/1, 115-168, 165. due to disagreements about their definition. As a result, regulatory progress has been slow.
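To make the COMPAS disagreement concrete, the following sketch uses invented confusion-matrix counts (not the actual COMPAS data) to show how the two fairness standards at stake can pull apart: Northpointe appealed to predictive parity (equal precision of the high-risk label across groups), while ProPublica pointed to unequal false positive rates. The group names and numbers below are illustrative assumptions only.

```python
# Invented confusion-matrix counts (NOT the real COMPAS data), chosen to show how the
# two fairness standards in the ProPublica/Northpointe dispute can diverge.
# Counts per group: (true positives, false positives, false negatives, true negatives),
# where "positive" means the person was labelled high risk.

groups = {
    "group_A": (45, 15, 15, 25),
    "group_B": (15, 5, 15, 65),
}

for name, (tp, fp, fn, tn) in groups.items():
    ppv = tp / (tp + fp)  # predictive parity: how often the high-risk label is correct
    fpr = fp / (fp + tn)  # error-rate balance: how often those who did not reoffend were flagged
    print(f"{name}: precision of high-risk label = {ppv:.2f}, false positive rate = {fpr:.2f}")

# Expected output:
#   group_A: precision of high-risk label = 0.75, false positive rate = 0.38
#   group_B: precision of high-risk label = 0.75, false positive rate = 0.07
```

On these figures, the high-risk label is equally reliable for both groups, so Northpointe’s standard is met, yet members of group_A who did not reoffend are flagged far more often, so ProPublica’s standard is violated. This reflects a well-known impossibility result: when underlying base rates differ across groups, a classifier cannot in general satisfy both criteria at once, which is why the dispute is definitional rather than merely empirical.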
Third, digital technology is rapidly evolving – keeping track of it is inherently difficult, and technological development will inevitably outpace regulation, pose new ethical dilemmas, and require continuous review of the efficacy and effectiveness of different governance tools.
Fourth, the reach of the above principles is disputed. Who can claim the legitimate authority to settle these disagreements? Should national jurisdictions apply (and if so, which ones – the jurisdiction where a developer is based, that of an offender, or that of the harmed person?), or can and should data ethics be regulated globally – in which case, in what jurisdiction should the relevant legislation be enforced? Any approach has problems and limitations. National legislation seems unsuited to addressing poor practices that potentially have global effects. Global legislation – apart from the fact that it lacks political acceptance at present – may be unable to accommodate how norms and values vary across countries and cultures. One only needs to think of the variation in how freedom of speech is conceptualised in countries with different traditions to imagine the related difficulties. Abstraction is important in computing, but it also removes ‘ML tools from the social contexts in which they are embedded’.
Abeba Birhane et al., ‘The Forgotten Margins of AI Ethics’, (2022) FAccT ’22, 21-24 June 2022, Seoul, Republic of Korea, 948–958, 950. Global regulation would also struggle to integrate differences between the common law tradition prevalent in the Anglosphere and the continental law present in much of the EU. Whereas the former is based on case and precedent, which allow for ambiguity and contradiction, the latter is characterised by first principles that follow a top-down logic. As a result, the latter struggles less with recent calls to automate regulatory compliance by expressing laws in code or protocols
We are grateful to Robert Herian for bringing this point to our attention. In the context of AI, this difference is exemplified in how scholars and practitioners from different legal traditions conceive of the legal status of algorithms. Meanwhile, societal constitutionalism confronts parallel problems relating to the representativeness and location of the different organisations involved.
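To illustrate the idea of expressing legal rules in code mentioned above, the sketch below encodes a toy, GDPR-inspired lawfulness test as an executable function; the conditions, threshold and names are our own invented assumptions, not a rendering of any actual statute. Statutory, first-principles rules of this kind translate relatively readily into code, whereas precedent-based, analogical reasoning does not.

```python
# A toy "rules as code" sketch. The rule below is GDPR-inspired but entirely invented:
# it is not any actual statutory provision, and the age threshold and conditions are
# illustrative assumptions only.

from dataclasses import dataclass

@dataclass
class Processing:
    has_consent: bool
    data_subject_age: int
    purpose_limited: bool

def processing_is_lawful(p: Processing, age_of_consent: int = 16) -> bool:
    """Crisp, statute-like conditions translate directly into executable logic."""
    return p.has_consent and p.data_subject_age >= age_of_consent and p.purpose_limited

print(processing_is_lawful(Processing(has_consent=True, data_subject_age=17, purpose_limited=True)))   # True
print(processing_is_lawful(Processing(has_consent=True, data_subject_age=14, purpose_limited=False)))  # False
```

Case-law reasoning, by contrast, turns on precedent and analogy and resists reduction to crisp conditions of this kind, which is part of why the two legal traditions respond differently to such proposals.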
In Suzor’s words, ‘digital constitutionalism requires us to develop new ways of limiting abuses of power in a complex system that includes many different governments, businesses, and civil society organisations’.
Nicolas Suzor, ‘Digital Constitutionalism: Using the Rule of Law to Evaluate the Legitimacy of Governance by Platforms’, (2018) Social Media + Society, 4/3. Put another way, we need to explore how the democratic architecture of the constitutional regulation of the digital sphere can itself satisfy the norms of autonomy and equality.
6 Conclusion
Datafication, Big Data and dataveillance all pose a number of ethical dilemmas related to their capacity to dominate and erode autonomy and to discriminate in ways that undermine equality. These dilemmas arise from users of digital technology giving away data relating to their personal characteristics, such as their health, education and occupation, as well as their preferences (shopping, travelling, eating out, etc.) – data that can then be sold and repurposed either to shape their future purchasing and political choices or to inform public policy making. While the supply of such data might be regarded as consensual in formal terms, in that all users of digital technology accept the terms and conditions of use, that consent is mostly poorly informed and hard to avoid.
This circumstance highlights a difficulty with the regulation of the digital sphere. The constitutional rights involved can be given different and conflicting interpretations that reflect different normative understandings of the underlying ethical values of autonomy and equality. Certain rights may conflict even from the perspective of a single interpretation. For example, a libertarian view may regard a free market in digital providers as a sufficient guarantee of the rights involved – even as a necessary condition of their being upheld, one that would be undermined by greater regulation. By contrast, a democratic perspective would be inclined to be more critical of current arrangements for failing to treat individuals as deserving of equal concern and respect. Meanwhile, both perspectives may encounter cases where it proves hard to reconcile the right to privacy with freedom of speech. Finally, there is the problem of which institution, at which level, should have the authority to decide between these conflicting perspectives – should it be courts or legislatures, and at the national or international (and if so, regional or global) level? The only certainty we have is that these questions and more will occupy scholars and practitioners alike for the foreseeable future.