The Paradox of Automation As Anti-Bias Intervention
IFEOMA AJUNWA*
ABSTRACT
* Assistant Professor of Employment and Labor Law, Cornell University School of
Industrial and Labor Relations, and Associated Faculty Member, Cornell Law School;
Faculty Associate, Berkman Klein Center at Harvard Law School. Many thanks to Ryan
Calo, Zachary Clopton, Gautam Hans, Jeffrey Hirsch, Maggie Gardner, Genevieve Lakier,
Jonathan Masur, John Rappaport, Angie Raymond, Katherine Strandburg, Elana Zeide, the
participants at the University of Chicago Public and Legal Theory Colloquium, the Cornell
Law School Colloquium, and the Yale Information Society and Center for Private Law
Seminar for helpful comments. A special thanks to my research assistants, Kayleigh Yerdon,
Jane Kim, and Syjah Harris.
TABLE OF CONTENTS
INTRODUCTION
I. THE ALGORITHMIC TURN
   A. Data Objectivity
   B. Data as Oracle
   C. Data-laundering
II. ALGORITHMIC CAPTURE OF HIRING AS CASE STUDY
   A. Algorithms as Anti-Bias Intervention
   B. The Fault in the Machine
      1. Recruitment
      2. Hiring
   C. Exposing the Mechanical Turk
III. A LEGAL PROBLEM, NOT A TECHNICAL PROBLEM
   A. A Legal Tradition of Employer Deference
   B. The Problem of “Cultural Fit”
   C. Re-Thinking Employer Discretion
IV. EX LEGIS: NEW LEGAL FRAMEWORKS
   A. Improving on the Fiduciary Duty Concept
      1. Platform Authoritarianism
      2. The Tertius Bifrons
   B. Discrimination Per Se
   C. Consumer Protection for Job Applicants
CONCLUSION
INTRODUCTION
1 “Advocates applaud the removal of human beings and their flaws from the assessment
process . . . .” Algorithms or automated systems are often seen as fair because they are
“claimed to rate all individuals in the same way, thus averting discrimination.” Danielle Keats
Citron & Frank Pasquale, The Scored Society: Due Process for Automated Predictions, 89 WASH. L.
REV. 1, 4 (2014).
2 Title VII of the Civil Rights Act, 42 U.S.C. §§ 2000e to 2000e-17 (2000).
3 See, e.g., Solon Barocas & Andrew Selbst, Big Data’s Disparate Impact, 104 CAL. L. REV. 671
4 Amazon Built an AI Tool to Hire People, but Had to Shut It Down Because It Was Discriminating Against Women, BUS. INSIDER (Oct. 10, 2018, 5:47 AM),
www.businessinsider.com/amazon-built-ai-to-hire-people-discriminated-against-women-
2018-10.
5 Id.
6 Id.
7 Id.
‘women’s’ and filtered out candidates who had attended two women-only
colleges.”8 As legal scholars such as Professor Sandra Mayson and others have
demonstrated, such algorithmic bias is not limited to gender; algorithmic
decision-making can also produce disparate racial impact, especially in the
criminal justice system.9 Amazon’s story of negative discrimination against
protected classes as an (un)intended outcome of automated decision-making
is not singular. Recent books, like Algorithms of Oppression, have detailed the
racially-biased impact of algorithms on information delivery on the internet,10
and others, like Automating Inequality, have outlined the biased results of
algorithms in criminal justice and public welfare decision-making.11 Yet, with
the exception of the work of a few legal scholars,12 the role of algorithms in
perpetuating inequality in the labor market has been relatively overlooked in
legal scholarship.13
Often, when legal scholars raise the topic of bias in algorithmic systems, a
common retort is: “What’s new?”14 This rhetorical question is meant to convey
the sentiment that bias in algorithmic systems cannot be a novel topic of legal
inquiry because it has a pre-existing corollary, bias in human decision-making.
8 Id. Ironically, because the use of an automated hiring system revealed the gender disparity
here in concrete numbers, such disparities could potentially be addressed by
employment antidiscrimination law. Contrast this to what the legal scholar Professor Jessica
Fink has identified as the more nebulous “gender-sidelining,” a workplace dynamic in which,
for example, “women often lack access to important opportunities or feel subjected to
greater scrutiny than their male peers.” See Jessica Fink, Gender Sidelining and the Problem of
Unactionable Discrimination, 29 STAN. L. & POL’Y REV. 57 (2018).
9 More often, legal scholars have considered algorithmic racial inequities in the context of the
criminal justice system. See, e.g., Mayson, supra note 3 (arguing that the problem of disparate
impact in predictive risk algorithms lies not in the algorithmic system but in the nature of
prediction itself); Aziz Z. Huq, Racial Equity in Algorithmic Criminal Justice, 68 DUKE L.J. 1043
(2019); Aziz Z. Huq, The Consequences Of Disparate Policing: Evaluating Stop and Frisk as a
Modality of Urban Policing, 101 MINN. L. REV. 2397, 2408 (2017); Andrew Guthrie Ferguson,
Big Data and Predictive Reasonable Suspicion, 163 U. PA. L. REV. 327 (2015).
10 See NOBLE, supra note 3.
11 See EUBANKS, supra note 3.
12 See, e.g., Stephanie Bornstein, Antidiscriminatory Algorithms, 70 ALA. L. REV. 519, 570 (2018);
Stephanie Bornstein, Reckless Discrimination, 105 CAL. L. REV. 1055, 1056 (2017); Pauline
Kim, Data-Driven Discrimination at Work, 58 WM. & MARY L. REV. 857, 908 (2017); Matthew
Bodie, Miriam Cherry, Marcia McCormick & Jintong Tang, The Law and Policy of People
Analytics, 88 U. COLO. L. REV. 961 (2017); Charles Sullivan, Employing AI, 63 VILL. L. REV.
395 (2018); James Grimmelmann & Daniel Westreich, Incomprehensible Discrimination, 7 CAL.
L. REV. ONLINE 164 (2017).
13 Going beyond the specific role of algorithms, some scholars have argued that workplaces
(“[F]ew of the legal issues posed by the new informatics technologies are novel.”).
However, scholars such as Professor Jack Balkin have exposed this retort as a
facile dismissal of what are legitimate lines of scholarly legal inquiry.
[T]o ask “What is genuinely new here?” is to ask the wrong question.
If we assume that a technological development is important to law only
if it creates something utterly new, and we can find analogues in the
past—as we always can—we are likely to conclude that because the
development is not new, it changes nothing important. That is the
wrong way to think about technological change and public policy, and
in particular, it is the wrong way to think about the Internet and digital
technologies. Instead of focusing on novelty, we should focus on
salience. What elements of the social world does a new technology
make particularly salient that went relatively unnoticed before? What
features of human activity or of the human condition does a
technological change foreground, emphasize, or problematize? And
what are the consequences for human freedom of making this aspect
more important, more pervasive, or more central than it was before?15
Other legal scholars have made similar points. As Professor Katyal notes:
“the true promise of AI does not lie in the information we reveal to one
another, but rather in the questions it raises about the interaction of
technology, property, and civil rights.”16 My scholarly agenda has focused on
examining the myriad ways in which new computing technologies bring into high
relief existing societal biases and continued inequities, particularly in the
employment sphere. In past work, I have parsed how online platforms might
contribute to age discrimination in the labor market,17 and I have noted how
wearable technologies deployed to manage the workplace prompt novel legal
questions and suggest a new agenda for employment and labor law
scholarship.18 I have also conducted an empirical study of work algorithms,
which involved a critical discourse analysis and affordance critique of the
advertised features and rhetoric behind automated hiring systems as gleaned
through 135 archival texts, tracing the timeline of the development of hiring
platforms from 1990–2006.19 That study concluded that while one purported
raison d’etre and advertised purpose of automated hiring systems was to reduce
hirer bias—“replacing messy human decisions with a neutral technical
15 Jack M. Balkin, Digital Speech and Democratic Culture: A Theory of Freedom of Expression for the
Information Society, 79 N.Y.U. L. REV. 1 (2004).
16 Sonia Katyal, Private Accountability in the Age of Artificial Intelligence, 66 UCLA L. REV. 54, 54
(2019).
17 Ifeoma Ajunwa, Age Discrimination by Platforms, 40 BERKELEY J. EMP. & LAB. L. 1 (2019).
18 Ifeoma Ajunwa, Algorithms at Work: Productivity Monitoring Applications and Wearable
New Intermediaries in the Organization of Work, in WORK AND LABOR IN THE DIGITAL AGE 61
(Stephen P. Vallas & Anne Kovalainen eds., 2019).
them to “clone your best people” with the identification of existing high-sales employees
within client records, a process that would be sure to replicate existing inequalities in the
demographics of the workers. Id.
22 See generally Ferguson, supra note 9; Citron & Pasquale, supra note 1, at 4; Barocas & Selbst,
supra note 3.
23 Cynthia Estlund has argued that Title VII should be understood as an “equal protection
clause for the workplace.” Cynthia Estlund, Rebuilding the Law of the Workplace in an Era of Self-
Regulation, 105 COLUM. L. REV. 319, 331 (2005); see also Samuel R. Bagenstos, The Structural
Turn and the Limits of Antidiscrimination Law, 94 CAL. L. REV. 1, 40–41 (2006) (arguing that the
best explanation for employment discrimination law is its reflection of a broad goal of social
change to eliminate group-based status inequalities).
24 Benjamin I. Sachs, Employment Law as Labor Law, 29 CARDOZO L. REV. 2685, 2688 (2008).
25 Id.
26 Jack M. Balkin & Reva B. Siegel, The American Civil Rights Tradition: Anticlassification or
Antisubordination?, 58 U. MIAMI L. REV. 9 (2003) (relating the history of the development and
application of the two distinct antidiscrimination threads in American law); see also Jessica L.
the focus is on the improper use of variables for protected classes in the
decision-making process.27 In the context of hiring, such an emphasis would
revolve around the disparate treatment cause of action under Title VII.28
Employment and labor law literature, by contrast, now mostly focuses on
antisubordination, where the concern is the adverse impact of decision-making on
protected groups, which mostly implicates the disparate impact theory under
Title VII.29 This Article seeks to bridge this gulf by noting first that machine
learning algorithmic systems present opportunities for both disparate
treatment and disparate impact discriminatory actions.30 The Article then notes
the particular difficulties of proving a disparate impact theory of discrimination
27 See, e.g., Joshua A. Kroll, Joanna Huey, Solon Barocas, Edward W. Felten, Joel R.
Reidenberg, David G. Robinson & Harlan Yu, Accountable Algorithms, 165 U. PA. L. REV. 633
(2017). For a notable exception, see generally Barocas & Selbst, supra note 3 (detailing issues
of disparate impact associated with algorithmic decision-making).
28 Title VII states that an unlawful employment practice based on disparate impact is
established where a complaining party demonstrates that an employer “uses a particular
employment practice that causes a disparate impact on the basis of race, color, religion, sex
or national origin” and the employer fails to demonstrate that the challenged practice is job
related and consistent with business necessity. 42 U.S.C. § 2000e-2(k)(1)(A)(i). But see Bradley A. Areheart, GINA, Privacy, and Antisubordination,
46 GA. L. REV. 705, 709 (2012). Areheart argues that “GINA . . . [represents] a turn toward
anticlassificationist principles and a possible turn away from antisubordination norms.”; cf.
Bornstein, Antidiscriminatory Algorithms, supra note 12, at 571. Professor Bornstein asserts that
in addition to the anticlassification and antisubordination theories underlying antidiscrimination
law, an antistereotyping principle should be considered, since algorithmic discrimination can give
rise to liability for intentional discrimination as well as disparate impact.
30 See Richard Thompson Ford, Bias in the Air: Rethinking Employment Discrimination Law, 66
STAN. L. REV. 1381 (2014) (noting that concepts in employment law such as “intent” and
“causation” escape precise definition).
law has long been ripe for updating. Many of the core cases regarding how discrimination is
defined and proved arose in the 1970s in a very different era and were designed to address
very different kinds of discrimination.” Michael Selmi, The Evolution of Employment
Discrimination Law: Changed Doctrine for Changed Social Conditions, 2014 WIS. L. REV. 937, 938
(2014).
35 See infra Section II.A.
36 I firmly believe that whether or not hiring algorithms produce more or less biased results
than humans is not a question that can be settled by legal adjudication. As Professor Charles Sullivan has remarked:
“And the antidiscrimination statutes don’t really care whether any particular selection device
actually improves productivity so long as it does not discriminate.” Sullivan, supra note 12, at
398. Rather, determining whether algorithmic systems evince less bias than human managers
requires empirical data obtained via rigorous social scientific research. Some legal scholars
have argued, based on preliminary studies, that automated hiring systems have “allowed
some employers to easily and dramatically reduce the biasing effects of subjectivity from
their hiring decisions . . . .” Bornstein, Reckless Discrimination, supra note 12, at 1056. I argue,
however, that since algorithmic hiring systems are a relatively new invention, assessing any
bias reduction would require longitudinal studies across several industries, with adequate
controls.
37 Julie E. Cohen, Law for the Platform Economy, 51 U.C. DAVIS L. REV. 133, 189 (2017).
38 See infra Section II.C.
David Stark, Ethnic Diversity Deflates Price Bubbles, 111 PROC. NAT’L ACAD. SCI. 18524 (2014)
(detailing sociological research showing that diverse teams make better decisions and are
more innovative); see also Katherine W. Phillips, Katie A. Liljenquist & Margaret A. Neale,
Better Decisions Through Diversity, KELLOGG SCH. MGMT.: KELLOGG INSIGHT (Oct. 1, 2010),
https://insight.kellogg.northwestern.edu/article/better_decisions_through_diversity
(showing that diverse groups outperform homogenous groups because of both an influx of
new ideas and more careful information processing); Sheen S. Levine & David Stark,
Diversity Makes You Brighter, N.Y. TIMES (Dec. 9, 2015),
https://www.nytimes.com/2015/12/09/opinion/diversity-makes-you-brighter.html.
41 JOHN RAWLS, A THEORY OF JUSTICE (1971). Rawls, a social contract philosopher, argued
that the competing claims of freedom and equality could be reconciled when decisions about
justice are made on the basis of the difference principle behind a “veil of ignorance,”
wherein no one individual knows their original position (that is, they could be members of
low status groups in society), with the result that the only rational choice is to make
decisions that would improve the position of the worst off in society. Id. See also Mark
Kelman, Defining the Antidiscrimination Norm to Defend It, 43 SAN DIEGO L. REV. 735 (2006)
(rejecting “the [utilitarian ethics] idea that the antidiscrimination norm’s propriety should be
evaluated solely by reference to its impact on a mere subset of experiences or capacities to
engage in certain activities, for example, a claim that what is relevant in deciding whether the
plaintiff merits protection is the plaintiff’s legitimate sense that, absent protection, he is not
treated as a ‘first-class’ citizen”).
42 See, e.g., Kroll, Huey, Barocas, Felten, Reidenberg, Robinson & Yu, supra note 27.
43 Several other legal scholars have applied the tort law principle of a duty of care to
employment discrimination. See, e.g., Ford, supra note 30 (arguing that employment law
imposes a duty of care on employers to refrain from practices that go against equal
opportunity in employment). See also Robert Post, Lecture, Prejudicial Appearance: The Logic of
American Antidiscrimination Law, 88 CALIF. L. REV. 1 (2000) (arguing that antidiscrimination
law aims to achieve positive interventions in social practices as opposed to solely dictating
prohibitions). Other professors have also used a “duty of care” framework to propose
remedial measures for employment discrimination. See David Benjamin Oppenheimer,
Negligent Discrimination, 141 U. PA. L. REV. 899 (1993); Noah D. Zatz, Managing the Macaw:
Third-Party Harassers, Accommodation, and the Disaggregation of Discriminatory Intent, 109 COLUM.
L. REV. 1357 (2009).
44 DONALD E. KNUTH, STANFORD DEPARTMENT OF COMPUTER SCIENCE REPORT NO.
rapidly growing from the 1980s. Two recently published books document the widespread
use of algorithms both in governmental decision-making and in the delivery of search results
online. See EUBANKS, supra note 3; NOBLE, supra note 3.
46 See Neil M. Richards & Jonathan H. King, Big Data Ethics, 49 WAKE FOREST L. REV. 393,
393 (2014) (noting that “large datasets are being mined for important predictions and often
surprising insights”).
47 This nomenclature takes, as inspiration, Professor Julie Cohen’s description of the
“participatory turn” in which innovative surveillance methods are positioned as exempt from
legal and social control and rather held up as evidence of economic progress. See Julie
Cohen, The Surveillance-Innovation Complex: The Irony of the Participatory Turn, in THE
PARTICIPATORY CONDITION 207, 207–26 (2016).
48 See Harry Surden, Machine Learning and Law, 89 WASH. L. REV. 87, 88 (2014) (detailing gaps
in the law in regards to machine learning algorithms).
49 The term “artificial intelligence” was coined by John McCarthy for the 1956 Dartmouth
conference, the inaugural AI conference. See Martin Childs, John McCarthy: Computer Scientist
Known as the Father of AI, INDEPENDENT (Nov. 1, 2011, 8:00 PM),
https://www.independent.co.uk/news/obituaries/john-mccarthy-computer-scientist-
known-as-the-father-of-ai-6255307.html.
media,50 and thus, in this Article, in lieu of “AI,” I employ the more precise
terms of “algorithms”51 and “machine learning algorithms.”52
Consider that an algorithm decides all of the following: the answer to a
search one conducts online,53 the best romantic prospects provided by a dating
website,54 what advertisements one sees during a visit to a given website,55
one’s creditworthiness,56 whether or not one should be considered a suspect
for a crime,57 and whether or not one is qualified for a job.58 As I detail in the
50 In most media, “artificial intelligence” and “algorithms” are used interchangeably. Lauri
Donahue writes about the process of machine learning, in which an algorithm learns from its
experiences and adapts to new sets of information, based on data. See Lauri Donahue, A
Primer on Using Artificial Intelligence in the Legal Profession, JOLT DIG.: COMMENT. (Jan. 03,
2018), https://jolt.law.harvard.edu/digest/a-primer-on-using-artificial-intelligence-in-the-
legal-profession. Donahue uses the terms AI and algorithm interchangeably throughout the
article.
51 In defining an algorithm, Alan D. Minsk references the Gottschalk v. Benson decision, in
which the court defined an algorithm as a “procedure for solving a given type of
mathematical problem . . . . [An algorithm is] . . . a generalized formulation for programs to
solve mathematical problems of converting one form of numerical representation to
another.” Alan D. Minsk, Patentability of Algorithms: A Review and Critical Analysis of the Current
Doctrine, 8 SANTA CLARA HIGH TECH. L.J. 251, 257 (1992); see also Gottschalk v. Benson, 409
U.S. 63, 65 (1972). Minsk also references the Paine, Webber, Jackson & Curtis, Inc. v. Merrill
Lynch, Pierce, Fenner & Smith, Inc. decision, which defines a mathematical algorithm and a
computer algorithm. A mathematical algorithm is a “recursive computational procedure
[which] appears in notational language, defining a computational course of events which is
self-contained.” Paine, Webber, Jackson & Curtis, Inc v. Merrill Lynch, Pierce, Fenner & Smith Inc.,
564 F. Supp. 1358, 1366–67 (D. Del. 1983) (“[A] computer algorithm is a procedure
consisting of operation[s] to combine data, mathematical principles and equipment for the
purpose of interpreting and/or acting upon a certain data input.”). In one of the earliest
mentions of algorithms in case law, we find that “algorithms are procedure[s] for solving a
given type of mathematical problem.” Diamond v. Diehr, 450 U.S. 175, 186 (1981).
52 See PEDRO DOMINGOS, THE MASTER ALGORITHM: HOW THE QUEST FOR THE ULTIMATE
LEARNING MACHINE WILL REMAKE OUR WORLD (2015) (“Every algorithm has an input
and an output: the data goes into the computer, the algorithm does what it will with it, and
out comes the result. Machine learning turns this around: in goes the data and the desired
result and outcomes the algorithm that turns one into the other. Learning algorithms –
known as learners – are algorithms that make other algorithms.”).
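Domingos’s inversion (data plus desired outputs in, a decision rule out) can be made concrete with a short, purely illustrative Python sketch; the numbers and the toy “learner” below are hypothetical and stand in for no particular hiring product.

```python
# Toy illustration of footnote 52, with synthetic numbers: in traditional
# programming a person writes the rule; in machine learning the data and the
# desired answers produce the rule.
years_experience = [0, 1, 2, 3, 4, 5, 6, 7]
interviewed      = [0, 0, 0, 0, 1, 1, 1, 1]    # desired outputs, supplied by humans

def hand_written_rule(years):
    """Traditional programming: a person states the rule directly."""
    return years >= 4

def learn_threshold(xs, ys):
    """A minimal 'learner': find the cutoff that best reproduces the labels."""
    return max(sorted(set(xs)),
               key=lambda t: sum((x >= t) == bool(y) for x, y in zip(xs, ys)))

learned_cutoff = learn_threshold(years_experience, interviewed)
print("Learned rule: interview if years >=", learned_cutoff)   # prints 4
# The rule that comes out is only as neutral as the labels that went in.
```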
53 See, e.g., Latanya Sweeney, Discrimination in Online Ad Delivery, in ASSOC. FOR COMPUTING
MACHINERY QUEUE 44 (2013) (detailing a study in which a search of names associated with
African-Americans returned results featuring advertisements for arrest records as a result of
machine learning by Google’s Ad algorithm); see also NOBLE, supra note 3.
54 Leslie Horn, Here’s How OkCupid Uses Math to Find Your Match, GIZMODO (Feb. 14, 2013),
http://gizmodo.com/5984005/heres-how-okcupid-uses-math-to-find-your-match.
55 Thorin Klosowski, How Facebook Uses Your Data to Target Ads, Even Offline, LIFE HACKER
was a more individualized process, police can now rely on large datasets to make
probabilistic determinations of criminal activity).
58 Claire C. Miller, Can an Algorithm Hire Better than a Human?, N.Y. TIMES: UPSHOT (June 25,
2015), http://www.nytimes.com/2015/06/26/upshot/can-an-algorithm-hire-better-than-a-
human.html; Sarah Green Carmichael, Hiring C-Suite Executives by Algorithm, HARV. BUS. REV.
A. Data Objectivity
A common adage is “the numbers speak for themselves,”59 and as
identified by previous researchers, this demonstrates an unquestioning belief
in data objectivity, particularly regarding large quantities of data.60 This in turn
becomes a problematic feature of algorithmic systems—as their decision-
making relies on algorithms trained on a corpus of data, the belief in data
objectivity then often results in an uncritical acceptance of decisions derived
from such algorithmic systems.61 In the article, Think Again: Big Data,62
Professor Kate Crawford disputes the reverence accorded to big data. First,
she argues that numbers do not speak for themselves even with enough data
because “data sets are still objects of human design,”63 which means that big
data are not free from “skews, gaps, and faulty assumptions.”64 Biases can exist
in big data just as they do in the real world and in individual perceptions.65
For one, Professor Crawford notes the “signal problems”66 associated with
big data, which arise when citizens or subgroups are underrepresented due to
unequal creation or collection of data. She also observes that more data does
not necessarily improve transparency or accountability; rather, mechanisms to
aid the better interpretation of data are more important.67 Moreover, Professor
Crawford argues that although many believe that big data causes “less
discrimination against minority groups because raw data is somehow immune
to social bias”68 and helps people avoid group-based discrimination at a mass
level,69 big data may in fact contribute to the segregating of individuals into
59 Chris Anderson, The End of Theory: The Data Deluge Makes the Scientific Method Obsolete, WIRED
(June 23, 2008, 12:00 PM), https://www.wired.com/2008/06/pb-theory/.
60 Danah Boyd & Kate Crawford, Critical Questions for Big Data, 15 J. INFO. COMM. & SOC’Y
662 (2012).
61 See, e.g., Anderson, supra note 59 (arguing that “correlation is causation” and that the
scientific method is obsolete).
62 Kate Crawford, Think Again: Big Data, FOREIGN POLICY (May 10, 2013),
https://foreignpolicy.com/2013/05/10/think-again-big-data/.
63 Id.
64 Id.
65 Id.
66 Id.
67 Id.
68 Id.
69 Id.
groups because of its “ability to make claims about how groups behave
differently,”70 an action forbidden by anti-classificationist laws.
These sentiments are echoed by the legal scholar Professor Anupam
Chander, who, in disavowal of data objectivity, argues for “algorithmic
affirmative action.”71 Chander emphasizes that although algorithms are
perceived as fair because computers are logical entities, their results may still
bear the traces of real world discrimination. He argues that “[a]lgorithms
trained or operated on a real-world data set that necessarily reflects existing
discrimination may well replicate that discrimination.”72 This means that
because data are historically biased towards certain groups or classes,
discriminatory results may still emerge from automated algorithms that are
designed in race- or gender-neutral ways.73 Also, discriminatory results can
occur even when decision-makers are not motivated to discriminate: “Because
race or gender might be statistically associated with an observable trait—such
as worker productivity74 or propensity to remain in the labor market—profit-
maximizing employers might discriminate on the basis of race or gender, using
the observable characteristics as proxies for the unobservable traits.”75 Thus,
in addition to the problem of intentional discrimination, “automated
algorithms offer a perhaps more ubiquitous risk: replicating real-world
inequalities.”76
B. Data as Oracle
Concomitant with the belief in data objectivity is the uncritical
acquiescence to data-driven algorithmic decision-making as the final arbiter on
any given inquiry. Thus, the results of algorithmic systems are heeded as
oracular proclamations; they are accepted at face value without any attempt to
analyze or further interpret them. In the article, Critical Questions for Big Data,77
70 Id.
71 See Anupam Chander, The Racist Algorithm?, 115 MICH. L. REV. 1023, 1041 (2017).
72 Id. at 1036.
73 See id. at 1036–37.
74 It is important to clarify that neither I nor Professor Chander is denying that employers
have a vested interest in worker productivity. The issue here is how productivity is observed
and whether statistics for productivity are ever wholly objective and not tainted by bias
when it comes to protected categories.
75 Id. at 1038.
76 Chander’s call for algorithmic affirmative action is rooted in the idea that it is necessary to
design algorithms in race- and gender-conscious ways to account for discrimination already
embedded in the data. Id. at 1039. This action goes along with what the Obama
Administration offered as an approach to handle big data: “we need to develop a principle of
‘equal opportunity by design’—designing data systems that promote fairness and safeguard
against discrimination from the first step of the engineering process and continuing
throughout their lifespan.” EXEC. OFFICE OF THE PRESIDENT, BIG DATA: A REPORT ON
ALGORITHMIC SYSTEMS, OPPORTUNITY, AND CIVIL RIGHTS 5–6 (2016),
https://obamawhitehouse.archives.gov/sites/default/files/microsites/ostp/2016_0504_dat
a_discrimination.pdf [https://perma.cc/9XQT-9VYQ].
77 Boyd & Crawford, supra note 60.
the authors offer six provocations to conversations about big data issues. They
define big data as “a cultural, technological, and scholarly phenomenon”78 that
rests on the interplay of technology, which “[maximizes] computation power
and algorithmic accuracy to gather, analyze, link, and compare large data
sets,”79 and analysis, which “[identifies] patterns in order to make economic, social,
technical, and legal claims . . . .”80 Furthermore, they note that this analysis
carries with it a mythology, notably the prevalent belief that large data sets
offer better intelligence and knowledge and could algorithmically “generate
insights that were previously impossible, imbued with the aura of truth,
objectivity, and accuracy.”81
What I term the phenomenon of data as oracle is best illustrated by Chris
Anderson, who proposes that, because of big data, the scientific method is
now defunct.82 According to his article, the scientific approach has traditionally
consisted of three parts—hypothesize, model, and test.83 As scientists know
that correlation is not causation, they understand that no conclusions should
be based simply on correlation.84 Anderson argues, however, that this
approach to science is becoming obsolete with big data because petabytes of
data allow people to conclude that “correlation is enough.”85 He places such
trust in data and algorithms that he believes that people can now “throw the
numbers into the biggest computing clusters the world has ever seen and let
statistical algorithms find patterns where science cannot.”86
There is a danger, however, with treating algorithmic systems driven by big
data as oracles given that “interpretation is at the center of data analysis”87 and
that without proper interpretation the decision-making of algorithmic systems
could devolve to apophenia, which results in “seeing patterns where none
actually exist, simply because enormous quantities of data can offer
connections that radiate in all directions.”88 Thus, when approaching a data set
and designing algorithmic systems on that data set, researchers or interpreters
should understand not only the limits of the data set but also which
questions they can ask of it and which interpretations are appropriate.89
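The risk of apophenia can be made concrete with a brief, purely illustrative simulation; the “applicant attributes” and labels below are random numbers, not data from any real system. When thousands of arbitrary attributes are screened against an outcome, some will appear correlated with it by chance alone.

```python
import numpy as np

# Purely illustrative: 2,000 random "applicant attributes" and a random
# "high performer" label. Any correlation found here is noise, yet with this
# many attributes some will look strongly predictive.
rng = np.random.default_rng(0)
n_applicants, n_features = 200, 2000

features = rng.normal(size=(n_applicants, n_features))
outcome = rng.integers(0, 2, size=n_applicants)          # coin-flip label

# Pearson correlation of every feature with the random outcome.
centered_outcome = outcome - outcome.mean()
numerators = features.T @ centered_outcome
denominators = (
    np.linalg.norm(features - features.mean(axis=0), axis=0)
    * np.linalg.norm(centered_outcome)
)
correlations = numerators / denominators

best = np.argmax(np.abs(correlations))
print(f"Most 'predictive' attribute: {best}, r = {correlations[best]:+.2f}")
# The strongest spurious correlation is typically around |r| = 0.25-0.30 here:
# a pattern that "radiates in all directions" from pure noise.
```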
To illustrate the problem of apophenia for employment decision-making,
consider this hypothetical example. Company A decides to use an
unsupervised machine learning algorithm to create a profile of the ideal
78 Id.
79 Id.
80 Id.
81 Id.
82 See Anderson, supra note 59.
83 Id.
84 Id.
85 Id.
86 Id.
87 Boyd & Crawford, supra note 60, at 668.
88 Id.
89 Id. at 670. Professor Jim Greiner exposes the same type of problem in civil rights litigation,
when the use of regression analysis can prompt unjustified causal inferences. See D. James
Greiner, Causal Inference in Civil Rights Litigation, 122 HARV. L. REV. 533 (2008).
C. Data-laundering
Perhaps an opposite problem to seeing patterns where there are none is
the potential for large data sets to be deployed to create patterns based on
faulty threads of causation, all with the goal of masking intentional
discrimination. I term this feature “data-laundering,” that is, the use of data to
“launder” or disguise intentional discrimination. In their seminal article,
Barocas and Selbst argue that existing law mostly fails to address the
discrimination that comes from data mining because some instances of
discriminatory data mining will not generate legal liability under Title VII.94
Based on the idea that data mining is “always a form of statistical
discrimination,”95 the authors describe five mechanisms by which
discriminatory outcomes might occur. The five mechanisms are 1) defining the
target variable, 2) labeling and collecting training data, 3) using feature
selection, 4) using proxies, and 5) masking. Notably, the authors argue “[t]he
90 This real-life case highlights exactly why I make the case in another law review article that
there ought to be an auditing imperative for hiring algorithms. Ifeoma Ajunwa, Automated
Employment Discrimination (unpublished manuscript) (on file with author).
91 See Dave Gershgorn, Companies Are on the Hook if Their Hiring Algorithms Are Biased,
definition of the target variable and its associated class labels determine what
data mining happens to find,”96 and concerns with discrimination enter at this
stage because whatever choices are made will influence whether there are
adverse impacts on protected classes.97
Secondly, labeling and collection of training data is important because the
effectiveness of data mining is dependent on the quality of the data from which
it draws lessons.98 Data should serve as a good sample of a protected group in
order for data mining to be a nondiscriminatory basis for future decision-
making.99 This is not always the case, however, and in an act of data-laundering,
the decision-maker may choose to use data known to be incomplete or
inaccurate. Next, the authors indicate that organizations “make choices about
what attributes they observe and subsequently fold into their analyses”100
through the process of feature selection. This could result in a discriminatory
impact on legally protected classes if the factors that “better account for
pertinent statistical variation among members of a protected class are not well
represented in the set of selected features.”101
For example, making an employment decision based on an individual’s
criminal record would have a disparate impact on protected racial groups given
that mass incarceration has disproportionately impacted racial minorities in the
United States.102 Similarly, I would note that using a “lack of gaps in
employment” as a hiring criterion could negatively impact women candidates,
as women disproportionately leave the workforce to shoulder the family
burden of child or elderly care. Thus, as the authors note, the existence of
proxies could also be a mechanism that drives discrimination if “the criteria
that are genuinely relevant in making rational and well-informed decisions also
happen to serve as reliable proxies for class membership.”103 As Barocas and
Selbst explain, decision-makers with prejudicial values can mask their
intentional discrimination as accidental by exploiting the mechanisms above
because the data mining process helps conceal the fact that those decision-
makers considered class membership.104
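One conventional way to surface such proxy effects is to compare selection rates across groups, as in the EEOC’s four-fifths rule of thumb. The sketch below is illustrative only; the screening criterion and all of the applicant counts are hypothetical.

```python
# Illustrative only: testing a facially neutral screen (here, "no employment
# gap longer than six months") for disparate impact with the EEOC's
# four-fifths rule of thumb. All applicant counts are hypothetical.

def selection_rate(selected, applicants):
    """Share of a group's applicants who pass the screening criterion."""
    return selected / applicants

rate_men = selection_rate(selected=480, applicants=600)     # 80%
rate_women = selection_rate(selected=220, applicants=400)   # 55%

impact_ratio = rate_women / rate_men
print(f"Selection rate, men:   {rate_men:.0%}")
print(f"Selection rate, women: {rate_women:.0%}")
print(f"Adverse impact ratio:  {impact_ratio:.2f}")

# A ratio below 0.80 is generally treated as evidence of adverse impact,
# which the employer would then have to justify as job related and
# consistent with business necessity.
if impact_ratio < 0.8:
    print("Below 0.80: the facially neutral criterion operates as a proxy.")
```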
96 Id. at 680.
97 See id.
98 See id. at 687.
99 See id.
100 Id. at 688.
101 Id.
102 See id. at 690; see also MICHELLE ALEXANDER, THE NEW JIM CROW: MASS INCARCERATION IN THE AGE OF COLORBLINDNESS (2010).
105 Boyd & Crawford, supra note 60 (noting the aura of efficiency associated with big data-
driven algorithms).
106 ERIK BRYNJOLFSSON & ANDREW MCAFEE, THE SECOND MACHINE AGE: WORK, PROGRESS, AND PROSPERITY IN A TIME OF BRILLIANT TECHNOLOGIES (2014).
how computer algorithms may find it difficult to decipher language changes that are readily
comprehensible to humans). But see, e.g., Erin Winick, Lawyer-Bots Are Shaking up Jobs, MIT
TECH. REV. (Dec. 12, 2017), https://www.technologyreview.com/s/609556/lawyer-bots-
are-shaking-up-jobs/. “While computerization has been historically confined to routine tasks
involving explicit rule-based activities, algorithms for big data are now rapidly entering
domains reliant upon pattern recognition and can readily substitute for labour in a wide
range of non-routine cognitive tasks.” Carl Benedikt Frey & Michael A. Osborne, The
Future of Employment: How Susceptible are Jobs to Computerisation? (Sept. 17, 2013)
(unpublished manuscript),
http://www.oxfordmartin.ox.ac.uk/downloads/academic/The_Future_of_Employment.pdf
; see also ERIC SIEGEL, PREDICTIVE ANALYTICS: THE POWER TO PREDICT WHO WILL CLICK,
BUY, LIE, OR DIE (2013).
to gain employment.108 This is particularly true for the U.S. low-wage and
hourly workforce, as a co-author and I found through a survey of the top
twenty private employers in the Fortune 500 list (comprising mostly retail
companies).109 That survey indicated that job applications for such retail jobs
must be submitted online, where they will first be sorted by automated hiring
platforms powered by algorithms.110
The algorithmic capture of the hiring process also goes beyond the hourly
workforce, as white collar and white shoe firms are increasingly turning to
hiring automation.111 In 2016, the investment firm Goldman Sachs announced
a key change to its process for hiring summer interns and first-year analysts.112
Candidates now have their resumes scanned—ostensibly by machine learning
algorithms—in search of keywords and experiences that have been pre-judged
to be “good barometers of a person’s success at Goldman.”113 Goldman Sachs
has also considered the addition of personality tests as part of its hiring
program.114 The world’s largest hedge fund has taken the automation gambit
the furthest: starting in 2016, it began building an algorithmic model that would
automate all management, including hiring, firing, and other managerial
decision-making processes.115 Thus, automated hiring represents an ecosystem
in which, if left unchecked, a closed loop system forms—with algorithmically-
driven advertisement determining which applicants will send in their resumes,
automated sorting of resumes leading to automated onboarding and eventual
automated evaluation of employees, and the results of said evaluation looped
back into criteria for job advertisement and selection.
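A stylized simulation can illustrate why such a closed loop, left unchecked, entrenches whatever disparity it starts with. The model below is an assumption-laden sketch, not a description of any vendor’s system: it simply asks what happens when each cycle’s ad targeting mirrors the demographics of the previous cycle’s hires.

```python
# Stylized sketch of the closed loop described above: each hiring cycle's ad
# targeting mirrors the demographics of the previous cycle's hires. All numbers
# and the "feedback" parameter are illustrative assumptions, not vendor data.

def run_hiring_loop(initial_share_b, cycles, feedback):
    """Track group B's share of hires when ad targeting blends the true
    applicant pool with the demographics of the prior cycle's hires."""
    pool_share_b = 0.5              # group B is half of the qualified pool
    share = initial_share_b
    history = [round(share, 3)]
    for _ in range(cycles):
        share = (1 - feedback) * pool_share_b + feedback * share
        history.append(round(share, 3))
    return history

# Fully closed loop: an initial 30% share of group B is reproduced forever.
print(run_hiring_loop(initial_share_b=0.3, cycles=5, feedback=1.0))
# Partially open loop: the skew decays only because outside information about
# the real applicant pool re-enters the targeting decision each cycle.
print(run_hiring_loop(initial_share_b=0.3, cycles=5, feedback=0.5))
```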
Games and Artificial Intelligence – And It’s a Huge Success, BUS. INSIDER (June 28, 2017, 9:30
AM), http://www.businessinsider.com/unilever-artificial-intelligence-hiring-process-2017-6;
Louis Efron, How A.I. Is About to Disrupt Corporate Recruiting, FORBES (July 12, 2016, 01:57
PM), https://www.forbes.com/sites/louisefron/2016/07/12/how-a-i-is-about-to-disrupt-
corporate-recruiting/#75ae172d3ba2.
112 Mary Thompson, Goldman Sachs Is Making a Change to the Way It Hires, CNBC: FIN. (June
115 The World’s Largest Hedge Fund Is Building an Algorithmic Model from Its Employees’ Brains, WALL STREET J. (Dec. 22, 2016, 01:14 PM),
https://www.wsj.com/articles/the-worlds-largest-hedge-fund-is-building-an-algorithmic-
model-of-its-founders-brain-1482423694.
[Figure: The closed loop of automated hiring. Algorithmically-driven advertisement: potential for discrimination via exclusion of target audiences as justified by evaluation results. Algorithmically-driven resume sorting: potential for discrimination through selection criteria which are proxy variables. Automated onboarding: potential for inadvertent discrimination through one-size-fits-all [design]. Automated evaluation: potential for discrimination via evaluation criteria which are proxy variables.]
116 These legal challenges exist precisely because even with computing advancements that
allow computers to perform non-routine cognitive tasks, as noted by the legal scholar Cass
Sunstein, “at the present state of art, artificial intelligence cannot engage in analogical
reasoning or legal reasoning.” Kevin Ashley, Karl Branting, Howard Margolis & Cass R.
Sunstein, Legal Reasoning and Artificial Intelligence: How Computers “Think” Like Lawyers, 8 U.
CHI. L. SCH. ROUNDTABLE 1, 19 (2001); see Surden, supra note 48, at 88 (detailing gaps in the
law in regards to machine learning algorithms); see also Barocas & Selbst, supra note 3, at 673–
74 (detailing issues of disparate impact associated with algorithmic decision-making).
117 Cohen, supra note 37, at 189.
118 Id. at 190.
119 Id. at 199.
124 Want Less-Biased Decisions? Use Algorithms., HARV. BUS. REV. (July 2018),
https://hbr.org/2018/07/want-less-biased-decisions-use-algorithms.
125 Id.
126 Id.
127 Id.
128 Id.
129 Id.
130 Id.
131 Id.
132 Id.
133 Id.
134 Id.
135 Id.
136 Id.
137 Id.
138 Bornstein, Antidiscriminatory Algorithms, supra note 12.
139 Id. at 520.
140 See id. at 521–23.
stereotypes and implicit biases that often infect human decisions.”141 However,
Professor Bornstein acknowledges that despite the promise of algorithms to
reduce bias in decision-making, there are concerns about algorithmic
discrimination and the risk of reproducing existing inequality142 because the
effectiveness of algorithmic decision-making greatly relies on what data is
used and how.143 Yet, Professor Bornstein believes that if algorithms are
handled properly, they can still “suppress, interrupt, or remove protected class
stereotypes from decisions.”144
As noted earlier, my aim is not to arbitrate whether algorithms are less
biased than humans,145 and I do not believe that such a determination is
necessary to observe the inadequacy of current laws to govern machine
learning algorithms or to conceive of better legal frameworks. Therefore,
although in many respects the algorithmic turn to hiring is purportedly driven
by a desire for fairness and efficiency—for example, Goldman Sachs’s hiring
changes were prompted by a desire for a more diverse candidate pool,146 as
these machine learning algorithms may have the (un)intended effects of
perpetuating structural biases or could have a disparate impact on protected
categories,147 the law should evolve more robust governing mechanisms to
guard against those outcomes. In the next sub-section, I detail how bias may
still creep into algorithmic decision-making systems in the context of
recruitment and hiring.
algorithms); Barocas & Selbst, supra note 3 (detailing issues of disparate impact associated
with algorithmic decision-making).
148 See infra Section III.A.
1. Recruitment
A recent ProPublica investigation revealed that Facebook allowed
advertisers (both for jobs and for housing) to exclude audiences by ethnic
group.150 In what investigators described as “a modern form of Jim Crow,”151
Facebook had developed a feature it termed “Affinity Groups”—essentially, a
method for advertisers to use demographic data to algorithmically target who
will receive certain Facebook ads.152 For example, one page on Facebook
Business, titled “How to Reach the Hispanic Audience in the United States,”
boasts of the potential for advertisers to reach up to 26.7 million Facebook
users of “Hispanic Affinity.”153 From this specific Affinity Group, advertisers
can choose to narrow in on bilingual users, those who are “Spanish dominant,”
or those who are “English dominant,” in order to “refine their audiences.”154
Although, ostensibly, this algorithmic feature might help business owners
refine their audiences and target ads to individuals who might be more likely
customers, the use of Affinity Groups as an ad distribution tool holds high
potential for unlawful discrimination. In demonstration of this discriminatory
potential, ProPublica reporters were able to buy dozens of rental housing ads on
Facebook that excluded “African Americans, mothers of high school kids,
people interested in wheelchair ramps, Jews, expats from Argentina, and
Spanish speakers.”155
Following on the heels of this ProPublica investigation, a 2017 class action
lawsuit against Facebook contended that Facebook Business tools both
“enable and encourage discrimination by excluding African Americans,
Latinos, and Asian Americans – but not white Americans from receiving
advertisements for relevant opportunities.”156 In an amended complaint, the
class action also alleged that “Facebook offers a feature that is legally
149 Olivier Sylvain, Discriminatory Designs on User Data, KNIGHT FIRST AMENDMENT
INSTITUTE AT COLUM. U.: EMERGING THREATS (Apr. 01, 2018),
https://knightcolumbia.org/content/discriminatory-designs-user-data; see also Olivier
Sylvain, Intermediary Design Duties, 50 CONN. L. REV. 203 (2018).
150 Julia Angwin & Terry Parris Jr., Facebook Lets Advertisers Exclude Users by Race, PROPUBLICA (Oct. 28, 2016).
(2018),
https://www.facebook.com/business/help/717368264947302?helpref=page_content.
153 See U.S. Hispanic Affinity on Facebook, FACEBOOK BUS. (2018),
https://www.facebook.com/business/a/us-hispanic-affinity-audience.
154 See id.
155 Jessica Guynn, Facebook Halts Ads That Exclude Racial and Ethnic Groups, USA TODAY
157 See First Amended Class and Collective Action Complaint, Bradley v. T-Mobile, Inc., No. 17-
cv-07232-BLF, at 21 (N.D. Cal. May 29, 2018),
https://www.onlineagediscrimination.com/sites/default/files/documents/og-cwa-
complaint.pdf.
158 See id.
159 See id. See generally, About Lookalike Audiences, FACEBOOK BUS.: ADVERTISER HELP,
2. Hiring
Job recruitment algorithms on platforms like Facebook are, however, not
the sole problem. Algorithms that quickly sort job applicants based on pre-set
criteria may also (inadvertently) be unlawfully discriminatory. In her book,
Weapons of Math Destruction, Cathy O’Neil poignantly illustrates how personality
tests may serve to discriminate against one protected class, job applicants
suffering from mental disabilities.162 In one class action, the named plaintiff,
Kyle Behm, a college student with a near-perfect SAT score who had been
diagnosed with bipolar disorder, found himself repeatedly rejected for
minimum wage jobs at supermarkets and retail stores that all used a personality
test that had been modeled on the “Five Factor Model Test” used to diagnose
mental illness.163 Thus, personality tests, as part of automated hiring systems,
could be seen as a covert method for violating antidiscrimination law—
specifically, the Americans with Disabilities Act.164 In addition, other test
questions, such as the length of commute time, could be seen as covertly
discriminating against those from under-resourced neighborhoods which lack
a reliable transportation infrastructure.165
In addition to personality tests, companies are using other algorithmic
processes to screen applicants. For example, the company HireVue offers
virtual interviews with individual applicants. HireVue’s innovative hiring tool
identifies facial expressions, vocal indications, word choice, and more.166 The
problem is that “speech recognition software can perform poorly” and “facial
analysis systems can struggle to read the faces of women with darker skin.”167
Some skeptics express their concerns about the legitimacy of using physical
164 The Americans with Disabilities Act of 1990 (Pub. L. No. 101-336) grants mentally ill workers equal opportunity in employment. See, e.g., Press Release,
U.S. Equal Emp’t Opportunity Comm’n, Worker with Bipolar Disorder to Receive $91,000 in
Disability Discrimination Case Settled by EEOC (Mar. 18, 2003),
https://www.eeoc.gov/eeoc/newsroom/release/3-18-03b.cfm; see also Depression, PTSD, &
Other Mental Health Conditions in the Workplace: Your Legal Rights, U.S. EQUAL EMP’T
OPPORTUNITY COMM’N, https://www.eeoc.gov/eeoc/publications/mental_health.cfm (last
visited Aug. 10, 2019).
165 Debra Cassens Weiss, Do Job Personality Tests Discriminate? EEOC Probes Lawyer’s Complaint,
Filed on Behalf of His Son, A.B.A. J. (Sept. 30, 2014, 9:08 AM),
http://www.abajournal.com/news/article/do_job_personality_tests_discriminate_eeoc_pro
bes_lawyers_complaint_filed_o.
166 Hilke Schellmann & Jason Bellini, Artificial Intelligence: The Robots Are Now Hiring, WALL STREET J. (2018).
features and facial expressions that have no causal link with workplace success
to make hiring decisions.168
Another example of automated hiring is the use of algorithms to conduct
social media background checks. Such checks are fraught with issues for
several reasons. First, they “presume that a person’s online behaviors, like
some use of foul language, are relevant to their professional activities.”169
Second, they have “limited ability to parse the nuanced meaning of human
communication.”170 In addition, such checks could “surface details about an
applicant’s race, sexual identity, disability, pregnancy, or health status, which
employers should not consider during the hiring process.”171
Finally, as the last step of the hiring process, employers make offers to
applicants using automated hiring systems. For example, there exist software
programs that predict the probability that a candidate will accept a given job
offer and that suggest what the employer could do to increase those chances.
These programs allow the employer, for instance, to “adjust salary, bonus, stock
options, and other benefits to see in real time how the prediction changes.”172
The worry remains that such programs might amplify pay gaps for white
women and racial minorities because the data commonly include “ample
proxies for a worker’s socioeconomic and racial status, which could be
reflected in salary requirement predictions.”173 They might also undermine laws
that bar employers from considering candidates’ salary histories.174
in setting parameters for solving any given problem, with the final result
attributed solely to the machine.176 Consider that proponents of automation
have always tended to downplay or deny the role of the human mastermind.177
As an early example, consider “The Mechanical Turk” also known as “the
chess Turk,” which was a chess-playing machine constructed in the late
eighteenth century.178 Although the Mechanical Turk was presented as an
automaton chess-playing machine that was capable of beating the best human
players, the secret of the machine was that it contained a human operator,
concealed inside its chambers.179 The hidden chess master controlled the
machine while the seemingly automated machine beat notable statesmen, like
Napoleon Bonaparte and Benjamin Franklin, at chess.180 Thus, the Mechanical
Turk operated on obfuscation and subterfuge and sought to reserve the glory
of the win to the machine.181
With the growing allure of artificial intelligence as a venture-capital-
generating marketing ploy,182 modern day corporations have been discovered
creations without demonstrating that they have made socially valuable contributions” and
concluding that “this is bad for competition and bad for consumers”).
176 Surden, supra note 48, at 115; Jatinder Singh, Ian Walden, Jon Crowcroft & Jean Bacon,
Responsibility & Machine Learning: Part of a Process (Oct. 27, 2016) (unpublished manuscript),
http://dx.doi.org/10.2139/ssrn.2860048 (arguing that machines can learn to operate in ways
beyond their programming levels, meaning that the responsibility for problems created by
the algorithms cannot lie solely with the algorithms creators or the algorithms themselves).
177 Cary Coglianese & David Lehr, Regulating by Robot: Administrative Decision Making in the
Machine-Learning Era, 105 GEO. L.J. 1147, 1207 (2017) (noting that machine-learning
technology is not yet fully understood and that most people “simply lack the interpretive
ability to . . . show that X causes Y on a machine-learning platform.”).
178 TOM STANDAGE, THE TURK: THE LIFE AND TIMES OF THE FAMOUS 19TH CENTURY
ANOMALIES (2000).
180 Id.
181 See PASQUALE, supra note 3 (arguing that algorithms operate on obfuscation). Conversely,
Amazon’s Mechanical Turk program does the opposite. The program allows businesses or
individual clients to assign human intelligence tasks, that is, tasks that are difficult or
impossible for machines to complete (like sorting photographs, writing product descriptions,
completing surveys, etc.) to humans. Amazon explicitly bans the use of automated bots to
complete such tasks. See AMAZON MECHANICAL TURK, https://www.mturk.com/ (last
visited Aug. 8, 2019).
182 See Ellen Huet, The Humans Hiding Behind the Chatbots, BLOOMBERG: TECH. (Apr. 18, 2016,
operating their own versions of the Mechanical Turk. Consider, for example,
that the humans on the Amazon Mechanical Turk crowd-sourcing work
platform consider themselves “the AI behind the AI.”183 On this internet-
based platform, human workers are recruited to accomplish mundane tasks
that are difficult for algorithms to tackle. These tasks, referred to as “human
intelligence tasks” (or HITs), include: “transcribing audio clips; tagging photos
with relevant keywords; copying photocopied receipts into spreadsheets.”184
While the work on Amazon Mechanical Turk, with its notoriously low pay, is no secret, a
Bloomberg exposé revealed that several corporations were disingenuously
passing off the labor of human workers as that of AI.185
Even when corporations are not attempting to pass off human workers as
AI, it is important to understand that there is always a human behind the AI.
Modern day algorithms operate in ways similar to the Mechanical Turk in that
the human decisions behind the creation of algorithms operated by businesses
are generally considered trade secrets that are jealously guarded and protected
from government oversight.186 But while algorithms might remove some
decisions from a human entity, humans must still make the initial decisions as
to what data to train the algorithm on and as to what factors are deemed
relevant or irrelevant.187 Even more importantly, the decisions about which features of
the training data are important—decisions that are then matched as closely as
possible by the algorithm—are also made by humans.188 For example, if a hiring
algorithm is trained on a corpus of resumes, a human must still make
183 Miranda Katz, Amazon’s Turker Crowd Has Had Enough, WIRED (Aug. 23, 2017, 06:55
AM), https://www.wired.com/story/amazons-turker-crowd-has-had-enough/.
184 Sarah O’Connor, My Battle to Prove I Write Better Than an AI Robot Called ‘Emma’, FIN.
robots pretending to be humans. In the past two years, companies offering do-anything
concierges (Magic, Facebook’s M, GoButler); shopping assistants (Operator, Mezi); and e-
mail schedulers (X.ai, Clara) have sprung up. The goal for most of these businesses is to
require as few humans as possible. People are expensive. They don’t scale. They need health
insurance. But for now, the companies are largely powered by people, clicking behind the
curtain and making it look like magic.”).
186 See Frank Pasquale, Restoring Transparency to Automated Authority, 9 J. TELECOMM. & HIGH
TECH. L. 235 (2011). Congress enacted the Digital Millennium Copyright Act (DMCA) in
1998. Pub. L. No. 105-304, 112 Stat. 2860 (1998) (codified as amended in scattered sections of 17 and
28 U.S.C.). Section 1201 of the DMCA creates liability for hacking or reverse engineering an
automated system protected under copyright law. 17 U.S.C. § 1201 (2012); see also Perel &
Elkin-Koren, supra note 175 (noting the chilling effect on researchers who would like to
reverse engineer automated processes, given the potential to incur liabilities).
187 Even when automated feature selection methods are used, the final decision to use or not
use the results, as well as the choice of feature selection method and any fine-tuning of its
parameters, are choices made by humans. For more on feature selection see, e.g., GARETH
JAMES, DANIELA WITTEN, TREVOR HASTIE & ROBERT TIBSHIRANI, AN INTRODUCTION TO
STATISTICAL LEARNING WITH APPLICATIONS IN R (1st ed. 2013).
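The point of footnote 187 can be seen in a minimal sketch, which assumes the scikit-learn library is available and uses synthetic data: even a fully “automated” feature selection routine runs entirely on parameters a human has chosen.

```python
# Minimal sketch of footnote 187's point, assuming scikit-learn is installed
# and using synthetic data: "automated" feature selection still runs on human
# choices (the scoring function, the number of features kept, the labels).
import numpy as np
from sklearn.feature_selection import SelectKBest, f_classif

rng = np.random.default_rng(1)
X = rng.normal(size=(300, 40))        # synthetic applicant features
y = rng.integers(0, 2, size=300)      # synthetic "hired" labels

# Both arguments below are human decisions, as is the choice to accept the result.
selector = SelectKBest(score_func=f_classif, k=5)
selector.fit(X, y)

kept = np.flatnonzero(selector.get_support())
print("Features the routine kept:", kept)
# A different k, a different score_func, or differently labeled y would yield
# a different set of "relevant" features.
```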
188 See, e.g., the way that the hiring startup Jobaline verifies its technique by using the ratings
that human listeners give to voice snippets of job candidates. Ying Li et al., Predicting Voice
Elicited Emotions, in PROCEEDINGS OF THE 21ST ACM SIGKDD INT’L CONF. ON
KNOWLEDGE DISCOVERY AND DATA MINING 1969 (2015).
189 Professor Sandra Mayson also makes this argument in regard to algorithmic decision-
making in the criminal justice field. Mayson, supra note 3 (arguing that the problem of
disparate impact in predictive risk algorithms lies not in the algorithmic system but in the
nature of prediction itself).
190 Perhaps the most emblematic example of the American legal system’s deference to
employers is the case of Lochner v. New York, 198 U.S. 45 (1905), which held that any limit on
the number of hours (in excess of 60 hours) that employees of a bakery could work was
unconstitutional. Although that particular decision has since been overturned, there is a
wealth of scholarship noting the continued deference to employers. See Cynthia Estlund, The
Ossification of American Labor Law, 102 COLUM. L. REV. 1527, 1527 (2002) (noting that the
“ossification of labor law” is due, in part, to a lack of “democratic revision”); see also Franita
Tolson, The Boundaries of Litigating Unconscious Discrimination: Firm-Based Remedies in Response to a
Hostile Judiciary, 33 DEL. J. CORP. L. 347 (2008). Courts want to avoid turning Title VII into a
rule by which employers could be held liable for “perceived slights” towards employees. Id.;
see also Kevin M. Clermont & Stewart J. Schwab, How Employment Discrimination Plaintiffs Fare
in Federal Court, 1 J. EMPIRICAL LEGAL STUD. 429 (2004) (claiming that employment
discrimination plaintiffs (unlike many other plaintiffs) have always done substantially worse
in judge trials than in jury trials); Michael J. Zimmer, The New Discrimination Law: Price
Waterhouse is Dead, Whither McDonnell Douglas?, 53 EMORY L.J. 1887, 1944 (2004) (“The 5.8
percent reversal rate of defendant trial victories is smaller in employment discrimination
cases than any other category of cases except prisoner habeas corpus trials.”); see also Ruth
Colker, The Americans With Disabilities Act: A Windfall for Defendants, 34 HARV. C.R.-C.L.L.
REV. 99, 100 (1999) (looking at reported decisions from 1992-1998 and finding that
defendants prevailed in more than 93% of the cases decided at the trial court level and were
more likely to be affirmed on appeal); Theodore Eisenberg, Litigation Models and Trial
Outcomes in Civil Rights and Prisoner Cases, 77 GEO. L.J. 1567, 1567 (1989) (noting that only
claims filed by prisoners have a lower success rate than that of employment discrimination
plaintiffs).
191 Clermont & Schwab, supra note 190.
192 Wendy Parker, Lessons in Losing: Race Discrimination in Employment, 81 NOTRE DAME L.
197 “The rule of employment at will allows either the employer or the employee to terminate
the employment relationship at any time for good reason, bad reason, or no reason.” Julie C.
Suk, Discrimination at Will: Job Security Protections and Equal Employment Opportunity in Conflict, 60
STAN. L. REV. 78 (2007). At-will employment is the law in every U.S. state except for
Montana. See At-Will Employment - Overview, NAT’L CONF. ST. LEGISLATURES (Apr. 15, 2018),
http://www.ncsl.org/research/labor-and-employment/at-will-employment-overview.aspx.
198 Suk, supra note 197.
199 Id. at 81.
200 CYNTHIA ESTLUND, WORKING TOGETHER: HOW WORKPLACE BONDS STRENGTHEN A
Liability Does Not Induce Hiring Quotas, 74 TEX. L. REV. 1487, 1489 (1996).
203 Suk, supra note 197, at 81.
204 St. Mary’s Honor Ctr. v. Hicks, 509 U.S. 502, 508 (1993).
205 Under McDonnell Douglas Corp. v. Green, 411 U.S. 792 (1973), a plaintiff may establish a prima
facie case without direct evidence by proving (1) that he was a member of a protected group,
(2) that he was qualified for the job, (3) that he applied for the job and was rejected, and (4)
that, after his rejection, the position remained open. Id. at 802.
206 St. Mary’s Honor Ctr., supra note 204, at 508.
bias, particularly given the well-documented technological capability of such
hiring systems to substitute facially neutral variables as proxies for protected
demographic characteristics such as race and gender.207 The nature
of the hiring relationship can be explained succinctly by the following quote:
“the typical matching of a worker to a position does not reflect the outcome
of workers picking from among several job offers. Rather, it is the result of an
employer picking from among several job applicants.”208 Employers choose
candidates and not the other way around. Today, with the growing prevalence of
online job applications, job seekers apply to an average of twenty-seven jobs
before they obtain a single interview.209 Of course, since only 17% of interviews
actually result in an offer of employment, it is likely that these applicants apply
to far more than twenty-seven jobs over the course of their entire job search.210 In
one extreme case, an applicant even built his own algorithm to apply to
thousands of jobs at once in an attempt to "beat" being sorted out by
automated hiring platforms.211
On the employer’s side of this surge in applications, on average, fifty-nine
people apply for each open position.212 From this pool, the employer must then
winnow out a large number of candidates in order to arrive at a manageable set
to interview and, ultimately, hire. Given the size of the applicant pool, an average
of only 12% of applicants will be interviewed for any open position.213 Employers
must therefore use the information available to them to eliminate many applicants
before they can make substantial progress in finding the "most talented
candidates." The sheer necessity of this culling has led some scholars to support
near-total employer discretion in the hiring process.214 Yet, it is undeniable that
granting such near-total discretion opens the door for human bias to enter the
employment decision-making process. The subsequent use of algorithmic systems
only allows that bias to become entrenched and more difficult to detect.
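To illustrate how a facially neutral variable can operate as a proxy, consider the following minimal sketch (written in Python with synthetic data; the feature names and figures are hypothetical, not drawn from any actual hiring platform). Because the "neutral" feature is correlated with a protected attribute, a model that never receives the protected attribute as an input can nonetheless recover it:

# Illustrative sketch with synthetic data: a facially neutral feature ("zip_code")
# that is correlated with a protected attribute lets a model recover that attribute
# even though the attribute itself is never an input. All names are hypothetical.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 2000
protected = rng.integers(0, 2, size=n)               # hypothetical protected attribute
zip_code = protected + rng.normal(0, 0.3, size=n)    # "neutral" feature correlated with it
years_experience = rng.normal(5, 2, size=n)          # genuinely unrelated feature

X = np.column_stack([zip_code, years_experience])
X_train, X_test, y_train, y_test = train_test_split(X, protected, random_state=0)

model = LogisticRegression().fit(X_train, y_train)
print("protected attribute recovered from 'neutral' features with accuracy:",
      round(model.score(X_test, y_test), 2))

In practice, real proxies are subtler than a single correlated feature, but the mechanism is the same.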
http://time.com/money/4053899/how-long-it-takes-to-get-hired/.
211 Robert Coombs, I Built a Bot to Apply to Thousands of Jobs at Once – Here’s What I Learned,
Legal, and Ethical Ramifications of Cultural Profiling at Work, 14 DUKE J. GENDER L. & POL’Y
369 (2007).
218 Margaret Rouse, What is Cultural Fit?, TECH TARGET: SEARCHCIO (last updated Sept.
2014), https://searchcio.techtarget.com/definition/Cultural-fit.
219 JOBVITE, JOBVITE RECRUITER NATION REPORT 2016: THE ANNUAL RECRUITING
Social and Cultural Response to Women Has Shaped Securities Regulation, 33 HARV. J.L. & GENDER
175 (2010) (arguing that cultural fit within the finance industry is imperfect, as bias has been
historically ingrained).
222 Robert Half, How to Know If a Candidate Will Match Your Company Culture, ROBERT HALF
223 Jeff Pruitt, 3 Ways to Know If an Employee Is a Culture Fit, INC. (Aug. 12, 2016),
https://www.inc.com/jeff-pruitt/3-ways-to-know-if-an-employee-is-a-culture-fit.html.
224 Richard Pascale, The Paradox of “Corporate Culture”: Reconciling Ourselves to Socialization, 27
232 Id.
233 Id. at 1159.
234 Id. (citing Charles R. Lawrence III, The Id, the Ego, and Equal Protection: Reckoning with
Plaintiff described her treatment at the school as discriminatory from the start;
the principal snubbed her at the first staff meeting, disciplined her but not
another teacher, and came late to her scheduled evaluations. In addition, the
principal told Natay at one point that she was “geographically, racially,
culturally, and socially out of place” at the school, and her contract was not
renewed after unfavorable evaluations.245 The school district superintendent
also decided that the plaintiff was “not an excellent teacher and not someone
[he] would want Murray School District to hire on a long-term basis.”246 Natay
had an informal conference with the superintendent, but her arguments did
not change the decision. Of the district’s provisional teachers hired for that
school year, only Natay’s contract was not renewed, and on her last day of
work, the principal made another racially derogatory comment to her.247 Natay
brought a discriminatory discharge claim in federal district court, and the court
entered summary judgment in favor of the employer. She appealed.
On appeal, the Tenth Circuit reviewed the district court's grant of summary
judgment under the same standards, considering the evidence and the
reasonable inferences that could be drawn from it. The court stated that
although the plaintiff presented evidence of the principal's discriminatory
conduct, she lacked evidence showing that the superintendent, the ultimate
decision-maker, had a discriminatory reason not to renew her contract.248
Under the "cat's paw" doctrine, she failed to prove that the "manager who
discharged the plaintiff merely acted as a rubber stamp, or the 'cat's paw,' for a
subordinate employee's prejudice,"249 so the principal's discriminatory intent
could not, standing alone, be imputed to the superintendent's decision. The
court also applied the McDonnell Douglas burden-shifting framework, concluding
that the plaintiff established a prima facie case and thereby shifted the burden
to the school district. Although the plaintiff claimed that the superintendent's
investigation of her performance was inadequate because he never sat in her
classroom to observe her, the court credited the superintendent's affidavit,
which detailed the other steps in his investigation and his decision not to renew
the contract due to her ineffectiveness. Thus, the court concluded that the
plaintiff's showing did not reasonably give rise to an inference that the
employer's reasons were pretextual and affirmed the decision of the district
court.250
the criteria used in algorithmic hiring must have some probative value for
determining fitness to perform required job duties. This proposal is supported
by new studies showing that "cultural fit" is not always necessary for
long-term success at a firm. One such study, conducted by business professors
at Stanford and Berkeley, found that the capacity to change and flexibility—
that is, high "enculturability"—were more important than pre-existing cultural
fit in regard to long-term success.251 According to the authors of the study:
Our results suggest that firms should place less emphasis on screen[ing] for
cultural fit, . . . [a]s other work has shown, matching on cultural fit
often favors applicants from particular socioeconomic backgrounds,
leading to a reduction in workplace diversity. Instead, our work points
to the value of screening on enculturability.252
The study concludes with three enculturability questions that employers might
pose to potential candidates during the hiring process: “1. To what extent do
candidates seek out diverse cultural environments? 2. How rapidly do they
adjust to these new environments? 3. How do they balance adapting to the
new culture while staying true to themselves?”253
As a result of such new studies, more companies are moving away from
“cultural fit” as a factor for hiring. For example, in a bid to create a more
inclusive hiring process, Facebook outlawed the term “culture fit” as interview
feedback, “requiring interviewers to provide specific feedback that supported
their position.”254 Facebook also took steps to “proactively identify
unconscious bias” in their interview process and “developed a ‘managing
unconscious bias’ training program.”255
Other companies now embrace “hiring for values fit” as a method to
decrease unconscious bias in interviewing. For example, Atlassian, an
Australia-based company, redesigned its interview process: “values fit
interviewers are carefully selected and given training on topics like structured
interviewing and unconscious bias." The interview is structured around a set of
behavioral questions designed to assess whether a candidate would thrive in an
environment shaped by the company's values.256 As one of Atlassian's chief officers
explains: “Focusing on ‘values fit’ ensures we hire people who share our sense
of purpose and guiding principles, while actively looking for those with diverse
251 Amir Goldberg, Sameer B. Srivastava, V. Govind Manian, William Monroe &
Christopher Potts, Fitting In or Standing Out? The Tradeoffs of Structural and Cultural Embeddedness,
81 AM. SOC. REV. 1190 (2016).
252 Rich Lyons, Lose Those Cultural Fit Tests: Instead Screen New Hires for ‘Enculturability’, FORBES
https://www.forbes.com/sites/larsschmidt/2017/03/21/the-end-of-culture-
fit/#3045454e638a.
255 Managing Unconscious Bias, FACEBOOK NEWSROOM (July 28, 2015),
https://newsroom.fb.com/news/2015/07/managing-unconscious-bias/.
256 Schmidt, supra note 254. A description of Atlassian’s company values is available here:
https://www.atlassian.com/company/values.
viewpoints, backgrounds, and skill sets. We’re trying to build a healthy and
balanced culture, not a cult.”257 This approach has borne positive results for
Atlassian. In 2015, 10% of their technical workforce identified as female. In
2016, 17% of recent hires were women, and women held 14% of all technical
roles. Similarly, in 2015 their U.S.-based team had 23% of employees
identifying as people of color. In 2016, people of color comprised 32% of their
new hires.258
Kenneth C. Laudon, Markets and Privacy, in ICIS 1993 PROCEEDINGS 65, 70–71 (1993)
(proposing a “National Information Market” within which “information fiduciaries
would . . . accept deposits of information from depositors and seek to maximize the return
on sales of that information in national markets or elsewhere in return for a fee”). Professor
Jack Balkin popularized the term in several writings. See Jack Balkin, Information Fiduciaries in
the Digital Age, BALKINIZATION (Mar. 5, 2014),
https://balkin.blogspot.com/2014/03/information-fiduciaries-in-digital-age.html; Jack M.
Balkin, Information Fiduciaries and the First Amendment, 49 U.C. DAVIS L. REV. 1183 (2016); Jack
M. Balkin, Free Speech in the Algorithmic Society: Big Data, Private Governance, and New School Speech
Regulation, 51 U.C. DAVIS L. REV. 1149, 1160–63 (2018); Jack M. Balkin, Free Speech Is a
Triangle, 118 COLUM. L. REV. 2011, 2047–55 (2018); Jack M. Balkin, AEGIS SERIES PAPER
NO. 1814, FIXING SOCIAL MEDIA’S GRAND BARGAIN 11–15 (2018),
https://www.hoover.org/sites/default/files/research/docs/balkin_webreadypdf.pdf.
Professor Jonathan Zittrain has also made important theoretical contributions to the concept
of information fiduciaries. See Jack M. Balkin & Jonathan Zittrain, A Grand Bargain to Make
Tech Companies Trustworthy, ATLANTIC (Oct. 3, 2016),
https://www.theatlantic.com/technology/archive/2016/10/information-fiduciary/502346;
Jonathan Zittrain, Facebook Could Decide an Election Without Anyone Ever Finding Out, NEW
REPUBLIC (June 1, 2014), https://newrepublic.com/article/117878/information-fiduciary-
solution-facebook-digital-gerrymandering; Jonathan Zittrain, How to Exercise the Power You
Didn't Ask For, HARV. BUS. REV. (Sept. 19, 2018), https://hbr.org/2018/09/how-to-
exercise-the-power-you-didnt-ask-for; Jonathan Zittrain, Mark Zuckerberg Can Still Fix This
Mess, N.Y. TIMES (Apr. 7, 2018),
https://www.nytimes.com/2018/04/07/opinion/sunday/zuckerberg-facebook-privacy-
congress.html.
261 Balkin, Information Fiduciaries and the First Amendment, supra note 260, at 1209.
262 Id. at 1209.
263 Id. at 1209.
264 Lina M. Khan & David Pozen, A Skeptical View of Information Fiduciaries, 133 HARV. L.
REV. (forthcoming 2019).
265 See id.
266 See id.
267 Cf. James Grimmelmann, When All You Have Is a Fiduciary, LAW & POL. ECON. (May 30,
building block towards further theorization regarding online platforms, and also
as one of several potential checks on the currently unbridled power of
automated hiring platforms to serve exclusionary ends. I believe there could be
a multiplicity of approaches to the governance of online platforms for the
betterment of society. For example, while I have hitherto tended to focus on
governmental action as the appropriate governance mechanism for algorithmic
bias, other legal scholars, like Professor Sonia Katyal, have called for private
accountability measures.268 No one scholar can claim a cure-all solution to the
problem of algorithmic bias; rather than reject proposed solutions outright as
nostrums, we should acknowledge that we are all blind men grasping at the
elephant and that our collective intellectual attempts may yet reveal a full view
of the problem at hand. Several legal scholars have
recently called for the extension of fiduciary duties to other areas of law.269
And some legal scholars have also argued against the expansion of fiduciary
duties.270 I argue that established concepts from organizational theory
scholarship further bolster the argument that hiring platforms are performing
a brokerage function and thus should be considered fiduciaries.271
My theorizing thus works to clarify both the power and information
asymmetries present in the triad of job applicant, hiring platform, and
employer. For example, with platform authoritarianism, I make clear the
unequal power relationship between the job applicant and the platform, which
allows the platform to dictate the manner in which the job applicant may make
use of the platform, thus belying the caretaking imagery conjured by a doctor-
patient analogy. With the tertius bifrons concept, I reveal the duplicitous
relationship between the hiring platform and the job applicant, which in turn
supports the argument for greater employment discrimination liability for the
platform.
268 Katyal, supra note 16 (calling for a variety of tools to tackle algorithmic accountability
such as codes of conduct, impact statements, and whistleblower protection).
269 See Theodore Rave, Politicians as Fiduciaries, 126 HARV. L. REV. 671 (2013);
see also Theodore Rave, Fiduciary Voters, 66 DUKE L.J. 2 (2016); Cf. Seth Davis, The False
Promise of Fiduciary Government, 89 NOTRE DAME L. REV. 3 (2014).
270 See James Grimmelmann, Speech Engines, 98 MINN. L. REV. 868, 904 (2014) ("[W]e are
undergoing something of an academic fiduciary renaissance, with scholars arguing for
treating legislators, judges, jurors, and even friends as fiduciaries.”); Daniel Yeager, Fiduciary-
isms: A Study of Academic Influence on the Expansion of the Law, 65 DRAKE L. REV. 179, 184
(2017) (arguing that “academic writing, deploying a sense of fiduciary so open as to be
empty, has influenced courts to designate” more entities as fiduciaries).
271 I take as authority the definition of brokerage set forth by Marsden: brokerage is
1. Platform Authoritarianism
As Professor Olivier Sylvain has noted, platforms “shape the form and
substance of their users’ content.”272 Furthermore, platforms also shape
relationships as they connect users to one another while also enjoying “a great
deal of control over how users' encounters are structured."273 In evaluating
certain design choices that these companies make, such as how much information
users can learn about one another and by what means, one argument is that online
platforms can make choices that exacerbate the discrimination in our current
society.274 Makers of platforms thus cannot be blameless for the discrimination
that occurs on them, even if their users may be influenced by pre-existing
biases.275 Accordingly, I theorize "platform authoritarianism" as a socio-technical
phenomenon that has transformed the responsibility and liability of
platforms.276
Platform authoritarianism is what I term our present social position vis-à-
vis platforms, wherein creators of platforms demand that we engage with those
platforms solely “on their dictated terms, without regard for established laws
and business ethics.”277 Some scholars have noted that many online platforms
“can control who is matched with whom for various forms of exchange, what
information users have about one another during their interactions, and how
indicators of reliability and reputation are made salient."278 This means, for
example, that job applicants on hiring platforms must acquiesce to the platforms'
data demands; they also have no control over how their candidacy is presented,
but must relinquish that control to the platform as the quid pro quo for access to
job opportunities. Rejecting platform authoritarianism in favor of
a duty of care that the purveyors of online platforms owe to their users is the
first step towards returning to a rule of law for algorithms.
272 Olivier Sylvain, Discriminatory Designs on User Data, KNIGHT FIRST AMENDMENT
INST. AT COLUM. U.: EMERGING THREATS (Apr. 1, 2018),
https://knightcolumbia.org/content/discriminatory-designs-user-data.
273 Karen Levy & Solon Barocas, Designing Against Discrimination in Online Markets, 32
B. Discrimination Per Se
Because holding corporations responsible for the algorithmic bias of the
automated hiring platforms they use presents a challenging legal problem,
given the difficulty of discovering proof and establishing intent, I propose a
new burden-shifting theory of liability: discrimination per se.288 Discrimination per
se would allow for a third cause of action under Title VII.289 The purpose is to
aid plaintiffs who cannot show proof of disparate treatment or who would
have difficulty marshaling the statistical proof of disparate impact. Title VII
requires intent for liability to attach or, in the absence of intent, a clear
demonstration of disparate impact with no business necessity to excuse the
disparity.290 When bringing disparate impact claims, plaintiffs are likely to face
three interrelated obstacles: "(1) compiling the requisite statistics to show that
the policy has a disparate impact . . . (2) identifying a specific policy or practice
that caused the adverse employment decision, and
basis of sex, race, color, national origin, and religion. See U.S. Civil Rights Act of 1964 §7, 42
U.S.C. §2000e (2012). Plaintiffs must establish that “a respondent uses a particular
employment practice that causes a disparate impact on the basis of [a protected
characteristic] and the respondent fails to demonstrate that the challenged practice is job
related for the position in question and consistent with its business necessity.” 42 U.S.C. §
2000e-2(k)(1)(A)(i).
290 Proving clear intent is necessary when attempting to make a disparate treatment case
under Title VII. However, under the disparate impact cause of action codified in Title VII,
intent is inferred from an established pattern. See U.S. Civil Rights Act of 1964 §7, 42 U.S.C.
§ 2000e-2(k)(1)(A).
(3) rebutting the employer’s defense that the policy is justified by a business
necessity.”291 Also notable, “courts are inconsistent in addressing the
requirement of compiling appropriate statistics to show that a policy has a
disparate impact.”292 Second, courts often fail to find a “particular employment
practice” that caused the disparity because they cannot distinguish actual job
tasks from the default norms.293 Many times, courts use the phrase “particular
employment practice” to narrow the applicability of disparate impact
liability.294
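To illustrate, with invented numbers, the kind of statistical showing at issue: suppose an automated screen advances 60 of 100 applicants from one demographic group but only 24 of 100 applicants from another.

selection rate, group A = 60 / 100 = 0.60
selection rate, group B = 24 / 100 = 0.24
impact ratio = 0.24 / 0.60 = 0.40

Under the four-fifths (80%) benchmark commonly used as a rule of thumb for adverse impact, a ratio of 0.40 would suggest disparate impact; yet whether such figures suffice, and at what level of statistical significance, is precisely where courts diverge.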
In their essay, Incomprehensible Discrimination, Professors James
Grimmelmann and Daniel Westreich make the case that when a plaintiff has
met the burden of showing disparate impact, "the defendant's burden to show
a business necessity requires it to show not just that its model's scores are
correlated with job performance but [that they] explain it."295 This heightened
burden acknowledges the information asymmetry that exists between the
employer and the employee in the context of automated hiring. My proposed
doctrine of discrimination per se, while concurring that there is a duty of care owed
by the employer, seeks to further rectify both the information asymmetry and
the power imbalance present in automated hiring by entirely shifting the
burden of proof from plaintiff to defendant.
Per my proposal, a plaintiff can assert that a hiring practice (for example,
the use of proxy variables resulting in, or with the potential to result in, adverse
impact on protected categories) is so egregious as to amount to discrimination per
se, and this would shift the burden of proof from the plaintiff to the defendant
(employer) to show that its practice is non-discriminatory. I do not set forth a
specific rule or standard for how to determine discrimination per se; rather, I think
this is a question of law that, like other American legal doctrines, should be
developed through case law. Note that this is not an automatic win for the
plaintiff; rather, it merely reverses the American legal tradition of deference to
employers and ensures that an employment discrimination plaintiff will at least
get a day in court. Note also that it remains relatively easy for employers to
establish business necessity for their practices and thereby defeat a plaintiff's
disparate impact claim.296
Discrimination per se is an answer to the question of whether the liability of
corporations could be mitigated by a lack of intent to discriminate or even a
has been able to show business necessity, a plaintiff may nevertheless be able to prevail by
showing that there could be an “alternative employment practice” that meets that business
necessity.” §703 (k)(1)(A) & C. See also, Jones v. City of Boston, 845 F.3d 28 (1st Cir. 2016).
297 Professor Charles Sullivan has also grappled with these questions. See Sullivan, supra note
12, at 398.
298 Singh et al., supra note 176.
299 Id.
300 Bornstein, Reckless Discrimination, supra note 12.
301 See id. at 1056.
302 Id.
303 Id. at 1058.
304 Id.
305 See id. at 1110.
liability standard of res ipsa loquitur for racial discrimination claims,306 and I
believe that in some instances such a standard might be warranted, I argue that
a discrimination per se standard that is modeled on the negligence per se standard is
more generally applicable (that is, it would apply to various cases of
employment discrimination, not just racially-motivated discrimination) and
also serves to institute more feasible self-regulation practices. The concept of
discrimination per se is also in line with Professor Ford’s argument that
employment discrimination law imposes a duty of care on the employer to
ensure that its employment practices are not unlawfully discriminatory.307 Note
that, as I explain in another article in progress, Automated Employment
Discrimination, the discrimination per se doctrine should work hand in hand
with an “auditing imperative” imposed on the employer.308 This takes into
consideration the practical problems associated with proving disparate impact
in an algorithmic hiring scenario and would allow a plaintiff to make some
headway in the case.
The prototypical negligence per se case involved a Minnesota drug store clerk who
sold a deadly poison to a customer at the customer’s request.309 At the time of
the sale, the clerk did not label the substance as a “poison,” which was required
by a state statute for the sales of such substances.310 Later, the customer who
had purchased the substance ingested the chemical, which caused her death.311
Given these facts, should the clerk have been held legally liable for his actions,
which indirectly caused the customer’s death? This case, Osborne v. McMasters,
became one of the earliest cases in the United States to analyze the legal
concept of negligence per se. Given the facts of the case, the court first found that
there could be no “serious doubt of defendant’s liability”—as he had known
of his duty to label the bottle as poison.312 In explanation, the court detailed
that it was
well settled . . . that where a statute or municipal ordinance imposes
upon any person a specific duty for the protection or benefit of others,
if he neglects to perform that duty he is liable to those for whose
protection or benefit it was imposed for any injuries of the character
which the statute or ordinance was designed to prevent . . . .313
Since the time of Osborne, the doctrine of negligence per se has become
commonly used for violations of laws such as traffic laws, building codes,
blood alcohol content limits, and various federal laws.314 For example, in
306 See Girardeau A. Spann, Race Ipsa Loquitur, 2018 MICH. ST. L. REV. 1025 (2019).
307 See, e.g., Ford, supra note 30 (arguing that employment law imposes a duty of care on
employers to avoid decisions that undermine social equality).
308 I discuss the "auditing imperative" in another article, Automated Employment Discrimination
(draft manuscript available).
309 Osborne v. McMasters, 41 N.W. 543 (Minn. 1889).
310 Id.
311 Id.
312 Id. at 543.
313 Id.
314 See, e.g., Williams v. Calhoun, 333 S.E.2d 408 (Ga. App. 1985) (in which the defendant’s
failure to stop at a stop sign constituted negligence per se); Lombard v. Colo. Outdoor Educ.
Mikula v. Tailors, an Ohio business invitee was taken to the emergency room
after falling in a snow-covered parking lot at the place of business to which
she was invited.315 Witnesses reported seeing her fall after stepping into a hole
in the parking lot that was about seven inches deep and had been covered by
the snowfall from that day. After careful consideration, the jury
determined that
[a] deep hole in a parking lot which is filled or covered, or both, by a
natural accumulation of snow constitutes a condition, the existence of
which the owner of the premises is bound, in the exercise of reasonable
care, to know. He is also bound to know that a natural accumulation
of snow which fills or covers the hole is a condition substantially more
dangerous than that normally associated with snow. . . . Under such
circumstances, the owner’s failure to correct the condition constitutes
actionable negligence.316
Moreover, failure to correct an issue can also lend itself to negligence per se
claims if the accused individual is found to have violated a statute by his or her
failure to respond to a problem. For example, in Miller v. Christian, a landlord
was found negligent per se after being placed on notice from a tenant that the
building’s sewage system had recurring problems.317 Failure to “fix the
immediate problem within a reasonable amount of time” resulted in a backup
of the sewage system, which caused the tenant’s apartment to flood, ruining
much of her personal property.318 The court in Miller found that Allan
Christian, the landlord, was liable for the damage to the tenant’s property
because he had a legal duty to maintain the apartment’s sewage system in
addition to being legally obligated to keep the premises fit for habitation.319
Often, “failure to correct” claims entail a consideration of whether the
plaintiff knew of the problem, as it is presumed that a plaintiff with knowledge
of an existing problem would be reasonable enough to avoid injury by the issue
altogether. In one case, Walker v. RLI Enterprises, a tenant in an apartment
building sued her landlord after she stepped out the back door of the building
and slipped on a sheet of ice.320 She suffered serious injuries to her ankle.321 In
her suit, the tenant asserted that the landlord was negligent in maintaining the
property, because she had given him notice of a leaky water faucet by the back
door of her apartment.322 This negligence, the court determined, was negligence
Ctr., Inc., 187 P.3d 565 (Colo. 2008) (in which an outdoor education teacher fell off of a
ladder that was in violation of building code restrictions, establishing negligence per se on
the part of the landowner); Purchase v. Meyer, 737 P.2d 661 (Wash. 1987) (in which a cocktail
lounge was found negligent per se for serving alcohol to a minor).
315 Mikula v. Tailors, 263 N.E.2d 316 (Ohio 1970).
316 Id. at 322–23.
317 Miller v. Christian, 958 F.2d 1234 (3d Cir. 1992).
318 Id. at 1234.
319 V.I. CODE ANN. tit. 29, § 333(b)(1) (2019).
320 Walker v. RLI Enters., Inc., No. 89325, 2007 WL 4442725, at *1 (Ohio Ct. App. 2007).
321 Id.
322 Id.
per se because the landlord had an obligation to maintain the premises under
Ohio law.323
At trial, however, the landlord argued that “a landlord is only liable where
he has ‘superior knowledge’ of the defect that led to the injury.”324 By this the
landlord meant that as the tenant had alerted him of the problem, the tenant
then clearly knew as much about the dangerous conditions as he did. He also
noted that she had taken no further action to avoid the leaky faucet and could
thus be responsible for her own injury.325 However, the court found this
argument unconvincing, holding that such an argument only applies in the
context of natural accumulations of ice and snow, because most people have
experienced such conditions and know that they should take precautions.326
Site-specific problems, though, are the responsibility of the landlord to correct,
as he likely has "superior" knowledge of issues on the property compared to his
tenants or site visitors.327
In the case of automated hiring systems, employers have an obligation not
to unlawfully discriminate against applicants, as proscribed by Title VII of the
Civil Rights Act and other federal antidiscrimination laws. Furthermore, as I
propose in a separate paper, if self-audits or external audits of hiring algorithms
become mandated by law,328 then it follows that when an employer willfully
neglects to audit and correct its automated hiring systems for unlawful bias, a
prima facie intent to discriminate could be implied, pursuant to the proposed
doctrine of discrimination per se. This argument becomes persuasive when one
considers that some corporations make use of bespoke internal hiring
algorithms, such that no one, except the corporation, has access to the hiring
algorithm and its results—meaning that only the corporation could have
“superior knowledge” of any problems of bias.
There are two important arguments against the introduction of the
discrimination per se doctrine: (1) the difficulty of establishing a standard for
when the doctrine should apply; and (2) the burden it would impose on the
employer. Regarding the first, I agree that it will take
some work on the part of the courts to establish clear precedents for when
the doctrine could apply. But this is true for any new legal doctrine. In fact,
even established legal doctrines still face contestation as to when they should
or should not apply.329 Consider that in the context of automated hiring the
two legal doctrines currently available to the plaintiff on which to build a case
are disparate treatment or disparate impact. The fact is that there are very few
cases of disparate treatment because employers are now much too
sophisticated to leave the kind of “smoking gun” evidence required. For
disparate impact, the problem is that there is wide discrepancy in determining
what statistics are enough to show a pattern of disparate impact disparate
impact.330
Regarding the burden on employers, the fact remains that automated
hiring is a cost-saving measure. Employers save significant amounts of money
and time by using automated hiring platforms. However, automated hiring
platforms should not save employers from their duty not to discriminate. Just
like an employer holds a responsibility to supervise its human workers for
activities that might contravene the law, so, too, does it retain an obligation to
audit its automated hiring systems for bias. This burden is neither heavier than when
the intermediary is human, nor does it disappear merely because the
intermediary is a set of algorithms. The doctrine of discrimination per se is meant
to prevent employers from shirking their responsibility.
eligibility.333 They would then submit these reports to banks and employers,
showing the "risk" that the individual posed for lending or employment
purposes. The FCRA was thus passed to prevent unfair or opaque credit
reporting.334
The law also protects consumers from unfair background checks and
unauthorized collections of their private information, ensuring that consumers
are alerted to any information that may adversely affect their ability to obtain
either credit or, as more recently enforced, employment.335 Moreover, the law also
protects consumers by providing that creditors or employers must “obtain a
written authorization from any applicant or employee for the procurement of
a report, and certify to the consumer reporting agency its compliance with the
requirements of the statute such that it will not violate any equal
employment opportunity law.”336 Through such provisions, the FCRA gives
consumers more control over how their personal information is reported by
consumer reporting agencies and used by both banks and employers.
However, since the time of its passage, the FCRA's reach has expanded
such that it no longer applies only to the "Big Three" credit reporting
agencies.337 Now, it also applies to a variety of agencies that collect and sell
information that is found outside the workplace and that might be pertinent
for applicant-reviewing purposes.338 With the coverage of many consumer
reporting agencies (CRAs) whose sole purpose is employment pre-screening,
a question has arisen regarding the point at which a screening service should
be considered a CRA under the FCRA. In essence, how large a role does a
reporting agency have to play in the information collection and reporting
process in order to face such substantial government regulation?
The language of the FCRA plainly defines the characteristics of entities
that can be considered CRAs, as well as the content of reports that can be
considered “consumer reports” under the law. A consumer reporting agency,
by definition, is any “person which, for monetary fees, dues, or on a
cooperative nonprofit basis, regularly engages in whole or in part in the
practice of assembling or evaluating consumer credit information or other
information on consumers for the purpose of furnishing consumer reports to
third parties.”339 Application screening software companies could be
https://files.consumerfinance.gov/f/201604_cfpb_list-of-consumer-reporting-
companies.pdf.
338 See, e.g., CHECKR, INC., https://Checkr.com (last visited Aug. 10, 2019) (which screens
applicants for criminal records, driving records, and also provides employment verifications,
international verifications, and drug screenings); HIRERIGHT LLC., www.HireRight.com (last
visited Aug. 10, 2019) (which boasts the industry’s broadest collection of on-demand
screening applications); FIRST ADVANTAGE CORP., www.fadv.com (last visited Aug. 10,
2019) (which provides criminal and pre-employment background checks, as well as drug-
testing and tenant screening services).
339 15 U.S.C. § 1681a(f).
You’re Not a Consumer Reporting Agency Isn’t Enough, FED. TRADE COMMISSION (Jan. 10, 2013,
2:00 PM), https://www.ftc.gov/news-events/blogs/business-blog/2013/01/background-
screening-reports-fcra-just-saying-youre-not.
352 Id.
353 Id.
354 Id.
355 Id.
356 Thompson v. San Antonio Retail Merchs. Ass'n, 682 F.2d 509 (5th Cir. 1982); see also Spokeo, Inc.
v. Robins, 136 S. Ct. 1540, 1546 (2016) (in which a "people search engine" provided incorrect
personal information about a consumer to employers, and the Supreme Court held that the
consumer had to demonstrate a concrete injury, such as damage to his employment prospects,
rather than a bare procedural violation of the FCRA, in order to have standing).
357 Id.
358 Ryan Calo, Open Robotics, 70 MD. L. REV. 101 (2011).
359 Id. at 106.
CONCLUSION
Proponents of algorithms have favorably likened their workings to those of an
oracle. For those adherents, the algorithm is all-knowing and will infallibly
provide the answers the intrepid inquirer seeks. This represents a simplistic
understanding of the opaque nature of an oracle. Consider the ur-Oracle, the
Oracle of Delphi.360 The Oracle, a figure known in Greek mythology, spoke
truthfully, but in pronouncements spun as riddles and open to many strands of
interpretation.361 In the most famous tale of the Oracle, the King of Lydia—
who faced a war against the Persians—asked for the Oracle’s advice. However,
the King failed to fully interrogate the Oracle and did so at his own peril,
departing with a seemingly simple answer that “if he went to war, a great
empire would surely fall.”362 Of course, this advice was highly vulnerable to
misinterpretation, and the King’s own empire later fell to the Persians.363
Similarly, algorithms deployed in the decision-making process are vulnerable
to misinterpretation and misuse. Although automated hiring platforms offer
efficiency to the hiring process, we must continue to interrogate their results
to ensure they are working in furtherance of the shared goal of an equal
opportunity society.
360 See WILLIAM BROAD, THE ORACLE: ANCIENT DELPHI AND THE SCIENCE BEHIND ITS
LOST SECRETS (2006).
361 See Mark Cartwright, Delphi, ANCIENT HIST. ENCYCLOPEDIA (Feb. 23, 2013),
https://www.ancient.eu/delphi/.
362 See id.
363 See id.