Resetting Antidiscrimination Law in the Age of AI - Harvard Law Review (2025)

On June 21, 2022, the U.S. Attorney for the Southern District of New York brought suit against Meta Platforms1 for discriminating against Facebook users through its housing advertisement system.2 Meta had developed a machine learning tool that allowed advertisers to select target characteristics and display ads to users with those features.3 Using the tool, Meta’s algorithms disproportionately delivered housing ads for majority-Black zip codes to Black Facebook users, and ads for majority-white zip codes to white users.4 In United States v. Meta Platforms, Inc.,5 the government alleged that Meta had “intentionally discriminated on the basis of race” and other characteristics protected by the Fair Housing Act6 through its design and use of these algorithms.7

In many ways, Meta foreshadowed the discrimination-related puzzles ushered in by the evolution of algorithmic decisionmaking—those enabled by artificial intelligence (AI).8 One could imagine different remedies for the kind of algorithmic discrimination alleged in Meta. Meta could discontinue use of its ad algorithms entirely. It could notify users that the ads they saw were determined by algorithm-enabled targeting. It could remove race and other protected characteristics from the datasets on which the algorithms were trained, in hopes of precluding the system from “relying” on such inputs. In reality, Meta quickly agreed to deploy a new “Variance Reduction System,” or VRS—a second algorithmic overlay designed to reduce certain biases of the ad tool by rebalancing its results to render it less discriminatory.9 The solution was to adjust the system’s results until its outputs were more representative of the full population that should be eligible to receive the ad.10
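To make the intuition concrete, the sketch below shows in schematic form what an output-rebalancing overlay of this kind might do: measure the gap between who actually saw an ad and who was eligible to see it, then reweight future delivery to shrink that gap. It is a hedged illustration, not a description of Meta’s actual VRS; the group shares and adjustment rule are hypothetical.

```python
# A minimal sketch of output rebalancing in the spirit of a "variance
# reduction" overlay (not Meta's actual VRS): compare the demographics of
# users shown an ad with the demographics of the eligible audience, then
# compute per-group multipliers that nudge delivery toward the eligible mix.
def delivery_adjustments(shown_shares, eligible_shares):
    """Return per-group multipliers that move delivery toward the eligible population."""
    adjustments = {}
    for group, eligible in eligible_shares.items():
        shown = shown_shares.get(group, 0.0)
        # Boost under-served groups, damp over-served ones.
        adjustments[group] = eligible / shown if shown else 1.0
    return adjustments

# Hypothetical measurement for one housing ad.
shown = {"black_users": 0.10, "white_users": 0.75, "other": 0.15}
eligible = {"black_users": 0.25, "white_users": 0.55, "other": 0.20}
print(delivery_adjustments(shown, eligible))
# e.g., {'black_users': 2.5, 'white_users': 0.733..., 'other': 1.333...}
```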

In the two years since generative AI was launched into popular use,11 legislators have attempted to regulate AI in all these ways and more. This Chapter surveys seventy-five proposed and enacted federal and state bills targeting AI-enabled discrimination12 in civil contexts and identifies four dominant methods of regulation: prohibiting use, regulating process, regulating inputs, and regulating outputs. Some of these methods combat AI discrimination by treating it as a new and unique harm, but the majority shoehorn AI into existing legal frameworks.

The various approaches to AI regulation reflect a longstanding debate in technology and the law. In the mid-1990s, Judge Easterbrook argued that just as society did not need to create a new legal discipline to regulate the once-novel technology of the horse, it did not need to invent a new legal paradigm to govern “cyberspace.”13 Under this logic, AI may not merit a new legal paradigm if it is the “horse” of our day. At the same time, however, the gaps that emerge when existing antidiscrimination law is extended to AI may reflect something more profound. Responding contemporaneously to Judge Easterbrook’s “law of the horse” framing, Professor Lawrence Lessig instead posited that because code changes the applicability of law in cyberspace, thinking critically about the “law of cyberspace” might teach us “something general about real space law.”14 In Lessig’s frame, then, the way we seek to regulate a new form of AI-enabled discrimination might teach us something about “real antidiscrimination law”—that is, how we ought to regulate old forms of the same harm by human decisionmakers.

Of the four regulatory methods this Chapter surveys, the most common for addressing AI-enabled bias regulates an AI system’s outputs.15 This method involves requirements to test, audit, report, or adjust AI systems based on their results. But the corollary to this type of regulation under existing antidiscrimination doctrine is disparate impact—a regime that some fear has been rendered constitutionally suspect. Rather than concede that output-oriented AI regulations are doomed, however, we can instead use the instinct to regulate AI in this way to reflect on how our current legal regime governing both machines and humans is deeply incomplete.

This Chapter proceeds in three sections. Section A summarizes current antidiscrimination law and its major gaps as applied to AI-enabled decisionmaking. Section B surveys bills targeting AI that have been proposed or enacted at the state or federal level since 2022, categorizes them into four regulatory methods, and analyzes each method’s corollaries to existing legal frameworks. Section C analyzes the viability of output-based regulations—the most common method for addressing AI-enabled discrimination—under the Supreme Court’s decisions in Ricci v. DeStefano16 and Students for Fair Admissions, Inc. v. President & Fellows of Harvard College17 (SFFA). It then offers two paths to secure their validity, either by embracing tech exceptionalism and creating an AI carveout, or by revisiting the standard we apply to human decisionmaking. The prevalence of legislative efforts to extend disparate impact liability to AI decisionmaking favors the latter. As new regulations begin to form the “law of AI,” their methods underscore how and why decisions like Ricci and SFFA should be read narrowly with regard to “real antidiscrimination law,” which should allow correction of formalistically neutral AI and human decisionmaking systems.

A. Antidiscrimination Law Before AI

U.S. law has long had frameworks to combat intentional and systemic discrimination in the context of human decisionmaking, but those frameworks may be inadequate to address bias in the age of AI. This section outlines two pre-AI antidiscrimination regimes: constitutional protections under the Equal Protection Clause of the Fourteenth Amendment18 (EPC) and statutory protections under Title VII of the Civil Rights Act of 1964.19 It then notes their deficiencies when applied to AI.

The EPC guarantees “the equal protection of the laws”20 by limiting state action that distinguishes between groups of people, particularly when done on the basis of identity characteristics like race.21 The protective reach of the EPC, however, has been curtailed by the Supreme Court, which has interpreted the EPC as effectively barring only intentional discrimination.22 This intent requirement has been criticized as difficult to determine;23 conceptually strained when ascribed to groups like legislative bodies;24 and counter to the purpose and “spirit” of the EPC.25

When it comes to AI, the intent requirement could render the EPC a dead letter. For one, whose intent should matter? In the human context, there is a debate over how a legislator’s impermissible purpose affects the intent analysis.26 AI magnifies this problem because the decisionmaking chain is far more complex. Is the relevant intent that of the many humans whose decisions make up the AI’s training data? That of the developers and red teamers27 who select training data and then build, test, and calibrate the AI systems? That of the users who use AI in both predictable and unforeseen ways? To further complicate matters, AI systems have disaggregated “supply chains” that may trigger different technologies and datasets each time they are activated, meaning this web of potential actors can differ with every use.28

What’s more, even if one could say that an AI system itself acted with “intent,” why did it act as it did? The “reasoning” behind AI intent is becoming increasingly difficult, if not impossible, to discern,29 creating what some scholars have termed the “Black Box Problem.”30 Professor Yavar Bathaee and others have argued that the black box breaks down the EPC’s intent test entirely;31 Bathaee argues that AI should instead be governed by alternative tests like strict liability or negligence, depending on the risk context.32 Another option is to infer intent from the effects of a policy or system, but the Supreme Court has increasingly eschewed such inferences in the absence of affirmative intent.33 One could instead apply principles like respondeat superior liability when AI acts on a human’s behalf;34 at least one federal court has held that an AI system vendor can be liable for a hiring decision under such an agency liability theory.35 Yet another proposed solution is to impose ex ante design and transparency requirements.36

The second antidiscrimination regime is Title VII, a federal statutory scheme that prohibits employment discrimination on the basis of “race, color, religion, sex, or national origin.”37 Though Title VII is limited to employment,38 other regimes provide similar antidiscrimination protections in housing,39 credit,40 and federal funding.41 Unlike the EPC, Title VII is not limited to intentional discrimination: Instead, it creates a burden-shifting framework under which employees can challenge practices that have disparate impacts on protected classes.42 If, for example, an employee makes a prima facie showing that a practice leads to disproportionate hiring43 of men over women, then an employer must show that the practice is “job related” and “consistent with business necessity.”44 Even if the employer meets that showing, it can still be held liable if it refused to adopt a less discriminatory “alternative” that would satisfy its business needs.45

Disparate impact liability has an obvious appeal in the AI context. While an AI system’s intent may be difficult or impossible to discern, its outcomes are readily observable.46 Yet scholars have expressed concern that disparate impact under Title VII may be ineffective to combat AI-enabled discrimination. First, the burden-shifting framework may not effectively restrict “proxy discrimination” because employers can show that an AI system is designed to optimize for factors rooted in “business necessity.”47 And as the use of AI becomes ubiquitous,48 it may be difficult to argue that a non-AI system could also adequately meet an employer’s legitimate business needs.49

Second, disparate impact doctrines may be on uncertain constitutional footing. In 2009, the Supreme Court laid down a rather fine line between adherence to and violation of statutory antidiscrimination regimes in Ricci. At issue was a City’s decision to throw out the results of its fire department’s promotion eligibility exam after the City observed significant racial disparities in the results, out of fear it could be held liable for disparate racial impact.50 A group of mostly white firefighters then sued the City under Title VII and the EPC, arguing that the decision to throw out the results amounted to intentional discrimination.51 The Supreme Court ruled for the plaintiffs,52 leaving unclear whether disparate impact regimes might be invalid as a form of unconstitutional intentional discrimination53—doubts that have only grown stronger following SFFA.54

In summary, the core machinery of constitutional and statutory antidiscrimination law begins to fray when applied to AI, either because AI has no discernible intent, or because AI-enabled decisionmaking may be defensible under disparate impact regimes. The next section outlines how proposed and enacted AI regulations have attempted to address these shortcomings—at times by bringing AI within the reach of existing regimes, and at others by creating novel methods for combatting AI-enabled discrimination.

B. A Typology of “AI Regulations”

The recent explosion of AI into our national consciousness has been accompanied by a flurry of legislative action. Legislatures have enacted dozens of AI laws;55 the Biden Administration issued several executive orders;56 and several federal agencies have announced their intent to enforce existing law against AI-assisted violations.57 This Chapter focuses on a subset of this activity—proposed and enacted state and federal legislation—to examine the intuitive response not just to AI, but also to discrimination more broadly.

This Chapter draws from a survey of seventy-five bills—sixty-two state and thirteen federal—introduced between January 2022 and September 2024.58 Of these, forty-four were expressly directed toward mitigating AI-enabled bias.59 The survey focuses on civil regulations of AI in consumer-facing sectors in which antidiscrimination principles are relevant, including healthcare, education, employment, housing, technology, financial services, and consumer protection. It includes legislation that primarily aims to establish AI commissions or working groups, as well as some regulations related to internet and data privacy that also cover AI systems. It excludes bills that encourage the adoption of AI by governmental actors60 and those that target use of AI in criminal contexts.61

This Chapter identifies four primary methods by which surveyed bills and laws target the development and use of AI: (a) prohibiting certain use cases; (b) regulating the process by which AI is used; (c) regulating the inputs of AI systems; and (d) regulating the outputs of AI systems. Though these four methods are analyzed as conceptually distinct, they are not mutually exclusive in application—surveyed bills frequently use more than one method to target AI.

Table 1: Methods of Surveyed Legislation

| Regulatory Method | Description | Legal Analogue | Possible Weaknesses |
| --- | --- | --- | --- |
| Prohibition Regulation | Total or conditional prohibitions on certain uses of AI | Strict liability | Stifles innovation |
| Process Regulation | Procedural requirements, including notice, appeals, and human review or decisionmaking | Due process; constructive knowledge | Provides only individual remedies; fails to establish intent |
| Input-Based Regulation | Prohibitions on (or selective permissibility of) using certain classes of data and their proxies in AI decisionmaking | Disparate treatment | May insulate proxy discrimination |
| Output-Based Regulation | Testing or auditing of AI systems for differential impact, either before or after the system is deployed | Disparate impact | Provides unclear guidance on implementation; may face constitutional challenges |

1. Prohibition Regulation.—Prohibition is the most categorical method of regulation and simply bars the use of AI in certain contexts. Many prohibition regulations are targeted at contexts where biased decisionmaking may be of concern. Proposed legislation in Colorado, for example, attempted to prohibit landlords from using any algorithmic system to determine residential rental rates.62 A proposed Rhode Island bill would have prohibited the use of facial recognition and AI-enabled algorithms in decisionmaking related to gambling.63 And legislation in South Carolina would have prohibited web application operators from using automated decision systems (ADSs) to curate content for users under eighteen.64

Prohibition regulations reflect an intuition that certain decisions are so significant that AI systems cannot be used in the process under any circumstances. Bills often refer to these as “consequential decisions” that materially affect the provision, denial, or cost of a critical service or opportunity, such as housing, employment, or education.65 Thus, even as AI tools are used to assist decisionmaking in an increasing number of contexts, perhaps we fear overreliance on technology66 or believe that some choices must be driven by human reasoning. Professor Aziz Huq and others have conceptualized this as “a right to a human decision” when individuals are “assigned a benefit or a coercive intervention.”67 Prohibitions on AI use codify a right to a human decision in certain contexts by carving decisions out from AI influence.

Prohibition regulations skirt the gaps in antidiscrimination law analyzed in section A by essentially imposing strict liability. An employer, for example, could be held liable simply for using AI for a prohibited purpose, such as terminating employees; the employee would not have to prove intent or show that the employer’s use of AI did not serve a legitimate business necessity. Prohibitions obviate the trickier aspects of AI regulation, such as identifying a responsible actor and determining their mens rea, which makes these regulations attractive from an evidentiary standpoint. That said, they are blunt instruments that stifle innovation and experimentation,68 which might explain their relative rarity among surveyed bills. This rarity suggests that prohibitions may be of limited utility in curbing discrimination in most contexts.

2. Process Regulation.—A finer-tuned version of prohibition is to regulate the process by which AI can be deployed. This approach has gained traction internationally69 and within the tech sector itself.70 Process regulations generally take one of two forms: (a) transparency requirements, such as notice, disclosure, or appeal mandates; or (b) requirements of human involvement in AI-enabled decisionmaking processes.

(a) Transparency Requirements.—Some regulations impose additional transparency requirements when AI is used in decisionmaking. These often come in the form of mandated disclosures. For example, legislation has been proposed to require disclosure when AI is used in pricing algorithms,71 employment decisions,72 and healthcare.73 Other bills would require that individuals be allowed to challenge an algorithmic determination or the data relied upon by an AI-enabled system.74

AI process regulations sometimes confer greater rights on individuals than they would have if the same decision were made by a human. For example, a proposed New Jersey bill would allow employees to “contest any [adverse] employment decision that results from the use of [an] automated employment decision tool” and to obtain information including, but not limited to, a written “statement of specific reasons for an adverse employment decision.”75 Under traditional at-will employment, employees have no right to know why adverse actions are taken against them.76 Under this bill, use of AI would provide the employee with greater access to information than they are entitled to under the at-will regime today.77

Transparency requirements may be grounded in notions of due process.78 Some have argued for creating distinct “technological due process”79 norms in certain contexts where automated or AI-enabled systems undermine notice and opportunity to be heard.80 Layering additional processes when AI is used in decisionmaking is thus a form of tech exceptionalism—it increases legal scrutiny when the government relies on technology that may erode due process rights.81 And regardless of whether the government or a private actor relies on AI, process regulations may also mitigate general social distrust of AI.82 By allowing people to challenge AI-enabled decisions or the underlying data, process rights mirror the more transparent participatory processes required in other adjudicatory settings, fostering a sense of fairness and promoting deference to legal determinations.83

Though transparency requirements are common,84 they may not do much to fill gaps in antidiscrimination law. Providing notice of the use of AI, for example, does not alter one’s actual intent under the EPC or an employer’s business necessity claim under Title VII. And while appeals processes might provide an opportunity to identify and correct discrimination in individual cases, if such discrimination is a symptom of broader bias in an AI system, case-by-case appeals are an unsatisfying remedy.85 At best, then, disclosures and appeals provide means of individual relief, but they seem ill-suited to rectify systemic bias in AI decisionmaking.

(b) Requirement of Human Involvement.—Other regulations require human involvement in decisions made with AI assistance. Unlike prohibitions, this method permits the use of AI but requires human engagement at certain stages, such as in the initial determination86 or on appeal.87 A Louisiana bill, for example, would bar healthcare providers from making decisions regarding patient care based solely on an AI determination.88 Other healthcare bills similarly mandate that patients have the option of interacting with a human provider for medical diagnoses89 or in patient care communications.90

Human involvement may bridge gaps in antidiscrimination law by imputing a kind of constructive knowledge of AI-enabled bias onto a human actor. For example, Amazon discovered in 2015 that an algorithm it had built (but not used) to screen resumes was penalizing resumes from female applicants.91 If Amazon had put that algorithm to use, the company could hypothetically have defended against a disparate impact claim by arguing that its custom-built algorithm was rooted in business necessity and that the company lacked an effective alternative. That might absolve Amazon of liability, but it would surely be a less convincing argument if someone at Amazon knew that its algorithm was biased, given that an obvious “alternative” would be to adjust the algorithm in light of that knowledge.92 The same logic extends to Meta’s ad deployers after they became aware of the biased operation of Meta’s housing algorithm.93 Adding a human into the loop94—that is, a person who is aware of an algorithm’s decisions over time—could thus make the deployer constructively aware that the AI’s “target variable” is biased.95 This knowledge could prompt investigation into alternatives.96 Requiring human involvement could also provide a basis for other types of liability, such as under an agency theory, as knowledge of a system’s bias increases its foreseeable risks.97
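As a rough illustration of how such constructive knowledge might accrue, the sketch below shows a deployer logging an AI tool’s decisions over time and alerting a human reviewer once the gap in favorable-outcome rates between two groups crosses a chosen threshold. The threshold, group labels, and alerting logic are hypothetical and are not drawn from any surveyed bill.

```python
# A minimal, hypothetical sketch of human-in-the-loop monitoring: record each
# decision an AI tool makes, and escalate to a human reviewer when the gap in
# favorable-outcome rates between two groups exceeds a chosen threshold.
from collections import defaultdict

ALERT_THRESHOLD = 0.10  # hypothetical: a 10-point gap triggers human review

class DecisionMonitor:
    def __init__(self):
        self.counts = defaultdict(lambda: {"favorable": 0, "total": 0})

    def record(self, group, favorable):
        self.counts[group]["total"] += 1
        self.counts[group]["favorable"] += int(favorable)

    def favorable_rate(self, group):
        c = self.counts[group]
        return c["favorable"] / c["total"] if c["total"] else 0.0

    def needs_human_review(self, group_a, group_b):
        gap = abs(self.favorable_rate(group_a) - self.favorable_rate(group_b))
        return gap > ALERT_THRESHOLD

# Usage: a reviewer checks the monitor periodically rather than each decision.
monitor = DecisionMonitor()
monitor.record("women", favorable=False)
monitor.record("men", favorable=True)
if monitor.needs_human_review("women", "men"):
    print("Escalate to human reviewer: selection rates have diverged.")
```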

Notably, however, human-in-the-loop98 solutions do not change the outcome of a constitutional discriminatory intent analysis. The Supreme Court has squarely rejected the idea that disparate impact claims are actionable under the EPC, even if there are known disparate effects.99 In the Amazon example, then, even if an employee began to notice that the tool selected more resumes from men, that would not be enough to transform inadvertent bias into intentional discrimination.100

3. Input-Based Regulation.—A third category of regulation targets the inputs of AI systems, typically by barring use of certain classes of data.101 An Illinois law, for example, would have barred the use of AI systems to consider an applicant’s race to reject an applicant for employment, assign risk factors to an applicant’s creditworthiness, or take adverse action on a credit decision.102 A California law requires healthcare-related AI systems to rely only on patients’ medical histories and clinical information, presumably barring models trained on demographic or other nonclinical information.103

Input-based regulations attempt to create characteristic-blind AI systems to prevent discrimination. The prohibited inputs are typically protected characteristics (most often, race) or their proxies (such as income, education, or zip code).104 There is a robust academic debate over whether requirements to create such mechanistically “race-neutral” algorithms mitigate, exacerbate, or themselves amount to discrimination under the EPC.105 Some have argued that input-based practices may be necessary to avoid “serious constitutional concerns.”106 Because reliance on a protected characteristic may be problematic even when it is just one factor among many,107 one might conclude that protected characteristics must be entirely withheld from AI systems,108 particularly if a system is a black box.109 Others have argued, however, that input restrictions are highly unlikely to be effective at combatting AI-enabled discrimination in practice. Professors Crystal Yang and Will Dobbie explain that race is so strongly correlated with other factors that withholding racial data will not prevent disparate sentencing recommendations by AI systems110—and in fact, that withholding such data could exacerbate race-based disparities.111

Input-based regulations do not appear to solve the weaknesses in current antidiscrimination law, and they may in fact have the perverse effect of insulating AI bias from the law’s reach. When it comes to discriminatory intent, those who use race-blind systems can argue that, because all protected characteristics were excluded, the AI’s output simply could not have been based on those characteristics. But in fact, AI systems are far better at identifying highly correlated variables than humans are,112 and mechanistically race-blind systems can readily find or create proxies for race even where such data has been categorically excluded from the system inputs.113 By assuming that AI can make “characteristic-blind” decisions at all, input-based regulations thus ironically provide cover for the very kinds of invidious discrimination the law is intended to prevent.
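The proxy problem can be illustrated with a toy example. In the hedged sketch below, a decision rule never sees race, only a synthetic zip-code feature; yet because that feature is correlated with race, approval rates still diverge sharply by race. The data, zip codes, correlation strength, and decision rule are entirely hypothetical.

```python
# A minimal sketch, on synthetic data, of why excluding race from a model's
# inputs need not make its outputs race-neutral: a retained feature (here, a
# hypothetical zip code) is correlated with race, so decisions keyed to that
# feature still track race.
import random

random.seed(0)

def synthetic_applicant():
    race = random.choice(["black", "white"])
    # Residential segregation makes zip code a strong proxy for race (assumed
    # 80% correlation for illustration).
    if race == "black":
        zip_code = "10456" if random.random() < 0.8 else "10583"
    else:
        zip_code = "10583" if random.random() < 0.8 else "10456"
    return {"race": race, "zip": zip_code}

def race_blind_decision(applicant):
    # The rule never sees race, only the zip-code feature.
    return "approve" if applicant["zip"] == "10583" else "deny"

applicants = [synthetic_applicant() for _ in range(10_000)]
for race in ("black", "white"):
    group = [a for a in applicants if a["race"] == race]
    rate = sum(race_blind_decision(a) == "approve" for a in group) / len(group)
    print(race, round(rate, 2))  # approval rates diverge despite race-blind inputs
```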

4. Output-Based Regulation.—The final regulatory method imposes requirements based on the outputs of AI systems, typically requiring testing or auditing114 to determine whether an AI system has biased effects.115 Among surveyed regulations, output-based methods are the most common for targeting AI-enabled bias and are included in roughly 82% of bills with a focus on discrimination. That figure rises to 94% if bills that primarily create AI commissions or working groups are excluded.116 Only a few regulations mandating bias audits actually define them, however;117 a New Jersey bill would require employers to calculate an AI system’s “selection rate,” defined as the ratio of favorable to unfavorable employment decisions along certain protected characteristics, as well as its “impact ratio,” or the ratio of favorable outcomes in the “protected class” compared to a “control class.”118 Both metrics rely on the sample outputs of AI systems to determine whether those systems are biased.
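For concreteness, the sketch below computes the two metrics from a hypothetical audit sample, following the definitions as paraphrased above rather than any statutory text; the group labels, counts, and the idea of flagging a low ratio are illustrative assumptions.

```python
# A minimal, illustrative sketch (not statutory language): computing a
# "selection rate" and "impact ratio" from the sample outputs of a hiring tool.
def selection_rate(favorable, unfavorable):
    """Ratio of favorable to unfavorable decisions within one group."""
    return favorable / unfavorable if unfavorable else float("inf")

def impact_ratio(protected_rate, control_rate):
    """Ratio of the protected class's rate to the control class's rate."""
    return protected_rate / control_rate if control_rate else float("inf")

# Hypothetical audit sample: decisions produced by an AI screening tool.
decisions = {
    "protected_class": {"favorable": 30, "unfavorable": 70},
    "control_class":   {"favorable": 60, "unfavorable": 40},
}

rates = {group: selection_rate(d["favorable"], d["unfavorable"])
         for group, d in decisions.items()}
ratio = impact_ratio(rates["protected_class"], rates["control_class"])
print(rates, round(ratio, 2))  # a regulator might flag ratios below some threshold
```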

The regulations differ in what response, if any, is required once testing reveals that a system has produced disparate results. Many regulations simply require that results be disclosed—at times to the public,119 but more often to an agency120 or state official.121 Some require disclosure of all impact assessment results,122 while others require public disclosure only if biased effects are found.123 Other regulations stipulate that systems that produce “discriminatory” results cannot be used at all.124 Between the two extremes, some regulations prohibit AI decisions that discriminate “based on” protected characteristics125 or “in a manner” that discriminates against a protected class.126 In these cases, it is not always clear whether a system that produces “biased” results can never be used, or whether its use is prohibited only in individual instances where results are shown to be biased. Still other regulations provide little to no guidance beyond the fact that an impact assessment must be conducted.127

The relationship between output-based regulations and existing antidiscrimination frameworks depends on when impact assessments are conducted and what follow-up actions are required. Some regulations mandate pre-implementation testing before an AI system may be put into use,128 while others require postdeployment auditing to evaluate the outcomes of an AI system after it has been relied upon.129

Pre-implementation testing can be evaluated under frameworks similar to those applied to the three other methods of regulation outlined above. Output-based regulations that prohibit the use of AI systems that fail pre-implementation testing are essentially conditional prohibitions. These regulations apply a strict liability framework, treating AI systems that fail bias testing as per se too risky for use. In contrast, regulations that require disclosure of pre-implementation testing results are akin to process and notice regulations.

Postdeployment auditing regulations, on the other hand, are analogous to both human-in-the-loop process regulations and disparate impact regimes. A deployer may become constructively aware of a system’s risks when auditing reveals that the system has produced discriminatory outputs, requiring the deployer to take corrective action under Title VII or other disparate impact regimes.

5. Reflections.—Of the four regulatory methods outlined above, some treat AI as meriting new legal paradigms, while others treat it as a twenty-first-century “horse” that existing doctrines can readily govern. Prohibitions and process transparency regulations place restrictions on AI above and beyond what human decisionmakers would face when acting without AI input,130 thereby treating AI as inflicting new harms distinct from human discrimination. By contrast, input- and output-based regulations and human involvement requirements appear to target AI discrimination as the latest manifestation of biased human decisionmaking, attempting to bring it within the reach of existing antidiscrimination schemes.131 The next section explores whether there should be a presumption that AI behavior is different from human behavior and what that might imply about “real antidiscrimination law” as applied to both human and AI-made decisions.

C. The Output-Based Puzzle

Given the array of regulations proposed in recent years, it is clear that legislators are concerned about AI-enabled discrimination. But output-based regulation, the most commonly proposed method for combatting AI bias, may face legal challenges under modern Supreme Court doctrine. This section offers two paths forward. First, treatment of AI-enabled discrimination should be carved out from any further erosion of disparate impact regimes. Second, and more conceptually, AI forces the question of whether the potentially discriminatory harms it creates are truly distinct from those in a pre-AI world. If not, output-based regulations may suggest that the same antidiscrimination regime should also normatively apply to human action. This path suggests that disparate impact doctrine must be retained and strengthened not just for AI, but for human decisionmaking as well.

1. Disparate Impact: A Weakening Regime.—Of those bills surveyed, nearly all proposed or enacted regulations targeting AI-enabled discrimination include a testing or auditing requirement.132 After Ricci, however, output-based regulations may be of uncertain constitutional validity. In Ricci, the City threw out its employment exam results in part to avoid disparate impact liability under Title VII.133 The Court held that the City’s decision amounted to “intentional discrimination,”134 a characterization that raises questions as to whether disparate impact regimes that require alteration of biased outputs will be held unconstitutional.135 Though the Court has not returned to this question since it decided Ricci in 2009, its 2023 decision in SFFA embraced an anticlassification view of the EPC that may render any consideration of protected characteristics legally suspect, even if one’s goal is to prevent disparate outcomes.136

2. An AI Carveout.—If the Supreme Court invalidates or continues to narrow disparate impact regimes, one way to protect output-based regulations is to create an AI carveout. Such an approach could be justified by at least two unique challenges posed by AI. First, and most critically, output-based regulation may be mechanistically necessary in the AI context, as the lack of transparency in AI decisionmaking137 may place AI-enabled bias beyond what our current adjudicative process can assess, prevent, and rectify. It is difficult, if not impossible, to detect AI-enabled bias without examining a system’s outputs.138 And while AI systems can be red-teamed or tested for bias prior to use, they may well produce more unanticipated results in the “real world”139 than other algorithmic and machine learning systems. Thus, examining an AI’s outputs may be the only effective way to scrutinize it.140 And other regulatory approaches have drawbacks: Flat prohibitions stifle innovation; process regulations operate at the individual level and are likely ineffective at addressing system-wide bias; and input-based regulations are widely considered inadequate protection against (or even a driver of) AI-enabled bias. In this world, a bar on output-based interventions could become a functional mandate to accept AI-assisted bias once a system has been deployed.

Second, a carveout could be justified on the view that AI-enabled decisionmaking is simply different from that of humans in its scale or nature. There is a sense that AI is unaccountable and that its decisions are arbitrary, which infringes on a dignitary interest in being able to attribute adverse decisions to a responsible actor and seek explanations for them.141 And because AI systems generate predictive outcomes based on prior human decisions, there is a very real concern that AI could amplify or legitimate existing human bias142 at a massive scale.143 Under this view, the effects of ubiquitous and nearly costless AI decisionmaking are of a different kind or order of magnitude than New Haven’s decision to toss the exam results of 118 firefighters.144

Both of these justifications are variations on tech exceptionalism: the idea that, when a technology is fundamentally different from what has come before, it justifies—and often forces—a change in the law.145 However, carving AI out of the disparate impact regime may produce anomalous results. A carveout would bifurcate decisions made by AI versus those made by humans and apply different antidiscrimination standards and remedies to each. Putting aside whether we can practically draw the line between human and AI decisions, a tech exceptionalist approach could result in a legal regime that is much more protective of antidiscrimination principles when AI is involved than when it is not. Even if this outcome is normatively desirable, it ultimately elides the question of why we believe disparate decisionmaking should be corrected in the first place. The following section provides an alternative doctrinal and conceptual treatment of AI-enabled bias that attempts to resolve this more fundamental question.

3. An AI Reflection.—The justifications for an AI carveout presuppose that there are fundamental differences between AI and human decisionmaking. Otherwise, there would be no need for additional protections when AI is involved.146 But is it really true that AI systems are more harmful or pernicious when it comes to biased decisionmaking than humans are? Advocates sometimes point to AI’s ability to reduce bias147 because humans make irrational and inconsistent decisions based on biased priors.148 And mechanistically, if most AI decisions simply reflect the human decisions on which they are trained, they act as highly accurate mirrors of human bias rather than independent sources of discrimination.149

With this in mind, there is an alternative way to treat AI-enabled discrimination that does not require a carveout. The emergence of AI offered a blank slate on which legislators could design a normatively desirable antidiscrimination regime. Thus, our instinctual response to regulating AI reflects how the law ought to mitigate discriminatory harms effectuated by any actor, whether machine or human. In other words, if AI is a mirror reflecting biased human judgments, then AI regulations are also a refraction of our desired legal remedies for human discrimination.

The prevalence of output-based regulations suggests that we find insufficient a regime that cannot correct for disparate outcomes of formalistically neutral systems. For one, the frequency of output-based regulations—as compared with process regulations, which provide largely individual remedies—indicates an appetite for systemic correction in addition to fair outcomes in individual cases. Moreover, regulations seem to target biased outcomes as an important heuristic for biased decisionmaking. We appear to expect AI regulations to protect against disparate outcomes, regardless of whether they result from a specific intent to discriminate. Conversely, input-based regulations that functionally codify an anticlassification approach are relatively uncommon.

4. A Path Forward.—If AI regulations reflect what we instinctually want antidiscrimination law to accomplish, then acquiescing to a dampened disparate impact regime while carving out an “exception” for AI is a move in the wrong direction. Instead of asking whether AI regulations can address harms unique to AI, the question should be whether the law can address harms that are the same regardless of whether they are created by AI or by human decisions. While AI may operate at a larger scale than humans, the core issue that output-based regulations target is the same as that which disparate impact regimes have long sought to address: namely that, absent intervention, facially neutral tools may perpetuate and entrench systemic biases.150 Given this, there are two ways to read Ricci narrowly to allow not only the continued operation of output-based AI regulations, but also their human analogues in disparate impact regimes.

One way to limit Ricci is in the timing of intervention. In Ricci, the Court expressly stated that an employer may, “before administering a test or practice,” consider “how to design that test or practice...to provide a fair opportunity for all individuals, regardless of their race.”151 Several scholars have thus read Ricci to permit race-aware practices in the design and testing phases,152 which would render output-based regulations with predeployment testing requirements safe from scrutiny. Another option is to read Ricci not as barring all postdeployment interventions, but instead as prohibiting only the “disrupt[ion of] legitimate, settled expectations” of actual applicants.153 After all, the City in Ricci did not alter its exam protocols prospectively, but instead threw out the results of candidates who had already taken the exam.154 Under this view, the timing of intervention may matter only as a proxy for reliance, as there are greater reliance interests after a system has been used on actual applicants. Even postdeployment auditing regulations would be within constitutional bounds under this approach, so long as only prospective alterations are made to AI systems.155

Extending these principles to human decisionmaking is revealing. Consider, for example, affirmative action in university admissions. In SFFA, the Supreme Court held that race may not be used as a “plus” (or a minus) in admissions decisions.156 Viewed through the lenses of intervention timing and reliance, SFFA can be understood as barring interventions that adjust for racial bias in real time by admitting or rejecting specific candidates who applied with the “legitimate expectation not to be judged on the basis of race.”157 An open question after SFFA, however, is whether the decision bars other admissions policies like “Top Ten Percent” plans that are designed with the intent of promoting student body diversity.158 Under the values reflected in output-based AI regulations and a narrow reading of Ricci, the answer should be a resounding “no,” so long as system alterations are purely prospective and do not disrupt the reliance of actual applicants. In other words, just as Meta may alter or discard an ad algorithm known to be biased against its users, so too should an admissions team be permitted to reject practices—such as emphasis on test scores above other criteria159—known to have disproportionate impacts on certain classes of applicants, provided that they do not affect prior applicants’ legitimate expectations.160

Conclusion

The rapid rise of AI has been met with a raft of legislative proposals to target AI-enabled bias, many of which impose output-based requirements. The prevalence of testing and auditing mandates may be attributable in part to challenges unique to AI’s mechanics. But given the chance to draft a new antidiscrimination regime on the blank slate of AI, legislators chose systemic remedies, targeted systemic harms, and rejected the notion that the entrenchment of bias in employment, healthcare, and housing161 only matters when it results from intentional acts of discrimination. This Chapter explores whether the legal response to this new technology might also force a reflection about antidiscrimination law itself, rather than simply being a new technological “horse” onto which old regimes can be grafted.162

Refracting the AI legislative impulse onto legal regimes governing human decisionmakers is an opportunity to take stock of the antidiscrimination regimes we have inherited and consider what we want to carry forward. The prevalence of output-based regulations suggests a need to shore up the disparate impact doctrines that have fallen into legal uncertainty under the current Supreme Court. As both the Meta settlement and these regulations show, it is possible to provide robust, effects-based protections against AI-enabled discrimination without carving out an AI “exception.” A narrow reading of Ricci and SFFA, one that is sensitive to individual discrimination but does not turn a blind eye to systemic bias, provides such a path forward. The nationwide response to AI points to a specific expectation of our institutions: a fair shake and the eradication of bias, whether intentional or not.
