Past Work


"Implicit Bias, Moods, and Moral Responsibility" (2017), Pacific Philosophical Quarterly, DOI: 10.1111/papq.12212.
Are individuals morally responsible for their implicit biases?  One reason to think not is that implicit biases are often advertised as unconscious, “introspectively inaccessible” attitudes.  However, recent empirical evidence consistently suggests that individuals are aware of their implicit biases, although often in partial and inarticulate ways.  Here I explore the implications of this evidence of partial awareness for individuals’ moral responsibility.  First, I argue that a graded notion of responsibility (i.e., that responsibility comes in degrees) is independently plausible.  Second, I argue that individuals’ partial awareness of their implicit biases suffices to make them (partially) morally responsible for them.  I make an argument by analogy to a close relative of implicit bias: moods.  The degree of awareness that individuals have of their moods makes them responsible for mood-influenced action, and the awareness individuals have of their implicitly biased behavior is importantly similar. (Abbreviated Version for a Talk; Handout)

"Black Lives Matter and the Call for Death Penalty Abolition," (Forthcoming), with Michael Cholbi, Ethics.
The abolition of capital punishment is among the reforms the Black Lives Matter movement has called for in response to what it calls "the war against Black people" and "Black communities" in the United States. Drawing on the large body of studies indicating discrimination against Blacks both as capital defendants and as murder victims, the movement asserts that the death penalty in the U.S. is a "racist practice" that "devalues Black lives." This article defends the two central contentions embedded in the movement's abolitionist stance: first, that U.S. capital punishment practices represent a wrong to Black communities rather than simply a wrong to particular Black capital defendants or particular Black victims of murder, and second, that the most defensible remedy for this wrong is the abolition of the death penalty. We argue that while Black Americans suffer retributive injustices in the U.S. capital punishment regime, Black Americans as a class also suffer a distributive injustice under that regime inasmuch as Black Americans do not receive either the equal protection of, or equal status under, the law. Moreover, these patterns of discrimination cannot be explained without reference to implicit racial biases likely to influence capital punishment decisions reached by prosecutors, judges, and juries. The failure to remedy such discrimination thus represents a form of institutional recklessness with respect to Black lives and legal status. Among plausible remedies, only abolition, either de facto or de jure, succeeds in both eliminating the discriminatory effects of this bias-based recklessness and in not being itself unjust.

"Stereotypes, Conceptual Centrality, and Gender Bias: An Empirical Investigation" (2017), with Guillermo Del Pinal and Kevin Reuter, Ratio, DOI: 10.1111/rati.12170.
Most accounts of implicit bias focus on “mere associations” between concepts of groups and traits (e.g., woman and nurturing).  Some have argued that implicit biases must have a richer conceptual structure, although, to this point, they have said little about what this richer structure might be.  To address this lacuna, we build on research in philosophy and cognitive science demonstrating that concepts encode dependency relations between features.  Dependency relations determine the centrality of a feature f, relative to a concept C: i.e., the degree to which other features of C depend on f.  Crucially, centrality and associative strength come apart.  For example, the feature having bones is more central to the concept lion than is having a mane (lions who are female, young, or shaved have bones but not manes); however, while much of our thinking about lions depends on their having bones, this central feature might not show up in measures of associative strength (participants might be faster to associate “lion” with “mane” than with “bones”).  We argue that some implicit gender biases reflect differences in the central features encoded in gender concepts.  We defend this claim in a series of experiments exploring biases regarding gender, intelligence, and effort.  Roughly, we find that participants are equally likely to associate women and men professors with intelligence, but that participants think the intelligence of women professors centrally depends on their being hard-working, whereas men’s intelligence does not so depend.  We conclude by considering the social and political implications of this conceptually complex gender bias and point toward future research to uncover the distinctive features taken to be central to specific social groups defined by gender, race, disability, and so on.

"Biased against Debiasing: On the Role of (Institutionally Sponsored) Self-Transformation in the Struggle against Prejudice" (2017), Ergo. (Longer PPT; shorter PPT for talk given to Psychology & Sociology Department at Cal Poly Pomona on 12/3/15)
Research suggests that interventions involving extensive training or counterconditioning can reduce implicit prejudice and stereotyping, and even susceptibility to stereotype threat.  This research is widely cited as providing an “existence proof” that certain entrenched social attitudes are capable of change, but is summarily dismissed—by philosophers, psychologists, and activists alike—as lacking direct, practical import for the broader struggle against prejudice, discrimination, and inequality.  Criticisms of these “debiasing” procedures fall into three categories: concerns about empirical efficacy, about practical feasibility, and about the failure to appreciate the underlying structural-institutional nature of discrimination.  I reply to these criticisms of debiasing, and argue that a comprehensive strategy for combating prejudice and discrimination should include a central role for training our biases away. 

"A Plea for Anti-Anti-Individualism: How Oversimple Psychology Misleads Social Policy" (2016), Ergo.
This essay, which is a companion paper to "Biased against Debiasing," responds in greater depth to the criticism that contemporary efforts to redress discrimination and inequality are overly individualistic.  Critics of individualism emphasize that these systemic social ills stem not from the prejudice, irrationality, or selfishness of individuals, but from underlying structural-institutional forces.  They are skeptical, therefore, of attempts to change individuals’ attitudes while leaving structural problems intact.  I argue that the insistence on prioritizing structural over individual change is problematic and misleading.  My view is not that we should instead prioritize individual change, but that individual changes are integral to the success of structural changes.  These theorists urge a redirection of attention, claiming that we should think less about the individual and more about the social.  What they should urge instead is that we think differently about the individual, and thereby think differently about the social.

"Stereotypes, Prejudice, and the Taxonomy of the Implicit Social Mind" (2016), with Michael Brownstein, Noûs, DOI: 10.1111/nous.12182.
How do cognition and affect interact to produce action?  Research in intergroup psychology illuminates this question by investigating the relationship between stereotypes and prejudices about social groups.  Yet it is now clear that many social attitudes are implicit (roughly, nonconscious or involuntary).  This raises the question: how does the distinction between cognition and affect apply to implicit mental states?  An influential view—roughly analogous to a Humean theory of action—is that “implicit stereotypes” and “implicit prejudices” constitute two separate constructs, reflecting different mental processes and neural systems.  On this basis, some have also argued that interventions to reduce discrimination should combat implicit stereotypes and prejudices separately.  We propose an alternative (anti-Humean) framework.  We argue that all putative implicit stereotypes are affect-laden and all putative implicit prejudices are “semantic,” that is, they stand in co-activating associations with concepts and beliefs.  Implicit biases, therefore, consist in “clusters” of semantic-affective associations, which differ in degree, rather than kind.  This framework captures the psychological structure of implicit bias, promises to improve the power of indirect measures to predict behavior, and points toward the design of more effective interventions to combat discrimination.

"Implicit Bias and Latina/os in Philosophy" (Fall 2016), APA Newsletter on Hispanic/Latino Issues in Philosophy, 16 (1): 8-15.
Can research on implicit bias shed light on issues related to teaching Latina/os in philosophy? Yes, with caveats. In particular, no one will be surprised to learn that implicit bias against (and among) Latina/os and Latin Americans is severely understudied. While Latina/os make up the largest minority group in the United States, recent estimates suggest that there is more than six times as much research on stereotyping and prejudice against African-Americans as there is against Latina/os. I speculate about some causes and remedies for this disparity, but my primary aims in this essay are different. First, I attempt to stitch together the general literature regarding anti-Latina/o bias with the general literature regarding bias in education in order to convey some of the basic challenges that bias likely poses to Latina/o students. Second, I consider whether Latin American philosophy might itself serve a bias-reducing function. Specifically, I sketch—in tentative and promissory terms—how the traditional “problem” of group identity explored in Latina/o and Latin American thought might function as part of the “solution” to the stereotypes and prejudices that have helped to sustain an exclusionary atmosphere in Anglo-American philosophy. Given the dearth of literature on the situation of Latina/os in philosophy, my claims here build on findings about the situations of minorities in education more broadly.

"Why Implicit Attitudes Are (Probably) not Beliefs" (2016), Synthese, 193 (8): 2659-2684.
Should we understand implicit attitudes on the model of belief?  I argue that implicit attitudes are (probably) members of a different psychological kind altogether, because they seem to be insensitive to the logical form of an agent’s thoughts and perceptions.  A state is sensitive to logical form only if it is sensitive to the logical constituents of the content of other states (e.g., operators like negation and conditional).  I explain sensitivity to logical form and argue that it is a necessary condition for belief.  I appeal to two areas of research that seem to show that implicit attitudes fail spectacularly to satisfy this condition—although persistent gaps in the empirical literature leave matters inconclusive.  I sketch an alternative account, according to which implicit attitudes are sensitive merely to spatiotemporal relations in thought and perception, i.e., the spatial and temporal orders in which people think, see, or hear things. (PPT)

"Virtue, Social Knowledge, and Implicit Bias" (2016), for Implicit Bias & Philosophy: Volume 1: Metaphysics and Epistemology, eds. Jennifer Saul and Michael Brownstein, Oxford University Press.
This paper is centered around an apparent tension that research on implicit bias raises between virtue and social knowledge.  Research suggests that simply knowing what the prevalent stereotypes are leads individuals to act in prejudiced ways—biasing decisions about whom to trust and whom to ignore, whom to promote and whom to imprison—even if they reflectively reject those stereotypes.  Because efforts to combat discrimination obviously depend on knowledge of stereotypes, a question arises about what to do next.  I argue that the obstacle to virtue is not knowledge of stereotypes as such, but the “accessibility” of such knowledge to the agent who has it.  “Accessibility” refers to how easily knowledge comes to mind.  Social agents can acquire the requisite knowledge of stereotypes while resisting their pernicious influence, so long as that knowledge remains, in relevant contexts, relatively inaccessible.
(I elaborate on these issues in a post on the Brains Blog.)

"The Normativity of Automaticity" (September 2012), co-authored with Michael Brownstein (Assistant Professor, New Jersey Institute of Technology), Mind and Language, 27 (4): 410-434.
While the causal contributions of so-called ‘automatic’ processes to behavior are now widely acknowledged, less attention has been given to their normative role in the guidance of action. We develop an account of the normativity of automaticity that responds to and builds upon Tamar Szabó Gendler's account of ‘alief’, an associative and arational mental state more primitive than belief. Alief represents a promising tool for integrating psychological research on automaticity with philosophical work on mind and action, but Gendler errs in overstating the degree to which aliefs are norm-insensitive.

"Ethical Automaticity" (March 2012), with Michael Brownstein, Philosophy of the Social Sciences, 42 (1): 67-97.
Social psychologists tell us that much of human behavior is automatic. It is natural to think that automatic behavioral dispositions are ethically desirable if and only if they are suitably governed by an agent’s reflective judgments. However, we identify a class of automatic dispositions that make normatively self-standing contributions to praiseworthy action and a well-lived life, independently of, or even in spite of, an agent’s reflective judgments about what to do. We argue that the fundamental questions for the “ethics of automaticity” are what automatic dispositions are (and are not) good for and when they can (and cannot) be trusted.

Dissertation Summary

The Hidden Mechanisms of Prejudice: Implicit Bias and Interpersonal Fluency

Committee: Christia Mercer (adviser), Patricia Kitcher, Taylor Carman, Tamar Szabó Gendler, Virginia Valian

My dissertation is about prejudice.  It examines the theoretical and ethical questions raised by research on implicit social biases.  My approach is grounded in a comprehensive examination of empirical research and, as such, is a contribution to both philosophy and social psychology.  Social biases are termed “implicit” when they go unreported, though they lie just beneath the surface of consciousness.  Such biases are easy to adopt but very difficult to introspect and control.  Despite this difficulty, I argue that we are obligated to overcome, even remove, our biases when they can bring harm to ourselves or to others.  Understanding the particular character of implicit biases is vital to determining how to replace them with preferable habits of mind.  I argue for a model of interpersonal fluency, a kind of ethical expertise that requires transforming our underlying dispositions of thought, feeling, and action.

Chapters 1 and 2 address the underlying nature of implicit biases.  Researchers in philosophy and psychology agree that implicit attitudes involve a psychological connection between social categories (e.g., age, race, or gender) and traits (e.g., being forgetful, athletic, or nurturing).  Researchers disagree about how best to model the structure of this connection.  Some argue that, due to their influence on judgment and action, implicit biases should be understood as unconscious beliefs.  I argue, however, that implicit biases fail to meet the minimal necessary conditions for belief.  In some respects, implicit biases resemble the primitive associations we make, say, between “salt” and “pepper” or “apples” and “oranges,” when our minds move from one state to the next simply out of habit.  For example, hearing the phrase, “Old people are not bad drivers,” can enhance rather than reduce bias against the elderly, simply by strengthening a mental association between “old people” and “bad drivers.”  I conclude that implicit biases are not reducible to more familiar psychological states such as belief or desire.  Rather, they constitute a class of their own.

Chapter 3 turns to the practical implications of discovering a psychological state that drives so much behavior, but differs so much from belief and desire.  Given the ease of adopting and the difficulty of controlling implicit biases, are we as individuals morally responsible for them?  I argue that we are.  Although implicit biases can easily escape conscious attention, individuals are sufficiently aware of them to be responsible for their harmful expression and therefore obliged to change them.  Appreciating how each of us contributes to the harms of implicit bias will be vital for bringing about meaningful change.

The fundamental concern surrounding implicit bias is not, I take it, the backward-looking question of how we got into this mess, but the forward-looking question of how to get out of it.  Chapters 4 and 5 address the question of how to move away from social biases and toward preferable habits of mind.  The prevailing approach among social psychologists and activists has been to identify strategies for suppressing the overt expression of biases and impulses whose content we reflectively disavow.  I argue, by contrast, that the ethical imperative is for us to revise our standing social dispositions, not simply to control or circumvent them.  Treating others and ourselves fairly requires that we transform our underlying attitudes by reconfiguring our automatic dispositions of thought, feeling, and action.  We must take the basic first steps toward cultivating interpersonal fluency, the virtuous opposite of implicit bias, an ideal state of social know-how in which an individual’s dispositions are automatically egalitarian and unprejudiced.