TLDR: This research paper argues that current algorithmic fairness approaches, which primarily focus on sensitive attributes like race or gender, overlook the crucial role of “social determinants” – environmental and contextual factors like neighborhood wealth or school quality. Using college admissions as a case study, the authors demonstrate how ignoring social determinants can lead to new forms of unfairness or disproportionately affect certain groups. They propose a framework to incorporate these factors for a more comprehensive and transparent understanding of algorithmic fairness, urging policymakers to consider collecting data on social determinants.
In the evolving landscape of artificial intelligence and its increasing role in decision-making, ensuring fairness is paramount. Much of the discussion around algorithmic fairness has traditionally centered on “sensitive attributes” – characteristics like race, gender, or age that are legally protected from discrimination. However, a recent research paper sheds light on a crucial, often overlooked aspect: “social determinants.”
Social determinants are not about an individual’s inherent traits but rather about the broader context and environment they live in. Think of factors like the quality of local schools, access to healthcare, economic conditions of a neighborhood, or even environmental pollution. These external conditions can profoundly influence an individual’s opportunities and outcomes, yet they are frequently left out of algorithmic fairness models.
The paper, titled *Algorithmic Fairness amid Social Determinants: Reflection, Characterization, and Approach*, argues that focusing solely on sensitive attributes can lead to an incomplete understanding of fairness and may even inadvertently perpetuate or introduce new forms of structural injustice. Structural injustice refers to systemic issues where social practices and environments reinforce historical disadvantages.
Understanding the Core Concepts
The researchers clearly define the distinction: Sensitive attributes are intrinsic to an individual and are recognized by law or ethics as needing protection from bias. Examples include race, sex, or disability status. Social determinants, on the other hand, are external variables that describe the context or environment an individual operates within, such as the economic profile of a geographic area or the availability of educational resources in a community. While sensitive attributes are stable identifiers, social determinants are dynamic and can change if an individual moves or their environment changes.
Insights from fields like political philosophy, economics, sociology, and healthcare have long recognized the impact of social determinants. For instance, economists and sociologists use indices like the Social Vulnerability Index (SVI) or the Area Deprivation Index (ADI) to quantify the influence of contextual environments. Healthcare research, too, extensively studies the social determinants of health (SDoH), recognizing that factors beyond individual biology significantly affect health outcomes.
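To make this concrete, many area-level indices work by percentile-ranking contextual indicators across regions and combining the ranks into one score. The sketch below is purely illustrative, not the official SVI or ADI methodology; the indicator names, weights, and toy numbers are invented for this example:

```python
# Illustrative sketch of a composite area-deprivation score.
# NOT the official SVI/ADI methodology; indicators and values are invented.

def percentile_ranks(values):
    """For each value, the fraction of the other values it exceeds (0 to 1)."""
    n = len(values)
    return [sum(v > other for other in values) / (n - 1) for v in values]

def deprivation_index(regions):
    """regions: name -> dict of indicators where HIGHER means more deprived.
    Returns a 0-1 score per region (average of its percentile ranks)."""
    names = list(regions)
    indicators = list(next(iter(regions.values())))
    # Percentile-rank each indicator across regions, then average the ranks.
    ranks = {ind: percentile_ranks([regions[r][ind] for r in names])
             for ind in indicators}
    return {r: sum(ranks[ind][i] for ind in indicators) / len(indicators)
            for i, r in enumerate(names)}

# Toy data: three hypothetical regions, three contextual indicators.
regions = {
    "A": {"poverty_rate": 0.30, "no_hs_diploma": 0.25, "unemployment": 0.12},
    "B": {"poverty_rate": 0.10, "no_hs_diploma": 0.08, "unemployment": 0.04},
    "C": {"poverty_rate": 0.18, "no_hs_diploma": 0.15, "unemployment": 0.07},
}
scores = deprivation_index(regions)
```

The point of such an index is that it summarizes a region's context, not any individual's traits, which is exactly the distinction the paper draws between social determinants and sensitive attributes.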
College Admissions: A Practical Example
To demonstrate their framework, the authors examine the concrete setting of college admissions, using geographic region as a proxy for social determinants. They analyze three common admission procedures:
- Quota-Based Admissions: This method sets specific numbers of admissions for different demographic groups. The paper shows that while intended to rectify historical injustice, it can unintentionally disadvantage applicants who are not underrepresented minorities (non-URM) and who come from less affluent regions, by imposing more competitive requirements on them.
- Holistic Review with Plus Factors: This approach considers multiple factors, including sensitive attributes, to promote diversity. The research suggests that this flexible method might nevertheless raise admission probability more for URM applicants from wealthier regions than for those from less affluent areas.
- Top-Percentage Plans: These policies guarantee admission to students in a certain top percentage of their high school class, often aiming to be race-neutral. The paper reveals that these plans effectively reallocate admission opportunities from richer regions to poorer regions, regardless of the applicant’s demographic group.
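The three procedures above can be sketched as simple decision rules. This is a hypothetical illustration, not the paper's formal model; the group labels, thresholds, scores, and capacities are invented:

```python
# Hypothetical sketch of the three admission procedures discussed above.
# All data, thresholds, and group labels are invented for illustration.

def quota_admit(applicants, quotas):
    """Quota-based: admit the top scorers within each group, up to its quota."""
    admitted = []
    for group, k in quotas.items():
        pool = sorted((a for a in applicants if a["group"] == group),
                      key=lambda a: a["score"], reverse=True)
        admitted += pool[:k]
    return admitted

def plus_factor_admit(applicants, bonus, capacity):
    """Holistic review: add a 'plus factor' bonus to URM scores, then rank."""
    ranked = sorted(applicants,
                    key=lambda a: a["score"] + (bonus if a["group"] == "URM" else 0),
                    reverse=True)
    return ranked[:capacity]

def top_percent_admit(applicants, pct):
    """Top-percentage plan: admit anyone in the top pct of their own school."""
    return [a for a in applicants if a["class_rank_pct"] <= pct]

# Toy applicant pool (class_rank_pct: 0.05 = top 5% of own high school).
applicants = [
    {"id": 1, "group": "URM",     "score": 85, "class_rank_pct": 0.05},
    {"id": 2, "group": "non-URM", "score": 90, "class_rank_pct": 0.20},
    {"id": 3, "group": "URM",     "score": 70, "class_rank_pct": 0.03},
    {"id": 4, "group": "non-URM", "score": 88, "class_rank_pct": 0.08},
]
by_quota = quota_admit(applicants, {"URM": 1, "non-URM": 1})
by_plus = plus_factor_admit(applicants, bonus=10, capacity=2)
by_top_pct = top_percent_admit(applicants, pct=0.10)
```

Notice that the top-percentage rule never looks at `group` at all, yet who it admits depends entirely on which high school (and thus which region) an applicant comes from; this is the sense in which a nominally race-neutral rule reallocates opportunity along lines drawn by social determinants.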
These analyses highlight that even when race is not explicitly used in decision-making, social determinants embedded in geographic regions can still lead to disparities and influence who benefits or is burdened by different admission strategies.
Empirical Evidence and Future Directions
The researchers applied their analytical framework to publicly available summary statistics for freshmen admissions to the University of California system. Despite the aggregated nature of the data, their model was able to estimate region-specific academic preparedness and demographic compositions. Their findings corroborated the correlation between a region's ethnic composition, the state of its social determinants (such as educational resources), and admission outcomes.
The paper concludes by emphasizing that algorithmic fairness research must expand its focus beyond sensitive attributes to explicitly incorporate social determinants. Doing so provides a more comprehensive understanding of the underlying data-generating processes and allows for the design of more effective, principled, and transparent fairness characterization and mitigation strategies. Policymakers are encouraged to mandate the collection and analysis of social determinants alongside sensitive attributes when evaluating algorithmic systems for fairness compliance. This framework is not limited to college admissions and can be applied to other critical areas like healthcare resource allocation, lending decisions, and hiring processes.