TLDR: This paper proposes a framework to integrate discussions on the evolving AI policy landscape into computer science courses. It synthesizes key AI regulatory developments in the United States and the European Union, highlighting the rapid changes, the challenge of translating abstract policies into technical specifications, and the global impact of regulations. The framework uses “socio-political,” “translational,” and “technical” perspectives, along with “scope,” “power,” and “agency” dimensions, to help students understand and adapt to the complex regulatory environment.
The rapid expansion of artificial intelligence (AI) technologies across all sectors of society has pushed AI governance and responsible use to the forefront. Governments and major corporations are increasingly defining and enforcing their preferences through AI policy. However, the current landscape is complex, marked by diverse ethical principles and a notable absence of AI policy discussions in computer science (CS) curricula. This paper introduces a framework for integrating these discussions into CS courses, preparing future AI developers to navigate an ever-changing regulatory environment.
The authors, James Weichert and Hoda Eldardiry, both of Virginia Tech, highlight the necessity for AI developers to adapt to evolving regulations. Their work synthesizes recent AI policy efforts in the United States and the European Union, proposing guiding questions to facilitate classroom discussions in both technical and ethics-focused CS courses. The paper emphasizes the direct link between policy demands and the technical challenges involved in their implementation and enforcement.
The Evolving AI Policy Landscape
The paper focuses on the US and EU due to their significant influence on AI development globally. While both regions are actively shaping AI policy, their approaches differ considerably.
In the United States, AI policy efforts have been characterized by heterogeneity and a lack of comprehensive federal legislation. Instead, the US has relied on non-binding policy statements like the NIST AI Risk Management Framework and the Blueprint for an AI Bill of Rights. The political landscape further complicates matters, as evidenced by the recent shift in executive orders. President Biden’s Executive Order 14110 focused on “Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence,” covering areas like safety, consumer protection, and privacy. However, this was rescinded by President Trump’s Executive Order 14179, which prioritized “Removing Barriers to American Leadership in Artificial Intelligence,” emphasizing economic competitiveness and national security. This ideological polarization suggests continued volatility in US AI policy, requiring AI companies and practitioners to be highly adaptable.
Conversely, the European Union has made steady progress toward comprehensive legislation with the EU AI Act, approved by the European Parliament in March 2024. The Act defines AI systems broadly and categorizes them by risk level. Practices deemed to pose an ‘unacceptable risk,’ such as social scoring and real-time biometric surveillance, are prohibited. High-risk AI systems, particularly those related to product safety, are subject to stringent risk management, transparency, and oversight requirements. Additionally, providers of ‘general-purpose’ AI models, including large generative models, face specific transparency obligations, among them compliance with EU copyright law.
Key Implications for AI Developers
The authors identify three critical implications of this developing AI policy landscape:
- The Rapidly Changing AI Landscape: The frequent shifts in policy, especially in the US, underscore the need for AI developers to be familiar with the policy environment and quickly adapt to new regulatory demands.
- Technical Compliance with Non-Technical Specifications: A significant challenge lies in translating abstract policy goals, such as defining and detecting “algorithmic discrimination” or ensuring generative AI models comply with copyright law, into concrete technical controls. This requires developers to combine technical skill with an understanding of AI policy to bridge the gap between vague policy demands and their technical implementation (a concrete sketch follows this list).
- Cross-Border Implications: The global nature of the technology economy means that regulations from one jurisdiction can have far-reaching effects. The EU’s General Data Protection Regulation (GDPR) serves as a prime example, influencing data privacy practices worldwide. The authors anticipate a similar “Brussels Effect” from the EU AI Act, potentially leading American AI companies to adopt its compliance standards even for the US market in the absence of a unified US regulatory consensus.
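To make the translational challenge concrete, here is a minimal sketch (not from the paper) of one common way to turn the abstract demand “detect algorithmic discrimination” into a measurable quantity: the demographic parity gap between two groups’ positive-prediction rates. The metric choice, the group encoding, and any acceptable threshold are all assumptions the developer must supply, because the policy text rarely does.

```python
import numpy as np

def demographic_parity_gap(y_pred: np.ndarray, group: np.ndarray) -> float:
    """Absolute difference in positive-prediction rates between two groups.

    One common (but by no means the only) operationalization of
    "algorithmic discrimination" as a single number.
    """
    rate_a = y_pred[group == 0].mean()  # positive rate for group 0
    rate_b = y_pred[group == 1].mean()  # positive rate for group 1
    return abs(rate_a - rate_b)

# Hypothetical audit: binary loan-approval predictions for two groups.
y_pred = np.array([1, 0, 1, 1, 0, 1, 0, 0, 1, 0])
group  = np.array([0, 0, 0, 0, 0, 1, 1, 1, 1, 1])

print(f"Demographic parity gap: {demographic_parity_gap(y_pred, group):.2f}")
# Prints 0.20 for this toy data. A regulation would still need to specify
# which fairness metric applies and what gap is acceptable -- choices the
# statute itself typically leaves open.
```

Even this tiny example surfaces the paper’s point: the code is trivial, but every design decision in it encodes a normative judgment that the policy leaves to practitioners.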
A Framework for Discussion
To address these challenges, the paper proposes a framework for integrating AI policy discussions into computing courses. This framework is built around a continuum of perspectives and three dimensions:
- Perspectives: These range from socio-political considerations (normative preferences, ethical principles) to technical ones (implementation of specifications). A crucial translational perspective bridges these two, focusing on how to move from abstract principles to practical applications, requiring collaboration between policymakers and developers.
- Dimensions: These act as lenses for analyzing policy implications: scope (what the policy requires/prohibits, and of whom), power (who creates and benefits from the policy, and whose perspectives are missing), and agency (the role and influence of AI practitioners in shaping responsible AI development).
The framework is embodied in a grid of guiding questions that encourage students to consider normative policy demands within the technical intricacies of AI systems. For instance, a policy requiring “user privacy” necessitates defining privacy technically and identifying algorithms to implement it, while also considering the power structures involved in its creation and the agency developers have in its implementation.
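As an illustration of that privacy example (again a sketch under stated assumptions, not the paper’s method), differential privacy is one widely used technical definition of “user privacy”: the standard Laplace mechanism below adds calibrated noise to a released statistic. The epsilon parameter, which sets the privacy/accuracy trade-off, is exactly the kind of normative choice the framework’s “agency” dimension asks students to examine.

```python
import numpy as np

def laplace_mechanism(true_value: float, sensitivity: float, epsilon: float) -> float:
    """Release a noisy statistic satisfying epsilon-differential privacy.

    Noise scale = sensitivity / epsilon (the classic Laplace mechanism).
    """
    scale = sensitivity / epsilon
    return true_value + np.random.laplace(loc=0.0, scale=scale)

# Hypothetical example: privately release a count of users in a dataset.
# A counting query has sensitivity 1 (one person changes the count by 1).
true_count = 1_042
noisy_count = laplace_mechanism(true_count, sensitivity=1.0, epsilon=0.5)
print(f"Noisy count: {noisy_count:.1f}")
# Smaller epsilon means stronger privacy but noisier answers. Deciding how
# much privacy loss is tolerable is a policy question, not a purely
# technical one -- which is the framework's point.
```

The sketch shows the translational perspective in miniature: a policymaker’s phrase (“protect user privacy”) becomes a formal definition, an algorithm, and a tunable parameter whose value someone with agency must choose.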
This framework offers a robust and flexible starting point for classroom discussions and assignments, preparing the next generation of AI engineers to interact with and adapt to societal policy preferences. For more details, see the full research paper.