TLDR: The Organisation for Economic Co-operation and Development (OECD) has released a new report, ‘Governing with Artificial Intelligence: The State of Play and Way Forward in Core Government Functions,’ urging governments to adopt a ‘high-benefit, low-risk’ approach to AI. The report, approved on September 5, 2025, analyzes 200 AI use cases across 11 government functions, highlighting both the transformative potential and significant risks of AI in public administration. It emphasizes the need for robust governance, data management, and skills development to ensure trustworthy and effective AI deployment.
The Organisation for Economic Co-operation and Development (OECD) has issued a comprehensive new report, ‘Governing with Artificial Intelligence: The State of Play and Way Forward in Core Government Functions,’ which calls on governments worldwide to strategically implement AI with a focus on maximizing benefits while minimizing risks. Approved and declassified by the Public Governance Committee on September 5, 2025, the study provides an in-depth analysis of 200 AI use cases across 11 core government functions, offering insights into the current state of AI adoption and a roadmap for its responsible integration into public services.
OECD Secretary-General Mathias Cormann underscored the importance of this initiative, stating, ‘With more and more governments incorporating AI into service delivery, the right policy frameworks and safeguards are essential to encourage innovation and ensure the trustworthy use of AI.’ He added that the OECD’s Framework for Trustworthy AI in Government offers crucial guidance for public sector entities in the responsible development, deployment, and use of AI.
The report identifies several key benefits of AI in government, including improved efficiency and responsiveness through the automation of repetitive processes and personalized service delivery. AI can also enhance decision-making and forecasting capabilities, leading to more effective resource allocation and proactive responses to emerging issues. It likewise supports accountability and anomaly detection, which are crucial for fraud prevention and regulatory oversight. Finally, AI creates opportunities for citizens and businesses through access to government-developed systems.
Despite these advantages, the OECD warns of significant risks. These include the potential for biased algorithms, insufficient transparency, and over-reliance on AI, which could undermine citizens’ rights, erode trust, or perpetuate systemic errors. The displacement of public service workers is another concern, particularly if AI is used to replace rather than augment human capabilities. Conversely, the report also highlights the risks of *not* adopting AI, such as missed opportunities for efficiency gains and a widening gap between public and private sector technological capacities.
Governments face numerous implementation challenges, which often keep AI initiatives in the pilot phase. These barriers include skills shortages, difficulties in accessing and sharing quality data, financial constraints, outdated legacy IT systems, and a lack of concrete guidance despite the proliferation of national AI strategies. The absence of robust monitoring and evaluation mechanisms further limits the ability to measure outcomes and detect risks effectively.
To navigate these complexities, the OECD proposes a three-pillar framework:

1. Enablers: strong governance structures, robust digital infrastructure, effective data management, adequate funding, and skilled workforces to support AI adoption.
2. Guardrails: clear rules, accountability measures, transparency requirements, and independent oversight bodies to ensure responsible AI use. The report stresses that guardrails should be proportionate and risk-based, avoiding excessive caution that could stifle innovation.
3. Engagement Mechanisms: meaningful engagement with citizens, civil society, and businesses, which is crucial for designing user-centered and responsive AI systems.
The report encourages governments to prioritize AI applications that offer ‘high benefits with manageable risks’ and to invest in measuring both efficiency gains and potential harms. It also provides detailed sectoral analyses, offering practical examples and lessons learned for areas such as tax administration, public financial management, law enforcement, and civic participation. By positioning governments as both regulators and users/developers of AI, the OECD underscores the necessity of building internal AI governance capacities, cautioning that delayed adoption could lead to dependence on external actors and a diminished ability to shape AI’s public sector trajectory.