TLDR: The Monetary Authority of Singapore (MAS) has released a consultation paper proposing comprehensive guidelines for Artificial Intelligence (AI) risk management across all financial institutions (FIs). These guidelines aim to ensure the safe and responsible adoption of AI, including generative AI and AI agents, by setting supervisory expectations for governance, risk management systems, and lifecycle controls, all applied proportionately to an FI’s AI usage and risk profile. Public feedback is invited until January 31, 2026.
Singapore, November 13, 2025 – The Monetary Authority of Singapore (MAS) today announced the release of a consultation paper detailing proposed Guidelines on AI Risk Management for financial institutions (FIs). This initiative underscores MAS’ commitment to fostering responsible AI innovation within the financial sector, providing a clear framework for FIs to manage the inherent risks associated with advanced AI technologies.
The proposed guidelines are designed to be broadly applicable to all FIs, encompassing a wide array of AI applications and technologies, including the rapidly evolving generative AI and emerging AI agents. MAS acknowledges that the risks associated with AI usage can vary significantly based on the scale, scope, and business models of individual FIs. Consequently, the guidelines emphasize a proportionate, risk-based approach, allowing FIs to tailor their implementation based on their specific activities, AI adoption, and risk profiles.
Key areas of supervisory expectations outlined in the consultation paper include:
Oversight of AI Risk Management: Boards and senior management are expected to establish robust governance frameworks, structures, policies, and processes. A critical component is fostering an appropriate risk culture that supports the responsible use of AI throughout the organization.
AI Risk Management Systems, Policies, and Procedures: FIs must implement clear processes for identifying all AI applications across their operations. This includes maintaining accurate and up-to-date AI inventories and conducting thorough risk materiality assessments. These assessments should consider the potential impact, complexity, and reliance on AI systems.
AI Life Cycle Controls, Capabilities, and Capacities: To manage AI risks effectively across the entire AI lifecycle, FIs are expected to implement stringent controls. These controls cover crucial areas such as data management, ensuring fairness and mitigating bias, promoting transparency and explainability of AI decisions, establishing human oversight mechanisms, managing third-party risks associated with AI vendors, and implementing robust evaluation, testing, monitoring, and change management protocols. These controls should be proportionate to the assessed materiality of AI-related risks. Furthermore, FIs must ensure they possess adequate capabilities and resources to manage their AI deployments effectively.
These guidelines build upon MAS’ thematic review of major banks’ AI usage conducted in 2024, as well as ongoing discussions with financial institutions.
Ms. Ho Hern Shin, Deputy Managing Director of MAS, commented on the initiative, stating, “The proposed Guidelines on AI Risk Management provide financial institutions with clear supervisory expectations to support them in leveraging AI in their operations. These proportionate, risk-based guidelines enable responsible innovation by financial institutions that implement the relevant safeguards to address key AI-related risks.”
Interested parties are invited to submit their comments on the proposed guidelines to MAS by January 31, 2026.