TLDR: MIT Sloan has introduced ‘An Executive Guide to Secure-by-Design AI,’ a framework developed by Keri Pearlson and Nelson Novaes Neto. The guide distills the problem into 10 strategic questions that help organizations build security into AI systems from the earliest stages of development, mitigating the risks posed by AI’s unique vulnerabilities rather than treating security as an afterthought.
Cambridge, MA – In a significant move to bolster the security of artificial intelligence deployments, MIT Sloan has unveiled a new framework designed to help companies build secure AI systems from the ground up. The initiative, spearheaded by Keri Pearlson, a senior lecturer and principal research scientist at MIT Sloan, and Nelson Novaes Neto, an MIT Sloan research affiliate and CTO of Brazil-based C6 Bank, addresses a critical gap in current AI development practices.
The framework, detailed in their report titled ‘An Executive Guide to Secure-by-Design AI,’ condenses hundreds of technical considerations into 10 strategic questions. These questions are specifically crafted to assist technical executives and their teams in identifying potential security risks early in the AI system design process, aligning AI initiatives with business priorities, ethical standards, and cybersecurity requirements.
Pearlson highlighted the urgency of this approach, stating, ‘People are trying to figure out how best to use AI, but few are thinking about the security risks that come with it from day one. That’s the big problem right now.’ The traditional approach of addressing security as an afterthought in software development is proving insufficient for AI systems, which possess defining traits such as data dependence, continuous learning, and probabilistic outputs. These characteristics expose AI to a new and evolving class of cyber threats.
The report identifies some of the most urgent AI threats, including evasion attacks, in which crafted inputs skew a model’s outputs at inference time, and poisoning attacks, which corrupt its training data. Another significant concern is model theft and inversion, in which attackers steal proprietary systems or reconstruct sensitive information from them. The framework aims to counteract these vulnerabilities by promoting a ‘secure-by-design’ philosophy.
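To make the evasion threat concrete, here is a minimal sketch, not taken from the MIT Sloan guide: a toy PyTorch classifier attacked with the well-known fast gradient sign method (FGSM). The model, input values, and perturbation budget are all illustrative assumptions; the point is only that a small, deliberate nudge to an input can flip a model’s prediction.

```python
# Minimal FGSM evasion sketch (illustrative only; not from the MIT Sloan report).
import torch
import torch.nn as nn

# Hypothetical stand-in classifier: identity weights make the effect easy to trace.
model = nn.Linear(2, 2)
with torch.no_grad():
    model.weight.copy_(torch.eye(2))
    model.bias.zero_()

loss_fn = nn.CrossEntropyLoss()

x = torch.tensor([[0.6, 0.4]], requires_grad=True)  # benign input, predicted class 0
y = torch.tensor([0])                               # its true label

# FGSM: perturb the input along the sign of the loss gradient,
# bounded by a small perturbation budget epsilon.
loss_fn(model(x), y).backward()
epsilon = 0.25
x_adv = (x + epsilon * x.grad.sign()).detach()

print("clean prediction:", model(x).argmax(1).item())            # 0
print("adversarial prediction:", model(x_adv).argmax(1).item())  # flips to 1
```

The same budget-bounded logic underlies more realistic attacks on image and text models, which is one reason the report treats such security questions as design-time concerns rather than post-deployment patches.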
According to MIT Sloan, the business benefits of AI, such as enhanced customer experience, greater efficiency, and improved risk management, are increasingly integral to digital strategies. However, the rapid adoption of AI has often outpaced the development of robust security measures. The new guidance seeks to bridge this gap by providing a practical foundation that prompts better questions, clearer decisions, and more resilient designs from the top down.
‘The idea was to give technical executives a structured way to ask important questions early in the AI systems design process to head off problems later,’ Pearlson explained. While the framework does not eliminate all AI risk, it provides a crucial starting point for organizations to embed security from the outset, rather than attempting to patch vulnerabilities later. Pearlson expressed her hope that ‘this helps other organizations ask smarter questions earlier so they can avoid the mistakes that happen when security is an afterthought.’