TLDR: India is actively working towards establishing itself as an ethical and strategic leader in military Artificial Intelligence (AI), building upon frameworks like DRDO’s ETAI and the IndiaAI Mission. Globally, the focus has shifted to the ethics, rules, and trust surrounding military AI, with nations like the US and NATO members formalizing principles for responsible use. India faces the challenge of translating policy into enforceable practice and has opportunities to embed ethical principles into procurement, establish an inter-agency commission, and contribute to global standards. The article emphasizes that clear rules and transparency are crucial for strategic strength in this new era.
New Delhi, August 18, 2025 – As Artificial Intelligence (AI) rapidly transforms global defense landscapes, India is making significant strides to position itself as an ethical and strategic leader in military AI. Authored by Zain Pandit, Partner, and Aashna Nahar, Associate, JSA Advocates and Solicitors, a recent Hindustan Times article highlights India’s progress and the critical need for a comprehensive legal framework to govern AI in defense.
India’s journey towards responsible AI adoption in defense began with foundational initiatives such as the Defence Research and Development Organisation’s (DRDO) Ethical and Trustworthy AI (ETAI) Framework and the ambitious IndiaAI Mission. These frameworks are designed to ensure the responsible, transparent, and effective integration of AI into military operations. The Defence AI Council (DAIC) and the Defence AI Project Agency (DAIPA) are key institutions accelerating efforts to bring trustworthy AI into the services, with the ETAI Framework setting clear rules around reliability, safety, transparency, fairness, and privacy. These principles guide the entire lifecycle of AI systems, from development and testing to deployment, echoing the values of accountability and inclusivity articulated by NITI Aayog.
Globally, the discourse surrounding military AI has evolved beyond mere technological innovation to a profound focus on ethics, governance, and trust. International forums, including NATO, the G7, the Pentagon, and the United Nations, are grappling with fundamental questions: who bears responsibility, how will human oversight be maintained, and how can AI systems be made reliable, explainable, and accountable?
NATO member states, for instance, endorsed six guiding principles for responsible AI use in defense in 2021, which were revised in 2024. These principles comprise strict commitments to lawfulness, responsibility and accountability, explainability and traceability, reliability, governability, and bias mitigation. Similarly, the United States formalized its Ethical Principles for AI in 2020, incorporating values such as responsibility, equity, traceability, reliability, and governability into procurement and deployment processes. Traceability, in particular, emphasizes not just tracking how an AI reached a decision but enabling commanders to comprehend those decisions on the battlefield, while governability ensures human control in scenarios where AI might deviate from intended behavior. The European Union has also enacted landmark risk-based AI legislation, which is anticipated to set a global standard for AI governance, much as the GDPR did for data privacy.
India’s stance on autonomous weapons differs from calls for a total ban; instead, it advocates for judging technology based on its real-world use and impact, rather than solely on its autonomy. Indian representatives have also pointed out the lack of a universally accepted definition for lethal autonomous weapons, suggesting that a rigid treaty might be premature.
Despite India having robust institutions like MeitY, DRDO, and DAIC, the next crucial step is to transition from policy formulation to effective enforceability. The article identifies three key opportunities for India to lead in this domain:
1. Embed Ethical Principles: Integrate ethical guidelines directly into procurement and deployment protocols for all military AI systems.
2. Establish an Inter-Agency Commission: Create a permanent AI inter-agency commission with representation from all critical stakeholders, including the armed forces, DRDO, MeitY, and legal experts, to ensure coordinated ethical governance.
3. Shape Global Standards: Maintain deep involvement in shaping global AI standards, not only to protect national interests but also to contribute to setting international norms.
The authors emphasize that in a world where civilian and military AI capabilities are increasingly intertwined and rapidly evolving, reactive regulation is insufficient. India has a unique opportunity to lead by developing a robust, clear, and enforceable framework that strikes a balance between national security imperatives and ethical considerations. In this new era of military AI, trust, transparency, and clear rules are not constraints but rather fundamental sources of strategic strength.


