NEW DELHI ACCORD ON ARTIFICIAL INTELLIGENCE, EMERGING TECH LAW & GOVERNANCE, 2025

 

Adopted at the Global Summit on Artificial Intelligence, Emerging Tech Law & Governance (GSAIET 2025)

New Delhi, Republic of India – 24 July 2025

 

PREAMBLE

 

We, the multi-stakeholder participants of the digital and Artificial Intelligence ecosystem, convened in New Delhi for the Global Summit on Artificial Intelligence, Emerging Tech Law & Governance, 2025 (GSAIET 2025);

Guided by the belief that Artificial Intelligence (AI) and allied emerging technologies offer transformative potential to improve human well-being, enhance inclusive development, and strengthen global governance;

Cognizant that the same technologies, if left unregulated or improperly deployed, carry the risk of undermining fundamental rights, exacerbating inequalities, and destabilising the global order;

Acknowledging the absence of a universal governance framework and the pressing need for coordinated, anticipatory, and principled regulation;

Affirming the enduring relevance of the UN Charter, the Universal Declaration of Human Rights, the Sustainable Development Goals, and other core instruments of international law;

We hereby adopt this New Delhi Accord on Artificial Intelligence, Emerging Tech Law & Governance as a foundational and forward-looking text, to serve as a global reference for shaping responsible, equitable, and sustainable governance of AI and emerging technologies.

 

PART I: DEFINITIONS

  • Artificial Intelligence (AI): Machine-based systems that, for explicit or implicit objectives, analyse, generate, or act upon data in a manner that emulates or exceeds aspects of human cognition.
  • Emerging Technologies: Rapidly evolving technologies—including quantum computing, blockchain, the Internet of Things (IoT), extended reality (XR), decentralised digital infrastructures, and other such future technologies—not yet subject to comprehensive regulatory oversight.
  • Stakeholders: Entities and individuals, including State and non-state actors, intergovernmental organisations, industry actors, researchers, civil society groups, and end-users, who are impacted by or contribute to the AI and emerging technology ecosystem.

PART II: FOUNDATIONAL PRINCIPLES

  1. Human Rights & Dignity: The deployment and use of AI must respect and uphold internationally recognised human rights. Human agency and dignity shall remain inviolable, with safeguards to prevent discriminatory or coercive applications.
  2. Transparency & Explainability: Systems must be comprehensible and their decisions interpretable, particularly when affecting human rights, safety, or livelihoods. Stakeholders must be informed in accessible and contextually relevant ways.
  3. Accountability: Clear legal and ethical responsibility must accompany all stages of the AI lifecycle. Affected individuals and communities must have access to timely and effective remedies.
  4. Fairness & Non-Discrimination: Measures must be taken to identify, prevent, and remedy algorithmic bias and disparate impacts. Inclusivity shall guide all phases of system development and deployment.
  5. Privacy & Data Protection: The integrity, confidentiality, and lawful processing of personal data must be assured across borders and systems. Special protection is warranted for sensitive and vulnerable data categories.
  6. Safety & Security: AI systems must be resilient against failures and hostile threats. Collaborative mechanisms shall address malicious uses, including cyberattacks and autonomous weaponisation.
  7. Human Oversight: Critical decisions must involve meaningful human judgement. Oversight mechanisms must align with the degree of risk and impact.
  8. Sustainability: AI shall contribute to environmental stewardship and minimise ecological harm. Lifecycle assessments and sustainability standards shall guide design and deployment.
  9. Innovation: Responsible innovation shall be encouraged through proportionate, flexible regulation that supports research, competitive markets, and public interest objectives.
  10. International Cooperation: Multilateralism and equitable capacity-sharing shall underpin the global governance of AI. States and stakeholders shall collectively shape interoperable standards and inclusive participation.

PART III: GOVERNANCE ARCHITECTURE

  1. Global AI Governance Council (GAIGC): The Summit recommends the establishment of an international body, headquartered in New Delhi, mandated to facilitate and coordinate the global implementation of this Accord.
  2. Council Structure: The Council shall comprise a Plenary Assembly of relevant stakeholders, an Executive Bureau, and a multidisciplinary Scientific, Ethical, and Technical Advisory Board.
  3. Mandate: To develop foundational legal principles and model legislation, monitor risks, facilitate peaceful dispute resolution, and support capacity-building efforts.
  4. Regional Coordination Bodies: Regional Coordination Bodies shall contextualise global standards and report implementation progress.
  5. Risk-Based Regulation: A shared taxonomy shall categorise AI systems by risk, guiding proportionate regulatory obligations.
  6. AI Impact Assessment: High-risk systems shall be subject to mandatory, transparent assessments and audits addressing safety, rights, environmental, and socio-economic impacts.
  7. Stakeholder Inclusion: Formal structures shall ensure representation of marginalised communities, youth, Indigenous groups, and persons with disabilities.
  8. Ethics Hotline: A confidential mechanism shall be maintained for reporting violations and misconduct, with safeguards against retaliation.
  9. Data & Knowledge Platform: A multilingual open-access platform shall enable transparency, best practice exchange, and legal harmonisation.
  10. Legal Harmony: The Accord shall be interpreted in coherence with existing international legal obligations.

PART IV: SECTORAL APPLICATIONS

AI governance must extend meaningfully across key domains, including:

  • Healthcare: Systems must be clinically validated and uphold patient rights and health equity.
  • Finance: Compliance with financial law, auditability, and algorithmic fairness are imperative.
  • Transport: Autonomous mobility systems must prioritise public safety, reliability, and liability clarity.
  • Education: AI must enhance inclusive learning and preserve human educator authority.
  • Justice: Judicial and law enforcement applications must protect procedural fairness and avoid overreliance on automation.
  • Labour: AI affecting employment decisions must uphold worker rights and be subject to appeal.
  • Media: Algorithmic systems must protect information integrity and clearly identify synthetic content.
  • Military: Deployment of lethal autonomous weapons shall be restricted in accordance with international humanitarian law.
  • Emergency Response: Systems must be designed for ethical, inclusive, and timely crisis response.
  • Energy: Systems must ensure resilience and prioritise critical infrastructure security.
  • Environment: AI shall be harnessed to support ecological conservation and climate goals.
  • Cultural Heritage: Interventions must ensure authenticity and involve indigenous and local voices.
  • Sports: Automation must support, not replace, the spirit of fair play.
  • Smart Cities: Urban AI systems must be rights-centric, transparent, and subject to public consultation.
  • Liability: Risk-tiered legal regimes must provide redress and clarity.

PART V: EMERGING TECHNOLOGIES

Governance must remain agile to technological convergence:

  • Quantum Computing: State actors must coordinate transitions to post-quantum security protocols with appropriate legal and regulatory foundations.
  • AI–Biotech Convergence: Oversight must reflect biosafety, transparency, and ethical precautions.
  • Nanotechnology: Lifecycle impact assessments must guide deployment.
  • Internet of Things (IoT): Secure-by-design standards and firmware transparency are essential.
  • XR & Metaverse: Consent, autonomy, and safety must be legally enforceable in immersive environments.
  • Blockchain: Smart contracts must be auditable; ecological efficiency must be prioritised.

Emerging Technologies Legal Principles:

  • Technology Convergence: Interdisciplinary foresight must guide emerging intersections.
  • Precautionary Principle: Where uncertainty exists, responsibility shall lie with the technology proponent.
  • Technological Neutrality: Regulation shall be technology neutral, outcome-oriented and adaptable.
  • Periodic Review: Frameworks shall be reviewed at least annually.

PART VI: STRATEGIC RECOMMENDATIONS OF GSAIET, 2025

GSAIET is encouraged to pursue and work towards the fulfilment of the following:

  1. Align governance with institutional purpose and mission.
  2. Develop risk-calibrated, full-lifecycle AI legal and related policies.
  3. Embed compliance and rights safeguards from design.
  4. Encourage cross-disciplinary and cross-sectoral collaboration.
  5. Create systems for ongoing performance and impact evaluation.
  6. Prioritise openness, data stewardship, and user agency.
  7. Build capacity and cultivate ethical leadership.
  8. Ensure governance evolves with technology.
  9. Tailor obligations based on size and use-case risk.
  10. Anchor domestic efforts in global norms and consensus.

Commitments: GSAIET and its subsequent editions should:

  • Promote the harmonization of legal principles and frameworks governing AI and emerging technologies to support interoperability, innovation, and mutual trust.
  • Establish, or reinforce, institutions and platforms for multi-stakeholder dialogue, monitoring, and the sharing of best practices.
  • Encourage capacity building, education, and awareness programs to empower individuals and communities to meaningfully engage with and benefit from technological advances.
  • Commit to continuous review and adaptation of governance mechanisms, in line with technological progress and societal needs.

Role of stakeholder actors

  • This Accord invites all stakeholder actors to participate in ongoing efforts to strengthen international consensus, pool expertise, and uphold the highest standards in the governance of AI and emerging technologies.

PART VII: RECOGNITION AND FUTURE INITIATIVES

  • The Summit recognizes the contribution and current efforts of the Summit Chair, Dr. Pavan Duggal, an internationally acclaimed authority on AI law, regulation, and emerging technology law.
  • The Summit hereby endorses and upholds the Duggal Doctrine of 10 AI Legal Principles, as articulated by Dr. Pavan Duggal, to guide the responsible development, deployment, and governance of Artificial Intelligence.
  • The Summit encourages the Summit Chair to expand the scope of activities of various initiatives, including AI Law Hub, Global Artificial Intelligence Law and Governance Institute, and Pavan Duggal Associates, focusing on diverse aspects of digital jurisprudence development concerning AI, emerging technologies and their applicability.
  • The Summit calls for formal establishment of specialized working groups under the mentorship of GALGI (Global AI Law & Governance Institute). Initial focus areas could include AI liability frameworks, quantum technology law, and digital rights governance.
  • The Summit calls upon the international community to intensify efforts in addressing the legal, policy, and regulatory challenges presented by artificial intelligence and other emerging technologies such as quantum computing, blockchain, and the Internet of Things, and to develop harmonized international, regional, and national approaches to address these challenges.

PART VIII: IMPLEMENTATION ROADMAP

Stakeholder actors are encouraged to take appropriate steps to implement the provisions of this Accord.

The Global Summit encourages all governments, policymakers, businesses, and technologists to join in fostering a global AI legal order that balances innovation with responsibility.

PART IX: FUTURE SUMMITS AND DISSEMINATION

  • The Summit Chair is called upon to convene subsequent editions of GSAIET to address updated scenarios for AI and emerging technologies and to continue enriching international legal jurisprudence concerning AI and emerging technologies.
  • The organizers—AI Law Hub, Global Artificial Intelligence Law and Governance Institute, and Pavan Duggal Associates—are called upon to disseminate this New Delhi Accord to all stakeholders in the AI and emerging technologies ecosystem globally and to interact and collaborate with other stakeholders at global, regional, and national levels to advance AI and emerging tech legal jurisprudence and facilitate the future development of AI law and emerging technology law.

PART X: FINAL PROVISIONS

Adopted in New Delhi on 24 July 2025 by the multi-stakeholder participants of the digital and Artificial Intelligence ecosystem, convened in New Delhi for the Global Summit on Artificial Intelligence, Emerging Tech Law & Governance (GSAIET 2025).
