FAQs

The Global AI Bill of Rights addresses a fundamental question:
How can societies ensure that as artificial intelligence becomes embedded in public life, its growing power strengthens human dignity, democratic capacity, and collective welfare?
This FAQ section provides an entry point into the framework—clarifying its purpose, scope, and place within broader efforts to govern AI systems that shape social outcomes at scale. It is intended for policymakers, public institutions, researchers, and others engaged with AI in public and institutional contexts.
The Global AI Bill of Rights does not prescribe technical solutions or fixed policies. Instead, it establishes a shared foundation for understanding how rights, legitimacy, and public trust can be carried forward as AI systems become more capable and more deeply integrated into society.
While the framework does not dictate implementation, it is designed to inform how AI systems that exercise public authority are designed, governed, and sustained over time, supporting responsible innovation and long-term societal benefit.

What is the Global AI Bill of Rights?
The Global AI Bill of Rights is a normative framework that defines the foundational rights people should retain when artificial intelligence systems exercise power over public life, particularly when those systems are deployed as part of sovereign, public-interest, or state-governed initiatives.
It articulates the rights, principles, and system requirements necessary to ensure that AI systems respect human dignity, democratic accountability, and social legitimacy when they are embedded in core societal functions, operate at scale, or shape institutional decision-making.

Why is an AI Bill of Rights necessary?
As artificial intelligence systems are increasingly used to allocate resources, evaluate individuals, enforce rules, and shape access to public services, they are no longer just tools—they are becoming mechanisms of delegated authority.
Existing legal and ethical frameworks were not designed for systems that operate at scale, adapt over time, and make or inform decisions that directly affect people without transparency, consent, or clear lines of accountability. When intelligence is automated and embedded into institutions, rights can be weakened not through intent, but through design and default.
An AI Bill of Rights is therefore necessary to ensure that as intelligence becomes more powerful and institutionalized, human dignity, agency, due process, and democratic accountability remain explicit constraints, rather than assumptions that erode quietly over time.
It establishes a clear baseline for what must be protected whenever AI systems exercise power over people as part of public or sovereign functions.

Who is the Global AI Bill of Rights for?
The Global AI Bill of Rights is intended for governments, public institutions, regulators, and multilateral bodies responsible for deploying, authorizing, or overseeing AI systems that shape public life.
It is designed for:
- Policymakers and legislators
- Regulatory and oversight authorities
- Public agencies using AI in core functions
- Researchers, standards bodies, and international institutions supporting AI governance
The framework is specifically concerned with AI systems that exercise public or delegated authority—including systems deployed by states, authorized by public institutions, or operating as part of essential public infrastructure.
It is not a consumer rights guide, a corporate ethics charter, or a general-purpose compliance framework for all private AI systems.

Is the Global AI Bill of Rights legally binding?
No. The Global AI Bill of Rights is not a law, does not mandate enforcement, and does not replace democratic legislative or regulatory processes. It is a normative framework, not a legal instrument.
Its purpose is to clarify what must be protected when AI systems exercise public or delegated authority—so that lawmakers, regulators, courts, and institutions have a clear reference point when designing laws, policies, oversight mechanisms, and governance structures.
In this way, the framework guides and constrains future action without pre-empting it, preserving democratic choice while establishing a shared baseline for legitimacy and rights.

How does the Global AI Bill of Rights differ from national and regional AI frameworks?
Many national and regional initiatives outline principles for responsible or ethical AI within specific legal systems. The Global AI Bill of Rights differs in both scope and purpose.
First, it is global and system-level. Rather than reflecting the laws or policy priorities of a single jurisdiction, it articulates a shared baseline of rights and expectations for AI systems that exercise public or delegated authority across diverse political, legal, and cultural contexts.
Second, it focuses on rights and system requirements, not policy checklists or compliance guidance. The framework does not prescribe specific rules, technologies, or regulatory models. Instead, it defines what must be protected—and what AI systems must be capable of—when they shape public life.
Finally, the Global AI Bill of Rights is designed to complement, not replace, national laws, regulations, and democratic processes. It provides a common reference point that can inform legislation, regulation, institutional design, and international coordination, while leaving implementation choices to sovereign decision-making.

What rights does the Global AI Bill of Rights articulate?
The Global AI Bill of Rights articulates rights centered on:
- Human dignity and agency, ensuring people are not reduced to data points or automated outcomes
- Fair treatment and non-discrimination, particularly where AI systems influence access to opportunity, services, or public goods
- Transparency, explainability, and accountability, proportional to the system’s role and impact
- Contestability, oversight, and meaningful redress, so individuals and institutions can challenge and correct harmful outcomes
- Protection against systemic harms, including environmental and resource impacts
In this context, AI systems that exercise public or delegated authority must not externalize environmental costs in ways that undermine collective welfare, intergenerational equity, or democratic accountability. Because such systems operate at scale and rely on shared infrastructure and resources, their impacts extend beyond individual users to society as a whole.
These rights recognize that AI systems can affect people without their awareness or consent, and that protections must be embedded at the system and institutional level, not left to individuals to manage alone.

What do these rights require in practice?
Rights imply system requirements.
For AI systems to uphold the rights articulated in the Global AI Bill of Rights, they must be designed, deployed, and governed in ways that make those rights practically enforceable, not merely aspirational—especially when such systems exercise public or delegated authority.
This requires that AI systems be supported by:
- Clear accountability and governance structures, so responsibility for system behavior, outcomes, and harms is identifiable and cannot be displaced by automation
- Transparency and explainability proportional to impact, enabling oversight bodies, institutions, and affected individuals to understand how decisions are made and how they can be challenged
- Auditability and contestability, allowing systems and outcomes to be reviewed, tested, corrected, and, where necessary, suspended
- Institutional capacity to intervene, including the authority and resources to modify, override, or withdraw systems that undermine rights or public trust
- Management of systemic externalities, including environmental, infrastructure, and societal impacts, proportional to the scale, persistence, and criticality of the system
The framework emphasizes that rights are upheld not by intent or technical performance alone, but through durable institutional responsibility, governance, and public capacity. Without these system-level requirements, even well-designed AI systems can erode rights through scale, opacity, or neglect.

How does the Global AI Bill of Rights relate to Sovereign AI Finance?
The Global AI Bill of Rights defines what must be protected when AI systems exercise public or delegated authority—articulating the rights, limits, and legitimacy conditions that should govern sovereign and public-interest AI initiatives.
Sovereign AI Finance addresses a different but complementary question: how states build, finance, and govern the durable capacity required to uphold those rights over time. This includes access to intelligence, digital and physical infrastructure, institutional control, and continuity beyond short-term political or budget cycles.
The two frameworks operate at distinct but interdependent levels:
- The Global AI Bill of Rights functions as a normative and constitutional layer, establishing rights and constraints on the exercise of AI-enabled public power.
- Sovereign AI Finance functions as an institutional and economic layer, ensuring that the systems, infrastructure, and governance needed to protect those rights are sustainable and publicly accountable.
Rights without capacity are fragile. Capacity without rights is illegitimate. Together, the two frameworks form a coherent foundation for governing AI as public infrastructure in democratic societies.

What does the Global AI Bill of Rights not do?
The Global AI Bill of Rights is intentionally limited in scope.
It does not:
- Prescribe specific technologies, models, or technical architectures
- Define regulatory rules, compliance procedures, or enforcement mechanisms
- Establish funding, procurement, or financing frameworks
- Replace democratic lawmaking, judicial processes, or institutional authority
- Serve as a certification, licensing, or audit regime
The framework’s purpose is to clarify rights, boundaries, and legitimacy conditions, not to operationalize them. Decisions about implementation, enforcement, financing, and institutional design remain the responsibility of sovereign governments and democratic processes.
By explicitly setting these limits, the Global AI Bill of Rights aims to strengthen—not bypass—democratic governance, while providing a shared reference point for how AI systems that exercise public power should be constrained and evaluated.

What is the long-term aim of the Global AI Bill of Rights?
As AI systems increasingly shape public institutions, allocate resources, influence life outcomes, and mediate access to opportunity, the framework seeks to establish a durable baseline of rights and constraints that can guide governance across political systems, technological change, and generations.
By articulating shared expectations for how AI systems that exercise public or delegated authority should be designed and governed, the Global AI Bill of Rights aims to support a future in which access to intelligence strengthens democratic capacity and collective welfare, rather than concentrating power without accountability.
Its ambition is not to predict how AI will evolve, but to ensure that legitimacy, rights, and public trust remain foundational conditions of that evolution.