The G7 and the future of AI governance
The contours of the governance of artificial intelligence, although still nascent, are becoming clearer. Amid surging demand and fierce private-sector competition to develop and deploy AI technologies, policymakers, scholars and civil society groups are pushing in parallel for robust safeguards to ensure that AI fulfils its promises of economic growth and social advancement. Yet despite this shared aspiration, significant differences remain in how countries envision and approach AI governance.
Divergent approaches to AI governance
Across the globe, AI governance frameworks typically focus on three key dimensions: regulatory structures, degree of institutional oversight and ethical principles. Collectively, these elements guide how AI is developed, deployed and monitored. The European Union, through its AI Act, has embraced a regulation-heavy, rights- and risk-based approach. This precautionary, ‘ethics-first’ governance model contrasts markedly with the American ‘innovation first’ philosophy, which favours market-driven, self-regulated approaches. The differences between these models were made starkly visible during the Paris AI Action Summit in February 2025, where both the UK and the US refrained from signing the Statement on Inclusive and Sustainable Artificial Intelligence for People and the Planet, citing national security considerations and concerns over stifling innovation, respectively. Such visible fissures within the G7, coupled with rapid advances in China’s AI capabilities, have sparked suggestions that China’s centralised, state-controlled approach might serve as a practical alternative to the approaches presented by the US and EU.
However, it would be premature to interpret the decisions made by the UK and US at the Paris AI Action Summit as indicative of a deeper division among G7 members on AI governance. Instead, these differences highlight an opportunity to rethink and diversify governmental approaches to AI governance.
Finding alignment on global AI governance
G7 members, united by shared fundamental values of freedom, democracy and human rights, should continue to anchor their collaboration in these principles. It is these commonalities that provided the impetus for the G7's own 2023 Hiroshima AI Process. Although AI development continues to evolve rapidly and national governance approaches have diverged among G7 members, substantial common ground remains on which to build joint leadership and shape the trajectory of global AI governance.
Several areas provide promising opportunities for consensus:
First, risk-based regulation is an area of G7 alignment that could inform global AI governance efforts elsewhere. Both the EU and the US concur that high-risk AI applications necessitate stricter oversight. The G7 should work collaboratively to establish agreed standards or thresholds for categorising AI risks, facilitating tiered regulatory frameworks that balance innovation and protection effectively.
Second, there is a collective emphasis within the G7 on trustworthy and responsible AI. The EU’s strong focus on ethical AI principles aligns closely with a growing recognition in the US that trustworthiness is essential for long-term economic viability and consumer adoption. Principles such as transparency, accountability and fairness offer a shared ethical framework within the G7, simultaneously promoting consumer protection and business innovation. Under the G7 umbrella, a joint commitment to human oversight and fundamental rights could harmonise the EU’s ethical orientation with the American drive for innovation – again serving as a good example for efforts elsewhere.
Third, enhanced transparency and collaborative mechanisms for AI-related information sharing and reporting could further solidify faith in a potential G7 model. Establishing robust channels for information exchange on AI-related incidents and best practices within the G7 can inspire similar efforts elsewhere, helping to pre-empt misunderstandings and reduce the likelihood of unintended disputes.
The fact that 62 countries plus the European Union and the African Union signed the joint declaration from the Paris Summit should reassure the G7. As the declaration itself emphasises, inclusiveness, sustainability and a people-centred approach are foundational pillars for global AI governance. By building on the success of the 2023 Hiroshima Summit and remaining guided by its foundational values, the G7, at Kananaskis and beyond, is well positioned to serve as an influential model for global AI governance.