The UK and US have declined to sign an international declaration on artificial intelligence (AI) at a global summit in Paris, putting them at odds with countries such as France, China, and India, which have pledged a collaborative and ethical approach to AI development.
The agreement, endorsed by 60 nations, commits to ensuring AI is “transparent,” “safe,” and “secure and trustworthy,” while also addressing digital divides and the environmental impact of AI. The UK government defended its decision not to sign, stating it “hadn’t been able to agree all parts of the leaders’ declaration” and would only back initiatives aligned with national interests.
US pushes ‘pro-growth’ AI policies over regulation
US Vice President JD Vance told delegates that excessive AI regulation could “kill a transformative industry just as it’s taking off” and vowed that the Trump administration would prioritise “pro-growth AI policies” over stringent safeguards.
“Rather than strangle AI with regulation, we should foster its development,” Vance said, urging European leaders to adopt a more optimistic stance. His comments contrasted sharply with those of French President Emmanuel Macron, who defended regulatory measures, stating: “We need these rules for AI to move forward.”
UK stance sparks concerns over AI credibility
The UK, which previously led global discussions on AI safety—hosting the world’s first AI Safety Summit in November 2023—now risks undermining its credibility in this area, according to industry experts.
Andrew Dudfield, Head of AI at fact-checking organisation Full Fact, warned that refusing to sign the declaration could weaken the UK’s position as a leader in ethical AI.