Around the world, countries and economic groups are at different stages when it comes to regulating artificial intelligence. While the European Union has already set up detailed rules, the United States now resembles a "Wild West" with few formal AI regulations.
Ahead of the Paris AI Summit on February 10-11, here are some key updates on how major regions are handling AI oversight:
United States
Last month, President Donald Trump reversed Joe Biden’s executive order on AI, issued in October 2023. The order had been largely voluntary, asking leading AI companies such as OpenAI to share their safety assessments and key data with the federal government. It was backed by major tech companies and aimed to protect privacy, prevent civil rights violations, and safeguard national security.
Now, even though the U.S. is home to some of the world's top AI developers, it no longer has formal AI guidelines, though existing privacy protections remain in place. As digital lawyer Yael Cohen-Hadria from EY remarked, under Trump, the United States has "picked up their cowboy hat again, it's a complete Wild West."
The administration essentially stated, "We're not following this law anymore... we're just letting our algorithms run and going all in," she added.
China
China's government is still working on a formal law for generative AI. For now, a set of "Interim Measures" mandates that AI systems must respect both personal and business interests, refrain from using personal information without consent, clearly label AI-generated images and videos, and protect users' physical and mental health.
Also, AI must "comply with core socialist values." This includes strict language against AI content that could threaten the ruling Communist Party or China's national security.
For instance, DeepSeek, whose efficient yet powerful R1 model made waves last month, has been notably vague when questioned about sensitive topics like President Xi Jinping or the 1989 crackdown on pro-democracy protests in Tiananmen Square.
While China closely regulates AI businesses, especially foreign ones, the government is expected to carve out "strong exceptions" to its own rules, as Cohen-Hadria predicts.
India
India has laws for personal data but no specific rules for AI. So far, AI-related issues have been handled using existing laws on defamation, privacy, copyright, and cybercrime. While the government has talked about AI regulations, no clear action has been taken. Cohen-Hadria believes India will make AI laws only if they benefit the economy.
In March 2024, AI companies like Perplexity criticized the government after it announced that firms needed permission to launch “unreliable” AI models. This came after Google’s Gemini made controversial remarks about Prime Minister Narendra Modi. The government later changed the rules, only requiring AI-generated content to have disclaimers.
Britain
Britain’s Labour government sees AI as a key part of its plan to grow the economy. The country has the third-largest AI sector in the world, after the US and China.
In January, Prime Minister Keir Starmer introduced an "AI opportunities action plan," saying the UK should forge its own approach. He believes AI should be tested before new rules are made. The plan states that good regulation can help AI grow safely, while bad regulation could slow down progress.
The government is also reviewing copyright laws to make sure they protect the creative industry while addressing AI’s impact.
European Union
The European Union takes a different approach from the US and China, focusing on protecting citizens' rights. According to Cohen-Hadria, EU regulations ensure that everyone involved, from AI providers and users to consumers, shares responsibility.
The AI Act, passed in March 2024, is the world's most detailed AI law. Some parts of it take effect this week. It bans AI systems that use biometric data to predict a person's race, religion, or sexual orientation, as well as AI-based predictive policing.
The law follows a risk-based approach: the higher the risk an AI system poses, the stricter the rules companies must follow. EU leaders believe clear regulations will help businesses operate more smoothly.
The EU also emphasizes strong protections for intellectual property while ensuring data can flow freely, giving citizens more control over their personal information.