Google has apologized to the Indian government after its AI platform, Gemini, generated baseless remarks about Prime Minister Narendra Modi. The apology came in response to a government notice seeking clarification on the AI's questionable outputs, an episode that has raised broader concerns about the reliability of AI platforms still in their trial phases.
Rajeev Chandrasekhar, the Minister of State for Electronics & IT, expressed the government's dissatisfaction with Google's response and criticized the practice of treating India as a testing ground for unproven AI platforms. The incident prompted the announcement of new rules requiring AI platforms to obtain government approval before operating in India. The minister stressed the need for transparency and accountability, with platforms obliged to inform users that their outputs may be unreliable and carry the risk of inaccurate or unlawful content.
The government's position is unequivocal: AI platforms must comply with Indian IT and criminal laws, and unreliability cannot serve as a defense against legal action for spreading misinformation. This reflects a broader concern about the societal impact of AI and the necessity for robust regulatory frameworks to ensure responsible development and deployment of these technologies.
In response to these challenges, the government has advised AI-led startups to clearly label unverified information so that misinformation is not passed off as fact. The advisory is part of a broader push to tackle AI-related harms, including deepfakes, with the stated aim of ensuring that the technology serves the public good while protecting citizens from potential harm.