India tells tech firms approval is needed before releasing new ‘unreliable’ AI tools
The Indian government has issued a new advisory warning tech companies that lab-level AI tools need government approval before public release on the internet.
The Indian government has issued an advisory telling tech companies developing new artificial intelligence (AI) tools that the tools must be approved by the government prior to release.
According to the advisory released by the Indian IT ministry on March 1, this approval must be granted before the public release of AI tools that are “unreliable” or still in a trial phase, and such tools should be labeled to warn users that they may provide inaccurate answers to queries. The ministry added:
“Availability to the users on Indian Internet must be done so with explicit permission of the Government of India.”
Additionally, the advisory asked platforms to make sure that their tools will not “threaten the integrity of the electoral process,” as general elections are anticipated this summer.
The new advisory comes shortly after one of India’s top ministers called out Google over its AI tool Gemini’s “inaccurate” or biased responses, including one saying that Indian Prime Minister Narendra Modi has been characterized by some as a “fascist.”
Google apologized for Gemini’s shortcomings and said it “may not always be reliable,” particularly for current social topics.
Rajeev Chandrasekhar, India’s deputy IT minister, said on X, “Safety and trust is [a] platform’s legal obligation. ‘Sorry Unreliable’ does not exempt from law.”
In November, the Indian government said it would introduce new regulations to help combat the spread of AI-generated deepfakes ahead of the country’s upcoming elections — a move also made by regulators in the United States.
However, the latest AI advisory drew pushback from the tech community, with critics arguing that India is a leader in the tech space and that it would be a “crime” if the country “regulated itself out of this leadership.”
Chandrasekhar responded to this “noise and confusion” in a follow-up post on X, saying that there should be “legal consequences” for platforms that “enable or directly output unlawful content.” He added:
“India believes in AI and is all in not just for talent but also as part of expanding our Digital & Innovation ecosystem. India’s ambitions in AI and ensuring Internet users get a safe and trusted internet are not binaries.”
He also clarified that the advisory was simply meant to advise those deploying “lab-level or under-tested AI platforms onto [the] public internet” to be aware of their obligations and the consequences under Indian law, and of how best to protect themselves and their users.
On Feb. 8, Microsoft partnered with Indian AI startup Sarvam to bring an Indic-voice large language model (LLM) to its Azure AI infrastructure to reach more users in the Indian subcontinent.