Regulation Update: Indian Government Requires Permission Before AI Launches
In an advisory issued last Friday, India’s Ministry of Electronics and Information Technology (MeitY) declared that any AI technology still in development must obtain explicit government permission before being released to the public.
Developers will also be able to deploy these technologies only after labelling the potential fallibility or unreliability of their output.
Furthermore, the document outlines plans for implementing a “consent popup” mechanism to inform users about potential defects or errors produced by AI. It also mandates the labelling of deepfakes with permanent unique metadata or other identifiers to prevent misuse.
The advisory directs all intermediaries and platforms to ensure that no AI model product, including large language models (LLMs), enables bias or discrimination, or compromises the integrity of the electoral process.
Some members of the industry have criticized India’s plans, calling them excessively restrictive:
India has just bid its future farewell! Every company that deploys a GenAI model now needs approval from the Indian government! In other words, you need approval just to deploy a 7b open source model 🤯🤯 If you’re familiar with the Indian government, you know that this will be a huge obstacle!… pic.twitter.com/PnHk8SE7TF
Developers are expected to comply with the advisory within 15 days of its issue. It is suggested that, after complying and applying for permission to launch a product, developers may need to demonstrate it to government officials or subject it to stress testing.
Although the advisory is not legally binding at present, it signifies the government’s expectations and hints at the future direction of regulation in the AI sector.
“We are doing it as an advisory today asking you (the AI platforms) to comply with it,” said IT minister Rajeev Chandrasekhar. He added that this stance would eventually be encoded in legislation.
“Generative AI or AI platforms available on the internet will have to take full responsibility for what the platform does, and cannot escape the accountability by saying that their platform is under testing,” continued Chandrasekhar, as reported by local media.
(Photo by Naveed Ahmed on Unsplash)