In a landmark move, India’s Ministry of Electronics and Information Technology (MeitY) issued an advisory last Friday, ushering in a new era of regulation for artificial intelligence (AI) technologies in the country. The advisory mandates that developers obtain explicit government permission before releasing to the Indian public any AI technology that is still under development.
The decision reflects the government’s commitment to ensuring the responsible deployment of AI systems while addressing risks and challenges associated with their use.
The advisory, outlined in a detailed document, emphasizes the need for developers to label AI-generated output to indicate its possible fallibility or unreliability. This requirement aims to promote transparency and accountability, enabling users to make informed decisions about the reliability of AI-driven products and services.
Additionally, the document introduces plans for a “consent popup” mechanism to inform users about possible defects or errors in AI-generated output, further improving transparency and user awareness.
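The advisory itself does not prescribe how such labelling or consent prompts should be implemented. Purely as an illustration, the minimal Python sketch below shows one way a platform might prepend a fallibility notice to model output and gate it behind an explicit user acknowledgement; all names and wording in it are hypothetical, not drawn from the advisory.

```python
# Hypothetical sketch: labelling AI output and requiring a one-time
# acknowledgement, loosely mirroring the advisory's "consent popup" idea.

UNRELIABILITY_NOTICE = (
    "Notice: this response was generated by an AI system that is still "
    "under testing and may be inaccurate or unreliable."
)


def label_output(ai_response: str) -> str:
    """Prepend a fallibility disclaimer to raw model output."""
    return f"{UNRELIABILITY_NOTICE}\n\n{ai_response}"


def consent_gate(show_output) -> None:
    """Require explicit consent before displaying AI-generated output."""
    answer = input(
        "This service uses an under-testing AI model whose output may "
        "contain errors. Continue? [y/N] "
    )
    if answer.strip().lower() == "y":
        show_output()
    else:
        print("Output withheld until consent is given.")


if __name__ == "__main__":
    demo_response = "The capital of India is New Delhi."  # placeholder model output
    consent_gate(lambda: print(label_output(demo_response)))
```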
One of the key aspects of the advisory is the directive to label deepfakes with permanent unique metadata or other identifiers to prevent misuse. Deepfakes, which are AI-generated synthetic media, have the potential to deceive and manipulate individuals, posing significant risks to privacy, security, and societal trust.
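Again, the advisory does not specify a format for these identifiers. As a purely illustrative sketch, the snippet below uses the Pillow library to attach a provenance label and a unique ID to a PNG’s metadata; real deployments would more likely rely on tamper-resistant techniques such as cryptographic watermarking or C2PA-style content credentials, since plain metadata does not survive re-encoding.

```python
# Illustrative sketch only: embedding a unique identifier into PNG metadata.
import uuid

from PIL import Image
from PIL.PngImagePlugin import PngInfo


def tag_synthetic_image(in_path: str, out_path: str) -> str:
    """Attach a provenance label and a unique ID to an AI-generated image."""
    identifier = str(uuid.uuid4())
    metadata = PngInfo()
    metadata.add_text("ai-generated", "true")
    metadata.add_text("synthetic-media-id", identifier)

    with Image.open(in_path) as img:
        img.save(out_path, pnginfo=metadata)
    return identifier


if __name__ == "__main__":
    # "generated.png" is a placeholder path for an AI-generated image.
    print(tag_synthetic_image("generated.png", "generated_labelled.png"))
```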
Moreover, the advisory directs all intermediaries and platforms to ensure that their AI models and products, including large language models (LLMs), do not permit bias or discrimination and do not threaten the integrity of the electoral process. Bias and discrimination in AI systems have raised concerns globally, highlighting the importance of addressing these issues to ensure fairness and equity in AI-driven decision-making.
Developers are requested to comply with the advisory within 15 days of its issuance. It has been suggested that after compliance and application for permission to release a product, developers may be required to demonstrate their products to government officials or submit them for stress testing.
While the advisory is not legally binding at present, it signifies the government’s expectations and hints at the future direction of regulation in the AI sector. IT minister Rajeev Chandrasekhar emphasized that this stance would eventually be encoded in legislation, further solidifying the regulatory framework for AI in India.
Chandrasekhar stressed the importance of AI platforms taking full responsibility for their actions, stating that accountability cannot be evaded by citing the developmental stage of AI systems.
India’s journey towards regulating AI is not without its challenges and controversies. While acknowledging the potential of AI to transform industries and improve lives, the government remains vigilant in addressing ethical, legal, and societal implications associated with its use.