India's Ministry of Electronics and Information Technology (MeitY) has issued a significant advisory to companies operating Artificial Intelligence (AI) platforms ahead of the general elections scheduled for later this summer. The advisory primarily targets companies that operate generative AI platforms, including industry giants like Google and OpenAI, instructing them to ensure that their services do not produce responses that could compromise the integrity of the electoral process.
Under the advisory, companies providing AI platforms, particularly those offering under-testing or unreliable AI systems or Large Language Models (LLMs) to Indian users, must label their output to indicate its potential fallibility or unreliability. This requirement aims to inform users about the limitations of AI-generated content, ensuring transparency and accountability.
India’s regulatory measures and legislative intent
The advisory serves as a precursor to potential legislative action to regulate AI platforms in India. Minister of State for Electronics and IT Rajeev Chandrasekhar emphasized that the advisory signals the government's intention to introduce legislation governing generative AI platforms. Chandrasekhar, a Bharatiya Janata Party (BJP) Lok Sabha candidate for the 2024 General Elections, indicated that the government may ask companies for demonstrations of their AI platforms and consent architecture.
Companies have been given a 15-day deadline to submit an action-taken report in response to the advisory. They must comply with several directives, including obtaining explicit permission from the Government of India before deploying under-testing or unreliable AI models or LLMs. Additionally, companies must implement mechanisms, such as consent popups, to inform users about the potential fallibility of AI-generated output.
Identification of misinformation and deepfakes
To combat the spread of misinformation and deepfakes, the government has instructed companies to label AI-generated responses with a permanent unique identifier. This measure aims to facilitate the identification of the originator of any misinformation or deepfake content, enhancing accountability and transparency on online platforms.
Furthermore, the advisory mandates that all intermediaries and platforms utilizing AI models must ensure that their computer resources do not promote bias, discrimination, or threats to the integrity of the electoral process. In regulating AI technologies this way, the government aims to safeguard the electoral process from potential manipulation or distortion.
The Indian government’s advisory to AI companies underscores its commitment to regulating AI platforms and protecting the integrity of democratic processes, particularly ahead of the upcoming general elections. By imposing transparency and accountability requirements, the government aims to mitigate the risks associated with AI-generated content and combat the spread of misinformation and deepfakes. As companies prepare to comply with the directives outlined in the advisory, the regulatory landscape surrounding AI technologies in India is expected to evolve, shaping the future of digital governance and technology ethics in the country.