A Discordant Note: AI Cloning as a Risk in Electoral Processes

In December 2023, the audience at the Kashi Tamil Sangamam event in Varanasi was handed earplugs to listen to an election speech. Prime Minister Narendra Modi spoke in Hindi, but the audience heard the speech in Tamil: a real-time translation of a speech into a native language far from its place of origin, one of the first such instances in the country. In another incident, a video of Congress leader Rahul Gandhi featuring an AI-generated clone of his voice went viral in April this year. In the video, Gandhi appears to deliver what is falsely framed as his swearing-in speech as the newly elected Prime Minister of India. The swearing-in, if it happens at all, is scheduled for June this year; yet the video speaks of the event in the past tense, as if the ceremony had already been held.

These two incidents involving prominent leaders put the spotlight on AI and its use: the possible misuse of the technology at hand, and its direct implications for a democratic process like elections.

In PM Modi’s speech, the tool deployed was Bhashini, an AI system whose translation engine is designed to enable real-time translation between Indian languages. He called it a “new beginning,” highlighting the role of the technology in simplifying communication with the public. Launched by the PM himself during Digital India Week in Gujarat in 2022, Bhashini aims to enhance internet accessibility and digital services in Indian languages, incorporating voice-based functions and promoting content creation in diverse languages. The AI translation system is accessible through dedicated Android and iOS apps and is marketed as breaking down language barriers, enabling conversations between speakers of different Indian languages. While the technology can be useful in many scenarios, its rushed deployment and promotion by the ruling government is contentious.

Gandhi’s video, on the other hand, was identified as a voice clone by the deepfake analysis tool Itisaar, a homegrown technology from IIT Jodhpur; the finding was confirmed by contrails.ai, another AI detection tool, according to this report. In a similar incident, a deepfake video of Congress politician and former Madhya Pradesh Chief Minister Kamal Nath circulated, falsely claiming that he had promised to build a mosque and to reinstate Article 370, a provision of the Indian Constitution that gave special status to Jammu and Kashmir. The audio in the clip is not an original recording but a cloned version of the leader’s voice.


Building an ecosystem for AI tools


Tech giants and startups across the globe have anchored themselves in AI and its voice tools. Microsoft’s translation service, Microsoft Translator, supports more than 20 Indian languages. Google, for its part, had Bard and Duet before launching Gemini to carry its AI ambitions. Many other startups are springing up in India and globally.

In India, the Ministry of Electronics and Information Technology (MeitY) issued an advisory to the AI industry stating that all generative AI products, such as large language models along the lines of ChatGPT and Google’s Gemini, would have to be made available “with [the] explicit permission of the Government of India” if they are “under-testing/ unreliable”. However, the government has not elaborated on how IT laws can apply to automated AI systems in this way.

In a panel discussion held by the Delhi-based legal services organization Software Freedom Law Center, one of the panelists, Prof. (Dr.) Charru Malhotra, a senior professor at the Indian Institute of Public Administration, said, “The usage of AI in electoral speeches allows politicians to deliver speeches in local languages, leading to localised campaigning. This is surely a positive development, with AI helping to enhance the outreach of parties’ messages to the masses; voter engagement can increase, and it also opens up opportunities for personalized interactions, with messages sent at scale.”

However, pointing to the negative repercussions, Professor Malhotra cautioned against the misuse of AI by adversary nations and rogue actors to mount influence operations using deepfakes. Not only can such content wrongly instigate the masses and influence impressionable minds, it can also create ethical dilemmas of another sort. “This can lead to a possible loss of accountability: a candidate can make controversial statements, create unrest in the minds of the electorate, and then withdraw or disown responsibility by (wrongly) attributing these statements to AI-created deepfakes. This can lead to a lack of accountability on the part of the candidate,” Professor Malhotra cautioned.

This digital revolution, silently seeping into our lives, has on one hand distanced voters from a real connection with their leaders during election campaigns, but on the other hand leapfrogged the language barrier at a public event. There is no active regulation to guard against the ethical implications of AI voice tools, and even outside elections their unchecked spread is questionable. When tools that actively participate in electoral processes are owned by the government or by tech giants, questions of vested interests arise. For the greater good, software intended as a public good must be open source, reliable, and devoid of favoritism.


Shruthi Manjula Mohan

Shruthi is the Program Lead and Opinion Editor at DataLEADS.
