An edited video of Home Minister Amit Shah went viral recently, falsely showing him promising an end to reservations for Scheduled Tribes, Scheduled Castes, and Other Backward Classes. While BOOM's fact-check found that the video was doctored using conventional video editing tools, it was falsely labelled a 'deepfake' by other Bharatiya Janata Party leaders and various mainstream media outlets.

Political parties in India are experimenting with AI in myriad ways this election season. For instance, AI voice clones of politicians are being used to craft cadre and voter outreach messages. Political parties are also skirting social media rules by using satirical videos that employ voice cloning, face swaps and other AI editing techniques to target their political rivals. These videos are posted on their official Instagram and YouTube handles.

However, the damaging deepfakes that are outright deceptive are being shared either by IT cell workers, or through proxies and diffused actors who support the party and its ideology. Even as misinformation peaks during election season in India, shallow fakes or cheapfakes still make up most of the misinformation seen online so far.


Not Always Deepfakes 

Multiple mainstream media outlets, including the Indian Express, Times Now, Republic and DNA, dubbed the doctored Amit Shah video a deepfake. While speaking at a rally in Maharashtra's Satara, Prime Minister Narendra Modi also referred to the video as having been altered using artificial intelligence.

However, upon analysing the video, BOOM found that it had been doctored by splicing together different portions of his speech, in order to decontextualise a statement Shah had made on ending reservations for Muslims in Telangana, ahead of the state's 2023 legislative elections. Our analysis confirmed that the video was not a 'deepfake' – that is, it was not altered or created using artificial intelligence or deep learning algorithms.

Last year, while addressing the media at a Diwali Milan organised by the BJP at its national headquarters in New Delhi in November, Modi highlighted the threats posed by 'deepfakes'. As an example, he cited a video he had seen, purportedly of himself dancing garba, and called it a deepfake. BOOM had fact-checked this video and found that it was neither a deepfake nor edited, but a real video of a Narendra Modi lookalike named Vikas Mahante dancing garba at a Diwali event in the UK.

Days before the first phase of elections, a video of Dinesh Lal Yadav, the BJP candidate from Azamgarh, went viral, in which he could be seen saying that Modi and UP CM Yogi Adityanath had chosen to remain childless in order to curb unemployment. After the video was widely shared by Congress supporters, BJP IT Cell head Amit Malviya tweeted that it was a deepfake.

BOOM analysed the video using deepfake detection tools, and also retrieved the original video file from the reporter who shot it, which confirmed that the video was authentic. Although the sequence of his comments was rearranged in the viral video, it was not a deepfake, nor did the rearrangement change the meaning of his remarks.

“We should absolutely expect the term ‘deepfakes’ to be misused. Just as terms like ‘misinformation’ or ‘fake news’ were used to dismiss any evidence that a political actor didn’t like,” said Prateek Waghre, Executive Director, Internet Freedom Foundation. 

Speaking to BOOM, Waghre highlights two different types of cases of misusing the term ‘deepfakes’. “One could be the lack of awareness in terms of the nuance of the difference between an edited version, a ‘cheapfake’ and a deepfake. The other is the outright dismissing of evidence by calling it a deepfake.”

The Real Political Deepfakes 

The use of deepfakes in politics is not new. The first use of such technology in the context of an election was seen on February 7, 2020 – a day before Delhi headed for legislative polls. Several videos appeared showing Bharatiya Janata Party leader Manoj Tiwari criticising the Aam Aadmi Party government and its leader, Arvind Kejriwal, and urging people to vote for the BJP. These videos showed Tiwari speaking in English, Hindi and a Hindi dialect from Haryana. However, Vice found that only the Hindi version was originally shot by Tiwari – the English and Haryanvi versions were fabricated using a 'lip-sync' deepfake algorithm trained on videos of Tiwari's speeches.

Neither the Election Commission of India nor the BJP has officially commented on Tiwari's deepfake videos. More recently, BOOM has fact-checked multiple videos that were made or altered using artificial intelligence and shared in the context of the ongoing polls. Less than a week before the polls kicked off, a video of Congress leader Rahul Gandhi appeared on social media, in which he can be heard announcing his resignation from the party. BOOM found that the video actually showed Gandhi filing his election nomination from Wayanad, Kerala, overlaid with an AI-cloned voice.

Just days before the election, two videos went viral showing Bollywood actors Aamir Khan and Ranveer Singh criticising the BJP-led government. BOOM analysed both videos and found evidence that they had been altered using AI voice cloning.

Another video hit social media between the first and second phases of the elections, in which Congress leader Kamal Nath can be heard promising land to Muslims for the construction of a mosque, and the reinstatement of Article 370. BOOM found that this video was also altered, with Nath's voice replaced by an AI-cloned replica.

The Election Commission of India is yet to make a statement acknowledging the emergence of such AI-led disinformation in the context of the polls. Even before the elections in India, deepfakes were peddled during polls in neighbouring Pakistan and Bangladesh earlier this year.

How Can We Regulate Deepfakes?

Regulation of AI has been a hot topic of debate in India, following a botched advisory attempt by the Minister of State for Electronics and Information Technology Rajeev Chandrasekhar, which led to massive confusion and multiple clarifications from the minister.

"Whether you choose to use the term 'deepfake', whether you look at the broader issue of 'synthetic media', there needs to be clarity around the concepts we're trying to regulate. And today that's getting muddied by the public discourse around it," Waghre points out.

The novelty of AI tools has left many perplexed about what exactly is being discussed, and the deliberate misuse of the term 'deepfake' is expected to make it worse. How, then, can we rein in the abuse of such technology?

Referring to the now-revoked advisory by MeitY, Radhika Jhalani, a volunteer legal counsel, highlights the evolving nature of the technology, which has "the potential to change every day", and recommends a balanced approach to new laws. "Especially with 2024 being a major election year globally, deepfakes are a cause of concern. Any legislation that is implemented needs to balance misuse of tech with free speech," Jhalani tells BOOM.

Waghre believes we should avoid rushing into new regulations, and instead examine existing laws to find the regulatory gaps. "What are the different ways it (deepfake) is being used in a harmful context? And what do the existing laws say about forgery, identity theft etc.? Then you recognise the regulatory gaps, where the laws are currently falling short. Do we need new laws to cover them, do we need to amend the laws, or improve the redressal mechanism?" Waghre adds.

(This story was originally published by BOOM, and republished by NDTV as part of the Shakti Collective.)

(Except for the headline, this story has not been edited by NDTV staff and is published from a syndicated feed.)
