The amount of deepfake content online is growing rapidly. At the beginning of 2019, there were 7,964 deepfake videos online, according to a report from start-up Deeptrace; just nine months later, that figure had jumped to 14,678. It has no doubt continued to balloon since then. The same report found that 96% of deepfakes were pornographic.
With the use of Generative AI (GenAI), the world of ‘fake news’ and ‘true lies’ just got murkier. Last week, a ‘fake’ video of President Joe Biden was followed by his Administration issuing an Executive Order on the governance of AI frameworks. The fake video of Rashmika Mandanna earlier this week took Bollywood by storm, with senior members of the fraternity calling for legal action. Deep Fake Love is a Spanish reality TV dating show on Netflix that uses deepfake technology to blur the lines between reality and fabrication.
Deepfakes, built using Generative Adversarial Networks (GANs), have been around for many years. However, with the emergence of GenAI, they have become more lifelike and much easier to produce at scale. Invariably, fake videos target celebrities and politicians. With several elections around the corner in India, politicians and political parties could be both the creators and the targets of such fake videos. These would be used to spread misinformation, put political opponents on the spot, or even build an entire campaign to sway voters.
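The adversarial idea behind a GAN can be sketched in a few lines of code. This is a minimal toy, not a deepfake system: a one-parameter-pair "generator" learns to imitate a simple Gaussian data distribution by trying to fool a logistic-regression "discriminator", the same tug-of-war that, at vastly larger scale, produces convincing fake faces. All values here (distributions, learning rate, step counts) are illustrative.

```python
import math
import random

random.seed(0)

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def real_batch(n):
    # "Real" data: samples from the Gaussian the generator must imitate.
    return [random.gauss(4.0, 0.5) for _ in range(n)]

# Generator g(z) = a*z + b; discriminator D(x) = sigmoid(u*x + c).
a, b = 0.1, 0.0   # generator parameters (illustrative starting values)
u, c = 0.1, 0.0   # discriminator parameters
lr, n = 0.05, 64

for step in range(2000):
    z = [random.gauss(0.0, 1.0) for _ in range(n)]
    xr, xf = real_batch(n), [a * zi + b for zi in z]

    # Discriminator update: push D(real) toward 1 and D(fake) toward 0.
    sr = [sigmoid(u * x + c) for x in xr]
    sf = [sigmoid(u * x + c) for x in xf]
    gu = sum(-(1 - s) * x for s, x in zip(sr, xr)) / n \
         + sum(s * x for s, x in zip(sf, xf)) / n
    gc = sum(-(1 - s) for s in sr) / n + sum(sf) / n
    u, c = u - lr * gu, c - lr * gc

    # Generator update: push D(fake) toward 1, i.e. fool the discriminator.
    z = [random.gauss(0.0, 1.0) for _ in range(n)]
    xf = [a * zi + b for zi in z]
    sf = [sigmoid(u * x + c) for x in xf]
    ga = sum(-(1 - s) * u * zi for s, zi in zip(sf, z)) / n
    gb = sum(-(1 - s) * u for s in sf) / n
    a, b = a - lr * ga, b - lr * gb

fake_mean = sum(a * random.gauss(0.0, 1.0) + b for _ in range(1000)) / 1000
print(f"mean of generated samples: {fake_mean:.2f} (real data mean is 4.0)")
```

Neither network "wins" outright; training pushes the generator's output distribution toward the real one, which is why GAN-made media can be so hard to tell apart from the genuine article.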
The aam junta – people like you and me – could also be victims. It could be someone wanting to embarrass us professionally, or a jilted lover wanting revenge on their ex. It could even be an inconsequential prank by ‘friends’ wanting to make fun of us on social media. The possibilities, unfortunately, are endless.
Safeguard yourself by:
1. Double-check the source. Look for the same story across different media outlets to verify its authenticity.
2. Avoid sharing unverified information.
3. Always approach content with a critical mind. If it seems off, there’s a good chance it might be.
4. Tighten your online privacy settings. The less data you have out there, the harder it is for someone to create a deepfake of you.
5. If you find a deepfake of yourself, report it to the authorities immediately.

Watch out for:
1. Look for unnatural blinking or the lack of it, facial distortions, or lighting that does not look right.
2. The voice might give it away. It could be too robotic or just slightly out of sync with the lip movements.
3. If it sounds too sensational to be true, trust your gut. Does it fit the context, or is it just too outlandish?
4. Who put this out into the world – a reliable source or a notorious fake-news factory?
It is extremely important for regulators to sit up and take notice – this is the time to put in place stringent regulation with exemplary punishment for offenders. It should be mandated that anyone using an AI model to produce an image or information must disclose it. People must be made aware of classifiers – software that can detect AI-generated content – and their use must become widespread, much like antivirus software. There is an entire ethical and moral conversation that must gain traction to create awareness of how GenAI should be utilised.
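The disclosure-plus-classifier combination described above can be sketched in code. Everything in this sketch is hypothetical: the `ai_generated` manifest field, the feature names, and the heuristic weights are illustrative stand-ins, not any real detection API or provenance standard.

```python
# Illustrative sketch: combine a provenance check (creator disclosure)
# with a simple score-based classifier, much as antivirus combines
# signatures with heuristics. All fields and weights are hypothetical.

def check_disclosure(manifest: dict) -> bool:
    # A mandated disclosure could travel as metadata alongside the file.
    return bool(manifest.get("ai_generated", False))

def classifier_score(features: dict) -> float:
    # Hypothetical weights over the visual/audio cues listed above.
    weights = {
        "unnatural_blinking": 0.4,
        "lip_sync_offset": 0.35,
        "lighting_inconsistency": 0.25,
    }
    return sum(weights[k] * features.get(k, 0.0) for k in weights)

def flag_content(manifest: dict, features: dict, threshold: float = 0.5) -> str:
    if check_disclosure(manifest):
        return "disclosed-AI"        # creator complied with a disclosure rule
    if classifier_score(features) >= threshold:
        return "suspected-deepfake"  # heuristics tripped; route for review
    return "no-flag"

print(flag_content({"ai_generated": True}, {}))  # disclosed-AI
# score = 0.4*0.9 + 0.35*0.8 = 0.64 >= 0.5 -> suspected-deepfake
print(flag_content({}, {"unnatural_blinking": 0.9, "lip_sync_offset": 0.8}))
```

Real classifiers are statistical and imperfect, which is why a legal disclosure mandate and public awareness matter alongside the software.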
By Jaspreet Bindra, Founder & MD, The Tech Whisperer Ltd, UK