
General Studies 3 >> Science & Technology

VOICE DEEPFAKES

1. Context

Several users of the social media platform 4chan used the "speech synthesis" and "voice cloning" service ElevenLabs to make voice deepfakes of celebrities such as Emma Watson, Joe Rogan, and Ben Shapiro. The deepfake audio clips made racist, abusive, and violent comments. Using deepfake voices to impersonate people without their consent is a serious concern that could have devastating consequences.

2. What are Voice Deepfakes?

A voice deepfake is a synthetic voice that closely mimics a real person's, accurately replicating the tonality, accent, cadence, and other unique characteristics of the target speaker. Such voice clones are generated using AI and substantial computing power, and producing a convincing clone can sometimes take weeks, according to Speechify, a text-to-speech conversion app.

3. How are voice deepfakes created?

  • Creating deepfakes requires a high-end computer with a powerful graphics card, or leased cloud computing power. Powerful hardware accelerates rendering, which can take hours, days, or even weeks depending on the process.
  • Besides specialised tools and software, generating deepfakes requires training data to feed the AI models, typically original recordings of the target person's voice.
  • The AI uses this data to render an authentic-sounding voice, which can then be made to say anything.
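The pipeline described above (collect recordings of the target, train a model on them, then synthesize new speech) can be sketched in miniature. The following toy Python sketch "clones" a signal by learning and reproducing simple statistics; every function here is an invented stand-in for the speaker-embedding and synthesis stages that real cloning systems use, not an actual cloning API.

```python
import random
import statistics

def extract_voice_profile(samples):
    # Stand-in for the "speaker embedding" a real cloning model
    # would learn from recordings of the target voice.
    return {"mean": statistics.mean(samples),
            "stdev": statistics.pstdev(samples)}

def synthesize(profile, n, seed=0):
    # Stand-in for the synthesis stage: generate new "audio"
    # samples that match the learned profile.
    rng = random.Random(seed)
    return [rng.gauss(profile["mean"], profile["stdev"]) for _ in range(n)]

# "Training data": original recordings of the target voice
# (here just random numbers standing in for audio samples).
rng = random.Random(42)
target_recordings = [rng.gauss(0.0, 0.3) for _ in range(2000)]

profile = extract_voice_profile(target_recordings)
cloned = synthesize(profile, 2000)
```

The point of the sketch is the data flow rather than the signal processing: the more (and cleaner) target recordings the model is fed, the more faithfully the synthetic output reproduces the target's characteristics.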

4. Tools used for Voice Cloning

Microsoft’s VALL-E, My Own Voice, Resemble, Descript, ReSpeecher, and iSpeech are some of the tools that can be used for voice cloning.
ReSpeecher is the software Lucasfilm used to recreate Luke Skywalker’s voice in The Mandalorian.

5. Threats arising from the use of Voice deepfakes

  • Attackers use the technology to defraud users, steal identities, and engage in other illegal activities such as phone scams and posting fake videos on social media platforms.
  • The use of voice deepfakes in filmmaking has also raised ethical concerns about the technology.
  • Clear recordings of people’s voices are becoming easier to obtain, through recorders, online interviews, and press conferences.
  • Voice-capture technology is also improving, making the data fed to AI models more accurate and the resulting deepfake voices more believable. This could lead to scarier situations, Speechify highlighted in its blog.

6. Ways to detect voice deepfakes

  • Detecting voice deepfakes requires highly advanced technologies, software, and hardware to break down speech patterns, background noise, and other elements.
  • Research labs use watermarks and blockchain technologies to detect deepfakes, but the technology designed to outsmart deepfake detectors is constantly evolving.
  • Programmes like Deeptrace help provide protection; Deeptrace uses a combination of antivirus and spam filters that monitor incoming media and quarantine suspicious content.
  • Last year, researchers at the University of Florida developed a technique to measure acoustic and fluid dynamic differences between original voice samples of humans and those generated synthetically by computers. They estimated the arrangement of the human vocal tract during speech generation and showed that deepfakes often model impossible or highly unlikely anatomical arrangements.
  • Callback functions can end suspicious calls and request an outbound call to the account owner for direct confirmation.
  • Multifactor authentication (MFA) and anti-fraud solutions can also reduce deepfake risks.
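The University of Florida technique above boils down to a consistency check: estimate physical parameters of the speaker from the audio, then flag values that no human vocal tract could produce. A minimal sketch of that idea follows; the feature, its bounds, and the function name are all invented for illustration, not taken from the researchers' actual method.

```python
# Illustrative bounds for a "physically plausible" vocal-tract
# feature. Real detectors estimate actual anatomical dimensions
# from the speech signal; these values and units are invented.
HUMAN_PLAUSIBLE_RANGE = (0.1, 0.5)

def looks_synthetic(estimated_feature, bounds=HUMAN_PLAUSIBLE_RANGE):
    # Flag a sample whose estimated feature falls outside the
    # range a human vocal tract could plausibly produce.
    low, high = bounds
    return not (low <= estimated_feature <= high)

# A value far outside the plausible range is flagged as synthetic;
# one inside the range passes the check.
suspect = looks_synthetic(0.9)
genuine = looks_synthetic(0.3)
```

The same pattern (estimate, then compare against a physically grounded range) is what makes this approach harder to evade than purely statistical detectors: a generator must not only sound right but also imply a plausible anatomy.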

For Prelims & Mains

For Prelims: Voice deepfakes, Speech synthesis, Voice cloning, Microsoft’s VALL-E, My Own Voice, Resemble, Descript, ReSpeecher, iSpeech, and Multifactor authentication (MFA).
For Mains: 1. What are Voice Deepfakes? Discuss the threats that are emanating from the use of Voice deepfakes.
 
Source: The Hindu