

Deepfake videos could 'spark' violent social unrest 

Experts in the field warn of the potential violence that deepfake technology could incite. The warning came during a hearing in the US House of Representatives on deepfake technology, which can create convincing videos of public figures. Public safety could be put at risk if the technology were adopted by those pushing "false conspiracies", said Clint Watts of the Foreign Policy Research Institute. The threat could be countered by making social media services liable for fake videos that users share, one witness suggested.

Call to action

The US House Intelligence Committee called witnesses to testify about the peril and promise of deepfake technology. The hearing came a day after Facebook was criticised for not removing a manipulated video that appeared to show Mark Zuckerberg credit a secretive organisation for its success. The video was created with deepfake software that makes it easy to generate fake videos using still images of a person.

During his testimony, Watts said AI-based deepfake software could become key tools for propagandists. "Those countries with the most advanced AI capabilities and unlimited access to large data troves will gain enormous advantages in information warfare," he said. "The circulation of deepfakes may incite physical mobilisations under false pretences, initiating public safety crises and sparking the outbreak of violence."

He pointed to the spate of false conspiracies proliferating via WhatsApp in India as an example of how bogus messages and media were already fuelling violence. "The spread of deepfake capabilities will only increase the frequency and intensity of these violent outbreaks," he continued.


Further warnings about the potential harm of deepfakes came from legal expert Prof Danielle Citron, from the University of Maryland. Prof Citron said deepfakes were already being used as a political tool and cited the case of investigative journalist Rana Ayyub, who in April 2018 was subjected to prolonged harassment after opponents created deepfake sex videos of her.

Deepfakes were "particularly troubling when they were provocative and destructive", said Prof Citron, adding that this was the type of content people were most likely to share, as it was often crafted to play on biases and prejudices.

Countering the spread of the fake videos was hard, she said, but it was perhaps time to revisit US legislation that gives immunity from prosecution to social media platforms no matter what users post. "We should condition the legal immunity to be based on reasonable content practices," she said. "It should not be a free pass."

For further reading around this issue, Patrick Meschenmoser wrote about the dangers of fake news and how limitations may affect crisis communications in the last edition of CRJ (14:2).


Reproduced under licence from BBC News © 2019 BBC

Image: lightwise|123rf 
