Twitter has promised to introduce a new policy to fight deepfakes, especially when they could "threaten someone's physical safety or lead to offline harm." The social network has announced that it's working on creating rules to address what it calls "synthetic and manipulated media" posted on its website. Photos, videos, and audio that have been significantly altered to fabricate events that never happened fall under that classification.
The company says it's taking this step because it needs to consider the potential damage deepfakes shared on Twitter can cause. Before the new rules go live, though, it will run a feedback period to give users the chance to help refine them.
More and more tech giants and social networks have been taking a stand against deepfakes and finding ways to combat them, since they can be used as a tool for disinformation campaigns and to create content that harms people's lives. Amazon recently joined Facebook's Deepfake Detection Challenge, which aims to create open-source tools that organizations and governments can use to spot altered media. Microsoft is also part of the initiative, along with MIT and the University of Oxford, among others. Twitter itself banned deepfake porn in 2018.
"We're always updating our rules based on how online behaviors change. We're working on a new policy to address synthetic and manipulated media on Twitter – but first we want to hear from you." — Twitter Safety (@TwitterSafety), October 21, 2019