Synthetic media targets politics ahead of U.S. presidential election

As the presidential election swings into high gear, the amount of synthetic media being produced to target politicians is soaring. According to a new report from brand protection startup CREOpoint, the number of manipulated videos shared online grew 20 times over the past year.

While celebrities and executives continue to be targets, the company said that 60% of the doctored videos it found took aim at politicians. The videos ranged from goofy content, such as a deepfake that placed President Trump into a scene from the movie “Independence Day,” to more insidious content designed to make former Vice President Joe Biden appear disoriented.

Because these videos use a wide range of techniques, from AI-driven deepfakes to more basic selective editing, they can be hard for platforms such as YouTube, TikTok, and Twitter to detect, even as these companies develop more powerful AI to scour their content. For that reason, CREOpoint CEO Jean-Claude Goldenstein said he expects the current presidential election to be remembered as the “Fake-Video Election.”

“There is a lot more than you think,” Goldenstein said. “And it’s alarming.”

In recent months, some social media companies have taken more public steps to identify, and in some cases remove, videos that have been doctored in some fashion. Twitter, for instance, applied its “manipulated media” label to a video shared by President Trump and to another shared by his social media team.

But overall, the surge of manipulated videos continues to overwhelm social platforms, Goldenstein said. While companies such as Google, Facebook, and Twitter say they are investing in AI and machine learning to combat this issue at scale, Goldenstein believes such algorithmic approaches are doomed to fail.

Goldenstein argued that algorithms cannot be fed enough information quickly enough to learn to spot fakes in time. In part, that’s because at the high end, the tools for creating deepfakes are advancing rapidly and becoming widely available. But the number of ways people manipulate videos is also expanding, compounding the challenge.

Such synthetic media includes simple tricks such as relabeling videos to give them a more sinister tone, or changing the playback speed to make a subject appear slow-witted or disoriented. Overall, CREOpoint found the number of such doctored videos has grown 20 times since the end of September 2019.

These videos are intended not only to discredit or humiliate their subjects, but also to undermine people’s trust in video overall. Goldenstein pointed to the case of a GOP congressional candidate who published a report insisting that the video of Minneapolis police killing George Floyd was a deepfake.

Goldenstein does think AI can play a role in this fight. CREOpoint based the report on its own work, which typically involves helping executives and brands protect their reputations by monitoring online content with tools such as text mining. The company also holds a patent for a system to “contain the spread of doctored political videos.”

CREOpoint uses AI to find domain experts in countless fields and adds them to a database. When it finds videos that have potentially been manipulated, it signals the relevant members of this network, who act like a SWAT team, reviewing the footage and identifying possible manipulations.
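As an illustration only, and not CREOpoint’s actual system, the routing step Goldenstein describes could be sketched roughly as follows; the Expert and FlaggedVideo structures, topic tags, and notification step are all hypothetical.

```python
# Illustrative sketch only -- not CREOpoint's patented system.
# All names (Expert, FlaggedVideo, route_to_experts) are hypothetical.
from dataclasses import dataclass, field


@dataclass
class Expert:
    """A vetted domain expert stored in a (hypothetical) expert database."""
    name: str
    domains: set[str] = field(default_factory=set)


@dataclass
class FlaggedVideo:
    """A video flagged as potentially manipulated, tagged with topic labels."""
    url: str
    topics: set[str] = field(default_factory=set)


def route_to_experts(video: FlaggedVideo, experts: list[Expert]) -> list[Expert]:
    """Return experts whose domains overlap the video's topics,
    i.e. the 'SWAT team' asked to review the footage."""
    return [expert for expert in experts if expert.domains & video.topics]


if __name__ == "__main__":
    roster = [
        Expert("forensic video analyst", {"video-forensics"}),
        Expert("election-law specialist", {"elections", "politics"}),
    ]
    clip = FlaggedVideo("https://example.com/clip", {"politics", "video-forensics"})
    for reviewer in route_to_experts(clip, roster):
        print(f"Notify {reviewer.name} to review {clip.url}")
```

The point of the sketch is simply the matching step: automated detection surfaces candidate videos, and human experts whose domains overlap the content are pulled in to adjudicate.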

Goldenstein argues that making better use of human expertise is a critical part of augmenting the work being done by AI and moderators on the various platforms.

“I’m concerned about what’s about to hit us in the coming weeks,” he said. “The technology to make these videos is growing much faster than the solutions.”
