Researchers from the University of Notre Dame are working on creating an early warning system using artificial intelligence (AI) that identifies manipulated images, deepfake videos and disinformation online.
In the last few years, there has been a rise in coordinated social media campaigns designed to spark violence, sow discord in society and threaten the integrity of democratic elections.
The new system uses content-based image retrieval and computer vision techniques to find political memes on social networks. Memes are easy to create and share, and political memes have been used both to encourage people to vote and to spread inaccurate information.
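The article does not describe the retrieval pipeline in detail, but content-based image retrieval is typically built on compact image signatures compared by distance. The sketch below is a hypothetical illustration rather than the Notre Dame team's actual code: it uses perceptual hashing (via the open-source Pillow and imagehash libraries) to group near-duplicate meme images, and the directory name, file pattern and distance threshold are assumptions.

```python
# Minimal sketch of content-based image retrieval via perceptual hashing.
# Illustrative only; the actual system's pipeline is not described in the
# article, and the paths and threshold below are assumptions.
from pathlib import Path
from PIL import Image
import imagehash


def index_images(image_dir):
    """Compute a perceptual hash for every JPEG image in a directory."""
    index = {}
    for path in Path(image_dir).glob("*.jpg"):
        with Image.open(path) as img:
            index[path.name] = imagehash.phash(img)
    return index


def find_near_duplicates(query_path, index, max_distance=8):
    """Return images whose hash is within `max_distance` bits of the query,
    i.e. likely re-shares or lightly edited variants of the same meme."""
    with Image.open(query_path) as img:
        query_hash = imagehash.phash(img)
    return [name for name, h in index.items()
            if query_hash - h <= max_distance]


if __name__ == "__main__":
    index = index_images("collected_memes")            # hypothetical directory
    matches = find_near_duplicates("query_meme.jpg", index)
    print(f"Found {len(matches)} near-duplicate memes:", matches)
```

Near-duplicate grouping of this kind lets analysts trace how a single meme spreads and mutates across accounts, which is the sort of signal a coordinated-campaign detector would aggregate.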
To start the development of the system, the team collected over two million images and content related to Indonesia’s 2019 general election from Twitter and Instagram. During this election, the left-leaning centrist candidate won the majority vote over the conservative populist candidate. This sparked a wave of violent protests that ended with eight people dead and hundreds injured.
The team found that during this election there were both spontaneous and coordinated campaigns intended to influence the election and incite violence. These campaigns used manipulated images carrying false claims, misrepresentations of incidents, memes, and fabricated news stories bearing the logos of real news sources.
The new system will be able to flag manipulated content and warn journalists and election monitors of potential fake-news threats in real time. It would also give users tailored options for monitoring the content they view.
The challenge is to determine an optimal way to scale up data ingestion and processing for quick turnaround. The system is still in the research and development phase.
A paper on this project was published in the Bulletin of the Atomic Scientists.