Scientists from Samara National Research University will create a web service that anyone can use to verify the authenticity of a digital photograph or video and detect signs of forgery. The software package, built on distortion-detection algorithms, will automatically find traces of changes made to a digital image or video sequence and determine the degree of reliability of the submitted material. The growing spread of deepfake* forgeries on the Internet makes the development of such a service especially relevant.
For the most part, modern deepfake technologies allow replacing a person's face in a video or putting words into their mouth that they never said. Such forgeries damage reputations and serve as a tool for blackmail and fraud. Many fake videos of famous people have been posted on the Internet, including, for example, former US President Barack Obama and Mark Zuckerberg.
"Basically, we are developing a tool to deal with fake news, which uses so-called deepfakes. By the end of this year, we plan to create a web service under the auspices of our university, with the help of which one will be able to verify the authenticity of certain images and videos. Anyone can send photos or videos to this site or provide a link, after which the web service will use neural networks to determine the reliability coefficient of the submitted material and, for example, detect the signs of forgery", - said Andrei Kuznetsov, Associate Professor at the Department of Geoinformatics and Information Security at the Samara University.
In their development, the Samara scientists have proposed a number of original methods and approaches: classical mathematical methods for processing digital images, solutions based on special local patterns and preprocessing mechanisms, and deep learning methods that allow tuning neural networks to detect individual artificial changes in images.
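As one illustration of the "local patterns" family of techniques mentioned above, the sketch below computes local binary pattern (LBP) histograms per image block; such histograms could feed a downstream forgery classifier. The block size and LBP parameters are illustrative assumptions, not the researchers' actual configuration.

```python
# Minimal sketch: per-block local binary pattern (LBP) features.
# Block size and LBP parameters are assumptions for illustration.
import numpy as np
from skimage.feature import local_binary_pattern

def lbp_histogram(block: np.ndarray, points: int = 8, radius: int = 1) -> np.ndarray:
    """Return a normalized LBP histogram for one grayscale block."""
    lbp = local_binary_pattern(block, points, radius, method="uniform")
    n_bins = points + 2  # the "uniform" method yields P + 2 distinct codes
    hist, _ = np.histogram(lbp, bins=n_bins, range=(0, n_bins))
    return hist / max(hist.sum(), 1)

def block_features(gray: np.ndarray, block_size: int = 64) -> list[np.ndarray]:
    """Slide a non-overlapping grid over the image and featurize each block."""
    h, w = gray.shape
    feats = []
    for y in range(0, h - block_size + 1, block_size):
        for x in range(0, w - block_size + 1, block_size):
            feats.append(lbp_histogram(gray[y:y + block_size, x:x + block_size]))
    return feats
```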
"When a digital photo is analyzed, the image is split into small squares for analysis, while a video is divided into frames, and in each frame, local areas are analyzed for the presence of distortions. During this, the contours are checked especially carefully, because if a person's face is replaced in the original video, then our goal is to find non-original local areas in the frame, to reveal a joint, a certain seam between the original and non-original images – no matter how hard you try hide it, they are still different. Of course, we will not restore the original image, but we will be able to determine the signs of a fake or calculate the trust-distrust coefficient for the video", - Andrei Kuznetsov stressed.
Earlier, employees of the Department of Geoinformatics and Information Security of Samara University developed software for checking the reliability of digital photographs and digitized documents. The creation of a web service for exposing photo and video fakes continues the scientists' work in this direction.
"Such a web service will be useful to people who constantly work with large amounts of video data, for example, journalists. As far as I know, there is no such project in Russia yet. This way, a person can quickly scan a video received, for example, from an unreliable source – from some Telegram channel, and if, say, a web service assigns a low trust coefficient to this video, then a journalist will not make the news, will not spread a fake. One might say that our project is an attempt to maintain a kind of information hygiene on the Internet", - the scientist noted.
For reference
*Deepfake (from "deep learning" and "fake") is a technology for synthesizing images and videos using artificial intelligence, in which new elements, for example, someone's face, are introduced into an original digital photo or video. Such fakes are created with generative adversarial networks (GANs): one neural network creates a fake image, gradually improving its quality, while a second network competes with the first by trying to distinguish the fake from the original; when fake elements are detected, the image is sent back to the first network for further alteration until the fake looks genuine.
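The adversarial setup described in the footnote can be illustrated with a minimal PyTorch sketch: a generator produces fakes, a discriminator tries to tell them from real samples, and each network's loss pushes the other to improve. The network sizes and data below are toy assumptions, not the architectures used to produce actual deepfakes.

```python
# Toy GAN training loop: generator vs. discriminator on random stand-in data.
import torch
import torch.nn as nn

latent_dim, data_dim = 16, 64
G = nn.Sequential(nn.Linear(latent_dim, 128), nn.ReLU(), nn.Linear(128, data_dim))
D = nn.Sequential(nn.Linear(data_dim, 128), nn.ReLU(), nn.Linear(128, 1))
opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
bce = nn.BCEWithLogitsLoss()

for step in range(1000):
    real = torch.randn(32, data_dim)  # stand-in for real training samples
    z = torch.randn(32, latent_dim)

    # Discriminator: label real samples 1 and generated samples 0.
    fake = G(z).detach()
    loss_d = bce(D(real), torch.ones(32, 1)) + bce(D(fake), torch.zeros(32, 1))
    opt_d.zero_grad(); loss_d.backward(); opt_d.step()

    # Generator: try to make the discriminator call its output real.
    loss_g = bce(D(G(z)), torch.ones(32, 1))
    opt_g.zero_grad(); loss_g.backward(); opt_g.step()
```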
Photo: Anar Movsumov