The recent revolution in generative models has brought with it the imminent danger of deepfakes: manipulated images and videos of unprecedented and ever-increasing realism. Even more worrying, while video forgery was once a slow, painstaking process reserved for experts, deepfake manipulation technologies are now streamlined for use by essentially anyone with the will to manipulate reality. Deepfakes pose a security threat to us all: to date, they have been able to mislead both face recognition systems and human observers.
This special issue will provide a forum for research addressing the assessment of media integrity. In particular, its focus will be on data-driven approaches using machine learning.
Lead Guest Editor
Antitza Dantcheva, Inria, France
Guest Editors
Abhijit Das, Birla Institute of Technology and Science, Pilani (BITS Pilani), India
Hu Han, Institute of Computing Technology, Chinese Academy of Sciences, China
Christian Rathgeb, Department of Computer Science, Hochschule Darmstadt, Germany
Naser Damer, Fraunhofer Institute for Computer Graphics Research IGD, Darmstadt, Germany
Luisa Verdoliva, University of Naples Federico II, Italy
Ruben Tolosana, Universidad Autonoma de Madrid, Madrid, Spain