My journey into AI/ML/DL and other areas of interest.

Fakes Get More Real πŸŽ₯

This is why I’m terrified of deepfakes! Then again, think of the potential for visual artistic mediums like movies and TV. But also think of the implications for politics. I find it fitting that they used so many political figures as examples, since this technology could majorly disrupt that arena. πŸ—³οΈ

My first concern was about detection, especially since this method seems to solve the blinking problem that SUNY’s detection method was able to capitalize on. I wondered whether this technique would generate noise and irregularities that could aid detection, and the error section of the video suggests that it does. Here is what the SIGGRAPH team notes in relation to my concerns:

Misuses: Unfortunately, besides the many positive and creative use cases, such technology could also be misused. For example, videos could be modified with malicious intent, for instance in a way which is disrespectful to the person in a video. Currently, the modified videos still exhibit many artifacts, which makes most forgeries easy to spot. It is hard to predict at what point in time such modified videos will be indistinguishable from real content to our human eyes. However, as we discuss below, even then modifications can still be detected by algorithms.

Implications: As researchers, it is our duty to show and discuss both the great application potential, but also the potential misuse of a new technology. We believe that all aspects of the capabilities of modern video modification approaches have to be openly discussed. We hope that the numerous demonstrations of our approach will also inspire people to think more critically about the video content they consume every day, especially if there is no proof of origin. We believe that the field of digital forensics should and will receive a lot more attention in the future to develop approaches that can automatically prove the authenticity of a video clip. This will lead to ever better approaches that can spot such modifications even if we humans might not be able to spot them with our own eyes (see comments below).

Detection: The recently presented systems demonstrate the need for ever improving fraud detection and watermarking algorithms. We believe that the field of digital forensics will receive a lot of attention in the future. Consequently, it is important to note that the detailed research and understanding of the algorithms and principles behind state-of-the-art video editing tools, as we conduct it, is also the key to develop technologies which enable the detection of their use. This question is also of great interest to us. The methods to detect video manipulations and the methods to perform video editing rest on very similar principles. In fact, in some sense the algorithm to detect the Deep Video Portraits modification is developed as part of the Deep Video Portraits algorithm. Our approach is based on a conditional generative adversarial network (cGAN) that consists of two subnetworks: a generator and a discriminator. These two networks are jointly trained based on opposing objectives. The goal of the generator is to produce videos that are indistinguishable from real images. On the other hand, the goal of the discriminator is to spot the synthetically generated video. During training, the aim is to maintain an equilibrium between both networks, i.e., the discriminator should only be able to win in half of the cases. Based on the natural competition between the two networks and their tight interplay, both networks become more sophisticated at their task.
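To make the generator/discriminator interplay the team describes more concrete, here is a minimal toy sketch of adversarial training in plain NumPy. This is not the Deep Video Portraits network (which is a conditional GAN over video frames); it is a deliberately tiny 1D example where a linear "generator" learns to imitate samples from a Gaussian, and a logistic-regression "discriminator" tries to tell real samples from fakes. All names and hyperparameters (`REAL_MEAN`, learning rate, step count) are my own illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Toy "real" data distribution the generator must imitate.
REAL_MEAN, REAL_STD = 4.0, 1.25

# Generator G(z) = a*z + b maps noise z ~ N(0, 1) to fake samples.
a, b = 1.0, 0.0
# Discriminator D(x) = sigmoid(w*x + c) outputs P(x is real).
w, c = 0.1, 0.0

lr, batch = 0.05, 64
for step in range(2000):
    real = rng.normal(REAL_MEAN, REAL_STD, batch)
    z = rng.normal(0.0, 1.0, batch)
    fake = a * z + b

    # Discriminator update: ascend log D(real) + log(1 - D(fake)),
    # i.e. get better at spotting the synthetic samples.
    d_real = sigmoid(w * real + c)
    d_fake = sigmoid(w * fake + c)
    w += lr * (np.mean((1 - d_real) * real) - np.mean(d_fake * fake))
    c += lr * (np.mean(1 - d_real) - np.mean(d_fake))

    # Generator update: ascend log D(G(z)) (non-saturating loss),
    # i.e. get better at fooling the discriminator.
    fake = a * z + b
    d_fake = sigmoid(w * fake + c)
    g_common = (1 - d_fake) * w   # d/dG of log D(G), chain rule follows
    a += lr * np.mean(g_common * z)
    b += lr * np.mean(g_common)

print(f"generator mean after training: {b:.2f} (target {REAL_MEAN})")
```

Because the two objectives directly oppose each other, the discriminator's progress supplies the training signal for the generator, and vice versa; this is also why, as the team notes, the know-how for building such forgeries and the know-how for detecting them rest on the same principles. At the equilibrium they describe, the discriminator is right only about half the time.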

Source: Stanford
