DeepFakes Get More Realistic 😖

Remember back when I said I was terrified about deepfakes? Well, it’s not getting any better. 😟

Apparently researchers at Carnegie Mellon and Facebook’s Reality Lab decided there was nothing to worry about and that the method for making them needed to be better. So they gave us Recycle-GAN. ♻️

We introduce a data-driven approach for unsupervised video retargeting that translates content from one domain to another while preserving the style native to a domain, i.e., if contents of John Oliver’s speech were to be transferred to Stephen Colbert, then the generated content/speech should be in Stephen Colbert’s style.

Fantastic. Just what we need. A system that transfers content while maintaining stylistic integrity, all without needing a great deal of tweaking or input to make it happen. 😵
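
For the technically curious: as I read the paper, the core trick is a temporal “recycle” consistency loss layered on top of the usual adversarial losses. Here’s a minimal sketch of that loss in PyTorch — the function names, the two-frame predictor input, and the choice of L1 norm are my assumptions, not the authors’ exact formulation:

```python
import torch
import torch.nn.functional as F

def recycle_loss(x_t, x_t1, x_t2, G_XY, G_YX, P_Y):
    """Sketch of a Recycle-GAN-style recycle consistency loss.

    G_XY: generator mapping domain X -> Y
    G_YX: generator mapping domain Y -> X
    P_Y:  temporal predictor in Y (next frame from the previous two)
    """
    # Translate two consecutive source frames into the target domain.
    y_t, y_t1 = G_XY(x_t), G_XY(x_t1)
    # Predict the next target-domain frame from the translated pair.
    y_t2_pred = P_Y(torch.cat([y_t, y_t1], dim=1))
    # Map the prediction back to the source domain and compare it
    # against the real third frame.
    return F.l1_loss(G_YX(y_t2_pred), x_t2)
```

Matching frame-to-frame dynamics rather than individual frames is, as I understand it, what lets the system learn from unpaired video while keeping the output in the target’s style.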

Also, why is Facebook helping to make fake content easier to create? Don’t they have enough problems on this front already? 🤔

Src: Carnegie Mellon

Intelliwhat? 🤔

I haven’t really bought into the fear of superintelligent AIs, generally believing those fears are driven by a mix of ego and ignorance at this point. So I love this quote from a recent Scientific American post on the topic: 🖊

For all we know, human intelligence is already close to the universal maximum possible. I suspect the answers can only come from experiment, or the development of that fundamental theory of intelligence.

I probably also like this post because the author thinks the bigger concern is the potential for super convincing fakes of all kinds. Agreed! The post is entitled “The Erosion of Reality”, so yeah, that’s the fear. 👹

Src: Scientific American

Fakes Get More Real 🎥

This is why I’m terrified of DeepFakes! But also, think of the potential for visual artistic media like movies and TV. But also, think of the implications for politics. I find it fitting that they used a lot of political figures as examples, since this could majorly disrupt the field. 🗳️

My first concern was detection, especially since this method seems to solve the blinking problem that SUNY’s detection method was able to capitalize on. I wondered whether this technique would generate noise and irregularities that could aid detection, which the error section of the video suggests is the case. Here is what the SIGGRAPH team notes in relation to my concerns:

Misuses: Unfortunately, besides the many positive and creative use cases, such technology could also be misused. For example, videos could be modified with malicious intent, for instance in a way which is disrespectful to the person in a video. Currently, the modified videos still exhibit many artifacts, which makes most forgeries easy to spot. It is hard to predict at what point in time such modified videos will be indistinguishable from real content to our human eyes. However, as we discuss below, even then modifications can still be detected by algorithms.

Implications: As researchers, it is our duty to show and discuss both the great application potential, but also the potential misuse of a new technology. We believe that all aspects of the capabilities of modern video modification approaches have to be openly discussed. We hope that the numerous demonstrations of our approach will also inspire people to think more critically about the video content they consume every day, especially if there is no proof of origin. We believe that the field of digital forensics should and will receive a lot more attention in the future to develop approaches that can automatically prove the authenticity of a video clip. This will lead to ever better approaches that can spot such modifications even if we humans might not be able to spot them with our own eyes (see comments below).

Detection: The recently presented systems demonstrate the need for ever improving fraud detection and watermarking algorithms. We believe that the field of digital forensics will receive a lot of attention in the future. Consequently, it is important to note that the detailed research and understanding of the algorithms and principles behind state-of-the-art video editing tools, as we conduct it, is also the key to develop technologies which enable the detection of their use. This question is also of great interest to us. The methods to detect video manipulations and the methods to perform video editing rest on very similar principles. In fact, in some sense the algorithm to detect the Deep Video Portraits modification is developed as part of the Deep Video Portraits algorithm. Our approach is based on a conditional generative adversarial network (cGAN) that consists of two subnetworks: a generator and a discriminator. These two networks are jointly trained based on opposing objectives. The goal of the generator is to produce videos that are indistinguishable from real images. On the other hand, the goal of the discriminator is to spot the synthetically generated video. During training, the aim is to maintain an equilibrium between both networks, i.e., the discriminator should only be able to win in half of the cases. Based on the natural competition between the two networks and their tight interplay, both networks become more sophisticated at their task.

Src: Stanford
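
The cGAN setup they describe is the standard adversarial recipe: two networks trained against each other until the discriminator wins only about half the time. For anyone who wants the mechanics, here’s a toy sketch in PyTorch — tiny MLPs and random tensors stand in for their actual video networks, and all the names and dimensions are mine:

```python
import torch
import torch.nn as nn

# Toy stand-ins for the generator and discriminator from the quote.
G = nn.Sequential(nn.Linear(16, 64), nn.ReLU(), nn.Linear(64, 32))
D = nn.Sequential(nn.Linear(32, 64), nn.ReLU(), nn.Linear(64, 1))

opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
bce = nn.BCEWithLogitsLoss()

for step in range(1000):
    real = torch.randn(8, 32)   # placeholder for real video features
    noise = torch.randn(8, 16)

    # Discriminator step: learn to tell real samples from generated ones.
    fake = G(noise).detach()
    d_loss = bce(D(real), torch.ones(8, 1)) + bce(D(fake), torch.zeros(8, 1))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # Generator step: learn to make the discriminator label fakes as real.
    g_loss = bce(D(G(noise)), torch.ones(8, 1))
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()
```

Note the upside they point out: the same discriminator that drives training is, in effect, a forgery detector — which is why they argue the editing tools and the detection tools advance together.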

Oz the Great and Artificial 🔮

The newest hotness in the AI world is called the “Wizard of Oz design technique”. ❇️

Actually, this refers to when tech companies claim to be using AI but are really using humans. Could be to seem cool and hip. Could be to trick investors until they can actually develop the tech. Ultimately, it’s not cool. 🚫😎

I’d been wondering where the talent and knowledge for all these AI-powered features, companies, and startups had suddenly come from. This AI spring is still relatively young, yet it suddenly seemed like everyone was using AI for everything. It all makes sense now. 💡

Src: The Guardian

Deepfakes: An Overview 📽

Real talk, Deepfakes terrify me. 🙀

The implications of fake video that looks real to the human eye have “dystopian sci-fi” written all over them. If you thought the last election was a circus, wait until the first Deepfakes election. 🎪

A few outcomes that come to mind:

  1. Political mudslinging reaches its zenith, and societal upheaval follows in its wake as no one knows who or what they can trust and our partisan rhetoric devolves even further.
  2. Famous people (actors were the first to pop into my head) can license their likeness, visually and audibly, and make money from acting without ever being on set as their licensed lookalike is projected onto another actor’s performance.
  3. Those VR pop stars will be taken to another level.
  4. Hollywood doubles down on its “don’t try anything new, it’s only sequels or reboots” strategy, reviving deceased actors for new performances (the music industry has been doing this forever) and cranking out even more on the cheap.

This post is a great overview of the topic, but I noticed a glaring oversight in the tech section: it doesn’t matter whether detection methods advance enough to be useful if they only catch fakes after they’ve gone viral. A retraction or correction to a fake news item almost never gets the same reach as the fake itself. I believe we see video as the last bastion of truth in media because we don’t have many examples of convincing fake footage, either manufactured or edited. That’s about to change.

Src: Jensen Price on Medium