Wu-Tang Was Right

Cash does rule everything around us. 💰

There seems to be a trend amongst Chinese tech companies of deflecting questions about the societal implications of their tech by shrugging and talking dollar signs. 💲💲💲

Exhibit A:

“We’re not really thinking very far ahead, you know, whether we’re having some conflicts with humans, those kinds of things,” [SenseTime co-founder Tang Xiao’ou] said. “We’re just trying to make money.”

Src: Bloomberg

Exhibit B:

According to Huang Yongzhen, the CEO of Watrix: “You don’t need people’s cooperation for us to be able to recognize their identity. Gait analysis can’t be fooled by simply limping, walking with splayed feet or hunching over, because we’re analyzing all the features of an entire body.”

Src: Outer Places

Exhibit C:

“We don’t support the government,” [Su Qingfeng, the head of ZTE’s Venezuela unit,] said. “We are just developing our market.”

Src: Reuters

I find it interesting that state-supported companies in a Communist country keep using Capitalism as a shield. 🛡️


SenseTime is Watching 👀

Computer vision is the engine behind China’s Panopticon, and SenseTime is the engine behind many of those computer vision capabilities. A lot of what they’ve developed has a dystopian feel to it: hidden cameras scanning faces (among other things) and triggering the appropriate actions via AI. 📹

“That’s really how they see future interactions,” says Jean-François Gagné, who runs Canadian startup Element AI Inc. “You don’t need to log in to your computer, you don’t need to get a boarding pass, you don’t need to do anything anywhere. You’re just recognized.”

Not gonna lie, that does sound pretty cool. No more remembering passwords or tickets, no panicked pocket checking. But that glosses over the tyrannical implications of the tech as well, like freezing a dissident out of everything based on their face. 👤

Src: Bloomberg

They’re Making a List, And Checking it Lots 📇

Oh yay, China is exporting its panopticon. 🤐

First up? Venezuela. The land of oil, soaring inflation, an imploding economy, and a leader who’s super into overt political intimidation. 🔨

It can’t be that bad, right? 🤷

“What we saw in China changed everything,” said the member of the Venezuelan delegation, technical advisor Anthony Daquin. His initial amazement, he said, gradually turned to fear that such a system could lead to abuses of privacy by Venezuela’s government. “They were looking to have citizen control.”

Uh, maybe it can. 😟

This holiday season’s must-have gift? A build-your-own-authoritarian-regime kit. 🎁

Src: Reuters

China’s Social Submission, er…Scoring System

There is a lot of focus on the China vs. USA space race happening in AI right now (at least in my world there is; I’m very interested in the topic). Most of it revolves around spending, governmental support, talent, etc. But maybe the most important aspect is what the implications of either country winning would be, if there truly can be only one winner in this field. 🏎️

China’s social scoring system, still in its infancy, is terrifying. Of course, that is an opinion formed by a different way of life and cultural experience. But still, it has dystopian sci-fi future written all over it. 😈

A network of 220 million cameras outfitted with facial recognition, body scanning, and geo-tracking. And this insane info net will be paired with every citizen’s digital footprint. Everything is compiled to create a social credit score of sorts that is updated in real time and determines how easily you can interact with society and live your life. Piss the government off and become an outcast with no options. Dear China, Philip K. Dick called, he’d like his dystopia back. 📚

There’s no guarantee that this form of digital dictatorship will be exported on a mass scale (you know it’ll be exported at some scale) if China were to win the race, but it’s a chilling possibility. A lot of ink is spilled on the potential for a robot uprising and AI taking over, but the misuse of AI by human actors is far more relevant and just as fraught. We’ve been our own biggest enemy for centuries; why would that suddenly change now? 🤔

Src: ABC News Australia

DeepFakes Get More Realistic 😖

Remember back when I said I was terrified about deepfakes? Well, it’s not getting any better. 😟

Apparently researchers at Carnegie Mellon and Facebook’s Reality Lab decided there was nothing to worry about and that the method for making them needed to be better. So they gave us Recycle-GAN. ♻️

We introduce a data-driven approach for unsupervised video retargeting that translates content from one domain to another while preserving the style native to a domain, i.e., if contents of John Oliver’s speech were to be transferred to Stephen Colbert, then the generated content/speech should be in Stephen Colbert’s style.

Fantastic. Just what we need. A system that transfers content while maintaining stylistic integrity all while not needing a great deal of tweaking/input to make it happen. 😵
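For the curious, the “recycle” in Recycle-GAN is a consistency loss that chains the two cross-domain generators with a temporal predictor, which is how it learns from unpaired video. Here’s a rough PyTorch sketch of that idea; the tiny linear networks are stand-ins I made up for illustration (the paper’s predictor also conditions on a window of past frames, not just one):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Toy stand-ins for the paper's networks. The real generators and
# temporal predictors are convolutional video models.
g_xy = nn.Linear(64, 64)  # translate a frame from domain X (e.g., Oliver) to Y (e.g., Colbert)
g_yx = nn.Linear(64, 64)  # translate a frame from domain Y back to X
p_y = nn.Linear(64, 64)   # predict the next frame within domain Y

def recycle_loss(x_t, x_t1):
    """Recycle consistency: translate frame t into the target domain,
    predict the *next* target-domain frame, translate that back, and
    compare with the real next source frame. No paired data needed."""
    y_t = g_xy(x_t)            # frame t in domain Y
    y_t1_hat = p_y(y_t)        # predicted frame t+1 in domain Y
    x_t1_hat = g_yx(y_t1_hat)  # prediction mapped back to domain X
    return F.l1_loss(x_t1_hat, x_t1)

# Two consecutive "frames" as flat feature vectors, batch of 1.
x_t, x_t1 = torch.randn(1, 64), torch.randn(1, 64)
loss = recycle_loss(x_t, x_t1)  # combined with the usual GAN losses during training
```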

Also, why is Facebook helping to make fake content easier to create? Don’t they have enough problems on this front already? 🤔

Src: Carnegie Mellon

The Arms Race Begins 🔫

China is developing autonomous AI-powered attack submarines. So this should be fun. Here are a couple of totally not terrifying things about them. 🙁

“The AI has no soul. It is perfect for this kind of job,” said Lin Yang, Chief Scientist on the project. “[An AI sub] can be instructed to take down a nuclear-powered submarine or other high-value targets. It can even perform a kamikaze strike.”

Yup, not scary at all. 😨

It’s the decision-making that will cause the most concern as the AI is being designed not to seek input during the course of a mission.

Totally not terrifying. 😰

Src: AI News

Intelliwhat? 🤔

I haven’t really bought into the fear of superintelligent AIs, generally believing such fears are driven by a mix of ego and ignorance at this point. So I love this quote from a recent Scientific American post on the topic: 🖊

For all we know, human intelligence is already close to the universal maximum possible. I suspect the answers can only come from experiment, or the development of that fundamental theory of intelligence.

I probably also like this post because the author thinks the bigger concern is the potential for super convincing fakes of all kinds. Agreed! The post is entitled “The Erosion of Reality”, so yeah, that’s the fear. 👹

Src: Scientific American

Fakes Get More Real 🎥

This is why I’m terrified of DeepFakes! But also, think of the potential for visual artistic mediums like movies and TV. But also, think of the implications for politics. I find it fitting that they used a lot of political figures as examples since this could majorly disrupt the field. 🗳️

My first concern was detection, especially since this method seems to solve the blinking problem that SUNY’s detection method was able to capitalize on. I wondered if this technique would generate noise and irregularities that would aid in detection, which the error section of the video suggests is the case. Here is what the SIGGRAPH team notes in relation to my concerns:

Misuses: Unfortunately, besides the many positive and creative use cases, such technology could also be misused. For example, videos could be modified with malicious intent, for instance in a way which is disrespectful to the person in a video. Currently, the modified videos still exhibit many artifacts, which makes most forgeries easy to spot. It is hard to predict at what point in time such modified videos will be indistinguishable from real content to our human eyes. However, as we discuss below, even then modifications can still be detected by algorithms.

Implications: As researchers, it is our duty to show and discuss both the great application potential, but also the potential misuse of a new technology. We believe that all aspects of the capabilities of modern video modification approaches have to be openly discussed. We hope that the numerous demonstrations of our approach will also inspire people to think more critically about the video content they consume every day, especially if there is no proof of origin. We believe that the field of digital forensics should and will receive a lot more attention in the future to develop approaches that can automatically prove the authenticity of a video clip. This will lead to ever better approaches that can spot such modifications even if we humans might not be able to spot them with our own eyes (see comments below).

Detection: The recently presented systems demonstrate the need for ever improving fraud detection and watermarking algorithms. We believe that the field of digital forensics will receive a lot of attention in the future. Consequently, it is important to note that the detailed research and understanding of the algorithms and principles behind state-of-the-art video editing tools, as we conduct it, is also the key to develop technologies which enable the detection of their use. This question is also of great interest to us. The methods to detect video manipulations and the methods to perform video editing rest on very similar principles. In fact, in some sense the algorithm to detect the Deep Video Portraits modification is developed as part of the Deep Video Portraits algorithm. Our approach is based on a conditional generative adversarial network (cGAN) that consists of two subnetworks: a generator and a discriminator. These two networks are jointly trained based on opposing objectives. The goal of the generator is to produce videos that are indistinguishable from real images. On the other hand, the goal of the discriminator is to spot the synthetically generated video. During training, the aim is to maintain an equilibrium between both networks, i.e., the discriminator should only be able to win in half of the cases. Based on the natural competition between the two networks and their tight interplay, both networks become more sophisticated at their task.
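That detection bit is worth dwelling on: the forgery detector is, in effect, trained alongside the forger. Here’s a toy PyTorch sketch of the adversarial loop the team describes; the networks and flat-vector “frames” are my own simplifications, not the Deep Video Portraits code:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Toy stand-ins: the real generator renders a video frame from a
# conditioning input (head pose, expression, etc.); the discriminator
# judges (condition, frame) pairs as real or synthesized.
generator = nn.Linear(32, 64)
discriminator = nn.Linear(32 + 64, 1)
g_opt = torch.optim.Adam(generator.parameters(), lr=2e-4)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=2e-4)

def d_score(cond, frame):
    return discriminator(torch.cat([cond, frame], dim=1))

def train_step(cond, real_frame):
    ones = torch.ones(cond.size(0), 1)
    zeros = torch.zeros(cond.size(0), 1)

    # Discriminator turn: call real frames real, synthesized frames fake.
    fake_frame = generator(cond).detach()
    d_loss = (F.binary_cross_entropy_with_logits(d_score(cond, real_frame), ones)
              + F.binary_cross_entropy_with_logits(d_score(cond, fake_frame), zeros))
    d_opt.zero_grad(); d_loss.backward(); d_opt.step()

    # Generator turn: try to fool the discriminator.
    g_loss = F.binary_cross_entropy_with_logits(d_score(cond, generator(cond)), ones)
    g_opt.zero_grad(); g_loss.backward(); g_opt.step()
    return d_loss.item(), g_loss.item()

# At the equilibrium the team describes, the discriminator is right only
# about half the time -- and that same discriminator is the seed of a
# forgery detector.
```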

Src: Stanford

Demand a Recount: DeepFake Edition 🗳

I’m not the only person worried about the potential impact of Deepfakes on politics (not that I claimed to be or thought I was). Apparently there is a Twitter wager about when a DeepFake political video will hit 2 million views before getting debunked. 🐦

I had mostly been thinking about the potential of these tools to be used like social media ads were in the last election. That is, until I read this quote. 👇

I think people who are just kind of having fun are, in aggregate, more dangerous than individual bad actors.

The pool of people who would use these just for fun, to cause confusion and mayhem, is a lot larger than the pool looking to use them specifically for political instability or influence. And the true danger is in viewers seeing and believing a fake, because very few will likely see or care about the eventual debunking. 🚫

Deepfake videos could exploit modern societies split by partisanship into echo chambers where information—authentic or not—tends to reinforce preexisting beliefs.

Src: IEEE