Deepfake Detector 🔦

Detecting deepfakes is a problem garnering a lot of interest and effort in an ever upward-ratcheting race between the fakers and the breakers. A new approach being taken by at least two startups is to verify and record an image or video at the moment of creation to benchmark future versions against. 📸

This is probably the best method to truly thwart fakes, since it checks for modifications against a verified original rather than trying to spot signs of alteration in the image or video in question. The latter method is an endless footrace between the two competing interests for better methods; the former is a matter of widespread adoption. 🌎🌍🌏
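The core mechanic is simple to sketch. Assuming these products work by fingerprinting media at creation time (the article doesn’t detail their actual implementations), a toy version might look like this; real systems would presumably also sign and timestamp the record:

```python
# A minimal sketch of the verify-at-capture idea: store a cryptographic
# hash of the media at creation time, then compare any later copy
# against that record. The byte strings below are placeholders.
import hashlib

def fingerprint(data: bytes) -> str:
    """Return a SHA-256 digest identifying this exact byte sequence."""
    return hashlib.sha256(data).hexdigest()

# At capture: record the fingerprint in a trusted log
original = b"raw image bytes at capture time"
recorded = fingerprint(original)

# Later: an unmodified copy matches, any edit does not
later_copy = b"raw image bytes at capture time"
tampered = b"raw image bytes at capture time, retouched"

assert fingerprint(later_copy) == recorded
assert fingerprint(tampered) != recorded
```

Note that a plain hash only proves a file is byte-identical to the original; surviving recompression or resizing would need something fancier, which is presumably part of what these startups are building.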

Now that’s not to say that adoption is a trivial hurdle, but it’s a one-time hurdle. The key will be getting this kind of tech baked into the devices, platforms, and apps we are already using instead of requiring people to add more to their home screens and habits, a problem and solution readily voiced by the companies currently operating in this field. I’m very interested to see how this approach progresses and hope it can become the standard of our digital camera era. 🤞

Src: MIT Tech Review

Ghosts as Ethical Proxies 👻

Proof that the data used to train systems can impact the ethical fallout of their performance, and potentially the beginning of us getting a look under the hood of these still-mysterious systems. I am interested to see how this approach will be applied to non-toy scenarios. 📡

Src: Fast Company

Hey, Who’s Driving that Trolley? 🚋

It is interesting to think about how cultural differences will impact reactions to some AI decisions and could make it difficult to scale a product without localizing it. 🤔

A study by the MIT Media Lab collected global input on variations of the classic ethical dilemma thought experiment, the trolley problem, and found interesting distinctions between cultures. This could help AI developers work through ethics and bias issues, especially in the autonomous vehicle space. But the study noted an important caveat: this data isn’t a requirement or even a suggestion, it is just an input for consideration. Problematic trends shouldn’t be perpetuated in software. 🚫

Src: MIT Tech Review

The Spy Who Doctored You 👩‍⚕️

So many mixed emotions on this bit of news. Place a device in your small apartment/home and let it monitor your health via electromagnetic disturbances, even through walls. 😨

On one hand, it’s really cool to think of the positive impacts this could have: uncovering trends, monitoring habits, divorcing the data collection from the feebleness of human memory to put a device on or charge it. And then there are the home automation aspects it could be adapted for, the truly immersive smart home hub. 💡

On the other hand, the surveillance possibilities and sketchy corporate uses beyond the benign health tracking are rather terrifying. I could see China undertaking a mass rollout of devices like these to augment its camera and digital tracking network. Just as I could see insurance companies requiring the use of these devices to issue policies, and using the data to “personalize” pricing in real time, and not to the benefit of the customer. 🙀

These are exciting and terrifying times. 🔮

Src: MIT Tech Review

Your Phone, Your Clone? 📱

Essential Products is working on a phone that is largely controlled by voice commands and has an AI built in that is supposed to become a digital version of you in order to respond to messages for you. 💬

I think this sort of functionality and interaction is a big part of the future, but why a phone? 🤔

Why try to execute a paradigm shift by working in a known form factor that pits you against Apple and Android? If Amazon and Facebook both failed to launch a phone, why should an interface-altering startup succeed? Why not a watch and/or earbuds? Why not a screenless device that acts as a phone add-on to start? While I applaud the message, I question the medium. ❓

Src: The Verge

Thinking in part informed by Ben Thompson at Stratechery.

AI Could Create A Defense Dark Horse 🐎

A lot of focus is placed on the US-China AI space race (guilty), but the nature of AI could make for a surprise victor. Or at least a leveling of the playing field. 🚜

There is a risk that the United States, like many leading powers in the past, could take an excessively cautious approach to the adoption of AI capabilities because it currently feels secure in its conventional military superiority.

I noticed an interesting note in the piece: arms regulations, by and large, aren’t placed on useful defense technologies that are easily spread, like tanks and jets (“easily spread” is relative in this case). Compare that to nukes, which are heavily regulated but hard to manufacture anyway. 🏭

AI is not subject to the same manufacturing difficulties and is far more broadly useful. It is also difficult to draw a clear line between commercial and military uses. All of this creates a scenario that will be tough to regulate, with nearly all governments incentivized to take a shot. Interesting times ahead. 🔮

Src: Foreign Policy

Will Robots Understand Value? 💹

A while back I wrote about how I didn’t think robots would become the new consumers in capitalism. Turns out I’m not the only one. 👬

This piece scratches my economics itch in a lot of ways, but I think the heart of it is the fact that we typically believe the economy/market/capitalism operates like a rational machine and not an organism reacting to the wants and desires of a collection of irrational flesh bags. 👥

But the threat is not real for the simple reason that the efficiency of production is not the problem that economy tries to solve. The actual problem is the use of scarce means to produce want satisfaction. Both means and ends are valued subjectively. Robots do not value.

This, once again, gets to the core of my AI belief system, that we shouldn’t try to recreate human brains in silicon or assume that AGI or superintelligence will mimic humanity’s actions and desires. It just seems like egoism disguised as science. ⚗️

I want to include two quotes pertaining to value that I really liked in this piece. I think they are often forgotten or misunderstood. 💱

The natural resource is the same, but the economic resource – the value of it – was born with the inventions. Indeed, oil became useful in engines, because those engines satisfy consumers’ wants. The value in oil is not its molecular structure, but how it is being used to satisfy wants.

A good, sold in a market, is not its physical appearance, but the service it provides consumers in their attempts to satisfy wants. In other words, a good provides use value. And value is always in the eyes of the user. The value of any means derives from its contribution to a valuable economic good.

For further reading that provides another angle on why I don’t think robots and AIs will just slip into the existing capitalism and perpetuate it, check out this piece by Umair Haque.

Src: Mises Institute

China vs. the US: Round ? 🤷‍♀️

In the continuing narrative that is the space race between the US and China in the realm of AI, we get an entry on what the US can learn from China. 🇺🇸🇨🇳

It boils down to the two countries excelling at the usual: the US creates visionary ideas and China puts them into production. China has a massive treasure trove of data, a true survival-of-the-fittest business environment, and a highly involved (controlling) government. 🔬

China is also further along the tech adoption curve, just look at WhatsApp. It’s hard for tourists in some areas because locals rely so heavily on digital payment platforms. China’s approach has its drawbacks, but it’s hard to say the country isn’t more all-in on AI than any other. 💰

Ultimately, if the US is actually competing with China it needs to take an AI-first approach with buy in from all levels. And it needs to productionize ideas, not just produce them. ⚙️

Src: New York Times

China’s Social Submission, er…Scoring System

There is a lot of focus on the China vs. USA space race happening in AI right now (at least in my world there is, I’m very interested in the topic). Most of it revolves around spending, governmental support, talent, etc. But maybe the most important aspect is what the implications of either country winning would be, if there truly can be only one winner in this field. 🏎️

China’s social scoring system, still in its infancy, is terrifying. Of course, that is an opinion from a different way of life and cultural experience. But still, it has dystopian sci-fi future written all over it. 😈

A network of 220 million cameras outfitted with facial recognition, body scanning, and geo-tracking. And this insane info net will be paired with every citizen’s digital footprint. Everything is compiled to create a social credit score of sorts that is updated in real time and determines how easily you can interact with society and live your life. Piss the government off and become an outcast with no options. Dear China, Philip K. Dick called, he’d like his dystopia back. 📚

It’s no guarantee that this form of digital dictatorship will be exported on a mass scale (you know it’ll be exported at some scale) if China were to win the race, but it’s a chilling possibility. A lot of ink is spilled talking about the potential for a robot uprising and AI taking over, but the misuse of AI by human actors is far more relevant and just as fraught. We’ve been our own biggest enemy for centuries, why would that suddenly change now? 🤔

Src: ABC News Australia

A Brief History of AI: A Timeline 🗓

1943: groundwork for artificial neural networks laid in a paper by Warren Sturgis McCulloch and Walter Pitts, “A Logical Calculus of the Ideas Immanent in Nervous Activity”. 📃 [1]

1950: Alan Turing publishes the paper “Computing Machinery and Intelligence” which, amongst other things, establishes the Turing Test. 📝 [6]

1951: Marvin Minsky and Dean Edmonds design the first neural net machine (machine, not computer) that navigated a maze like a rat. It was called SNARC. 🐀 [1]

1952: Arthur Samuel implements a computer program that can play checkers against a human; it’s the first AI program to run in the US. 💾 [2]

1956: the Dartmouth Summer Research Project on Artificial Intelligence conference is held, hosted by Minsky and John McCarthy. This also marks the coining of the term “artificial intelligence”. 🤖 [6][7]

1956: Allen Newell, Cliff Shaw, and Herbert Simon present the Logic Theorist at the above-mentioned conference. This program attempted to recreate human decision making. 🤔 [6]

1957: the Perceptron, the first recreation of neurological principles in hardware, is invented by Frank Rosenblatt. 🧠 [1]

1959: Samuel uses the phrase “machine learning” for the first time, in the title of his paper “Some Studies in Machine Learning Using the Game of Checkers”. 📃 [2]

1960: Donald Michie builds a tic-tac-toe playing “computer” out of matchbooks. It utilized reinforcement learning and was called MENACE: Matchbox Educable Noughts And Crosses Engine. ❌⭕️ [2]

1961: Samuel’s program beats a human checkers champion. 🏆 [2]

1965: Joseph Weizenbaum builds ELIZA, one of the first chatbots. 💬 [7]

1969: Minsky and Seymour Papert publish the book Perceptrons, which analyzed the capabilities and limitations of networks of perceptrons. 📚 [1]

1969: first AI conference held, the International Joint Conference on Artificial Intelligence. 👥 [3]

1970: Seppo Linnainmaa publishes the reverse mode of automatic differentiation, the basis of backpropagation (though it wasn’t known by that name yet). 📝 [3]

1986: David Rumelhart, Geoffrey Hinton, and Ronald J. Williams publish the paper that unveils and popularizes modern backpropagation. 📃 [3]

1997: IBM’s Deep Blue beats chess world champion Garry Kasparov. 🏆 [5]

1999: the MNIST data set is published, a collection of handwritten digits from 0 to 9. ✍️ [5]

2012: GPUs used to win an ImageNet contest, becoming the gold standard for AI hardware. 🏅 [4] (more)
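Several of the entries above (the Perceptron, MENACE, backprop) describe learning rules. As a minimal illustration of the oldest one, here is a sketch of Rosenblatt’s perceptron update rule in modern Python, trained on the logical AND function; the data, learning rate, and epoch count are illustrative choices, not historical details of the 1957 hardware.

```python
# A perceptron: a threshold unit whose weights are nudged toward the
# target whenever it misclassifies an input.

def train_perceptron(samples, epochs=20, lr=0.1):
    """Learn weights and bias for a binary threshold unit."""
    w = [0.0, 0.0]
    b = 0.0
    for _ in range(epochs):
        for (x1, x2), target in samples:
            # Threshold activation: fire if the weighted sum exceeds 0
            pred = 1 if (w[0] * x1 + w[1] * x2 + b) > 0 else 0
            error = target - pred
            # Perceptron rule: adjust weights only on mistakes
            w[0] += lr * error * x1
            w[1] += lr * error * x2
            b += lr * error
    return w, b

# Logical AND is linearly separable, so the rule is guaranteed to converge
AND_DATA = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b = train_perceptron(AND_DATA)

def predict(x1, x2):
    return 1 if (w[0] * x1 + w[1] * x2 + b) > 0 else 0
```

Minsky and Papert’s point in Perceptrons was that a single unit like this cannot learn functions such as XOR, which is part of why the later backpropagation work on multi-layer networks mattered so much.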

Updated on 10.01.18

[1] Src: Open Data Science

[2] Src: Rodney Brooks

[3] Src: Open Data Science

[4] Src: Azeem on Medium

[5] Src: Open Data Science

[6] Src: Harvard’s Science in the News

[7] Src: AITopics