Facebook has open-sourced its PyTorch-based natural language processing modeling framework. According to them, it: 👇

blurs the boundaries between experimentation and large-scale deployment.

Looking forward to trying this out. 🤓

Src: Facebook

Do The Digital Worm 🐛

Step 1: recreate the brain of the C. elegans worm as a neural network 🧠

Step 2: ask it to park a car 🚗

Researchers digitized the worm brain, the only fully mapped brain we have, as a 12-neuron network. The goal of this exercise was to create a neural network that humans can understand and parse, since the organic version it is based on is well understood. 🗺

An interesting realization that came out of this exercise: 👇

Curiously, both the AI model and the real C. elegans neural circuit contained two neurons that seemed to be acting antagonistically, he said: when one was highly active, the other wasn’t.

I wonder when this switching neuron feature will be rolled into an AI/ML/DL architecture. 🤔
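For intuition, the antagonistic pairing described above resembles a classic mutual-inhibition circuit. Here’s a minimal sketch; the weights, drives, and update rule are all invented for illustration and are not the actual C. elegans wiring or the researchers’ model:

```python
# Toy mutual-inhibition circuit: two units, each suppressing the other.
# All numbers are invented for illustration.

def step(a, b, drive_a, drive_b, inhibition=2.0, decay=0.5):
    """One update: each unit is excited by its own drive and inhibited by the other."""
    new_a = max(0.0, decay * a + drive_a - inhibition * b)
    new_b = max(0.0, decay * b + drive_b - inhibition * a)
    return new_a, new_b

a, b = 0.0, 0.0
# Drive unit A more strongly: A wins and B is silenced.
for _ in range(20):
    a, b = step(a, b, drive_a=1.0, drive_b=0.2)

print(a > 0 and b == 0.0)  # True: when one is highly active, the other isn't
```

The cross-inhibition term is what produces the “when one was highly active, the other wasn’t” behavior: any activity in one unit directly subtracts from the other’s next state.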

Src: Motherboard

A Brief History of AI: A Timeline 🗓

1943: groundwork for artificial neural networks laid in a paper by Warren Sturgis McCulloch and Walter Pitts, “A Logical Calculus of the Ideas Immanent in Nervous Activity”. 📃 [1]

1950: Alan Turing publishes the paper “Computing Machinery and Intelligence” which, amongst other things, establishes the Turing Test. 📝 [6]

1951: Marvin Minsky and Dean Edmonds design the first neural net machine (a machine, not a computer), which navigated a maze like a rat. It was called SNARC. 🐀 [1]

1952: Arthur Samuel implements a computer program that can play checkers against a human; it’s the first AI program to run in the US. 💾 [2]

1956: the Dartmouth Summer Research Project on Artificial Intelligence conference is held, hosted by Minsky and John McCarthy. This also marks the coining of the term “artificial intelligence”. 🤖 [6][7]

1956: Allen Newell, Cliff Shaw, and Herbert Simon present the Logic Theorist at the above-mentioned conference. This program attempted to recreate human decision making. 🤔 [6]

1957: the Perceptron, the first recreation of neurological principles in hardware, is invented by Frank Rosenblatt. 🧠 [1]

1959: Samuel uses the phrase “machine learning” for the first time, in the title of his paper “Some Studies in Machine Learning Using the Game of Checkers”. 📃 [2]

1960: Donald Michie builds a tic-tac-toe-playing “computer” out of matchboxes. It utilized reinforcement learning and was called MENACE: the Matchbox Educable Noughts And Crosses Engine. ❌⭕️ [2]

1961: Samuel’s program beats a human checkers champion. 🏆 [2]

1965: Joseph Weizenbaum builds ELIZA, one of the first chatbots. 💬 [7]

1969: Minsky and Seymour Papert publish the book Perceptrons, which famously highlighted the limitations of single-layer perceptrons. 📚 [1]

1969: the first AI conference is held, the International Joint Conference on Artificial Intelligence. 👥 [3]

1970: Seppo Linnainmaa publishes the reverse mode of automatic differentiation, the core of what would later be called backpropagation (the term wasn’t in use yet). 📝 [3]

1986: David Rumelhart, Geoffrey Hinton, and Ronald J. Williams publish “Learning representations by back-propagating errors”, the paper that establishes modern backprop. 📃 [3]

1997: IBM’s Deep Blue beats chess world champion Garry Kasparov. 🏆 [5]

1999: the MNIST data set, a collection of handwritten digits from 0 to 9, is published. ✏️ [5]

2012: GPUs are used to win the ImageNet contest, and they go on to become the gold standard for AI hardware. 🏅 [4]

Updated on 10.01.18

[1] Src: Open Data Science

[2] Src: Rodney Brooks

[3] Src: Open Data Science

[4] Src: Azeem on Medium

[5] Src: Open Data Science

[6] Src: Harvard’s Science in the News

[7] Src: AITopics

DeepFakes Get More Realistic 😖

Remember back when I said I was terrified about deepfakes? Well, it’s not getting any better. 😟

Apparently researchers at Carnegie Mellon and Facebook’s Reality Lab decided there was nothing to worry about and that the method for making them needed to be better. So they gave us Recycle-GAN. ♻️

We introduce a data-driven approach for unsupervised video retargeting that translates content from one domain to another while preserving the style native to a domain, i.e., if contents of John Oliver’s speech were to be transferred to Stephen Colbert, then the generated content/speech should be in Stephen Colbert’s style.

Fantastic. Just what we need. A system that transfers content while maintaining stylistic integrity, all while not needing a great deal of tweaking/input to make it happen. 😵

Also, why is Facebook helping to make fake content easier to create? Don’t they have enough problems on this front already? 🤔

Src: Carnegie Mellon

ELMo Really Does Know His Words 👹

I’m super interested in the world of NLP (natural language processing), so the news that performance increased dramatically with ELMo piqued my interest. 💡

The biggest benefit in my eyes is that this method doesn’t require labeled data, which means the world of the written word is our oyster. 🐚

Yeah, yeah, word embeddings don’t require labeled data either. ELMo can also learn word meanings at a higher level, which I think means it will have far more impact and a wider range of applications. 📶

ELMo, for example, improves on word embeddings by incorporating more context, looking at language on a scale of sentences rather than words.
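To see why sentence-level context matters, here’s a toy contrast between a static embedding lookup and a context-dependent one. This is only an illustration of the idea; the vectors and the neighbor-blending rule are made up, and real ELMo computes its representations with deep bidirectional language models, not anything like this:

```python
# Toy contrast: static word vectors ignore context, contextual ones don't.
# All vectors and the "context" rule below are invented for illustration.

STATIC = {  # one fixed vector per word, context ignored
    "bank": [1.0, 0.0],
    "river": [0.0, 1.0],
    "loan": [0.5, 0.5],
}

def static_vec(word, sentence):
    """A classic word embedding: same vector no matter the sentence."""
    return STATIC[word]

def contextual_vec(word, sentence):
    """Crude 'context': blend the word's vector with its neighbors' vectors."""
    i = sentence.index(word)
    neighbors = sentence[max(0, i - 1):i] + sentence[i + 1:i + 2]
    vec = list(STATIC[word])
    for n in neighbors:
        vec = [v + 0.5 * c for v, c in zip(vec, STATIC[n])]
    return vec

s1 = ["river", "bank"]
s2 = ["bank", "loan"]
# Static: "bank" looks identical in both sentences.
print(static_vec("bank", s1) == static_vec("bank", s2))        # True
# Contextual: "bank" gets a different vector in each sentence.
print(contextual_vec("bank", s1) != contextual_vec("bank", s2))  # True
```

The second property is the whole point: a word’s representation shifts with the sentence around it, so “bank” near “river” is no longer the same object as “bank” near “loan”.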

Our cuddly, red muppet still picks up the biases we embed in our writings, though. So plenty more work to be done. 🛠

Src: Wired

Penny For Your Bot Thoughts 💭

A team at MIT has developed a network that can show its work, basically outputting the “thought” process that led to a “decision”. 👷‍♀️

My understanding is that TbD-net is an uber-network containing multiple “mini” neural nets: one interprets the question, then a series of image recognition networks each tackle a subtask and pass the result down the line. Each image recognition network also outputs a heatmap illustrating what it passed along. 🔥
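That chain-of-modules idea can be sketched in a few lines. Everything below (the toy scene, the module names, the question format) is an invented stand-in rather than TbD-net’s actual architecture; the point is just that keeping every intermediate attention mask makes each step inspectable:

```python
# Sketch of a modular, show-your-work pipeline: a question compiles into a
# chain of attention modules, and every intermediate mask is recorded so a
# human can see what each step "passed along". Toy stand-in, not TbD-net.

SCENE = [  # a 2x2 "image" of objects
    [{"color": "red", "shape": "square"}, {"color": "blue", "shape": "square"}],
    [{"color": "red", "shape": "circle"}, {"color": "blue", "shape": "circle"}],
]

def attend(attribute, value):
    """Module: keep attention only on cells where attribute == value."""
    def module(mask):
        return [[m * (1 if cell[attribute] == value else 0)
                 for m, cell in zip(mrow, srow)]
                for mrow, srow in zip(mask, SCENE)]
    return module

def count(mask):
    """Terminal module: sum the surviving attention."""
    return sum(sum(row) for row in mask)

# "How many red squares?" compiles to a chain of modules.
chain = [attend("color", "red"), attend("shape", "square")]
mask = [[1, 1], [1, 1]]  # start by attending everywhere
trace = [mask]           # the inspectable "heatmaps"
for module in chain:
    mask = module(mask)
    trace.append(mask)   # each step's mask is kept for debugging

print(count(mask))  # 1 (only the top-left cell is a red square)
print(trace[1])     # after attend(color=red): [[1, 0], [1, 0]]
```

If the final count looked wrong, `trace` would show exactly which module dropped or kept the wrong cells, which mirrors how the researchers used the attention masks to debug and refine their model.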

This feature has a bonus too: 🎰

Importantly, the researchers were able to then improve these results because of their model’s key advantage: transparency. By looking at the attention masks produced by the modules, they could see where things went wrong and refine the model. The end result was a state-of-the-art performance of 99.1 percent accuracy.

This is an awesome step forward for explainable AI and another big win for centaurs. 🏆

Src: MIT

Unbiased Faces 👶🏻👩🏽👴🏿

IBM will be releasing a data set of faces across all ethnicities, genders, and ages to both avoid bias in future facial recognition systems and test existing systems for bias. Simply put, this is awesome. 🙌

It’s also interesting to see how ethics, fairness, and openness are being used as positive differentiators by major competitors in this new tech race. 🏃‍♀️🏃‍♂️

Src: Axios

AIs are Teaming Up 🎮

The OpenAI Five have beaten a team of amateur Dota 2 players (a strategy video game). So? The Five are a team of AI algorithms, with a name that brings to mind old hip-hop groups. 🎤

This is an important and novel direction for AI, since algorithms typically operate independently.

While I can already imagine the “SkyNet is Coming!” headlines that could result from this news, I’m generally pretty stoked about it. Not gonna lie, the idea of AIs teaming up to create super AIs is a bit terrifying, but that is only one potential outcome. I also feel it’s important to note that the OpenAI Five don’t directly communicate with each other; everything appears to be coordinated through game play. So no new machine languages to decode and translate, which is nice. 😅
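That last point, coordination without messages, can be sketched in a toy way: each agent reads only the shared game state and acts, and a sensible division of labor falls out of a shared policy. The lanes, the policy, and the whole setup below are invented for illustration, not anything from OpenAI Five’s actual system:

```python
# Toy sketch of coordination without direct communication: each agent
# observes only the shared game state (never each other's intentions),
# yet a clean division of labor emerges from a shared rule.

TARGETS = ["top_lane", "mid_lane", "bottom_lane"]

def policy(state):
    """Pick the least-covered lane, with a fixed tie-break order.

    Because every agent applies the same rule to the same observable
    state, they spread out without exchanging a single message."""
    coverage = {t: state.count(t) for t in TARGETS}
    return min(TARGETS, key=lambda t: (coverage[t], TARGETS.index(t)))

state = []  # shared, observable game state: which lanes are covered
for _ in range(3):  # three agents act in turn
    state.append(policy(state))  # acting updates what everyone sees

print(state)  # ['top_lane', 'mid_lane', 'bottom_lane'] -- no messages sent
```

The only “communication channel” here is the game state itself: an agent’s action changes what the next agent observes, which is enough for the team to avoid piling onto one lane.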

Some of the benefits that could come from this:

  • Enhanced human-machine teamwork, as the AIs are better adapted to cooperating with other agents 👥
  • Using different algorithm types in tandem to reduce reliance on deep learning models and expand the scope of what’s possible 🥚🥚🥚
  • Potential for distributed AI, not via one algorithm spread around BitTorrent-style, but via distributed algorithms collaborating in different configurations (this one plays into my vision of a future where we all have personal AIs) 🌐
  • Reduced implicit bias thanks to utilizing multiple algorithms 👍

The big one is the potential this has for centaurs. And centaurs are the future. 🔮

Src: MIT Tech Review