A Brief History of AI: A Timeline πŸ—“

1943: groundwork for artificial neural networks is laid in a paper by Warren Sturgis McCulloch and Walter Pitts, “A Logical Calculus of the Ideas Immanent in Nervous Activity”. πŸ“ƒ [1]

1950: Alan Turing publishes “Computing Machinery and Intelligence,” the paper that, amongst other things, establishes the Turing Test. πŸ“ [6]

1951: Marvin Minsky and Dean Edmonds design the first neural net machine (machine, not computer) that navigated a maze like a rat. It was called SNARC. πŸ€[1]

1952: Arthur Samuel implements a computer program that can play checkers against a human; it’s the first AI program to run in the US. πŸ’Ύ [2]

1956: the Dartmouth Summer Research Project on Artificial Intelligence conference is held, hosted by Minsky and John McCarthy. This also marks the coining of the term “artificial intelligence”. πŸ€– [6][7]

1956: Allen Newell, Cliff Shaw, and Herbert Simon present the Logic Theorist at the above-mentioned conference. This program attempted to recreate human decision making. πŸ€” [6]

1957: the perceptron, the first hardware implementation of neurological principles, is invented by Frank Rosenblatt. 🧠 [1]

1959: Samuel uses the phrase “machine learning” for the first time, in the title of his paper “Some Studies in Machine Learning Using the Game of Checkers”. πŸ“ƒ [2]

1960: Donald Michie builds a tic-tac-toe playing “computer” out of matchbooks. It utilized reinforcement learning and was called MENACE: Matchbox Educable Noughts And Crosses Engine. βŒβ­•οΈ [2]

1961: Samuel’s program beats a human checkers champion. πŸ† [2]

1965: Joseph Weizenbaum builds ELIZA, one of the first chatbots. πŸ’¬ [7]

1969: Minsky and Seymour Papert publish the book Perceptrons, which analyzed the limitations of single-layer perceptrons (notably their inability to learn XOR). πŸ“š [1]

1969: the first dedicated AI conference is held: the International Joint Conference on Artificial Intelligence. πŸ‘₯ [3]

1970: Seppo Linnainmaa publishes the reverse mode of automatic differentiation, the mathematical basis of backpropagation (though it wasn’t known by that name yet). πŸ“ [3]

1986: David Rumelhart, Geoffrey Hinton, and Ronald J. Williams publish “Learning Representations by Back-propagating Errors,” the paper that popularizes modern backpropagation. πŸ“ƒ [3]

1997: IBM’s Deep Blue beats chess world champion Garry Kasparov. πŸ† [5]

1999: the MNIST data set is published, a collection of handwritten digits from 0 to 9. ✏️ [5]

2012: GPUs are used to win the ImageNet contest (AlexNet), setting them on the path to becoming the gold standard for AI hardware. πŸ… [4] (more below)

Updated on 10.01.18

[1] Src: Open Data Science

[2] Src: Rodney Brooks

[3] Src: Open Data Science

[4] Src: Azeem on Medium

[5] Src: Open Data Science

[6] Src: Harvard’s Science in the News

[7] Src: AITopics

AlexNet and the Rise of the GPU πŸ•Έ

A great look at the story behind the competition victory that led to the marriage of neural networks and GPUs (as mentioned in the Timeline post). πŸ₯‡

Also traces the path of the man who may have changed the course of AI: Alex Krizhevsky. Yeah, he’s the Alex in AlexNet. πŸ”€

β€œArtificial intelligence is sort of the end goal of computer science,” Krizhevsky says. β€œComputer science is about automating stuff, and artificial intelligence is about automating everything.”

Src: Quartz

Machine Learning Explained πŸ•―οΈ

[FoR&AI] Machine Learning Explained by Rodney Brooks

A great look at the history of machine learning (it started with matchboxes in the ’60s). The first machine learning setup wasn’t a computer at all; it was designed to play tic-tac-toe. It shows that machines don’t learn the way humans do: we go for 3 in a row, while the machine picks moves based on the present board, without understanding the concept of “3 in a row”.

A few lessons follow. It is important to understand the parameters of the problem you are trying to solve. You need to run the program/algorithm/robot a whole bunch of times so it can learn, and you should vary the learning sets if possible, or make them really, really large. You don’t want to train the machine on a faulty data set, because then it will only learn faulty knowledge. For example, training it against a player that makes the optimal move every time will teach the machine to play to a draw, because winning is impossible. You also need to determine how to give credit to different elements in a solution chain (like GA attribution models).
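The matchbox machine Brooks describes (MENACE, from the timeline above) can be sketched in a few lines of Python. This is an illustration, not Michie’s exact design: the class name, the 3-beads-per-move initialization, and the reward values are my assumptions.

```python
import random

class Menace:
    """MENACE-style learner: one 'matchbox' of colored beads per board state."""

    def __init__(self):
        self.boxes = {}    # board state -> {move index: bead count}
        self.history = []  # (state, move) pairs played in the current game

    def choose(self, state):
        # state is a 9-character string; " " marks an empty square.
        moves = [i for i, c in enumerate(state) if c == " "]
        # First time we see a state, fill its box with 3 beads per legal move.
        box = self.boxes.setdefault(state, {m: 3 for m in moves})
        # Draw a bead: pick a move with probability proportional to its count.
        move = random.choices(list(box), weights=list(box.values()))[0]
        self.history.append((state, move))
        return move

    def learn(self, reward):
        # Credit assignment: every move in the game shares the final outcome.
        # Positive reward adds beads (reinforces), negative removes them,
        # never dropping below one bead so the move stays playable.
        for state, move in self.history:
            self.boxes[state][move] = max(1, self.boxes[state][move] + reward)
        self.history.clear()
```

After many games with, say, +3 beads for a win and −1 for a loss, bead counts drift toward the moves that led to wins from each board position; that bead update is exactly the credit-to-elements-in-a-chain step Brooks describes, with no notion of “3 in a row” anywhere in the code.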

Takeaway: Use lots of good data and spend as much time as you possibly can thinking through the problem up front so you can map the system properly.

Skynet is far off update: AlphaGo would have melted down had the board size they played on been anything other than 19×19