China vs. the US: Round ? 🤷‍♀️

In the continuing narrative that is the AI space race between the US and China, we get an entry on what the US can learn from China. 🇺🇸🇨🇳

It boils down to the two countries excelling at the usual: the US creates visionary ideas, and China puts them into production. China has a massive treasure trove of data, a true survival-of-the-fittest business environment, and a highly involved (controlling) government. 🔬

China is also further along the tech adoption curve; just look at WeChat. It’s hard for tourists in some areas because locals rely so heavily on digital payment platforms. China’s approach has its drawbacks, but it’s hard to say the country isn’t more all-in on AI than any other. 💰

Ultimately, if the US is actually competing with China, it needs to take an AI-first approach with buy-in from all levels. And it needs to productionize ideas, not just produce them. ⚙️

Src: New York Times

China’s Social Submission, er… Scoring System

There is a lot of focus on the China vs. USA space race happening in AI right now (at least in my world there is; I’m very interested in the topic). Most of it revolves around spending, governmental support, talent, etc. But maybe the most important aspect is what the implications of either country winning would be, if there truly can be only one winner in this field. 🏎️

China’s social scoring system, still in its infancy, is terrifying. Of course, that is an opinion coming from a different way of life and cultural experience. But still, it has dystopian sci-fi future written all over it. 😈

A network of 220 million cameras outfitted with facial recognition, body scanning, and geo-tracking. And this insane info net will be paired with every citizen’s digital footprint. Everything is compiled to create a social credit score of sorts that is updated in real time and determines how easily you can interact with society and live your life. Piss the government off and become an outcast with no options. Dear China, Philip K. Dick called, he’d like his dystopia back. 📚

There’s no guarantee that this form of digital dictatorship will be exported on a mass scale (you know it’ll be exported at some scale) if China were to win the race, but it’s a chilling possibility. A lot of ink is spilled talking about the potential for a robot uprising and AI taking over, but the misuse of AI by human actors is far more relevant and just as fraught. We’ve been our own biggest enemy for centuries; why would that suddenly change now? 🤔

Src: ABC News Australia

A Brief History of AI: A Timeline 🗓

1943: groundwork for artificial neural networks is laid in a paper by Warren Sturgis McCulloch and Walter Pitts, “A Logical Calculus of the Ideas Immanent in Nervous Activity”. 📃 [1]

1950: Alan Turing publishes the “Computing Machinery and Intelligence” paper which, amongst other things, establishes the Turing Test. 📝 [6]

1951: Marvin Minsky and Dean Edmonds design the first neural net machine (machine, not computer) that navigated a maze like a rat. It was called SNARC. 🐀 [1]

1952: Arthur Samuel implements a computer program that can play checkers against a human; it’s the first AI program to run in the US. 💾 [2]

1956: the Dartmouth Summer Research Project on Artificial Intelligence conference is held, hosted by Minsky and John McCarthy. This also marks the coining of the term “artificial intelligence”. 🤖 [6][7]

1956: Allen Newell, Cliff Shaw, and Herbert Simon present the Logic Theorist at the above-mentioned conference. This program attempted to recreate human decision making. 🤔 [6]

1957: the Perceptron, the first recreation of neurological principles in hardware, is invented by Frank Rosenblatt. 🧠 [1]

1959: Samuel uses the phrase “machine learning” for the first time, in the title of his paper “Some Studies in Machine Learning Using the Game of Checkers”. 📃 [2]

1960: Donald Michie builds a tic-tac-toe playing “computer” out of matchboxes. It utilized reinforcement learning and was called MENACE: Matchbox Educable Noughts And Crosses Engine. ❌⭕️ [2]

1961: Samuel’s program beats a human checkers champion. 🏆 [2]

1965: Joseph Weizenbaum builds ELIZA, one of the first chatbots. 💬 [7]

1969: Minsky and Seymour Papert publish the book Perceptrons, which examines what networks of perceptrons can and cannot do, famously highlighting the limits of single-layer models. 📚 [1]

1969: first AI conference held, the International Joint Conference on Artificial Intelligence. 👥 [3]

1970: Seppo Linnainmaa derives what is essentially the first backpropagation equation, though it isn’t known by that name yet. 📝 [3]

1986: David Rumelhart, Geoffrey Hinton, and Ronald J. Williams publish the paper that unveils modern backpropagation. 📃 [3]

1997: IBM’s Deep Blue beats chess world champion Garry Kasparov. 🏆 [5]

1999: the MNIST data set is published, a collection of handwritten digits from 0 to 9. ✏️ [5]

2012: GPUs are used to win an ImageNet contest, becoming the gold standard for AI hardware. 🏅 [4]

Updated on 10.01.18

[1] Src: Open Data Science

[2] Src: Rodney Brooks

[3] Src: Open Data Science

[4] Src: Azeem on Medium

[5] Src: Open Data Science

[6] Src: Harvard’s Science in the News

[7] Src: AITopics

Lazy Faire 🇺🇸🇨🇳

At the US government’s current rate of uninvolvement in the AI sector, China will overtake it in its quest for AI overlord status by the end of the year. At least, that’s on the spending front; the rest might not be far behind, though. 💰

One of the recommendations from a subcommittee is to expedite the approval of the OPEN Government Data Act. You know, giving citizens the right to data they actually own as taxpayers. 🙄

My hunch is that the way Trump deals with something he doesn’t understand (and might admit to himself he doesn’t understand) is to ignore it, thus the administration’s lack of a plan. 😖

Src: The Next Web

That’s Instructor AI, Cadet

The Air Force is embracing AI to mix up its training process, and could be inventing the future of all education while it’s at it. 🎓

The overall objective is to move away from an “industrial age” training model with pre-set timetables and instruction plans to one that adapts to each airman’s learning pace.

Hmmmmm… what else follows an industrial model? 🤔🎒🚌🏫

Among other benefits, they appear to show that artificial intelligence “coaches” are highly effective at gathering data from the process of training any particular pilot, and then refocusing that process on exactly the areas in which the student needs the most help.

This is really, really cool. Personalized training could be here, and if it works for pilots of multi-million-dollar aircraft, it should work in most other fields no problem. 🍰

Src: Federal News Radio

Bolt On a Watchdog

IBM has launched a tool as part of its cloud platform that detects bias and adds a dash of explainability to AI models. ☁️

It looks like this might be the new competition between service providers, and that’s not a bad thing. A huge upside of AI is that it can make decisions free of the bias that infects humanity, but it doesn’t do so by magic. A lot of bias can be added accidentally (or overtly, of course) by the humans collecting the data and building the systems. Hopefully these new tools start paving the way for a less biased future. ⚖️

The fully automated SaaS explains decision-making and detects bias in AI models at runtime – so as decisions are being made – which means it’s capturing “potentially unfair outcomes as they occur”, as IBM puts it.

It will also automatically recommend data to add to the model to help mitigate any bias that has been detected.
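
IBM hasn’t published the internals here, so don’t take this as their API, but the flavor of a runtime fairness check is easy to sketch. The metric below (a demographic parity gap) and the toy loan data are purely illustrative assumptions on my part:

```python
import numpy as np

def demographic_parity_gap(decisions, groups):
    """Positive-decision rate per group, plus the gap between best and worst.
    A large gap is one simple red flag that a deployed model may be biased."""
    rates = {g: decisions[groups == g].mean() for g in np.unique(groups)}
    return rates, max(rates.values()) - min(rates.values())

# Toy runtime check on a batch of loan decisions (1 = approved).
decisions = np.array([1, 1, 1, 1, 0, 1, 0, 0, 0, 0])
groups = np.array(["A"] * 5 + ["B"] * 5)

rates, gap = demographic_parity_gap(decisions, groups)
print(rates, gap)  # {'A': 0.8, 'B': 0.2}, gap 0.6 -> worth investigating
```

A real tool would presumably track many metrics like this per protected attribute and, as the quote above suggests, tie each flag back to the data that could mitigate it.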

And there is a win for centaurs: 🙌

it will be both selling AI, ‘a fix’ for AI’s imperfections, and experts to help smooth any wrinkles when enterprises are trying to fix their AIs… Which suggests that while AI will indeed remove some jobs, automation will be busy creating other types of work.

Src: TechCrunch

Alibaba Wants Their Own Chip

There are two undeniable trends these days: major tech players want their own silicon, and the new class of uber-rich want to own classy publications. 🗞️

I’m more interested in the first trend (though the second could have interesting implications in this era of doom for publishers). Alibaba is the newest tech giant to announce it is making its own AI chips. 🏭

The chip will reportedly power the company’s cloud technology and IoT devices, and could be used for things like autonomous cars, smart cities and logistics.

This time it’s bigger than just wanting full control and integration. The move is also about reducing Alibaba’s reliance on the West and blunting the impact of the shiny new trade war between China and the US. It also plays nicely into China’s national plan to be the AI superpower of the world. 💪 🇨🇳

Src: CNet

Joey Stigz on AI 🤖

If you’re interested in the societal and economic implications of AI, this article is worth a read. A few points that stuck out to me: 👇

  • AI can be hugely beneficial, but right now it’s not trending in that direction
  • Tech companies shouldn’t be in charge of regulating themselves, which means decision makers need to educate themselves
  • People are starting to value privacy more and become more wary of surveillance and data collection
  • AI will “take” jobs, but there will still be plenty of uniquely human roles (health and elderly care, education, etc.). Guess we’ll find out how much we truly value that work.
  • Centaurs

Src: The Guardian

DeepFakes Get More Realistic 😖

Remember back when I said I was terrified about deepfakes? Well, it’s not getting any better. 😟

Apparently researchers at Carnegie Mellon and Facebook’s Reality Lab decided there is nothing to worry about and the method for making them needed to be better. So they give us Recycle-GAN. ♻️

We introduce a data-driven approach for unsupervised video retargeting that translates content from one domain to another while preserving the style native to a domain, i.e., if contents of John Oliver’s speech were to be transferred to Stephen Colbert, then the generated content/speech should be in Stephen Colbert’s style.

Fantastic. Just what we need. A system that transfers content while maintaining stylistic integrity, all without needing a great deal of tweaking/input to make it happen. 😡
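
For the curious, here’s my rough sketch of the recycle-consistency idea as I read it from the abstract: translate a couple of frames from one domain into the other, let a temporal predictor guess the next frame there, map that guess back, and compare it to the real next frame. The tiny PyTorch stand-in networks below are placeholders of my own, not the paper’s architecture, and the adversarial and plain cycle losses are left out:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Toy stand-ins for the real networks (the paper uses full encoder-decoder generators).
G_XtoY = nn.Conv2d(3, 3, 3, padding=1)   # "generator": domain X frames -> domain Y
G_YtoX = nn.Conv2d(3, 3, 3, padding=1)   # "generator": domain Y frames -> domain X
P_Y = nn.Conv2d(6, 3, 3, padding=1)      # temporal predictor in Y: two frames -> next frame

def recycle_loss(x_t, x_t1, x_t2):
    """Translate two consecutive X frames into Y, predict the next Y frame,
    translate that prediction back to X, and compare with the real next X frame."""
    y_t, y_t1 = G_XtoY(x_t), G_XtoY(x_t1)
    y_t2_pred = P_Y(torch.cat([y_t, y_t1], dim=1))  # temporal prediction in Y
    x_t2_back = G_YtoX(y_t2_pred)                   # map the prediction back to X
    return F.l1_loss(x_t2_back, x_t2)

# Three consecutive toy frames from domain X.
frames = [torch.randn(1, 3, 64, 64) for _ in range(3)]
loss = recycle_loss(*frames)
loss.backward()
print(loss.item())
```

The temporal prediction step is what separates this from a vanilla CycleGAN-style setup: the translation has to stay consistent across time, not just frame by frame.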

Also, why is Facebook helping to make fake content easier to create? Don’t they have enough problems on this front already? 🤔

Src: Carnegie Mellon

ELMo Really Does Know His Words 👹

I’m super interested in the world of NLP (natural language processing), so the news that ELMo dramatically boosted performance across a range of language tasks piqued my interest. 💡

The biggest benefit in my eyes is that this method doesn’t require labeled data, which means the world of the written word is our oyster. 🐚

Yeah, yeah, word embeddings don’t require labeled data either. ELMo can also learn word meanings at a higher level, which I think means it will have far more impact and a wider range of applications. 📶

ELMo, for example, improves on word embeddings by incorporating more context, looking at language on a scale of sentences rather than words.
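
To make that concrete, here’s a minimal sketch assuming the AllenNLP ElmoEmbedder interface from around this time (the sentences and variable names are just my example). The point: the same word gets a different vector depending on the sentence it sits in.

```python
# Minimal sketch; pretrained ELMo weights are downloaded on first use.
from allennlp.commands.elmo import ElmoEmbedder

elmo = ElmoEmbedder()

# Embed two sentences that use "bank" in different senses.
river = elmo.embed_sentence(["The", "river", "bank", "was", "muddy"])
loan = elmo.embed_sentence(["The", "bank", "approved", "the", "loan"])

# Each result has shape (3 layers, num_tokens, 1024); the top layer is the most contextual.
bank_by_the_river = river[2][2]
bank_with_the_money = loan[2][1]
# These two "bank" vectors differ, unlike a static word-embedding lookup.
```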

Our cuddly, red muppet still picks up the biases we embed in our writings, though. So plenty more work to be done. 🛠

Src: Wired