Google Wages War on Gender

OK, not really. But I can imagine that being the headline of some inflammatory “news” article.

They’re working to remove implicit, societal gender bias from machine translations in Google Translate by changing the underlying architecture of the machine learning model they use. Basically, the model now produces a masculine and a feminine version and then determines which is most likely needed. It appears that in some cases, like translating from the gender-neutral Turkish language, the system will return both versions.
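As a rough mental model of that flow, here’s a minimal sketch in Python. The helper functions and the Turkish example are made up for illustration; this is not Google’s actual implementation or any real Translate API.

```python
# Hypothetical sketch of the flow described above: produce a masculine and a
# feminine translation, then decide whether to return one or both.
# The helpers below are made-up stubs, not a real Google Translate API.

def translate_with_gender(text, gender):
    """Stub translator: pretend to translate Turkish 'o bir doktor' into English."""
    return "he is a doctor" if gender == "masculine" else "she is a doctor"

def source_is_gender_neutral(text):
    """Stub detector: Turkish pronouns like 'o' carry no grammatical gender."""
    return True

def gender_aware_translate(text):
    masculine = translate_with_gender(text, "masculine")
    feminine = translate_with_gender(text, "feminine")
    if source_is_gender_neutral(text):
        # Ambiguous source: surface both versions, as Translate now does for Turkish.
        return [feminine, masculine]
    # Otherwise return only the version judged most likely to be intended.
    return masculine

print(gender_aware_translate("o bir doktor"))
# -> ['she is a doctor', 'he is a doctor']
```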

This is after they announced that all gender pronouns will be removed from Gmail’s Smart Compose feature because it was showing biased tendencies in its recommendations.

It’s early in the process, but it appears that they are dedicated to this work and have big dreams.

This is just the first step toward addressing gender bias in machine-translation systems and reiterates Google’s commitment to fairness in machine learning. In the future, we plan to extend gender-specific translations to more languages and to address non-binary gender in translations.

Src: Google AI blog

A Brief History of AI: A Timeline

1943: groundwork for artificial neural networks is laid in a paper by Warren Sturgis McCulloch and Walter Pitts, “A Logical Calculus of the Ideas Immanent in Nervous Activity”. [1]

1950: Alan Turing publishes the paper “Computing Machinery and Intelligence” which, amongst other things, establishes the Turing Test. [6]

1951: Marvin Minsky and Dean Edmonds design the first neural net machine (machine, not computer), which navigated a maze like a rat. It was called SNARC. [1]

1952: Arthur Samuel implements a computer program that can play checkers against a human; it’s the first AI program to run in the US. [2]

1956: the Dartmouth Summer Research Project on Artificial Intelligence conference is held, hosted by Minsky and John McCarthy. This also marks the coining of the term “artificial intelligence”. [6][7]

1956: Allen Newell, Cliff Shaw, and Herbert Simon present the Logic Theorist at the above-mentioned conference. This program attempted to recreate human decision making. [6]

1957: the perceptron, the first recreation of neurological principles in hardware, is invented by Frank Rosenblatt. [1]

1959: Samuel uses the phrase “machine learning” for the first time, in the title of his paper “Some Studies in Machine Learning Using the Game of Checkers”. [2]

1960: Donald Michie builds a tic-tac-toe-playing “computer” out of matchboxes. It utilized reinforcement learning and was called MENACE: Matchbox Educable Noughts And Crosses Engine. [2]

1961: Samuel’s program beats a human checkers champion. [2]

1965: Joseph Weizenbaum builds ELIZA, one of the first chatbots. [7]

1969: Minsky and Seymour Papert publish the book Perceptrons, which examined the capabilities and limitations of networks of perceptrons. [1]

1969: the first AI conference is held, the International Joint Conference on Artificial Intelligence. [3]

1970: Seppo Linnainmaa derives what is now recognized as backpropagation, though it wasn’t known by that name at the time. [3]

1986: David Rumelhart, Geoffrey Hinton, and Ronald J. Williams publish the paper that establishes modern backpropagation. [3]

1997: IBM’s Deep Blue beats chess world champion Garry Kasparov. [5]

1999: the MNIST data set is published, a collection of handwritten digits from 0 to 9. [5]

2012: GPUs are used to win the ImageNet competition, becoming the gold standard for AI hardware. [4]

Updated on 10.01.18

[1] Src: Open Data Science

[2] Src: Rodney Brooks

[3] Src: Open Data Science

[4] Src: Azeem on Medium

[5] Src: Open Data Science

[6] Src: Harvard’s Science in the News

[7] Src: AITopics

DeepFakes Get More Realistic

Remember back when I said I was terrified of deepfakes? Well, it’s not getting any better.

Apparently researchers at Carnegie Mellon and Facebook’s Reality Lab decided there was nothing to worry about and that the method for making them needed to be better. So they gave us Recycle-GAN.

We introduce a data-driven approach for unsupervised video retargeting that translates content from one domain to another while preserving the style native to a domain, i.e., if contents of John Oliver’s speech were to be transferred to Stephen Colbert, then the generated content/speech should be in Stephen Colbert’s style.

Fantastic. Just what we need. A system that transfers content while maintaining stylistic integrity, all while not needing a great deal of tweaking/input to make it happen.

Also, why is Facebook helping to make fake content easier to create? Don’t they have enough problems on this front already?

Src: Carnegie Mellon

Facebook Can Read Photos

Facebook has rolled out a tool called Rosetta that can scan photos for text, extract the text it finds, and then “understand” that text.

This is huge, as it means the platform can now increase accessibility by reading photos, pull information out of photos of menus and street signs, and monitor memes and images for harmful written content. And those are just a few examples, I’m sure.
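To make the “scan a photo, extract the text” step concrete, here’s a minimal sketch using the open-source Tesseract engine via pytesseract and Pillow. Rosetta is Facebook’s own large-scale model, so this is only a stand-in for the general idea, and the file name is hypothetical.

```python
# Rough illustration of the "find text in an image and extract it" step using
# open-source OCR (Pillow + pytesseract). This is not Rosetta, which is
# Facebook's own large-scale text detection and recognition model.
from PIL import Image
import pytesseract

def extract_text(image_path):
    """Pull any readable text out of an image file."""
    image = Image.open(image_path)
    return pytesseract.image_to_string(image)

# Hypothetical file name, for illustration only.
text = extract_text("menu_photo.jpg")
print(text)
# A downstream model could then "understand" this text: classify it, flag
# policy-violating content, or make the photo searchable.
```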

Personally, I’m interested to see how this impacts Facebook’s guidelines on text content in ad images. It used to reject any ad whose image contained more than 20% text (measured with a weird grid-based system). Then it moved to an approach where the more text your image contains, the narrower its delivery/reach. Facebook’s reason was always that “users prefer images with little to no text”, but I always figured it was more about their inability to automate filtering for content. Users don’t appear to have any issues with text overlays when it comes to organic content.

Their post has a bunch of technical details if you want to nerd out.

Src: Facebook

ELMo Really Does Know His Words

I’m super interested in the world of NLP (natural language processing), so the news that ELMo dramatically improved performance piqued my interest.

The biggest benefit in my eyes is that this method doesn’t require labeled data, which means the world of the written word is our oyster.

Yeah, yeah, word embeddings don’t require labeled data either. But ELMo can also learn word meanings at a higher level, which I think means it will have far more impact and a wider range of applications.

ELMo, for example, improves on word embeddings by incorporating more context, looking at language on a scale of sentences rather than words.
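To see what “incorporating more context” means in code, here’s a toy sketch of contextual embeddings using a small, untrained bidirectional LSTM in PyTorch. It is not ELMo itself (ELMo relies on a large pre-trained bidirectional language model); it only shows the mechanic of giving the same word a different vector depending on its sentence.

```python
# Toy illustration of contextual embeddings: the same word gets a different
# vector depending on the sentence it appears in. NOT ELMo itself, just the
# basic mechanic (ELMo uses a large pre-trained bidirectional language model).
import torch
import torch.nn as nn

vocab = {"the": 0, "river": 1, "bank": 2, "flooded": 3, "opened": 4, "an": 5, "account": 6}
embed = nn.Embedding(len(vocab), 16)
bilstm = nn.LSTM(input_size=16, hidden_size=16, batch_first=True, bidirectional=True)

def contextual_vectors(tokens):
    """Return one vector per token, conditioned on the whole sentence."""
    ids = torch.tensor([[vocab[t] for t in tokens]])
    output, _ = bilstm(embed(ids))        # shape: (1, seq_len, 2 * hidden_size)
    return output.squeeze(0)

bank_river = contextual_vectors(["the", "river", "bank", "flooded"])[2]
bank_money = contextual_vectors(["the", "bank", "opened", "an", "account"])[1]

# A static word embedding gives "bank" one vector everywhere; a contextual model
# gives it a different vector per sentence. (The weights here are random, so the
# difference carries no meaning, but the mechanics are the same.)
print(torch.cosine_similarity(bank_river, bank_money, dim=0))
```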

Our cuddly, red muppet still picks up the biases we embed in our writings though. So plenty more work to be done.

Src: Wired

Penny For Your Bot Thoughts

A team at MIT has developed a network that can show its work, basically outputting the “thought” process that led to a “decision”.

My understanding is that TbD-net is an uber-network containing multiple “mini” neural nets: one interprets a question, then a series of image recognition networks each tackle a sub-task and pass the result down the line. Each image recognition network also outputs a heat map illustrating what it passed along.
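Here’s a toy sketch of that idea (my own illustration, not the actual TbD-net code): a chain of “modules” where each one hands an inspectable attention map to the next.

```python
# Toy sketch of a modular, "show your work" pipeline: each module produces an
# attention map that is passed down the chain and can be inspected afterwards.
# My own illustration, not the actual TbD-net implementation.
import numpy as np

def attend_red(image):
    """Module 1: highlight red-ish pixels (image is H x W x 3, values in [0, 1])."""
    return (image[..., 0] > 0.5).astype(float)

def attend_small(image, prior_attention):
    """Module 2: refine the prior attention (stubbed; a real module would filter by size)."""
    return prior_attention

def count(prior_attention):
    """Module 3: turn the final attention map into an answer."""
    return int(prior_attention.sum())

image = np.zeros((4, 4, 3))
image[0, 0] = [1.0, 0.0, 0.0]   # one red pixel
image[3, 3] = [0.9, 0.1, 0.1]   # another red pixel

# Question: "How many small red things are there?" -> a chain of modules, each
# handing its attention map (the inspectable "heat map") to the next one.
a1 = attend_red(image)
a2 = attend_small(image, a1)
print("answer:", count(a2))      # 2
print("attention map:\n", a2)    # exactly what the chain "looked at"
```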

This feature has a bonus too:

Importantly, the researchers were able to then improve these results because of their model’s key advantage – transparency. By looking at the attention masks produced by the modules, they could see where things went wrong and refine the model. The end result was a state-of-the-art performance of 99.1 percent accuracy.

This is an awesome step forward for explainable AI and another big win for centaurs.

Src: MIT

(Compute) Size Doesn’t Matter

Fast.ai was recently part of a team that set some new speed benchmarks on the ImageNet image recognition data set. Why is this noteworthy? Because they did it on an AWS instance that cost $40 total.

We entered this competition because we wanted to show that you don’t have to have huge resources to be at the cutting edge of AI research, and we were quite successful in doing so. We particularly liked the headline from The Verge: “An AI speed test shows clever coders can still beat tech giants like Google and Intel.”

Jeremy Howard

AI has a mystique about it that makes it seem like it’s only for uber-tech nerds who love math and have access to the biggest computers, but that’s not true. Yes, it’s technical, but it’s not impossible. And there are plenty of resources to help the curious get started. It is not nearly as difficult as it seems. We just have a lot of language and storytelling baggage attached to it.

Very few of the interesting ideas we use today were created thanks to people with the biggest computers. And today, anyone can access massive compute infrastructure on demand, and pay for just what they need. Making deep learning more accessible has a far higher impact than focusing on enabling the largest organizations.

Jeremy Howard

Src: fast.ai

Statistics for Machine Learning: Day 2

Time for Day 2 of Machine Learning Mastery’s 7-day course on Statistics for Machine Learning. The assignment? List three methods that can be used for each of descriptive and inferential statistics.

Let’s start with descriptive statistics, or “methods for summarizing raw observations into information that we can understand and share”. There’s a quick code sketch after the list.

  1. Mean – an oldie but goodie. Thinking through examples from my day job: average time on page, average order amount, etc.
  2. Standard deviation – the one that was hammered into me in college stats and econometrics.
  3. Modality – this makes me think of gradient descent and the local vs. global maximum search.
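Here’s that sketch, using NumPy on a made-up sample of order amounts:

```python
# Quick sketch of the three descriptive methods above, on made-up order amounts.
import numpy as np

order_amounts = np.array([12.5, 37.0, 38.0, 39.5, 40.0, 40.5, 41.0, 250.0])

print("mean:", np.mean(order_amounts))             # central tendency
print("std dev:", np.std(order_amounts, ddof=1))   # sample standard deviation

# Modality: eyeball the histogram; the number of distinct peaks tells you
# whether the distribution is unimodal, bimodal, and so on.
counts, edges = np.histogram(order_amounts, bins=5)
print("histogram counts:", counts)
```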

Now on to inferential statistics, or “methods that aid in quantifying properties of the domain or population from a smaller set of obtained observations called a sample”. Again, a quick sketch follows the list.

  1. t-test
  2. Chi-square – I want to say I used these first two in various econ courses.
  3. Linear regression – mmmmmm that ML goodness.
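And the inferential sketch, with SciPy and made-up data:

```python
# Quick sketch of the three inferential methods above, on made-up data.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

# 1. t-test: do two groups (say, A/B test variants) have different means?
group_a = rng.normal(loc=50, scale=10, size=30)
group_b = rng.normal(loc=55, scale=10, size=30)
t_stat, p_value = stats.ttest_ind(group_a, group_b)
print("t-test p-value:", p_value)

# 2. Chi-square: are two categorical variables independent? (2x2 contingency table)
table = np.array([[30, 10],
                  [20, 25]])
chi2, p, dof, expected = stats.chi2_contingency(table)
print("chi-square p-value:", p)

# 3. Linear regression: estimate the relationship between two variables.
x = np.arange(20)
y = 3.0 * x + rng.normal(scale=2.0, size=20)
fit = stats.linregress(x, y)
print("slope:", fit.slope, "intercept:", fit.intercept)
```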

Statistics for Machine Learning: Day 1

Machine Learning Mastery now has a 7-day email-based course on Statistics for Machine Learning. Naturally, I signed up. Day 1’s assignment is to list 3 reasons why I want to learn statistics.

  1. I want to move from being an AI researcher to an AI practitioner, and I think enhancing my statistics knowledge will help. Plus, the more I learn about the field, the better equipped I will be.
  2. Even beyond ML, statistics will be useful for my work in analytics and the steps I want to take towards data science.
  3. I didn’t do great in statistics in college, and it still bugs me because I love math and numbers and really should have done better.