Google Wages War on Gender

Ok, not really. But I can imagine that being a headline of some inflammatory “news” article. 🗞️

They’re working to remove implicit, societal gender bias from machine translations in Google Translate by changing the underlying architecture of the machine learning model they use. Basically, the model now produces a masculine and a feminine version and then determines which is most likely needed. It appears that in some cases, like translating from the gender-neutral Turkish language, the system will return both versions. ✌️
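To make the idea concrete, here's a purely illustrative toy sketch (not Google's actual system) of what returning both gendered variants might look like. The lookup table stands in for a real translation model, and the sentence is the Turkish example commonly used to demonstrate the problem:

```python
# Toy illustration: for a gender-neutral source sentence, surface both
# a masculine and a feminine candidate instead of silently picking one.

# Hypothetical lookup table standing in for a real translation model.
TRANSLATIONS = {
    "o bir doktor": {  # Turkish third-person pronoun "o" is gender-neutral
        "masculine": "he is a doctor",
        "feminine": "she is a doctor",
    },
}

def translate(sentence: str) -> dict:
    """Return all gendered variants of a translation (empty if unknown)."""
    return TRANSLATIONS.get(sentence.strip().lower(), {})

result = translate("O bir doktor")
```

The key design point is the return type: a single-string API forces the system to guess a gender, while returning a mapping lets the UI show every variant.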

This comes after they announced that gender pronouns will be removed from Gmail’s Smart Compose feature because it was showing biased tendencies in its recommendations. 📧

It’s early in the process, but it appears that they are dedicated to this work and have big dreams. 🔮

This is just the first step toward addressing gender bias in machine-translation systems and reiterates Google’s commitment to fairness in machine learning. In the future, we plan to extend gender-specific translations to more languages and to address non-binary gender in translations.

Src: Google AI blog

Signing With Alexa 🤟🤙🖲

Simply put, this is awesome. 🤘

Our new technologies, like voice, are opening up computing to people who were previously underserved or unserved by the screen-based paradigm. We still have a long way to go, but it’s the start of exciting times. 🙌

I wonder how long it will be until Amazon adds functionality like this to the Show device. Or until Google releases a Google Home device with a screen and camera featuring this functionality. It is in TensorFlow after all. 🤔

Src: Synced

Chips Meet Edge 🤝

The edge is gonna be big, so it’s no surprise that the tech giants are racing to combine AI and edge computing. Google is the first non-chip maker I’m aware of to bring its new hardware to edge devices. So you’ll soon be able to build IoT and other small devices that harness the power of TensorFlow without needing a constant connection. 📟

The beauty of the edge is that you can load pre-trained models on all kinds of smaller devices that will then run the models using the data they collect without needing to constantly shuttle data back and forth to the cloud. ☁️
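That on-device loop can be sketched in a few lines. Everything here (the device class, the sensor read, the toy model) is a made-up illustration of the pattern, not any real edge API:

```python
# Sketch of the edge-inference pattern: the model is loaded once onto
# the device, then inference runs locally on freshly collected sensor
# data with no cloud round-trip.

class EdgeDevice:
    def __init__(self, model):
        self.model = model  # pre-trained model shipped to the device

    def read_sensor(self):
        # Stand-in for a real sensor read (e.g., camera or microphone).
        return [0.2, 0.7, 0.1]

    def infer(self):
        # All computation happens on-device; no network call here.
        return self.model(self.read_sensor())

# A trivial "model": return the index of the largest score.
device = EdgeDevice(model=lambda scores: scores.index(max(scores)))
print(device.infer())  # → 1
```

In a real deployment the lambda would be something like a compiled TensorFlow Lite model, but the shape of the loop (load once, infer locally, never ship raw data out) is the whole point of the edge.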

Google also recently announced that TensorFlow officially works on the Raspberry Pi after teaming up with the Raspberry Pi Foundation to make it easier to install. 🍓

Src: TechCrunch

More at CNET

The Big 4 vs. The World 🌎

I mentioned in yesterday’s post that there is a much higher demand for AI talent than there is supply. No surprise, but the answer for a lot of firms appears to be buying AI startups to help infuse talent. 💸

Even less of a surprise, Google, Apple, Facebook, and Amazon are leading the way when it comes to quantity of acquisitions. The mind-blowing part is that there were 115 acquisitions last year! 😱

AI acquisition graph from CB Insights

Src: CB Insights

Dataset Database 🗄

What does ML want? Data! When does it want it? All the time! But specifically, whenever you are going to train, test, and deploy a model. Where do you get this data? I’m glad you asked! 😃

Here is a collection of datasets I’ve come across. I’ll update it as I find more. ➕

Computer Vision

Autonomous Vehicles

Do you feel the need, the need for more data? Check out this list of 50 datasets from Gengo.

Updated: 06.30.18

Duplex is Back 🔙

Google retooled their Duplex calling tool a bit so it now announces itself as an automated calling service from Google. Still sounds human. 🗣️

Targeted Listening 👂

Google recently demoed an AI model that can focus in on one voice in a noisy environment. Might sound easy because we do this pretty naturally, but machines aren’t like us. Mics pick up everything in their range without distinguishing, kind of like how a picture might look different than what you saw with your eyes. 👀
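One common way speech-separation systems tackle this is by applying a learned "mask" that scales each piece of the mixed signal up or down to keep one speaker and suppress the rest. Here's a hedged toy sketch of that idea with a hand-coded mask in place of a learned one (Google's actual model learns the mask from audio plus video of the speaker's face):

```python
# Toy mask-based separation: scale each component of a mixed signal by
# a per-sample weight near 1.0 for the target voice, near 0.0 for noise.

def separate(mixture, mask):
    """Apply a per-sample mask to a mixed signal."""
    return [m * w for m, w in zip(mixture, mask)]

target = [0.5, 0.8, 0.3]   # target speaker's contribution
noise = [0.2, 0.1, 0.4]    # background chatter
mixture = [t + n for t, n in zip(target, noise)]  # what the mic hears

# An ideal mask is the target's share of each mixed sample; a real
# system has to estimate this without ever seeing `target` directly.
mask = [t / m for t, m in zip(target, mixture)]
recovered = separate(mixture, mask)  # ≈ the target voice
```

The hard (and learned) part is estimating that mask from the mixture alone, which is exactly what the neural network is for.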

This is yet another AI advance that is, as Android Police said, cool and terrifying. On the cool side, this could make digital assistants really useful. And who knows what other beneficial uses will be dreamed up. On the terrifying side, Big Brother could be listening as well as watching. 🕴

Src: Android Police

Imagination 2: Deep Learning Boogaloo 🕺

Following on the theme of yesterday’s post, DeepMind is really working hard at adding imagination to AI. 💭

This time they worked on using “imagination” to extrapolate and predict future states based on current states. Really, it’s predictive analytics. 🔮

Again, think of the potential for self-driving cars: being able to essentially see into the future is a very useful skill when it comes to traffic and pedestrians. 🚗
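At its core, this "imagination" is a rollout: apply a transition model to the current state over and over to predict a trajectory, without ever acting in the real world. A minimal sketch, with hand-written toy dynamics standing in for DeepMind's learned model:

```python
# Roll a state forward through a transition model to "imagine" the future.

def transition(state):
    # Toy dynamics: a car's position advances by its velocity each step.
    position, velocity = state
    return (position + velocity, velocity)

def imagine(state, steps):
    """Return the predicted sequence of states, starting from `state`."""
    trajectory = [state]
    for _ in range(steps):
        state = transition(state)
        trajectory.append(state)
    return trajectory

future = imagine((0, 2), steps=3)  # → [(0, 2), (2, 2), (4, 2), (6, 2)]
```

The research challenge is learning a `transition` function from data that stays accurate many steps out; the rollout loop itself is this simple.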

Src: Towards Data Science

Imagine Robots Imagining 💭

DeepMind, of beating humans at Go fame, has now created an AI that can imagine things. Kind of… 💬

Their new GQN model can look at a scene from a few angles and “imagine” what it will look like from another angle. 🔀

It’s a small step towards that ever-popular goal of making AI more like humans. I could see it being a very useful skill for self-driving cars, helping them contextualize what’s around them and potentially “see” around corners and the like. 🚗

Src: MIT Tech Review