Google Wages War on Gender

Ok, not really. But I can imagine that being a headline of some inflammatory “news” article. 🗞️

They’re working to remove implicit, societal gender bias from machine translations in Google Translate by changing the underlying architecture of the machine learning model they use. Basically, the model now produces both a masculine and a feminine version and then determines which is most likely needed. It appears that in some cases, like translating from the gender-neutral Turkish language, the system will return both versions. ✌️
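For the curious, here’s a toy sketch of that generate-both-then-decide idea (my own illustration with a hand-built, one-sentence “model”, not Google’s actual architecture):

```python
# Toy illustration, NOT Google's system: when the source language is
# gender-neutral, generate both gendered renderings and return both
# rather than silently guessing one.

# Hand-built "model" that knows exactly one Turkish sentence.
TOY_TRANSLATIONS = {
    ("tr", "en", "o bir doktor"): {
        "masculine": "he is a doctor",
        "feminine": "she is a doctor",
    }
}

def translate(sentence: str, src: str = "tr", dst: str = "en") -> dict:
    candidates = TOY_TRANSLATIONS.get((src, dst, sentence.lower()))
    if candidates is None:
        raise KeyError("this toy only knows 'O bir doktor'")
    # The real system scores each candidate and decides whether context
    # pins down the gender; with a gender-neutral source, show both.
    return candidates

print(translate("O bir doktor"))
# {'masculine': 'he is a doctor', 'feminine': 'she is a doctor'}
```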

The translation work comes after they announced that all gendered pronouns would be removed from Gmail’s Smart Compose feature because it was showing biased tendencies in its suggestions. 📧

It’s early in the process, but it appears that they are dedicated to this work and have big dreams. 🔮

This is just the first step toward addressing gender bias in machine-translation systems and reiterates Google’s commitment to fairness in machine learning. In the future, we plan to extend gender-specific translations to more languages and to address non-binary gender in translations.

Src: Google AI blog

When People Do Bad Things With Algorithms 😈

There is a lot of concern surrounding decisions made by algorithms and bias baked into said systems, but that is far from the only concern. These podcast episodes do a tremendous job illustrating what happens when people use neutral data for the wrong things, when the purpose behind the data becomes perverted. At the end of the day, AI systems and algorithms are the products of humans, and we are far from perfect, logical, and rational. We’re still our own worst enemy. 👤

Src: The Crime Machine, Part 1

Src: The Crime Machine, Part 2

When Unbiased Machines Meet Biased Humans 🥊

I’m worried about the implications of transferring our biases to machines and then turning up the speed dial to 11. But I hadn’t thought about how we biased mortals might react to truly unbiased decision making. 🤯

So while creating unbiased systems is important, it doesn’t guarantee success in the messy real world once decisions are made. Ultimately, debiasing systems might not matter if we don’t have the backbone to stick by unpopular results. (That’s not a dig at the Boston school system; this scenario was probably guaranteed to be a mess no matter what.) 💪

Src: Boston Globe

The Unbias 3000 🤖

Accenture has developed and is rolling out, in beta, a tool to help uncover bias in algorithms. Man, I hope this works. 🙏

I am really interested to know more about how their Fairness Tool works. My guess is that it basically runs a separate, labeled test set through the algorithm so that the outputs can be measured on scales that probably aren’t coded in or anticipated. 🤷‍♂️
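To make that guess concrete, here’s a minimal sketch of one such measurement: compare a model’s positive-outcome rates across groups on a held-out set. All the data and the threshold below are made up, and this is my speculation, not Accenture’s actual tool.

```python
from collections import defaultdict

# Made-up audit records: (prediction, group) pairs from a held-out test set.
predictions = [
    (1, "group_a"), (0, "group_a"), (1, "group_a"), (1, "group_a"),
    (1, "group_b"), (0, "group_b"), (0, "group_b"), (0, "group_b"),
]

# Demographic parity check: positive-outcome rate per group.
totals, positives = defaultdict(int), defaultdict(int)
for pred, group in predictions:
    totals[group] += 1
    positives[group] += pred

rates = {g: positives[g] / totals[g] for g in totals}
print(rates)  # {'group_a': 0.75, 'group_b': 0.25}

# A big gap between groups flags the model for a closer look.
ratio = min(rates.values()) / max(rates.values())
print(f"disparate impact ratio: {ratio:.2f}")  # 0.33, below the common 0.8 rule of thumb
```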

For some reason I was really skeptical this tool would work at all when I first started reading, but I think that was due to a tech bias: assuming anyone but a tech giant couldn’t possibly crack this nut. Which is another reason I want this to work. We need diversity not only in our data sets and results, but in the players working in the field and developing solutions of all kinds. 👏

“I sometimes jokingly say that my goal in my job is to make the term ‘responsible AI’ redundant. It needs to be how we create artificial intelligence.”

Src: Fast Company

The Deciding Tree 🌳

This is a really great description of decision trees with some lovely visuals. It also contains a good overview of overfitting. 👌

Decision trees might not seem as sexy as other algorithmic approaches, but it’s hard to argue with the results. It also strikes me how similar this process seems to the way humans approach a lot of experience-based decision making. ✔️

The basics: decision trees are flowcharts derived from data. ⏹➡️⏺
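If you want to poke at the idea yourself, here’s a quick scikit-learn sketch (my own example, not from the post): fit a depth-limited tree and print the flowchart it learned.

```python
# The "flowchart derived from data" idea, in a few lines of scikit-learn.
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier, export_text

iris = load_iris()
X_train, X_test, y_train, y_test = train_test_split(
    iris.data, iris.target, random_state=0
)

# max_depth caps how many questions the flowchart may ask -- the usual
# guard against the overfitting the post describes.
tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X_train, y_train)
print(f"test accuracy: {tree.score(X_test, y_test):.2f}")

# The learned flowchart itself, printed as nested if/else questions.
print(export_text(tree, feature_names=iris.feature_names))
```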

Src: Data Science and Robots Blog

AI Replacing Animals 🐁🐀

An algorithm has been developed for testing chemical compound toxicity that has, so far, been as accurate as animal testing. This could be huge for pharma R&D and any other industry that relies on animal testing. It should make the process faster and easier and should make animal rights supporters happy; basically, it should be a win all around. 🙌

This algorithm is also an example of how AIs don’t replace human intelligence but augment it. This model is a different form of intelligence. Humans can’t apply the same level of analysis to this problem that the algorithm can. The human “intelligence” approach is to rub something on a rat and wait to see what happens. The machine approach is to look at all the prior test results, find patterns that might be invisible to us, and predict what effect a new compound is likely to have. And it does this rapidly and safely. Cue up the battle cry, centaurs FTW! 🤘
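Greatly simplified, that pattern-finding approach resembles nearest-neighbor “read-across”: predict a new compound from its most similar, already-tested neighbors. Here’s a toy sketch with made-up descriptors (the real model works over enormous databases of chemical similarity, not two features):

```python
# Toy "read-across" sketch: predict toxicity from similar, already-tested
# compounds. The descriptors and labels are invented for illustration.
from sklearn.neighbors import KNeighborsClassifier

# Each row: made-up structural descriptors; labels: 1 = toxic, 0 = not.
known_compounds = [[0.1, 0.9], [0.2, 0.8], [0.9, 0.1], [0.8, 0.2]]
toxicity = [1, 1, 0, 0]

model = KNeighborsClassifier(n_neighbors=3).fit(known_compounds, toxicity)

new_compound = [[0.15, 0.85]]  # structurally close to the known toxins
print(model.predict(new_compound))        # [1]
print(model.predict_proba(new_compound))  # neighbor vote: [[0.33... 0.66...]]
```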

Src: Quartz

A Cautionary Tale ⚠️

Automation can be a wonderful thing. It can also take a small human error and snowball it out of control, as this story illustrates. Systems need fail-safes and other checks so that humans can intervene if/when needed. Technology is supposed to help us, not control us. 🚥
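Here’s the kind of fail-safe I mean, as a tiny sketch (the baseline and threshold are invented for illustration, not taken from the story): automated actions that look wildly out of range get held for a human instead of executed.

```python
# Illustrative guardrail pattern: actions above a sanity threshold pause
# and wait for a human instead of compounding a small mistake.
NORMAL_DAILY_CHARGE = 50.00  # invented baseline
SANITY_MULTIPLIER = 10       # invented threshold

def process_charge(amount: float) -> str:
    if amount > NORMAL_DAILY_CHARGE * SANITY_MULTIPLIER:
        # Fail-safe: flag and stop so a human can intervene.
        return f"HELD for human review: ${amount:,.2f} is far outside normal"
    return f"charged ${amount:,.2f}"

print(process_charge(42.00))     # charged $42.00
print(process_charge(72000.00))  # HELD for human review: ...
```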

This is one of the reasons why I am in the centaur and Intelligence Augmentation camp versus the replace-all-humans-with-machines camp. ⛺

We have a wonderful opportunity in front of us; we need to make sure we don’t squander it through laziness, ignorance, or both. ↔️

Recommended Read: Src: Idiallo

Chatbot Gone Wrong 🚫

I love me some hilarious AI hijinx, and the Letting Neural Networks Be Weird blog is a perfect source for it. This post about less-than-correct conversations about various images is also a good reminder that AI doesn’t intuit or truly understand things the way we do. These models learn, but in a far different fashion from humans. However, this model seems to have learned humans’ unwillingness to be wrong and general penchant for obstinately sticking to their original opinion. 🗿

Src: AI Weirdness

Osonde Osoba on AI 🎙

I find myself agreeing with a lot of what this guy says. Confirmation bias FTW! 🙌

Here are some quotes that represent what really spoke to me. 🗣

Artificial intelligence is a way of understanding what it means to be human beings.

We need to think about AI in terms of value alignment – I think that’s a better framework than, say, fairness and bias.

I think when we look back at what the U.S. intelligence community has concluded were Russian attempts to intervene in the 2016 presidential election, we’ll probably think those are child’s play. I would bet money that there’s going to be escalation on that front.

I wonder what level of intelligence would be required before we start thinking of autonomous systems less as this “other” and more as parts of our society.

Src: RAND Review

Listen Up 🔊: Digital Evolution

This episode of Software Engineering Daily is a great listen on the topic of digital evolution, or the process of allowing algorithms to grow and evolve. 🦖

This field leads to a lot of funny results, but there is an important lesson in them, and it is probably my favorite moment of the show: we humans are really bad at stating what we want in a way that won’t lead to unforeseen outcomes or loopholes. 🤦‍♂️🤦‍♀️

Which makes sense: language is both a tool and a virus. And it functions as much through the interpretation of the recipient as the intent of the user. 🛠😷
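A tiny toy run of that loophole problem (my own example, not from the episode): I mean “sort my data,” but the fitness function I write only rewards in-order neighbors, so evolution happily rewrites the data instead of sorting it.

```python
import random

random.seed(0)

# What I *meant*: rearrange this data into sorted order.
data = [5, 3, 8, 1, 9, 2]

# What I *wrote*: reward adjacent pairs that are in order. Nothing says the
# evolved list has to keep the original numbers -- that's the loophole.
def fitness(xs):
    return sum(xs[i] <= xs[i + 1] for i in range(len(xs) - 1))

best = data[:]
for _ in range(2000):
    child = best[:]
    child[random.randrange(len(child))] = random.randint(0, 9)  # mutate
    if fitness(child) >= fitness(best):
        best = child

print(fitness(best), best)
# Perfect fitness, "in order" -- but the original numbers are long gone.
```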

Src: Software Engineering Daily