Google Wages War on Gender

OK, not really. But I can imagine that being the headline of some inflammatory "news" article. 🗞️

They’re working to remove implicit, societal gender bias from machine translations in Google Translate by changing the underlying architecture of the machine learning model they use. Basically, the model now produces both a masculine and a feminine version and then determines which is most likely needed. It appears that in some cases, like translating from the gender-neutral Turkish, the system will return both versions. ✌️
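Conceptually, that works out to something like the sketch below. The phrase table, language set, and helper function are all invented for illustration; this is the shape of the idea, not Google's actual pipeline:

```python
# Toy sketch of gender-aware translation output (hypothetical helpers,
# not Google's API). Turkish "o" can mean "he" or "she".
PHRASES = {
    "o bir doktor": {"masculine": "he is a doctor", "feminine": "she is a doctor"},
}

GENDER_NEUTRAL_PRONOUN_LANGS = {"tr"}  # languages whose pronouns carry no gender

def translate(text, src="tr"):
    variants = PHRASES[text.lower()]
    if src in GENDER_NEUTRAL_PRONOUN_LANGS and variants["masculine"] != variants["feminine"]:
        # The source gives no gender signal: surface both translations
        # instead of silently guessing.
        return variants
    # Otherwise a single best translation suffices.
    return {"best": variants["masculine"]}

print(translate("O bir doktor"))
# {'masculine': 'he is a doctor', 'feminine': 'she is a doctor'}
```

The interesting design choice is refusing to guess when the source carries no gender signal, rather than defaulting to whichever form dominates the training data.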

This comes after they announced that gendered pronouns will be removed from Gmail's Smart Compose feature because it was showing biased tendencies in its recommendations. 📧

It’s early in the process, but they appear dedicated to this work and have big dreams. 🔮

This is just the first step toward addressing gender bias in machine-translation systems and reiterates Google's commitment to fairness in machine learning. In the future, we plan to extend gender-specific translations to more languages and to address non-binary gender in translations.

Src: Google AI blog

Ghosts as Ethical Proxies 👻

Proof that the data used to train systems can impact the ethical fallout of their performance. And potentially the beginning of us getting a look under the hood of these still-mysterious systems. I'm interested to see how this approach will be applied to non-toy scenarios. 📡

Src: Fast Company

Hey, Who's Driving that Trolley? 🚋

It is interesting to think about how cultural differences will impact reactions to some AI decisions and could make it difficult to scale a product without localizing it. 🤔

A study by the MIT Media Lab collected global input on variations of the classic ethical thought experiment, the trolley problem, and found interesting distinctions between cultures. This could help AI developers work through ethics and bias issues, especially in the autonomous vehicle space. But the study noted an important caveat: this data isn't a requirement or even a suggestion, just an input for consideration. Problematic trends shouldn't be perpetuated in software. 🚫

Src: MIT Tech Review

When People Do Bad Things With Algorithms 😈

There is a lot of concern surrounding decisions made by algorithms and the bias baked into those systems, but that is far from the only concern. These podcast episodes do a tremendous job illustrating what happens when people use neutral data for the wrong things, when the purpose behind the data becomes perverted. At the end of the day, AI systems and algorithms are the products of humans, and we are far from perfect, logical, and rational. We're still our own worst enemy. 👤

Src: The Crime Machine, Part 1

Src: The Crime Machine, Part 2

When Unbiased Machines Meet Biased Humans 🥊

I’m worried about the implications of transferring our biases to machines and then turning the speed dial up to 11. But I hadn't thought about how we biased mortals might react to truly unbiased decision making. 🤯

So while creating unbiased systems is important, it doesn't guarantee success in the messy real world once decisions are made. Ultimately, debiasing systems might not matter if we don't have the backbone to stick by unpopular results. (That's not a dig at the Boston school system; this scenario was probably guaranteed to be a mess no matter what.) 💪

Src: Boston Globe

Bolt On a Watchdog

IBM has launched a tool as part of their cloud platform that detects bias in AI models and adds a dash of explainability. ☁️

It looks like this might be the new competition between service providers, and that's not a bad thing. A huge upside of AI systems is that they can make decisions free of the bias that infects humanity, but they don't do so by magic. A lot of bias can be added accidentally (or overtly, of course) by the humans collecting the data and building the systems. Hopefully these new tools start paving the way for a less biased future. ⚖️

The fully automated SaaS explains decision-making and detects bias in AI models at runtime — so as decisions are being made — which means it's capturing "potentially unfair outcomes as they occur", as IBM puts it.

It will also automatically recommend data to add to the model to help mitigate any bias that has been detected.
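As a rough illustration of what detecting bias "as decisions are being made" can mean, here's a toy runtime monitor that tracks the disparate impact ratio (the four-fifths rule of thumb) over a stream of decisions. The metric and the API are my assumptions for the sketch, not IBM's actual implementation:

```python
from collections import defaultdict

class BiasMonitor:
    """Toy runtime monitor: flags disparate impact as decisions stream in."""

    def __init__(self, threshold=0.8):  # four-fifths rule of thumb
        self.threshold = threshold
        self.counts = defaultdict(lambda: [0, 0])  # group -> [favorable, total]

    def record(self, group, favorable):
        self.counts[group][0] += int(favorable)
        self.counts[group][1] += 1

    def disparate_impact(self):
        rates = {g: fav / tot for g, (fav, tot) in self.counts.items() if tot}
        if len(rates) < 2:
            return None  # nothing to compare yet
        return min(rates.values()) / max(rates.values())

    def flagged(self):
        ratio = self.disparate_impact()
        return ratio is not None and ratio < self.threshold

monitor = BiasMonitor()
for group, approved in [("a", True), ("a", True), ("b", True), ("b", False)]:
    monitor.record(group, approved)
print(monitor.disparate_impact())  # 0.5 -> well under the 0.8 threshold
print(monitor.flagged())           # True: unfair outcomes caught as they occur
```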

And there is a win for centaurs: 🙌

it will be both selling AI, 'a fix' for AI's imperfections, and experts to help smooth any wrinkles when enterprises are trying to fix their AIs… Which suggests that while AI will indeed remove some jobs, automation will be busy creating other types of work.

Src: TechCrunch

The Unbias 3000 🤖

Accenture has developed, and is rolling out in beta, a tool to help uncover bias in algorithms. Man, I hope this works. 🙏

I am really interested to know more about how their Fairness Tool works. My guess is that it basically runs another, specially labeled data set through the algorithm so the outputs can be measured on scales that probably aren't coded in or anticipated. 🤷‍♂️
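If that guess is anywhere near right, the core of such a tool might look like this sketch: probe an opaque model with a labeled data set and compare favorable-outcome rates across a sensitive attribute the model never sees directly. Everything here is my speculation, not Accenture's code:

```python
def probe_for_bias(model, probe_set, sensitive_attr="gender"):
    """Run a labeled probe set through a black-box model and compare
    favorable-outcome rates across groups."""
    outcomes = {}
    for example in probe_set:
        group = example[sensitive_attr]
        favorable, total = outcomes.get(group, (0, 0))
        outcomes[group] = (favorable + int(model(example["features"])), total + 1)
    return {group: fav / tot for group, (fav, tot) in outcomes.items()}

# A toy model that never touches gender, but keys on a correlated proxy:
probe_set = [
    {"features": {"zip": "A"}, "gender": "f"},
    {"features": {"zip": "A"}, "gender": "f"},
    {"features": {"zip": "B"}, "gender": "m"},
    {"features": {"zip": "B"}, "gender": "m"},
]
proxy_model = lambda feats: feats["zip"] == "B"
print(probe_for_bias(proxy_model, probe_set))  # {'f': 0.0, 'm': 1.0}
```

The proxy-variable case is exactly why measuring on scales that aren't coded in matters: a model can be blind to gender and still discriminate through a correlate.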

For some reason I was really skeptical this tool would work at all when I first started reading, but I think that was due to a tech bias: assuming a non-tech giant couldn't possibly crack this nut. Which is another reason I want this to work. We need diversity not only in our data sets and results, but in the players working in the field and developing solutions of all kinds. 👏

"I sometimes jokingly say that my goal in my job is to make the term 'responsible AI' redundant. It needs to be how we create artificial intelligence."

Src: Fast Company

Facial Rec Tech Mess 😟

This article is short but meaty. Synopsis: a lot of people are concerned about the current state of facial recognition and what it could mean for the future. I'm going to use the same headings as the MIT post and offer my random thoughts. 💭

The questioners: The call for regulation and safeguards around facial recognition has been sounded. It is definitely a field that warrants a closer look by various watchdog groups, due to the concerns and potentials outlined below. 📯

Eye spies: China has a very robust recognition system in place. China is also an authoritarian state that controls information and runs a social credit scoring system. Facial recognition allows a level of monitoring and control, whether governmental or military, that hasn't been truly feasible until now. And when the tech giants are asking for regulation, you know something's up. Do we want to be like China? 🇨🇳

Oh, (big) brother: News flash, facial recognition might not be perfect! My bigger concern is Amazon's response to the ACLU's findings: that "the system was used incorrectly." Really? That's the response? Issue #1: blaming the user has not been going well for tech companies lately, so I'm not sure this was the best course of action. Issue #2: WHY CAN THE SYSTEM BE USED INCORRECTLY?!?!? Sorry for the yelling, but if the ACLU can use it incorrectly, then every law enforcement agency using the software can also use it incorrectly. This seems like a big problem with the system. Maybe make the system simple and foolproof before sending it out into the wild to determine people's fates and futures. 🤦 🤦‍♀️
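For what it's worth, the "incorrect use" reportedly came down to a configurable match-confidence threshold: the ACLU ran its test at the default setting rather than the much higher confidence level Amazon says law enforcement should use. A toy sketch, with fabricated scores, of why one exposed knob matters so much:

```python
# Fabricated similarity scores for one face searched against a photo database.
matches = [("person_a", 0.99),  # genuine match
           ("person_b", 0.85),  # look-alike
           ("person_c", 0.81)]  # look-alike

def hits(threshold):
    return [name for name, score in matches if score >= threshold]

print(hits(0.80))  # default-style threshold: three "matches", two of them false
print(hits(0.99))  # strict threshold: only the genuine match survives
```

If safe behavior depends on every downstream user picking the right value, that's a design problem, not a user problem.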

Bias baked in: Nothing new here, but another reminder that bias is a very real factor in these systems and needs to be addressed early and often in the process. One big step to help would be creating and using more diverse data sets. 👍

Src: MIT Tech Review

Unbiased Faces 👶🏻👩🏽👴🏿

IBM will be releasing a data set of faces spanning ethnicities, genders, and ages, both to help avoid bias in future facial recognition systems and to test existing systems for bias. Simply put, this is awesome. 🙌

It’s also interesting to see how ethics, fairness, and openness are being used as positive differentiators by major competitors in this new tech race. 🏃‍♀️🏃‍♂️

Src: Axios

Osonde Osoba on AI 🎙

I find myself agreeing with a lot of what this guy says. Confirmation bias FTW! 🙌

Here are some quotes that represent what really spoke to me. 🗣

Artificial intelligence is a way of understanding what it means to be human beings.

We need to think about AI in terms of value alignment — I think that's a better framework than, say, fairness and bias.

I think when we look back at what the U.S. intelligence community has concluded were Russian attempts to intervene in the 2016 presidential election, we’ll probably think those are child’s play. I would bet money that there’s going to be escalation on that front.

I wonder what level of intelligence would be required before we start thinking of autonomous systems less as this "other" and more as parts of our society.

Src: RAND Review