Mensing.AI


Learnings & Musings on AI, ML, Data Science & Python

Google Wages War on Gender

Ok, not really. But I can imagine that being a headline of some inflammatory "news" article. 🗞️ They're working to remove implicit, societal gender bias from machine translations in Google Translate by changing the underlying architecture of the machine learning model they use. Basically, the model now produces a masculine and feminine version and then determines which is most likely … Read More
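To make that concrete, here's a minimal toy sketch of the produce-both-then-decide idea (my own illustration in Python, not Google's actual code; the generation and scoring functions are dummy stand-ins):

# Toy sketch of "generate both genders, then decide" for translation.
# generate_translation and score_likelihood are dummy stand-ins, not
# real Google Translate APIs; they only illustrate the flow.
def generate_translation(sentence, gender):
    # Stand-in for a decoder constrained to one grammatical gender.
    toy = {
        ("o bir doktor", "masculine"): "he is a doctor",
        ("o bir doktor", "feminine"): "she is a doctor",
    }
    return toy.get((sentence, gender), sentence)

def score_likelihood(translation):
    # Stand-in for a language-model score of how likely the candidate is.
    return 1.0 / (1.0 + len(translation))

def translate_gender_aware(sentence, margin=0.05):
    candidates = {g: generate_translation(sentence, g)
                  for g in ("masculine", "feminine")}
    scores = {g: score_likelihood(t) for g, t in candidates.items()}
    # If neither reading is clearly more likely, surface both instead of guessing.
    if abs(scores["masculine"] - scores["feminine"]) < margin:
        return candidates
    best = max(scores, key=scores.get)
    return {best: candidates[best]}

print(translate_gender_aware("o bir doktor"))  # gender-neutral Turkish source -> shows both

Presumably the real system does this inside the neural model itself, but the shape of the decision is the same: score gender-specific candidates and fall back to showing both when the source sentence doesn't specify.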


Ghosts as Ethical Proxies 👻

Proof that the data used to train systems can impact the ethical fallout of their performance. And potentially the beginning of us getting a look under the hood at these still-mysterious systems. I am interested to see how this approach will be applied to non-toy scenarios. 📑 Src: Fast Company


Hey, Who's Driving that Trolley? 🚋

It is interesting to think about how cultural differences will impact reactions to some AI decisions and could make it difficult to scale a product without localizing it. 🤔 A study by the MIT Media Lab collected global input on variations of the classic ethical dilemma thought experiment, the trolley problem, and found interesting distinctions between cultures. This could … Read More


When People Do Bad Things With Algorithms 😈

There is a lot of concern surrounding decisions made by algorithms and bias baked into said systems, but that is far from the only concern. These podcast episodes do a tremendous job illustrating what happens when people use neutral data for the wrong things, when the purpose behind the data becomes perverted. At the end of the day AI systems … Read More


When Unbiased Machines Meet Biased Humans 🥊

I'm worried about the implications of transferring our biases to machines and then turning up the speed dial to 11. But I hadn't thought about how we biased mortals might react to truly unbiased decision making. 🤯 So while creating unbiased systems is important, it doesn't guarantee success in the messy real world once decisions are made. Ultimately debiasing systems … Read More


Bolt On a Watchdog

IBM has launched a tool as part of their cloud platform that detects bias and adds a dash of explainability to AI models. ☁️ It looks like this might be the new competition between service providers, and that's not a bad thing. A huge upside of AI systems is that they can make decisions free of the bias that infects humanity, … Read More
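For a sense of what "detects bias" can mean in numbers, here is a generic disparate-impact check (my own illustration, not IBM's actual tooling or API): the rate of favorable outcomes for the unprivileged group divided by the rate for the privileged group.

import numpy as np

def disparate_impact(y_pred, group):
    # Ratio of favorable-outcome rates: unprivileged (group == 0) over
    # privileged (group == 1). A value well below 1.0 means the model
    # favors the privileged group; ~0.8 is a commonly cited warning line.
    y_pred, group = np.asarray(y_pred), np.asarray(group)
    return y_pred[group == 0].mean() / y_pred[group == 1].mean()

# Toy example: 1 = favorable decision (e.g. loan approved).
preds  = [1, 0, 0, 1, 1, 1, 0, 1]
groups = [0, 0, 0, 0, 1, 1, 1, 1]
print(disparate_impact(preds, groups))  # 0.5 / 0.75 ≈ 0.67, worth flagging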


The Unbias 3000 🤖

Accenture has developed and is rolling out, in beta, a tool to help uncover bias in algorithms. Man, I hope this works. 🙏 I am really interested to know more about how their Fairness Tool works. My guess is it basically runs another training set through the algorithm, one that is labeled in such a way that the outputs can be … Read More
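Running with that guess (purely my own sketch, not Accenture's actual Fairness Tool), the audit-set idea might look like this: push a held-out, labeled set with a sensitive attribute attached through the trained model and compare error rates across groups.

import numpy as np

def error_rate_gaps(y_true, y_pred, group):
    # Compare false-positive and false-negative rates between group 0 and
    # group 1 on a labeled audit set; large gaps mean the model's mistakes
    # fall unevenly on one group.
    y_true, y_pred, group = map(np.asarray, (y_true, y_pred, group))
    gaps = {}
    for name, actual in (("fpr_gap", 0), ("fnr_gap", 1)):
        rates = [(y_pred[(y_true == actual) & (group == g)] != actual).mean()
                 for g in (0, 1)]
        gaps[name] = abs(rates[0] - rates[1])
    return gaps

# Toy audit set: ground truth, model predictions, group membership.
y_true = [0, 0, 1, 1, 0, 0, 1, 1]
y_pred = [1, 0, 1, 0, 0, 0, 1, 1]
group  = [0, 0, 0, 0, 1, 1, 1, 1]
print(error_rate_gaps(y_true, y_pred, group))  # {'fpr_gap': 0.5, 'fnr_gap': 0.5}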


Facial Rec Tech Mess 😟

This article is short but meaty. Synopsis: a lot of people are concerned about the current state of facial recognition and what it could mean for the future. I'm going to use the same headings as the MIT post and offer my random thoughts. 💭 The questioners: The call for regulation and safeguards around facial recognition has been sounded. It … Read More


Unbiased Faces 👶🏻👩🏽👴🏿

IBM will be releasing a data set of faces across all ethnicities, genders, and ages to both avoid bias in future facial recognition systems and test existing systems for bias. Simply put, this is awesome. 🙌 It's also interesting to see how ethics, fairness, and openness are being used as positive differentiators by major competitors in this new tech race. … Read More


Osonde Osoba on AI 🎙

I find myself agreeing with a lot of what this guy says. Confirmation bias FTW! 🙌 Here are some quotes that represent what really spoke to me. 🗣 Artificial intelligence is a way of understanding what it means to be human beings. We need to think about AI in terms of value alignment; I think that's a better framework than, say, … Read More