When People Do Bad Things With Algorithms 😈

There is a lot of concern about decisions made by algorithms and the bias baked into those systems, but that is far from the only worry. These podcast episodes do a tremendous job illustrating what happens when people use neutral data for the wrong things, when the purpose behind the data becomes perverted. At the end of the day, AI systems and algorithms are the products of humans, and we are far from perfect, logical, and rational. We're still our own worst enemy. 👀

Src: The Crime Machine, Part 1

Src: The Crime Machine, Part 2

When Unbiased Machines Meet Biased Humans 🥊

I'm worried about the implications of transferring our biases to machines and then turning up the speed dial to 11. But I hadn't thought about how we biased mortals might react to truly unbiased decision making. 🤯

So while creating unbiased systems is important, it doesn't guarantee success in the messy real world once decisions are made. Ultimately, debiasing systems might not matter if we don't have the backbone to stick by unpopular results. (That's not a dig at the Boston school system; this scenario was probably guaranteed to be a mess no matter what.) 💪

Src: Boston Globe

The Unbias 3000 🤖

Accenture has developed and is rolling out, in beta, a tool to help uncover bias in algorithms. Man, I hope this works. 🙏

I am really interested to know more about how their Fairness Tool works. My guess is that it basically runs another training set through the algorithm, one labeled in such a way that the outputs can be measured on scales that probably aren't coded in or anticipated. 🤷‍♂️
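To be clear, I have no inside knowledge of how Accenture's tool actually works, but the simplest version of that guess, run labeled test data through a model and compare outcomes across groups the model never saw, can be sketched like this (the model, data, and group labels are all made up):

```python
# Hypothetical sketch of a bias audit, NOT Accenture's actual Fairness Tool:
# score a labeled test set and compare approval rates across a sensitive
# attribute that was never a model input.
def audit(model, test_rows):
    """test_rows: list of (features, group) pairs; returns approval rate per group."""
    approvals, totals = {}, {}
    for features, group in test_rows:
        totals[group] = totals.get(group, 0) + 1
        approvals[group] = approvals.get(group, 0) + model(features)
    return {g: approvals[g] / totals[g] for g in totals}

# A toy "model" that (unfairly, as it turns out) keys off the first feature.
toy_model = lambda features: 1 if features[0] > 5 else 0

rows = [((8,), "a"), ((7,), "a"), ((2,), "b"), ((9,), "b")]
rates = audit(toy_model, rows)
# A large gap between group approval rates ("demographic parity") flags bias.
gap = abs(rates["a"] - rates["b"])
```

The point is that the fairness scale (the per-group gap) lives outside the model, exactly the kind of measurement that "probably isn't coded in or anticipated."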

For some reason I was really skeptical this tool would work at all when I first started reading, but I think that was due to a tech bias: assuming any non-tech-giant couldn't possibly crack this nut. Which is another reason I want this to work. We need diversity not only in our data sets and results, but also in the players working in the field and developing solutions of all kinds. 👏

"I sometimes jokingly say that my goal in my job is to make the term 'responsible AI' redundant. It needs to be how we create artificial intelligence."

Src: Fast Company

Facial Rec Tech Mess 😟

This article is short but meaty. Synopsis: a lot of people are concerned about the current state of facial recognition and what it could mean for the future. I'm going to use the same headings as the MIT post and offer my random thoughts. 💭

The questioners: The call for regulation and safeguards around facial recognition has been sounded. It is definitely a field that warrants a closer look by various watchdog groups due to the concerns and potential harms outlined below. 📯

Eye spies: China has a very robust recognition system in place. China is also an authoritarian state that controls information and runs a social credit scoring system. Facial recognition can allow a level of monitoring and control, governmental or military, that hasn't been truly feasible until now. And when the tech giants are asking for regulation, you know something's up. Do we want to be like China? 🇨🇳

Oh, (big) brother: News flash, facial recognition might not be perfect! My bigger concern is that Amazon's response to the ACLU's findings is that "the system was used incorrectly." Really? That's the response? Issue #1: blaming the user has not been going well for tech companies lately, so I'm not sure this was the best course of action. Issue #2: WHY CAN THE SYSTEM BE USED INCORRECTLY?!?! Sorry for the yelling, but if the ACLU can use it incorrectly, that means every law enforcement agency using the software can also use it incorrectly. This seems like a big problem with the system. Maybe make the system simple and foolproof before sending it out into the wild to determine people's fates and futures. 🤦 🤦‍♀️

Bias baked in: Nothing new here, but another reminder that bias is a very real factor in these systems and needs to be addressed early and often in the process. One big step to help would be creating and using more diverse data sets. 👍

Src: MIT Tech Review

The Deciding Tree 🌳

This is a really great description of decision trees with some lovely visuals. It also contains a good overview of overfitting. 👌

Decision trees might not seem as sexy as other algorithmic approaches, but it's hard to argue with the results. It also strikes me how similar this process seems to the way humans approach a lot of experience-based decision making. ✔️

The basics: decision trees are flowcharts derived from data. ⏹➡️⏺
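To make "flowcharts derived from data" concrete, here's a minimal toy sketch (all data made up): the "flowchart" is a single yes/no question, and it is learned by scanning the data for the question that misclassifies the fewest examples.

```python
# A one-question decision tree (a "stump"), learned from toy data.
# Made-up data: (hours_studied, passed_exam)
data = [(1, 0), (2, 0), (3, 0), (6, 1), (7, 1), (9, 1)]

def best_threshold(points):
    """Try each candidate split and keep the one with the fewest errors."""
    best, best_errors = None, len(points) + 1
    for threshold, _ in points:
        # Predict "pass" when hours >= threshold, "fail" otherwise.
        errors = sum((hours >= threshold) != bool(label) for hours, label in points)
        if errors < best_errors:
            best, best_errors = threshold, errors
    return best

split = best_threshold(data)  # the learned flowchart is one question

def predict(hours):
    # The flowchart: "Did they study at least `split` hours?" -> yes/no leaf
    return 1 if hours >= split else 0
```

A real decision tree just repeats this splitting step recursively inside each branch, which is also where the overfitting risk comes from: enough questions can memorize the training data.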

Src: Data Science and Robots Blog

AI Replacing Animals 🐁🐀

An algorithm has been developed for testing chemical compound toxicity that has, so far, been as accurate as animal testing. This could be huge for pharma R&D and any other industry that relies on animal testing. It should make the process faster and easier and should make animal rights supporters happy; basically, it should be a win all around. 🙌

This algorithm is also an example of how AIs don't replace human intelligence but augment it. This model is a different form of intelligence. Humans can't perform the same level of analysis on this problem that the algorithm has. The human "intelligence" approach is to rub something on a rat and wait to see what happens. The machine approach is to look at all the prior test results, find patterns that might be invisible to us, and predict what effect a new compound is likely to have. And it does this rapidly and safely. Cue up the battle cry, centaurs FTW! 🤘

Src: Quartz

A Cautionary Tale ⚠️

Automation can be a wonderful thing. It can also take a small human error and snowball it out of control, as this story illustrates. Systems need fail-safes and other checks so that humans can intervene if/when needed. Technology is supposed to help us, not control us. 🚥

This is one of the reasons why I am in the centaur and Intelligence Augmentation camp versus the replace-all-humans-with-machines camp. ⛺

We have a wonderful opportunity in front of us; we need to make sure we don't squander it through laziness, ignorance, or both. ↔️

Recommended Read: Src: Idiallo

Chatbot Gone Wrong 🚫

I love me some hilarious AI hijinks, and the Letting Neural Networks Be Weird blog is a perfect source for them. This post about a model's less-than-correct conversations about various images is also a good reminder that AI doesn't intuit or truly understand things the way we do. These models learn, but in a far different fashion from humans. However, this model seems to have learned humans' unwillingness to be wrong and general penchant for obstinately sticking to an original opinion. 🗿

Src: AI Weirdness

Osonde Osoba on AI 🎙

I find myself agreeing with a lot of what this guy says. Confirmation bias FTW! 🙌

Here are some quotes that represent what really spoke to me. 🗣

Artificial intelligence is a way of understanding what it means to be human beings.

We need to think about AI in terms of value alignment; I think that's a better framework than, say, fairness and bias.

I think when we look back at what the U.S. intelligence community has concluded were Russian attempts to intervene in the 2016 presidential election, we’ll probably think those are child’s play. I would bet money that there’s going to be escalation on that front.

I wonder what level of intelligence would be required before we start thinking of autonomous systems less as this "other" and more as parts of our society.

Src: RAND Review

Listen Up 🔊: Digital Evolution

This episode of Software Engineering Daily is a great listen on the topic of digital evolution, the process of allowing algorithms to grow and evolve. 🦖

This field produces a lot of funny results, but there is an important lesson in them, and it is probably my favorite moment of the show: we humans are really bad at stating what we want in a way that won't lead to unforeseen outcomes or loopholes. 🤦‍♂️🤦‍♀️

Which makes sense: language is both a tool and a virus. It functions as much through the interpretation of the recipient as through the intent of the sender. 🛠😷

Src: Software Engineering Daily