When People Do Bad Things With Algorithms 😈

There is a lot of concern about decisions made by algorithms and the bias baked into those systems, but that is far from the only concern. These podcast episodes do a tremendous job illustrating what happens when people use neutral data for the wrong things, when the purpose behind the data becomes perverted. At the end of the day, AI systems and algorithms are the products of humans, and we are far from perfect, logical, and rational. We’re still our own worst enemy. 👤

Src: The Crime Machine, Part 1

Src: The Crime Machine, Part 2

That’s Instructor AI, Cadet

The Air Force is embracing AI to shake up its training process, and could be inventing the future of all education while it’s at it. 🎓

The overall objective is to move away from an “industrial age” training model with pre-set timetables and instruction plans to one that adapts to each airman’s learning pace.

Hmmmmm… what else follows an industrial model? 🤔🎒🚌🏫

Among other benefits, they appear to show that artificial intelligence “coaches” are highly effective at gathering data from the process of training any particular pilot, and then refocusing that process on exactly the areas in which the student needs the most help.

This is really, really cool. Personalized training could be here, and if it works for pilots of multi-million-dollar aircraft, it should work in most other fields no problem. 🏰
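The article doesn’t spell out the system’s internals, but the core adaptive idea can be sketched in a few lines: keep a per-skill mastery estimate, always drill the skill the student is weakest in, and fold each new assessment back into the estimate. Everything below (the skill names, the 0.3 learning rate) is a made-up illustration, not the Air Force’s actual model.

```python
def next_lesson(mastery):
    """Pick the skill the student currently shows the least mastery in."""
    return min(mastery, key=mastery.get)

def update(mastery, skill, score, rate=0.3):
    """Nudge the mastery estimate toward the latest assessment score."""
    mastery[skill] += rate * (score - mastery[skill])

# hypothetical mastery estimates for one student, on a 0-1 scale
mastery = {"navigation": 0.9, "landing": 0.4, "radio": 0.7}
skill = next_lesson(mastery)  # "landing" is the weakest area
update(mastery, skill, 0.8)   # drill it, then fold in the new assessment
```

The loop replaces a fixed syllabus with a feedback cycle: every assessment changes what gets taught next.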

Src: Federal News Radio

Bolt On a Watchdog

IBM has launched a tool as part of their cloud platform that detects bias and adds a dash of explainability to AI models. ☁️

It looks like this might be the new competition between service providers, and that’s not a bad thing. A huge upside of AI systems is that they can make decisions free of the bias that infects humanity, but they don’t do so by magic. A lot of bias can be added accidentally (or overtly, of course) by the humans collecting the data and building the systems. Hopefully these new tools start paving the way for a less biased future. ⚖️

The fully automated SaaS explains decision-making and detects bias in AI models at runtime — so as decisions are being made — which means it’s capturing “potentially unfair outcomes as they occur”, as IBM puts it.

It will also automatically recommend data to add to the model to help mitigate any bias that has been detected.
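IBM hasn’t published the tool’s internals, but runtime bias detection of this kind typically computes standard fairness metrics over decisions as they stream in. Here is a minimal sketch of one such metric, the disparate-impact ratio; the group labels, sample stream, and 0.8 threshold are illustrative assumptions, not IBM’s implementation.

```python
from collections import defaultdict

def disparate_impact(decisions):
    """Ratio of favorable-outcome rates between the best- and worst-treated
    groups. decisions is a list of (group, favorable) pairs; a common rule
    of thumb flags anything below 0.8."""
    totals, favorable = defaultdict(int), defaultdict(int)
    for group, fav in decisions:
        totals[group] += 1
        favorable[group] += fav
    rates = [favorable[g] / totals[g] for g in totals]
    return min(rates) / max(rates)

# a toy stream of runtime decisions: (group, got the favorable outcome?)
stream = [("a", True), ("a", True), ("a", False),
          ("b", True), ("b", False), ("b", False)]
ratio = disparate_impact(stream)  # 0.5, well under 0.8, so flag it
```

Watching this ratio continuously, rather than auditing once before deployment, is what lets a system catch “unfair outcomes as they occur.”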

And there is a win for centaurs: 🙌

it will be both selling AI, ‘a fix’ for AI’s imperfections, and experts to help smooth any wrinkles when enterprises are trying to fix their AIs… Which suggests that while AI will indeed remove some jobs, automation will be busy creating other types of work.

Src: TechCrunch

Penny For Your Bot Thoughts 💭

A team at MIT has developed a network that can show its work, basically outputting the “thought” process that led to a “decision”. 👷‍♀️

My understanding is that TbD-net is an uber-network containing multiple “mini” neural networks: one interprets the question, then a series of image-recognition networks each tackle a sub-task and pass their output down the line. Each image-recognition network also outputs a heatmap illustrating what it passed along. 🔥
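The real TbD-net (“Transparency by Design”) uses learned neural modules, but the pipeline shape described above can be sketched with toy, hand-written modules: each one takes the image and the previous attention mask and hands a refined, inspectable mask down the line. The module names and logic here are invented for illustration.

```python
import numpy as np

def find_red(image, attn):
    """Toy module: keep attention only on red-ish pixels."""
    return attn * (image[..., 0] > 0.5)

def keep_left(image, attn):
    """Toy module: zero out attention on the right half of the frame."""
    out = attn.copy()
    out[:, out.shape[1] // 2:] = 0
    return out

def run(image, modules):
    attn = np.ones(image.shape[:2])  # start by attending everywhere
    trace = []                       # one inspectable heatmap per module
    for module in modules:
        attn = module(image, attn)
        trace.append(attn)
    return attn, trace

image = np.zeros((2, 4, 3))
image[:, :2, 0] = 1.0  # the left half of the image is red
final, trace = run(image, [find_red, keep_left])
# `final` attends only to red, left-half pixels; each entry in `trace`
# shows exactly what that module passed down the line
```

The `trace` list is the transparency win: when an answer is wrong, you can look at each intermediate heatmap and see which step went astray.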

This feature has a bonus too: 🎰

Importantly, the researchers were able to then improve these results because of their model’s key advantage — transparency. By looking at the attention masks produced by the modules, they could see where things went wrong and refine the model. The end result was a state-of-the-art performance of 99.1 percent accuracy.

This is an awesome step forward for explainable AI and another big win for centaurs. 🏆

Src: MIT

Centaur Q+A 🚲

I really like how this Steve Jobs quote illustrates the centaur concept. ➕

Steve Jobs once called the computer a bicycle for the mind. Note the metaphor of a bicycle, instead of something like a car — a bicycle lets you go faster than the human body ever can, and yet, unlike the car, the bicycle is human-powered. (Also, the bicycle is healthier for you.) The strength of metal, with a human at its heart. A collaboration — a centaur.

Human + machine is better than either individually. And weak combos with good processes are better than strong combos with bad processes. So the glue is the key: amplifying the strengths of both and minimizing the weaknesses. ⚖️

So how do we divide the work? 🤔

AIs are best at choosing answers. Humans are best at choosing questions.

The key concept for the future of AI, IA, and centaurs is symbiosis, because: 🤝

Symbiosis shows us you can have fruitful collaborations even if you have different skills, or different goals, or are even different species. Symbiosis shows us that the world often isn’t zero-sum.

Src: MIT

Centaurs Wear Ties 👔

I’m firmly in the Augmented Intelligence camp when it comes to where AI’s real benefits lie. Turns out the biggest benefit for business comes from pairing humans with machines, not replacing them. 👤+🤖=❤️

In our research involving 1,500 companies, we found that firms achieve the most significant performance improvements when humans and machines work together.

Surprise, surprise, humans and machines are good at different things! Shocker, I know. But that means that, if done properly, they can be combined to achieve better results than either could individually. The dream and promise of centaurs. 🙌

According to this study there are three roles humans need to fill with their machine counterparts 👤:

They must train machines to perform certain tasks; explain the outcomes of those tasks, especially when the results are counterintuitive or controversial; and sustain the responsible use of machines (by, for example, preventing robots from harming humans).

And in a nice bit of symmetry, the rule of threes applies to the machines as well 🤖:

They can amplify our cognitive strengths; interact with customers and employees to free us for higher-level tasks; and embody human skills to extend our physical capabilities.

That second one, interact, is the hot-button topic right now, as evidenced by Google’s Duplex and the reaction it garnered. 🖲

Src: Harvard Business Review

A Cautionary Tale ⚠️

Automation can be a wonderful thing. It can also take a small human error and snowball it out of control, like this story illustrates. Systems need fail-safes and other checks so that humans can intervene if/when needed. Technology is supposed to help us, not control us. 🚥

This is one of the reasons why I am in the centaur and Intelligence Augmentation camp versus the replace-all-humans-with-machines camp. ⛺

We have a wonderful opportunity in front of us; we need to make sure we don’t squander it through laziness, ignorance, or both. ↔️

Src: Idiallo (recommended read)

AIs are Teaming Up 🎮

The OpenAI Five have beaten a team of amateur Dota 2 players (a strategy video game). So? The 5 are a team of AI algorithms. And a name that brings to mind old hip-hop group names. 🎤

This is an important and novel direction for AI, since algorithms typically operate independently.

While I can already imagine the “SkyNet is Coming!” headlines that could result from this news, I’m generally pretty stoked about it. Not gonna lie, the idea of AIs teaming up to create super AIs is a bit terrifying, but that is only one potential outcome. It’s also important to note that the OpenAI Five don’t directly communicate with each other; everything appears to be coordinated through gameplay. So no new machine languages to decode and translate, which is nice. 😅
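To make that coordination-through-gameplay point concrete, here is a toy sketch (nothing like OpenAI Five’s actual reinforcement-learning setup, and all names are invented): each bot picks its action purely from the shared game state it observes, with no bot-to-bot messages, yet coherent team behavior still emerges.

```python
def choose_action(name, state):
    """Decide purely from the observed game state; there is no
    messaging channel between bots."""
    others_engaging = state["engaging"] - {name}
    return "attack" if others_engaging else "initiate"

state = {"engaging": set()}
actions = {}
for name in ["bot1", "bot2", "bot3"]:
    actions[name] = choose_action(name, state)
    state["engaging"].add(name)  # the move becomes visible to everyone

# bot1 initiates; bot2 and bot3 pile on just by watching the shared state
```

The only “communication” is the environment itself, which is why there is no new machine language to decode.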

Some of the benefits that could come from this:

  • Enhanced human-machine teamwork as the AIs are better adapted to cooperation with other agents 👥
  • Using different algorithm types in tandem to reduce reliance on deep learning models and expand the scope of what’s possible 🥚🥚🥚
  • Potential for distributed AI, not via one algorithm spread around BitTorrent-style, but by distributed algorithms collaborating in different configurations (this one plays into my vision of a future where we all have personal AIs) 🌐
  • Helping reduce implicit bias by utilizing multiple algorithms 👍

The big one is the potential this has for centaurs. And centaurs are the future. 🔮

Src: MIT Tech Review

Listen Up 🔊: When AI Meets Experts 👩‍🔬👨‍🔬

Great episode of the TWIML+AI podcast on the use of AI in niches with significant domain expertise. The go-to example was creating new types of glass, like Gorilla Glass. These are fields where AI can find novel solutions or expedite the R&D process while the human experts weigh in on what’s possible or has already been done.

Especially love this quote from the end:

If you really want to go deep, merging domain expertise and AI/ML expertise, if you can merge those effectively, you can have a super powerful tool that’s really differentiated from what anyone else has.

I think this is an extension of the centaur concept. Also feels like the precursor/initial iteration of a potential future where we all have personal AI assistants/friends/extensions.


Explaining AI to Mom 👵

Fear mongering and loosey-goosey, buzzword-y overuse have been hurting AI’s appeal with the broader population. Are there terrifying possibilities? Yes (I’m terrified of the implications of Deep Fakes).

But guaranteeing dystopia is a disservice. It plays to the reptilian brain while we’re trying to recreate the higher-level brain in silicon. 🦎

I agree with Bryan:

Our thriving depends upon our co-evolution with AI (see my latest post) and making the journey symbiotic rather than competitive.

The differences in “thought” and the “strategically unprecedented moves” arrived at by AI aren’t to be feared but respected, and embraced. Let’s be honest, humans don’t have the best track record. Maybe mixing it up will help; it can’t hurt, can it?

AI offers an entire universe of expansive moves and strategies that are as yet undiscovered by our human intelligence!



Src: Future Literacy