Do The Digital Worm 🐛

Step 1: recreate the brain of the C. elegans worm as a neural network 🧠

Step 2: ask it to park a car 🚗

Researchers digitized the worm brain, the only fully mapped brain we have, as a 12-neuron network. The goal of this exercise was to create a neural network that humans can understand and parse, since the organic version it is based on is well understood. 🗺

An interesting realization that came out of this exercise: 👇

Curiously, both the AI model and the real C. elegans neural circuit contained two neurons that seemed to be acting antagonistically, he said—when one was highly active, the other wasn’t.

I wonder when this switching neuron feature will be rolled into an AI/ML/DL architecture. 🤔
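For the curious, here's a toy sketch in Python of what a mutually inhibitory pair like that can look like. It is very much not the researchers' actual model; the dynamics, the `inhibition` strength, and the starting activations are all made-up illustrative values. 🤓

```python
# Toy sketch (not the researchers' model): two neurons that inhibit
# each other. When one is highly active, it suppresses the other,
# producing the antagonistic "switching" behavior described above.
def simulate(steps=200, dt=0.1, inhibition=2.0):
    a, b = 0.6, 0.4                      # initial activations
    for _ in range(steps):
        drive_a = 1.0 - inhibition * b   # each neuron's input is reduced
        drive_b = 1.0 - inhibition * a   # by the other neuron's activity
        a += dt * (-a + max(0.0, drive_a))
        b += dt * (-b + max(0.0, drive_b))
    return a, b

print(simulate())  # one neuron settles high, the other near zero
```

Run it and one activation wins while the other goes quiet, which is the switching behavior the quote describes.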

Src: Motherboard

Bolt On a Watchdog

IBM has launched a tool as part of its cloud platform that detects bias and adds a dash of explainability to AI models. ☁️

It looks like this might be the new competition between service providers, and that’s not a bad thing. A huge upside of AI is that it can make decisions free of the bias that infects humanity, but it doesn’t do that by magic. A lot of bias can be added accidentally (or overtly, of course) by the humans collecting the data and building the systems. Hopefully these new tools start paving the way for a less biased future. ⚖️

The fully automated SaaS explains decision-making and detects bias in AI models at runtime — so as decisions are being made — which means it’s capturing “potentially unfair outcomes as they occur”, as IBM puts it.

It will also automatically recommend data to add to the model to help mitigate any bias that has been detected.
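To make the idea concrete, here's a hypothetical sketch of a runtime bias check. This is not IBM's actual product or API; the `BiasWatchdog` class is my own invention, and it uses the common "four-fifths" disparate impact threshold to flag lagging groups as decisions stream in:

```python
from collections import defaultdict

class BiasWatchdog:
    """Hypothetical runtime bias monitor, not IBM's actual tool."""
    def __init__(self, threshold=0.8):
        self.threshold = threshold         # common "four-fifths" rule
        self.favorable = defaultdict(int)  # favorable outcomes per group
        self.total = defaultdict(int)      # all decisions per group

    def record(self, group, favorable):
        # called once per decision, as decisions are being made
        self.total[group] += 1
        self.favorable[group] += int(favorable)

    def check(self):
        rates = {g: self.favorable[g] / self.total[g] for g in self.total}
        best = max(rates.values())
        # flag any group whose favorable-outcome rate lags the best group
        return {g: r / best for g, r in rates.items() if r / best < self.threshold}

watchdog = BiasWatchdog()
for group, ok in [("A", True), ("A", True), ("B", False), ("B", True)]:
    watchdog.record(group, ok)
print(watchdog.check())  # {'B': 0.5} -> potential bias against group B
```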

And there is a win for centaurs: 🙌

it will be both selling AI, ‘a fix’ for AI’s imperfections, and experts to help smooth any wrinkles when enterprises are trying to fix their AIs… Which suggests that while AI will indeed remove some jobs, automation will be busy creating other types of work.

Src: TechCrunch

Penny For Your Bot Thoughts 💭

A team at MIT has developed a network that can show its work, basically outputting the “thought” process that led to a “decision”. 👷‍♀️

My understanding is that TbD-net is an uber-network containing multiple “mini” neural nets: one interprets a question, then a series of image rec networks each tackle a subtask and pass the result down the line. Each image rec network also outputs a heatmap illustrating what it passed along. 🔥
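Here's a rough toy sketch of that modular idea. It is not MIT's actual TbD-net code; the `attend_color` and `attend_left` modules are made-up stand-ins for its learned modules, but the shape is the same: each step refines an attention mask over the image and hands it to the next module, and every intermediate mask is a heatmap we can inspect:

```python
import numpy as np

def attend_color(image, mask, color):
    # toy module: keep attention only where the image matches `color`
    return mask * (image == color)

def attend_left(image, mask):
    # toy module: shift attention to everything left of the current focus
    out = np.zeros_like(mask)
    cols = np.where(mask.any(axis=0))[0]
    if cols.size:
        out[:, :cols.min()] = 1.0
    return out

image = np.array([[1, 2, 3],
                  [1, 2, 3]])              # fake "image" of color codes
mask = np.ones_like(image, dtype=float)    # start by attending everywhere

# "What is left of the green (2) object?" becomes a chain of modules,
# and every intermediate mask is a heatmap we can inspect.
for step in (lambda i, m: attend_color(i, m, 2), attend_left):
    mask = step(image, mask)
    print(mask)  # the inspectable "thought" at this step
```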

This feature has a bonus too: 🎰

Importantly, the researchers were able to then improve these results because of their model’s key advantage — transparency. By looking at the attention masks produced by the modules, they could see where things went wrong and refine the model. The end result was a state-of-the-art performance of 99.1 percent accuracy.

This is an awesome step forward for explainable AI and another big win for centaurs. 🏆

Src: MIT

What Is Explainable AI? How Does It Affect Your Job? ⬛

From Hacker Noon

Don’t believe the SkyNet hype. Good overview of narrow intelligence vs. superintelligence in AI. Narrow intelligence is what we see most of now (AlphaGo, Siri, autopilot). Superintelligence is the good-at-everything kind, but it is (probably) a long way off.

What we’re really scared of with AI (or what probably drives a lot of the fear mongering) is the mystery of what current AI systems do. You put in your data, run it through the AI black box, and get an answer, without any idea why that answer is right. Explainable AI is the concept of adding a model and interface to the system that explains how the AI came to a certain conclusion.
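As a tiny illustration, here's a minimal sketch of one simple explainability idea, sensitivity analysis: nudge each input to a black-box model and see how much the answer moves. The `black_box` function here is just a stand-in for an opaque model, not any particular system:

```python
def black_box(features):
    # stand-in for an opaque model we can only query, not inspect
    return 3.0 * features[0] + 0.1 * features[1]

def explain(model, features, eps=1.0):
    # perturb each feature and measure how far the output moves
    base = model(features)
    scores = {}
    for i in range(len(features)):
        probe = list(features)
        probe[i] += eps
        scores[f"feature_{i}"] = abs(model(probe) - base)
    return scores  # bigger score = more influence on this answer

print(explain(black_box, [1.0, 1.0]))  # feature_0 dominates the decision
```

Real explainability tools (attention masks, attribution methods, surrogate models) get much fancier, but the goal is the same: turn “the box said so” into a reason a human can check.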

Takeaway: Explainable AI provides context to the answers that are currently generated in a black box. This mystery is what drives many of the current fears around AI.