A team at MIT has developed a network that can show its work, basically outputting the “thought” process that led to a “decision”. 👷♀️
My understanding is that TbD-net is an umbrella network containing multiple “mini” neural nets: one interprets the question, then a series of image-recognition networks each tackle a sub-task and pass the result down the line. Each image-recognition network also outputs a heatmap illustrating what it passed along. 🔥
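To make that concrete, here's a minimal sketch of the idea (not the actual TbD-net code): a chain of modules where each one takes the previous attention map, produces a new one, and every intermediate heatmap is saved so you can inspect the “thought” process. The module names and feature layout are hypothetical, purely for illustration.

```python
import numpy as np

def attend_channel(features, attention, channel):
    """Hypothetical module: re-weight the current attention map
    by one feature channel (e.g. a 'color' or 'shape' signal)."""
    return attention * features[channel]

def run_pipeline(features, modules):
    """Run modules in sequence. Each module refines the attention map,
    and every intermediate heatmap is kept -- the 'show your work' part."""
    attention = np.ones(features.shape[1:])  # start by attending everywhere
    heatmaps = []
    for module, arg in modules:
        attention = module(features, attention, arg)
        heatmaps.append(attention.copy())  # saved for later inspection
    return attention, heatmaps

# Toy input: 4 hypothetical feature channels over an 8x8 image grid.
features = np.random.rand(4, 8, 8)
final, heatmaps = run_pipeline(
    features,
    [(attend_channel, 0), (attend_channel, 2)],  # e.g. "find red" -> "find cube"
)
```

Here `heatmaps` holds one 8×8 map per module, which is exactly what lets you see where a wrong answer went off the rails.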
This feature has a bonus too: 🎰
Importantly, the researchers were then able to improve these results because of their model’s key advantage — transparency. By looking at the attention masks produced by the modules, they could see where things went wrong and refine the model. The end result was state-of-the-art performance: 99.1 percent accuracy.
This is an awesome step forward for explainable AI and another big win for centaurs. 🏆