At the US government’s current rate of uninvolvement in the AI sector, China will overtake it in the quest for AI overlord status by the end of the year. At least when it comes to spending; the rest might not be far behind, though. 💰
One of the recommendations from a subcommittee is to expedite the approval of the OPEN Government Data Act. You know, giving citizens the right to the data they actually own as taxpayers. 🙄
My hunch is that the way Trump deals with something he doesn’t understand (and might admit to himself he doesn’t understand) is to ignore it, thus the administration’s lack of a plan. 😖
The Air Force is embracing AI to mix up their training process, and could be inventing the future of all education while they’re at it. 🎓
The overall objective is to move away from an “industrial age” training model with pre-set timetables and instruction plans to one that adapts to each airman’s learning pace.
Hmmmmm….what else follows an industrial model? 🤔🎒🚌🏫
Among other benefits, they appear to show that artificial intelligence “coaches” are highly effective at gathering data from the process of training any particular pilot, and then refocusing that process on exactly the areas in which the student needs the most help.
This is really, really cool. Personalized training could be here and if it works for pilots of multi-million dollar aircraft it should be able to work in most other fields no problem. 🍰
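The “gather data, refocus on weak spots” loop is simple enough to sketch. A toy version (all names hypothetical, not the Air Force’s system): track per-skill pass rates and always drill the weakest area next, instead of following a fixed syllabus.

```python
from collections import defaultdict

class AdaptiveCoach:
    """Toy 'AI coach': picks the next drill based on observed weakness."""

    def __init__(self, skills):
        self.skills = list(skills)
        self.attempts = defaultdict(int)
        self.passes = defaultdict(int)

    def record(self, skill, passed):
        self.attempts[skill] += 1
        self.passes[skill] += int(passed)

    def pass_rate(self, skill):
        # Unseen skills score 0.0 so they get drilled at least once.
        if self.attempts[skill] == 0:
            return 0.0
        return self.passes[skill] / self.attempts[skill]

    def next_drill(self):
        # Refocus training on the skill with the lowest pass rate.
        return min(self.skills, key=self.pass_rate)

coach = AdaptiveCoach(["landing", "formation", "instruments"])
coach.record("landing", True)
coach.record("formation", False)
coach.record("instruments", True)
print(coach.next_drill())  # formation — the weakest observed skill
```

The real systems presumably model far more than a pass/fail tally, but the feedback loop is the same shape.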
IBM has launched a tool as part of their cloud platform that detects bias and adds a dash of explainability to AI models. ☁️
It looks like this might be the new competition between service providers, and that’s not a bad thing. A huge upside of AI is that it can make decisions free of the bias that infects humanity, but it doesn’t do so by magic. A lot of bias can be added accidentally (or overtly, of course) by the humans collecting the data and building the systems. Hopefully these new tools start paving the way for a less biased future. ⚖️
The fully automated SaaS explains decision-making and detects bias in AI models at runtime — so as decisions are being made — which means it’s capturing “potentially unfair outcomes as they occur”, as IBM puts it.
It will also automatically recommend data to add to the model to help mitigate any bias that has been detected.
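IBM hasn’t published the internals, but one way a runtime check like this can work is to compare favorable-outcome rates across groups as each decision streams past — a disparate-impact-style check. A minimal sketch (names hypothetical):

```python
from collections import defaultdict

class RuntimeBiasMonitor:
    """Flags potentially unfair outcomes as they occur by comparing
    favorable-outcome rates between groups at decision time."""

    def __init__(self, threshold=0.8):  # the classic four-fifths rule
        self.threshold = threshold
        self.total = defaultdict(int)
        self.favorable = defaultdict(int)

    def observe(self, group, favorable):
        self.total[group] += 1
        self.favorable[group] += int(favorable)

    def rate(self, group):
        return self.favorable[group] / self.total[group]

    def check(self):
        """Return groups whose favorable rate falls below threshold
        times the best group's rate."""
        rates = {g: self.rate(g) for g in self.total}
        best = max(rates.values())
        return [g for g, r in rates.items() if r < self.threshold * best]

monitor = RuntimeBiasMonitor()
for group, ok in [("A", True), ("A", True), ("A", False),
                  ("B", False), ("B", False), ("B", True)]:
    monitor.observe(group, ok)
print(monitor.check())  # ['B'] — B's approval rate is well under 80% of A's
```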
And there is a win for centaurs: 🙌
it will be both selling AI, ‘a fix’ for AI’s imperfections, and experts to help smooth any wrinkles when enterprises are trying to fix their AIs… Which suggests that while AI will indeed remove some jobs, automation will be busy creating other types of work.
I’m more interested in the first trend (though the second could have interesting implications in this era of doom for publishers). Alibaba is the newest tech giant to announce they are making their own AI chips. 🏭
The chip will reportedly power the company’s cloud technology and IoT devices, and could be used for things like autonomous cars, smart cities and logistics.
This time it’s bigger than just wanting full control and integration. This move is also to reduce Alibaba’s reliance on the West and the impact of the shiny new trade war between China and the US. It also plays nicely into China’s national plan to be the AI superpower of the world. 💪 🇨🇳
Apparently researchers at Carnegie Mellon and Facebook’s Reality Lab decided there is nothing to worry about and that the method for making them needed to be better. So they gave us Recycle-GAN. ♻️
We introduce a data-driven approach for unsupervised video retargeting that translates content from one domain to another while preserving the style native to a domain, i.e., if contents of John Oliver’s speech were to be transferred to Stephen Colbert, then the generated content/speech should be in Stephen Colbert’s style.
Fantastic. Just what we need. A system that transfers content while maintaining stylistic integrity all while not needing a great deal of tweaking/input to make it happen. 😵
Also, why is Facebook helping to make fake content easier to create? Don’t they have enough problems on this front already? 🤔
I’m super interested in the world of NLP (natural language processing), so the news that performance increased dramatically with ELMo piqued my interest. 💡
The biggest benefit in my eyes is that this method doesn’t require labeled data, which means the world of the written word is our oyster. 🐚
Yeah, yeah, word embeddings don’t require labeled data either. ELMo can also learn word meanings at a higher level, which I think means it will have far more impact and a wider range of applications. 📶
ELMo, for example, improves on word embeddings by incorporating more context, looking at language on a scale of sentences rather than words.
Our cuddly, red muppet still picks up the biases we embed in our writings though. So plenty more work to be done. 🛠
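The real ELMo is a deep bidirectional language model, but the core contrast with classic word embeddings can be shown in miniature: a static embedding gives “bank” one vector everywhere, while a contextual embedding blends in the surrounding sentence, so the same word gets different vectors in different contexts (toy vectors, purely illustrative):

```python
import numpy as np

# Toy static embeddings: one fixed vector per word, context-free.
static = {
    "bank": np.array([1.0, 0.0]),
    "river": np.array([0.0, 1.0]),
    "money": np.array([0.5, -0.5]),
    "the": np.array([0.1, 0.1]),
}

def contextual_embedding(sentence, word, alpha=0.5):
    """Toy 'contextual' embedding: the word's static vector blended
    with the average of its sentence context (a stand-in for ELMo's
    sentence-level language-model states)."""
    context = [static[w] for w in sentence if w != word]
    return (1 - alpha) * static[word] + alpha * np.mean(context, axis=0)

v1 = contextual_embedding(["the", "river", "bank"], "bank")
v2 = contextual_embedding(["the", "money", "bank"], "bank")

print(np.allclose(v1, v2))  # False — same word, different vectors in context
```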
Surprising no one, the Pentagon wants more of that AI goodness in its weaponry. They want it about $2 billion worth of bad. 💰
But, having AI be able to explain its decision-making appears to be vitally important, which is nice to see. It sounds like plenty of people are uneasy with the idea of robots doing what they want with no oversight, at least in war scenarios. 👍
enabling AI systems to make decisions even when distractions are all around, and to then explain those decisions to their operators will be “critically important…in a warfighting scenario.”
A team at MIT has developed a network that can show its work, basically outputting the “thought” process that led to a “decision”. 👷♀️
My understanding is that TbD-net is an uber-network containing multiple “mini” neural nets: one interprets a question, then a series of image-recognition networks each tackle a sub-task and pass it down the line. Each image-recognition network also outputs a heatmap illustrating what it passed along. 🔥
This feature has a bonus too: 🎰
Importantly, the researchers were able to then improve these results because of their model’s key advantage — transparency. By looking at the attention masks produced by the modules, they could see where things went wrong and refine the model. The end result was a state-of-the-art performance of 99.1 percent accuracy.
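That hand-off can be sketched as a chain of small modules, each taking an attention map in and passing a new one out, with every intermediate map kept around as the “shown work” (a numpy toy, not the actual TbD-net code):

```python
import numpy as np

def attend_red(image, attention):
    """Toy module: keep attention only on 'red' pixels (channel 0 high)."""
    return attention * (image[..., 0] > 0.5)

def attend_left(image, attention):
    """Toy module: keep only the left half of the current attention."""
    mask = np.zeros_like(attention)
    mask[:, : attention.shape[1] // 2] = 1.0
    return attention * mask

def run_program(image, modules):
    """Chain modules, recording each intermediate heatmap."""
    attention = np.ones(image.shape[:2])  # start by attending everywhere
    heatmaps = []
    for module in modules:
        attention = module(image, attention)
        heatmaps.append(attention.copy())
    return attention, heatmaps

# A 2x2 'image' with one red pixel on the left, one on the right.
image = np.zeros((2, 2, 3))
image[0, 0, 0] = 1.0  # red, left
image[1, 1, 0] = 1.0  # red, right

final, heatmaps = run_program(image, [attend_red, attend_left])
print(final)          # only the red-AND-left pixel survives
print(len(heatmaps))  # 2 — one inspectable heatmap per module
```

Inspecting `heatmaps` after a wrong answer is the transparency win the quote describes: you can see which module dropped the ball.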
This is an awesome step forward for explainable AI and another big win for centaurs. 🏆
Accenture has developed and is rolling out, in beta, a tool to help uncover bias in algorithms. Man, I hope this works. 🙏
I am really interested to know more about how their Fairness Tool works. My guess is it basically runs another training set through the algorithm that is labeled in such a way that the outputs can be measured on scales that probably aren’t coded in or anticipated. 🤷♂️
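Accenture hasn’t detailed the internals, but the guess above — run a labeled probe set through the model and score the outputs along axes it was never given — might look something like this (model and group labels entirely hypothetical):

```python
def audit(model, probe_set):
    """Score a black-box model's favorable-outcome rate per subgroup.

    probe_set: list of (features, group_label) pairs, where group_label
    is an attribute deliberately NOT fed to the model.
    """
    totals, favorable = {}, {}
    for features, group in probe_set:
        totals[group] = totals.get(group, 0) + 1
        favorable[group] = favorable.get(group, 0) + int(model(features))
    return {g: favorable[g] / totals[g] for g in totals}

# Hypothetical model that (accidentally) keys off feature 0.
model = lambda features: features[0] > 0.5

probe = [((0.9, 1), "group_x"), ((0.8, 0), "group_x"),
         ((0.2, 1), "group_y"), ((0.3, 0), "group_y")]
rates = audit(model, probe)
print(rates)  # {'group_x': 1.0, 'group_y': 0.0} — a gap worth investigating
```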
For some reason I was really skeptical this tool would work at all when I first started reading, but I think that was due to a tech bias: assuming any non-tech giant couldn’t possibly crack this nut. Which is another reason I want this to work. We need diversity not only in our data sets and results, but in the players working in the field and developing solutions of all kinds. 👐
“I sometimes jokingly say that my goal in my job is to make the term ‘responsible AI’ redundant. It needs to be how we create artificial intelligence.”