A great look at the story behind the competition victory that led to the marriage of neural networks and GPUs (as mentioned in the Timeline post). 🥇
Also traces the path of the man who may have changed the course of AI: Alex Krizhevsky. Yeah, he’s the Alex in AlexNet. 🔀
“Artificial intelligence is sort of the end goal of computer science,” Krizhevsky says. “Computer science is about automating stuff, and artificial intelligence is about automating everything.”
A great listen on the practical uses of deep learning in industrial settings, and probably not in the way you think. The guest works for a Fortune 200 energy company. 🔋
I mentioned that I don’t think we’re in the midst of a true AI bubble like some have suggested, and this episode provides some examples of why. If a major energy company has found ways to incorporate this technology into rather mundane tasks that won’t grab headlines but are serving it well, then clearly this is going to be used and useful. 🎥
The example that’s lodged in my head as something really useful and unsexy is using computer vision to monitor whether or not people are wearing the proper safety gear. ⛑
Yeah, it may not grab headlines or pay ridiculous salaries, but it’ll be incorporated into the business landscape, which isn’t so bad in the end. 👌
Differentiable Plasticity looks really interesting. Basically, a “plastic” weight is added to every connection within a neural network, and it can change over time as learning continues. You keep the static weights from training like all other neural nets but have this additional weight changing as needed. 🎛
The idea is based on “synaptic plasticity”. No, that’s not a jam band album. It’s a feature of the brain that allows the connections between neurons (synapses) to change strength over time. 🧠
This could obviously be immensely helpful for models that need to keep learning over time. The early results look impressive. I also wonder whether this would help minimize the risk of overfitting. 🤔
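The mechanism above can be sketched in a few lines. This is a minimal illustration of the idea, not the paper’s implementation; the layer sizes, the plasticity rate `eta`, and the frozen `w` and `alpha` values are all assumptions made for the demo:

```python
import numpy as np

rng = np.random.default_rng(0)

n_in, n_out = 4, 3

# Static weights, normally learned by backprop (frozen here for the sketch).
w = rng.normal(scale=0.5, size=(n_in, n_out))
# Per-connection plasticity coefficients, also normally learned by backprop.
alpha = rng.normal(scale=0.1, size=(n_in, n_out))
# Hebbian trace: the "plastic" part that keeps changing after training ends.
hebb = np.zeros((n_in, n_out))
eta = 0.1  # how fast the plastic component updates

def step(x, hebb):
    """One forward pass; the effective weight is w + alpha * hebb."""
    y = np.tanh(x @ (w + alpha * hebb))
    # Hebbian update: strengthen connections between co-active units.
    hebb = eta * np.outer(x, y) + (1 - eta) * hebb
    return y, hebb

x = rng.normal(size=n_in)
y1, hebb = step(x, hebb)
y2, hebb = step(x, hebb)  # same input, but the plastic weights have moved
print(np.allclose(y1, y2))  # False: the response drifts as the network "learns"
```

The key point is that `w` and `alpha` stay fixed after training, while the Hebbian trace keeps adapting to whatever the network sees at inference time.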
Src: Towards Data Science
Following on the theme of yesterday’s post, DeepMind is really working hard at adding imagination to AI. 💭
This time they worked on using “imagination” to extrapolate out and predict future states based on current states. Really, it’s predictive analytics. 🔮
Again, think of the potential for self-driving cars. Being able to essentially see into the future. A very useful skill when it comes to traffic and pedestrians. 🚗
Src: Towards Data Science
DeepMind, of beating humans at Go fame, has now created an AI that can imagine things. Kind of… 💬
Their new GQN model can look at a scene from a few angles and “imagine” what it will look like from another angle. 🔀
It’s a small step towards that ever-popular goal of making AI more like humans. I could see it being a very useful skill for self-driving cars, helping them contextualize what’s around them and potentially “see” around corners and the like. 🚗
Src: MIT Tech Review
Snapshot Serengeti is investigating the use of deep learning image recognition to sift through its trove of 3.2 million wildlife pictures from various camera traps. This is an AI researcher/engineer’s dream data set: there is just so much of it. 📊
So what can this model do? 🔍
Not only does the artificial intelligence system tell you which of 48 different species of animal is present, but it also tells you how many there are and what they are doing. It will tell you if they are eating, sleeping, if babies are present, etc.
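A system with those three outputs is commonly built as one shared image backbone feeding separate task heads. A minimal NumPy sketch of that shape, where the feature vector stands in for a CNN’s output and every head size is a hypothetical stand-in rather than the project’s actual pipeline:

```python
import numpy as np

rng = np.random.default_rng(1)

def softmax(z):
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

# Stand-in for a CNN feature extractor: a 512-d feature vector
# computed from one camera-trap image.
features = rng.normal(size=512)

# Three task-specific heads sharing the same features (sizes illustrative).
w_species = rng.normal(scale=0.01, size=(512, 48))  # which of 48 species
w_count = rng.normal(scale=0.01, size=(512, 12))    # binned animal counts
w_behavior = rng.normal(scale=0.01, size=(512, 6))  # eating, sleeping, ...

species_probs = softmax(features @ w_species)
count_probs = softmax(features @ w_count)
behavior_probs = softmax(features @ w_behavior)

print(species_probs.argmax())  # index of the most likely species
```

Sharing one backbone is what makes answering “which species, how many, doing what” from a single image cheap: each extra question is just another small head.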
And why does it matter? 📸
We estimate that the deep learning technology pipeline we describe would save more than eight years of human labeling effort for each additional 3 million images.
This could be a huge development for wildlife protection/preservation efforts and an early instance of AI being used to help us protect our environment and planet, which is rather important if you ask me. 🌍
Src: R&D Magazine
Real talk, Deepfakes terrify me. 🙀
The implications of fake video that looks real to the human eye have “dystopian sci-fi” written all over them. If you thought the last election was a circus, wait until the first Deepfakes election. 🎪
A few outcomes that come to mind:
- Political mudslinging reaches its zenith and societal upheaval follows in its wake as no one knows who or what they can trust and our partisan rhetoric devolves even further.
- Famous people (actors were the first to pop into my head) can license their likeness, visually and audibly, and make money from acting without ever being on set as their licensed lookalike is projected onto another actor’s performance.
- Those VR pop stars will be taken to another level.
- Hollywood doubles down on its “don’t try anything new, it’s only sequels or reboots” strategy, cheaply cranking out even more new performances by deceased actors (the music industry has been doing this forever).
This post is a great overview of the topic, but I noticed a glaring oversight in the tech section. It doesn’t matter whether methods to catch these fakes advance enough to be useful if they only catch them after they go viral. A retraction or correction to a fake news item almost never gets the same reach as the fake item itself. I believe we see video as the last bastion of truth in media because we don’t have many examples of convincing fake footage, either manufactured or edited. That’s about to change.
Src: Jensen Price on Medium
It ain’t all sunshine and rainbows; we’ve got some shiznit to figure out. A lot of the challenges raised seem to fall on the planning/people end: these systems are only as good as the people who program them. The biases, aversions, and misunderstandings of humans can be transferred to the machines through the coding and training of the algorithms.
Takeaway: You can’t take a terrible plan and a great algorithm and make magic, at least not good magic.