AlexNet and the Rise of the GPU 🕸

A great look at the story behind the competition victory that led to the marriage of neural networks and GPUs (as mentioned in the Timeline post). 🥇

Also traces the path of the man who may have changed the course of AI, Alex Krizhevsky. Yeah, he’s the Alex in AlexNet. 🔀

“Artificial intelligence is sort of the end goal of computer science,” Krizhevsky says. “Computer science is about automating stuff, and artificial intelligence is about automating everything.”

Src: Quartz

Listen Up 🔊: Industrial Grade AI

A great listen on the practical uses of deep learning in industrial settings, and probably not in the way you think. The guest works for a Fortune 200 energy company. 🔋

I mentioned that I don’t think we’re in the midst of a true AI bubble like some have suggested, and this episode provides some examples of why. If a major energy company has found a way to incorporate this technology into rather mundane tasks that won’t grab headlines but are serving it well, then clearly this is going to be both used and useful. 🎥

The example that’s lodged in my head as something really useful and unsexy is using computer vision to monitor whether or not people are wearing the proper safety gear. ⛑

Yeah, it may not grab headlines or pay ridiculous salaries, but it’ll be incorporated into the business landscape, which isn’t so bad in the end. 👌
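Concretely (and purely as a sketch, since the episode doesn’t describe the implementation), the compliance check on top of such a vision system is almost trivial: some object detector labels the gear each person is wearing in a frame, and you diff that against a required list. The detector output format, the `REQUIRED_GEAR` set, and the function names here are all hypothetical:

```python
# Hypothetical gear list; a real deployment would pull this from site policy.
REQUIRED_GEAR = {"hard_hat", "safety_vest"}

def missing_gear(detections):
    """detections: list of sets, one set of detected gear items per person.

    Returns, per person, the required gear the detector did not see.
    """
    return [REQUIRED_GEAR - worn for worn in detections]

def frame_is_compliant(detections):
    """True only if every person in the frame is wearing all required gear."""
    return all(not gap for gap in missing_gear(detections))
```

The interesting (hard) part is the detector itself; everything downstream of it is plain bookkeeping like this.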

Src: TWiML&AI

Plastic Elastic 🕸

Differentiable Plasticity looks really interesting. Basically, a “plastic” weight is added to every connection within a neural network, and it can change over time as learning continues. You keep the static weights from training like all other neural nets but have this additional weight changing as needed. 🎛

The idea is based on “synaptic plasticity”. No, that’s not a jam band album. It’s a feature of the brain that allows connections between synapses to change over time. 🧠

This could obviously be immensely helpful for models to actually learn over time. The early results look impressive. I also wonder whether this would help minimize the risk of overfitting. 🤔
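A minimal sketch of the idea, assuming I’m reading the setup right: each connection carries a fixed weight `w`, a learned plasticity coefficient `alpha`, and a Hebbian trace `hebb` that keeps updating whenever the network runs. The toy NumPy layer below is my own illustration, not the authors’ code:

```python
import numpy as np

class PlasticLayer:
    """Toy dense layer in the style of differentiable plasticity.

    Effective weight = static weight + alpha * hebb, where hebb is a
    Hebbian trace that keeps changing even after training is done.
    """

    def __init__(self, n_in, n_out, eta=0.1, seed=0):
        rng = np.random.default_rng(seed)
        self.w = rng.normal(scale=0.1, size=(n_in, n_out))      # static weight (learned by SGD)
        self.alpha = rng.normal(scale=0.1, size=(n_in, n_out))  # plasticity coefficient (also learned)
        self.hebb = np.zeros((n_in, n_out))                     # fast Hebbian trace, updated at run time
        self.eta = eta                                          # how quickly the trace adapts

    def forward(self, x):
        y = np.tanh(x @ (self.w + self.alpha * self.hebb))
        # Hebbian rule: connections between co-active units strengthen.
        self.hebb = (1 - self.eta) * self.hebb + self.eta * np.outer(x, y)
        return y
```

In the real method `w` and `alpha` are trained by ordinary backpropagation; only `hebb` keeps adapting afterwards, which is what lets the network keep “learning” in deployment.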

Src: Towards Data Science

Imagination 2: Deep Learning Boogaloo 🕺

Following on the theme of yesterday’s post, DeepMind is really working hard at adding imagination to AI. 💭

This time they worked on using “imagination” to extrapolate out and predict future states based on current states. Really, it’s predictive analytics. 🔮

Again, think of the potential for self-driving cars. Being able to essentially see into the future. A very useful skill when it comes to traffic and pedestrians. 🚗
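Stripped of the learned networks, the “imagination” loop is just rolling a transition model forward: predict the next state from the current one, feed the prediction back in, repeat. This sketch stands in for DeepMind’s learned model with an arbitrary `transition` function (my naming, not theirs):

```python
def imagine(transition, state, horizon):
    """Roll a transition model forward to 'imagine' future states.

    transition: any function state -> predicted next state (a stand-in
    for a learned model). Returns the current state plus `horizon`
    imagined successors.
    """
    trajectory = [state]
    for _ in range(horizon):
        state = transition(state)   # feed each prediction back in
        trajectory.append(state)
    return trajectory
```

For example, with a toy constant-velocity transition `(position, velocity) -> (position + velocity, velocity)`, three imagined steps from `(0, 2)` give positions 2, 4, 6. The hard part DeepMind is tackling is learning a `transition` good enough that these rollouts stay accurate.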

Src: Towards Data Science

Imagine Robots Imagining 💭

DeepMind, of beating humans at Go fame, has now created an AI that can imagine things. Kind of… 💬

Their new GQN model can look at a scene from a few angles and “imagine” what it will look like from another angle. 🔀

It’s a small step towards that ever-popular goal of making AI more like humans. I could see it being a very useful skill for self-driving cars, helping them contextualize what’s around them and potentially “see” around corners and the like. 🚗
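At the interface level, the GQN recipe is: encode each (image, viewpoint) observation, sum the codes into one scene representation, then condition a generator on that representation plus the query viewpoint. The sketch below mirrors only that additive structure; the real `encode` and `generate` are learned neural networks, and these toy stand-ins are mine:

```python
import numpy as np

def scene_representation(encode, observations):
    """Aggregate per-view codes by summation, GQN-style.

    Because the codes are simply summed, the representation does not
    depend on the order in which viewpoints were observed.
    """
    return sum(encode(image, viewpoint) for image, viewpoint in observations)

def render_query(generate, representation, query_viewpoint):
    """'Imagine' the scene from an unseen viewpoint, conditioned only
    on the aggregated representation."""
    return generate(representation, query_viewpoint)
```

Note the design consequence of the summation: you can hand the model any number of observations, in any order, and query it from a viewpoint it never saw.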

Src: MIT Tech Review

What’s That Animal? 🐘🦁🐃

Snapshot Serengeti is investigating the use of deep learning image recognition to sift through its trove of 3.2 million wildlife pictures from various camera traps. This is an AI researcher/engineer’s dream data set; there is just so much of it. 📊

So what can this model do? 🔍

Not only does the artificial intelligence system tell you which of 48 different species of animal is present, but it also tells you how many there are and what they are doing. It will tell you if they are eating, sleeping, if babies are present, etc.

And why does it matter? 📸

We estimate that the deep learning technology pipeline we describe would save more than eight years of human labeling effort for each additional 3 million images.

This could be a huge development for wildlife protection/preservation efforts and one of the first instances of AI being used to help us protect our environment and planet, which is rather important if you ask me. 🌍
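The “species + count + behavior” output described above maps naturally onto a multi-head classifier: one shared image embedding feeds several independent output heads. A toy sketch, where every size except the 48 species is a made-up placeholder and the random “weights” stand in for a trained network:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes; only the 48 species classes come from the article.
N_FEATURES, N_SPECIES, N_COUNT_BINS, N_BEHAVIORS = 128, 48, 8, 6

# One linear head per question asked of the image (random here; a real
# system would train these jointly with the shared backbone).
heads = {
    "species":  rng.normal(size=(N_FEATURES, N_SPECIES)),
    "count":    rng.normal(size=(N_FEATURES, N_COUNT_BINS)),
    "behavior": rng.normal(size=(N_FEATURES, N_BEHAVIORS)),
}

def classify(features):
    """One shared image embedding in, three predictions out."""
    return {name: int(np.argmax(features @ w)) for name, w in heads.items()}
```

The appeal of the multi-head design is that the expensive part (the image backbone producing `features`) runs once per photo, however many questions you bolt on.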

Src: R&D Magazine

Deepfakes: An Overview 📽

Real talk, Deepfakes terrify me. 🙀

The implications of fake video that looks real to the human eye have “dystopian sci-fi” written all over them. If you thought the last election was a circus, wait until the first Deepfakes election. 🎪

A few outcomes that come to mind:

  1. Political mudslinging reaches its zenith and societal upheaval follows in its wake as no one knows who or what they can trust and our partisan rhetoric devolves even further.
  2. Famous people (actors were the first to pop into my head) can license their likeness, visually and audibly, and make money from acting without ever being on set as their licensed lookalike is projected onto another actor’s performance.
  3. Those VR pop stars will be taken to another level.
  4. Hollywood doubles down on its “don’t try anything new, it’s only sequels or reboots” strategy, with new performances by deceased actors (the music industry has been doing this forever) cranked out even more cheaply.

This post is a great overview of the topic, but I noticed a glaring oversight in the tech section: it doesn’t matter if detection methods advance enough to be useful if they only catch fakes after they go viral. A retraction or correction to a fake news item almost never gets the same reach as the fake itself. I believe we see video as the last bastion of truth in media because we don’t have many examples of convincing fake footage, either manufactured or edited. That’s about to change.

Src: Jensen Price on Medium

Challenges in Deep Learning 🔮

It ain’t all sunshine and rainbows; we’ve got some shiznit to figure out. A lot of the challenges raised seem to fall on the planning/people end: basically, these systems are only as good as the people who program them. The biases, aversions, and misunderstandings of humans can be transferred to the machines through the coding and training of the algorithms.

Takeaway: You can’t take a terrible plan and a great algorithm and make magic, at least not good magic.

Src: Hacker Noon