The Seven Deadly Sins of Predicting the Future of AI ☠

What are these 7 deadly sins?

  1. Over & Underestimating – see: Amara’s Law (below)
  2. Imagining Magic – see: Clarke’s Third Law (also below); be wary of predictions that rely on it
  3. Performance vs. Competence – good performance on one narrow task doesn’t imply general competence; computers don’t generalize the way people do (more later)
  4. Suitcase Words – it’s not “learning” in the human sense, this word has baggage for us
  5. Exponentials – use caution when extrapolating from a few data points. Also, Moore’s Law is dead.
  6. Hollywood Scenarios – things take time. When the future of AI comes, so too does the future of everything else. We won’t just be dropping the future of AI into our present reality.
  7. Speed of Deployment – the world is not as digital as you might think. Software is ephemeral compared to hardware, which takes a long time to develop and a long time to be phased out. (example: future car)
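Sin #5 is easy to demonstrate. Here’s a minimal sketch (my own illustration, not from Brooks): a capability that actually follows an S-curve looks exponential in its early data points, and a naive exponential fit to those points wildly overshoots the true ceiling.

```python
import math

def logistic(t, cap=100.0, rate=1.0, midpoint=6.0):
    """Hypothetical 'true' growth curve: looks exponential early,
    but saturates at `cap`."""
    return cap / (1.0 + math.exp(-rate * (t - midpoint)))

# Observe only the first few data points (t = 0..3),
# where growth genuinely looks exponential.
samples = [(t, logistic(t)) for t in range(4)]

# Naive exponential fit: average the growth factor between
# consecutive observations.
ratios = [samples[i + 1][1] / samples[i][1] for i in range(len(samples) - 1)]
growth = sum(ratios) / len(ratios)

# Extrapolate that exponential well past the observed window.
t_future = 12
extrapolated = samples[0][1] * growth ** t_future
actual = logistic(t_future)

print(f"extrapolated at t={t_future}: {extrapolated:,.0f}")
print(f"actual at t={t_future}: {actual:.1f}")  # bounded near 100
```

The extrapolation predicts a value in the tens of thousands while the real curve is capped at 100 — the early points simply can’t tell you where the knee of the curve is.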

So, what is Amara’s Law? We tend to overestimate the effect of technology in the short run and underestimate the effect in the long run. (e.g. GPS)

And Clarke’s Third Law? Any sufficiently advanced technology is indistinguishable from magic. 🔮 (This is a good one.)

Great quote regarding the desire to prove that AIs share human values: “The good news is that us humans were able to successfully co-exist with, and even use for our own purposes, horses, themselves autonomous agents with ongoing existences, desires, and super-human physical strength, for thousands of years. And we had not a single theorem about horses. Still don’t!”

And remember, predicting the future is hard, especially ahead of time.

Source: Rodney Brooks
