Centaur Q+A 🚲

I really like how this Steve Jobs quote illustrates the centaur concept. ➕

Steve Jobs once called the computer a bicycle for the mind. Note the metaphor of a bicycle, instead of something like a car: a bicycle lets you go faster than the human body ever could on its own, and yet, unlike the car, the bicycle is human-powered. (Also, the bicycle is healthier for you.) The strength of metal, with a human at its heart. A collaboration, a centaur.

Human + machine is better than either individually. And weak combos with good processes beat strong combos with bad processes, so the glue is the key: amplifying the strengths of both while minimizing the weaknesses. ⚖️

So how do we divide the work? 🤔

AIs are best at choosing answers. Humans are best at choosing questions.

The key concept for the future of AI, IA, and centaurs is symbiosis, because: 🤝

Symbiosis shows us you can have fruitful collaborations even if you have different skills, or different goals, or are even different species. Symbiosis shows us that the world often isn’t zero-sum.

Src: MIT

With Great Computing Power Comes Great Responsibility? 🚨

Maybe that open letter decrying autonomous weapons wasn’t the best choice? 🤔

Relax, nothing crazy happened. Paul Scharre just brought up some really good points in this interview with the MIT Tech Review, and they boil down to this: the best way to influence the smart-weapons sector is to help educate and steer policy, not to stay away from it. 🔖

The open letter is also the typical tech-sector response to a problem like this: avoid it and shift blame. “We’re just engineers.” 🙄

Smart weapons are coming one way or another, and I like the idea of having the people who are concerned about them involved in their creation and regulation. ⚒

Src: MIT Tech Review

The Tangled Web We Weave 🌐

AI and nationalism are strange bedfellows. On the one hand, most research at this point has been collaborative and open: a lot of advances are open source, and practically everything gets a paper explaining the process. On the other hand, the implications for military use and national advancement are very real. 🏛

The US and China are the two main players in this slowly unfolding drama. The big tech companies of each are building research centers in the other, and between them they likely attract the bulk of the world’s talent. But the two countries are beginning to diverge in a major way: China’s AI industry is heavily backed by the government and follows a policy of dual use, commercial and military. The US government is basically staying out of it (probably because most of the decision makers don’t understand it), and employees are calling for their companies to stay out of military contracts and applications. 🇨🇳🇺🇸

No one benefits if an isolationist approach is taken, but nothing good will happen if the realities of what could result from partnerships and investments are ignored. China seems to want all joint ventures to skew toward benefitting China. Could we be headed toward a new Cold War? ⛄️

Could AI spark a new wave of spying? Industrial espionage, asset development and exploitation, academic pillaging, funding stipulations, code breaking? 🕵️‍♀️🕵️‍♂️

The post on ASPI does a great job of breaking down the situation and outlining what’s going on in China, and it recommends some approaches to potential solutions. Ultimately, it feels like this is going to become a traditional battle between two countries that manifests in entirely new ways because of the technology involved. 🗺

Src: ASPI

Facial Rec Tech Mess 😟

This article is short but meaty. Synopsis: a lot of people are concerned about the current state of facial recognition and what it could mean for the future. I’m going to use the same headings as the MIT post and offer my random thoughts. 💭

The questioners: The call for regulation and safeguards around facial recognition has been sounded. It is definitely a field that warrants a closer look by various watchdog groups, given the concerns and potential outlined below. 📯

Eye spies: China has a very robust recognition system in place. China also has an authoritarian government that controls information and runs a social credit scoring system. Facial recognition allows a level of monitoring and control, governmental or military, that hasn’t been truly feasible until now. And when the tech giants themselves are asking for regulation, you know something’s up. Do we want to be like China? 🇨🇳

Oh, (big) brother: News flash, facial recognition might not be perfect! My bigger concern is that Amazon’s response to the ACLU’s findings was that “the system was used incorrectly.” Really? That’s the response? Issue #1: blaming the user has not been going well for tech companies lately, so I’m not sure this was the best course of action. Issue #2: WHY CAN THE SYSTEM BE USED INCORRECTLY?! Sorry for the yelling, but if the ACLU can use it incorrectly, then every law enforcement agency using the software can also use it incorrectly. That seems like a big problem with the system. Maybe make the system simple and foolproof before sending it out into the wild to determine people’s fates and futures. 🤦 🤦‍♀️

Bias baked in: Nothing new here, but another reminder that bias is a very real factor in these systems and needs to be addressed early and often in the process. One big step forward would be creating and using more diverse data sets. 👍
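The “address it early and often” advice can be made concrete: one common check is to compare a model’s error rate separately for each demographic group, so a gap between groups surfaces before deployment. A minimal sketch in plain Python, where the group labels and predictions are hypothetical toy data, not anything from the article:

```python
from collections import Counter

def group_error_rates(groups, y_true, y_pred):
    """Compute the error rate for each group separately.

    A large gap between groups is a red flag that the model
    performs worse on under-represented groups.
    """
    errors, totals = Counter(), Counter()
    for g, truth, pred in zip(groups, y_true, y_pred):
        totals[g] += 1
        if truth != pred:
            errors[g] += 1
    return {g: errors[g] / totals[g] for g in totals}

# Hypothetical toy data: both mistakes land in group "b".
groups = ["a", "a", "a", "a", "b", "b", "b", "b"]
y_true = [1, 0, 1, 0, 1, 0, 1, 0]
y_pred = [1, 0, 1, 0, 0, 1, 1, 0]

rates = group_error_rates(groups, y_true, y_pred)
# rates -> {"a": 0.0, "b": 0.5}: perfect on "a", coin-flip on "b"
```

An aggregate accuracy of 75% would hide this entirely, which is exactly why per-group auditing (and the more diverse data sets that make it meaningful) matters.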

Src: MIT Tech Review

ML = 10 Year Olds 👦👧

I take issue with people who seem to think that AI is only AI if it resembles what we’ve been shown in movies. I actually unsubscribed from a podcast when one of the hosts said that none of what is currently being touted as AI counts, because it’s essentially not AGI or super AI. #petty 😒

That being said, I think this framework laid out by Ben Evans is pretty spot on: 👌

Indeed, I think one could propose a whole list of unhelpful ways of talking about current developments in machine learning. For example:

  • Data is the new oil
  • Google and China (or Facebook, or Amazon, or BAT) have all the data
  • AI will take all the jobs
  • And, of course, saying AI itself.

More useful things to talk about, perhaps, might be:

  • Automation
  • Enabling technology layers
  • Relational databases.

Machine learning’s current superpower is a level of automation that seems almost magical. Of course, this also means that products can claim to be AI/ML based but really just be a crazy automation stack. And maybe using some “new” terminology to talk about AI/ML/DL will help lead to constructive conversations and a better-informed public, instead of turning everything into a discussion about Terminator. 🤖

This might be my favorite part of the post: ❣️

Talking about ML does tend to be a hunt for metaphors, but I prefer the metaphor that this gives you infinite interns, or, perhaps, infinite ten year olds.

ML is amazing, but it isn’t omnipotent or truly intelligent in the way we would probably define that word (at least with our limited, ego-driven meaning of it). Yeah, it can be like a superpower, but its superpower is being the quietest assembly of unlimited 10 year olds you’ve ever (legally) put to work. ⚡

Src: Benedict Evans

A Cautionary Tale ⚠️

Automation can be a wonderful thing. It can also take a small human error and snowball it out of control, as this story illustrates. Systems need fail-safes and other checks so that humans can intervene if/when needed. Technology is supposed to help us, not control us. 🚥

This is one of the reasons I am in the centaur and Intelligence Augmentation camp, rather than the replace-all-humans-with-machines camp. ⛺

We have a wonderful opportunity in front of us; we need to make sure we don’t squander it through laziness, ignorance, or both. ↔️

Recommended Read. Src: Idiallo