Could Machines Save Our Humanity? 🤖♥️👤

The typical fear is that technology is eroding our ability to interact with other humans and generally turning humanity into a race to the bottom. Judging by social media, I can’t say that fear is misplaced. But is it the only possibility? 🤔

I am intrigued by this notion that voice assistants could help us “recover” from the effects of screened supercomputers in our pockets by making us think differently about how we phrase our inquiries and, potentially, increasing our patience as we interact with a developing technology. (And yes, I totally get the fear that being able to command non-human voice assistants could degrade manners, but it’s not an inherent aspect of the tech. That’s on us humans.) Maybe all our brains needed to rebound was to interact with technology like we’ve interacted with living creatures for centuries. 🗣️

Src: Manuel Vonau

When People Do Bad Things With Algorithms 😈

There is a lot of concern surrounding decisions made by algorithms and bias baked into said systems, but that is far from the only concern. These podcast episodes do a tremendous job illustrating what happens when people use neutral data for the wrong things, when the purpose behind the data becomes perverted. At the end of the day, AI systems and algorithms are the products of humans, and we are far from perfect, logical, and rational. We’re still our own worst enemy. 👤

Src: The Crime Machine, Part 1

Src: The Crime Machine, Part 2

Will Robots Understand Value? 💹

A while back I wrote about how I didn’t think robots would become the new consumers in capitalism. Turns out I’m not the only one. 👬

This piece scratches my economics itch in a lot of ways, but I think the heart of it is the fact that we typically believe the economy/market/capitalism operates like a rational machine and not an organism reacting to the wants and desires of a collection of irrational flesh bags. 👥

But the threat is not real for the simple reason that the efficiency of production is not the problem that economy tries to solve. The actual problem is the use of scarce means to produce want satisfaction. Both means and ends are valued subjectively. Robots do not value.

This, once again, gets to the core of my AI belief system, that we shouldn’t try to recreate human brains in silicon or assume that AGI or superintelligence will mimic humanity’s actions and desires. It just seems like egoism disguised as science. ⚗️

I want to include two quotes pertaining to value that I really liked in this piece. I think they are often forgotten or misunderstood. 💱

The natural resource is the same, but the economic resource – the value of it – was born with the inventions. Indeed, oil became useful in engines, because those engines satisfy consumers’ wants. The value in oil is not its molecular structure, but how it is being used to satisfy wants.

A good, sold in a market, is not its physical appearance, but the service it provides consumers in their attempts to satisfy wants. In other words, a good provides use value. And value is always in the eyes of the user. The value of any means derives from its contribution to a valuable economic good.

For further reading that provides another angle on why I don’t think robots and AIs will just slip into the existing capitalism and perpetuate it check out this piece by Umair Haque.

Src: Mises Institute

China’s Social Submission, er…Scoring System

There is a lot of focus on the China vs. USA space race happening in AI right now (at least in my world there is, I’m very interested in the topic). Most of it revolves around spending, governmental support, talent, etc. But maybe the most important aspect is what the implications of either country winning would be, if there truly can be only one winner in this field. 🏎️

China’s social scoring system, still in its infancy, is terrifying. Of course that is an opinion from a different way of life and cultural experience. But still, it has dystopian sci-fi future written all over it. 😈

A network of 220 million cameras outfitted with facial recognition, body scanning, and geo tracking. And this insane info net will be paired with every citizen’s digital footprint. Everything is compiled to create a social credit score of sorts that is updated in real time and determines how easily you can interact with society and live your life. Piss the government off and become an outcast with no options. Dear China, Philip K. Dick called, he’d like his dystopia back. 📚

There’s no guarantee that this form of digital dictatorship will be exported on a mass scale (you know it’ll be exported at some scale) if China were to win the race, but it’s a chilling possibility. A lot of ink is spilled talking about the potential for a robot uprising and AI taking over, but the misuse of AI by human actors is far more relevant and just as fraught. We’ve been our own biggest enemy for centuries, why would that suddenly change now? 🤔

Src: ABC News Australia

When Unbiased Machines Meet Biased Humans 🥊

I’m worried about the implications of transferring our biases to machines and then turning up the speed dial to 11. But I hadn’t thought about how we biased mortals might react to truly unbiased decision making. 🤯

So while creating unbiased systems is important, it doesn’t guarantee success in the messy real world once decisions are made. Ultimately debiasing systems might not matter if we don’t have the backbone to stick by unpopular results. (That’s not a dig at the Boston school system, this scenario was probably guaranteed to be a mess no matter what.) 💪

Src: Boston Globe

Lazy Faire 🇺🇸🇨🇳

At the US government’s current rate of uninvolvement in the AI sector, China will overtake it in its quest for AI overlord status by the end of the year, at least when it comes to spending. The rest might not be far behind, though. 💰

One of the recommendations from a subcommittee is to expedite the approval of the OPEN Government Data Act. You know, giving citizens the right to data they actually own as taxpayers. 🙄

My hunch is that the way Trump deals with something he doesn’t understand (and might not even admit to himself that he doesn’t understand) is to ignore it, thus the administration’s lack of a plan. 😖

Src: The Next Web

That’s Instructor AI, Cadet

The Air Force is embracing AI to mix up their training process, and could be inventing the future of all education while they’re at it. 🎓

The overall objective is to move away from an “industrial age” training model with pre-set timetables and instruction plans to one that adapts to each airman’s learning pace.

Hmmmmm….what else follows an industrial model? 🤔🎒🚌🏫

Among other benefits, they appear to show that artificial intelligence “coaches” are highly effective at gathering data from the process of training any particular pilot, and then refocusing that process on exactly the areas in which the student needs the most help.

This is really, really cool. Personalized training could be here, and if it works for pilots of multi-million-dollar aircraft, it should be able to work in most other fields no problem. 🍰
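As a toy illustration of that “refocus on the weakest area” idea from the quote above: an AI coach that tracks a per-skill mastery score just needs to keep pointing the student at whichever skill scores lowest. (The skill names and numbers here are invented; the Air Force system’s internals aren’t public.)

```python
# Toy sketch: an adaptive "coach" that refocuses training on the area
# where a student's mastery score is lowest. Skill names and scores are
# made up for illustration only.

def next_focus(mastery):
    """Return the skill with the lowest mastery score."""
    return min(mastery, key=mastery.get)

# One airman's (hypothetical) scores after a round of assessments:
mastery = {"landing": 0.92, "formation": 0.78, "instruments": 0.61, "emergency": 0.85}

print(next_focus(mastery))  # the coach drills instruments next
```

The real system presumably updates those scores continuously from training data, but the core loop — assess, find the gap, retarget — is that simple.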

Src: Federal News Radio

Bolt On a Watchdog

IBM has launched a tool as part of their cloud platform that detects bias and adds a dash of explainability to AI models. ☁️

It looks like this might be the new competition between service providers, and that’s not a bad thing. A huge upside of AI is that it can make decisions free of the bias that infects humanity, but it doesn’t do so by magic. A lot of bias can be added accidentally (or overtly, of course) by the humans collecting the data and building the systems. Hopefully these new tools start paving the way for a less biased future. ⚖️

The fully automated SaaS explains decision-making and detects bias in AI models at runtime — so as decisions are being made — which means it’s capturing “potentially unfair outcomes as they occur”, as IBM puts it.

It will also automatically recommend data to add to the model to help mitigate any bias that has been detected.
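For a sense of what “detecting bias at runtime” can mean in practice, here is a minimal sketch of one common fairness check, the disparate impact ratio: the rate of favorable outcomes for a protected group divided by the rate for a reference group. This is a generic illustration, not IBM’s actual method; the data, group labels, and 80% threshold are assumptions.

```python
# Minimal sketch of a disparate impact check over a stream of decisions.
# Illustrative only -- not IBM's API; labels and threshold are assumptions.

def disparate_impact(outcomes, groups, protected, reference):
    """outcomes: 0/1 decisions; groups: group label for each decision."""
    def rate(g):
        selected = [o for o, grp in zip(outcomes, groups) if grp == g]
        return sum(selected) / len(selected) if selected else 0.0
    ref_rate = rate(reference)
    return rate(protected) / ref_rate if ref_rate else 0.0

# Made-up decisions as they occur at runtime:
outcomes = [1, 0, 1, 1, 0, 1, 0, 0]
groups   = ["a", "a", "a", "a", "b", "b", "b", "b"]

ratio = disparate_impact(outcomes, groups, protected="b", reference="a")
# Common rule of thumb ("four-fifths rule"): flag if the ratio drops below 0.8
biased = ratio < 0.8
```

A monitoring tool would run a check like this continuously as decisions stream in, which is how “potentially unfair outcomes” get caught as they occur rather than in a post-mortem audit.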

And there is a win for centaurs: 🙌

it will be both selling AI, ‘a fix’ for AI’s imperfections, and experts to help smooth any wrinkles when enterprises are trying to fix their AIs… Which suggests that while AI will indeed remove some jobs, automation will be busy creating other types of work.

Src: TechCrunch

Joey Stigz on AI 🤖

If you’re interested in the societal and economic implications of AI, this article is worth a read. A few points that stuck out to me: 👇

  • AI can be hugely beneficial, but right now it’s not trending in that direction
  • Tech companies shouldn’t be in charge of regulating themselves, which means decision makers need to educate themselves
  • People are starting to value privacy more and become more wary of surveillance and data collection
  • AI will “take” jobs, but there will still be plenty of uniquely human roles (health and elderly care, education, etc.). Guess we’ll find out how much we truly value that work.
  • Centaurs

Src: The Guardian

DeepFakes Get More Realistic 😖

Remember back when I said I was terrified about deepfakes? Well, it’s not getting any better. 😟

Apparently researchers at Carnegie Mellon and Facebook Reality Labs decided there was nothing to worry about and that the method for making them needed to be better. So they gave us Recycle-GAN. ♻️

We introduce a data-driven approach for unsupervised video retargeting that translates content from one domain to another while preserving the style native to a domain, i.e., if contents of John Oliver’s speech were to be transferred to Stephen Colbert, then the generated content/speech should be in Stephen Colbert’s style.

Fantastic. Just what we need. A system that transfers content while maintaining stylistic integrity all while not needing a great deal of tweaking/input to make it happen. 😵

Also, why is Facebook helping to make fake content easier to create? Don’t they have enough problems on this front already? 🤔

Src: Carnegie Mellon