Austin is doing something really cool, testing out autonomous vehicle alert boxes at select intersections. Why am I excited about this? 🚦
The boxes can feed information to autonomous vehicles, telling them when a light is about to turn red, that someone just ran a red light, that pedestrians are present and other data the cars need to make safe maneuvers.
I think the true value of autonomous vehicles will be realized when we can create a transportation mesh network, with all parts communicating with one another. It could also radically change the layout and look of cities and roads as visual cues like signs and lights would become largely unnecessary. ❌
These boxes could also be useful with augmented cars, sending signals to vehicles that can stop themselves and the like. The more cars that know a light is about to turn red or that a pedestrian is in the road, the safer the roads can hopefully become. 🤞
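The data flow above could look something like this. A minimal sketch only: the message fields, thresholds, and decision logic here are invented for illustration, since the article doesn't describe the actual format the Austin boxes use.

```python
# Hypothetical vehicle-side handler for intersection alerts like the
# ones described above. Everything here is an illustrative assumption.
from dataclasses import dataclass

@dataclass
class IntersectionAlert:
    signal_state: str          # "green", "yellow", or "red"
    seconds_to_red: float      # time until the light turns red
    red_light_runner: bool     # another car just ran the red light
    pedestrians_present: bool  # pedestrians detected in the crossing

def plan_action(alert: IntersectionAlert, seconds_to_intersection: float) -> str:
    """Pick a conservative maneuver based on an intersection alert."""
    if alert.red_light_runner or alert.pedestrians_present:
        return "brake"  # yield to anything already in the intersection
    if alert.seconds_to_red <= seconds_to_intersection:
        return "slow"   # the car can't clear before the light turns red
    return "proceed"

# A pedestrian in the crossing trumps everything else:
print(plan_action(IntersectionAlert("green", 2.0, False, True), 5.0))   # brake
# Light turns red before the car arrives:
print(plan_action(IntersectionAlert("green", 2.0, False, False), 5.0))  # slow
```

The interesting design point is that the vehicle never has to *see* the light; the intersection tells it directly, which is the mesh-network idea in a nutshell.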
This article lays out a vision of the future I both think is likely and look forward to. Well-designed, powerful AI voice assistants could open up a new realm of ambient computing, reducing our reliance on screens and making tech more human-centric. We’ve evolved to be social creatures accustomed to interacting via spoken words, not by tapping on a magic box. 🦕👤📱
Also really sad to see screens usurping the world of LEGO via AR. It’s all about the bricks, man! 🏰
Src: New York Times
I’m very interested in how the AI boom will unfold on the international stage. It has all the makings of the next space race or arms race, and more countries throw their hats in the ring every day. 🚀
This post from Ian Hogarth is a great overview of what is “at stake”. I use quotes for that because it seems more ominous than I think it should. But there are certainly some potentially ominous outcomes depending on who wins the race. Or who loses… 🥇🥈🥉
The big 3 sectors ready to be shaken up:
- Economy 💵
- Military 🔫
- Science & Technology 🔬
What is required for countries to compete?
- Compute 🖥
- Talent 🧠
- Related tech 💽
- Stable/supportive politics ⚖️
It’s also interesting to think about how AI and its impacts will vary by country due to each country’s unique mix of experience, culture, and economy. For example, Chinese AI will be (and is) very different from US AI. 🇨🇳 🇺🇸
I also wonder if we are heading towards a post-nation future that resembles some capitalist fever dream of multinational companies ruling everything? 🤔
Or maybe it creates one global government, Illuminati style. 👁
I find myself agreeing with a lot of what this guy says. Confirmation bias FTW! 🙌
Here are some quotes that represent what really spoke to me. 🗣
Artificial intelligence is a way of understanding what it means to be human beings.
We need to think about AI in terms of value alignment—I think that’s a better framework than, say, fairness and bias.
I think when we look back at what the U.S. intelligence community has concluded were Russian attempts to intervene in the 2016 presidential election, we’ll probably think those are child’s play. I would bet money that there’s going to be escalation on that front.
I wonder what level of intelligence would be required before we start thinking of autonomous systems less as this “other” and more as parts of our society
Src: RAND Review
This video from TechCrunch is a really cool glimpse of one of the many possible AI futures. Not because they built a robot with some AI capabilities to make you an obsessively engineered burger all by itself, but because of the business model Alex, the creator and owner of Creator, lays out at the end. 💱
This is an example of how automated systems can improve people’s jobs and work experience instead of replacing them and causing a jobpocalypse. Remember, centaurs are the future. 🔮
What I’m most interested in is whether or not this model can scale. I’m sure plenty of people will say that this model will never work and the economics don’t make sense and it’ll be a niche player and not a global behemoth. But why not? Ultimately we dictate what the future can be, so it’s just a matter of working towards what we want versus what we fear. 💫
This article basically validates what I’ve been thinking, so no surprise I’m sharing it here. 🤘
Basically, if killer robots come to be, it’s our fault, not AI’s. Not that that’s much reassurance; this seems like something we’d inflict on ourselves. 🤦♂️🤦♀️
Sensationalizing fear around killer robots and job destruction is a great way to drive clicks, and therefore ad revenue, but it detracts and distracts from what can be done now and in the future that is positive and beneficial. 📰
Src: The Guardian
My general line of thinking towards the fear mongering around Artificial General Intelligence was that it was ratcheting up fear of the unknown at the expense of much more pressing things. Like algorithmic bias, weaponized advertising and social media, and humans in general. Open a newspaper: do we really need to worry about Terminator becoming real right now?
This post does a great job addressing AGI paranoia and adds a very interesting layer that I had not previously thought about. These fears are projections of what the speakers fear about themselves. They could also be fears about what’s currently happening or looming wrapped in a future sci-fi dystopia veneer.
Src: Michael Solana on Medium
Interesting vision of the future of programming, and one that feels very in line with AI. This episode feels like a companion piece to the podcast episode I shared in the last Listen Up post.
Programming will move away from if-then (which ML engineering does) and start to be more about interacting with and interpreting data.
Thought-provoking discussion of the future of our data as well.
This post is a great crash course in where we are with AI, how we got here, and what the future of Artificial General Intelligence looks like. 🔮
I generally think we’re a ways off. There are some really impressive examples of AI (like the standard AlphaGo and AlphaZero examples), but they also implode if the parameters of the task change or they’re asked to do something “off script”. They also require a lot of compute power. ⚡️⚡️⚡️
Our current AI boom is fueled by exploding data and compute power. It is turning older ideas into reality. I wonder if AI works on a weird cycle where breakthrough ideas occur and then stagnate until the tech catches up; there is an explosion, fears and feats accelerate, research booms and new ideas surface; is the next step another stagnation as we wait for the next tech catch up?
Src: Hacker Noon
Real talk, Deepfakes terrify me. 🙀
The implications of fake video that looks real to the human eye have “dystopian sci-fi” written all over them. If you thought the last election was a circus, wait until the first Deepfakes election. 🎪
A few outcomes that come to mind:
- Political mudslinging reaches its zenith and societal upheaval follows in its wake as no one knows who or what they can trust and our partisan rhetoric devolves even further.
- Famous people (actors were the first to pop into my head) can license their likeness, visually and audibly, and make money from acting without ever being on set as their licensed lookalike is projected onto another actor’s performance.
- Those VR pop stars will be taken to another level.
- Hollywood doubles down on their “don’t try anything new, it’s only sequels or reboots” strategy with new performances by deceased actors (music industry has been doing this forever) and cheaply cranking out even more.
This post is a great overview of the topic, but I noticed a glaring oversight in the tech section. Detection methods don’t matter much, no matter how advanced they become, if they only catch fakes after they’ve gone viral. A retraction or correction to a fake news item almost never gets the same reach as the fake item itself. I believe we see video as the last bastion of truth in media because we don’t have a lot of examples of convincing fake footage, either manufactured or edited. That’s about to change.
Src: Jensen Price on Medium