While writing my post the other day about tech companies and government contracts for AI, I started to realize what a strange situation it is. The brands don’t seem to mix with the goals of the contracts.
Microsoft is the most normal-seeming of the bunch; they are more of a B2B company that has probably been powering government for ages now. The HoloLens is a way for them to be the computing platform of the future and make a play for being the dominant OS again. I would guess it’s the most brazen military use they’ve gone after, but they’re really just looking to become the military’s computing platform in the field. It fits.
Amazon and Google though. Those seem a bit…off.
Google’s involvement in Project Maven fits in that Big G dabbles in everything. They probably saw some interesting AI problem that could potentially be solved by a massive amount of compute and realized they could be paid for it. Also, identifying objects in an image via computer vision is basically a search problem. And search is kind of Google’s thing.
Amazon’s facial recognition push is really strange for a consumer retail and logistics company. It makes a bit more sense if you view it as an AWS project, but even then it seems like a strange choice. Amazon is all about owning every aspect of the customer experience and positioning itself as the only resource a consumer needs when it comes to shopping. Even AWS could be seen as a platform to power other companies in their serving of customers, essentially getting Amazon a cut of any consumer activity that doesn’t happen on its own platform. Rekognition is a weird fit. And for a company so focused on its reputation and on having positive associations for customers, it seems like more of a potential liability than a win.
Ultimately these tech giants are becoming the new GEs. We may know them for a few specific things, but they appear to have designs on making money in every feasible way. Of course, that hasn’t worked out so great for GE in the long run, so this could get interesting.
There has been a trend lately of tech giant employees boycotting their employers’ military and government contracts within the realm of AI. I get it: none of these people signed up to make weapons. That’s putting aside the fact that not all of the boycotted activities were direct weapons projects. But still, they probably took these jobs wanting to improve people’s lives, not take them.
But! I would argue these are exactly the kinds of people we want developing these technologies, especially the iterations that have “intelligence” or “intelligent” attached to them. I want people with moral hang-ups about this tech’s usage to have an active voice in its development process.
Also, there is a long history of milestone tech achievements being directly related to military R&D. Autonomous vehicles, it could be argued, exist in part thanks to DARPA. Google Maps can thank the military’s Global Positioning System for being possible. And, oh yeah, the internet! Basically, this isn’t the first time that tech and war have intermingled.
I fear what gets lost in these individuals’ decision-making is that just because they say “no” doesn’t mean the military is going to say “well, we tried” and walk away. Nor are other countries, friend or foe, going to take any notice of our hesitancy and let it guide their decisions. This means we could be left with only people who have no moral qualms about the work developing the next generation of smart weapons and technologies. And honestly, that thought terrifies me.
- Google. Amazon (not war, but the potential use cases do freak me out). Microsoft.
A lot of focus is placed on the US-China AI space race (guilty), but the nature of AI could make for a surprise victor. Or at least a leveling of the playing field. 🚜
There is a risk that the United States, like many leading powers in the past, could take an excessively cautious approach to the adoption of AI capabilities because it currently feels secure in its conventional military superiority.
I noticed an interesting note in the piece: arms regulations, by and large, aren’t placed on useful defense technologies that are easily spread, like tanks and jets (“easily spread” is relative in this case). Compare that to nukes, which are heavily regulated but hard to manufacture anyway. 🏭
AI is not subject to the same manufacturing difficulties and is far more broadly useful. It is also difficult to draw a clear line between commercial and military uses. All of this creates a scenario that will be tough to regulate, with nearly all governments incentivized to take a shot. Interesting times ahead. 🔮
Src: Foreign Policy
Surprising no one, the Pentagon wants more of that AI goodness in its weaponry. They want it about $2 billion worth of bad. 💰
But having AI be able to explain its decision making appears to be vitally important, which is nice to see. It sounds like plenty of people are uneasy with the idea of robots doing what they want with no oversight, at least in war scenarios. 👍
enabling AI systems to make decisions even when distractions are all around, and to then explain those decisions to their operators will be “critically important…in a warfighting scenario.”
Src: The Verge
Maybe that open letter decrying autonomous weapons wasn’t the best choice? 🤔
Relax, nothing crazy happened. Paul Scharre just brought up some really good points in this interview with the MIT Tech Review. And they boil down to the best way to impact the smart weapons sector is to help educate and steer policy, not stay away from it. 🔖
The open letter is also the typical tech sector response to a problem like this: avoid it and shift blame. “We’re just engineers.” 🙄
Smart weapons are coming one way or another, and I like the idea of having the people concerned about them involved in their creation and regulation. ⚒
Src: MIT Tech Review
Let’s be honest, autonomous weapons powered by AI are a real possibility. Smart subs anyone? But a bunch of tech and AI people are supporting the Lethal Autonomous Weapons Pledge from the Future of Life Institute. Why? 🚫
In this light, we the undersigned agree that the decision to take a human life should never be delegated to a machine. There is a moral component to this position, that we should not allow machines to make life-taking decisions for which others – or nobody – will be culpable. There is also a powerful pragmatic argument: lethal autonomous weapons, selecting and engaging targets without human intervention, would be dangerously destabilizing for every country and individual.
Src: Future of Life
China is developing autonomous AI-powered attack submarines. So this should be fun. Here are a couple totally not terrifying things about them. 🙁
“The AI has no soul. It is perfect for this kind of job,” said Lin Yang, Chief Scientist on the project. “[An AI sub] can be instructed to take down a nuclear-powered submarine or other high-value targets. It can even perform a kamikaze strike.”
Yup, not scary at all. 😨
It’s the decision-making that will cause the most concern as the AI is being designed not to seek input during the course of a mission.
Totally not terrifying. 😰
Src: AI News