AI Could Create A Defense Dark Horse 🐎

A lot of focus is placed on the US-China AI space race (guilty), but the nature of AI could make for a surprise victor. Or at least a leveling of the playing field. 🚜

There is a risk that the United States, like many leading powers in the past, could take an excessively cautious approach to the adoption of AI capabilities because it currently feels secure in its conventional military superiority.

I noticed an interesting note in the piece: arms regulations, by and large, aren't placed on useful defense technologies that are easily spread, like tanks and jets ("easily spread" being relative in this case). Compare that to nukes, which are heavily regulated but hard to manufacture anyway. 🏭

AI is not subject to the same manufacturing difficulties and is far more broadly useful. It is also difficult to draw a clear line between commercial and military uses. All of this creates a scenario that will be tough to regulate, with nearly all governments incentivized to take a shot. Interesting times ahead. 🔮

Src: Foreign Policy

That’s Instructor AI, Cadet

The Air Force is embracing AI to mix up its training process, and could be inventing the future of all education while it's at it. 🎓

The overall objective is to move away from an "industrial age" training model with pre-set timetables and instruction plans to one that adapts to each airman's learning pace.

Hmmmmm… what else follows an industrial model? 🤔🎒🚌🏫

Among other benefits, they appear to show that artificial intelligence "coaches" are highly effective at gathering data from the process of training any particular pilot, and then refocusing that process on exactly the areas in which the student needs the most help.

This is really, really cool. Personalized training could be here, and if it works for pilots of multi-million-dollar aircraft, it should work in most other fields no problem. 🍰

Src: Federal News Radio

Artificially Wartelligent 💣

Surprising no one, the Pentagon wants more of that AI goodness in its weaponry. They want it about $2 billion worth of bad. 💰

But having AI be able to explain its decision making appears to be vitally important, which is nice to see. It sounds like plenty of people are uneasy with the idea of robots doing what they want with no oversight, at least in war scenarios. 👍

enabling AI systems to make decisions even when distractions are all around, and to then explain those decisions to their operators will be "critically important…in a warfighting scenario."

Src: The Verge

With Great Computing Power Comes Great Responsibility? 🚨

Maybe that open letter decrying autonomous weapons wasn't the best choice? 🤔

Relax, nothing crazy happened. Paul Scharre just brought up some really good points in this interview with the MIT Tech Review. They boil down to this: the best way to shape the smart weapons sector is to help educate and steer policy, not stay away from it. 🔖

The open letter is also the typical tech sector response to a problem like this: avoid it and shift blame. "We're just engineers." 🙄

Smart weapons are coming one way or another, and I like the idea of having the people concerned about them involved in their creation and regulation. ⚒

Src: MIT Tech Review

The Tangled Web We Weave 🌐

AI and nationalism are strange bedfellows. On the one hand, most research at this point has been collaborative and open. Indeed, a lot of advances are open source and practically everything gets a paper written explaining the process. On the other hand, the implications for military use and national advancement are very real. 🏛

The US and China are the two main players in this slowly unfolding drama. The big tech companies of each are building research centers in the other, and are likely attracting the bulk of the world's talent. But the two are beginning to diverge in a major way: China's AI industry is heavily backed by the government and follows a policy of dual use, commercial and military. The US government is basically staying out of it (probably because most of the decision makers don't understand it), and employees are calling for their companies to stay out of military contracts and applications. 🇨🇳🇺🇸

No one benefits if an isolationist approach is taken, but nothing good will happen if the realities of what could result from partnerships and investments are ignored. China seems to want all joint ventures to skew towards benefitting them. Could we be headed towards a Cold War? ⛄️

Could AI spark a new wave of spying? Industrial espionage, asset development and exploitation, academic pillaging, funding stipulations, code breaking? 🕵️‍♀️🕵️‍♂️

The post on ASPI does a great job breaking down the situation and outlining what's going on in China. Plus they recommend some approaches to potential solutions. Ultimately it feels like this is going to become a traditional battle between two countries that manifests in entirely new ways due to the technology involved. 🗺

Src: ASPI

Tech Stands Against Smart Weapons ✋

Let's be honest, autonomous weapons powered by AI are a real possibility. Smart subs anyone? But a bunch of tech and AI people are supporting the Lethal Autonomous Weapons Pledge from the Future of Life Institute. Why? 🚫

In this light, we the undersigned agree that the decision to take a human life should never be delegated to a machine. There is a moral component to this position, that we should not allow machines to make life-taking decisions for which others – or nobody – will be culpable. There is also a powerful pragmatic argument: lethal autonomous weapons, selecting and engaging targets without human intervention, would be dangerously destabilizing for every country and individual.

Src: Future of Life

The Arms Race Begins 🔫

China is developing autonomous AI-powered attack submarines. So this should be fun. Here are a couple of totally not terrifying things about them. 🙁

"The AI has no soul. It is perfect for this kind of job," said Lin Yang, Chief Scientist on the project. "[An AI sub] can be instructed to take down a nuclear-powered submarine or other high-value targets. It can even perform a kamikaze strike."

Yup, not scary at all. 😨

It's the decision-making that will cause the most concern, as the AI is being designed not to seek input during the course of a mission.

Totally not terrifying. 😰

Src: AI News