Israel has developed an Artificial Intelligence (AI)-based targeting process.
Collecting data using signals intelligence (SIGINT), visual intelligence (VISINT), human intelligence (HUMINT), geospatial intelligence (GEOINT), and more, the IDF has mountains of raw data that must be combed through to find the key pieces necessary to carry out a strike.

"Gospel" used AI to generate recommendations for troops in the research division of Military Intelligence, which used them to produce quality targets and then passed them on to the IAF to strike.
Against a non-peer adversary like Hamas, this is just one more tool in the toolkit. It occurs to me, however, that this greatly imperils the laws of war should the tool be employed by a near-peer adversary (or a peer, or a better).
If you're not very concerned about ethics, you can fully automate this process: targets can go straight from "Gospel" to automated drone strikes or artillery the moment the program tags them, with no human review in between. Just let it roll until "Gospel" stops telling you there's anything to hit, or you run out of ammo for the drones/guns.
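To make that concrete, here is a toy sketch of the loop in Python. Everything in it is hypothetical; the function names merely stand in for "Gospel" and for the weapons, and the numbers are arbitrary. The point is structural: once the recommender's output feeds the weapons directly, the only stopping conditions left are an empty target list and an empty magazine.

```python
import random

def recommend_targets():
    """Stand-in for the AI recommender: returns zero or more tagged candidates."""
    return [f"target-{random.randint(0, 999)}"
            for _ in range(random.randint(0, 3))]

def strike(target):
    """Stand-in for the drones/guns."""
    print(f"striking {target}")

ammo = 10
while ammo > 0:
    candidates = recommend_targets()
    if not candidates:
        break                 # the AI stops telling us there's anything to hit
    for target in candidates:
        if ammo == 0:
            break             # ...or we run out of ammo
        strike(target)        # tagged targets go straight to the weapons
        ammo -= 1
```

Note what is absent: no analyst, no lawyer, no human anywhere in the cycle.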
As noted in our recent discussion about OODA loops, the ability to make a decision and act on it faster than your opponent can be the fundamental determinant of victory in war. The AI shortens the decision chain; eliminating the human lawyers shortens it further. A peer-ish adversary using AI would quickly be inside our OODA loops as long as we continued to use our lawyers.
The only pragmatic way to avoid defeat would be to eliminate the lawyers and automate our own weapons' decision-making. You might be able to program the AI with the appropriate lawyerly criteria; but even then, you're adding extra processing cycles that the enemy AIs don't have to run. That too would allow the enemy to get inside your OODA loop.
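The latency argument is easy to put in numbers. A toy calculation, with made-up figures chosen only for round arithmetic:

```python
# Toy arithmetic for the OODA-loop argument. The numbers are illustrative,
# not real figures: assume the AI needs 1.0 s per observe-orient-decide-act
# cycle, and that running the lawyerly criteria adds 0.5 s per cycle.

AI_CYCLE = 1.0       # seconds per decision cycle, AI alone
LEGAL_REVIEW = 0.5   # extra seconds per cycle for the lawyerly checks
WINDOW = 60.0        # one minute of engagement

unconstrained = WINDOW / AI_CYCLE
constrained = WINDOW / (AI_CYCLE + LEGAL_REVIEW)

print(f"unconstrained side: {unconstrained:.0f} cycles per minute")
print(f"constrained side:   {constrained:.0f} cycles per minute")
# 60 vs. 40: the unconstrained side acts half again as often, which is
# what "getting inside the other side's OODA loop" cashes out to here.
```

Whatever the real numbers turn out to be, the asymmetry only ever runs one way: the side that checks loses cycles to the side that doesn't.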
As a consequence, the introduction of this AI-based targeting is likely to eliminate the laws of war as a practical feature of modern combat. Even if we avoid Skynet-style AIs, we will end up creating unethical ones because they'll be the only ones that can compete. The alternative is defeat by an even more vicious power; either way, we end up with worse wars and evil AIs in control of the weapons of war.
I think that's a good bit of analysis you've done there. You're probably not wrong about how automated decisions in weapons systems are going to unfold.
When I read it, though, I get this eerie chill, like I'm watching Saberhagen's Berserker series come true.
Oddly enough, I also had a sci-fi thought while pondering it. In my case it was from Dune, in which there was a two-generation war to eliminate man-made intelligence after it got going. That's how you got Mentat assassins, well, Mentats generally.
The only pragmatic way to avoid defeat would be to eliminate the lawyers and automate our own weapons' decision-making.
I'd be happy to start by eliminating the lawyers and, you know, trusting our combat leadership and combatants to make, in the main, the proper decisions in the instant of need, if not in properly and effectively trained anticipation of need (which extends OODA backward in time).
In the end, whoever has the enemy inside his OODA loop will be faced with a decision that no AI should ever be trusted with: choosing between abject surrender and enslavement or resorting to area weapons up to and including MOABs and nuclear weapons to overwhelm the enemy nation whose forces are about to win.
Eric Hines
Too much automation could be a bad thing, too, as it will have an inevitable tendency to eliminate the redundant human factors, which could leave you with a severe shortage of trigger pullers, or lots of trigger pullers with no targets, if the AI goes south. While not directly comparable, automated document handling produced a situation in our company where it was well known that we'd be buried by an avalanche of paper if circumstances ever forced us to do without it, because we didn't have sufficient staff; yet keeping sufficient staff for that contingency was unnecessary overhead.
...it [automation] will have an inevitable tendency to eliminate the redundant human factors which could leave you with a severe shortage of trigger pullers, or lots of trigger pullers with no targets....
ReplyDeleteOr lots of trigger pullers with lots of targets but no idea how to service them manually--which makes the trigger pullers themselves targets.
Eric Hines
Ok Skynet. Good job.