Force multipliers

I found this RedState article about an Apache helicopter launching the ALTIUS-700 "medium-range launched effect (MR-LE)" interesting. The author mentioned that he was "not clear on what the difference is between a drone and a 'medium-range launched effect,'" so as usual I asked Grok:

All launched effects are a type of drone (or UAV), but not all drones are launched effects. The term "medium-range launched effect" specifically refers to a tactical, host-platform-deployed, often expendable unmanned system optimized for extending a crewed platform's reach in contested environments, frequently acting as a loitering munition when armed. It blurs the line between a reusable reconnaissance drone and a guided missile by adding loiter, decision-making, and standoff capability.

I like to run these stories by you guys, because I'm interested in the developments but have too little background knowledge to put them in context.
3 comments:
So, the F-35 is the first fighter plane we've built that isn't a fighter plane with computers aboard: it is a computer with a fighter built around it. This enables it to control a vast array of these 'launched effects,' effectively allowing it to dominate an air battle it doesn't have to actually approach.
Likewise, they can integrate with each other along a very long line, so you could have a phalanx (so to speak) of combat controllers with perfect situational awareness of everything along that line. Each node of this would be highly defensible -- it's a pretty good fighter -- should an enemy missile or aircraft penetrate to it. The idea, however, is to conduct the war increasingly away from the human component of the kill chain.
What Anthropic refused to do, a few weeks back, was to help remove the human from the kill chain. At that point you could have robots directing the forward-deployed robots in the same role: slightly faster, perhaps more complete in its ability to process and understand the data provided, but without a moral stake or a capacity to grapple with the world that human beings experience. That's what DOD wants to do, though; and they'll find someone to do it, even if Anthropic continues to refuse.
I'm a bit more sanguine about this sort of thing than is Grim.
First, a small digression: what Anthropic wanted to do was impose its judgment on DoD, and to do so even though US law already bars the sort of thing Anthropic was claiming it wanted to bar contractually. It is silly to the point of disingenuous for Anthropic to claim it didn't trust DoD to follow the law while trusting it to follow contract terms. It is, of course, Anthropic's right to specify how its product gets used, but it's also DoD's right to demur and to seek other sources for the AI capability.
Regarding humans in the decision-making process, it's enough for me to have the human authorize and commit the forward-deployed robots to the battle and to have the robots execute the relevant tactics, provided the human is actively monitoring the battle and has a real-time override switch that would allow him to cancel the tactic(s) in use and order another tactic or tactics to be used, and provided the human also has a real-time kill switch that would allow him to cancel the robots altogether and recall them.
"Order another tactic:" I would expect these OPLANs (Operation Plans) to have a variety of Annexes, each of which is a different tactic or set of tactics to deal with different battle scenarios, most of which would be employable in sequence as the battle-in-progress unfolds, or as stand-alone if the battle progressed more-or-less as planned or expected. We were up to Annex H in our basic OPLAN when I was at Sembach, and I suspect that was a relatively short list as such things go.
Eric Hines
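The authorize-commit-override-recall scheme Eric describes can be pictured as a small state machine: a human commits a plan whose annexes are alternative tactics, and retains a real-time override (switch annexes) and a kill switch (recall everything). The following is a minimal hypothetical sketch; every name here is invented for illustration and does not correspond to any real system.

```python
# Hypothetical illustration of a human-supervised control scheme:
# annexes are alternative tactics; the human commits, overrides, or recalls.
from dataclasses import dataclass, field
from enum import Enum, auto


class State(Enum):
    PLANNED = auto()
    COMMITTED = auto()
    RECALLED = auto()


@dataclass
class Oplan:
    # Annex letter -> tactic, e.g. {"A": "screening sweep", "B": "standoff strike"}
    annexes: dict[str, str]
    current: str
    state: State = State.PLANNED
    log: list[str] = field(default_factory=list)

    def commit(self) -> None:
        """Human authorizes the units to execute the current annex."""
        self.state = State.COMMITTED
        self.log.append(f"committed annex {self.current}")

    def override(self, annex: str) -> None:
        """Real-time override: cancel the tactic in use and order another."""
        if self.state is not State.COMMITTED:
            raise RuntimeError("override only valid while committed")
        self.current = annex
        self.log.append(f"override -> annex {annex}")

    def kill_switch(self) -> None:
        """Cancel the units altogether and recall them."""
        self.state = State.RECALLED
        self.log.append("kill switch: all units recalled")


plan = Oplan(annexes={"A": "screening sweep", "B": "standoff strike"}, current="A")
plan.commit()
plan.override("B")   # battle unfolds differently; human orders Annex B
plan.kill_switch()   # human cancels and recalls the units
print(plan.log)
```

The point of the sketch is only that the human stays in the loop at the level of committing and cancelling tactics, while execution within a committed tactic is autonomous.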
It is silly to the point of disingenuous for Anthropic to claim it didn't trust DoD to follow the law while trusting it to follow contract terms. It is, of course Anthropic's right to specify how its product gets used, but it's also DoD's right to demur and to seek other sources for the AI capability.
I have what I think is a pragmatic issue with the idea of taking humans out of the kill chain; but on the other hand, I also don't necessarily agree that "it is of course Anthropic's right to specify how its product gets used." If I buy a thing, whatever that thing is, I expect to own it and get to use it however I want.
Somehow software companies and other tech firms have managed to carve out an exception for themselves, whereby we pay them a license to use their products in the ways they prefer instead of 'owning' a thing. Even legacy places like Microsoft have figured out how to force everyone into this model: I have owned many copies of MS Word, Windows, etc., from the days when you could own such things. They eventually quit supporting them, so if you wanted to continue to use the things you built yourself using products you owned, you had to let them swap you into the licensed-permission model product instead.
I don't particularly care for that approach. That's separate, however, from my concerns that every single story we have about stuff like this turns into Skynet or the Matrix.