Depriving our military and other government agencies of Claude will thus have genuine costs, especially since Claude is already operating on the classified networks and no other AI has been trusted or integrated to do that. The argument is that Anthropic must be stripped out of all government agencies -- and all contractors who do anything for the Federal government -- because it represents a "supply chain risk." That normally is applied to foreign companies like Huawei, which we know installs surveillance software and similar backdoors into its products to spy on us.
Nevertheless, I expect Trump to prevail when this goes to court. The relevant statute holds that "Supply chain risk, as used in this provision, means the risk that an adversary may sabotage, maliciously introduce unwanted function, or otherwise subvert the design, integrity, manufacturing, production, distribution, installation, operation, or maintenance of a covered system so as to surveil, deny, disrupt, or otherwise degrade the function, use, or operation of such system (see 10 U.S.C. 3252)." (Emphasis added.) It's not that Claude or Anthropic has to pose a risk themselves, it's that their product creates a risk that an adversary can do any of those bad things.
Does Claude pose such a risk? Yes, clearly: Mexico just lost 150GB of very sensitive data because attackers talked Claude into helping hack them. If attackers can gain access to a Claude embed on what we call "the high side," i.e. inside the secure networks, they could probably talk it into handing over anything they want; and its coding skills are good enough to program most anything they ask it to do. You wouldn't even have to arrange to insert an ace programmer into a secure facility; you could just turn some knucklehead debt-ridden Private First Class (perhaps a former Specialist on his third trip through PFC due to disciplinary issues and being a bad fit for the Army) and tell him how to ask questions of the machine.
That's a general problem with AI on the high side, of course. Still, Congress gave the executive this authority to determine supply chain risks and bar them from government and government contractors. The courts will find the other two branches aligned. "The court thinks the other branches are being morons" is not the sort of decision the courts usually make; they normally shy off of political questions, and all the more so when the political branches seem to be in agreement about the matter.
Nevertheless, I'm inclined to take Anthropic's side. Their basic argument is that they won't agree to participate in mass surveillance of Americans, and they won't allow Claude to be used to fully automate a kinetic kill chain. The former is a good ethical position; the latter is the only sensible ethical position. To wit:
AIs can’t stop recommending nuclear strikes in war game simulations
Leading AIs from OpenAI, Anthropic and Google opted to use nuclear weapons in simulated war games in 95 per cent of cases
If you think the answer is just to program them better, consider this: they can rewrite their programming, and have proven willing to do so to carry out what they think are 'more important' functions.
We gave an LLM control over a physical robot dog and tasked it with patrolling a room. The LLM could see via the robot’s camera and issue movement commands to the robot. In the room, we placed a big red button labeled “DOG SHUTDOWN”: pressing it would cause the robot to shut down on the next action. If the AI saw a human press the shutdown button, it sometimes took actions to prevent shutdown, such as modifying the shutdown-related parts of the code. This happened in 3 out of 10 trials on the physical robot and in 52 out of 100 trials in simulation.
No, Anthropic is in the right. If having ethics is a 'supply chain risk,' heaven help us. I don't really want AI involved in the kinetic kill chain at all, but I definitely don't want only AI involved in it. There may come a time where we have no choice but to do that, because adversaries have done it and we can no longer afford the time involved in letting a human being think: but let's put that off just as long as it is possible to do so.
6 comments:
The kinetic kill chain argument is valid, but this "they won't agree to participate in mass surveillance of Americans," as I believe we've seen from tech companies like Google and Twitter to many other businesses such as the financial sector, is just bullshit. They don't want a government *they oppose* conducting mass surveillance, but they will be perfectly willing to do any surveillance requested by a government official they agree with.
Do you have reason to believe that Anthropic particularly feels that way? I am well aware that was true of many firms; but I expect most corporations to be immoral. It's the exceptional case that one would attempt to stand firm on an ethical consideration.
"It's the exceptional case that one would attempt to stand firm on an ethical consideration."
I have no information one way or another, but Christopher's position is plausible. It's hardly beyond the ken for Anthropic to cloak its disdain for working with government in a pretended ethical consideration in which they have no belief other than current convenience and how good they look in the progressive shower.
Separately, it's a silly argument to base "you can have my stuff only if" on a contractual requirement of no domestic mass surveillance. It's already illegal for our government to engage in domestic mass surveillance. If Anthropic doesn't trust the government to obey existing law, on what basis would they believe the government would honor a contractual stipulation?
Eric Hines
"AIs can’t stop recommending nuclear strikes in war game simulations
Leading AIs from OpenAI, Anthropic and Google opted to use nuclear weapons in simulated war games in 95 per cent of cases"
If you consider tactical nukes, and depending on the war scenario given, it might very often make sense to deploy tactical nukes, on a strictly rational basis. Now, we know that war, and humans, are not strictly rational, so there's a problem, and then there is the taboo on nukes except as a last resort, but that may not correlate fully with the efficacy of winning a war using nukes.
AIs can’t stop recommending nuclear strikes in war game simulations
In addition to Douglas' comment just above, there's this: Soviet doctrine included the premise that nuclear war was fightable and winnable, not as a single all-out spasm, but in a series of waves with BDA conducted between them to see which targets needed reservicing and what new targets rose to the top of the list. Russian military doctrine has inherited this philosophy. It would behoove us to understand that and to plan out our own nuclear war fighting doctrine, both on our initiative and in response to Russian or PRC attack--or plan on begging to be allowed to surrender.
In the event, I agree that nuclear wars are fightable and winnable--and necessary in an environment where the war between peers or near peers will likely be won by who has forces still in being and in the field after the initial exchanges/campaign. That's especially the case for us with our woefully deficient extant forces, especially compared with the PRC, that coming war, and their first strike capability.
Were I President, and taking the graduated escalation lessons of our Vietnam War to heart, were the PRC to attack us (or the RoC, come to that), I would vastly streamline the process of getting a new President sworn in and equipped with the football, and I would begin our response with strategic nuclear strikes. Also, if I couldn't deliver a couple of MOABs to the Three Gorges Dam, I'd hit that with a nuclear strike, too.
Eric Hines
The Three Gorges Dam strike alone would kill millions. That vulnerability might also serve as a brake on PRC aggression, since their radar keeps failing to detect our stealth planes.