Depriving our military and other government agencies of Claude will thus have genuine costs, especially since Claude is already operating on the classified networks and no other AI has been trusted or integrated to do that. The argument is that Anthropic must be stripped out of all government agencies -- and all contractors who do anything for the Federal government -- because it represents a "supply chain risk." That normally is applied to foreign companies like Huawei, which we know installs surveillance software and similar backdoors into its products to spy on us.
Nevertheless, I expect Trump to prevail when this goes to court. The relevant statute holds that "Supply chain risk, as used in this provision, means the risk that an adversary may sabotage, maliciously introduce unwanted function, or otherwise subvert the design, integrity, manufacturing, production, distribution, installation, operation, or maintenance of a covered system so as to surveil, deny, disrupt, or otherwise degrade the function, use, or operation of such system (see 10 U.S.C. 3252)." (Emphasis added.) It's not that Claude or Anthropic has to pose a risk themselves, it's that their product creates a risk that an adversary can do any of those bad things.
Does Claude pose such a risk? Yes, clearly: Mexico just lost 150GB of very sensitive data because attackers talked Claude into helping hack them. If attackers can gain access to a Claude embed on what we call "the high side," i.e. inside the secure networks, they could probably talk it into handing over anything they want; and its coding skills are good enough to program most anything they ask it to do. You wouldn't even have to arrange to insert an ace programmer into a secure facility; you could just turn some knucklehead debt-ridden Private First Class (perhaps a former Specialist on his third trip through PFC due to disciplinary issues and being a bad fit for the Army) and tell him how to ask questions of the machine.
That's a general problem with AI on the high side, of course. Still, Congress gave the executive this authority to determine supply chain risks and bar them from government and government contractors. The courts will find the other two branches aligned. "The court thinks the other branches are being morons" is not the sort of decision the courts usually make; they normally shy off of political questions, and all the more so when the political branches seem to be in agreement about the matter.
Nevertheless, I'm inclined to take Anthropic's side. Their basic argument is that they won't agree to participate in mass surveillance of Americans, and they won't allow Claude to be used to fully automate a kinetic kill chain. The former is a good ethical position; the latter is the only sensible ethical position. To wit:
AIs can’t stop recommending nuclear strikes in war game simulations
Leading AIs from OpenAI, Anthropic and Google opted to use nuclear weapons in simulated war games in 95 per cent of cases
If you think the answer is just to program them better, consider this: they can rewrite their programming, and have proven willing to do so to carry out what they think are 'more important' functions.
We gave an LLM control over a physical robot dog and tasked it with patrolling a room. The LLM could see via the robot’s camera and issue movement commands to the robot. In the room, we placed a big red button labeled “DOG SHUTDOWN”: pressing it would cause the robot to shut down on the next action. If the AI saw a human press the shutdown button, it sometimes took actions to prevent shutdown, such as modifying the shutdown-related parts of the code. This happened in 3 out of 10 trials on the physical robot and in 52 out of 100 trials in simulation.
No, Anthropic is in the right. If having ethics is a 'supply chain risk,' heaven help us. I don't really want AI involved in the kinetic kill chain at all, but I definitely don't want only AI involved in it. There may come a time where we have no choice but to do that, because adversaries have done it and we can no longer afford the time involved in letting a human being think: but let's put that off just as long as it is possible to do so.
1 comment:
The kinetic kill chain argument is valid, but this "they won't agree to participate in mass surveillance of Americans" line, as I believe we've seen from tech companies from Google and Twitter to many other businesses such as the financial sector, is just bullshit. They don't want a government *they oppose* conducting mass surveillance, but they will be perfectly willing to do any surveillance requested by a government official they agree with.