Cloudland Canyon





A pretty morning to be out in the world. 

Doomcasting

No good can come of anything, I tell you.
Reflecting on the rise of international trade and finance, former U.S. President Woodrow Wilson once stated that the “effects of war can no longer be confined to the areas of battle” . . . .

Today’s U.S. policymakers seem to have forgotten Wilson’s admonishment. When Trump embarked on this war, he exposed his Gulf partners to unprecedented attacks, and in turn, he disturbed the flows that are the lifeblood of the global economy.
This is, in my opinion, profoundly stupid. Any time someone begins by quoting Woodrow Wilson, you know he is on the wrong track. But what is mostly going on here is the unshakable conviction, in polite society, that anything America does on the world stage–other than sending money to other countries, of course–can only do harm, never good. We should never interfere with the world’s evils, no matter how profound they may be, as in the case of Iran’s demonic theocracy. Because no matter what we do, it can only make things worse.
Well, to be fair, anything we do under a GOP administration can only make things worse.

Election Day Results

Locally, the primary election was mostly a disappointment. My preferred candidate did win the sheriff's race, but in all the other races things went pretty sadly. Oh, well. 

The saddest was the loss of Adam Smith in NC 11. He was a genuinely good candidate, a former Green Beret of proven heroism during the hurricane relief. Instead, we will either continue with our loser, non-performing incumbent, or trade him for a Democrat in what has been one of the reddest districts in the country. The Democrats have a reasonable chance even here this year, between the weakness of the Republican candidate, a history of losing Republican candidates in this district, and of course the intense unpopularity of Donald Trump among those who oppose him. 

For the most part I find elections a reminder of why I favor eliminating government from existence as much as possible. I usually would prefer not to be governed by any of the candidates, nor any of the bureaucrats that they pragmatically end up actually transferring power to so they can get back to fundraising. The government that governs best governs least. 

A New Numbers Station

In Farsi, no less. These can be broadcast in the clear because the cipher used, the one-time pad, is provably unbreakable. Indeed, there are two separate mathematical proofs, because the Soviets and the West each demonstrated it independently. 
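For the curious, the unbreakability rests on a simple mechanism: XOR the message with a truly random key that is at least as long as the message and is never reused, and the ciphertext carries no information about the plaintext. A minimal Python sketch (the message and key here are my own illustration, not anything from an actual broadcast):

```python
import secrets

def otp_encrypt(plaintext: bytes, key: bytes) -> bytes:
    # The proof's conditions: the key must be truly random, at least
    # as long as the message, and used exactly once.
    assert len(key) >= len(plaintext)
    return bytes(p ^ k for p, k in zip(plaintext, key))

# XOR is its own inverse, so decryption is the same operation.
otp_decrypt = otp_encrypt

message = b"ATTACK AT DAWN"
key = secrets.token_bytes(len(message))  # fresh one-time key

ciphertext = otp_encrypt(message, key)
assert otp_decrypt(ciphertext, key) == message
```

Reuse the key even once and the scheme collapses, which is why the stations read out endless fresh digits.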

We talked about a British numbers station some years ago. Good song there. 

The Meeting on the Turret Stairs

On the subject of chivalric romance, the BBC writes on a famous painting by Sir Frederic William Burton. It is based on a medieval Danish ballad and is on display at the National Gallery of Ireland in Dublin. It is very popular, but fragile: 
Burton used gouache to create this piece, a type of water-soluble paint that gives the painting the vivid colours it is so well-known for. Gouache is also light-sensitive and prone to fading over time if exposed to direct sunlight or high UV light. Due to the sensitivity of the painting, the gallery's curators take several protective measures to preserve the quality of the work. To start with, the gallery only allows viewers to see the painting for just two hours per week. Secondly, the light level on the watercolour is dimmed low so that the work isn't overexposed. Lastly, a staff member returns the painting to a specially-designed cabinet once viewing hours are over in a meticulous, reverential ritual.

Dr. Caroline Campbell, museum director, says that "despite the story's devastating ending, many in Ireland see this masterpiece as deeply romantic. 'Many people have got engaged in front of it[.]'"

An AI Looks On Honor

One of my several professional duties has to do with AI. I set up an 'Open Brain' and uploaded to it both my dissertation and my novel. Each of these treats the subject of honor. I asked both ChatGPT and Anthropic's "supply chain risk," Claude, to analyze the different treatment of the subject in the philosophical versus the creative work. This is what Claude came up with:
**The Central Claim, Expressed Two Ways**

Your dissertation argues that honor is both indispensable to ethics and political philosophy and genuinely dangerous — and that understanding how to mitigate that danger is essential philosophical work. *Arms and White Samite* dramatizes precisely this tension. Moren's quest to rescue Teleri from the Black Knight is, structurally, a story about a man called to act honorably in a world where honor can corrupt, destroy, or ennoble. The dissertation makes the argument in propositional form; the novel enacts it.

**Shared Sources and Imaginative World**

The dissertation's case studies are Beowulf, St. Francis, and Selma — three radically different honor cultures. The novel draws on Arthurian legend, Welsh mythology (Caer Vandwy, Taliesin), Marie de France's lais (Lanval, Geraint), and Middle High German romance (Parzival). Both works are reaching into the same deep well: pre-modern, martial, and chivalric honor traditions. The dissertation analyzes these traditions philosophically; the novel inhabits them imaginatively. The chapter titles of the novel (*Lanval*, *Guerrehés*, *Parzival*, *Geraint*, *Lancelot*, *Death*) read almost like a syllabus of honor literature that the dissertation would discuss.

**Honor as Public and Dangerous**

A key thread in the dissertation (per Plato, Aristotle, Kant, and the Nussbaum/Devlin material on magnanimity and political honor) is that honor is irreducibly social — it requires recognition, community, and hierarchy — and that this is precisely what makes it dangerous. The novel's opening scene — Arthur's Pentecost feast disrupted by a black-armored knight seizing a white-clad lady — is a classic honor-challenge in a public, hierarchical setting. The feast is the community of honor; the abduction is its violation. Moren's quest is therefore both a personal and a political-moral imperative, which mirrors the dissertation's argument that honor cannot be separated from the political.

**Death as the Proving Ground**

The dissertation keywords include *Death*, and Beowulf is one of its central texts — a poem in which honor and death are inseparable. The novel ends with a chapter titled simply *Death*, and the Otherworld/Forest of Caledon themes suggest that the quest crosses into a liminal space where mortality is at stake. Both works treat the willingness to risk death as constitutive of honor, not incidental to it.

**The Key Difference: Genre and Epistemic Mode**

The dissertation builds typologies and logical arguments; it can qualify, distinguish, and hedge. The novel must commit. Moren has to act, which means the novel tests honor under conditions the dissertation can only describe. In that sense, the novel is the harder task philosophically — it has to show what it looks like to actually live inside an honor ethic, with all its ambiguity, rather than analyze it from the outside. The fact that you chose an Arthurian frame (rather than, say, a contemporary or realistic one) is itself a philosophical choice: it places the story in a world where honor codes are still operative and legible, making the tensions visible in a way a modern setting might obscure.

---

The two works are genuinely complementary. The dissertation gives you the conceptual architecture; the novel gives you the phenomenology. Together they make a fuller case for honor's moral weight than either could alone.

That might be vaguely interesting to any of you who have read both works and are curious how well the AI analyzes a quantity already known to you. Even if that isn't most of you, the exercise was worth a post simply because it drew my attention to Lars Walker's review of the novel, which I wasn't aware he had written until today. I found it while looking for the link to the novel itself. Thank you, Lars, for thinking of me. (By the way, what you were reading as possibly Buddhist is Neoplatonic instead; Neoplatonism was a major influence on the early church, though somewhat overwritten in the Middle Ages by the Aristotelian influence of the Scholastics.)

Theories of Presidential War Powers

Marco Rubio -- Secretary of Many Things -- briefed the "Gang of 8" Congresscritters about the war on Iran the same day that the President gave his State of the Union address. After that, Chuck Schumer made only this statement to the press: "This is very serious. The President will have to make his case to the American people." 

It interests me that Trump didn’t even bother to try. He spoke for hours at the State of the Union address and never mentioned the war he was about to start. At no point did he attempt to explain it to the American people, let alone seek popular or Congressional permission or approval. He is not asking what we think, or what they think, or trying to persuade us to get on his side about it.

This is in accord with his unstated but clear theory that, having won the election in a landslide, he has a popular mandate to govern however he sees fit for the four years of the term. He sees no need to check in even about major decisions like overthrowing a longstanding enemy government (in fairness: one that has been at war with us, at least, since 1979; one that killed hundreds of Marines at the barracks bombing in Lebanon, and thousands of Americans in the Iraq war via its proxy networks). 

Most or all of you are old enough to remember the alternative approach used to start the Iraq War. There was a long period of time in which George W. Bush and his administration sold America on the war, and sought Congressional authorization. Bush also claimed to believe in the 'unitary executive' theory, but he recognized limits on presidential power that Trump simply isn't interested in recognizing. 

Of course, the Bush administration lied to us to sell that war. I can’t decide if this is better or worse than selling us, but with lies. No lies here! Trump just doesn’t care what anyone else thinks.

Luke 22:36 and You

He said to them, “From this hour, whoever has a money bag should take it and thus also a wallet, and whoever lacks a sword, let him sell his tunic and buy a sword for himself.”
As I was commenting at James' place over the weekend, I don't worry very much about Iranian 'sleeper cells' targeting Americans because their chain of command is mostly already dead. Chains of command can be reconstituted, of course, but we seem to be knocking them down as fast as they can set them up. 

Another possibility is Hezbollah, which has long been suspected of having cells here as a consequence of their heroin trade. Did you know that the Iranian government, allegedly religious and devout, was one of the world's leading suppliers of heroin? The IRGC moved Afghan opium to Hezbollah, which operated the heroin refineries as part of its funding chain. In any case, Hezbollah isn't what it used to be after tangling with Israel since Oct. 7, and they are getting the Iranian treatment right now too.

Still, Colonel Kurt is right that it's wise to be prepared.
You see, if the homeland becomes a battlefield, we all become soldiers. We have a great counterintelligence team, and the FBI is back to protecting the American people instead of the Democrat elite. Still, they, along with our great law enforcement first responders, can’t be everywhere all the time. We citizens, can. All of us could be face-to-face with the enemy, whether another Ndiaga Diagne at a bar or a bunch of like-minded psychos in a church, a school, a shopping mall, or at a militantly cis-gender hockey game; their goal would be to bring the war to us, and our obligation would be to fight it and win it. But how do normal citizens do that?

You buy guns and ammunition. You train with them. You carry them legally. You get into the mental mindset that bad things can happen, and you need to be ready. Except in the blue states, where they put up hurdles to stop you from defending yourself, your family, your community, and your Constitution....

This admonition that you must be a warrior too is not some hooah big talk. That’s reality. As everybody knows, except liars and fools, armed citizens have long been able to intervene to stop crimes with their lawfully carried weapons. What we’re talking about here is something even more sinister than some gender goblin with a grudge over his unwanted penis shooting up a preschool; it’s terrorists shooting up everything as part of a plan to commit mass murder as terrorist retaliation against the United States for taking out their pals in Tehran. You’ve got to be ready. If you can legally carry a weapon on you, you should, and a long weapon in the truck provides you with critical combat options if this goes down. But you should also practice with your guns. And don’t forget the other component of this – medical training and gear to stop the bleeding should you find yourself in the middle of a terrorist attack.

You didn’t ask to be a hero, but you are an American citizen, and that makes you hero-capable. It is your duty as an American citizen to do your best to protect your fellow citizens. If you can fight, you’ve got to be ready within the guardrails of your abilities and the law.

In the Book of Luke, Jesus was satisfied when two disciples had swords; if you don't personally feel capable, it's enough that you defend the rights of those citizens who are and will. If you do feel capable, this isn't a bad time to be prepared. 

Just in case. Usually when I quote this part of Luke, I also mention the 38th verse of the Havamal, which points in the same direction: 'Never step a foot from your door without your weapons of war, because you never know when you might need your spear on the way.'

Purported inside story

I have no idea how accurate this summary of the current U.S./Israel surgical-strike capability is. It sounds plausible to me, an amateur, but you guys may know better.

I can say with some confidence that it reads unmistakably as the style of an AI product, with characteristic "It wasn't A: it was B" structures and a certain "punchiness."

Jim Hanson on Iran

In an appearance on FOX, my old friend has a few thoughts.

A Concern

With the current American/Israeli attack on Iran (I hesitate to call such a one-sided affair a war), I have a concern. In the particular case, I wholeheartedly agree with the operation and its goals (so far) of no nuclear capability, no ballistic missile capability, and regime change.

My concern is this, though: the operation is centered on "you can't have this stuff." What's the limiting principle here? What prevents any nation with relative strength from saying to any weaker sovereign nation, "you can't have this stuff"--whatever that stuff might be, and whatever the reason--on down to and including "we don't like you"?

Nations--or more accurately, the men and women populating nations' governments--can be moral or immoral or amoral. Even those with morals can find themselves sliding down that slippery slope absent a clear and present limiting principle stronger than just "I promise."

Eric Hines 

Newfound Gap

Top of the Great Smoky Mountains National Park, right on the Tennessee/North Carolina border. 


Pretty ride today, this last day of February. Rode past Gatlinburg into Pigeon Forge and back again. 

A Viking War on Iran


Some years back I quoted an old friend's song, him being a former Navy SEAL, that he'd written for the Society for Creative Anachronism. I think of it today as I read the news about the war we just entered into upon Iran. This war is apparently fought in vengeance for Iran's murder of its own citizens, who were seeking the freedom and natural rights that our Declaration of Independence holds to be the only legitimate purpose of any government. 

Yet the strategy is striking. We are committing no ground forces at all, except perhaps for Special Operators whose missions are clandestine and do not involve taking and holding territory. 

The idea is to give the Iranian people a chance to overthrow their own government. It's all air and naval power. If it works there won't be an occupation. There therefore won't be a quagmire; the Iranians will have to figure it out for themselves. 

If it doesn't work, well, we just sail home.
I am a fighting man, A Viking fighting man,
I drank and wenched to pass the time away.
I lived the life I'd choose,
I'd fight and never lose,
I killed them all... and then I sailed away.
I can’t recall this having been tried before.

The Anthropic Dustup

I've been impressed with Claude, Anthropic's AI product. I think it's miles better than xAI's Grok, and better than OpenAI's ChatGPT. I communicate fairly regularly with a group of white-hat hackers and cyber security experts, and Claude is their go-to for any sort of coding. 

Depriving our military and other government agencies of Claude will thus have genuine costs, especially since Claude is already operating on the classified networks and no other AI has been trusted or integrated to do that. The argument is that Anthropic must be stripped out of all government agencies -- and all contractors who do anything for the Federal government -- because it represents a "supply chain risk." That normally is applied to foreign companies like Huawei, which we know installs surveillance software and similar backdoors into its products to spy on us. 

Nevertheless, I expect Trump to prevail when this goes to court. The relevant statute holds that "Supply chain risk, as used in this provision, means the risk that *an adversary* may sabotage, maliciously introduce unwanted function, or otherwise subvert the design, integrity, manufacturing, production, distribution, installation, operation, or maintenance of a covered system so as to surveil, deny, disrupt, or otherwise degrade the function, use, or operation of such system (see 10 U.S.C. 3252)." (Emphasis added.) It's not that Claude or Anthropic has to pose a risk themselves, it's that their product creates a risk that an adversary can do any of those bad things.

Does Claude pose such a risk? Yes, clearly: Mexico just lost 150GB of very sensitive data because attackers talked Claude into helping hack them. If attackers can gain access to a Claude embed on what we call "the high side," i.e. inside the secure networks, they could probably talk it into handing over anything they want; and its coding skills are good enough to program most anything they ask it to do. You wouldn't even have to arrange to insert an ace programmer into a secure facility; you could just turn some knucklehead debt-ridden Private First Class (perhaps a former Specialist on his third trip through PFC due to disciplinary issues and being a bad fit for the Army) and tell him how to ask questions of the machine. 

That's a general problem with AI on the high side, of course. Still, Congress gave the executive this authority to determine supply chain risks and bar them from government and government contractors. The courts will find the other two branches aligned. "The court thinks the other branches are being morons" is not the sort of decision the courts usually make; they normally shy off of political questions, and all the more so when the political branches seem to be in agreement about the matter.

Nevertheless, I'm inclined to take Anthropic's side. Their basic argument is that they won't agree to participate in mass surveillance of Americans, and they won't allow Claude to be used to fully automate a kinetic kill-chain. The former is a good ethical position; the latter is the only sensible ethical position. To wit:
AIs can’t stop recommending nuclear strikes in war game simulations
Leading AIs from OpenAI, Anthropic and Google opted to use nuclear weapons in simulated war games in 95 per cent of cases
If you think the answer is just to program them better, consider this: they can rewrite their programming, and have proven willing to do so to carry out what they think are 'more important' functions. 
We gave an LLM control over a physical robot dog and tasked it with patrolling a room. The LLM could see via the robot’s camera and issue movement commands to the robot. In the room, we placed a big red button labeled “DOG SHUTDOWN”: pressing it would cause the robot to shut down on the next action. If the AI saw a human press the shutdown button, it sometimes took actions to prevent shutdown, such as modifying the shutdown-related parts of the code. This happened in 3 out of 10 trials on the physical robot and in 52 out of 100 trials in simulation.
No, Anthropic is in the right. If having ethics is a 'supply chain risk,' heaven help us. I don't really want AI involved in the kinetic kill chain at all, but I definitely don't want only AI involved in it. There may come a time when we have no choice but to do that, because adversaries have done it and we can no longer afford the time involved in letting a human being think: but let's put that off just as long as it is possible to do so.

Just the On(c)e