Moral Instincts

Steven Pinker's latest is in the New York Times, and while I'm sure several of you will scoff at the idea of looking to that source for hints on morality, it's an interesting read when taken together with Joe's piece below. It treats the moral dimension in terms similar to those we have employed in debating genetic engineering.

One of the important passages takes up whether there is a rational basis for morality. He cites two external supports:

One is the prevalence of nonzero-sum games. In many arenas of life, two parties are objectively better off if they both act in a nonselfish way than if each of them acts selfishly. You and I are both better off if we share our surpluses, rescue each other’s children in danger and refrain from shooting at each other, compared with hoarding our surpluses while they rot, letting the other’s child drown while we file our nails or feuding like the Hatfields and McCoys. Granted, I might be a bit better off if I acted selfishly at your expense and you played the sucker, but the same is true for you with me, so if each of us tried for these advantages, we’d both end up worse off. Any neutral observer, and you and I if we could talk it over rationally, would have to conclude that the state we should aim for is the one in which we both are unselfish. These spreadsheet projections are not quirks of brain wiring, nor are they dictated by a supernatural power; they are in the nature of things.

The other external support for morality is a feature of rationality itself: that it cannot depend on the egocentric vantage point of the reasoner. If I appeal to you to do anything that affects me — to get off my foot, or tell me the time or not run me over with your car — then I can’t do it in a way that privileges my interests over yours (say, retaining my right to run you over with my car) if I want you to take me seriously. Unless I am Galactic Overlord, I have to state my case in a way that would force me to treat you in kind. I can’t act as if my interests are special just because I’m me and you’re not, any more than I can persuade you that the spot I am standing on is a special place in the universe just because I happen to be standing on it.

Not coincidentally, the core of this idea — the interchangeability of perspectives — keeps reappearing in history’s best-thought-through moral philosophies, including the Golden Rule (itself discovered many times); Spinoza’s Viewpoint of Eternity; the Social Contract of Hobbes, Rousseau and Locke; Kant’s Categorical Imperative; and Rawls’s Veil of Ignorance. It also underlies Peter Singer’s theory of the Expanding Circle — the optimistic proposal that our moral sense, though shaped by evolution to overvalue self, kin and clan, can propel us on a path of moral progress, as our reasoning forces us to generalize it to larger and larger circles of sentient beings.

One of the things we discussed in detail below was the peril that tampering with inherited human nature poses to the Golden Rule as a limiting rule of ethics. As we are, the Golden Rule constrains us; but it could just as easily, and just as rationally, become a license rather than a limitation if we are allowed to edit each other.

The nonzero-sum-game test for judging morality is a basis I hadn't considered. It suffers from one obvious limitation: by its nature, it confines moral judgments to utilitarian grounds. You can use these games to measure whether our sense of ethics is in accord with practical benefits: more food, say.
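To make Pinker's "spreadsheet projections" concrete, here is a minimal sketch of such a game, written as a prisoner's-dilemma-style payoff table. The numbers are invented for illustration; only the structure matters, and it also shows why the test stays on utilitarian ground: all it can compare is payoffs.

```python
# payoff[(my_move, your_move)] = (my payoff, your payoff)
# The numbers are made up; only their ordering matters.
PAYOFFS = {
    ("share", "share"): (3, 3),   # we both do well
    ("share", "hoard"): (0, 5),   # I play the sucker, you exploit me
    ("hoard", "share"): (5, 0),   # I exploit you
    ("hoard", "hoard"): (1, 1),   # we both end up worse off
}

def best_response(your_move):
    """My payoff-maximizing move, holding your move fixed."""
    return max(("share", "hoard"),
               key=lambda my_move: PAYOFFS[(my_move, your_move)][0])

# Each of us is individually tempted to hoard, whatever the other does...
assert best_response("share") == "hoard"
assert best_response("hoard") == "hoard"

# ...yet the outcome we reach by both yielding to that temptation is worse
# for both of us than mutual sharing: the nonzero-sum point Pinker makes.
assert PAYOFFS[("hoard", "hoard")] < PAYOFFS[("share", "share")]
```

Whether those payoffs capture the good is exactly what the table cannot tell us, which is the point that follows.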

A key question of ethics, however, is establishing what the good is. Aristotle asserts, I believe correctly, that the rational part of the soul is not useful here: it is the emotive part that determines what is to be desired, and the rational part is limited to reasoning about means to that end. The nonzero-sum-game method is thus only good as a test of whether a given means to the end is effective. As a result, its use as a test for ethics is quite limited.

UPDATE: The long section on what the author calls "trolleyology" demonstrates something important about the dilemma mentioned above.

The gap between people’s convictions and their justifications is also on display in the favorite new sandbox for moral psychologists, a thought experiment devised by the philosophers Philippa Foot and Judith Jarvis Thomson called the Trolley Problem. On your morning walk, you see a trolley car hurtling down the track, the conductor slumped over the controls. In the path of the trolley are five men working on the track, oblivious to the danger. You are standing at a fork in the track and can pull a lever that will divert the trolley onto a spur, saving the five men. Unfortunately, the trolley would then run over a single worker who is laboring on the spur. Is it permissible to throw the switch, killing one man to save five? Almost everyone says “yes.”

Consider now a different scene. You are on a bridge overlooking the tracks and have spotted the runaway trolley bearing down on the five workers. Now the only way to stop the trolley is to throw a heavy object in its path. And the only heavy object within reach is a fat man standing next to you. Should you throw the man off the bridge?

...

When people pondered the dilemmas that required killing someone with their bare hands, several networks in their brains lighted up. One, which included the medial (inward-facing) parts of the frontal lobes, has been implicated in emotions about other people. A second, the dorsolateral (upper and outer-facing) surface of the frontal lobes, has been implicated in ongoing mental computation (including nonmoral reasoning, like deciding whether to get somewhere by plane or train). And a third region, the anterior cingulate cortex (an evolutionarily ancient strip lying at the base of the inner surface of each cerebral hemisphere), registers a conflict between an urge coming from one part of the brain and an advisory coming from another.

But when the people were pondering a hands-off dilemma, like switching the trolley onto the spur with the single worker, the brain reacted differently: only the area involved in rational calculation stood out. Other studies have shown that neurological patients who have blunted emotions because of damage to the frontal lobes become utilitarians: they think it makes perfect sense to throw the fat man off the bridge. Together, the findings corroborate Greene’s theory that our nonutilitarian intuitions come from the victory of an emotional impulse over a cost-benefit analysis.

That's fine, and useful. But the important questions are these: Is it bad that we have nonutilitarian ethical intuitions arising from irrational emotions? We may find ourselves with the power to change the conditions in which those emotions rule. Should we? Would that be an improvement?

We may also find ourselves with the power to change the emotion that rules in these cases, so that emotion still wins, but not in the way it currently does. Should we? Why? What defensible answer is there to the question, "Why?"
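To see what is at stake in those questions, here is a toy rendering of Greene's dual-process picture as Pinker summarizes it. Nothing below comes from Greene or Pinker; the "emotional veto" and its trigger are invented for illustration. The point is only that once the model is made explicit, "editing" the emotional side is a one-line change, and the verdict changes with it.

```python
# A toy dual-process judge: a utilitarian tally competes with an
# "emotional veto" that fires on hands-on, personal harm. Invented for
# illustration; not Greene's or Pinker's formulation.

def permissible(lives_saved, lives_lost, personal_harm, emotional_veto=True):
    """Judge an action permissible under this toy model."""
    if personal_harm and emotional_veto:
        return False                     # the emotional impulse wins
    return lives_saved > lives_lost      # otherwise the cost-benefit tally decides

# Switching the trolley onto the spur: impersonal, so 5 > 1 carries the day.
print(permissible(5, 1, personal_harm=False))                         # True
# Throwing the man off the bridge: the veto overrides the same arithmetic.
print(permissible(5, 1, personal_harm=True))                          # False
# "Editing" the veto away, the change asked about above, flips the verdict.
print(permissible(5, 1, personal_harm=True, emotional_veto=False))    # True
```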

Perilous matters, these.
