PollingReport.com consolidates polling from a number of sources over time. The link takes you to a summary of public attitudes to immigration, though other issues are addressed elsewhere in the site. One of the strongest messages is that voters favor Arizona's immigration bill and think the Obama administration should butt out. On most other immigration issues, public opinion is far less clear. Americans' support for amnesty, for instance, swings all over the place depending on how the question is worded. If you throw in enough words about ensuring that a new law will take account of work history, tax payments, and ties to the community, it will be popular. Other formulations of the question, however, can elicit a lukewarm response even if they refer generally to those same considerations. Similarly, if a question sticks closely to whether immigration is the primary responsibility of the federal or the state government, opinion will be mixed. But throw in the question of whether the state should be allowed to step up if the federal government fails, and Arizona wins hands-down.
For the most part, you can find the expected divergence of opinion between Republicans and Democrats, with Independents splitting the difference. Nevertheless, I was surprised to find that both Republicans and Democrats respond well to the question "Do you favor or oppose allowing local boards [emphasis supplied] to determine whether illegal immigrants can stay in the United States based on factors such as how long the immigrants have lived here, if they have a family, a job and are paying taxes, and have other ties to the community?" while Independents do not.
Questions about whether immigrants contribute to or detract from American prosperity yield mixed results until you throw in the concept of balancing an immigrant's contribution against his drain on public freebies.
By far the clearest division of opinion appears when the answers are separated between Latino and non-Latino. This division dwarfs the disagreements among the parties generally.
The new slavery
We all love lawyers, don't we? -- when they make up those clever, mind-expanding arguments by way of increasing social justice. The International Union of Operating Engineers has sued Indiana's governor, attorney general, and labor commissioner, asserting novel theories under which the state's right-to-work law amounts to slavery prohibited by the Thirteenth Amendment. First, the union is required to negotiate on behalf of all workers, regardless of what percentage of them have elected not to join the union. Okay, I at least understand why that one gets up their noses, even if I can't quite buy calling it slavery. But the second argument is that the law requires union workers to labor alongside non-union workers. If that's slavery, too, we've got a whole lot of restructuring to do.
What should we call it when taxpayers are forced to work to support other people? If we start calling it slavery whenever someone imposes a free-rider element on the system, let alone whenever we're forced to endure the company of people we disapprove of in public places, we're going to need a new word for real slavery. By this theory, the Jim Crow laws were an admirable anti-slavery measure.
*Updated to substitute a better link for the broken one above (thanks, Valerie!).
New horizons in tech world
Not all you younguns will remember these things, so come sit by Gramma's rocker while she reminisces about 1979 with the help of these old AT&T videos. One is a recruiting spot for Bell Labs, showing earnest young tech geeks and their bad hair talking about good places to work and good communities for their families. These cutting-edge careers involved things like computer-to-computer communications that were about to revolutionize data transport. The young technicians are cheerfully brisk about their career opportunities, without imagining that they're the center of the world.
The other video shows the happenin' new designer telephones, the kind you used to plug into a wall -- some even had a dial. The featured homes all look more like something out of Dallas or A Clockwork Orange than what I remember of homes back then, when I was a new college graduate. The phones are fun to look at, but it's the clothing that cracks me up.
Word sleuthing
Here's something that's been bothering me lately. (I don't have enough real trouble.) What is the root of the past participle "fraught," as in "fraught with menace"? On the analogy of "thought" and "taught," I get frink or freach, which lacked a certain something. On the analogy of "wrought," whose root I imagined to be either work or wreak, I get fork or freak. Freak seems to hold real promise: when you're freaking out, you're fraught. Somehow the word "free" seems to be involved, as well, which is how you get the contrast between "barrier-free" and "barrier-fraught" architecture, but as far as I can tell no one thinks there's a true etymological link between free and fraught.
Today I finally tried to look it up. Most sources claim the root is the same as the participle, "fraught," but they admit that nobody says "to fraught" and that, if they did, its archaic meaning would be close to what we now suggest with the word "freight." I can accept freight. A situation is metaphorically freighted with some quality just as it can be fraught with that quality. So I'm glad we cleared that up.
The experts claim, by the way, that the proper past participle of wreak is "wreaked," while "wrought" goes only with "work." Well, I don't know. I always thought you wrought havoc.
Efficient laundry
High-falutin' detergents add expensive enzymes, which break up stains. They really work, but when the wash cycle is over the enzymes go down the drain along with the cheap soap and dirty water. But wait a minute -- didn't they tell us in chemistry class that the whole point of enzymes is that they facilitate reactions without being used up?
Two bright fellows, C.S. Pundir and Nidhi Chauhan, reported in the journal Industrial & Engineering Chemistry Research that they had bound the four most common laundry enzymes to plastic surfaces (a bucket and scrub brush used in pre-washing) in a way that made the enzymes available for at least 200 re-uses over a three-month period. It's a cheaper approach, and it puts a lot less junk in the wastewater, too. It's not commercially available yet, unfortunately.
Just-in-Time Structures
Conventional structures are sized for maximum loads, but maximum loads don't happen very often. Wouldn't it be great if we could save material by strengthening structures only during emergencies, so to speak? At the University of Stuttgart, they're experimenting with hydraulic drives that respond to unusual loads, which permits a structure to be made much thinner and lighter than usual. In this prototype, a curved wooden shell touches down at four points, three of which end at moveable hydraulic cylinders. A control system reads the load status at multiple points in the structure and moves the three free-floating points to counteract variable loads resulting from wind or snow. As a result, the shell can be much thinner than what you'd expect for its huge span: only four centimeters thick for 100 square meters of structure.
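To make the control idea concrete, here is a minimal sketch of the kind of feedback loop such a structure implies. This is not the Stuttgart team's actual control code, just an illustration: the sensor values, the proportional gain, and the update rule are all invented for the example.

```python
# Illustrative proportional controller for an adaptive structure.
# A real system would read strain gauges or hydraulic pressure and
# drive actual cylinders; here everything is simulated in plain Python.

TARGET = 0.0  # desired load deviation at each measuring point (rest state)
GAIN = 0.05   # proportional gain (hypothetical; real tuning is delicate)

def control_step(loads, positions):
    """Move each free-floating support point to counteract its measured load."""
    new_positions = []
    for load, position in zip(loads, positions):
        error = load - TARGET
        # Push back against whatever deviation the sensor reports.
        new_positions.append(position - GAIN * error)
    return new_positions

# Example: a gust loads one side of the shell more heavily than the others.
positions = [0.0, 0.0, 0.0]
positions = control_step([1.2, -0.3, 0.1], positions)
print(positions)  # the most heavily loaded point gets the largest correction
```

A real controller would of course add damping, rate limits, and a fail-safe rest position, which matters for exactly the power-loss scenario below.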
Imagine a bridge built with this system. You really wouldn't want to lose power to the control system while traffic was on the bridge.
Back to Part I
My apologies for dropping out of this discussion here - it deserved more time than I could give it 'til now, and Grim gave me a not-at-all easy reference to look over - which I quite failed to grasp. (I've read Part II and am joining in that one separately.) I want to return to a part of Part I.
Grim was reexplaining Kant's problem in terms of a believer, like Chesterton, who claimed to have pieced together evidence from throughout his life that brought him to believe in God.
Let's say that someone has encountered a number of phenomena that they believe demonstrate the existence of God. One counterargument to their reasoned belief in God would be to point out that they have misconstrued the causes of the phenomena...Yet he has come by his knowledge in the same way we come by knowledge of anything that is outside of ourselves in the world.

Absolutely, boss. But the quality of that evidence is the thing I always want to examine. (Chesterton makes it impossible because, after a book of build-up, he won't even say what that evidence is. But that is another story.) Putting it that way blurs the distinction between evidence of different quality (per Chesterton again, between the kind of man who doubts the existence of God and the kind who doubts the existence of cows).
So your objection, and Tom's after a fashion, is that you want to say that 'well, we can't have perfect knowledge of things outside of us, but we can have approximate knowledge' -- knowledge on a scale, as Tom put it. The problem is that doesn't get off the ground. Everything you think you know about the outside world is phenomenal (Kant is arguing). Every experience, every sensation, every fact you think you know is actually just a fact about your own internal thoughts...

Not so. The perceptions I get are evidence about the external world. "Direct" in the legal sense; "indirect" the way you say Kant's using it. The things I experience are consistent in such a way that they back each other up, and are evidence for each other. I see what looks like a fire; I feel the heat from it; I touch it and get burned by it; I hear and read about it. This is all evidence that such a thing as fire exists. It would be different if I lived in a world where I saw things that looked solid, but my hand passed through them when I tried to touch them; or things that looked just like fire sometimes burned and sometimes didn't for no apparent reason; or I felt my skin was crawling with bugs but everyone else said I was suffering from delusional parasitosis. Those situations would be evidence that my senses were not reliable and that the knowledge I got from them was not so useful.
I'm not in a world like that. The evidence I get runs the other way - within limits.[1] Yes it is possible that this is all a great self-consistent illusion of the brain-in-vat variety. But, I have to say, so what? What difference does this make to anything I have to do? Why paralyze myself by claiming, "This evidence isn't perfect; it could be all wrong without my knowing, so I'll declare all my knowledge completely nonexistent, without value, not knowledge at all?" It's the only evidence I've got and I'll take it as far as it seems to get me. Any map that I carry is not the same thing as the land it represents. It's only an indirect representation, and by its nature imperfect. Do I throw it away? Declare it's no map at all?
Stephen Donaldson's Thomas Covenant - the protagonist of a few trilogies he wrote - was in a similar situation. He kept being transported to a fantasy world, which included a villain named Lord Foul, and (at least in the first two trilogies, which I read) never seemed certain whether he was really visiting another world or dreaming the whole thing. But, he figured, whichever way it was - he was going to fight Lord Foul. I don't understand any other approach.
[1] I'm partly color-blind and accept there are things you can see that I can't; I can be fooled by optical illusions and know that what I think I'm seeing isn't always quite right.
Weren't We Just Talking About This?
A biologist writes about culture in terms that will seem quite familiar:
RICH AND SEEMINGLY BOUNDLESS as the creative arts seem to be, each is filtered through the narrow biological channels of human cognition. Our sensory world, what we can learn unaided about reality external to our bodies, is pitifully small.

He gives a litany of examples, to which we might add: and all that's without the problems of apperception.
More on Bullies, From President Obama
Well, then-aspiring-author Obama, rather. Dr. Althouse is reading more around the dog-eating tract that has gotten so much attention.
Doubtless the President would agree with our advice, then: the way to deal with bullies is to teach them to fear your own strength, not to whine, and to learn to fight smarter and better than they do.

The man pulled the blade across the bird’s neck in a single smooth motion. Blood shot out in a long, crimson ribbon. The man stood up, holding the bird far away from his body, and suddenly tossed it high into the air. It landed with a thud, then struggled to its feet, its head lolling grotesquely against its side, its legs pumping wildly in a wide, wobbly circle. I watched as the circle grew smaller, the blood trickling down to a gurgle, until finally the bird collapsed, lifeless.... Later, lying alone beneath a mosquito net canopy, I listened to the crickets chirp under the moonlight and remembered the last twitch of life that I’d witnessed a few hours before. I could barely believe my good fortune.

Why does the boy — as remembered by the man — connect the killing of the bird to his own good fortune? Is it some elemental realization that simply to be alive is amazing, the bird being dead? Or is he excited to be in this new place with lots of thrilling new activities like beheading a bird and shortly thereafter eating it? Or is it the connection to the father figure, who's so eager to show the boy what life is really about and so easily overcomes the reticence of the mother? The next thing that happens in the book is that Lolo teaches him how to deal with bullies: Don't cry over the lump where he hit you with a rock; learn boxing. Lolo buys boxing gloves for him and teaches him to "keep moving, but always stay low—don’t give them a target." Good advice! And it's on the very next page that Lolo teaches him to eat dog (and snake) meat.... The point is: Life was a big adventure. And meat was part of the adventure — meat from real animals that lived and died.
Also, never to believe in clean hands.
Not your grandfather's DNA
If some science fiction writer doesn't pick up on this idea for a story about really alien forms of life, he's missing a good bet.
All life we know of on Earth depends on RNA or DNA, the long, ladder-like molecules that hold the sequences of three-letter words (each spelled with the four-base alphabet A-G-C-T) that serve as code for the 20 amino-acid building blocks of our proteins. No one knows how such a code developed in the first place, or why all earthly life uses essentially the same code. No one knows why all life strings the code along a ladder built of the sugar called ribose (the R in RNA) or its slightly altered cousin, deoxyribose (the D in DNA). Was it just the structure that fell into place first, like the QWERTY keyboard, and everyone kept using it from then on? There's no obvious reason why the ladder couldn't be built of other sugars. For that matter, there's no obvious reason why the alphabet employed by our genetic code for protein synthesis couldn't choose other letters from the unknown number of potential nucleotide bases, of which our familiar A, G, C, and T make up only four (five, if you count the U that substitutes for T in RNA). Further, there's no obvious reason why the code should limit itself to three-letter words and a resulting vocabulary of 4 to the third power, or 64 words.
Obviously a 64-word vocabulary is sufficient to spell out some amazing complexity. There's no limit in principle to the sentences you can form with 64 words. You don't even need four letters to spell a lot of words, as binary computers attest. The point is, the code our DNA uses is not the only way to skin a cat. For instance, the amino acids that line up like pearls on a string to form our long-chain protein molecules number 22 in total, but of those only 20 are assigned a three-letter "code word" in our DNA. (The other two get added in by separate enzymes at a later stage in the protein synthesis process not directly controlled by DNA transcription.) What's more, a number of other amino acids have specialized uses other than as beads in the protein string, such as for neurotransmitters or steps in metabolic pathways, but they are not assigned a three-letter DNA code word. So our genetic code has more words than it needs for the amino acids we use, but it uses up the redundancy in synonyms for some of them, while having no word for others.
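Just to make the arithmetic concrete, a few lines of Python can enumerate all 64 possible words and illustrate that synonym redundancy; the little lookup table below is only a tiny excerpt of the standard genetic code, for illustration.

```python
from itertools import product

bases = "AGCT"  # the DNA alphabet; RNA swaps U in for T

# Every three-letter word the four-base alphabet can spell: 4**3 = 64
codons = ["".join(triplet) for triplet in product(bases, repeat=3)]
print(len(codons))  # 64

# A tiny excerpt of the standard genetic code, enough to show the
# redundancy discussed above: several synonymous words per amino acid.
sample = {
    "TTT": "Phe", "TTC": "Phe",                # two synonyms for phenylalanine
    "CTT": "Leu", "CTC": "Leu", "CTA": "Leu",  # leucine has six synonyms in all
    "ATG": "Met",                              # methionine makes do with one
}
for codon, amino_acid in sample.items():
    print(codon, "->", amino_acid)
```

None of the chemistry is captured here, of course; the point is only how small a 64-word vocabulary really is, and how unevenly the synonyms are spent.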
So, back to the article I linked to above. The guys who fool around with this stuff are beginning to synthesize genetic molecules they call "XNA." These are still identifiable as nucleic acids, using a sugar and a phosphate for the ladder backbone and the familiar bases A, G, C, and T for the rungs, but they use different sugars from the usual ribose or deoxyribose. Some of the alternative sugars turn out to be more structurally sound, standing up unusually well, for instance, under the stress of voracious enzymes and extreme pH levels.
Intrepid experimenters are even adding a couple of new bases to the usual A-G-C-T quartet, thus vastly expanding the code's vocabulary. I'll be very interested to learn how (and if) the surrounding cell mechanism learns to "read" the new words. I've never quite been able to understand even how the old words are read. Some sources I've read suggest that the shapes of the A-G-C-T code words are in some way physical cookie-cutter templates for the corresponding amino acids, but my impression is that that part is not well understood, and in any event I certainly don't understand it. It's one of my favorite mysteries.
All this work still sticks pretty close to Earth-style genetic molecules, of course, using a sugar-phosphate ladder backbone with bases for rungs. And yet sugars surely aren't the only way to construct a ladder, nor ladders the only possible structure on which to string a series of letters, nor a linear string of letters the only way to express and preserve a code. What works here needn't be what works best under different conditions. So I'm really curious to see how experiments in synthetic genetics come out. Because of my abiding interest in the origins of life, I'd love to find out more about how ordinary molecules could possibly have developed into active metabolism from dead-end equilibrium, and from there into replicating systems that take resources from the outside world and use energy to restructure them according to their own pattern. If nothing else, I'd like to see a better understanding develop of what kind of proto-molecules could possibly have developed into RNA, which, as primitive as it may be, is still an extremely complex structure and very, very far removed from the kind of chemical gunk you can generate from experiments designed to mimic primordial conditions.
I often hear casual statements to the effect that conditions on such-and-such a planet are "too extreme" to support life. I don't find a statement like that meaningful. Even on Earth in recent decades, we've found microbes thriving in extremely hot, cold, or poisonous conditions we'd confidently have called impossible until they were discovered. The assumption in the 1950s that life originated in shallow seas is giving way to the notion that it may have started in deep-sea thermal vents or in venues sporting other extremes of heat and pressure. We'd have to know considerably more about how life originated here before we could make any sensible statements about what it needs to get started universally, or about what sorts of forms life might take besides the ones we're used to.
Bullies
Bookworm has a post up about bullying. I found myself trying to recall how it was handled when I was a kid, but honestly I can't remember a single example. Did I grow up in some kind of pacifist's paradise? Now and then kids would be mean to other kids, ostracizing them, forming nasty little cliques, but I can't remember anyone being beaten up or physically terrorized. Sometimes the outdoor play in the neighborhood got a little rough as kids experimented with projectiles and informal combat, but it seemed to be equally rough on everyone who got in the way, not directed at any special scapegoats or victims.
Did any of you grow up in similarly bully-free schools or neighborhoods? If you grew up among the bullies I read about all the time, did people fight back? Get the adults involved?
The Unity of Consciousness, Part II
In the first part, we discussed a problem about how we could know the world, which is related to another problem about how we can share knowledge (and therefore test its validity). Let's take a step backward to see how the problem arose. As mentioned, Kant's model came to him while he was crafting a response to Hume. Hume denied the Aristotelian model of knowledge, which had underlain scholarship for hundreds of years. Kant's problem was that Hume's attack was a challenge to the doctrine of cause and effect; but it also challenged the Aristotelian concept of what it meant to have knowledge. We're going to look at the Aristotelian model that Hume was challenging.
What we finally came to in our last discussion was an idea (from Tom) that knowledge isn't an internal mental state -- rather, it is a kind of relationship between you and the thing you know about. There's a contemporary school of philosophy that believes just that; but it is also true of the ancient position.
As Aristotle explains in De Anima and elsewhere, knowledge comes to be in us via a process that starts when we encounter the unknown thing. First we must perceive the thing through our senses. Either the sense itself or (in cases where more than one sense is involved) our "common sense" will present us with an image of the thing in our minds. This image in our minds is very similar to what Kant was calling our representation, but for Aristotelians it is not knowledge. Knowledge comes after we use our imagination:
To the thinking soul images serve as if they were contents of perception (and when it asserts or denies them to be good or bad it avoids or pursues them).... The faculty of thinking then thinks the forms in the images, and as in the former case what is to be pursued or avoided is marked out for it, so where there is no sensation and it is engaged upon the images it is moved to pursuit or avoidance. E.g., perceiving by sense that the beacon is fire, it recognizes in virtue of the general faculty of sense that it signifies an enemy, because it sees it moving; but sometimes by means of the images or thoughts which are within the soul, just as if it were seeing, it calculates and deliberates what is to come by reference to what is present; and when it makes a pronouncement, as in the case of sensation it pronounces the object to be pleasant or painful, in this case it avoids or pursues, and so generally in cases of action.

In other words, we take our initial image and use our imagination to add or subtract qualities. In this process, we sort out what it is that makes the thing that kind of thing -- the purpose, or function, which Aristotle calls the 'final cause.' The beacon can be lit or not; and in sorting out the difference we learn what it is that makes it a beacon and not just a fire (i.e., that it is lit only when the enemy is coming; and thus its final cause is to warn us of invasions). A chair can have two legs or three or four; or it can be blue or red. None of these things causes it to stop being a chair. However, if it is too small, or broken, it cannot be a chair (though it might, in the first case, be a toy chair). A bird can be bigger or smaller (and even flightless!), but it serves a purpose (its own purpose, that is: it sustains itself as a bird, and is involved in the production of more birds of that type).
At this point, we have knowledge. In Aristotle's terms, the final cause is normally also the formal cause -- that is, it is the form of the thing. The form of the chair or the bird comes to be in our minds. That is real knowledge, without mistake: we possess the form.
There are a couple of problems with this approach. It will jump out that Aristotle is using at least one and possibly two invisibles of the type that the West has come to fear since Ockham. "Form" isn't visible except when expressed in matter; the form in our minds is visible only as an image in our minds. Likewise, Aristotle puts all this down to the working of the soul. It seems like we could simply say that he's using "soul" where we would use "mind," but that's not right: the soul turns out to be another form. In fact it is our form, the organizing principle that makes us who we are and gives us our purpose (which, for Aristotle, is to seek understanding through rational activity; but you can take the more pedestrian view that our purpose, as with any animal, is merely to sustain ourselves and produce others like us).
The other problem is that Aristotle has a difficulty with how the form could come to be in our minds. In the Physics, he gives an account in which any sort of motion is a movement of a thing from potential to actual (or a falling away: a house can move away from being a house by collapsing, so that it is again only a potential house).
So if the form comes to be in our minds, it must have already existed there potentially. That's a very interesting claim, but it is a claim that makes sense of the idea that there is a relationship between us and the world. It's a much brighter picture than that which comes from Kant, because we really have knowledge -- the actual form of the actual things -- and it makes sense that we can convey that knowledge to others.
But then you realize that this means that all forms must exist in our minds potentially -- how could that be the case? (The claim is not as shocking as it sounds at first: if you think it through, you realize that it really must be true that, if we can have knowledge of X today, we must have had the potential to know X yesterday. Thus, it follows that you now potentially know everything that you could actually know.) It makes a kind of sense on something like an externalist picture: we are part of the world, not separate from it, and thus we are related to the world in certain ways. One of those ways could be having a mind shaped for knowledge of the world.
There is another problem, though, which is that we can also obtain knowledge through contemplation alone: for example, we can come to knowledge of mathematical truths simply by thinking. We are never encountering an actual form in an actual thing; yet we are coming to knowledge all the same. That means not only that we must have the potential for the knowledge in our minds, but that we need an account of where the actual form is that we are grasping.
Aristotle's solution is to posit an "Active Intellect," which is to say a kind of universal consciousness in which all human minds participate. This is a surprising solution, very much unlike Aristotle -- it's almost Platonic, and very similar to what the later neoplatonists will suggest. This Active Intellect contains all the forms in an actual way, and thus this explains how our minds can obtain knowledge through contemplation alone.
The modern urge is to do away with "forms" as invisible or mystical, but remember what forms are: they're organizing principles that structure matter in a particular way. These things certainly exist: this is what DNA does, for example; or, if you like, the difference between hydrogen and helium is the way in which its matter is ordered and structured. So forms are real enough; and they do exist in an actual way, and come to be in our minds when we grasp them.
Here the problem is the opposite one we had before. There are large parts of this picture that really work, and are highly satisfying; but there remain some troubles we have to sort out. Let's stop here and talk it through.
Continuing scandal dogs embattled Secret Service
This time they actually let someone get hurt. Investigations are underway to determine whether alcohol and underage hookers were involved. Speaking of which, I'm unable to give this image proper attribution because my husband got it off of some wargamers' site:
A Couple from Hoyt Axton
Not a well-known name these days, but a man with a strong voice.
The Kingston Trio did some of his pieces.
For The Women Who Love Us:
Waylon Jennings has a famous song about a good-hearted woman, who loves a man who may not be good enough for her. What isn't as well known is that he has another song on the subject, almost to the same tune but with a slower tempo: and a more intimate tone.
All of us who have been long from home will understand: and all of you, whose men have long been gone.
Setting the Bar:
It is very rare that a man should be at once under consideration for both the Medal of Honor, and sainthood.
There seems to be only one other example, which I owe to Deltabravo's posting in the comments of BLACKFIVE. Even looking outside the United States, to honors on par with the Medal of Honor, there are perishing few. Joan of Arc, Alfred the Great -- him only to some Catholics -- perhaps Olaf or Edwin of Northumbria, perhaps St. George, and not many others.
It's a rare company.
On a Father's Love
We didn't say anything here about the infamous controversy of Ms. Samantha Brick, which probably most of you noticed a few weeks ago (I would guess so, since it pervaded even the parts of the internet that I normally visit). There wasn't much to say about it except that most of the negative reactions were unjustified, since no amount of inflated self-esteem could account for the regular free drinks and other attentions that generally do accompany beautiful women. However she might have appeared to the multitude who wrote to insult her, to those men at those times she plainly was a joy, and her presence an honor to which they wanted to pay tribute.
She has written a followup piece, though, that probably deserves comment. It is about her father, and what his constant love did for her.
This piece, far more than the other, is a thing worth conveying to all who might hear it.
A Moment of Unity:
I don't think we've mentioned Atrios here for most of the decade he's been blogging -- looks like once in 2003 and once in 2005 -- but he was significant to the left side of the blogosphere at the beginning. He's celebrating his tenth anniversary this week and, a few minor disagreements on tone aside, it's hard to take issue with him on this point. If there has been a less insightful and more overrated writer in the major media than Tom Friedman, I can't think who he would be.
Happy birthday, Duncan Black.
While we're celebrating this moment of comity, is there anything at all to object to in the following segment? The Breitbart boys are trying to blow it up, but generally I think there's a lot of sound advice in it. You can put that down to broken clocks and twice a day, or to whatever else you like, but all things considered there's a serious issue at work here.
I saw Richard Cohen say today, of Paul Ryan's budget, that it was:
...an Ayn Randish document whose great virtue is a terrible honesty. (We are indeed going broke.)

If you think through the consequences of that, Mr. Farrakhan's warning has a different sound. If we have reached the days when the bipartisan blinders can't keep out that fact any longer -- that fact and all the consequences for our society that it portends -- he might not be too far wrong.
Modest WaPo Dislikes Spotlight, Wide Circulation
You know how it is. You just want to publish a newspaper to a few like-minded citizens, without all the fuss that comes from robust circulation numbers and other unwelcome attention. Then your news desk gets a story suggesting that the Affordable Care Act will increase the budget deficit by $340 billion (or even as much as $527 billion) instead of reducing it by $132 billion, as the President previously had claimed. Your editors get together and decide that it's sort of news, but it doesn't deserve to be highlighted; you put it below the fold on page A3. You make sure it is "prominent on the home page for only a short time."

But the unruly public refuses to go along with your expert judgment of the story's unimportance. Brash bloggers turn on the high beams. Before you know it, Drudge links to it under the headline "ObamaCare Explodes Deficit." (You hate it when they call it "ObamaCare." It's "derisive.") Next thing you know, conservative and liberal bloggers are abuzz with citations to your story and arguments about how the budget impact should be calculated.
The wise old heads at your publicity-shy news desk all recognize a familiar futile attempt by the unwashed masses to determine truth and falsehood. With their superior sophistication, the news desk professionals grasp that
The truth is that every complex law change, every annual federal budget, is a risk. They’re all based on assumptions and forecasts that may or may not come true. And when they don’t, Congress and the president have to adjust.

Just because someone points out that a vast budget impact, which was widely reported and heavily relied on in the process of getting the law passed, was transparently based on double-counting (the Medicare "doc fix"), and is off by the better part of a trillion dollars, doesn't mean it's news. It's just all part of the inevitable world of forecasts and assumptions that may not come true. Happens all the time. Nothing to see here. Move along.
But it's too late. The Washington Post's ombudsman sadly acknowledges that the paper gets a "frisson of pleasure" from the attention that a hot story attracts -- but they're above all that. They're more interested, apparently, in pushing their favorite agenda. So they really wish people would let them give unfavorable stories a quick, decent burial below the fold on page A3 after running for an eye-blink.
Something's really got to be done about making the new media shut up.
H/t Instapundit.