The apple didn't hit Newton on the head and inspire him with the sudden insight that there's such a thing as gravity. People had been noodling over the obvious tendency of things to fall just about forever. For a long time, their views on the subject took the form of theories about how objects might be animated, such as by an innate desire to be reunited with the Earth. During the Enlightenment, as creatures now known as "scientists" began to emerge, the focus left the supposed interior experience of the objects and trained itself on finding universal, predictable patterns in the movement.
So what was really going through Newton's mind when the apple fell from the tree? Before Newton was well launched on his extraordinary career, natural philosophers already had adopted the "inertia" model of movement; that is to say, objects tend to keep moving in a straight line unless slowed or diverted by an outside force. But this was puzzling in view of the evident circular/elliptical movement of heavenly bodies. There was a strong tendency to find circles "perfect" and "beautiful," resulting in a popular view that lowly straight-line movements characterized earthly bodies while heavenly bodies moved in stately and superior circles. Were there separate laws of motion on Earth and in Heaven?
Newton's brilliance lay in a unifying theme that would explain why an apple appears to fall straight down while the Moon describes a circular orbit around the Earth.
We have now finally arrived at that idyllic summer afternoon in 1666, as the young Isaac Newton, home from university at Woolsthorpe, near Grantham, to avoid the plague, lay in his mother's garden contemplating the universe, as one does, and chanced to see an apple falling from a tree. Newton didn't ask why it fell, but set off on a much more interesting, complicated, and fruitful line of speculation, which went something like this. If Descartes is right with his theory of inertia, . . . then there must be some force pulling the Moon down towards the Earth and preventing it shooting off in a straight line at a tangent to its orbit. What if, he thought, the force that holds the Moon in its orbit and the force that causes the apple to fall to the ground were one and the same? This frighteningly simple thought is the germ out of which Newton's theory of universal gravity and his masterpiece, the Principia, grew.

Newton guessed that, if the Moon were motionless, it would fall straight down to Earth just as the apple does. But the Moon has a momentum at right angles to the gravity vector, which always points to the center of the Earth, so the Moon's path gradually changes direction as it "falls" sideways around the Earth. The same gravitational force could account for the curved motion of the Moon and the straight motion of the apple.
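The "falling sideways" picture is easy to check numerically. Here is a minimal sketch (my own toy example, not anything from Newton): give a body the Moon's roughly correct tangential speed, apply only a single force pointing at Earth's center, and step the motion forward. Inertia alone would carry it off in a straight line; the center-pointing pull bends that line into an orbit, so its distance from Earth stays nearly constant for a whole month. The constants are standard SI values; the step size and integrator are arbitrary choices for illustration.

```python
import math

# Toy sketch: inertia plus one center-pointing force produces an orbit.
G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
M_EARTH = 5.972e24   # mass of the Earth, kg
r0 = 3.844e8         # mean Earth-Moon distance, m
v0 = 1022.0          # Moon's mean orbital speed, m/s, at right angles to gravity

x, y = r0, 0.0       # start the "Moon" on the x-axis
vx, vy = 0.0, v0     # moving tangentially
dt = 60.0            # one-minute time steps

def accel(x, y):
    """Acceleration always points from the body toward Earth's center."""
    r = math.hypot(x, y)
    a = -G * M_EARTH / r**3
    return a * x, a * y

# integrate for one sidereal month (~27.32 days), semi-implicit Euler
steps = int(27.32 * 86400 / dt)
for _ in range(steps):
    ax, ay = accel(x, y)
    vx += ax * dt
    vy += ay * dt
    x += vx * dt
    y += vy * dt

# The body has been "falling" the whole time, yet its distance from
# Earth barely changed: it fell sideways, around the planet.
r_final = math.hypot(x, y)
print(r_final / r0)  # close to 1
```

Dropping the tangential velocity (`v0 = 0`) in the same sketch sends the body straight down, which is exactly the apple's case.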
The Principia was published in 1687, after Newton put considerable additional work into his first intuition about gravity, including the critical insight that elliptical planetary orbits result from a force directed from each planet toward the Sun, a force inversely proportional to the square of the distance between the two. Not that Newton dreamed up either the planets' elliptical orbits or the inverse-square law on his own. Galileo had noticed in the late 16th and early 17th centuries that gravity acts as a constant acceleration on falling bodies, no matter what their weights. Kepler published his three laws of planetary motion in the first couple of decades of the 17th century, showing that planets move in ellipses of which the Sun is one focus. Between Newton's 1666 "apple moment" and the 1687 publication of the Principia, Hooke and others were inching their way toward the inverse-square law, first realizing that gravity always operated in one direction (earlier theories included the idea that gravity pushed at one point in the orbit and pulled at another), then establishing that its attractive power varied with distance, and finally nailing down the understanding that gravity alters with the square of the distance between the attracting bodies. Newton's genius was to understand that the inverse-square law, plus the tendency of objects to move in a straight line unless acted on by a force, simultaneously explained the elliptical paths of planets in Heaven and the straight downward fall of an apple from a tree on Earth.
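Newton's own check of that unifying idea, often called the "Moon test," is simple enough to redo in a few lines (the figures below are modern SI values, my choices for illustration): the Moon sits about 60 Earth radii away, so if gravity falls off as the square of the distance, the familiar surface acceleration g should shrink there by a factor of about 60 squared. Compare that prediction with the centripetal acceleration actually needed to hold the Moon in its (nearly circular) monthly orbit:

```python
import math

g_surface = 9.81       # acceleration of falling bodies at Earth's surface, m/s^2
r_earth = 6.371e6      # Earth's radius, m
r_moon = 3.844e8       # Earth-Moon distance, m (about 60 Earth radii)
T = 27.32 * 86400      # sidereal month, in seconds

# inverse-square prediction: g weakens by (r_earth / r_moon)^2 at the Moon
g_predicted = g_surface * (r_earth / r_moon) ** 2

# acceleration required to keep the Moon on a circle of radius r_moon:
# a = v^2 / r = 4 * pi^2 * r / T^2
a_centripetal = 4 * math.pi**2 * r_moon / T**2

print(g_predicted, a_centripetal)  # both about 2.7e-3 m/s^2
```

The two numbers agree to within a couple of percent: the same pull that drops the apple, diluted by the square of the distance, is just what the Moon's orbit requires.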
As Richard Feynman used to say, in the old world people believed that angels flew behind planets and pushed them in their circular paths. Now, in the advanced modern world, we say that the angels are invisible and they push at right angles to what we thought back then. We still have no idea what gravity is, but we're considerably more adept at describing what kinds of motions it produces, on Earth as it is in Heaven.
27 comments:
I gather from the fact that you put "perfect" and "beautiful" in scare quotes that you haven't encountered the arguments for these.
The ancient thought was that absolute perfection was achieved in a kind of unity, so that a thing did not need to change (which means, among other things, that it did not need to move). The world, even the heavens, is unable to sustain that ideal -- just why not wasn't entirely clear, but manifestly it was not. (Plato, in his myth of the demiurge, suggests that there's something about matter itself that is corrupt and disorderly.)
So the best that can be done, given the need for things to change and move, is for them to move in an eternal and unchanging way. The best way to achieve that is to go in a circle, because you always return to where you began.
Newton changed a lot more than just the distinction between heaven and earth, though: he eliminated every distinction among different kinds of things. When we use a Newtonian formula like "F=ma," we don't care in the least if the objects we are talking about are living ducks or sacks of feathers, stones or planets, fire or water.
That's productive of a kind of oddity, because in our other sciences we not only do care but need to care. In chemistry, for example, it's utterly important whether the thing I am trying to combine with hydrogen is oxygen or magnesium: everything about the character of the reaction depends on the existing structures, which is just the thing Newton was able to dispense with in physics.
So the question becomes whether physics is ultimately capable of explaining all those higher-order things using principles that don't care about natural kinds; or if natural kinds have a kind of reality of their own, but as emergent properties. There are good arguments on both sides.
Put another way, the eternal motion is superior because it doesn't come to an end: it sustains itself, as God does (to use the ancient Christian framing, because it will be more familiar). Angular movements 'use themselves up,' so that the energy put into throwing a rock carries it only so far; then it drops to the ground.
So the question was whether the rock moved in a different way than the circular body because it had a different nature. The answer is no, sort of: it turns out it doesn't matter, for the purpose of that question, what its nature happens to be. But its nature seems to matter elsewhere, sometimes quite a bit.
Naturally I've encountered the argument. I suppose the quotation marks expressed my fundamental disaffection with it: the tendency to assume that physical processes must occur in a particular way because of the emotions we project onto them, rather than to observe them carefully and see what they actually do. It was Newton's genius to see why the same principle elegantly explained both straight and curved paths without resorting to just-so stories in which one was base and the other spiritually elevated.
If the explanation for the different paths really had lain in the nature of the objects themselves, that would have been interesting, too, but it's not what observation showed for how objects attract each other at a distance. In other contexts, such as chemical reactions at close quarters, it does matter--the point being that it's not a question of whether we've decided ahead of time that it would be a good thing for things to work a certain way, but a question of what we observe. It's beautiful enough in its own right, without our having to distort it to fit a preconceived scheme.
Have you? It's not generally taught anymore; what is usually taught is a kind of studied disdain for those of our ancestors who thought it was plausible. Yet it is a good argument, strong in places where our own is somewhat mysterious, and in line with what they could observe empirically.
Another thing they were concerned with was 'action at a distance,' which remains a concern today. We like to talk about forces, but we aren't always able to explain just why a "force" should allow one object to act on another without contact. Mostly, like Newton, we just assume that it somehow does when we are working out our calculations.
This also didn't apply only above the earth. The idea wasn't just that there were intellects with spirits that were moving the planets (or the sphere of fixed stars); it's that the air, when a ball is thrown, must provide not resistance but a kind of wave-action force to keep the ball moving. Otherwise, when you let the ball go, how could you keep pushing it? The answer -- that the air moves out of the way, but then resumes its place behind the ball, imparting additional force -- is another one we tend to think of now as laughably wrong, but which had a good theoretical problem behind it.
If you're interested, anyway, most of Feynman's lectures are now available online. They are first and second-year lectures, so you may find them basic given your own studies. He's an entertaining thinker, though.
If you do decide to read Feynman's lectures, there's a point he makes at the end of 1-5 that strikes me as very similar to being on the ancient side of Newton. It's the biological version of the atomic hypothesis: "Everything that animals do, atoms do," that is, everything that animals do can be explained by atomic actions.
That's a very challenging hypothesis both if it is true, and if it is not. It certainly appears to be true that many things can be so explained. Like the ancient theorists, we've gotten a challenging argument out of empirical observation and it has carried us a very long way.
But it's a doctrine, if looked at the wrong way. What if everything can't be so explained? Well, one thing you could do is insist that things really can be -- we just have to keep looking for the explanation, because no other kind of explanation is admitted. Because it has proven so powerful for so long, as Aristotle's mechanics did, what is supposed to be a hypothesis has become, in our minds, a law of nature.
I guess what I'm trying to say is, this standard critique of pre-Newtonian thinkers bothers me because it is a good example of exactly what its proponents think it isn't. They argue that we shouldn't 'decide ahead of time' how things should work, but just 'go on observations.' And they put themselves up as examples of this wisdom.
But of course, that's not how science works at all. The one thing you really have to do is decide how things work ahead of time: this is called "forming a hypothesis." And in order to form a hypothesis, you have to develop a model that suggests why things might work out in the way you are hypothesizing they will. If the hypothesis appears to be confirmed, you take it as a model for developing new pre-observational ideas about 'how things ought to be.'
This is the whole business of science. It's what the Aristotelian thinkers were doing before Newton, and it's what Newtonian thinkers were doing until they ran up against the problems being caused by things like relativity. Then, adherents of Newtonian physics were the ones insisting that some final tweaking of the laws would get them right; it wasn't necessary to abandon these beautiful, simple laws for something else.
And they were right to try to make it work, not wrong to insist on their vision. You have to push each envelope as far as you can to find its limits. When the atomic hypothesis is invalidated, whenever that is, the only way we'll be able to know it wasn't valid is that someone who loved it defended it to the last with every tool, every argument, and every test they could devise.
"Everything that animals do, atoms do"--and an even more entertaining side of that is that, for all we can tell, elementary particles have as much free will as living creatures do. They seem to be free to choose every available path, though the one we experience on a macro level is a kind of average of the most likely paths. Whether that apparent freedom is an illusion born of our inability to see "behind the curtain" or "inside the black box" is something we can know nothing about at present, but it's still interesting.
I agree with you entirely that nothing Newton or any other scientific thinker has done has chipped away at the central mystery of action at a distance. We have no real basis for a hypothesis on that subject (non fingo!), whether we call it angels or anything else. That was the point of Feynman's elegant little joke: we think we're so much more advanced, but all we've really done is change the direction of the imputed force while attributing it to an equally and irreducibly mysterious source. The only sense in which we've made progress on that score is that we may be slightly more on guard against facile explanations of what the force ultimately is or what it means. We can describe and predict the result of the force with some accuracy, but that's about it.
None of which means I'm any more enamored than I ever was of ancient theories about the superior spirituality of curved paths over straight ones. As a metaphor that kind of thing can be useful in story or poetry. I part ways with it when it interferes with observations that don't fit pre-conceived theories. For me, observations always trump theories. If I make an observation that contradicts a theory, I may not relinquish the theory without a lot of additional thought and testing, but I'm never going to discard the observation. The truth will be what it is, not what I insisted it ought to be.
So I distrust all systems of scientific thought that strike me as unmoored from careful observation, which includes many explanations that focus on "why." "Why" is a good tool to apply to moral questions, but not to physical investigations, where "what" and "how" and "can we use this explanation to make good predictions" should predominate.
PS, Yes, I love Feynman's lectures and have listened to all of them that I could find on tape. I also have his book "Q.E.D," which is one of my favorites, as well as his wonderful three-volume textbook series. I wish I'd had it when I studied physics in high school and college. Feynman had a real gift for avoiding obscurity and illuminating first principles.
I think you're conflating the act of positing a provisional hypothesis with the insistence on clinging to a system in the face of conflicting evidence. Certainly scientists have to make inspired guesses before they can test them. But the proof is in the testing, not in the construction of the hypothesis.
It's human nature, apparently, to fall in love with our mental systems. The danger is not only that the beautiful scheme blinds us to nonconforming data, but that all too often people come to think that a few heretics need to be burned in order to protect people from the results of inquiry into natural physical processes that might undermine a beautiful philosophical structure. I'd be happier if we didn't rest our moral and spiritual systems on physical theories at all, particularly not ones for which we lack physical supporting data (or worse, for which there's lots of data, but unfortunately it contradicts the system).
There's no reason for scientists and philosophers to quarrel over issues like first causes or the inherent meaning of natural laws. They just get in each other's way. The scientists have no special expertise in meaning, and the philosophers have no authority to overrule observations.
"Why" is a good tool to apply to moral questions, but not to physical investigations, where "what" and "how" and "can we use this explanation to make good predictions" should predominate.
It can't be avoided, Tex. What you've just offered is a moral argument -- an argument about what "should" be done. Just as there is no way to avoid making hypotheses before observations become useful, or making systems that might ground hypotheses, you can't divide "is" from "ought." The idea that you could -- Hume's idea -- is one of the signal mistakes of modernism. It not only 'should not' be done, it cannot be done.
Physics, in a way, has become formally detached from philosophy; but the detachment is an illusion. It's still very much a part of the discipline, which arches over all human knowledge. Physics may be the place where we do the observing and measuring and predicting, but you must still answer the question of why it is worth doing -- especially at the theoretical levels, where practical applications (mere engineering!) are not only distant but often impossible. Yet even to take engineering as the end of the process is to make a moral judgment.
Not at all. It's perfectly appropriate to investigate physical processes without claiming to have definitive answers to the "why" of anything. Each scientist will have to answer the personal question of why the effort is worth the trouble, of course, but he won't be pretending to apply scientific principles to that inquiry.
What's more, each scientist can draw his own conclusions about "why," without falling into either of the twin traps I identified: (1) blinding himself to non-conforming data or (2) squelching anyone else's reports of non-conforming data.
"Mere engineering" makes me laugh. I agree that the application of engineering tools in the real world requires moral and philosophical judgements of each actor--but that has little or nothing to do with the appropriate frame of mind to apply to inquiries into physical processes, at least if the point is to get information that will be borne out in the real world.
The claim isn't that you can't investigate processes without having definitive answers to "why" things happen. The claim is that making a decision that you "should" proceed that way -- or "should usually" -- is a kind of moral judgment. "Should" judgments are judgments of that kind.
It could be grounded in many different ways, but it's impossible to break moral judgments from physical investigations. After all, you decided that you "ought" to spend your time that way, which means neglecting other things you could have done instead.
Even a utilitarian judgment that it's the most-likely-productive approach is still a moral judgment (after all, utilitarianism is one of the major subdivisions of the field of Ethics). It could be wrong, too: the source for the next new hypothetical model could be an aesthetic reading of the facts we have heretofore observed.
Indeed you might argue that it's likely to be. After all, the existing hypothetical model intentionally dismisses claims like that out of hand, so that they aren't considered. If there were any truth in them at all, it is a truth that will likely have been missed. (To say 'well, there shouldn't be any truth' is to be dogmatic in just the way we began talking about: it mistakes a moral claim for an empirical one.)
Yes, we agree that moral judgments of the "why" sort are legitimately part of the scientist's decisions about how to live his life. But no moral judgment of that kind could legitimately blind a scientist to a physical observation. Honesty is a basic moral obligation.
I do agree to that. Now why is that "should" different? "Scientists should be honest." That's different from the others.
Can you say why? What you have said so far is that morality -- ethics, philosophy -- is a basic part of science: that, far from excluding moral assumptions in our answers, this one at least is indispensable. But a moment ago we were saying that forcing scientific answers to attend to a moral code was a problem. Just what is the difference?
I think you already know my views on the source of moral imperatives! They are quite conventional.
Humans should conform to a moral code. Facts cannot be made to do so. A physical fact can't be made more true, or even more likely, by applying even the most spiritually impeccable standard of what the fact "should" be. That's a standard that applies to human beings possessed of free will and moral obligations. Physical facts simply are what they are; our role is to observe and report them honestly and without distortion based on our druthers. What bearing the physical facts have on our duties, on the other hand, is entirely within the moral realm.
It's an interesting problem, I think: of a piece with 20th century physics. To some degree even the most honest observer is going to find what he is looking for: particles or waves. So the quality of the observer is paramount.
He must be virtuous, in other words. But that's a shocking claim in itself: that the observations should depend on the virtues of the observer. Something funny is going on here. Something wonderful, maybe.
Whether a particle is best described as a particle or a wave depends on the physical aspects of its observation, not on the observer's moral virtue. A bad man will be confronted with evidence of the same apparent duality as a good man. The difference is that a bad man may be more likely to report his observations dishonestly, or to put them to a corrupt use.
This discussion reminds me of a possibly apocryphal story about Niels Bohr. Albert Einstein was famously given to skepticism about the centrality of probability theory in physics, asserting, for instance, that "God does not play at dice." Bohr is supposed to have admonished him: "Stop telling God what to do, Albert." We may think that observations of the workings of the world suggest an explanation that's at odds with our expectations of beauty and perfection, but it's not for us to prejudge. The world is as God made it, and if it falls short of our preconceived notions, it's our notions that are at fault, not Creation.
...depends on the physical aspects of its observation, not on the observer's moral virtue.
There are virtues besides moral ones, which are the ones I was thinking of -- virtues like perseverance, for example. But the moral virtues play a role too. Consider what the scientists observed in the climate change debate.
The argument against conservatives was that we were anti-science if we wouldn't believe what the scientists were observing in their experiments. The truth is that we didn't believe in the virtue of the scientists. You don't have to get as far as "hide the decline" for a lack of virtue to have an impact on the observed results.
Not all of it was bad science, in other words: some of it was perfectly good science, in terms of how the results were collected and the processes by which it was reviewed and re-tested independently. Some of it had to do with what questions you chose to ask, and what you wanted to find. Those are questions of virtue, sometimes including moral virtue, and they're impossible to separate from the scientific process.
Did I give you the impression that I thought science could be conducted without reference to moral issues? Of course when we practice science, or any other human endeavor, we are acting as moral beings for good or ill. I meant only that our mental schemes (moral or otherwise) could not change the truth of the facts that it's our business to observe, and that we will not get useful results if we let our projections distort our ability to face facts.
Isn't that exactly what went wrong with climate science? People got invested in a result, and a grand mental design, to the point of letting it blind them to the data they collected. They believed their models--their mental constructs--even when the incoming data contradicted their predictions. They tried to fudge the data instead of re-evaluating the mental constructs. "But it just HAS to be this way!" In other words, stop telling God what to do, Albert.
When it gets as far as fudging data, you're clearly wrong. Some of these scientists were immoral as well as lacking the epistemic virtues.
What I'm thinking of is more the way in which the hypotheses were honestly obtained from a model, honestly tested, and honestly confirmed. In good science you have to control for all but one variable. That means the questions you can ask are very small questions, or rather very specific. Invalidating any given hypothesis may not invalidate or even call into question the overall model. Likewise, or even more, confirming a given hypothesis may not do anything to confirm the model.
What I mean by the quality of the observer is really best found in these good cases. The observer finds what the observer sets out to discover: and it's got to be something very small and specific.
Of course. -- We're talking completely at cross-purposes.
I'm inclined to bait you by saying, "Maybe." :) But I won't do that. I'm much too nice a person.
In any case, what Newton did differently -- and Galileo even more so -- was that he was good at not caring if he agreed with anyone else. Both men were very much in love with the 'mental systems' they constructed. Those systems just happened to be very different from the ones around them. It let them ask different little questions from the ones others were asking, and that was what made the difference.
This is why we kept hearing about the importance of 'consensus' from working scientists during the climate debate, and why they were so wrong to want it.
Being indifferent to public opinion is a good thing. Being indifferent to data is not; I'm not aware of any evidence that that charge could fairly be lodged against either Newton or Galileo.
Are you aware that most of Newton's writings were on alchemy? :)
Of course, and he didn't get far with any good theories there. But did he ever get stuck on an alchemy theory and use it to disregard experimental evidence? There's no guarantee that even hard work and genius will give you a good theory every time, but honesty remains important whether your theory's a good one or a bad one.
Depends on what you mean by "disregard." Mostly that's not what happens, with Newton or anyone else. (Fudging the data isn't what I'm talking about.) The question is whether you try to understand the results you got in terms of your theory, or if you generate a new theoretical framework.
Now mostly people don't do the latter, and not just for bad reasons. Usually people who do are crackpots. But if you think you're at the point that you have to -- and Newton certainly never gave up on the theory of alchemy -- the best way to approach it is not to go forward, but to go back. Einstein and the other 'new physics' leaders made their progress in large part by returning to the prior arguments, and seeing what assumptions had gotten baked into Newton (and others) that might be questioned. There's a very interesting set of arguments from the late 19th century, and early 20th, re-examining the pre-modern heritage of science in order to try to see what got missed when the then-new ideas were being formed.
When learning to play a particular game, it helps to have good examples. Thank you both.