Archive for the ‘Science’ Category

Evolution by natural selection is a fact

February 5, 2019

My recent “free will” essay prompted some comments about evolution (on the Times-Union blog site). One invoked (at verbose length) the old “watchmaker” argument. Nature’s elegant complexity is analogized to finding a watch in the sand; surely it couldn’t have assembled itself by random natural processes. There had to be a watchmaker.

This argument is fallacious because a watch is purpose-built and nature is not. Nature is not the result of a process aimed at producing what we see today, but of one that could just as well have produced an infinity of alternative possibilities.

Look at a Jackson Pollock painting and you could say that to create precisely this particular pattern of splotches must have (like the watch) taken an immense amount of carefully planned work. Of course we know he just flung paint at the canvas. The complex result is what it is, not something Pollock “designed.”

Some see God in a similar role, not evolution’s designer but, rather, just setting it in motion. Could life have arisen out of nowhere, from nothing? Or could the Universe itself? Actually science has some useful things to say about that — better than positing a God who always existed or “stands outside time and space,” or some such woo-woo nonsense. And for life’s beginnings, while we don’t have every “i” dotted and “t” crossed (the earliest life could not have left fossils), we do know the basic story:

Our early seas contained an assortment of naturally occurring chemicals, whose interactions and recombinations were catalyzed by lightning, heat, pressure, and other natural phenomena. Making ever more complex molecules, by the trillion. One of the commonest elements is carbon, very promiscuous at hooking up with other atoms to create elaborate combinations.

Eventually one of those had the property of duplicating itself, by glomming other chemical bits floating by, or by splitting. Maybe that was an extremely improbable fluke. But realize it need only have happened once. Because each copy would go on to make more, and soon they’d be all over the place.

However, the copying would not have been perfect; there’d be occasional slight variations; with some faulty but also some better at staying intact and replicating. Those would spread more widely, with yet more variations, some yet more successful. Developing what biologist Richard Dawkins, in The Selfish Gene, called “survival machines.” Such as a protective coating or membrane. We’ve discovered a type of clay that spontaneously forms such membranes, which moreover divide upon reaching a certain size. So now you’ve got the makings of a primitive cell.

Is this a far-fetched story? To the contrary, given early Earth’s conditions, it actually seems inevitable. It’s hard to imagine it not happening. The 1952 Miller-Urey experiment reproduced those conditions in a test tube and the result was the creation of organic compounds, the “building blocks of life.”

That’s how evolution began. The duplicator molecules became genes (made of DNA). Their “survival machines” became organisms. That’s what we humans really are, glorified copying machines. A chicken is just an egg’s way to make another egg.

Of course DNA and genes, and Nature itself, do nothing with conscious purpose. Replicators competing with each other is simply math. Imagine your computer screen with one blue and one red dot. And a program saying every three seconds the blue dot will make another blue dot; but the red one will make two. Soon your screen will be all red.
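That thought experiment is easy to run for real. Here is a minimal sketch in Python, using the one-makes-one versus one-makes-two rule from above, with the three-second tick collapsed into a plain loop:

```python
# Red/blue dot replicator race: each tick, every blue dot makes one copy
# (the blue population doubles) while every red dot makes two (it triples).
blue, red = 1, 1
for tick in range(20):
    blue *= 2   # one blue begets one more blue
    red *= 3    # one red begets two more reds

red_share = red / (red + blue)
print(f"after 20 ticks the screen is {red_share:.2%} red")
```

After just twenty ticks, red holds better than 99.9% of the screen. The faster replicator always wins, and nothing in the loop has a purpose or a plan.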

A parable: A king wishes to bestow a reward, and invites the recipient to suggest one. He asks for a single rice grain — on a chessboard’s first square — then two on the second — and so on. The king, thinking he’s getting away cheaply, readily agrees. But before even reaching the final square, it’s all the rice in the kingdom.
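The parable’s arithmetic is worth checking: doubling a single grain across 64 squares yields 2⁶⁴ − 1 grains in total. One line of Python confirms it:

```python
# Square n (1-indexed) holds 2**(n - 1) grains; sum over all 64 squares.
total = sum(2 ** (n - 1) for n in range(1, 65))
print(f"{total:,} grains")  # 18,446,744,073,709,551,615 — about 1.8 * 10**19
```

Eighteen quintillion grains: far more rice than any kingdom ever held.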

This is the power of geometric multiplication. The power of genes replicating, in vast numbers, over vast time scales. (A billion years is longer than we can grasp.) And recall how genes are effectively in competition because occasionally their copies are imperfect (“mutations”), so no two organisms are exactly identical, and some are better at surviving and reproducing. Those supplant the others, just like red supplanted blue on your computer screen. But the process never stops, and in the fullness of time, new varieties evolve into new species. It’s propelled by ever-changing environments, requiring that organisms adapt by changing, or perish. This is evolution by natural selection.

Fossils provide indisputable proof. It’s untrue that there are “missing links.” In case after case, fossils show how species (including humans) have changed and evolved over time. (The horse is a great example. My illustration is from a website actually denying horse evolution, arguing that each of the earlier versions was a stand-alone species, unrelated to one another!)

We even see evolution happening live. Antibiotics changed the environment for bacteria. So drug-resistant bacteria rapidly evolved. Once-rare mutations enabling them to survive antibiotics have proliferated while the non-resistant are killed off.
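This live evolution is the red-screen logic again. A toy model (all the numbers here are invented for illustration): start with a million susceptible bacteria and a handful of resistant mutants, let the antibiotic kill 90% of the susceptible cells each generation, and let survivors of both kinds double:

```python
# Toy natural-selection model with made-up numbers: the antibiotic kills
# 90% of susceptible cells each generation; resistant cells are untouched.
# Survivors of both kinds then double.
resistant, susceptible = 10, 1_000_000
for generation in range(12):
    susceptible = int(susceptible * 0.1) * 2  # 90% killed, survivors double
    resistant = resistant * 2                 # unharmed, all double

share = resistant / (resistant + susceptible)
print(f"resistant share after 12 generations: {share:.2%}")
```

The once-rare mutation takes over the entire population in a dozen generations — which, for fast-dividing bacteria, is a matter of hours.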

Note that evolution doesn’t mean inexorable progression toward ever more complex or “higher” life forms. Again, the only thing that matters is gene replication (remember that red computer screen). Whatever works at causing more copies to be made is what will evolve. Humans evolved big brains because that happened to be a very successful adaptation. If greater simplicity works better, then an animal will evolve in that direction. There are in fact examples of this.

Another false argument against evolution is so-called “irreducible complexity.” Author Michael Behe claimed something like an eye could never have evolved without a designer because an incomplete, half-formed eye would be useless, conferring no advantage on an organism. In fact eyes did evolve through a long process beginning with light-sensitive cells that were primitive motion detectors, not at all useless. They conferred a survival advantage, albeit a small one, which multiplied over eons and improved through gradual incremental tweaks. So the eye, far from rebutting evolution, beautifully illustrates how evolution actually proceeds, and refutes any idea of intelligent design.
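How much can a “small” advantage matter? Compounding answers that. Using an invented figure of a 1% per-generation reproductive edge:

```python
# A hypothetical 1% per-generation reproductive advantage, compounded.
advantage = 1.01
generations = 1000
factor = advantage ** generations
print(f"after {generations:,} generations: about {factor:,.0f}x as many descendants")
```

A thousand generations is an eyeblink on evolutionary time scales, yet the lineage with the slightly better proto-eye already outnumbers its rivals some twenty-thousand-fold.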

In fact, because our eyes evolved in the undirected way they did, they’re very sub-optimal. A competent designer would have done far better. He would not have put the wiring in front of the light-sensitive parts, blocking some light (one reason we see poorly in dim light), nor bunched the optic nerve fibers to cause a blind spot. Some other animals (like squids) have much better eye design. And wouldn’t a really intelligent design include a third eye in the back?

Evolution by natural selection is the one great fact of biology. Not merely the best explanation for what we see in Nature, but the only possible rational explanation, and one that explains everything. As the geneticist Theodosius Dobzhansky said, “Nothing in biology makes sense except in the light of evolution.”


Consciousness, Self, and Free Will

January 29, 2019

What does it really mean to be conscious? To experience things? To have a self? And does that self really make choices and decisions?

I have wrestled with these issues numerous times on this blog. Recently I gave a talk, trying to pull it all together. Here is a condensed version:

It might seem that the more neuroscience advances, the less room there is for free will. We’re told it’s actually an illusion; that even the self is an illusion. But Daniel Dennett, in 2003, wrote Freedom Evolves, arguing that we do have a kind of free will after all.

The religious say evil exists because God gave people free will. But can you really have free will if God is omniscient and knows what you will do? This is really a question of causation, of determinism. The French mathematician Pierre-Simon Laplace posited that if a mind (“Laplace’s demon”) could know every detail of the state of the Universe at a given moment, it would know what will happen next. But Dennett says this ignores the factor of random chance. And quantum mechanics tells us that, at the subatomic level at least, things do happen randomly, without preceding causes.

Nevertheless, the deterministic argument against free will says that everything your brain does and decides is a result of causes beyond conscious control. That if you pick chocolate over vanilla, it’s because of something that happened among your brain neurons, whose structure was shaped by your biology, your genes, by everything that happened before. Like a computer program that cannot “choose” how it behaves.

Schopenhauer said, “a man can do what he wants but cannot will what he wants.” In other words, you can choose chocolate over vanilla, but can’t choose to have a preference for chocolate. Or: which gender to have sex with.

And what does the word “you” really mean? This is the problem of the self, of consciousness, entwined with the problem of free will. We all know what having a conscious self feels like. Sort of. But philosopher David Hume said no amount of introspection enabled him to catch hold of his self.

Another philosopher, René Descartes, conceived mind as something existing separately from our physical bodies. This “Cartesian dualism” is a false supernatural notion. Instead, mind and self can only be produced by (or emerge from) physical brain activity. There’s no other rational possibility.

Let’s consider how we experience vision. We not only see what’s before us, but also things we remember, or even things we imagine. All of it could be encoded (like in a computer) into 1s and 0s — zillions of them. But then how do “you” see that as a picture? We imagine what’s been called a “Cartesian theatre” (from Descartes again), with a projection screen, viewed by a little person in there (a “homunculus”). But how does the homunculus see? Is there another smaller one inside his brain? And so on endlessly?

A more helpful concept is representation, applicable to all mental processing. Nothing can be experienced directly in the brain. If it’s raining it can’t be wet inside your brain. But your brain constructs a representation of the rain. Like an artist painting a scene. And how exactly does the brain do that? We’re still working on that.

Similarly, what actually happens when you experience something like eating a cookie, or having sex? The experience isn’t mainly in the mouth or genitals but in the mind. By creating (from the sensory inputs) a representation. But then how do “you” (without a homunculus) see or experience that representation? Why, of course, by means of a further representation: of yourself having that experience.

And according to neuroscientist Antonio Damasio, in his book Descartes’ Error, we need yet another, third order representation, so that you not only know it’s raining, but know you know it. Still further, the mind also must maintain a representation of who “you” are. Including information like knowledge of your past, and ideas about your future, which must be constantly refreshed and updated.

All pretty complicated. Happily, our minds — just like our computer screens — hide from us all that internal complexity and give us a smooth simplified interface.


A totally deterministic view might make our lives seem meaningless. But Dennett writes that we live in an “atmosphere of free will” — “the enveloping, enabling, life-shaping, conceptual atmosphere of intentional action, planning and hoping and promising — and blaming, resenting, punishing and honoring.” This is all independent of whether determinism is true in some physical sense.

Determinism and causality are actually tricky concepts. If a ball is going to hit you, but you duck, would Laplace’s demon have predicted your ducking, so you were never going to be hit? In other words, whatever happens is what had to happen.

Dennett poses the example of a golfer missing a putt who says, “I could have made it.” What does that really mean? Repeat the exact circumstances and the result must be the same. However, before he swung, was it possible for him to swing differently than he wound up doing? Or was it all pre-ordained? Could he have, might he have, swung differently?

Martin Luther famously said, “Here I stand, I can do no other.” Was he denying his own free will? Could he have done otherwise? Or was his stand indeed a supreme exercise of personal will?

Jonathan Haidt, in his book The Righteous Mind, likened one’s conscious self to a rider on an elephant, which is the unconscious. We suppose the rider is the boss, directing the elephant, but it’s really the other way around. The rider’s role is just to come up with rationalizations for what the elephant wants. (This is a key factor in political opinions.)

And often we behave with no conscious thought at all. When showering, I go through an elaborate sequence of motions as if on autopilot. My conscious mind might be elsewhere. And how often have I (consciously) deliberated over whether to say a certain thing, only to hear the words pop suddenly out of my mouth?

A famous experiment, by neuroscientist Benjamin Libet, seemingly proved that a conscious decision to act is actually preceded, by some hundreds of milliseconds, by an unconscious triggering event in your brain. This has bugged me no end. I’ll try to beat it by, say, getting out of bed exactly when I myself decide, bypassing Libet’s unconscious brain trigger. I might decide I’ll get up on a count of three. But where did that decision come from?

However, even if the impetus for action arises unconsciously, we can veto it. If not free will, this has been called “free won’t.” It comes from our ability to think about our thoughts.

There’s a fear that without free will, there’s no personal responsibility, destroying the moral basis of society. Illustrative was a 2012 article in The Humanist magazine arguing against punishing Anders Breivik, the Norwegian mass murderer, because the killings were caused by brain events beyond his control. But “free won’t” is a helpful concept here. Psychiatrist Thomas Szasz argued that we all have antisocial impulses, yet to act upon them crosses a behavioral line that almost everyone can control. So Breivik was capable of choosing not to kill 77 people, and can be held responsible for his choice.

As his book title suggests, Dennett maintains that evolution produced our conscious self with free will. But those were unnecessary for nearly all organisms that ever existed. As long as the right behavior was forthcoming, there was no need for it “to be experienced by any thing or anybody.” However, as the environment and behavioral challenges grow more complex, it becomes advantageous to consider alternative actions. In developing this ability, Dennett says a key role was played by communication in a social context, with back-and-forth discussion of reasons for actions, highly enhanced by language. Recall the importance of representation. I mentioned the artist and his canvas. Our minds don’t have paints, but create word pictures and metaphors, multiplying the power of representation.

Another book by Dennett, in 1991, was Consciousness Explained. It said that the common idea of your self as a “captain at the helm” in your mind is wrong. It’s really more like a gaggle of crew members fighting over the wheel. A lot of neurons sparking all over the place. And what you’re thinking at any given moment is a matter of which gang of neurons happens to be on top.

Yet in Freedom Evolves, Dennett now winds up insisting that we can and do use rationality and deliberation to resolve such internal conflicts, and that “there is somebody home” (the self) after all, to take responsibility and be morally accountable. This might sound like positing a sort of homunculus in there. But let me offer my own take.

When the crewmen battle over the wheel, to say the outcome is deterministically governed by a long string of preceding causes is too simplistic. Instead, everything about that competition among neuron groups embodies who you are, your personality and character, constructed over years. Shaped by many deterministic factors, yes — your biology, genes, upbringing, experiences, a host of other environmental influences, etc. But also, importantly, shaped by all your past choices and decisions. We are not wholly self-constructed, but we are partly self-constructed. Your past history reflects past battles over the wheel, but in all those too, personality and character factors came into play.

Personality and character can change throughout one’s life, sometimes even through conscious efforts to change. And no choice or decision is ever a foregone conclusion. Even if most people, most of the time, behave very predictably, it’s not like the chess computer that will play the same move every time. Causation is not compulsion. People are not robots.

Nothing is more deterministically caused than a smoker’s lighting up, a consequence of physical addiction on top of psychological and behavioral conditioning, and even social ritual. Seemingly a textbook case of B.F. Skinner’s deterministic behaviorism. Yet smokers quit! Surely that’s free will.

Now, you might say the quitting itself actually has its own deterministic causes — predictable by Laplace’s demon — whatever happens is what had to happen. But this loads more weight upon the concept of determinism than it can reasonably be made to carry. In fact, there’s no amount of causation, biological or otherwise, that predicts behavior with certainty. There are just too many variables. Including the “free won’t” veto power.

And even if Libet was right, and a decision like exactly when to move your finger (or get out of bed) really is deterministically caused — how is that relevant to our choices and decisions that really matter? When in college, I’d been programmed my whole life to become a doctor. But one night I thought really hard about it and decided on law instead. Concerning a decision like that, the Libet experiment, the whole concept of determinism, tells us nothing.

This is compatibilism: a view of free will that’s actually compatible with causation and determinism.

We started with the question, how can you have free will if an omniscient God knows what you’ll do? Well, the answer is, he cannot know. But — even if God — or Laplace’s demon — could (hypothetically) predict what your self will do — so what? It’s still your self that does it. A different self would do different. And you’re responsible (at least to a considerable degree) for your self. That’s my view of free will.


No, Virginia, there is no Santa Claus

December 22, 2018

We gave our daughter the middle name Verity, which actually means truth, and tried to raise her accordingly.

About the Easter Bunny and the Tooth Fairy, she wised up pretty early, as a toddler. About Santa, she was skeptical, but brought scientific reason to bear. A big unwieldy rocking horse she doubted could have gotten into the house without Santa’s help. So that convinced her — for a while at least.

Recently a first grade teacher was fired for telling students there is no Santa (nor any other kind of magic). This reality dunk was considered a kind of child abuse; puncturing their illusions deemed cruel; plenty of time for that when they grow up. However, the problem is that a lot of people never do get with reality. As comedian Neal Brennan said (on The Daily Show), belief in Santa Claus may be harmless but is a “gateway drug” to other more consequential delusions.

People do usually give up belief in Santa. But not astrology, UFOs, and, of course (the big ones) God and Heaven. The only thing making those illusions seemingly more credible than Santa Claus is the fact that so many people still cling to them.

America is indeed mired in a pervasive culture of magical beliefs, not just with religion, but infecting the whole public sphere. Like the “Good guy with a gun” theory. Like climate change denial. And of course over 40% still believe the world’s worst liar is somehow “making America great again.” (History shows even the rottenest leaders always attract plenty of followers.)

Liberals are not immune. Beliefs about vaccines and GM foods being harmful are scientifically bunk. In fact it’s those beliefs that do harm.

I’ve written repeatedly about the importance of confirmation bias — how we love information that seemingly supports our beliefs and shun anything contrary. The Economist recently reported on a fascinating study, where people had to choose whether to read and respond to eight arguments supporting their own views on gay marriage, or eight against. But choosing the former could cost them money. Yet almost two-thirds of Americans (on both sides of the issue) actually still opted against exposure to unwelcome advocacy! In another study, nearly half of voters made to hear why others backed the opposing presidential candidate likened the experience to having a tooth pulled.

And being smarter actually doesn’t help. In fact, smarter people are better at coming up with rationalizations for their beliefs and for dismissing countervailing information.

Yet a further study reported by The Economist used an MRI to scan people’s brains while they read statements for or against their beliefs. Based on what brain regions lit up, the study concluded that major beliefs are an integral part of one’s sense of personal identity. No wonder they’re so impervious to reality.

Remarkably, given the shitstorm so totally perverting the Republican party, not a single Republican member of Congress has renounced it.

The Economist ended by saying “accurate information does not always seem to have much of an effect (but we will keep trying anyway).”

So will I.

The REALLY big picture

December 19, 2018

We start from the fact that the Universe was created by God in 4004 BC.

Oops, not exactly. It was actually more like 13,800,000,000 BC (give or take a year or two). The event is called the Big Bang — a name given by astronomer Fred Hoyle intended sarcastically — and it was not an “explosion.” Rather, if you take the laws of physics and run the tape backwards, you get to a point where the Universe is virtually infinitely tiny, dense, and hot. A “singularity,” where the laws of physics break down — and we can’t go farther back to hypothesize what came before. Indeed, since Time began with the Big Bang, “before” has no meaning. Nevertheless, while some might say God did it, it’s reasonable instead to posit some natural phenomenon, a “quantum fluctuation” or what have you.

So after the Big Bang we started with what’s called the “Quantum Gravity Epoch.” It was rather brief as “epochs” go — lasting, to be exact, 10⁻⁴³ of a second. That’s 1 divided by 1 followed by 43 zeroes.

That was followed by the “Inflationary Epoch,” which also went fairly quick, ending when the Universe was still a youngster 10⁻³⁴ of a second old.

But in that span of time between 10⁻⁴³ and 10⁻³⁴ of a second, something big happened. You know how it is when you eat a rich dessert and virtually blow up in size? We don’t know what the Universe ate, but it did blow up, going from a size almost infinitely small to one almost infinitely large, in just that teensy fraction of a second; thus expanding way faster than the speed of light.

After that hectic start, things became more leisurely. It took another few hundred million years, at least, for the first stars to twinkle on.

This is the prevailing scientific model. If you find this story hard to believe, well, you can believe the Bible instead.

Here are some more facts to get your head around. Our galaxy comprises one or two hundred billion stars, and is around 100,000 light years across. A light year is the distance light travels in a year – about 6 trillion miles. And ours is actually a pipsqueak galaxy; at the bottom of the range which goes up to ten times bigger. And how many galaxies are there? Wait for it . . . two trillion. But that’s only in the observable part of the Universe; we can only see objects whose light could reach us within the 13.8 billion years the Universe has existed. Because of its expansion during that time, the observable part actually stretches 93 billion light years. We don’t know how much bigger the total Universe might be. Could be ten trillion light years across. (I don’t want to talk about “infinite.”)
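The “about 6 trillion miles” figure is easy to verify (assumed constants: light travels about 186,282 miles per second, and a year is 365.25 days):

```python
# One light year in miles: speed of light times seconds in a year.
miles_per_second = 186_282
seconds_per_year = 365.25 * 24 * 3600          # 31,557,600 seconds
light_year_miles = miles_per_second * seconds_per_year
print(f"one light year ≈ {light_year_miles:.2e} miles")   # roughly 5.88e12

# Our galaxy's width in miles, at ~100,000 light years across:
galaxy_width_miles = 100_000 * light_year_miles
```

About 5.88 trillion miles per light year, so the galaxy spans nearly 600 quadrillion miles. Numbers like that are why astronomers don’t use miles.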

Now, it was Edwin Hubble who in 1929 made the astounding discovery that some of the pinpoints of light we were seeing in the sky are not stars but other galaxies. And more, they are moving away from us; the farther away, the faster. Actually, it’s not that the galaxies are moving; rather, space itself is expanding. Jain analogized the galaxies to ants on the surface of a balloon. If you inflate it, the distance between ants grows, even while they themselves don’t move. And note, space is not expanding into anything. It is making more space as it goes along.

But there are two big mysteries. Newton posited that the force of gravity is proportional to mass and diminishes with the square of the distance between masses. However, what we see in other galaxies does not conform to this law; it’s as though there has to be more mass. We don’t yet know what that is; we call it “dark matter.” (There is an alternative theory, that Newton’s law of gravity doesn’t hold true at great distances, which might account for what we see with no “dark matter.”)
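Newton’s law in symbols, for reference: F = G·m₁·m₂/r². The feature that matters for the dark-matter puzzle is the inverse-square falloff — doubling the distance cuts the force to a quarter:

```python
# Newton's law of universal gravitation: F = G * m1 * m2 / r**2.
G = 6.674e-11  # gravitational constant, m^3 kg^-1 s^-2

def gravity(m1, m2, r):
    """Attractive force in newtons between masses m1, m2 (kg) at distance r (m)."""
    return G * m1 * m2 / r ** 2

# Doubling the separation quarters the force:
ratio = gravity(1.0, 1.0, 2.0) / gravity(1.0, 1.0, 1.0)
print(ratio)  # 0.25
```

Stars orbiting in the outskirts of galaxies move too fast for this falloff given the visible mass — hence the inference of unseen mass, or of a modified law.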

The other problem is that what we know of physics and gravity suggests that the Universe’s expansion should be slowing. But we have found that at a certain point during its history, the expansion accelerated, and continues to do so. This implies the existence of a force we can’t yet account for; we label it “dark energy.”

“Ordinary matter” (that we can detect) accounts for only about 5% of the Universe. Roughly another 27% is dark matter and 68% dark energy. (Remember that matter and energy are interchangeable. That’s how we get atom bombs.)

But, again, the story is a lot simpler if you choose instead to believe the Bible.

(This is my recap of a recent talk by Vivek Jain, SUNY Associate Professor of Physics, at the Capital District Humanist Society.)

“The Discovery” — Scientific proof of Heaven

December 6, 2018

Our daughter recommended seeing this Netflix film, “The Discovery.” It starts with scientist Thomas Harbor (Robert Redford) giving a rare interview about his discovery proving that we go somewhere after death.

This has precipitated a wave of suicides. Asked if he feels responsible, Harbor simply says “no.” Then a man shoots himself right in front of him.

Next, cut to Will and Ayla (“Isla” according to Wikipedia) who meet as the lone passengers on an island ferry. Talk turns to “the discovery.” Will is a skeptic who doesn’t think it’s proven.

Turns out Will is Harbor’s estranged son, traveling to reconnect with him at Harbor’s island castle. Where he runs a cult peopled with lost souls unmoored by “the discovery.” While continuing his work, trying to learn where, exactly, the dead go.

Meantime, people keep killing themselves, aiming to “get there” — wherever “there” is. Will saves Ayla from drowning herself and brings her into the castle.

Harbor has created a machine to get a fix on “there” by probing a brain during near-death experiences — his own. It doesn’t work. “We need a corpse,” he decides.

So Will — his skepticism now forgotten — and Ayla steal one from a morgue. This is where the film got seriously silly. (Real scientists nowadays aren’t body snatchers.) The scene with the dead guy hooked up to the machine and subjected to repeated electrical shocks was straight out of the 1931 Frankenstein.

This doesn’t work either. At first. But later, alone in the lab, Will finds a video actually had gotten extracted from the corpse’s brain. Now he’s on a mission to decode it.

I won’t divulge more of the plot. But the “there” in question is “another plane of existence.” Whatever that might actually mean. There’s also some “alternate universes” thing going on, combined with some Groundhog Dayish looping. A real conceptual mishmash.

One review faulted the film for mainly wandering in the weeds of relationship tensions rather than really exploring the huge scientific and philosophical issues. I agree.

The film’s metaphysical incoherence goes with the territory of “proving” an afterlife. There was no serious effort at scientific plausibility, which would be a tall order. Mind and self are entirely rooted in brain function. When the brain dies, that’s it.

The film didn’t delve either into the thinking of any of the folks who committed suicide, which would have been interesting. After all, many millions already strongly believe in Heaven, yet are in no hurry to go. But, as I have said, “belief” is a tricky concept. You may persuade yourself that you believe something, while another part of your mind does not.

The film’s supposed scientific proof presumably provides the clincher. Actually, religious people, even while professing that faith stands apart from questions of evidence, nevertheless do latch on to whatever shreds of evidence they can, to validate their beliefs. For Heaven, there’s plenty, including testimonies of people who’ve been there. But there’s still that part of the brain that doesn’t quite buy it. Would an assertedly scientific discovery change this?

I doubt it. Most people have a shaky conception of science, with many religious folks holding an adversarial stance toward it. Science is, remember, the progenitor of evolution, which they hate. Meantime — this the film completely ignored — religionists generally consider suicide a sin against God. Surely that can’t be your best route to Heaven!

The film did mention that people going on a trip want to see a brochure first. That’s what Harbor’s further work aimed to supply. Without it — without “the discovery” having provided any idea what the afterlife might be like — killing oneself to get there seems a pretty crazy crapshoot. Even for religious nuts.

Truth, beauty, and goodness

September 20, 2018

Which among the three would you choose?

I read Howard Gardner’s 2011 book, Truth, Beauty, and Goodness Reframed: Educating for the Virtues in the Twenty-first Century. (Frankly I’d picked it up because I confused him with Martin Gardner; but never mind.)

Beauty I won’t discuss. But truth and goodness seem more important topics today than ever.

Many people might feel their heads spin as age-old seeming truths fall. “Eternal verities” and folk wisdom have been progressively undermined by science. Falling hardest, of course, is God, previously the explanation for everything we didn’t understand. He clings on as the “God of the gaps,” the gaps in our knowledge that is, but those continue to shrink.

Darwin was a big gap-filler. One might still imagine a god setting in motion the natural processes Darwin elucidated, but that’s a far cry from his (God’s) former omnicompetence.

While for me such scientific advancements illuminate truth, others are disconcerted by them, often refusing to accept them, thus placing themselves in an intellectually fraught position with respect to the whole concept of truth. If one can eschew so obvious a fact as evolution, then everything stands upon quicksand.

Muddying the waters even more is postmodernist relativism. This is the idea that truth itself is a faulty concept; there really is no such thing as truth; and science is just one way of looking at the world, no better than any other. What nonsense. Astronomy and astrology do not stand equally vis-a-vis truth. (And if all truth is relative, that statement applies to itself.)

Though postmodernism did enjoy a vogue in academic circles, as a provocatively puckish stance against common sense by people who fancied themselves more clever, it never much infected the wider culture, and even its allure in academia deservedly faded. And yet postmodernism did not sink without leaving behind a cultural scum. While it failed to topple the concept of truth, postmodernism did inflict some lasting damage on it, opening the door to abuse it in all sorts of other ways.

All this background helped set the stage for what’s happening in today’s American public square. The pathology might have advanced more gradually, but Trump greatly accelerated it by testing the limits and finding they’d fallen away. Once, a clear lie would have been pretty much fatal for a politician. Now one who lies continuously and extravagantly encounters almost no consequences.

It’s no coincidence that many climate change deniers and believers in Biblical inerrancy, young Earth creationism, Heaven, and Hell, are similarly vulnerable to Trump’s whoppers. Their mental lie detector fails here because it’s already so compromised by the mind contortions needed to sustain those other counterfactual beliefs.

But of course there’s also simple mental laziness — people believing things with no attempt at critical evaluation.

A long-ago episode in my legal career sticks with me. I was counsel for the staff experts in PSC regulatory proceedings. We had submitted some prepared testimony; the utility filed its rebuttal. I read their document with a horrible sinking feeling. They’d demolished our case! But then we went to work carefully analyzing their submittal, its chains of logic, evidence, and inferences. In the end, we shot it as full of holes as they had initially seemed to do to ours.

The point is that the truth can take work. Mark Twain supposedly said a lie can race around the globe while the truth is putting its shoes on. Anyone reading that utility rebuttal, and stopping there, would likely have fallen for it. And indeed, that’s how things usually do go. Worse yet, polemical assertions are often met with not critical thinking but, on the contrary, receptivity. That’s the “confirmation bias” I keep stressing. People tend to believe things that fit with their preconceived opinions — seeking them out, and saying, “Yeah, that’s right” — while closing eyes and ears to anything at odds with those beliefs.

A further aspect of postmodernism was moral relativism. Rejection of empirical truth as a concept was extended to questions of right and wrong — if there’s no such thing as truth, neither are right and wrong valid concepts. The upshot is nonjudgmentalism.

Here we see a divergence between young and old. Nonjudgmentalism is a modern tendency. Insofar as it engenders an ethos of tolerance toward human differences, that’s a good thing. It has certainly hastened the decline of prejudice toward LGBTQs.

Yet tolerance and nonjudgmentalism are not the same. Tolerance reflects the fundamental idea of libertarianism/classical liberalism — that your right to swing your fist stops at my nose — but otherwise I have no right to stop your swinging it. Nor to stop you from, for example, sticking your penis in a willing orifice. Nonjudgmentalism is, however, a much broader concept, embodying again the postmodernist rejection of any moral truths. Thus applied in full force it would wipe out even the fist/nose rule.

That is not as absurd a concern as it might seem. Howard Gardner’s book speaks to it. He teaches at Harvard and expresses surprise at the extent to which full-bore nonjudgmentalism reigns among students. They are very reluctant to judge anything wrong, even cheating on exams, shoplifting, and other behaviors all too common among students. A situational ethic of sorts is invoked to excuse and exculpate, and thereby avoid the taboo of judgment.

Presumably they’d still recognize the clearest moral lines, such as the one about murder? Not so fast. Gardner reports on conducting “numerous informal ‘reflection’ sessions with young people at various secondary schools and colleges in the United States.” Asked to list people they admire, students tend to demur, or confine themselves only to ones they know personally. And they’re “strangely reluctant” to identify anyone they don’t admire. “Indeed,” Gardner writes, “in one session [he] could not even get students to state that Hitler should be featured on a ‘not-to-be-admired’ list.”

Well, ignorance about history also seems lamentably endemic today. But what Gardner reports is actually stranger than might first appear. As I have argued, we evolved in groups wherein social cooperation was vital to survival, hence we developed a harsh inborn judgmentalism against anything appearing to be anti-social behavior. That (not religion) is the bedrock of human morality. And if that deep biological impulse is being overridden and neutered by a postmodernist ethos of nonjudgmentalism, that is a new day indeed for humankind, with the profoundest implications.

Was America founded as a “Christian nation?”

August 13, 2018

We’re often told that it was. The aim is to cast secularism as somehow un-American, and override the Constitution’s separation of church and state. But it’s the latter idea that’s un-American; and it’s historical nonsense. Just one more way in which the religious right is steeped in lies (forgetting the Ninth Commandment).


They assault what is in fact one of the greatest things about America’s birth. It’s made clear in Susan Jacoby’s book, Freethinkers: A History of American Secularism.

Firstly, it tortures historical truth to paint the founding fathers as devout Christians. They were not; they were men of the Enlightenment. While “atheism” wasn’t even a thing at the time, most of them were as close to it as an Eighteenth Century person could be. Franklin was surely one of the century’s most irreverent. Washington never in his life penned the name “Christ.” Jefferson cut-and-pasted his own New Testament, leaving out everything supernatural and Christ’s divinity. In one letter he called Christian doctrine “metaphysical insanity.”

The secularism issue was arguably joined in 1784 (before the Constitution) when Patrick Henry introduced a bill in Virginia’s legislature to tax all citizens to fund “teachers of the Christian religion.” Most states still routinely had quasi-official established churches. But James Madison and others mobilized public opinion onto an opposite path. The upshot was Virginia passing not Henry’s bill but, instead, one Jefferson had proposed years earlier: the Virginia Statute for Religious Freedom.

It was one of three achievements Jefferson had engraved on his tombstone.

The law promulgated total separation of church and state. Nobody could be required to support any religion, nor be penalized or disadvantaged because of religious beliefs or opinions. In the world of the Eighteenth Century, this was revolutionary. News of it spread overseas and created an international sensation. After all, this was a world still bathed in blood from religious believers persecuting other religious believers. It was not so long since people were burned at the stake over religion, and since a third of Europe’s population perished in wars of faith. Enough, said Virginia, cutting the Gordian knot that entangled governmental power with religion.

Soon thereafter delegates met in Philadelphia to create our Constitution. It too was revolutionary; in part for what it did not say. The word “God” nowhere appears, let alone the word “Christian.” Instead of starting with a nod to the deity, which would have seemed almost obligatory, the Constitution begins “We the people of the United States . . . .” We people did this, ourselves, with no god in the picture.

This feature did not pass unnoticed at the time; to the contrary, it was widely denounced, as an important argument against ratifying the Constitution. But those views were outvoted, and every state ratified.

It gets better. Article VI, Clause 3 says “no religious test shall ever be required” for holding any public office or trust. This too was highly controversial, contradicting what was still the practice in most states, and with opponents warning that it could allow a Muslim (!) president. But the “no religious test” provision shows the Constitution’s framers were rejecting all that, and totally embracing, instead, the religious freedom stance of Virginia’s then-recent enactment. And that too was ratified.

Indeed, it still wasn’t even good enough. In the debates over ratification, many felt the Constitution didn’t sufficiently safeguard freedoms, including religious freedom, and they insisted on amendments, which were duly adopted in 1791. That was the Bill of Rights. And the very first amendment guaranteed freedom of both speech and religion — which go hand-in-hand. This made clear that all Americans have a right to their opinions, and to voice those opinions, including ideas about religion, and that government could not interfere. Thus would Jefferson later write of “the wall of separation” between church and state.

All this was, again, revolutionary. The founders, people of great knowledge and wisdom, understood exactly what they were doing, having well in mind all the harm that had historically been done by government entanglement with religion. What they created was something new in the world, and something very good indeed.

Interestingly, as Jacoby’s book explains, much early U.S. anti-Catholic prejudice stemmed from Protestants’ fear that Catholics, if they got the chance, would undermine our hard-won church-state separation, repeating the horrors Europe had endured.

A final point by Jacoby: the religious attack on science (mainly, evolution science) does not show religion and science are necessarily incompatible. Rather, it shows that a religion claiming “the one true answer to the origins and ultimate purpose of human life” is “incompatible not only with science but with democracy.” Because such a religion really says that issues like abortion, capital punishment, or biomedical research can never be resolved by imperfect human opinion, but only by God’s word. This echoes the view of Islamic fundamentalists that democracy itself, with humans presuming to govern themselves, is offensive to God. What that means in practice, of course, is not rule by (a nonexistent) God but by pious frauds who pretend to speak for him.

I’m proud to be a citizen of a nation founded as a free one* — not a Christian one.

* What about slaves? What about women? Sorry, I have no truck with those who blacken America’s founding because it was not a perfect utopia from Day One. Rome wasn’t built in a day. The degree of democracy and freedom we did establish were virtually without precedent in the world of the time. And the founders were believers in human progress, who created a system open to positive change; and in the centuries since, we have indeed achieved much progress.

How to become a Nazi

July 9, 2018

You’re a nurse, and a doctor instructs you, by phone, to give his patient 20 mg of a certain drug. The bottle clearly says 10 mg is the maximum allowable daily dose. Would you administer the 20 mg? Asked this hypothetical question, nearly all nurses say no. But when the experiment was actually run, 21 out of 22 nurses followed the doctor’s orders, despite knowing it was wrong.

Then there was the famous Milgram experiment. Participants were directed to administer escalating electric shocks to other test subjects for incorrect answers. Most people did as instructed, even when the shocks elicited screams of pain; even when the victims apparently lost consciousness. (They were actors and not actually shocked.)

These experiments are noted in Michael Shermer’s book, The Moral Arc, in a chapter about the Nazis. Shermer argues that in the big picture we are morally progressing. But here he examines how it can go wrong, trying to understand how people became Nazis.

Normal people have strong, deeply embedded moral scruples. But they are very situation-oriented. Look at the famous “runaway trolley” hypothetical. Most people express willingness to pull a switch to detour the trolley to kill one person to prevent its killing five. But if you have to physically push the one to his death — even though the moral calculus would seem equivalent — most people balk.

So it always depends on the circumstances. In the nurse experiment, when it came down to it, the nurses were unwilling to go against the doctor. Likewise in Milgram’s experiment, it was the authority of the white-coated supervisor that made people obey his order to give shocks, even while most felt very queasy about it.

Nazis too often explained themselves saying, “I was only following orders.” And, to be fair, the penalty for disobeying was often severe. But that was hardly the whole story. In fact, the main thing was the societal normalization of Nazism. When your entire community, from top to bottom, is besotted with an idea, it’s very hard not to be sucked in.

Even if it is, well, crazy. Nazi swaggering might actually not have been delusional if confined to the European theatre. They overran a lot of countries. But then unbridled megalomania led them to take on, as well, Russia — and America. This doomed insanity they pursued to the bitter end.

Yet they didn’t see it that way. The power of groupthink.

And what about the idea of exterminating Jews? They didn’t come to it all at once, but in incremental steps. They actually started with killing “substandard” Germans — mentally or physically handicapped, the blind, the deaf — tens of thousands. With the Jews they began with social ostracizing and increasing curtailment of rights.

This was accompanied by dehumanization and demonization. Jews were not just called inferior, genetically and morally, but blamed for a host of ills, including causing WWI, and causing Germany’s defeat. Thus Germans convinced themselves the Jews deserved whatever they got, had “brought it on themselves.” These ideas were in the very air Germans breathed.

Part of this was what Shermer calls “pluralistic ignorance” — taking on false beliefs because you imagine everyone holds them. Like college students who’ve been shown to have very exaggerated ideas of their peers’ sexual promiscuity and alcohol abuse, causing them to conform to those supposed norms. Germans similarly believed negative stereotypes about Jews because they thought most fellow Germans held such views. Actually many did not, but kept that hidden, for obvious reasons. There was no debate about it.

Of course it was all factually nonsense. An insult to intelligence, to anyone who knew anything about anything. Yet Germany — seemingly the most culturally advanced society on Earth, the epicenter of learning, philosophy, the arts — fell completely for this nonsense and wound up murdering six million in its name.*

Which brings me to Trumpism. (You knew it would.) Am I equating it with Nazism? No. Not yet. But the pathology has disturbing parallels. The tribalism, the groupthink, the us-versus-them, nationalism, racism, and contempt for other peoples. The demonization of immigrants, falsely blaming them for all sorts of ills, to justify horrible mistreatment like taking children from parents — even saying, “they brought it on themselves.” And especially the suspension of critical faculties to follow blindly a very bad leader and swallow bushels of lies.

I might once have said “it can’t happen here” because of our strong democratic culture. Today I’m not so sure. Culture can change. That within the Republican party certainly has. Not so long ago the prevailing national attitude toward politicians was “I’m from Missouri,” and “they’re all crooks and liars.” Too cynical perhaps but the skepticism was healthy, and it meant that being caught in a lie (or even flip-flopping) was devastating for a politician. Contrast Republicans’ attitude toward Trump (a politician after all). Not only a real crook and constant flip-flopper, but a Brobdingnagian liar. That 40% of Americans line up in lockstep behind this is frightening. And as for our democratic culture, the sad truth is that too few still understand its principles and values. Germans in their time were the apogee of civilization, and then they became Nazis.

Shermer quotes Hitler saying, “Give me five years and you will not recognize Germany again.” Fortunately Trump will have only four — let’s hope. But America is already becoming unrecognizable.

* My grandfather was a good patriotic German who’d even taken a bullet for his country in WWI. But that didn’t matter; he was Jewish. Fortunately he, with wife and daughter, got out alive. His mother did not.

Are humans smarter than (other) animals?

June 27, 2018

Around 1900, “Clever Hans” was a famous German horse with seeming mathematical ability. Asked “what is four times three?” Hans would tap his hoof twelve times. He was usually right even when his owner wasn’t present; and even when given the questions in writing!

Animal intelligence — and consciousness — are age-old puzzles to us. French philosopher René Descartes saw other animals as, in effect, mechanical contrivances. And even today many see all their behaviors as produced not by intelligent consciousness (like ours) but rather by instinct — pre-installed algorithms that dictate responses to stimuli — like computers running programs.

Clever Hans’s story is recapped in Yuval Noah Harari’s book, Homo Deus. It was eventually proven that Hans knew no math at all. Instead, he was cued to stop tapping his hoof by onlookers’ body language and facial expressions. But, Harari says, that didn’t debunk Hans’s intelligence, it did the opposite. His performance required far more brain power than simple math! You might have memorized 4×3=12 — but could you have gotten the answer the way Hans did?

This points up the difficulty of inferring animal mentation using human yardsticks. Harari explains Hans’s abilities by noting that horses, unequipped for verbal language, communicate instead through body language — so they get pretty good at it. Much better than us.

So if horses are so smart, why aren’t they sitting in the stands at Saratoga while humans run around the track? Well, for one thing, building that sort of facility would have been a lot harder for horses with hooves rather than our dextrous five-fingered hands. Our tool-making capability is a huge factor. And our intelligence, taken as a whole, probably does outstrip that of any other animal. It had to, because early humans faced far more complex survival challenges. Countless other species failed such tests and went extinct. We did not because an evolutionary fluke gave us, just in time, an extreme adaptation in our brains, unlike any other animal’s. Our equivalent of the narwhal’s huge tusk or the giraffe’s neck.

That happened a couple of hundred thousand years ago. Yet for some 98% of those years, humans achieved little more than mere survival. Only in the last few thousand have we suddenly exploded into a force dominating the Earth as no creature before.

Why that delay? In fact, Harari notes, our stone age ancestors must have been even smarter than people today. After all, their lives were much tougher. One mistake and you’d be dead; your dumb genes would not make it into the next generation.

Harari thinks — I tend to agree — that cooperation proved to be humanity’s killer app. PBS TV’s recent “Civilizations” series illuminates how things really got going with the development of agriculture about 10,000 years ago. Arguably farmers were actually worse off in many ways; and maybe even humanity as a whole for about 9,800 of those years. But agriculture, and the production of food surpluses, did make possible the rise of cities, where people could specialize in particular enterprises, and interact and exchange ideas with large numbers of other people. That eventually paid off spectacularly, in terms of human material well-being, in modern times.

Harari notes that ants and bees too live in large cooperative communities. So why haven’t they developed computers and spaceships? Our super intelligent consciousness also gave us great flexibility to adapt to changing circumstances. Insects have a far more limited repertoire of responses. As Harari writes, “If a hive faces a new threat or a new opportunity, the bees cannot, for example, guillotine the queen and establish a republic.”

Modern life: the big challenge we face

June 23, 2018

Tom Friedman’s latest book made my head spin. It’s Thank You for Being Late: An Optimist’s Guide to Thriving in the Age of Accelerations. He’s a bigger optimist than me.

The “accelerations” in question concern technology, globalization, and climate change, all transforming the world at breakneck speed. Faster, indeed, than human psychology and culture can keep up with.


What spun my head was Friedman’s rundown of technology’s acceleration. He sees 2007 as an inflection point, with the iPhone and a host of other advances creating a newly powerful platform that he calls not the Cloud but the “Supernova.” For instance there’s Hadoop. Ever heard of it? I hadn’t. It’s an open-source software framework, also emerging around that time, that revolutionized the storage and processing of “Big Data” (as best I understand it), making possible explosions in other technologies. And GitHub, launched soon after, blasting open the ability to create software.*

All this is great — for people able to swim in it. But that’s not everybody. A lot of people are thrown for a loop, disoriented, left behind. Bringing them up to speed is what Friedman says we must do. Otherwise, we’ll need a level of income redistribution that’s politically impossible.

The age-old fear (starting with the Luddites) is “automation” making people obsolete and killing jobs. It’s never happened — yet. Productivity improvements have always made society richer and created more jobs than those lost. But Friedman stresses that the new jobs are of a different sort now. No longer can routine capabilities produce a good income — those capabilities are being roboticized. However, what robots can’t substitute for is human social skills, which are increasingly what jobs require. AI programs can, for example, perform medical diagnoses better than human doctors, so the role of a doctor will become more oriented toward patient relations, where humans will continue to outperform machines.

But schools aren’t teaching that. Our education system is totally mismatched to the needs of the Twenty-first Century. And I can’t see it undergoing the kind of radical overhaul required.

I’ve often written how America’s true inequality is between the better educated and the less educated, which have become two separate cultures. Friedman says a college degree is now an almost indispensable requirement for the prosperous class, but it’s something children of the other class find ever harder to obtain. All the affirmative action to help them barely nibbles at the problem.

On NPR’s This American Life I heard a revealing profile of an apparently bright African-American kid who did make it into a good college, with a scholarship no less. But he had no idea how to navigate in that unfamiliar environment, and got no help there, left to sink or swim on his own. He sank.

Friedman talks up various exciting innovative tools available to such people not born into the privileged class, to close the gap. But to take advantage of them you have to be pretty smart and clued in. I keep thinking about all the people who aren’t, with no idea how they might thrive, or even just get by, in the new world whooshing up around them. I’ve written about them in discussing books like The End of Men and Hillbilly Elegy. It wasn’t just “hillbillies” Vance was talking about there, but a big swath of the U.S. population. A harsh observer might call them losers; throw-away people.

I’m enraged when charter schools are demonized as a threat to public education. That’s a Democrat/liberal counterpart to Republican magical thinking. These liberals who spout about inequality and concern for the disadvantaged are in denial about how the education system is part of the problem. Public schools do fine in leafy white suburbs; schools full of poor and minority kids do not. For those kids, charter school lotteries offer virtually the only hope.

Of course, the problem of people unfitted for modernity isn’t unique to America. There are billions more in other countries. Yet most of us don’t realize how fast an awful lot of those people are actually coming up to speed. But there’s still going to be a hard core who just cannot do it, and no conceivable government initiatives or other innovations will be a magic wand turning them into fairies. Instead it seems we’re headed toward one of those future-dystopia sci-fi films where humanity is riven between two virtually distinct species — the golden ones who live beautiful lives, forever, and the rest who sink into immiseration. I do think most people can be in the former group. And I hope they’ll be generous enough to carry the others at least partway to the Eden.

But what Friedman keeps stressing is the need for culture, especially in politics, to change along with the landscape. He applies what he says is the real lesson of biological evolution: it’s not the strongest that thrive, but the most adaptable. In many ways America does fulfill this criterion. Yet in other ways we’re doing the opposite, especially in the political realm where so much of the problem needs to be addressed. The mentioned need for radical education reform is just one example. Our Constitution worked great for two centuries; now, not so much. Our political life has become sclerotic, frozen. Add to that our inhabiting a post-truth world where facts don’t matter. Can’t really address any problems that way.

Friedman enumerates an 18-point to-do list for American public policy. Mostly no-brainers. But almost none of it looks remotely do-able today. In fact, on a lot of the points — like opening up more to globalized trade — we’re going the wrong way.

He concludes with an extended look at the Minnesota community where he grew up in the ’50s and ’60s. It echoed Robert Putnam’s describing his own childhood community in Our Kids. Both were indeed communities, full of broad-based community spirit. Friedman contrasts the poisonously fractious Middle East where he spent much of his reporting career. He also reported a lot about Washington — and sees U.S. politics increasingly resembling the Middle East with its intractable tribal conflicts.

I’ve seen this change too in my lifetime — remembering when, for all our serious political disagreements, adversaries respected each other and strove to solve problems in a spirit of goodwill. Most politicians (and their supporters) embodied civic-mindedness, sincerity, and a basic honesty. No longer. Especially, sadly, on the Republican side, which for decades I strongly supported. Now it’s dived to the dark side, the road to perdition.

Friedman wrote before the 2016 election — where America turned its back on all he’s saying. Can we repent, and veer toward a better road, before it’s too late?

*Microsoft has just bought GitHub.