Archive for the ‘Science’ Category

Greta Thunberg is wrong

October 1, 2019

Greta Thunberg, the 16-year-old Swedish climate warrior, berates the world (“How dare you?”) for pursuing a “fairy tale” of continued economic growth — putting money ahead of combating global warming. A previous local newspaper commentary hit every phrase of the litany: “species decimation, rainforest destruction . . . ocean acidification . . . fossil-fuel-guzzling, consumer-driven . . . wreaked havoc . . . blind to [the] long-term implication . . . driven by those who would profit . . . our mad, profligate  . . . warmongering . . . plasticization and chemical fertilization . . . failed to heed the wise admonition of our indigenous elders . . . .”

The litany of misanthropes hating their own species and especially their civilization.

Lookit. There’s no free lunch. Call it “raping the planet” if you like, but we could never have risen from the stone age without utilizing as fully as possible the natural resources available. And if you romanticize our pre-modern existence (“harmony with nature” and all), well, you’d probably be dead now, because most earlier people didn’t make thirty. And those short lives were nasty and brutish. There was no ibuprofen.

This grimness pretty much persisted until the Industrial Revolution. Only now, by putting resource utilization in high gear, could ordinary folks begin to live decently. People like that commentator fantasize giving it up. Or, more fantastical, our somehow still living decently without consuming the resources making it possible.

These are often the same voices bemoaning world poverty. Oblivious to how much poverty has actually declined — thanks to all the resource utilization they condemn. And to how their program would deny decent lives to the billion or so still in extreme poverty. Hating the idea of pursuing economic growth may be fine for those living in affluent comfort. Less so for the world’s poorest.

Note, as an example, the mention of “chemical fertilization.” This refers to what’s called the “green revolution” — revolutionizing agriculture to improve yields and combat hunger, especially in poorer nations. It’s been estimated this has saved a couple billion lives. And of course made a big dent in global poverty.

But isn’t “chemical fertilization,” and economic development more generally, bad for the environment? Certainly! Again, no free lunch. In particular, the climate change we’re hastening will, as Thunberg says, likely have awful future impacts. Yet bad as that is, it’s not actually humanity’s biggest challenge. The greater factors affecting human well-being will remain the age-old prosaic problems of poverty, disease, malnutrition, conflict, and ignorance. Economic growth helps us battle all those. We should not cut it back for the sake of climate. In fact, growing economic resources will help us deal with climate change too. It’s when countries are poor that they most abuse the environment; affluence improves environmental stewardship. And it’s poor countries who will suffer most from climate change, and will most need the resources provided by economic growth to cope with it.

Of course we must do everything reasonably possible to minimize resource extraction, environmental impacts, and the industrial carbon emissions that accelerate global warming. But “reasonably possible” means not at the expense of lower global living standards. Bear in mind that worldwide temperatures will continue to rise even if we eliminate carbon emissions entirely (totally unrealistic, of course). Emission reductions can moderate warming only slightly. That tells us to focus less on emissions and more on preparing to adapt to higher temperatures. And more on studying geo-engineering possibilities for removing greenhouse gases from the atmosphere and otherwise re-cooling the planet. Yet most climate warriors actually oppose such efforts, instead obsessing exclusively on carbon reduction, in a misguided jihad against economic growth, as though to punish humanity for “raping the planet.”

Most greens are also dead set against nuclear power, imagining that renewables like solar and wind energy can fulfill all our needs. Talk about fairy tales. Modern nuclear power plants are very safe and emit no greenhouse gases. We cannot hope to bend down the curve of emissions without greatly expanded use of nuclear power. Radioactive waste is an issue. But do you think handling that presents a bigger challenge than to replace the bulk of existing power generation with renewables?

I don’t believe we’re a race of planet rapists. Our resource utilization and economic development has improved quality of life — the only thing that can ultimately matter. The great thing about our species, enabling us to be so spectacularly successful, is our ability to adapt and cope with what nature throws at us. Climate change and environmental degradation are huge challenges. But we can surmount them. Without self-flagellation.


Thinking like a caveman

September 18, 2019


What is it like to be a bat? That famous essay by philosopher Thomas Nagel keeps nagging at us. What is it like to be me? Of this I should have some idea. But why is being me like that? — how does it work? — are questions that really bug me.

Science knows a lot about how our neurons work. Those doings of billions of neurons, each with very limited, specific, understandable functions, join to create one’s personhood. A leap we’re only beginning to understand.

Steven Mithen’s book, The Prehistory of the Mind, takes the problem back a step, asking how our minds came to exist in the first place. It’s a highly interesting inquiry.

Of course the simple answer is evolution. Life forms have natural variability, and variations that prove more successful in adapting to changing environments proliferate. This builds over eons. Our minds were a very successful adaptation.

But they could not have sprung up all at once. Doesn’t work that way. So by what steps did they evolve? The question is problematical given our difficulty in reverse-engineering the end product. But Mithen’s analysis actually helps toward such understanding.

He uses two metaphors to describe what our more primitive, precursor minds were like. One is a Swiss Army knife. It’s a tool that’s really a tool kit. Leaving aside for the moment the elusive concept of “mind,” all living things have the equivalent of Swiss Army knives to guide their behavior in various separate domains. A cat, for example, has a program in its brain for jumping up to a ledge; another for catching a mouse; and so forth. The key point is that each is a separate tool, used separately; two or more can’t be combined.

Which brings in Mithen’s other metaphor for the early human mind: a cathedral. Within it, there are various chapels, each containing one of the Swiss Army knife tools, each one a brain program for dealing with a specific type of challenge. The main ones Mithen identifies are a grasp of basic physics in connection with tool-making and the like; a feel for the natural world; one for social interaction; and language arts, related thereto.

This recalls Howard Gardner’s concept of multiple intelligences. Departing from the idea that “intelligence” is a single capability that people have more or less of, Gardner posited numerous diverse particularized capabilities, such as interpersonal skill, musical ability, spatial-visual reasoning, etc. A person can be strong in one and weak in another.

Mithen agrees, yet nevertheless also hypothesizes what he calls “general intelligence.” By this he means “a suite of general-purpose learning rules, such as those for learning associations between events.” Here’s where his metaphors bite. The Swiss Army knife doesn’t have a general intelligence tool. That’s why a cat is extremely good at mousing but lacks a comprehensive viewpoint on its situation.

In Mithen’s cathedral, however, there is general intelligence, situated right in the central nave. However, the chapels, each containing their specific tools, are closed off from it and from each other. The toolmaking program doesn’t communicate with the social interaction program; none of them communicates with the general intelligence.

Does this seem weird? Not at all. Mithen invokes an analogy to driving while conversing with a passenger. Two wholly separate competences are operating, but sealed off from each other, neither impinging on the other.

This, Mithen posits, was indeed totally the situation of early humans (like Neanderthals). Our own species arose something like 100,000 years ago, but for around half that time, it seems, we too had minds like Neanderthals, like Mithen’s compartmentalized cathedral, lacking pathways for the various competences to talk to each other. He describes a “rolling” sort of consciousness that could go from one sphere to another, but was in something of a blur about seeing any kind of big picture.

Now, if you were intelligently building this cathedral, you wouldn’t do it this way. But evolution is not “intelligent design.” It has to work with what developed previously. And what it started with was much like the Swiss Army knife, with a bunch of wholly separate competences that each evolved independently.

That’s good enough for most living things, able to survive and reproduce without a “general intelligence.” Evolving the latter was something of a fluke for humans. (A few other creatures may have something like it.)

The next step was to integrate the whole tool kit; to open the doors of all the chapels leading into the central nave. The difference: a Neanderthal could be extremely skilled at making a stone tool, but while doing it he really couldn’t ponder the task in the context of his whole life. We can. Mithen calls this “cognitive fluidity.”

The way I like to put it, the essence of our consciousness is that we don’t just have thoughts, we can think about our thoughts. That’s the integration Mithen talks about — a whole added layer of cognition. And it’s that layering, that thinking about our thinking, that gives us a sense of self, more powerfully than any other creature.

I’ve previously written too of how the mind makes sense of incoming information by creating representations. Like pictures in the mind, often using metaphors. And here too there’s layering; we make representations of representations; representations of ourselves perceiving those representations. That indeed is how we do perceive — and think about what we perceive. And we make representations of concepts and beliefs.

All this evolved because it was adaptive — enabling its possessors to better surmount the challenges of their environment. But this cognitive fluidity, Mithen says, is also at the heart of art, religion, science — all of human culture.

Once we achieved this capability, it blew the doors off the cathedral, and it was off to the races.

“Science for Heretics” — A nihilistic view of science

August 10, 2019

Physicist Barrie Condon has written Science for Heretics: Why so much of science is wrong. Basically arguing that science cannot really understand the world, and maybe shouldn’t even try. The book baffles me.

It’s full of sloppy mistakes (many misspelled names). It’s addressed to laypeople and does not read like a serious science book. Some seems downright crackpot. Yet, for all that, the author shows remarkably deep knowledge, understanding, and even insight into the scientific concepts addressed, often explaining them quite lucidly in plain English. Some of his critiques of science are well worth absorbing. And, rather than the subtitle’s “science is wrong,” the book is really more a tour through all the questions it hasn’t yet totally answered.

A good example is the brain. We actually know a lot about its workings. Yet how they result in consciousness is a much harder problem.

Condon’s first chapter is “Numbers Shmumbers,” about the importance of mathematics in science. His premise is that math is divorced from reality and thereby leads science into black holes of absurdity, like . . . well, black holes.* He starts with 1+1=? — whose real world answer, he says, is never 2! Because that answer assumes each “1” is identical to the other, while in reality no two things are ever truly identical. For Condon, this blows up mathematics and all the science incorporating it.

But identicality is a red herring. It’s perfectly valid to say I have two books, even if they’re very different, because “books” is a category. One book plus one book equals two books.

Similarly, Condon says that in the real world no triangle’s angles equal 180 degrees because you can never make perfectly straight lines. Nor can any lines be truly parallel. And he has fun mocking the concepts of zero and infinity.

However, these are all concepts. That you can’t actually draw a perfect triangle doesn’t void the concept. This raises the age-old question (which Condon nibbles at) of whether mathematics is something “out there” as part of the fabric of reality, or just something we cooked up in our minds. My answer: we couldn’t very well have invented a mathematics with 179 degree triangles. The 180 degrees (on flat surfaces!) is an aspect of reality — which we’ve discovered.
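The parenthetical “on flat surfaces” is worth unpacking. The 180-degree sum is a theorem of flat (Euclidean) geometry; on a curved surface it genuinely fails. On a sphere of radius $R$, Girard’s theorem gives

```latex
A + B + C = \pi + \frac{\mathrm{Area}}{R^2}
```

So a triangle with one vertex at the north pole and two on the equator, a quarter of the way around from each other, has three right angles, summing to 270 degrees. The 180-degree figure isn’t an arbitrary convention; it’s a discovered fact about flat space.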

A key theme of the book is that reality is complex and messy, so the neat predictions of scientific theory often fail. A simplified high school picture may indeed be too simple or even wrong (like visualizing an atom resembling the solar system). But this doesn’t negate our efforts to understand reality, or the value of what we do understand.

Modern scientific concepts do, as Condon argues, often seem to violate common sense. Black holes for example. But the evidence of their reality mounts. Common sense sees a table as a solid object, but we know from science that it’s actually almost entirely empty space. In fact, however deeply we peer into the atomic and even sub-atomic realms, we never get to anything solid.

Condon talks about chaos theory, and how it messes with making accurate predictions about the behavior of any system. Weather is a prime example. Because the influencing factors are so complex that a tiny change in starting conditions can mean a big difference down the line. Fair enough. But then — exemplifying what’s wrong with this book — he says of chaos theory, “[t]his new, more humble awareness marked a huge retreat by science. It clearly signaled its inherent limitations.” Not so! Chaos theory was not a “retreat” but an advance, carrying to a new and deeper level our understanding of reality. (I’ve written about chaos theory and its implications, very relevantly to Condon’s book.)
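That sensitivity to starting conditions can be seen in a few lines of code, using the logistic map, a standard toy model of chaos (my own illustrative sketch; the numbers are not from Condon’s book). Two starting points differing by one part in a billion soon yield completely different trajectories:

```python
# Logistic map x -> r * x * (1 - x); at r = 4.0 the dynamics are chaotic.
def trajectory(x, r=4.0, steps=60):
    """Return the sequence of values produced by iterating the map."""
    xs = [x]
    for _ in range(steps):
        x = r * x * (1 - x)
        xs.append(x)
    return xs

a = trajectory(0.3)
b = trajectory(0.3 + 1e-9)  # one part in a billion difference

# The largest separation between the two runs grows to order 1,
# despite the minuscule initial difference.
max_gap = max(abs(p - q) for p, q in zip(a, b))
```

The initial gap roughly doubles each step, so even a perfect model with imperceptibly imperfect starting data loses predictive power. That is a fact we learned about reality, not a retreat from understanding it.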

After reading partway, I was asking myself, what’s Condon really getting at? He’s a very knowledgeable scientist. But if science is as futile as he seems to argue — then what? I suspected Condon might have gone religious, so I flipped to the last chapter, expecting to find a deity or some other sort of mysticism. But no. Condon has no truck with such stuff either.

He does conclude by saying “we need to profoundly re-assess how we look at the universe,” and “who knows what profound insights may be revealed when we remove [science’s] blinkers.” But Condon himself offers no such insights. Instead (on page 55) he says simply that “we are incapable of comprehending the universe” and “there are no fundamental laws underlying the universe to begin with. The universe just is the way it is.” (My emphasis)

No laws? Newton’s inverse square law of gravitation is a pretty good descriptor of how celestial bodies actually behave. A Condon might say it doesn’t exactly explain the orbit of Mercury, which shows how simple laws can fail to model complex reality. But Einstein’s theory was a refinement to Newton’s — and it did explain Mercury’s orbit.

So do we now know everything about gravitation? Condon makes much of how galaxies don’t obey our current understanding, if you only count visible matter; so science postulates invisible “dark matter” to fix this. Which Condon derides as a huge fudge factor. And I’m actually a heretic myself on this, having written about an alternate theory that would slightly tweak the laws of gravitation, making “dark matter” unnecessary. But here is the real point. We may not yet have gravitation all figured out. But that doesn’t mean the universe is lawless.

Meantime, you might wonder how, if our scientific understandings were not pretty darn good, computers could work and planes could fly. Condon responds by saying that actually, “our technology rarely depend[s] on scientific theory.” Rather, it’s just engineering. “Engineers have learnt from observation and experience,” and “[u]nburdened by theory they were . . . simply observing regularities in the behavior of the universe.”**

And how, pray tell, do “regularities in the behavior of the universe” differ from laws? In fact, a confusion runs through the book between science qua “theory” (Condon’s bête noire) and science qua experimentation revealing how nature behaves. And what does it mean to say, “the universe just is the way it is?” That explains nothing.

But it can be the very first step in a rational process of understanding it. Recognizing that it is a certain way, rather than some other way (or lawless). That there must be reasons for its being the way it is. Reasons we can figure out. Those reasons are fundamental laws. That’s science.

And, contrary to the thrust of Condon’s book, we have gained a tremendous amount of understanding. The very fact that he could write it — after all, chock full of science — and pose all the kinds of questions he does — testifies to that understanding. Quantum mechanics, for example, which Condon has a field day poking fun at, does pose huge puzzles, and some of our theories may indeed need refinement. Yet quantum mechanics has opened for us a window into reality, at a very deep level, that Aristotle or Eratosthenes could not even have imagined.

Condon strangely never mentions Thomas Kuhn, whose seminal The Structure of Scientific Revolutions characterized scientific theories as paradigms, a new one competing against an old one, with no scientific way to choose until one prevails. You might thus see no reason to believe anything science says, because it can change. But modern science doesn’t typically lurch from one theory to a radically opposing one. Kuhn’s work was triggered by his realization that Aristotle’s physics was not a step toward modern theories but totally wrong. However, Aristotle wasn’t a scientist at all, and did no experimentation; he was an armchair thinker. Science is in fact a process of homing in ever closer to the truth through interrogating reality.

Nor does Condon discuss Karl Popper’s idea of science progressing by “falsification.” Certitude about truth may be elusive, but we can discover what’s not true. A thousand white swans don’t prove all swans are white, but one black swan disproves it.

And as science thusly progresses, it doesn’t mean we’ve been fools or deluded before. Newton said that if he saw farther, it’s because he stood on the shoulders of giants. And what Newton revealed about motion and gravity was not overturned by Einstein but instead refined. Newton wasn’t wrong. And those who imagine Darwinian evolution is “just a theory” that future science may discard will wait in vain.

Unfortunately, such people will leap upon Condon’s book as confirmation for their seeing science (but not the Bible) as fallible.*** Thinking that because science doesn’t know everything, they’re free to disregard it altogether, substituting nonsense nobody could ever possibly know.

Mark Twain defined faith as believing what you know ain’t so. Science is not a “faith.” Nor even a matter of “belief.” It’s the means for knowing.

*But later he spends several pages on the supposed danger of the Large Hadron Collider creating black holes (that Condon doesn’t believe in) and destroying the world. Which obviously didn’t happen.

**But Condon says (misplaced) reliance on theory is increasingly superseding engineering know-how, with bad results, citing disasters like the Challenger with its O-rings. Condon’s premise strikes me as nonsense; and out of literally zillions of undertakings, zero disasters would be miraculous.

***While Condon rejects “intelligent design,” he speculates that Darwinian natural selection isn’t the whole story — without having any idea what the rest might be.

Fantasyland: How America Went Haywire

July 3, 2019

(A condensed version of my June 18 book review talk)

In this 2017 book Kurt Andersen is very retro; believes in truth, reason, science, and facts. But he sees today’s Americans losing their grip on those. Andersen traces things back to the Protestant Reformation, preaching that each person decides what to believe.

Religious zealotry has repeatedly afflicted America. But in the early Twentieth Century that, Andersen says, seemed to be fizzling out. Christian fundamentalism was seen as something of a joke, culminating with the 1925 Scopes “monkey” trial. But evangelicals have made a roaring comeback. In fact, American Christians today are more likely than ever to be fundamentalist, and fundamentalism has become more extreme. Fewer Christians now accept evolution, and more insist on biblical literalism.

Other fantasy beliefs have also proliferated. Why? Andersen discusses several factors.

First he casts religion itself as a gateway drug. Such a suspension of critical faculties warps one’s entire relationship with reality. So it’s no coincidence that the strongly religious are often the same people who indulge in a host of other magical beliefs. The correlation is not perfect. Some religious Americans have sensible views about evolution, climate change, even Trump — and some atheists are wacky about vaccination and GM foods. Nevertheless, there’s a basic synergy between religious and other delusions.

Andersen doesn’t really address tribalism, the us-against-them mentality. Partisan beliefs are shaped by one’s chosen team. Climate change denial didn’t become prevalent on the right until Al Gore made climate a left-wing cause. Some on the left imagine Venezuela’s Maduro regime gets a bum rap.

Andersen meantime also says popular culture blurs the line between reality and fantasy, with pervasive entertainment habituating us to a suspension of disbelief. I actually think this point is somewhat overdone. People understand the concept of fiction. The problem is with the concept of reality.

Then there’s conspiracy thinking. Rob Brotherton’s book Suspicious Minds: Why We Believe Conspiracy Theories says we’re innately primed for them, because in our evolution, pattern recognition was a key survival skill. That means connecting dots. We tend to do that, even if the connections aren’t real.

Another big factor, Andersen thinks, was the “anything goes” 1960s counterculture, partly a revolt against the confines of rationality. Then there’s post-modernist relativism, considering truth itself an invalid concept. Some even insist that hewing to verifiable facts, the laws of physics, biological science, and rationality in general, is for chumps. Is in fact an impoverished way of thinking, keeping us from seeing some sort of deeper truth. As if these crackpots are the ones who see it.

Then along came the internet. “Before,” writes Andersen, “cockamamie ideas and outright falsehoods could not spread nearly as fast or widely, so it was much easier for reason and reasonableness to prevail.” Now people slurp up wacky stuff from websites, talk radio, and Facebook’s so-called “News Feed” — really a garbage feed.

Andersen considers “New Age” spirituality a new form of American religion. He calls Oprah its Pope, spreading the screwball messages of a parade of hucksters, like Eckhart Tolle, and the “alternative medicine” promoter Doctor Oz. Among these so-called therapies are homeopathy, acupuncture, aromatherapy, reiki, etc. Read Wikipedia’s scathing article about such dangerous foolishness. But many other mainstream gatekeepers have capitulated. News media report anti-scientific nonsense with a tone of neutrality if not acceptance. Even the U.S. government now has an agency promoting what’s euphemized as “Complementary and Integrative Health”; in other words, quackery.

Guns are a particular focus of fantasy belief. Like the “good guy with a gun.” Who’s actually less a threat to the bad guy than to himself, the police, and innocent bystanders. Guns kept to protect people’s families mostly wind up shooting family members. Then there’s the fantasy of guns to resist government tyranny. As if they’d defeat the U.S. military.

Of course Andersen addresses UFO belief. A surprising number of Americans report being abducted by aliens, taken up into a spaceship to undergo a proctology exam. Considering that the nearest star is literally 24 trillion miles away, would aliens travel that far just to study human assholes?

A particularly disturbing chapter concerns the 1980s Satanic panic. It began with so-called “recovered memory syndrome.” Therapists pushing patients to dredge up supposedly repressed memories of childhood sexual abuse. (Should have been called false memory syndrome.) Meantime child abductions became a vastly overblown fear. Then it all got linked to Satanic cults, with children allegedly subjected to bizarre and gruesome sexual rituals. This new witch hunt culminated with the McMartin Preschool trial. Before the madness passed, scores of innocent people got long prison terms.

A book by Tom Nichols, The Death of Expertise, showed how increasing formal education doesn’t actually translate into more knowledge (let alone wisdom or critical thinking). Education often leads people to overrate their knowledge, freeing them to reject conventional understandings, like evolution and medical science. Thus the anti-vaccine insanity.

Another book, Susan Jacoby’s The Age of American Unreason, focuses on our culture’s anti-intellectual strain. Too much education, some people think, makes you an egghead. And undermines religious faith. Yet Jacoby also notes how 19th Century Americans would travel long distances to hear lecturers like Robert Ingersoll, the great atheist, and Huxley the evolutionist. Jacoby also vaunts 20th century “Middlebrow” American culture, with “an affinity for books; the desire to understand science; a strong dose of rationalism; above all, a regard for facts.”

Today in contrast there’s an epidemic of confirmation bias: people embracing stuff that supports pre-existing beliefs, and shutting out contrary information. Smarter folks are actually better at confabulating rationalizations for that. And how does one make sense of the world and of new information? Ideally by integrating it with, and testing it against, your body of prior knowledge and understanding. But many Americans fall short there — blank slates upon which rubbish sticks as readily as truth.

I also think reality used to be more harsh and unforgiving. To get through life you needed a firm grip on reality. That has loosened. The secure, cushy lives given us by modernity — by, indeed, the deployment of supreme rationality in the age of science — free people to turn their backs on that sort of rationality and indulge in fantasy.

Andersen’s subtitle is How America Went Haywire. As if that applies to America as a whole. But we are an increasingly divided nation. Riven between those whose faith has become more extreme and those moving in the opposite direction; which also drives political polarization. So it’s not all Americans we’re talking about.

Still, the haywire folks are big shapers of our culture. And there are real costs. Anti-vaccine hysteria undermines public health. The 1980s child threat panic ruined lives. Gun madness kills many thousands. And of course they’ve given us a haywire president.

Yet is it the end of the world? Most Americans go about their daily lives, do their jobs, in a largely rational pragmatic way (utilizing all the technology the Enlightenment has given). Obeying laws, being good neighbors, good members of society. Kind, generous, sincere, ethical people. America is still, in the grand sweep of human history, an oasis of order and reasonableness.

Meantime religious faith is collapsing throughout the advanced world, and even in America religion, for all its seeming ascendancy, is becoming more hysterical because it is losing. The younger you are, the less religious you are likely to be. And there are signs that evangelical Christianity is being hurt by its politicization, especially its support for a major moral monster.

I continue to believe in human progress. That people are capable of rationality, that in the big picture rationality has been advancing, and it must ultimately prevail. That finally we will, in the words of the Bible itself, put childish things away.

Why does evolution produce such diversity?

June 26, 2019

A science writer friend pointed me to a recent “Edge” essay by Freeman Dyson. Dyson, 95, is a truly great mind, which I am not. Nor an evolutionary biologist. Nevertheless —

Dyson begins with the question: why has evolution produced such a vast diversity of species? If “survival of the fittest” natural selection is the mechanism, shouldn’t we expect each ecological niche to wind up occupied by the one species most perfectly adapted? With others losing out in the competition and disappearing. Thus, in the Amazon rain forest, for example, just one variety of insect rather than thousands; and worldwide, maybe only a few hundred species altogether, rather than the millions actually existing (many with only slight differences). Also, we might expect species slimmed down to efficient essentials, not ongepotchket ones (a Yiddish word for “excessively and unaesthetically decorated.”) These things puzzled Darwin himself.

Darwin worked before we knew anything of genes, Dyson points out. He discusses the contributions of several later people. First is Motoo Kimura with the concept of “genetic drift,” an evolutionary mechanism separate from natural selection. It’s the randomness inherent in gene transmission through sexual reproduction. A given gene’s frequency in a large population will vary less than in a small one, where such random fluctuations will loom larger. Like if you make 1000 coin tosses you’ll always get very close to 500 heads, whereas with only ten tosses you might well get seven heads, a big deviation. So in small populations such genetic drift can drive evolutionary change faster than in a large population where genetic drift is negligible and slower natural selection is the dominant factor. Thus it’s small populations (often ones that get isolated from the larger mass) that most tend to spin off new species.
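The coin-toss point can be made concrete with a minimal simulation of pure genetic drift (my own illustrative sketch, not from Dyson’s essay; it models one generation as random sampling of gene copies, the standard Wright-Fisher picture). In a population of ten gene copies, an allele at 50% frequency bounces around far more from one generation to the next than in a population of a thousand:

```python
import random

def next_freq(p, n):
    # One generation of pure drift: the next generation's n gene
    # copies are each drawn at random from a pool where the allele
    # currently has frequency p. Returns the new frequency.
    return sum(random.random() < p for _ in range(n)) / n

random.seed(0)
trials = 500

# Average one-generation deviation from the starting frequency 0.5:
small = sum(abs(next_freq(0.5, 10) - 0.5) for _ in range(trials)) / trials
large = sum(abs(next_freq(0.5, 1000) - 0.5) for _ in range(trials)) / trials
# The small population's frequency wanders roughly ten times as far
# per generation as the large population's.
```

The spread scales as the square root of 1/n, which is why drift can outpace selection in small, isolated populations and push them toward speciation.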

Dyson combines this idea with cultural evolution which, for humans in particular, is a much bigger factor than biological evolution. Dyson sees genetic drift involved with big local effects, such as the flourishing of ancient Athens or Renaissance Florence.

Then there’s Ursula Goodenough’s idea that mating paradigms, in particular, seem to change faster than other species characteristics. This too makes for rapid evolutionary jumps in genetically isolated populations. Dyson comments: “Nature loves to gamble. Nature thrives by taking risks. She scrambles mating system genes so as to increase the risk that individual parents will fail to find mates. [This] is part of Nature’s plan.” Because it raises the likelihood that parents who do succeed will birth new species.

And then there’s Richard Dawkins and The Selfish Gene. I keep coming back to that book because this — when fully understood — is a very powerful idea indeed.

It tells us that evolution is all about gene replication and nothing else. Thus I take some issue with Dyson’s language anthropomorphizing “Nature” as gambling. He writes as though Nature wants evolution to occur. But it doesn’t have aims. Nor does a gene “want” to make the most copies of itself; it’s simply that one doing so will be more prevalent in a population. That’s what evolution is.

So taking again Goodenough’s point, supposing any given characteristic (here, a mating paradigm) does result in some copies of the relevant gene failing to replicate, if nevertheless in the long run the characteristic means other copies of the same gene will replicate more, then that gene becomes more prevalent. There’s no “gambling” taking place, and no extra points earned if a new species happens to be created. It’s simply the math of the outcome — more copies of the gene.

I also take issue with Dyson’s associating local cultural flourishing with genetic drift. Whatever happened in Fifth Century BC Athens was a purely cultural phenomenon that had nothing to do with changes in Athenians’ genes. While the local gene pool would have differed a (tiny) bit from other human ones, there’s no basis to imagine there was natural selection favoring genes conducive to artistic flourishing, and in any case there would have been insufficient time for such natural selection to play out.

So — returning to the starting question — why all the diversity? While Dyson does point to some mechanistic aspects of evolution militating in that direction, I think there’s a larger and simpler answer. The problem lies in a syllable. “Survival of the fittest” is not quite right; it’s really “survival of the fit.” There’s a big difference. It’s not only the fittest that survive; you don’t have to be the fittest; you just have to be fit. It’s not a winner-take-all competition.

This comports with Dawkins’s selfish gene insight. The genes that continue to exist in an environment are those that have been able to replicate. That doesn’t require being the best at replicating. The best, it is true, will be represented with the most copies, but there will also exist copies of those that are merely okay at replicating; even ones that are lousy, as long as they can replicate at all. The most successful don’t kill off the less successful. Only those totally failing to adapt to their environment die out.

That’s why there are a zillion different varieties of insects in the Amazon rain forest.

But Dyson’s larger point is that for humans, again, cultural evolution outstrips the biological, and this is certainly true. As Dyson notes, language is a huge factor (unique to humans) driving cultural evolution. And while biological evolution does tend toward ever greater diversification, human cultural evolution is actually pushing us in the opposite direction. The degree of human diversity is being collapsed by our cultural evolution — not only our biological diversity, in “races” whose separateness increasingly breaks down, but also cultural diversity, with ancient barriers that separated human groups into combative enclaves breaking down too, so that it is more and more appropriate to speak of a universal humanity.

Humans becoming gods — or chips in a cosmic computer?

May 23, 2019

Yuval Noah Harari is a thinker of Big Ideas, with a capital B and a capital I. An Israeli historian, he wrote Sapiens: a Brief History of Humankind, about how we got where we are. Where we’re going is addressed in the sequel, Homo Deus: A Brief History of Tomorrow.

The title implies man becoming God. But there’s a catch.

Harari sees us having experienced, in the last few centuries, a humanist revolution. With the ideas of the Enlightenment triumphant — science trumping superstition, and the liberal values of the Declaration of Independence — freedom in both the political and economic spheres — trumping autocracy and feudalism. As the word “humanist” implies, these values exalt the human, the individual human, as the ultimate source of meaning. We find meaning not in some deity or cosmic plan but in ourselves and our efforts to make our lives better. We do that through deploying our will, using our rationality to make choices and decisions — both in politics, through democratic voting, and in economics, through consumer choice.

But Harari plays the skunk at this picnic he’s described. The whole thing, he posits, rests upon the assumption that we do make choices and decisions. But what if we actually don’t? This is the age-old argument about free will. Harari recognizes its long antecedents, but asserts that the question has really, finally, been settled by science, something he discusses at length. The more science probes into our mental processes, the more it finds there’s no “there” there. That is, the idea that inside you there’s a master controller, a captain at the helm, is a metaphor with no actual reality. We don’t “make” decisions and choices. It’s more like they happen to us.

As Schopenhauer said (Harari strangely fails to quote him), “a man can do what he wants, but cannot will what he wants.”

And if we humans are not, in any genuine sense, making choices and decisions through a conscious thinking process — but rather are actuated by deterministic factors we can neither see nor control — in politics, economics, and even in how we live our lives — what does that mean for the humanist construct of valorizing those choices above all else?

There’s a second stink-bomb Harari throws into the humanist picnic. He says humanism valued the individual human because he or she was, in a very tangible way, valuable. Indeed, indispensable. Everything important in society rested on human participation. The economy required people engaged in production. Human agents were required to disseminate the information requisite for progress to occur and spread. A society even needed individual humans to constitute the armies they found so needful.

But what if all that ceases being true? Economic production is increasingly achieved through robots and artificial intelligences. They are also taking care of information dissemination. Even human soldiers are becoming obsolete (as, in time, will the need for them). Thus Harari sees humans becoming useless irrelevancies.

Or at least most of us. Here’s another stink-bomb. Liberal humanist Enlightenment values also rested fundamentally on the idea of human equality. Not literal equality, of course, in the sense of everyone being the same, or even having the same conditions of life. Rather it was equality in the ineffable sense of value and dignity. Spiritual equality, if you will.

And indeed, the Enlightenment/humanist revolution did go a long way toward that ideal, as a philosophical concept that was increasingly powerful, but also as a practical reality. Despite very real wealth inequality, there has (especially in the advanced nations) actually been a great narrowing of the gap between the rich and the rest in terms of quality of life. Earlier times were in contrast generally characterized by a tiny elite living poshly while the great mass of peasants were immured in squalor.

Harari thinks we’re headed back to that, when most people become useless. We may continue to feed them, but the gap between them and the very few superior beings will become a chasm. I’ve previously written about prospects for virtual immortality, which will probably not be available to the mass underclass.

What will that do to the putative ideal of human equality?

Having rejected the notion of human beings as autonomous choice-makers, Harari doesn’t seem to think we do possess any genuine ultimate value along the lines that humanism posits. Instead, we are just biological algorithms. To what purpose?

Evolutionary biology (as made clear in Richard Dawkins’s The Selfish Gene) tells us that, at least as far as Nature is concerned, life’s only purpose is the replication of genes. But that’s a tricky concept. It isn’t a purpose in any conscious, intentional sense, of course. Rather, it’s simply a consequence of the brute mathematical fact that if a gene (a set of molecules) is better at replicating than some other gene, the former will proliferate more, and the world will be filled with its progeny. No “meaning” to be seen there.

But Harari takes it one step further back. The whole thing is just a system for processing information (or “data”). As I understand it, that’s his take on what “selfish gene” biology really imports. And he applies the same concept to human societies. The most successful are the ones that are best at information processing. Democracy beats tyranny because democracy is better at information processing. Ditto for free market capitalism versus other economic models. At least till now; Harari thinks these things may well cease being true in the future.

This leads him to postulate what the religion of the future will be: “Dataism.” He sees signs of it emerging already. This religion would recognize that the ultimate cosmic value is not some imagined deity’s imagined agenda, but information processing. Which Harari thinks has the virtue of being true.

So the role of human beings would be to serve that ultimate cosmic value. Chips in the great computer that is existence. Hallelujah! But wait — artificial systems will do that far better than we can. Where will that leave us?

Here’s what I think.

Enlightenment humanist values have had a tremendous positive effect on the human condition. But Harari writes as though this triumph is complete. Maybe so on New York’s Upper East Side, but in the wider world, not so much. Far from being ready to progress from Harari’s Phase II to Phase III (embracing Dataism), much of humanity is still trying to get from Phase I to Phase II. The Enlightenment does not reign everywhere. Anti-scientific, religious, and superstitious beliefs remain powerful. Democracy is under assault in many places, and responsible citizenship is crumbling. Look at the creeps elected in Italy (and America).

Maybe this is indeed a reaction to what Harari is talking about: humans becoming less valuable, and feeling it, striking out in elections like Italy’s and America’s and the Brexit vote, while autocrats and demagogues like Erdogan and Trump exploit such insecurities. In this respect Harari’s book complements Tom Friedman’s (which I’ve reviewed), arguing that the world is now changing faster than people, institutions, and cultures can keep up with and adapt to.

Free will I’ve discussed before too. I fully acknowledge the neuroscience saying the “captain at the helm” self is an illusion, and Schopenhauer was right that our desires are beyond our control. But our actions aren’t. As legal scholar Jeffrey Rosen has observed, we may not have free will, exactly, but we do have free won’t. The capability to countermand impulses and control our behavior. Thus, while the behavior of lighting up is, for a smoker, determinism par excellence, smokers can and do quit.

You might reply that quitting too is driven by deterministic factors, but I think this denies the reality of human life. The truth is that our thought and behavior are far too complex to be reduced to simplistic Skinnerian determinism.

The limits of a deterministic view are spotlighted by an example Harari himself cites: the two Koreas. Their deterministic antecedents were extremely similar, yet today the two societies could not be more different. Accidents of history — perhaps a sort of butterfly effect — made all the difference. Such effects also come into play when one looks at an individual human from the standpoint of determinism.

Harari’s arguments about humans losing value, and that anyway we’re nothing but souped-up information processors, I will take together. Both ideas overlook that the only thing in the cosmos that can matter and have meaning is the feelings of beings capable of feeling. (I keep coming back to that because it’s really so central.) The true essence of humanist philosophy is that individual people matter not because of what we produce but because of what we are: beings capable of feeling. Nothing else matters, or can matter.

The idea of existence as some vast computer-like data processor may be a useful metaphor for understanding its clockwork. But it’s so abstract a concept I’m not really sure. And in any case it isn’t really relevant to human life as it’s actually lived. We most certainly do not live it as akin to chips in a cosmic computer. Instead we live it through feelings experienced individually which, whatever one can say about how the brain works, are very real when felt. Once again, nothing can matter except insofar as it affects such feelings.

I cannot conceive of a future wherein that ceases being true.

My pro basketball experience

March 31, 2019

This pic of me at the game didn’t come out so good

Last Sunday we went to Boston for a Celtics game. I’m no sports fan. In fact, the last pro sports event I attended was a Dodgers baseball game. When they were still in Brooklyn (and Ike was president).

But my wife is a basketball aficionado, and we’ve been hosting a gal from Somaliland who plays it in high school. So I went with them.


I really enjoyed the fan-cam and people’s reactions seeing themselves on the jumbotron. Most didn’t immediately realize they were having their fifteen nanoseconds of fame. A few never did, eyes glued to their phones. Most did exuberant dancing and arm-waving. One woman grabbed her husband’s head and kissed him on the lips. But I thought the most romantic one was the gal holding up a sign saying, “Marcus Smart will you marry me?” — until (silly me) I learned Smart is a Celtics player, not (presumably) her inamorata.

The game itself was less entertaining. Very much the same thing repeated over and over. Speaking of repetition, the jumbotron kept showing the word “DEFENSE” in giant block letters crashing down and crushing a bunch of what appeared to be pick-up sticks. And the crowd would duly pick up the chant, “DEFENSE! DEFENSE!” I waited, in vain, for a little offense; especially as the Celtics’ defense was being crushed by the San Antonio Spurs.


They lost 486 to 9. Or something like that.


I am no basketball expert. Yet I could have advised one thing to improve their score: doing free throws underhand (“granny style”) rather than overhand. Studies have in fact been done, and it’s proven that the former gives a higher success rate. Yet players universally ignore this. Why? They think it looks girly, not macho. So Vince Lombardi was actually wrong — winning isn’t the only thing.


Anyhow, some fans were deflated by the Celtics’ drubbing. Some even left early, in disgust, or perhaps to avoid the traffic crush. But most seemed to have a good time nevertheless. Even sports nuts ultimately understand that these games are Not Really Truly Important. They’re harmless. At least we no longer gather in stadiums to watch combatants literally kill each other. And at least these Celtics fans wore green hats, not red ones, and their chants weren’t hateful.

And I achieved my own personal goal for the evening: home and snug in bed by 1:30 AM.

The truth about vaccines, autism, measles, and other illnesses

February 26, 2019

The left derides the right for science denialism, on evolution and climate change. But many on the left have their own science blind spots, on GM foods and vaccination.

The anti-vax movement is based on junk science. The fraudulent study that started the whole controversy, by Andrew Wakefield, supposedly linking vaccines and autism, has been totally debunked. The true causes of autism remain debatable, but in the wake of Wakefield there have been numerous (genuine) scientific studies, and now at least one thing can be ruled out with certainty: vaccination.

“But my kid became autistic right after vaccination” — we hear this a lot. Post hoc ergo propter hoc (after this, therefore because of this) is a logical fallacy. One thing may follow another with no causal link. Kids are typically scheduled for vaccinations at right around the same age that autism first shows up. It’s just coincidence.

Anti-vaxers throw up a flurry of other allegations of harm, and keep insisting science hasn’t answered them. Not so. All such claims have been conclusively refuted. True, it’s possible to have a bad reaction to any injection, but with vaccination such cases are so extremely rare that all the fearmongering is totally disproportionate. The fundamental safety of vaccines is proven beyond any rational doubt.

I heard it reported that parents objecting to vaccination actually tend to be smarter than average. Proving you can be too smart for your own good. Tom Nichols’s book The Death of Expertise shows education often leads people to overrate their own knowledge, making them confident enough to simply reject conventional medical science. They make the mistake of deferring instead to a movement that’s rooted in a mindset of hostility toward elites and experts of all stripes, and receptiveness to conspiracy theories, ready to believe big pharma, the medical establishment, and of course the government, all promote vaccination for evil purposes. People go online and find all this nonsense, and it fits with their pre-existing mindset, so they become impervious to the facts.

Still, we’re told this is a free country and people should be allowed to make these decisions for themselves and their own children. Such pleas resonate with my libertarian instincts; I don’t like government telling us what to do. But the vaccination issue isn’t so simple. Children are unable to choose for themselves. While parents are free to raise kids as they see fit, we don’t allow child abuse. And the law steps in, rightly, when Christian Scientists for example want to deny their kids needed medical treatment.

The same principle should apply to vaccination. Indeed, more so — because parental decisions here don’t just affect their own kids. When a high enough share of a population is vaccinated, a disease is blocked from propagating, so even the unvaccinated are safe. It’s called “herd immunity.” But with enough unvaccinated available victims, the disease can get a toehold and spread. Vaccinated people are still safe, but not babies too young for vaccination, and people who can’t be vaccinated, for various legitimate medical reasons.
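The arithmetic behind herd immunity is worth a quick sketch (my own illustration; the rule of thumb and the R0 figures are commonly cited rough estimates, not from any particular study): if each case would otherwise infect R0 others, an outbreak fizzles once more than 1 − 1/R0 of the population is immune, because each case then infects fewer than one new person on average.

```python
def herd_immunity_threshold(r0):
    """Fraction of a population that must be immune so each case
    infects, on average, fewer than one new person: 1 - 1/r0."""
    return 1 - 1 / r0

# Rough, commonly cited basic reproduction numbers:
for disease, r0 in [("measles", 15.0), ("whooping cough", 14.0), ("seasonal flu", 1.5)]:
    print(f"{disease}: roughly {herd_immunity_threshold(r0):.0%} must be immune")
```

Which is why measles, among the most contagious diseases known, is the first to come back when vaccination rates slip even a few points.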

Our herd immunities are now in fact being broken by the widespread refusal of vaccination. Thus dangerous illnesses, like whooping cough and measles, that had been virtually eradicated, are making a big comeback, with sharply rising infection rates.

This is a serious public health issue, and for once the solution is simple. Vaccination must be mandatory, absent valid medical reasons. Opt-outs on religious or “philosophical” grounds should be ended. There are no arguably legitimate religious or other doctrines that could justify refusal to vaccinate. These are just pretexts by people suckered by the pseudo-scientific anti-vax campaign.

We all should be free to do as we please, as long as it harms no others. The freedoms that matter are living as one chooses, and self-expression. Requiring vaccination does not violate these freedoms in a meaningful way, while refusing it does harm others. You might argue that you have a right against unwanted injections; but mandatory vaccination is a far less drastic impingement upon personal freedom than quarantining people with contagious illnesses, which we rightly accept. In both cases, personal freedom is surely trumped by society’s right to protect others from disease.

To anti-vaxers, the minuscule risk from vaccination may seem larger than the risk from illnesses like whooping cough. That’s only because vaccination had practically eradicated those diseases. Anti-vaxers are getting a free ride from the herd immunity conferred by the vaccination of others. Anti-vax parents act as though only their kids matter, other kids and the herd immunity do not. Where is the social solidarity? Doing something because it’s good for all of us together?

Vaccination is a fantastic accomplishment of humankind, conquering the dread specters of so many diseases that afflicted life, and brought early death, throughout most of history. If you want to shout from the rooftops arguing that vaccination is a devil’s plot, you should have a right to do so. As long as you’re vaccinated.

Pachinko by Min Jin Lee — a novel of identity

February 22, 2019

Min Jin Lee

I read this 2017 novel for a book group. A nice thing about such groups is exposure to rewarding reads you’d never otherwise pick up.

Japan occupied Korea from 1910 to 1945. Sunja is born there around 1916. Her mother subsists running a humble boarding house. Teenaged Sunja is pursued, and impregnated, by businessman Koh Hansu. She vaguely expects marriage; but surprise surprise, he already has a wife back in Japan.

Then an ethereal young Korean Christian minister, Isak, rescues Sunja by marrying her. They relocate to Japan, where he has a posting waiting, and live with his brother and sister-in-law. The child is named Noa; later Isak and Sunja have their own son, Mozasu. (Their names are derived from Noah and Moses.) Both eventually wind up running pachinko parlors; pachinko is a pinball-like game very popular in Japan.

But the book’s main focus is on Korean identity in a Japanese culture that despises Koreans. They are stereotyped negatively and suffer systematic discrimination (despite the impossibility of identifying Koreans by appearance). Japan’s forcing many thousands of Korean women into brothels for soldiers during WWII is well known. Japan (unlike Germany) has been recalcitrant on repentance for this and other crimes.

The novel barely mentions those “comfort women,” but describes much other mistreatment suffered by Koreans. Isak is jailed, suspected of insufficient loyalty to the Emperor, and dies from his horrible ordeal.

Koreans living in Japan remain distinctly second-class citizens — if allowed citizenship at all, after generations of residence. Mozasu’s son, in 1989, works there for an investment bank, until he’s screwed over because he’s Korean.

But what really prompts me to write is Noa’s story. (BIG SPOILER ALERT) He didn’t know Koh Hansu was his real father. Koh reappears, now quite wealthy, as Noa’s benefactor, financing his much coveted university education. Noa and his mother Sunja are resistant, but accept Koh’s largesse. But then Noa’s girlfriend meets Koh, sees the resemblance, and taunts Noa with the obvious. Also that Koh must be a yakuza — a gangster.*

These revelations crush Noa. Cursing what his mother did, he runs away, cutting all ties to his family and starting a new life of his own, with a wife and children (and passing as Japanese). He sends Koh money to repay what he’d received. He also sends Sunja money but never divulges contact information. For sixteen years.

Finally Koh locates Noa, now 45, and Sunja goes to him, in his office. The reunion is difficult but doesn’t go too badly. Noa promises to come visit her. Then he shoots himself.

He had thought he’d escaped his parentage, but now must have realized he could not. And he could not live with that.

Koh was indeed a gangster. A nasty piece of work, as revealed in only a few glimpses. But as far as Sunja’s family knew, he was just a “businessman.” Noa’s girlfriend could not have known the truth about Koh, nor could Noa; it was just an unsubstantiated suspicion. Perhaps Noa should have probed further before shooting himself.

Or perhaps that’s nitpicking. The real issue here is the heart of human identity. Noa felt himself irremediably contaminated. He had bad blood.

This idea of “bad blood” reverberates throughout human history. The sins of the father visited upon the sons. How many people have indeed been punished for crimes or derelictions (real or just imagined) by forebears?

It’s the heart of racism. The notion that all members of some group are birds of a feather, sharing some (stereotyped) characteristics. As vividly depicted in this book, where the antipathy of Japanese toward “those people” (Koreans) is a constant.

Here’s some science. Biology is not destiny. Even where genes are indicative of certain behavioral traits (and there are such), genes never determine how any individual will behave in any situation. At most, they may delineate proclivities, but an individual’s actual behavior results from too many variables to be predicted by genes or anything else. And it’s certainly untrue that any human subgroup shares biologically determined behavioral traits (different from other subgroups).

Of course there are human behaviors, genetically evolved, which we share as a species. But they don’t differ among subgroups. And even if there were such subgroup-specific genes, their effect would be overwhelmed by all the other factors influencing a given individual’s personal behavior.

That’s not to deny cultural differences. Cultural groups do have their own characteristics, that’s the definition of culture. But it’s not genetic. Remove an individual at birth from their specific culture, and there’s no innate biological reason for replicating behavior particular to that culture.

So Noa’s human identity was not dictated by his father’s gangsterhood. His blood was no more bad than anyone else’s. It was up to him to shape his own life. And, even if there were gangster genes inherited from his father (a dubious idea), those genes would not anyway determine his own character, which would still be his to create.

You can be what you choose to be.

*An echo of Great Expectations? Noa studies literature — he loves Dickens!

Evolution by natural selection is a fact

February 5, 2019

My recent “free will” essay prompted some comments about evolution (on the Times-Union blog site). One invoked (at verbose length) the old “watchmaker” argument. Nature’s elegant complexity is analogized to finding a watch in the sand; surely it couldn’t have assembled itself by random natural processes. There had to be a watchmaker.

This argument is fallacious because a watch is purpose-built and nature is not. Not the result of a process aimed at producing what we see today; instead one that could just as well have produced an infinity of alternative possibilities.

Look at a Jackson Pollock painting and you could say that to create precisely this particular pattern of splotches must have (like the watch) taken an immense amount of carefully planned work. Of course we know he just flung paint at the canvas. The complex result is what it is, not something Pollock “designed.”

Some see God in a similar role, not evolution’s designer but, rather, just setting it in motion. Could life have arisen out of nowhere, from nothing? Or could the Universe itself? Actually science has some useful things to say about that — better than positing a God who always existed or “stands outside time and space,” or some such woo-woo nonsense. And for life’s beginnings, while we don’t have every “i” dotted and “t” crossed (the earliest life could not have left fossils), we do know the basic story:

Our early seas contained an assortment of naturally occurring chemicals, whose interactions and recombinations were catalyzed by lightning, heat, pressure, and other natural phenomena. Making ever more complex molecules, by the trillion. One of the commonest elements is carbon, very promiscuous at hooking up with other atoms to create elaborate combinations.

Eventually one of those had the property of duplicating itself, by glomming other chemical bits floating by, or by splitting. Maybe that was an extremely improbable fluke. But realize it need only have happened once. Because each copy would go on to make more, and soon they’d be all over the place.

However, the copying would not have been perfect; there’d be occasional slight variations; with some faulty but also some better at staying intact and replicating. Those would spread more widely, with yet more variations, some yet more successful. Developing what biologist Richard Dawkins, in The Selfish Gene, called “survival machines.” Such as a protective coating or membrane. We’ve discovered a type of clay that spontaneously forms such membranes, which moreover divide upon reaching a certain size. So now you’ve got the makings of a primitive cell.

Is this a far-fetched story? To the contrary, given early Earth’s conditions, it actually seems inevitable. It’s hard to imagine it not happening. The 1952 Miller-Urey experiment reproduced those conditions in a test tube and the result was the creation of organic compounds, the “building blocks of life.”

That’s how evolution began. The duplicator molecules became genes (made of DNA). Their “survival machines” became organisms. That’s what we humans really are, glorified copying machines. A chicken is just an egg’s way to make another egg.

Of course DNA and genes, and Nature itself, do nothing with conscious purpose. Replicators competing with each other is simply math. Imagine your computer screen with one blue and one red dot. And a program saying every three seconds the blue dot will make another blue dot; but the red one will make two. Soon your screen will be all red.
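Here’s that screen as a few lines of code (a literal transcription of the thought experiment, with tick counts of my own choosing): blue doubles each tick, red triples, and red’s share of the dots races toward 100%.

```python
def red_share(ticks):
    """Start with one blue and one red dot. Each tick, every blue dot
    makes one more blue dot and every red dot makes two more reds.
    Returns red's fraction of all dots afterward."""
    blue, red = 1, 1
    for _ in range(ticks):
        blue += blue     # blue doubles
        red += 2 * red   # red triples
    return red / (red + blue)
```

After twenty ticks red is already above 99.9% of the screen. Note that blue is never killed off; it’s simply swamped.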

A parable: A king wishes to bestow a reward, and invites the recipient to suggest one. He asks for a single rice grain — on a chessboard’s first square — then two on the second — and so on. The king, thinking he’s getting away cheaply, readily agrees. But before even reaching the final square, it’s all the rice in the kingdom.
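The parable’s arithmetic checks out (my own quick verification of the usual telling): square n holds 2^(n−1) grains, so the whole board holds 2^64 − 1, about 1.8 × 10^19 grains; on rough estimates of grain weight, that’s centuries’ worth of the entire world’s annual rice harvest.

```python
def grains_on_square(n):
    """Square n (counting from 1) holds 2**(n-1) grains."""
    return 2 ** (n - 1)

def total_grains(squares=64):
    """Doubling sums telescope: 1 + 2 + 4 + ... + 2**(squares-1)
    equals 2**squares - 1."""
    return sum(grains_on_square(n) for n in range(1, squares + 1))

print(total_grains())  # 18446744073709551615, about 1.8e19
```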

This is the power of geometric multiplication. The power of genes replicating, in vast numbers, over vast time scales. (A billion years is longer than we can grasp.) And recall how genes are effectively in competition because occasionally their copies are imperfect (“mutations”), so no two organisms are exactly identical, and some are better at surviving and reproducing. Those supplant the others, just like red supplanted blue on your computer screen. But the process never stops, and in the fullness of time, new varieties evolve into new species. It’s propelled by ever-changing environments, requiring that organisms adapt by changing, or perish. This is evolution by natural selection.

Fossils provide indisputable proof. It’s untrue that there are “missing links.” In case after case, fossils show how species (including humans) have changed and evolved over time. (The horse is a great example. My illustration is from a website actually denying horse evolution, arguing that each of the earlier versions was a stand-alone species, unrelated to one another!)

We even see evolution happening live. Antibiotics changed the environment for bacteria. So drug-resistant bacteria rapidly evolved. Once-rare mutations enabling them to survive antibiotics have proliferated while the non-resistant are killed off.
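That live example is the same math again. A toy model (the kill rate and starting fraction are numbers I made up for illustration): suppose 1% of bacteria carry a resistance mutation, and each antibiotic exposure kills 90% of the susceptible cells but none of the resistant ones. Both types otherwise reproduce equally, so reproduction cancels out of the proportions, and only the differential killing matters.

```python
def resistant_fraction(exposures, kill_rate=0.9, start_resistant=0.01):
    """Fraction of the population carrying the resistance mutation
    after repeated antibiotic exposures. Equal reproduction cancels
    out of the ratio, so we track only the differential killing."""
    susceptible = 1.0 - start_resistant
    resistant = start_resistant
    for _ in range(exposures):
        susceptible *= 1.0 - kill_rate  # 90% of susceptible cells die
    return resistant / (resistant + susceptible)
```

From 1%, the resistant share passes 50% after just two exposures and reaches about 99% after four: natural selection observable on a lab timescale.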

Note that evolution doesn’t mean inexorable progression toward ever more complex or “higher” life forms. Again, the only thing that matters is gene replication (remember that red computer screen). Whatever works at causing more copies to be made is what will evolve. Humans evolved big brains because that happened to be a very successful adaptation. If greater simplicity works better, then an animal will evolve in that direction. There are in fact examples of this.

Another false argument against evolution is so-called “irreducible complexity.” Author Michael Behe claimed something like an eye could never have evolved without a designer because an incomplete, half-formed eye would be useless, conferring no advantage on an organism. In fact eyes did evolve through a long process beginning with light-sensitive cells that were primitive motion detectors, not at all useless. They did entail a survival advantage, albeit small, but it multiplied over eons, and improved by gradual incremental tweaks. So the eye, far from rebutting evolution, thus beautifully illustrates how evolution actually proceeds, and refutes any idea of intelligent design.

In fact, because our eyes did evolve in the undirected way they did, they’re very sub-optimal. A competent designer would have done far better. He would not have put the wiring in front of the light-sensitive parts, blocking some light, nor bunched the optic nerve fibers to cause a blind spot. So we can’t see well in dim light. Some other animals (like squids) have much better eye design. And wouldn’t a really intelligent design include a third eye in the back?

Evolution by natural selection is the one great fact of biology. Not merely the best explanation for what we see in Nature, but the only possible rational explanation, and one that explains everything. As the geneticist Theodosius Dobzhansky said, “Nothing in biology makes sense except in the light of evolution.”