Archive for the ‘Science’ Category

Probability, coincidence, and the origin of life

November 30, 2019

The philosopher Epicurus was shown a wall of pictures — told, reverently, they portrayed sailors who, in storms, prayed to the gods and were saved. “But where,” he asked, “are the pictures of those who prayed and drowned?”

He was exposing the mistake of counting hits and ignoring misses. It’s common when evaluating seemingly paranormal, supernatural, or even miraculous occurrences. Like when some acquaintance appears in a dream and then you learn they’ve just died. Was your dream premonitory? But how often do you dream of people who don’t die? As with Epicurus, this frequently applies to religious “miracles” like answered prayers. We count the hits and ignore the many more unanswered prayers.

I usually work with the radio on. How often do you think I’ll write a word while hearing the same word from the radio? (Not common words, of course, like “like” or “of course.”) In fact it happens regularly, every few days. Spooky? Against astronomical odds? For a particular word, like “particular,” the odds would indeed be very small. But the open-ended case of any word matching is far less improbable. Recently it was “Equatorial Guinea!” Similarly, the odds of any two people’s birthdays matching are about one in 365. But how many must there be in a room before two birthdays likely match? Only 23! This surprises most folks — showing we have shaky intuitions regarding probability and coincidence. Most coincidences are not remarkable at all, but expectable, like my frequent radio matches.
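
You can check the 23 figure yourself with a few lines of Python (a quick sketch of mine, assuming 365 equally likely birthdays and ignoring leap years):

```python
# Probability that at least two of n people share a birthday,
# assuming 365 equally likely birthdays (leap years ignored).
def birthday_match_prob(n: int) -> float:
    p_all_distinct = 1.0
    for i in range(n):
        p_all_distinct *= (365 - i) / 365
    return 1 - p_all_distinct

for n in (10, 23, 50):
    print(n, round(birthday_match_prob(n), 3))
# 10 -> 0.117, 23 -> 0.507, 50 -> 0.97: at 23 a match is more likely than not
```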

So what does all this have to do with the origin of life? I recently began discussing Dawkins’s book, The Blind Watchmaker, and life’s having (almost certainly) begun with a fairly simple molecular structure, naturally occurring, with the characteristic of self-duplication. Dawkins addresses our intuition that that’s exceedingly improbable.

The essence of evolution by natural selection is, again, small incremental steps over eons of time, each making beneficiaries a bit likelier to survive and reproduce. The replicator molecule utilized by all life is DNA,* which maybe can’t be called “simple” — but Dawkins explains that DNA could itself have evolved in steps, from simpler precursors — non-living ones.

Indeed, non-living replication is familiar to us. That’s how crystals form. They grow by repeating a molecular structure over and over. (I’ve illustrated one we own — trillions of molecules creating a geometrical object with perfectly flat sides.) Dawkins writes of certain naturally occurring clays with similar properties, which could plausibly have been a platform for evolving the more elaborate self-replicators that became life.

Maybe this still seems far-fetched to you. But Dawkins elucidates another key insight relevant here.

Our brains evolved (obviously) to navigate the environment we lived in. Our abilities to conceptualize are tailored accordingly, and don’t extend further (which would have been a waste of biological resources). Thus, explains Dawkins, our intuitive grasp of time is grounded in the spectrum of intervals in our everyday experience — from perhaps a second or so at one end to a century or two at the other. But that’s only a tiny part of the full range, which goes from nanoseconds to billions of years. We didn’t need to grasp those. Likewise, our grasp of sizes runs from perhaps a grain of sand to a mountain. Again, a tiny part of the true spectrum, an atom being vastly smaller, the galaxy vastly larger. Those sizes we never needed to imagine — and so we really can’t.

This applies to all very large (or small) numbers. Our intuitions about probability are similarly circumscribed.

If you could hypothetically travel to early Earth, might you witness life beginning — as I’ve explained it? Of course not. Not in a lifetime. The probability seems so small it feels like zero. And accordingly some people just reject the idea.

Suppose it’s so improbable that it would only occur once in a billion years. But it did have a billion years to happen in! Over that span, a one-in-a-billion-year event is hardly unlikely.
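
Here’s the arithmetic, as a quick sketch (my illustration, taking “once in a billion years” to mean a fixed one-in-a-billion chance each year):

```python
# Chance that a one-in-a-billion-per-year event happens at least once
# in a billion years: 1 minus the chance it never happens.
p_per_year = 1e-9
years = 10 ** 9
p_at_least_once = 1 - (1 - p_per_year) ** years
print(round(p_at_least_once, 3))  # ~0.632, close to 1 - 1/e: hardly unlikely
```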

The odds against winning the lottery are also astronomical. Our human capacity to grasp such probabilities is, again, so limited that many people play the lottery with no clue about the true smallness of their chances. Yet people win the lottery. And I had my “Equatorial Guinea” coincidence.

And what’s the probability that life did not evolve naturally, along general lines I’ve suggested, but was instead somehow deliberately created by a super-intelligent being of unimaginable power — whose existence in the first place nobody can begin to account for?

Surely zero; a childishly absurd idea. As Sherlock Holmes said, once you eliminate the impossible, whatever remains, however improbable, must be the truth. But the Darwinian naturalistic theory of life is not at all improbable or implausible. There’s tons of evidence for it. And even if there weren’t, Dawkins observes, it would still be the only concept capable of explaining life. Not only is it true, it must be true.

* That all living things use the same DNA code makes it virtually certain that all had a common ancestor. Your forebears were not, actually, monkeys; but the ancestors of all humans, and of all monkeys, were fish.

Evolution: The Blind Watchmaker and the bat

November 24, 2019

What is it Like to be a Bat? was a famous essay (one I keep coming back to) by philosopher Thomas Nagel. Its point is our difficulty in grasping — that is, constructing an intuitively coherent internal model of — the bat experience, because it’s so alien to our own.

Biologist Richard Dawkins, though, actually tackles Nagel’s question in his book The Blind Watchmaker. The title refers to William Paley’s 1802 Natural Theology, once quite influential, arguing for what’s now called “intelligent design.” Paley said if you find a rock in the sand, its presence needs no explanation; but if you find a watch, that can only be explained by the existence of a watchmaker. And Paley likened the astonishing complexity of life forms to that watch.

I’ve addressed this before, writing about evolution. Paley’s mistake is that a watch is purpose-built, which is not true of anything in nature. Nature never aimed to produce exactly what we see today. Instead, it’s an undirected process that could have produced an infinitude of alternative possibilities. What we have are the ones that just happened to fall out of that process — very unlike a watch made by a watchmaker.

However, it’s not mere “random chance,” as some who resist Darwinism mistakenly suppose. The random chance concept would analogize nature to a child with a pile of lego blocks, tumbling them together every which way. No elegant creation could plausibly result. But evolution works differently, through serial replication.

It began with an agglomeration of molecules, a very simple naturally occurring structure, but having one crucial characteristic: a tendency to duplicate itself (using other molecules floating by). If such a thing arising seems improbable, realize it need only have occurred once. Because each duplicate would then be making more duplicates. Ad infinitum. And as they proliferate, slight variations accidentally creeping in (mutations) would make some better at staying in existence and replicating. That’s natural selection.
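
The “need only have occurred once” point is just the arithmetic of doubling. A toy illustration (the doubling step is invented; only the scale matters):

```python
# One successful replicator, doubling each replication cycle:
# 2**n copies after n cycles.
for n in (10, 30, 60):
    print(f"after {n} doublings: {2 ** n:,} copies")
# 30 doublings already yield over a billion copies
```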

Dawkins discusses bats at length because the sophistication of their design (more properly, their adaptation) might seem great evidence for Paleyism.

Bats’ challenge is to function in the dark. Well, why didn’t they simply evolve for daytime? Because that territory was already well occupied, and there was a living to be made at night — for a creature able to cope with it.

Darkness meant usual vision systems wouldn’t work. Bats’ alternative is echolocation — sonar. They “see” by emitting sound pulses and using the echoes to build, in their brains, a model of their outside environment. Pulses are sent between 10 and 200 times per second, each one updating the model. Bat brains have developed the software to perform this high-speed data processing and modeling, on the fly.

Now get this. Their signals’ strength diminishes with the square of the distance, both going out and coming back. So the outgoing signals must be quite loud (fortunately beyond the range of human hearing) for the return echoes to be detectable. But there’s a problem. To pick up the weak return echoes, bat ears have to be extremely sensitive. But such sensitive ears would be wrecked by the loudness of the outgoing signals.
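
To put rough numbers on that double square-law penalty: the echo that returns scales as one over the fourth power of the distance (inverse square going out, inverse square coming back). A sketch, ignoring air absorption and the target’s reflectivity:

```python
# Rough scaling of a sonar echo with target distance d: intensity falls
# as 1/d**2 going out, and the reflection falls as 1/d**2 again coming
# back, so the return scales roughly as 1/d**4 (absorption ignored).
for d in (1, 2, 5, 10):  # distance in arbitrary units
    print(f"distance {d:>2}: echo ~ {1 / d ** 4:.6f} of reference")
# Doubling the distance cuts the echo to 1/16th: hence the very loud chirps.
```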

So what to do? Bats turn off their ears during each outgoing chirp, and turn them on again to catch each return echo. Ten to 200 times a second!

Another problem: Typically there’s a zillion bats around, all creating these echoes simultaneously. How can they distinguish their own from all those others? Well, they can, because each has its own distinctive signal. Their brain software masters this too, sorting their own echoes from all the background noise.

The foregoing might suggest, a la Nagel, that the bat experience is unfathomable. Our own vision seems a much simpler and better way of seeing the world. But not so fast. Dawkins explains that the two systems are really quite analogous. While bats use sound waves, we use light waves. However, it’s not as though we “see” the light directly. Both systems entail the brain doing a lot of processing and manipulation of incoming data to build a model of the outside environs. And the bat system does this about as well as ours.

Dawkins imagines an alien race of “blind” batlike creatures, flabbergasted to learn of a species called humans actually capable of utilizing inaudible (!) rays called “light” to “see.” He goes on to describe our very complex system for gathering light signals, and transmitting them into the brain, which then somehow uses them to construct a model of our surroundings which, somehow, we can interpret as a coherent picture. Updated every fraction of a second. (Their Nagel might write, “What is it like to be a human?”)*

A Paleyite would find it unimaginable that bat echolocation could have evolved without a designer. But what’s really hard for us to imagine is the immensity of time for a vast sequence of small changes accumulating to produce it.

Dogs evolved (with some human help) from wolves over just a few thousand years; indeed, with variations as different as Chihuahuas and Saint Bernards. And we’re scarcely capable of grasping the incommensurateness between those mere thousands of years and the many millions over which evolution operates.

Remember what natural selection entails. Small differences between two species-mates may be a matter of chance, but what happens next is not. A small difference can give one animal slightly better odds of reproducing. Repeat a thousand or a million times and those differences grow large; likewise a tiny reproductive advantage also compounds over time. It’s not a random process, but nor does it require an “intelligent designer.”

Dawkins gives another example. Imagine a mouse-sized animal, where females have a slight preference for larger males. Very very slight. Larger males thus have a very very slight probability of leaving more offspring. The creature’s increasing size would be imperceptible during a human lifetime. How long would it take to reach elephant size? The surprising answer: just 60,000 years! An eyeblink of geological time. This would be considered “sudden” by normal evolutionary standards.**
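
The compounding behind that startling figure is easy to check. With illustrative numbers of my own (a 20-gram mouse, a 5-tonne elephant, one generation per year), the required growth works out to only about 0.02 percent per generation, far too small to notice:

```python
# How fast must average size grow per generation to go from mouse size
# to elephant size in 60,000 one-year generations? (Illustrative masses.)
mouse_grams = 20.0
elephant_grams = 5_000_000.0  # about 5 tonnes
generations = 60_000

per_generation_growth = (elephant_grams / mouse_grams) ** (1 / generations) - 1
print(f"{per_generation_growth:.4%} per generation")  # ~0.0207%: imperceptible
```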

Returning to vision, a favorite argument of anti-evolutionists is that such a system’s “irreducible complexity” could never have evolved by small steps — because an incomplete eye would be useless. Dawkins eviscerates this foolish argument. Lots of people in fact have visual systems that are incomplete or defective in various ways, some with only 5% of normal vision. But for them, 5% is far better than zero!

The first simple living things were all blind. But “in the country of the blind the one-eyed man is king.” Even just having some primitive light-sensitive cells would have conferred a survival and reproductive advantage, better enabling their possessors to find food and avoid becoming food. And such light detectors would have gradually improved, by many tiny steps, over eons; each making a creature more likely to reproduce.

Indeed, a vision system — any vision system at all — is so advantageous that virtually all animals evolved one, not copying each other, but along separate evolutionary paths, resulting in a wide array of varying solutions to the problem — including bat echolocation, utilizing principles so different from ours.

But none actually reflects optimized “intelligent” design. Not what a half decent engineer or craftsman would have come up with. Instead, the evolution by tiny steps means that at each stage nature was constrained to work with what was already there; thus really (in computer lingo) a long sequence of “kludges.” For example, no rational designer would have bunched our optic nerve fibers in the front of the eye, creating a blind spot.

You might, if you still cling to an imaginary “designer,” ask her about that. And while you’re at it, ask why no third eye in the back of our heads?

(To be continued)

* Some blind humans are actually learning to employ echolocation much like bats, using tongue clicks.

** This is not to say evolution entails slow steady change. Dawkins addresses the “controversy” between evolutionary “gradualists” and “punctuationists” who hypothesize change in bursts. Their differences are smaller than the words imply. Gradualists recognize rates of change vary (with periods of stasis); punctuationists recognize that evolutionary leaps don’t occur overnight. Both are firmly in the Darwinian camp.

Greta Thunberg is wrong

October 1, 2019

Greta Thunberg, the 16-year-old Swedish climate warrior, berates the world (“How dare you?”) for pursuing a “fairy tale” of continued economic growth — putting money ahead of combating global warming. A previous local newspaper commentary hit every phrase of the litany: “species decimation, rainforest destruction . . . ocean acidification . . . fossil-fuel-guzzling, consumer-driven . . . wreaked havoc . . . blind to [the] long-term implication . . . driven by those who would profit . . . our mad, profligate  . . . warmongering . . . plasticization and chemical fertilization . . . failed to heed the wise admonition of our indigenous elders . . . .”

The litany of misanthropes hating their own species and especially their civilization.

Lookit. There’s no free lunch. Call it “raping the planet” if you like, but we could never have risen from the stone age without utilizing as fully as possible the natural resources available. And if you romanticize our pre-modern existence (“harmony with nature” and all), well, you’d probably be dead now, because most earlier people didn’t make thirty. And those short lives were nasty and brutish. There was no ibuprofen.

This grimness pretty much persisted until the Industrial Revolution. Only now, by putting resource utilization in high gear, could ordinary folks begin to live decently. People like that commentator fantasize giving it up. Or, more fantastical, our somehow still living decently without consuming the resources making it possible.

These are often the same voices bemoaning world poverty. Oblivious to how much poverty has actually declined — thanks to all the resource utilization they condemn. And to how their program would deny decent lives to the billion or so still in extreme poverty. Hating the idea of pursuing economic growth may be fine for those living in affluent comfort. Less so for the world’s poorest.

Note, as an example, the mention of “chemical fertilization.” This refers to what’s called the “green revolution” — revolutionizing agriculture to improve yields and combat hunger, especially in poorer nations. It’s been estimated this has saved a couple billion lives. And of course made a big dent in global poverty.

But isn’t “chemical fertilization,” and economic development more generally, bad for the environment? Certainly! Again, no free lunch. In particular, the climate change we’re hastening will, as Thunberg says, likely have awful future impacts. Yet bad as that is, it’s not actually humanity’s biggest challenge. The greater factors affecting human well-being will remain the age-old prosaic problems of poverty, disease, malnutrition, conflict, and ignorance. Economic growth helps us battle all those. We should not cut it back for the sake of climate. In fact, growing economic resources will help us deal with climate change too. It’s when countries are poor that they most abuse the environment; affluence improves environmental stewardship. And it’s poor countries who will suffer most from climate change, and will most need the resources provided by economic growth to cope with it.

Of course we must do everything reasonably possible to minimize resource extraction, environmental impacts, and the industrial carbon emissions that accelerate global warming. But “reasonably possible” means not at the expense of lower global living standards. Bear in mind that worldwide temperatures will continue to rise even if we eliminate carbon emissions totally (totally unrealistic, of course). Emission reductions can moderate warming only slightly. That tells us to focus less on emissions and more on preparing to adapt to higher temperatures. And more on studying geo-engineering possibilities for removing greenhouse gases from the atmosphere and otherwise re-cooling the planet. Yet most climate warriors actually oppose such efforts, instead obsessing exclusively on carbon reduction, in a misguided jihad against economic growth, as though to punish humanity for “raping the planet.”

Most greens are also dead set against nuclear power, imagining that renewables like solar and wind energy can fulfill all our needs. Talk about fairy tales. Modern nuclear power plants are very safe and emit no greenhouse gases. We cannot hope to bend down the curve of emissions without greatly expanded use of nuclear power. Radioactive waste is an issue. But do you think handling that presents a bigger challenge than to replace the bulk of existing power generation with renewables?

I don’t believe we’re a race of planet rapists. Our resource utilization and economic development has improved quality of life — the only thing that can ultimately matter. The great thing about our species, enabling us to be so spectacularly successful, is our ability to adapt and cope with what nature throws at us. Climate change and environmental degradation are huge challenges. But we can surmount them. Without self-flagellation.

Thinking like a caveman

September 18, 2019


What is it like to be a bat? That famous essay by philosopher Thomas Nagel keeps nagging at us. What is it like to be me? Of this I should have some idea. But why is being me like that? — how does it work? — are questions that really bug me.

Science knows a lot about how our neurons work. Somehow the doings of billions of neurons, each with very limited, specific, understandable functions, join to create one’s personhood. That’s a leap we’re only beginning to understand.

Steven Mithen’s book, The Prehistory of the Mind, takes the problem back a step, asking how our minds came to exist in the first place. It’s a highly interesting inquiry.

Of course the simple answer is evolution. Life forms have natural variability, and variations that prove more successful in adapting to changing environments proliferate. This builds over eons. Our minds were a very successful adaptation.

But they could not have sprung up all at once. Doesn’t work that way. So by what steps did they evolve? The question is problematical given our difficulty in reverse-engineering the end product. But Mithen’s analysis actually helps toward such understanding.

He uses two metaphors to describe what our more primitive, precursor minds were like. One is a Swiss Army knife. It’s a tool that’s really a tool kit. Leaving aside for the moment the elusive concept of “mind,” all living things have the equivalent of Swiss Army knives to guide their behavior in various separate domains. A cat, for example, has a program in its brain for jumping up to a ledge; another for catching a mouse; and so forth. The key point is that each is a separate tool, used separately; two or more can’t be combined.

Which brings in Mithen’s other metaphor for the early human mind: a cathedral. Within it, there are various chapels, each containing one of the Swiss Army knife tools, each one a brain program for dealing with a specific type of challenge. The main ones Mithen identifies are a grasp of basic physics in connection with tool-making and the like; a feel for the natural world; one for social interaction; and language arts, related thereto.

This recalls Howard Gardner’s concept of multiple intelligences. Departing from an idea that “intelligence” is a single capability that people have more or less of, Gardner posited numerous diverse particularized capabilities, such as interpersonal skills, musical, spatial-visual, etc. A person can be strong in one and weak in another.

Mithen agrees, yet nevertheless also hypothesizes what he calls “general intelligence.” By this he means “a suite of general-purpose learning rules, such as those for learning associations between events.” Here’s where his metaphors bite. The Swiss Army knife doesn’t have a general intelligence tool. That’s why a cat is extremely good at mousing but lacks a comprehensive viewpoint on its situation.

In Mithen’s cathedral, however, there is general intelligence, situated right in the central nave. However, the chapels, each containing their specific tools, are closed off from it and from each other. The toolmaking program doesn’t communicate with the social interaction program; none of them communicates with the general intelligence.

Does this seem weird? Not at all. Mithen invokes an analogy to driving while conversing with a passenger. Two wholly separate competences are operating, but sealed off from each other, neither impinging on the other.

This, Mithen posits, was indeed totally the situation of early humans (like Neanderthals). Our own species arose something like 100,000 years ago, but for around half that time, it seems, we too had minds like Neanderthals, like Mithen’s compartmentalized cathedral, lacking pathways for the various competences to talk to each other. He describes a “rolling” sort of consciousness that could go from one sphere to another, but was in something of a blur about seeing any kind of big picture.

Now, if you were intelligently building this cathedral, you wouldn’t do it this way. But evolution is not “intelligent design.” It has to work with what developed previously. And what it started with was much like the Swiss Army knife, with a bunch of wholly separate competences that each evolved independently.

That’s good enough for most living things, able to survive and reproduce without a “general intelligence.” Evolving the latter was something of a fluke for humans. (A few other creatures may have something like it.)

The next step was to integrate the whole tool kit; to open the doors of all the chapels leading into the central nave. The difference: a Neanderthal could be extremely skilled at making a stone tool, but while he was doing it he really couldn’t ponder it in the context of his whole life. We can. Mithen calls this “cognitive fluidity.”

The way I like to put it, the essence of our consciousness is that we don’t just have thoughts, we can think about our thoughts. That’s the integration Mithen talks about — a whole added layer of cognition. And it’s that layering, that thinking about our thinking, that gives us a sense of self, more powerfully than any other creature.

I’ve previously written too of how the mind makes sense of incoming information by creating representations. Like pictures in the mind, often using metaphors. And here too there’s layering; we make representations of representations; representations of ourselves perceiving those representations. That indeed is how we do perceive — and think about what we perceive. And we make representations of concepts and beliefs.

All this evolved because it was adaptive — enabling its possessors to better surmount the challenges of their environment. But this cognitive fluidity, Mithen says, is also at the heart of art, religion, science — all of human culture.

Once we achieved this capability, it blew the doors off the cathedral, and it was off to the races.

“Science for Heretics” — A nihilistic view of science

August 10, 2019

Physicist Barrie Condon has written Science for Heretics: Why so much of science is wrong. Basically arguing that science cannot really understand the world, and maybe shouldn’t even try. The book baffles me.

It’s full of sloppy mistakes (many misspelled names). It’s addressed to laypeople and does not read like a serious science book. Some seems downright crackpot. Yet, for all that, the author shows remarkably deep knowledge, understanding, and even insight into the scientific concepts addressed, often explaining them quite lucidly in plain English. Some of his critiques of science are well worth absorbing. And, rather than the subtitle’s “science is wrong,” the book is really more a tour through all the questions it hasn’t yet totally answered.

A good example is the brain. We actually know a lot about its workings. Yet how they result in consciousness is a much harder problem.

Condon’s first chapter is “Numbers Shmumbers,” about the importance of mathematics in science. His premise is that math is divorced from reality and thereby leads science into black holes of absurdity, like . . . well, black holes.* He starts with 1+1=? — whose real world answer, he says, is never 2! Because that answer assumes each “1” is identical to the other, while in reality no two things are ever truly identical. For Condon, this blows up mathematics and all the science incorporating it.

But identicality is a red herring. It’s perfectly valid to say I have two books, even if they’re very different, because “books” is a category. One book plus one book equals two books.

Similarly, Condon says that in the real world no triangle’s angles equal 180 degrees because you can never make perfectly straight lines. Nor can any lines be truly parallel. And he has fun mocking the concepts of zero and infinity.

However, these are all concepts. That you can’t actually draw a perfect triangle doesn’t void the concept. This raises the age-old question (which Condon nibbles at) of whether mathematics is something “out there” as part of the fabric of reality, or just something we cooked up in our minds. My answer: we couldn’t very well have invented a mathematics with 179 degree triangles. The 180 degrees (on flat surfaces!) is an aspect of reality — which we’ve discovered.

A key theme of the book is that reality is complex and messy, so the neat predictions of scientific theory often fail. A simplified high school picture may indeed be too simple or even wrong (like visualizing an atom resembling the solar system). But this doesn’t negate our efforts to understand reality, or the value of what we do understand.

Modern scientific concepts do, as Condon argues, often seem to violate common sense. Black holes for example. But the evidence of their reality mounts. Common sense sees a table as a solid object, but we know from science that it’s actually almost entirely empty space. In fact, however deeply we peer into the atomic and even sub-atomic realms, we never do get to anything solid.

Condon talks about chaos theory, and how it messes with making accurate predictions about the behavior of any system. Weather is a prime example. Because the influencing factors are so complex that a tiny change in starting conditions can mean a big difference down the line. Fair enough. But then — exemplifying what’s wrong with this book — he says of chaos theory, “[t]his new, more humble awareness marked a huge retreat by science. It clearly signaled its inherent limitations.” Not so! Chaos theory was not a “retreat” but an advance, carrying to a new and deeper level our understanding of reality. (I’ve written about chaos theory and its implications, very relevantly to Condon’s book: https://rationaloptimist.wordpress.com/2017/01/04/chaos-fractals-and-the-dripping-faucet/)

After reading partway, I was asking myself, what’s Condon really getting at? He’s a very knowledgeable scientist. But if science is as futile as he seems to argue — then what? I suspected Condon might have gone religious, so I flipped to the last chapter, expecting to find a deity or some other sort of mysticism. But no. Condon has no truck with such stuff either.

He does conclude by saying “we need to profoundly re-assess how we look at the universe,” and “who knows what profound insights may be revealed when we remove [science’s] blinkers.” But Condon himself offers no such insights. Instead (on page 55) he says simply that “we are incapable of comprehending the universe” and “there are no fundamental laws underlying the universe to begin with. The universe just is the way it is.” (My emphasis)

No laws? Newton’s inverse square law of gravitation is a pretty good descriptor of how celestial bodies actually behave. A Condon might say it doesn’t exactly explain the orbit of Mercury, which shows how simple laws can fail to model complex reality. But Einstein’s theory was a refinement to Newton’s — and it did explain Mercury’s orbit.

So do we now know everything about gravitation? Condon makes much of how galaxies don’t obey our current understanding, if you only count visible matter; so science postulates invisible “dark matter” to fix this. Which Condon derides as a huge fudge factor. And I’m actually a heretic myself on this, having written about an alternate theory that would slightly tweak the laws of gravitation making “dark matter” unnecessary (https://rationaloptimist.wordpress.com/2012/07/23/there-is-no-dark-matter/). But here is the real point. We may not yet have gravitation all figured out. But that doesn’t mean the universe is lawless.

Meantime, you might wonder how, if our scientific understandings were not pretty darn good, computers could work and planes could fly. Condon responds by saying that actually, “our technology rarely depend[s] on scientific theory.” Rather, it’s just engineering. “Engineers have learnt from observation and experience,” and “[u]nburdened by theory they were . . . simply observing regularities in the behavior of the universe.”**

And how, pray tell, do “regularities in the behavior of the universe” differ from laws? In fact, a confusion runs through the book between science qua “theory” (Condon’s bete noire) and science qua experimentation revealing how nature behaves. And what does it mean to say, “the universe just is the way it is?” That explains nothing.

But it can be the very first step in a rational process of understanding it. Recognizing that it is a certain way, rather than some other way (or lawless). That there must be reasons for its being the way it is. Reasons we can figure out. Those reasons are fundamental laws. That’s science.

And, contrary to the thrust of Condon’s book, we have gained a tremendous amount of understanding. The very fact that he could write it — after all, chock full of science — and pose all the kinds of questions he does — testifies to that understanding. Quantum mechanics, for example, which Condon has a field day poking fun at, does pose huge puzzles, and some of our theories may indeed need refinement. Yet quantum mechanics has opened for us a window into reality, at a very deep level, that Aristotle or Eratosthenes could not even have imagined.

Condon strangely never mentions Thomas Kuhn, whose seminal The Structure of Scientific Revolutions characterized scientific theories as paradigms, a new one competing against an old one, and until one prevails there’s no scientific way to choose. You might thus see no reason to believe anything science says, because it can change. But modern science doesn’t typically lurch from one theory to a radically opposing one. Kuhn’s work was triggered by realizing Aristotle’s physics was not a step toward modern theories but totally wrong. However, Aristotle wasn’t a scientist at all, and did no experimentation; he was an armchair thinker. Science is in fact a process of homing in ever closer to the truth through interrogating reality.

Nor does Condon discuss Karl Popper’s idea of science progressing by “falsification.” Certitude about truth may be elusive, but we can discover what’s not true. A thousand white swans don’t prove all swans are white, but one black swan disproves it.

And as science thusly progresses, it doesn’t mean we’ve been fools or deluded before. Newton said that if he saw farther, it’s because he stood on the shoulders of giants. And what Newton revealed about motion and gravity was not overturned by Einstein but instead refined. Newton wasn’t wrong. And those who imagine Darwinian evolution is “just a theory” that future science may discard will wait in vain.

Unfortunately, such people will leap upon Condon’s book as confirmation for their seeing science (but not the Bible) as fallible.*** Thinking that because science doesn’t know everything, they’re free to disregard it altogether, substituting nonsense nobody could ever possibly know.

Mark Twain defined faith as believing what you know ain’t so. Science is not a “faith.” Nor even a matter of “belief.” It’s the means for knowing.

*But later he spends several pages on the supposed danger of the Large Hadron Collider creating black holes (that Condon doesn’t believe in) and destroying the world. Which obviously didn’t happen.

**But Condon says (misplaced) reliance on theory is increasingly superseding engineering know-how, with bad results, citing disasters like the Challenger with its “O” rings. Condon’s premise strikes me as nonsense; and out of literally zillions of undertakings, zero disasters would be miraculous.

***While Condon rejects “intelligent design,” he speculates that Darwinian natural selection isn’t the whole story — without having any idea what the rest might be.

Fantasyland: How America Went Haywire

July 3, 2019

(A condensed version of my June 18 book review talk)

In this 2017 book Kurt Andersen is very retro: he believes in truth, reason, science, and facts. But he sees today’s Americans losing their grip on those. Andersen traces things back to the Protestant Reformation, which preached that each person decides what to believe.

Religious zealotry has repeatedly afflicted America. But in the early Twentieth Century that, Andersen says, seemed to be fizzling out. Christian fundamentalism was seen as something of a joke, culminating with the 1925 Scopes “monkey” trial. But evangelicals have made a roaring comeback. In fact, American Christians today are more likely than ever to be fundamentalist, and fundamentalism has become more extreme. Fewer Christians now accept evolution, and more insist on biblical literalism.

Other fantasy beliefs have also proliferated. Why? Andersen discusses several factors.

First he casts religion itself as a gateway drug. Such a suspension of critical faculties warps one’s entire relationship with reality. So it’s no coincidence that the strongly religious are often the same people who indulge in a host of other magical beliefs. The correlation is not perfect. Some religious Americans have sensible views about evolution, climate change, even Trump — and some atheists are wacky about vaccination and GM foods. Nevertheless, there’s a basic synergy between religious and other delusions.

Andersen doesn’t really address tribalism, the us-against-them mentality. Partisan beliefs are shaped by one’s chosen team. Climate change denial didn’t become prevalent on the right until Al Gore made climate a left-wing cause. Some on the left imagine Venezuela’s Maduro regime gets a bum rap.

Andersen meantime also says popular culture blurs the line between reality and fantasy, with pervasive entertainment habituating us to a suspension of disbelief. I actually think this point is somewhat overdone. People understand the concept of fiction. The problem is with the concept of reality.

Then there’s conspiracy thinking. Rob Brotherton’s book Suspicious Minds: Why We Believe Conspiracy Theories says we’re innately primed for them, because in our evolution, pattern recognition was a key survival skill. That means connecting dots. We tend to do that, even if the connections aren’t real.

Another big factor, Andersen thinks, was the “anything goes” 1960s counterculture, partly a revolt against the confines of rationality. Then there’s post-modernist relativism, considering truth itself an invalid concept. Some even insist that hewing to verifiable facts, the laws of physics, biological science, and rationality in general, is for chumps. Is in fact an impoverished way of thinking, keeping us from seeing some sort of deeper truth. As if these crackpots are the ones who see it.

Then along came the internet. “Before,” writes Andersen, “cockamamie ideas and outright falsehoods could not spread nearly as fast or widely, so it was much easier for reason and reasonableness to prevail.” Now people slurp up wacky stuff from websites, talk radio, and Facebook’s so-called “News Feed” — really a garbage feed.

Andersen considers “New Age” spirituality a new form of American religion. He calls Oprah its Pope, spreading the screwball messages of a parade of hucksters, like Eckhart Tolle, and the “alternative medicine” promoter Doctor Oz. Among these so-called therapies are homeopathy, acupuncture, aromatherapy, reiki, etc. Read Wikipedia’s scathing article about such dangerous foolishness. But many other mainstream gatekeepers have capitulated. News media report anti-scientific nonsense with a tone of neutrality if not acceptance. Even the U.S. government now has an agency promoting what’s euphemized as “Complementary and Integrative Health;” in other words, quackery.

Guns are a particular focus of fantasy belief. Like the “good guy with a gun.” Who’s actually less a threat to the bad guy than to himself, the police, and innocent bystanders. Guns kept to protect people’s families mostly wind up shooting family members. Then there’s the fantasy of guns to resist government tyranny. As if they’d defeat the U.S. military.

Of course Andersen addresses UFO belief. A surprising number of Americans report being abducted by aliens, taken up into a spaceship to undergo a proctology exam. Considering that the nearest star is literally 24 trillion miles away, would aliens travel that far just to study human assholes?

A particularly disturbing chapter concerns the 1980s Satanic panic. It began with so-called “recovered memory syndrome.” Therapists pushing patients to dredge up supposedly repressed memories of childhood sexual abuse. (Should have been called false memory syndrome.) Meantime child abductions became a vastly overblown fear. Then it all got linked to Satanic cults, with children allegedly subjected to bizarre and gruesome sexual rituals. This new witch hunt culminated with the McMartin Preschool trial. Before the madness passed, scores of innocent people got long prison terms.

A book by Tom Nichols, The Death of Expertise, showed how increasing formal education doesn’t actually translate into more knowledge (let alone wisdom or critical thinking). Education often leads people to overrate their knowledge, freeing them to reject conventional understandings, like evolution and medical science. Thus the anti-vaccine insanity.

Another book, Susan Jacoby’s The Age of American Unreason, focuses on our culture’s anti-intellectual strain. Too much education, some people think, makes you an egghead. And undermines religious faith. Yet Jacoby also notes how 19th Century Americans would travel long distances to hear lecturers like Robert Ingersoll, the great atheist, and Huxley the evolutionist. Jacoby also vaunts 20th century “Middlebrow” American culture, with “an affinity for books; the desire to understand science; a strong dose of rationalism; above all, a regard for facts.”

Today in contrast there’s an epidemic of confirmation bias: people embracing stuff that supports pre-existing beliefs, and shutting out contrary information. Smarter folks are actually better at confabulating rationalizations for that. And how does one make sense of the world and of new information? Ideally by integrating it with, and testing it against, your body of prior knowledge and understanding. But many Americans come short there — blank slates upon which rubbish sticks equally well as truth.

I also think reality used to be more harsh and unforgiving. To get through life you needed a firm grip on reality. That has loosened. The secure, cushy lives given us by modernity — by, indeed, the deployment of supreme rationality in the age of science — free people to turn their backs on that sort of rationality and indulge in fantasy.

Andersen’s subtitle is How America Went Haywire. As if that applies to America as a whole. But we are an increasingly divided nation. Riven between those whose faith has become more extreme and those moving in the opposite direction; which also drives political polarization. So it’s not all Americans we’re talking about.

Still, the haywire folks are big shapers of our culture. And there are real costs. Anti-vaccine hysteria undermines public health. The 1980s child threat panic ruined lives. Gun madness kills many thousands. And of course they’ve given us a haywire president.

Yet is it the end of the world? Most Americans go about their daily lives, do their jobs, in a largely rational pragmatic way (utilizing all the technology the Enlightenment has given). Obeying laws, being good neighbors, good members of society. Kind, generous, sincere, ethical people. America is still, in the grand sweep of human history, an oasis of order and reasonableness.

Meantime religious faith is collapsing throughout the advanced world, and even in America religion, for all its seeming ascendancy, is becoming more hysterical because it is losing. The younger you are, the less religious you are likely to be. And there are signs that evangelical Christianity is being hurt by its politicization, especially its support for a major moral monster.

I continue to believe in human progress. That people are capable of rationality, that in the big picture rationality has been advancing, and it must ultimately prevail. That finally we will, in the words of the Bible itself, put childish things away.

Why does evolution produce such diversity?

June 26, 2019

A science writer friend pointed me to a recent “Edge” essay by Freeman Dyson (https://www.edge.org/conversation/freeman_dyson-biological-and-cultural-evolution). Dyson, 95, is a truly great mind, which I am not. Nor an evolutionary biologist. Nevertheless —

Dyson begins with the question: why has evolution produced such a vast diversity of species? If “survival of the fittest” natural selection is the mechanism, shouldn’t we expect each ecological niche to wind up occupied by the one species most perfectly adapted? With others losing out in the competition and disappearing. Thus, in the Amazon rain forest, for example, just one variety of insect rather than thousands; and worldwide, maybe only a few hundred species altogether, rather than the millions actually existing (many with only slight differences). Also, we might expect species slimmed down to efficient essentials, not ongepotchket ones (a Yiddish word for “excessively and unaesthetically decorated.”) These things puzzled Darwin himself.

Darwin worked before we knew anything of genes, Dyson points out. He discusses the contributions of several later people. First is Motoo Kimura with the concept of “genetic drift,” an evolutionary mechanism separate from natural selection. It’s the randomness inherent in gene transmission through sexual reproduction. A given gene’s frequency in a large population will vary less than in a small one, where such random fluctuations will loom larger. Like if you make 1000 coin tosses you’ll almost always get very close to 500 heads, whereas with only ten tosses you might well get seven heads, a big deviation. So in small populations such genetic drift can drive evolutionary change faster than in a large population, where genetic drift is negligible and slower natural selection is the dominant factor. Thus it’s small populations (often ones that get isolated from the larger mass) that most tend to spin off new species.
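
The coin-toss point is easy to simulate: the spread in the fraction of heads shrinks as samples grow, which is exactly why drift dominates in small populations and washes out in large ones. A sketch:

```python
import random

# The fraction of heads fluctuates wildly in small samples and barely
# at all in large ones: the same math that drives genetic drift.
random.seed(1)
for n in (10, 1_000, 100_000):
    fractions = [sum(random.getrandbits(1) for _ in range(n)) / n
                 for _ in range(100)]
    print(f"n = {n:>7}: heads fraction ranged "
          f"{min(fractions):.3f} to {max(fractions):.3f}")
# With 10 tosses, 0.7 heads is routine; with 100,000, it barely budges.
```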

Dyson combines this idea with cultural evolution which, for humans in particular, is a much bigger factor than biological evolution. Dyson sees genetic drift involved with big local effects, such as the flourishing of ancient Athens or Renaissance Florence.

Then there’s Ursula Goodenough’s idea that mating paradigms, in particular, seem to change faster than other species characteristics. This too makes for rapid evolutionary jumps in genetically isolated populations. Dyson comments: “Nature loves to gamble. Nature thrives by taking risks. She scrambles mating system genes so as to increase the risk that individual parents will fail to find mates. [This] is part of Nature’s plan.” Because it raises the likelihood that parents who do succeed will birth new species.

And then there’s Richard Dawkins and The Selfish Gene. I keep coming back to that book because this — when fully understood — is a very powerful idea indeed.

It tells us that evolution is all about gene replication and nothing else. Thus I take some issue with Dyson’s language anthropomorphizing “Nature” as gambling. He writes as though Nature wants evolution to occur. But it doesn’t have aims. Nor does a gene “want” to make the most copies of itself; it’s simply that one doing so will be more prevalent in a population. That’s what evolution is.

So take Goodenough’s point again: suppose a given characteristic (here, a mating paradigm) does result in some copies of the relevant gene failing to replicate. If, in the long run, the characteristic nevertheless means other copies of the same gene will replicate more, then that gene becomes more prevalent. There’s no “gambling” taking place, and no extra points earned if a new species happens to be created. It’s simply the math of the outcome — more copies of the gene.

I also take issue with Dyson’s associating local cultural flourishing with genetic drift. Whatever happened in Fifth Century BC Athens was a purely cultural phenomenon that had nothing to do with changes in Athenians’ genes. While the local gene pool would have differed a (tiny) bit from other human ones, there’s no basis to imagine there was natural selection favoring genes conducive to artistic flourishing, and in any case there would have been insufficient time for such natural selection to play out.

So — returning to the starting question — why all the diversity? While Dyson does point to some mechanistic aspects of evolution militating in that direction, I think there’s a larger and simpler answer. The problem lies in a syllable. “Survival of the fittest” is not quite exactly right; it’s really “survival of the fit.” There’s a big difference. It’s not only the fittest that survive; you don’t have to be the fittest; you just have to be fit. It’s not a winner-take-all competition.

This comports with Dawkins’s selfish gene insight. The genes that continue to exist in an environment are those that have been able to replicate. That doesn’t require being the best at replicating. The best, it is true, will be represented with the most copies, but there will also exist copies of those that are merely okay at replicating; even ones that are lousy, as long as they can replicate at all. The most successful don’t kill off the less successful. Only those totally failing to adapt to their environment die out.
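
A toy model makes the “fit, not fittest” point vivid. Give three gene variants different replication rates and let them all run (assuming, crucially, no hard cap on shared resources): the best comes to dominate the mix, but the merely-okay ones never vanish:

```python
# "Survival of the fit": three replicators with different replication
# rates; none is driven extinct by the others (no resource cap assumed).
populations = {"best": 1.0, "okay": 1.0, "lousy": 1.0}
rates = {"best": 1.10, "okay": 1.05, "lousy": 1.01}

for _ in range(100):  # 100 generations
    for variant in populations:
        populations[variant] *= rates[variant]

total = sum(populations.values())
for variant, size in populations.items():
    print(f"{variant}: {size / total:.2%} of the total")
# best ~99%, okay ~0.9%, lousy ~0.02%: dominated, but still present and growing.
```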

That’s why there are a zillion different varieties of insects in the Amazon rain forest.

But Dyson’s larger point is that for humans, again, cultural evolution outstrips the biological, and this is certainly true. As Dyson notes, language is a huge factor (unique to humans) driving cultural evolution. And while biological evolution does tend toward ever greater diversification, human cultural evolution is actually pushing us in the opposite direction. The degree of human diversity is being collapsed by our cultural evolution — not only our biological diversity, in “races” whose separateness increasingly breaks down, but also cultural diversity, with ancient barriers that separated human groups into combative enclaves breaking down too, so that it is more and more appropriate to speak of a universal humanity.

Humans becoming gods — or chips in a cosmic computer?

May 23, 2019

Yuval Noah Harari is a thinker of Big Ideas, with a capital B and a capital I. An Israeli historian, he wrote Sapiens: a Brief History of Humankind, about how we got where we are. Where we’re going is addressed in the sequel, Homo Deus: A Brief History of Tomorrow.

The title implies man becoming God. But there’s a catch.

Harari sees us having experienced, in the last few centuries, a humanist revolution. With the ideas of the Enlightenment triumphant — science trumping superstition, and the liberal values of the Declaration of Independence — freedom in both the political and economic spheres — trumping autocracy and feudalism. As the word “humanist” implies, these values exalt the human, the individual human, as the ultimate source of meaning. We find meaning not in some deity or cosmic plan but in ourselves and our efforts to make our lives better. We do that through deploying our will, using our rationality to make choices and decisions — both in politics, through democratic voting, and in economics, through consumer choice.

But Harari plays the skunk at this picnic he’s described. The whole thing, he posits, rests upon the assumption that we do make choices and decisions. But what if we actually don’t? This is the age-old argument about free will. Harari recognizes its long antecedents, but asserts that the question has really, finally, been settled by science, something he discusses at length. The more science probes into our mental processes, the clearer it becomes that there’s no “there” there. That is, the idea that inside you there’s a master controller, a captain at the helm, is a metaphor with no actual reality. We don’t “make” decisions and choices. It’s more like they happen to us.

As Schopenhauer said (Harari strangely fails to quote him), “a man can do what he wants, but cannot will what he wants.”

And if we humans are not, in any genuine sense, making choices and decisions through a conscious thinking process — but rather are actuated by deterministic factors we can neither see nor control — in politics, economics, and even in how we live our lives — what does that mean for the humanist construct of valorizing those choices above all else?

There’s a second stink-bomb Harari throws into the humanist picnic. He says humanism valued the individual human because he or she was, in a very tangible way, valuable. Indeed, indispensable. Everything important in society rested on human participation. The economy required people engaged in production. Human agents were required to disseminate the information requisite for progress to occur and spread. A society even needed individual humans to constitute the armies they found so needful.

But what if all that ceases being true? Economic production is increasingly achieved through robots and artificial intelligences. They are also taking care of information dissemination. Even human soldiers are becoming obsolete (as, eventually, will the need for them). Thus Harari sees humans becoming useless irrelevancies.

Or at least most of us. Here’s another stink-bomb. Liberal humanist Enlightenment values also rested fundamentally on the idea of human equality. Not literal equality, of course, in the sense of everyone being the same, or even having the same conditions of life. Rather it was equality in the ineffable sense of value and dignity. Spiritual equality, if you will.

And indeed, the Enlightenment/humanist revolution did go a long way toward that ideal, as a philosophical concept that was increasingly powerful, but also as a practical reality. Despite very real wealth inequality, there has (especially in the advanced nations) actually been a great narrowing of the gap between the rich and the rest in terms of quality of life. Earlier times were in contrast generally characterized by a tiny elite living poshly while the great mass of peasants were immured in squalor.

Harari thinks we’re headed back to that, when most people become useless. We may continue to feed them, but the gap between them and the very few superior beings will become a chasm. I’ve previously written about prospects for virtual immortality, which will probably not be available to the mass underclass.

What will that do to the putative ideal of human equality?

Having rejected the notion of human beings as autonomous choice-makers, Harari doesn’t seem to think we do possess any genuine ultimate value along the lines that humanism posits. Instead, we are just biological algorithms. To what purpose?

Evolutionary biology (as made clear in Richard Dawkins’s The Selfish Gene) tells us that, at least as far as Nature is concerned, life’s only purpose is the replication of genes. But that’s a tricky concept. It isn’t a purpose in any conscious, intentional sense, of course. Rather, it’s simply a consequence of the brute mathematical fact that if a gene (a set of molecules) is better at replicating than some other gene, the former will proliferate more, and the world will be filled with its progeny. No “meaning” to be seen there.

But Harari takes it one step further back. The whole thing is just a system for processing information (or “data”). As I understand it, that’s his take on what “selfish gene” biology really imports. And he applies the same concept to human societies. The most successful are the ones that are best at information processing. Democracy beats tyranny because democracy is better at information processing. Ditto for free market capitalism versus other economic models. At least till now; Harari thinks these things may well cease being true in the future.

This leads him to postulate what the religion of the future will be: “Dataism.” He sees signs of it emerging already. This religion would recognize that the ultimate cosmic value is not some imagined deity’s imagined agenda, but information processing. Which Harari thinks has the virtue of being true.

So the role of human beings would be to serve that ultimate cosmic value. Chips in the great computer that is existence. Hallelujah! But wait — artificial systems will do that far better than we can. Where will that leave us?

Here’s what I think.

Enlightenment humanist values have had a tremendous positive effect on the human condition. But Harari writes as though this triumph is complete. Maybe so on New York’s Upper East Side, but in the wider world, not so much. Far from being ready to progress from Harari’s Phase II to Phase III (embracing Dataism), much of humanity is still trying to get from Phase I to Phase II. The Enlightenment does not reign everywhere. Anti-scientific, religious, and superstitious beliefs remain powerful. Democracy is under assault in many places, and responsible citizenship is crumbling. Look at the creeps elected in Italy (and America).

Maybe this is indeed a reaction to what Harari is talking about: humans are becoming less valuable, and they feel it, striking out in elections like Italy’s and America’s and in the Brexit vote, while autocrats and demagogues like Erdogan and Trump exploit such insecurities. In this respect Harari’s book complements Tom Friedman’s, which I’ve reviewed, arguing that the world is now changing faster than people, institutions, and cultures can keep up with and adapt to.

Free will I’ve discussed before too. I fully acknowledge the neuroscience saying the “captain at the helm” self is an illusion, and Schopenhauer was right that our desires are beyond our control. But our actions aren’t. As legal scholar Jeffrey Rosen has observed, we may not have free will, exactly, but we do have free won’t. The capability to countermand impulses and control our behavior. Thus, while the behavior of lighting up is, for a smoker, determinism par excellence, smokers can and do quit.

You might reply that quitting too is driven by deterministic factors, but I think this denies the reality of human life. The truth is that our thought and behavior are far too complex to be reduced to simplistic Skinnerian determinism.

The limits of a deterministic view are spotlighted by an example Harari himself cites: the two Koreas. Their deterministic antecedents were extremely similar, yet today the two societies could not be more different. Accidents of history — perhaps a sort of butterfly effect — made all the difference. Such effects also come into play when one looks at an individual human from the standpoint of determinism.

Harari’s arguments that humans are losing value, and that we’re anyway nothing but souped-up information processors, I will take together. Both ideas overlook that the only thing in the cosmos that can matter and have meaning is the feelings of beings capable of feeling. (I keep coming back to that because it’s really so central.) The true essence of humanist philosophy is that individual people matter not because of what we produce but because of what we are: beings capable of feeling. Nothing else matters, or can matter.

The idea of existence as some vast computer-like data processor may be a useful metaphor for understanding its clockwork. But it’s so abstract a concept I’m not really sure. And in any case it isn’t really relevant to human life as it’s actually lived. We most certainly do not live it as akin to chips in a cosmic computer. Instead we live it through feelings experienced individually which, whatever one can say about how the brain works, are very real when felt. Once again, nothing can matter except insofar as it affects such feelings.

I cannot conceive of a future wherein that ceases being true.

My pro basketball experience

March 31, 2019

This pic of me at the game didn’t come out so good

Last Sunday we went to Boston for a Celtics game. I’m no sports fan. In fact, the last pro sports event I attended was a Dodgers baseball game. When they were still in Brooklyn (and Ike was president).

But my wife is a basketball aficionado, and we’ve been hosting a gal from Somaliland who plays it in high school. So I went with them.


I really enjoyed the fan-cam and people’s reactions to seeing themselves on the jumbotron. Most didn’t immediately realize they were having their fifteen nanoseconds of fame. A few never did, eyes glued to their phones. Most did exuberant dancing and arm-waving. One woman grabbed her husband’s head and kissed him on the lips. But I thought the most romantic one was the gal holding up a sign saying, “Marcus Smart will you marry me?” — until (silly me) I learned Smart is a Celtics player, not (presumably) her inamorato.

The game itself was less entertaining. Very much the same thing repeated over and over. Speaking of repetition, the jumbotron kept showing the word “DEFENSE” in giant block letters crashing down and crushing a bunch of what appeared to be pick-up sticks. And the crowd would duly pick up the chant, “DEFENSE! DEFENSE!” I waited, in vain, for a little offense; especially as the Celtics’ defense was being crushed by the San Antonio Spurs.


They lost 486 to 9. Or something like that.

Wrong

I am no basketball expert. Yet I could have advised one thing to improve their score: shooting free throws underhand (“granny style”) rather than overhead. Studies have in fact been done, and they show the underhand toss gives a higher success rate. Yet players universally ignore this. Why? They think it looks girly, not macho. So Vince Lombardi was actually wrong — winning isn’t the only thing.


Anyhow, some fans were deflated by the Celtics’ drubbing. Some even left early, in disgust, or perhaps to avoid the traffic crush. But most seemed to have a good time nevertheless. Even sports nuts ultimately understand that these games are Not Really Truly Important. They’re harmless. At least we no longer gather in stadiums to watch combatants literally kill each other. And at least these Celtics fans wore green hats, not red ones, and their chants weren’t hateful.

And I achieved my own personal goal for the evening: home and snug in bed by 1:30 AM.

The truth about vaccines, autism, measles, and other illnesses

February 26, 2019

The left derides the right for science denialism, on evolution and climate change. But many on the left have their own science blind spots, on GM foods and vaccination.

The anti-vax movement is based on junk science. The fraudulent study that started the whole controversy, by Andrew Wakefield, supposedly linking the MMR vaccine to autism, has been retracted and totally debunked (and Wakefield stripped of his medical license). The true causes of autism remain debatable, but in the wake of Wakefield there have been numerous (genuine) scientific studies, and now at least one thing can be ruled out with certainty: vaccination.

“But my kid became autistic right after vaccination” — we hear this a lot. Post hoc ergo propter hoc (“after this, therefore because of this”) is a logical fallacy. One thing may follow another with no causal link. Kids are typically scheduled for vaccinations right around the age autism first shows up (the MMR shot is usually given at 12 to 15 months, and autism’s first signs typically appear in the second year). It’s just coincidence.
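To see how easily that trap springs, here’s a toy simulation in Python. The age windows below are illustrative assumptions, not clinical data; the point is only that when two independent events have overlapping timing, “onset right after the shot” happens often by chance alone.

```python
import random

# Toy model of the post hoc trap: vaccination age and autism-onset age
# are drawn independently, yet onset often lands just after a shot.
# All timing windows are illustrative assumptions, not clinical data.

random.seed(42)
TRIALS = 100_000
near_misses = 0

for _ in range(TRIALS):
    shot_age = random.uniform(12, 15)    # assumed shot between 12 and 15 months
    onset_age = random.uniform(12, 24)   # assumed first signs between 12 and 24 months
    if 0 <= onset_age - shot_age <= 1:   # onset within one month after the shot
        near_misses += 1

print(f"Onset within a month after a shot: {near_misses / TRIALS:.1%}")
# Prints roughly 8%: even with zero causal link, about one family in
# twelve would see first signs "right after vaccination."
```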

Anti-vaxers throw up a flurry of other allegations of harm, and keep insisting science hasn’t answered them. Not so. All such claims have been conclusively refuted. True, it’s possible to have a bad reaction to any injection, but with vaccination such cases are so extremely rare that all the fearmongering is totally disproportionate. The fundamental safety of vaccines is proven beyond any rational doubt.

I heard it reported that parents objecting to vaccination actually tend to be smarter than average. Proving you can be too smart for your own good. Tom Nichols’s book The Death of Expertise shows that education often leads people to overrate their own knowledge, making them confident enough to simply reject conventional medical science. They defer instead to a movement rooted in hostility toward elites and experts of all stripes, and in receptiveness to conspiracy theories: ready to believe that big pharma, the medical establishment, and of course the government all promote vaccination for evil purposes. People go online, find all this nonsense, and because it fits their pre-existing mindset, they become impervious to the facts.

Still, we’re told this is a free country and people should be allowed to make these decisions for themselves and their own children. Such pleas resonate with my libertarian instincts; I don’t like government telling us what to do. But the vaccination issue isn’t so simple. Children are unable to choose for themselves. While parents are free to raise kids as they see fit, we don’t allow child abuse. And the law steps in, rightly, when Christian Scientists for example want to deny their kids needed medical treatment.

The same principle should apply to vaccination. Indeed, more so — because parental decisions here don’t just affect their own kids. When a high enough share of a population is vaccinated, a disease is blocked from propagating, so even the unvaccinated are safe. It’s called “herd immunity.” But with enough unvaccinated potential victims, the disease can get a toehold and spread. Vaccinated people are still safe, but not babies too young for vaccination, or people who can’t be vaccinated for various legitimate medical reasons.
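The arithmetic behind herd immunity is simple enough to spell out. Call R0 the basic reproduction number: the average number of people one case infects in a fully susceptible population. If a fraction p of the population is immune, each case infects on average R0 × (1 − p), and the disease dies out once that falls below one, i.e., once p exceeds 1 − 1/R0. A minimal sketch (the R0 figures are commonly cited ballpark values, not exact measurements):

```python
# Herd immunity threshold: the immune share p at which each case
# infects fewer than one person on average, i.e. p > 1 - 1/R0.
# R0 values are commonly cited ballpark figures, not exact numbers.

DISEASES = {
    "measles": 15,         # often quoted as 12-18
    "whooping cough": 15,  # often quoted as 12-17
    "polio": 6,            # often quoted as 5-7
}

for name, r0 in DISEASES.items():
    threshold = 1 - 1 / r0
    print(f"{name}: R0 ~ {r0} -> needs ~{threshold:.0%} vaccinated")

# measles: R0 ~ 15 -> needs ~93% vaccinated
# For the most contagious diseases, even a few percent of refusers
# is enough to break the barrier.
```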

Our herd immunity is now in fact being broken by widespread refusal of vaccination. Thus dangerous illnesses like whooping cough and measles, which had been virtually eradicated, are making a big comeback, with sharply rising infection rates.

This is a serious public health issue, and for once the solution is simple. Vaccination must be mandatory, absent valid medical reasons. Opt-outs on religious or “philosophical” grounds should be ended. There are no arguably legitimate religious or other doctrines that could justify refusal to vaccinate. These are just pretexts by people suckered by the pseudo-scientific anti-vax campaign.

We all should be free to do as we please, as long as it harms no others. The freedoms that matter are living as one chooses, and self-expression. Requiring vaccination does not violate these freedoms in any meaningful way, while refusing it does harm others. You might argue you have a right against unwanted injections; but an injection is a far less drastic impingement upon personal freedom than quarantining people with contagious illnesses, which we rightly do too. Either way, personal freedom is surely trumped by society’s right to protect others from disease.

To anti-vaxers, the minuscule risk from vaccination may seem larger than the risk from illnesses like whooping cough. That’s only because vaccination had practically eradicated those diseases. Anti-vaxers are getting a free ride from the herd immunity conferred by the vaccination of others. Anti-vax parents act as though only their own kids matter, not other kids or the herd immunity that protects everyone. Where is the social solidarity? Doing something because it’s good for all of us together?

Vaccination is a fantastic accomplishment of humankind, conquering the dread specters of so many diseases that afflicted life, and brought early death, throughout most of history. If you want to shout from the rooftops arguing that vaccination is a devil’s plot, you should have a right to do so. As long as you’re vaccinated.