Archive for the ‘Science’ Category

“Science for Heretics” — A nihilistic view of science

August 10, 2019

Physicist Barrie Condon has written Science for Heretics: Why so much of science is wrong. It basically argues that science cannot really understand the world, and maybe shouldn’t even try. The book baffles me.

It’s full of sloppy mistakes (many misspelled names). It’s addressed to laypeople and does not read like a serious science book. Some seems downright crackpot. Yet, for all that, the author shows remarkably deep knowledge, understanding, and even insight into the scientific concepts addressed, often explaining them quite lucidly in plain English. Some of his critiques of science are well worth absorbing. And, rather than the subtitle’s “science is wrong,” the book is really more a tour through all the questions it hasn’t yet totally answered.

A good example is the brain. We actually know a lot about its workings. Yet how they result in consciousness is a much harder problem.

Condon’s first chapter is “Numbers Shmumbers,” about the importance of mathematics in science. His premise is that math is divorced from reality and thereby leads science into black holes of absurdity, like . . . well, black holes.* He starts with 1+1=? — whose real world answer, he says, is never 2! Because that answer assumes each “1” is identical to the other, while in reality no two things are ever truly identical. For Condon, this blows up mathematics and all the science incorporating it.

But identicality is a red herring. It’s perfectly valid to say I have two books, even if they’re very different, because “books” is a category. One book plus one book equals two books.

Similarly, Condon says that in the real world no triangle’s angles equal 180 degrees because you can never make perfectly straight lines. Nor can any lines be truly parallel. And he has fun mocking the concepts of zero and infinity.

However, these are all concepts. That you can’t actually draw a perfect triangle doesn’t void the concept. This raises the age-old question (which Condon nibbles at) of whether mathematics is something “out there” as part of the fabric of reality, or just something we cooked up in our minds. My answer: we couldn’t very well have invented a mathematics with 179 degree triangles. The 180 degrees (on flat surfaces!) is an aspect of reality — which we’ve discovered.
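That the 180-degree sum can be checked numerically, without drawing anything, makes the point: it’s a property of flat-plane geometry, not of pencil lines. A minimal sketch (the triangle’s coordinates are arbitrary choices of mine):

```python
import math

# Sanity-check the 180-degree claim for an arbitrary scalene triangle
# on the flat plane (coordinates invented for illustration).
def angle_at(p, a, b):
    """Interior angle at vertex p, in degrees."""
    v1 = (a[0] - p[0], a[1] - p[1])
    v2 = (b[0] - p[0], b[1] - p[1])
    dot = v1[0] * v2[0] + v1[1] * v2[1]
    return math.degrees(math.acos(dot / (math.hypot(*v1) * math.hypot(*v2))))

A, B, C = (0, 0), (7, 1), (2, 5)
total = angle_at(A, B, C) + angle_at(B, A, C) + angle_at(C, A, B)
print(total)  # very close to 180, up to floating-point rounding
```

Any coordinates you pick give the same sum, which is the sense in which the 180 degrees is discovered rather than drawn.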

A key theme of the book is that reality is complex and messy, so the neat predictions of scientific theory often fail. A simplified high school picture may indeed be too simple or even wrong (like visualizing an atom resembling the solar system). But this doesn’t negate our efforts to understand reality, or the value of what we do understand.

Modern scientific concepts do, as Condon argues, often seem to violate common sense. Black holes for example. But the evidence of their reality mounts. Common sense sees a table as a solid object, but we know from science that it’s actually almost entirely empty space. In fact, the more deeply we peer into the atomic and even sub-atomic realms, the clearer it becomes that we never get to anything solid.

Condon talks about chaos theory, and how it messes with making accurate predictions about the behavior of any system. Weather is a prime example. Because the influencing factors are so complex that a tiny change in starting conditions can mean a big difference down the line. Fair enough. But then — exemplifying what’s wrong with this book — he says of chaos theory, “[t]his new, more humble awareness marked a huge retreat by science. It clearly signaled its inherent limitations.” Not so! Chaos theory was not a “retreat” but an advance, carrying to a new and deeper level our understanding of reality. (I’ve written about chaos theory and its implications, very relevantly to Condon’s book: https://rationaloptimist.wordpress.com/2017/01/04/chaos-fractals-and-the-dripping-faucet/)
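That sensitivity to starting conditions is easy to see in miniature. Here’s a sketch using the logistic map, a standard textbook chaotic system (my choice of illustration, not an example from Condon’s book):

```python
# Logistic map x -> r*x*(1-x), chaotic at r = 4: two trajectories
# starting a billionth apart soon bear no resemblance to each other.
def trajectory(x0, r=4.0, steps=50):
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1.0 - xs[-1]))
    return xs

a = trajectory(0.2)
b = trajectory(0.2 + 1e-9)  # perturb the start by one part in a billion

diffs = [abs(x - y) for x, y in zip(a, b)]
print(diffs[5], max(diffs))  # still tiny early on; large later
```

The system is perfectly deterministic and lawful; it’s prediction from imperfect measurement that fails, which is why chaos theory reads as an advance in understanding rather than a retreat.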

After reading partway, I was asking myself, what’s Condon really getting at? He’s a very knowledgeable scientist. But if science is as futile as he seems to argue — then what? I suspected Condon might have gone religious, so I flipped to the last chapter, expecting to find a deity or some other sort of mysticism. But no. Condon has no truck with such stuff either.

He does conclude by saying “we need to profoundly re-assess how we look at the universe,” and “who knows what profound insights may be revealed when we remove [science’s] blinkers.” But Condon himself offers no such insights. Instead (on page 55) he says simply that “we are incapable of comprehending the universe” and “there are no fundamental laws underlying the universe to begin with. The universe just is the way it is.” (My emphasis)

No laws? Newton’s inverse square law of gravitation is a pretty good descriptor of how celestial bodies actually behave. A Condon might say it doesn’t exactly explain the orbit of Mercury, which shows how simple laws can fail to model complex reality. But Einstein’s theory was a refinement to Newton’s — and it did explain Mercury’s orbit.

So do we now know everything about gravitation? Condon makes much of how galaxies don’t obey our current understanding, if you only count visible matter; so science postulates invisible “dark matter” to fix this. Which Condon derides as a huge fudge factor. And I’m actually a heretic myself on this, having written about an alternate theory that would slightly tweak the laws of gravitation making “dark matter” unnecessary (https://rationaloptimist.wordpress.com/2012/07/23/there-is-no-dark-matter/). But here is the real point. We may not yet have gravitation all figured out. But that doesn’t mean the universe is lawless.

Meantime, you might wonder how, if our scientific understandings were not pretty darn good, computers could work and planes could fly. Condon responds by saying that actually, “our technology rarely depend[s] on scientific theory.” Rather, it’s just engineering. “Engineers have learnt from observation and experience,” and “[u]nburdened by theory they were . . . simply observing regularities in the behavior of the universe.”**

And how, pray tell, do “regularities in the behavior of the universe” differ from laws? In fact, a confusion runs through the book between science qua “theory” (Condon’s bête noire) and science qua experimentation revealing how nature behaves. And what does it mean to say, “the universe just is the way it is?” That explains nothing.

But it can be the very first step in a rational process of understanding it. Recognizing that it is a certain way, rather than some other way (or lawless). That there must be reasons for its being the way it is. Reasons we can figure out. Those reasons are fundamental laws. That’s science.

And, contrary to the thrust of Condon’s book, we have gained a tremendous amount of understanding. The very fact that he could write it — after all, chock full of science — and pose all the kinds of questions he does — testifies to that understanding. Quantum mechanics, for example, which Condon has a field day poking fun at, does pose huge puzzles, and some of our theories may indeed need refinement. Yet quantum mechanics has opened for us a window into reality, at a very deep level, that Aristotle or Eratosthenes could not even have imagined.

Condon strangely never mentions Thomas Kuhn, whose seminal The Structure of Scientific Revolutions characterized scientific theories as paradigms, a new one competing against an old one, with no scientific way to choose between them until one prevails. You might thus see no reason to believe anything science says, because it can change. But modern science doesn’t typically lurch from one theory to a radically opposing one. Kuhn’s work was triggered by his realizing that Aristotle’s physics was not a step toward modern theories but totally wrong. However, Aristotle wasn’t a scientist at all, and did no experimentation; he was an armchair thinker. Science is in fact a process of homing in ever closer to the truth through interrogating reality.

Nor does Condon discuss Karl Popper’s idea of science progressing by “falsification.” Certitude about truth may be elusive, but we can discover what’s not true. A thousand white swans don’t prove all swans are white, but one black swan disproves it.

And as science thusly progresses, it doesn’t mean we’ve been fools or deluded before. Newton said that if he saw farther, it’s because he stood on the shoulders of giants. And what Newton revealed about motion and gravity was not overturned by Einstein but instead refined. Newton wasn’t wrong. And those who imagine Darwinian evolution is “just a theory” that future science may discard will wait in vain.

Unfortunately, such people will leap upon Condon’s book as confirmation for their seeing science (but not the Bible) as fallible.*** Thinking that because science doesn’t know everything, they’re free to disregard it altogether, substituting nonsense nobody could ever possibly know.

Mark Twain defined faith as believing what you know ain’t so. Science is not a “faith.” Nor even a matter of “belief.” It’s the means for knowing.

*But later he spends several pages on the supposed danger of the Large Hadron Collider creating black holes (that Condon doesn’t believe in) and destroying the world. Which obviously didn’t happen.

**But Condon says (misplaced) reliance on theory is increasingly superseding engineering know-how, with bad results, citing disasters like the Challenger with its “O” rings. Condon’s premise strikes me as nonsense; and out of literally zillions of undertakings, zero disasters would be miraculous.

***While Condon rejects “intelligent design,” he speculates that Darwinian natural selection isn’t the whole story — without having any idea what the rest might be.


Fantasyland: How America Went Haywire

July 3, 2019

(A condensed version of my June 18 book review talk)

In this 2017 book Kurt Andersen is very retro: he believes in truth, reason, science, and facts. But he sees today’s Americans losing their grip on those. Andersen traces things back to the Protestant Reformation, which preached that each person decides what to believe.

Religious zealotry has repeatedly afflicted America. But in the early Twentieth Century, Andersen says, it seemed to be fizzling out. Christian fundamentalism was seen as something of a joke, culminating with the 1925 Scopes “monkey” trial. But evangelicals have made a roaring comeback. In fact, American Christians today are more likely than ever to be fundamentalist, and fundamentalism has become more extreme. Fewer Christians now accept evolution, and more insist on biblical literalism.

Other fantasy beliefs have also proliferated. Why? Andersen discusses several factors.

First he casts religion itself as a gateway drug. Such a suspension of critical faculties warps one’s entire relationship with reality. So it’s no coincidence that the strongly religious are often the same people who indulge in a host of other magical beliefs. The correlation is not perfect. Some religious Americans have sensible views about evolution, climate change, even Trump — and some atheists are wacky about vaccination and GM foods. Nevertheless, there’s a basic synergy between religious and other delusions.

Andersen doesn’t really address tribalism, the us-against-them mentality. Partisan beliefs are shaped by one’s chosen team. Climate change denial didn’t become prevalent on the right until Al Gore made climate a left-wing cause. Some on the left imagine Venezuela’s Maduro regime gets a bum rap.

Andersen meantime also says popular culture blurs the line between reality and fantasy, with pervasive entertainment habituating us to a suspension of disbelief. I actually think this point is somewhat overdone. People understand the concept of fiction. The problem is with the concept of reality.

Then there’s conspiracy thinking. Rob Brotherton’s book Suspicious Minds: Why We Believe Conspiracy Theories says we’re innately primed for them, because in our evolution, pattern recognition was a key survival skill. That means connecting dots. We tend to do that, even if the connections aren’t real.

Another big factor, Andersen thinks, was the “anything goes” 1960s counterculture, partly a revolt against the confines of rationality. Then there’s post-modernist relativism, considering truth itself an invalid concept. Some even insist that hewing to verifiable facts, the laws of physics, biological science, and rationality in general, is for chumps. Is in fact an impoverished way of thinking, keeping us from seeing some sort of deeper truth. As if these crackpots are the ones who see it.

Then along came the internet. “Before,” writes Andersen, “cockamamie ideas and outright falsehoods could not spread nearly as fast or widely, so it was much easier for reason and reasonableness to prevail.” Now people slurp up wacky stuff from websites, talk radio, and Facebook’s so-called “News Feed” — really a garbage feed.

Andersen considers “New Age” spirituality a new form of American religion. He calls Oprah its Pope, spreading the screwball messages of a parade of hucksters, like Eckhart Tolle, and the “alternative medicine” promoter Doctor Oz. Among these so-called therapies are homeopathy, acupuncture, aromatherapy, reiki, etc. Read Wikipedia’s scathing article about such dangerous foolishness. But many other mainstream gatekeepers have capitulated. News media report anti-scientific nonsense with a tone of neutrality if not acceptance. Even the U.S. government now has an agency promoting what’s euphemized as “Complementary and Integrative Health;” in other words, quackery.

Guns are a particular focus of fantasy belief. Like the “good guy with a gun.” Who’s actually less a threat to the bad guy than to himself, the police, and innocent bystanders. Guns kept to protect people’s families mostly wind up shooting family members. Then there’s the fantasy of guns to resist government tyranny. As if they’d defeat the U.S. military.

Of course Andersen addresses UFO belief. A surprising number of Americans report being abducted by aliens, taken up into a spaceship to undergo a proctology exam. Considering that the nearest star is literally 24 trillion miles away, would aliens travel that far just to study human assholes?

A particularly disturbing chapter concerns the 1980s Satanic panic. It began with so-called “recovered memory syndrome.” Therapists pushing patients to dredge up supposedly repressed memories of childhood sexual abuse. (Should have been called false memory syndrome.) Meantime child abductions became a vastly overblown fear. Then it all got linked to Satanic cults, with children allegedly subjected to bizarre and gruesome sexual rituals. This new witch hunt culminated with the McMartin Preschool trial. Before the madness passed, scores of innocent people got long prison terms.

A book by Tom Nichols, The Death of Expertise, showed how increasing formal education doesn’t actually translate into more knowledge (let alone wisdom or critical thinking). Education often leads people to overrate their knowledge, freeing them to reject conventional understandings, like evolution and medical science. Thus the anti-vaccine insanity.

Another book, Susan Jacoby’s The Age of American Unreason, focuses on our culture’s anti-intellectual strain. Too much education, some people think, makes you an egghead. And undermines religious faith. Yet Jacoby also notes how 19th Century Americans would travel long distances to hear lecturers like Robert Ingersoll, the great atheist, and Huxley the evolutionist. Jacoby also vaunts 20th century “Middlebrow” American culture, with “an affinity for books; the desire to understand science; a strong dose of rationalism; above all, a regard for facts.”

Today in contrast there’s an epidemic of confirmation bias: people embracing stuff that supports pre-existing beliefs, and shutting out contrary information. Smarter folks are actually better at confabulating rationalizations for that. And how does one make sense of the world and of new information? Ideally by integrating it with, and testing it against, your body of prior knowledge and understanding. But many Americans fall short there — blank slates on which rubbish sticks as well as truth.

I also think reality used to be more harsh and unforgiving. To get through life you needed a firm grip on reality. That has loosened. The secure, cushy lives given us by modernity — by, indeed, the deployment of supreme rationality in the age of science — free people to turn their backs on that sort of rationality and indulge in fantasy.

Andersen’s subtitle is How America Went Haywire. As if that applies to America as a whole. But we are an increasingly divided nation. Riven between those whose faith has become more extreme and those moving in the opposite direction; which also drives political polarization. So it’s not all Americans we’re talking about.

Still, the haywire folks are big shapers of our culture. And there are real costs. Anti-vaccine hysteria undermines public health. The 1980s child threat panic ruined lives. Gun madness kills many thousands. And of course they’ve given us a haywire president.

Yet is it the end of the world? Most Americans go about their daily lives, do their jobs, in a largely rational pragmatic way (utilizing all the technology the Enlightenment has given). Obeying laws, being good neighbors, good members of society. Kind, generous, sincere, ethical people. America is still, in the grand sweep of human history, an oasis of order and reasonableness.

Meantime religious faith is collapsing throughout the advanced world, and even in America religion, for all its seeming ascendancy, is becoming more hysterical because it is losing. The younger you are, the less religious you are likely to be. And there are signs that evangelical Christianity is being hurt by its politicization, especially its support for a major moral monster.

I continue to believe in human progress. That people are capable of rationality, that in the big picture rationality has been advancing, and it must ultimately prevail. That finally we will, in the words of the Bible itself, put childish things away.

Why does evolution produce such diversity?

June 26, 2019

A science writer friend pointed me to a recent “Edge” essay by Freeman Dyson (https://www.edge.org/conversation/freeman_dyson-biological-and-cultural-evolution). Dyson, 95, is a truly great mind, which I am not. Nor an evolutionary biologist. Nevertheless —

Dyson begins with the question: why has evolution produced such a vast diversity of species? If “survival of the fittest” natural selection is the mechanism, shouldn’t we expect each ecological niche to wind up occupied by the one species most perfectly adapted? With others losing out in the competition and disappearing. Thus, in the Amazon rain forest, for example, just one variety of insect rather than thousands; and worldwide, maybe only a few hundred species altogether, rather than the millions actually existing (many with only slight differences). Also, we might expect species slimmed down to efficient essentials, not ongepotchket ones (a Yiddish word for “excessively and unaesthetically decorated.”) These things puzzled Darwin himself.

Darwin worked before we knew anything of genes, Dyson points out. He discusses the contributions of several later people. First is Motoo Kimura with the concept of “genetic drift,” an evolutionary mechanism separate from natural selection. It’s the randomness inherent in gene transmission through sexual reproduction. A given gene’s frequency in a large population will vary less than in a small one, where such random fluctuations will loom larger. Like if you make 1000 coin tosses you’ll almost always get very close to 500 heads, whereas with only ten tosses you might well get seven heads, a big deviation. So in small populations such genetic drift can drive evolutionary change faster than in a large population where genetic drift is negligible and slower natural selection is the dominant factor. Thus it’s small populations (often ones that get isolated from the larger mass) that most tend to spin off new species.
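Dyson’s coin-toss comparison can be sketched in a few lines of code; the sample sizes, trial count, and random seed here are my own illustrative choices:

```python
import random

# Coin-toss illustration of genetic drift: the heads fraction
# fluctuates far more in small samples than in large ones.
rng = random.Random(42)  # fixed seed for reproducibility

def heads_fraction(n_tosses):
    return sum(rng.random() < 0.5 for _ in range(n_tosses)) / n_tosses

def spread(n_tosses, trials=1000):
    """Average absolute deviation of the heads fraction from 0.5."""
    fracs = [heads_fraction(n_tosses) for _ in range(trials)]
    return sum(abs(f - 0.5) for f in fracs) / trials

small_spread = spread(10)     # ten tosses per trial
large_spread = spread(1000)   # a thousand tosses per trial
print(small_spread, large_spread)  # small-sample spread is about ten times larger
```

The same statistics apply to gene frequencies: in a small, isolated population the random fluctuations are big enough to push evolution around, while in a large one they wash out.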

Dyson combines this idea with cultural evolution which, for humans in particular, is a much bigger factor than biological evolution. Dyson sees genetic drift involved with big local effects, such as the flourishing of ancient Athens or Renaissance Florence.

Then there’s Ursula Goodenough’s idea that mating paradigms, in particular, seem to change faster than other species characteristics. This too makes for rapid evolutionary jumps in genetically isolated populations. Dyson comments: “Nature loves to gamble. Nature thrives by taking risks. She scrambles mating system genes so as to increase the risk that individual parents will fail to find mates. [This] is part of Nature’s plan.” Because it raises the likelihood that parents who do succeed will birth new species.

And then there’s Richard Dawkins and The Selfish Gene. I keep coming back to that book because this — when fully understood — is a very powerful idea indeed.

It tells us that evolution is all about gene replication and nothing else. Thus I take some issue with Dyson’s language anthropomorphizing “Nature” as gambling. He writes as though Nature wants evolution to occur. But it doesn’t have aims. Nor does a gene “want” to make the most copies of itself; it’s simply that one doing so will be more prevalent in a population. That’s what evolution is.

So taking again Goodenough’s point, supposing any given characteristic (here, a mating paradigm) does result in some copies of the relevant gene failing to replicate, if nevertheless in the long run the characteristic means other copies of the same gene will replicate more, then that gene becomes more prevalent. There’s no “gambling” taking place, and no extra points earned if a new species happens to be created. It’s simply the math of the outcome — more copies of the gene.

I also take issue with Dyson’s associating local cultural flourishing with genetic drift. Whatever happened in Fifth Century BC Athens was a purely cultural phenomenon that had nothing to do with changes in Athenians’ genes. While the local gene pool would have differed a (tiny) bit from other human ones, there’s no basis to imagine there was natural selection favoring genes conducive to artistic flourishing, and in any case there would have been insufficient time for such natural selection to play out.

So — returning to the starting question — why all the diversity? While Dyson does point to some mechanistic aspects of evolution militating in that direction, I think there’s a larger and simpler answer. The problem lies in a syllable. “Survival of the fittest” is not exactly right; it’s really “survival of the fit.” There’s a big difference. It’s not only the fittest that survive; you don’t have to be the fittest; you just have to be fit. It’s not a winner-take-all competition.

This comports with Dawkins’s selfish gene insight. The genes that continue to exist in an environment are those that have been able to replicate. That doesn’t require being the best at replicating. The best, it is true, will be represented with the most copies, but there will also exist copies of those that are merely okay at replicating; even ones that are lousy, as long as they can replicate at all. The most successful don’t kill off the less successful. Only those totally failing to adapt to their environment die out.
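The arithmetic of “survival of the fit” can be sketched as simple compound growth; the growth rates below are invented for illustration, not a real population-genetics model:

```python
# Compound growth of three gene lineages over 50 generations.
# Any lineage with a rate above 1.0 persists; only below-replacement
# lineages dwindle away. Rates are invented for illustration.
def grow(copies, rate, generations=50):
    for _ in range(generations):
        copies *= rate
    return copies

best = grow(1000, 1.10)   # fittest replicator: 10% growth per generation
okay = grow(1000, 1.02)   # merely fit: 2% growth per generation
unfit = grow(1000, 0.90)  # fails to replace itself

print(round(best), round(okay), round(unfit))
```

The fittest lineage ends up with by far the most copies, but the merely-fit lineage also grows; only the one that fails to replace itself disappears. Hence diversity rather than a single winner per niche.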

That’s why there are a zillion different varieties of insects in the Amazon rain forest.

But Dyson’s larger point is that for humans, again, cultural evolution outstrips the biological, and this is certainly true. As Dyson notes, language is a huge factor (unique to humans) driving cultural evolution. And while biological evolution does tend toward ever greater diversification, human cultural evolution is actually pushing us in the opposite direction. The degree of human diversity is being collapsed by our cultural evolution. Not only our biological diversity, in “races” whose separateness increasingly breaks down, but also cultural diversity, with the ancient barriers that separated human groups into combative enclaves breaking down too, so that it is more and more appropriate to speak of a universal humanity.

Humans becoming gods — or chips in a cosmic computer?

May 23, 2019

Yuval Noah Harari is a thinker of Big Ideas, with a capital B and a capital I. An Israeli historian, he wrote Sapiens: A Brief History of Humankind, about how we got where we are. Where we’re going is addressed in the sequel, Homo Deus: A Brief History of Tomorrow.

The title implies man becoming God. But there’s a catch.

Harari sees us having experienced, in the last few centuries, a humanist revolution. With the ideas of the Enlightenment triumphant — science trumping superstition, and the liberal values of the Declaration of Independence — freedom in both the political and economic spheres — trumping autocracy and feudalism. As the word “humanist” implies, these values exalt the human, the individual human, as the ultimate source of meaning. We find meaning not in some deity or cosmic plan but in ourselves and our efforts to make our lives better. We do that through deploying our will, using our rationality to make choices and decisions — both in politics, through democratic voting, and in economics, through consumer choice.

But Harari plays the skunk at this picnic he’s described. The whole thing, he posits, rests upon the assumption that we do make choices and decisions. But what if we actually don’t? This is the age-old argument about free will. Harari recognizes its long antecedents, but asserts that the question has really, finally, been settled by science, something he discusses at length. The more science probes into our mental processes, the clearer it becomes that there’s no “there” there. That is, the idea that inside you there’s a master controller, a captain at the helm, is a metaphor with no actual reality. We don’t “make” decisions and choices. It’s more like they happen to us.

As Schopenhauer said (Harari strangely fails to quote him), “a man can do what he wants, but cannot will what he wants.”

And if we humans are not, in any genuine sense, making choices and decisions through a conscious thinking process — but rather are actuated by deterministic factors we can neither see nor control — in politics, economics, and even in how we live our lives — what does that mean for the humanist construct of valorizing those choices above all else?

There’s a second stink-bomb Harari throws into the humanist picnic. He says humanism valued the individual human because he or she was, in a very tangible way, valuable. Indeed, indispensable. Everything important in society rested on human participation. The economy required people engaged in production. Human agents were required to disseminate the information requisite for progress to occur and spread. A society even needed individual humans to constitute the armies they found so needful.

But what if all that ceases being true? Economic production is increasingly achieved through robots and artificial intelligences. They are also taking care of information dissemination. Even human soldiers are becoming obsolete (as will the need for them). Thus Harari sees humans becoming useless irrelevancies.

Or at least most of us. Here’s another stink-bomb. Liberal humanist Enlightenment values also rested fundamentally on the idea of human equality. Not literal equality, of course, in the sense of everyone being the same, or even having the same conditions of life. Rather it was equality in the ineffable sense of value and dignity. Spiritual equality, if you will.

And indeed, the Enlightenment/humanist revolution did go a long way toward that ideal, as a philosophical concept that was increasingly powerful, but also as a practical reality. Despite very real wealth inequality, there has (especially in the advanced nations) actually been a great narrowing of the gap between the rich and the rest in terms of quality of life. Earlier times were in contrast generally characterized by a tiny elite living poshly while the great mass of peasants were mired in squalor.

Harari thinks we’re headed back to that, when most people become useless. We may continue to feed them, but the gap between them and the very few superior beings will become a chasm. I’ve previously written about prospects for virtual immortality, which will probably not be available to the mass underclass.

What will that do to the putative ideal of human equality?

Having rejected the notion of human beings as autonomous choice-makers, Harari doesn’t seem to think we do possess any genuine ultimate value along the lines that humanism posits. Instead, we are just biological algorithms. To what purpose?

Evolutionary biology (as made clear in Richard Dawkins’s The Selfish Gene) tells us that, at least as far as Nature is concerned, life’s only purpose is the replication of genes. But that’s a tricky concept. It isn’t a purpose in any conscious, intentional sense, of course. Rather, it’s simply a consequence of the brute mathematical fact that if a gene (a set of molecules) is better at replicating than some other gene, the former will proliferate more, and the world will be filled with its progeny. No “meaning” to be seen there.

But Harari takes it one step further back. The whole thing is just a system for processing information (or “data”). As I understand it, that’s his take on what “selfish gene” biology really imports. And he applies the same concept to human societies. The most successful are the ones that are best at information processing. Democracy beats tyranny because democracy is better at information processing. Ditto for free market capitalism versus other economic models. At least till now; Harari thinks these things may well cease being true in the future.

This leads him to postulate what the religion of the future will be: “Dataism.” He sees signs of it emerging already. This religion would recognize that the ultimate cosmic value is not some imagined deity’s imagined agenda, but information processing. Which Harari thinks has the virtue of being true.

So the role of human beings would be to serve that ultimate cosmic value. Chips in the great computer that is existence. Hallelujah! But wait — artificial systems will do that far better than we can. Where will that leave us?

Here’s what I think.

Enlightenment humanist values have had a tremendous positive effect on the human condition. But Harari writes as though this triumph is complete. Maybe so on New York’s Upper East Side, but in the wider world, not so much. Far from being ready to progress from Harari’s Phase II to Phase III (embracing Dataism), much of humanity is still trying to get from Phase I to Phase II. The Enlightenment does not reign everywhere. Anti-scientific, religious, and superstitious beliefs remain powerful. Democracy is under assault in many places, and responsible citizenship is crumbling. Look at the creeps elected in Italy (and America).

Maybe this is indeed a reaction to what Harari is talking about: humans becoming less valuable, and feeling it, striking out in elections like Italy’s and America’s and the Brexit vote, while autocrats and demagogues like Erdogan and Trump exploit such insecurities. In this respect Harari’s book complements Tom Friedman’s, which I’ve reviewed, arguing that the world is now changing faster than people, institutions, and cultures can keep up with and adapt to.

Free will I’ve discussed before too. I fully acknowledge the neuroscience saying the “captain at the helm” self is an illusion, and Schopenhauer was right that our desires are beyond our control. But our actions aren’t. As legal scholar Jeffrey Rosen has observed, we may not have free will, exactly, but we do have free won’t. The capability to countermand impulses and control our behavior. Thus, while the behavior of lighting up is, for a smoker, determinism par excellence, smokers can and do quit.

You might reply that quitting too is driven by deterministic factors, but I think this denies the reality of human life. The truth is that our thought and behavior are far too complex to be reduced to simplistic Skinnerian determinism.

The limits of a deterministic view are spotlighted by an example Harari himself cites: the two Koreas. Their deterministic antecedents were extremely similar, yet today the two societies could not be more different. Accidents of history — perhaps a sort of butterfly effect — made all the difference. Such effects also come into play when one looks at an individual human from the standpoint of determinism.

Harari’s arguments about humans losing value, and that anyway we’re nothing but souped-up information processors, I will take together. Both ideas overlook that the only thing in the cosmos that can matter and have meaning is the feelings of beings capable of feeling. (I keep coming back to that because it’s really so central.) The true essence of humanist philosophy is that individual people matter not because of what we produce but because of what we are: beings capable of feeling. Nothing else matters, or can matter.

The idea of existence as some vast computer-like data processor may be a useful metaphor for understanding its clockwork. But it’s so abstract a concept I’m not really sure. And in any case it isn’t really relevant to human life as it’s actually lived. We most certainly do not live it as akin to chips in a cosmic computer. Instead we live it through feelings experienced individually which, whatever one can say about how the brain works, are very real when felt. Once again, nothing can matter except insofar as it affects such feelings.

I cannot conceive of a future wherein that ceases being true.

My pro basketball experience

March 31, 2019

This pic of me at the game didn’t come out so good

Last Sunday we went to Boston for a Celtics game. I’m no sports fan. In fact, the last pro sports event I attended was a Dodgers baseball game. When they were still in Brooklyn (and Ike was president).

But my wife is a basketball aficionado, and we’ve been hosting a gal from Somaliland who plays it in high school. So I went with them.

 

I really enjoyed the fan-cam and people’s reactions seeing themselves on the jumbotron. Most didn’t immediately realize they were having their fifteen nanoseconds of fame. A few never did, eyes glued to their phones. Most did exuberant dancing and arm-waving. One woman grabbed her husband’s head and kissed him on the lips. But I thought the most romantic one was the gal holding up a sign saying, “Marcus Smart will you marry me?” — until (silly me) I learned Smart is a Celtics player, not (presumably) her inamorata.

The game itself was less entertaining. Very much the same thing repeated over and over. Speaking of repetition, the jumbotron kept showing the word “DEFENSE” in giant block letters crashing down and crushing a bunch of what appeared to be pick-up sticks. And the crowd would duly pick up the chant, “DEFENSE! DEFENSE!” I waited, in vain, for a little offense; especially as the Celtics’ defense was being crushed by the San Antonio Spurs.

 

They lost 486 to 9. Or something like that.

Wrong

I am no basketball expert. Yet I could have offered one tip to improve their score: shooting free throws underhand (“granny style”) rather than overhead. Studies have in fact shown that underhand throws have a higher success rate. Yet players universally ignore this. Why? They think it looks girly, not macho. So Vince Lombardi was actually wrong — winning isn’t the only thing.

 

Anyhow, some fans were deflated by the Celtics’ drubbing. Some even left early, in disgust, or perhaps to avoid the traffic crush. But most seemed to have a good time nevertheless. Even sports nuts ultimately understand that these games are Not Really Truly Important. They’re harmless. At least we no longer gather in stadiums to watch combatants literally kill each other. And at least these Celtics fans wore green hats, not red ones, and their chants weren’t hateful.

And I achieved my own personal goal for the evening: home and snug in bed by 1:30 AM.

The truth about vaccines, autism, measles, and other illnesses

February 26, 2019

The left derides the right for science denialism, on evolution and climate change. But many on the left have their own science blind spots, on GM foods and vaccination.

The anti-vax movement is based on junk science. The fraudulent study that started the whole controversy, by Andrew Wakefield, supposedly linking vaccines and autism, has been totally debunked. The true causes of autism remain debatable, but in the wake of Wakefield there have been numerous (genuine) scientific studies, and now at least one thing can be ruled out with certainty: vaccination.

“But my kid became autistic right after vaccination” — we hear this a lot. Post hoc ergo propter hoc (“after this, therefore because of this”) is a logical fallacy. One thing may follow another with no causal link. Kids are typically scheduled for vaccinations at right around the same age that autism first shows up. It’s just coincidence.

Anti-vaxers throw up a flurry of other allegations of harm, and keep insisting science hasn’t answered them. Not so. All such claims have been conclusively refuted. True, it’s possible to have a bad reaction to any injection, but with vaccination such cases are so extremely rare that all the fearmongering is totally disproportionate. The fundamental safety of vaccines is proven beyond any rational doubt.

I heard it reported that parents objecting to vaccination actually tend to be smarter than average. Proving you can be too smart for your own good. Tom Nichols’s book The Death of Expertise shows that education often leads people to overrate their own knowledge, making them confident enough to simply reject conventional medical science. They defer instead to a movement rooted in hostility toward elites and experts of all stripes, and in receptiveness to conspiracy theories: ready to believe that big pharma, the medical establishment, and of course the government all promote vaccination for evil purposes. People go online and find all this nonsense, and because it fits their pre-existing mindset, they become impervious to the facts.

Still, we’re told this is a free country and people should be allowed to make these decisions for themselves and their own children. Such pleas resonate with my libertarian instincts; I don’t like government telling us what to do. But the vaccination issue isn’t so simple. Children are unable to choose for themselves. While parents are free to raise kids as they see fit, we don’t allow child abuse. And the law steps in, rightly, when Christian Scientists for example want to deny their kids needed medical treatment.

The same principle should apply to vaccination. Indeed, more so — because parental decisions here don’t just affect their own kids. When a high enough share of a population is vaccinated, a disease is blocked from propagating, so even the unvaccinated are safe. It’s called “herd immunity.” But with enough unvaccinated available victims, the disease can get a toehold and spread. Vaccinated people are still safe, but not babies too young for vaccination, and people who can’t be vaccinated, for various legitimate medical reasons.
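The arithmetic behind herd immunity can be sketched in a few lines. The standard epidemiological rule of thumb is that sustained spread stops once a fraction 1 − 1/R0 of a population is immune, where R0 is the basic reproduction number (the average number of people one case infects in a fully susceptible population). The R0 figures below are rough textbook ranges, used purely for illustration:

```python
# Herd immunity threshold sketch: transmission stalls once a fraction
# 1 - 1/R0 of the population is immune. R0 values are illustrative
# textbook figures, not precise measurements.

def herd_immunity_threshold(r0: float) -> float:
    """Fraction of the population that must be immune to block sustained spread."""
    return 1.0 - 1.0 / r0

for disease, r0 in [("measles", 15.0), ("whooping cough", 14.0), ("seasonal flu", 1.5)]:
    print(f"{disease}: R0 ~{r0:g} -> ~{herd_immunity_threshold(r0):.0%} must be immune")
```

For a disease as contagious as measles this works out to roughly 93% or more, which is why even modest pockets of vaccine refusal can let it spread again.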

Our herd immunities are now in fact being broken by the widespread refusal of vaccination. Thus dangerous illnesses, like whooping cough and measles, that had been virtually eradicated, are making a big comeback, with sharply rising infection rates.

This is a serious public health issue, and for once the solution is simple. Vaccination must be mandatory, absent valid medical reasons. Opt-outs on religious or “philosophical” grounds should be ended. There are no arguably legitimate religious or other doctrines that could justify refusal to vaccinate. These are just pretexts by people suckered by the pseudo-scientific anti-vax campaign.

We all should be free to do as we please, as long as it harms no others. The freedoms that matter are living as one chooses, and self-expression. Requiring vaccination does not violate these freedoms in any meaningful way, while refusing it does harm others. You might argue that you have a right against unwanted injections, but they are a far less drastic impingement upon personal freedom than quarantining people with contagious illnesses; and a refuser’s personal freedom is surely trumped by society’s right to protect others from disease.

To anti-vaxers, the minuscule risk from vaccination may seem larger than the risk from illnesses like whooping cough. That’s only because vaccination had practically eradicated those diseases. Anti-vaxers are getting a free ride from the herd immunity conferred by the vaccination of others. Anti-vax parents act as though only their kids matter, other kids and the herd immunity do not. Where is the social solidarity? Doing something because it’s good for all of us together?

Vaccination is a fantastic accomplishment of humankind, conquering the dread specters of so many diseases that afflicted life, and brought early death, throughout most of history. If you want to shout from the rooftops arguing that vaccination is a devil’s plot, you should have a right to do so. As long as you’re vaccinated.

Pachinko by Min Jin Lee — a novel of identity

February 22, 2019

Min Jin Lee

I read this 2017 novel for a book group. A nice thing about such groups is exposure to rewarding reads you’d never otherwise pick up.

Japan occupied Korea from 1910 to 1945. Sunja is born there around 1916. Her mother subsists running a humble boarding house. Teenaged Sunja is pursued, and impregnated, by businessman Koh Hansu. She vaguely expects marriage; but surprise surprise, he already has a wife back in Japan.

Then an ethereal young Korean Christian minister, Isak, rescues Sunja by marrying her. They relocate to Japan, where he has a posting waiting, and live with his brother and sister-in-law. The child is named Noa; later Isak and Sunja have their own son, Mozasu. (Their names are derived from Noah and Moses.) Both eventually wind up running pachinko parlors; pachinko is a pinball-like game very popular in Japan.

But the book’s main focus is on Korean identity in a Japanese culture that despises Koreans. They are stereotyped negatively and suffer systematic discrimination (despite the impossibility of identifying Koreans by appearance). Japan’s forcing many thousands of Korean women into brothels for soldiers during WWII is well known. Japan (unlike Germany) has been recalcitrant on repentance for this and other crimes.

The novel barely mentions those “comfort women,” but describes much other mistreatment suffered by Koreans. Isak is jailed, suspected of insufficient loyalty to the Emperor, and dies from his horrible ordeal.

Koreans living in Japan remain distinctly second-class citizens — if allowed citizenship at all, after generations of residence. Mozasu’s son, in 1989, works there for an investment bank, until he’s screwed over because he’s Korean.

But what really prompts me to write is Noa’s story. (BIG SPOILER ALERT) He didn’t know Koh Hansu was his real father. Koh reappears, now quite wealthy, as Noa’s benefactor, financing his much coveted university education. Noa and his mother Sunja are resistant, but accept Koh’s largesse. But then Noa’s girlfriend meets Koh, sees the resemblance, and taunts Noa with the obvious. Also that Koh must be a yakuza — a gangster.*

These revelations crush Noa. Cursing what his mother did, he runs away, cutting all ties to his family and starting a new life, with a wife and children (and passing as Japanese). He sends Koh money to repay what he’d received. He also sends Sunja money but never divulges contact information. For sixteen years.

Finally Koh locates Noa, now 45, and Sunja goes to him, in his office. The reunion is difficult but doesn’t go too badly. Noa promises to come visit her. Then he shoots himself.

He had thought he’d escaped his parentage, but now must have realized he could not. And he could not live with that.

Koh was indeed a gangster. A nasty piece of work, as revealed in only a few glimpses. But as far as Sunja’s family knew, he was just a “businessman.” Noa’s girlfriend could not have known the truth about Koh; nor could Noa. It was just an unsubstantiated suspicion. Perhaps Noa should have probed further before shooting himself.

Or perhaps that’s nitpicking. The real issue here is the heart of human identity. Noa felt himself irremediably contaminated. He had bad blood.

This idea of “bad blood” reverberates throughout human history. The sins of the father visited upon the sons. How many people have indeed been punished for crimes or derelictions (real or just imagined) by forebears?

It’s the heart of racism. The notion that all members of some group are birds of a feather, sharing some (stereotyped) characteristics. As vividly depicted in this book, where the antipathy of Japanese toward “those people” (Koreans) is a constant.

Here’s some science. Biology is not destiny. Even where genes are indicative of certain behavioral traits (and there are such), genes never determine how any individual will behave in any situation. At most, they may delineate proclivities, but an individual’s actual behavior results from too many variables to be predicted by genes or anything else. And it’s certainly untrue that any human subgroup shares biologically determined behavioral traits (different from other subgroups).

Of course there are human behaviors, genetically evolved, which we share as a species. But they don’t differ among subgroups. And even if there were such subgroup-specific genes, their effect would be overwhelmed by all the other factors influencing a given individual’s personal behavior.

That’s not to deny cultural differences. Cultural groups do have their own characteristics, that’s the definition of culture. But it’s not genetic. Remove an individual at birth from their specific culture, and there’s no innate biological reason for replicating behavior particular to that culture.

So Noa’s human identity was not dictated by his father’s gangsterhood. His blood was no more bad than anyone else’s. It was up to him to shape his own life. And, even if there were gangster genes inherited from his father (a dubious idea), those genes would not anyway determine his own character, which would still be his to create.

You can be what you choose to be.

*An echo of Great Expectations? Noa studies literature — he loves Dickens!

Evolution by natural selection is a fact

February 5, 2019

My recent “free will” essay prompted some comments about evolution (on the Times-Union blog site). One invoked (at verbose length) the old “watchmaker” argument. Nature’s elegant complexity is analogized to finding a watch in the sand; surely it couldn’t have assembled itself by random natural processes. There had to be a watchmaker.

This argument is fallacious because a watch is purpose-built and nature is not. Nature is not the result of a process aimed at producing what we see today, but of one that could just as well have produced an infinity of alternative possibilities.

Look at a Jackson Pollock painting and you could say that to create precisely this particular pattern of splotches must have (like the watch) taken an immense amount of carefully planned work. Of course we know he just flung paint at the canvas. The complex result is what it is, not something Pollock “designed.”

Some see God in a similar role, not evolution’s designer but, rather, just setting it in motion. Could life have arisen out of nowhere, from nothing? Or could the Universe itself? Actually science has some useful things to say about that — better than positing a God who always existed or “stands outside time and space,” or some such woo-woo nonsense. And for life’s beginnings, while we don’t have every “i” dotted and “t” crossed (the earliest life could not have left fossils), we do know the basic story:

Our early seas contained an assortment of naturally occurring chemicals, whose interactions and recombinations were catalyzed by lightning, heat, pressure, and other natural phenomena. Making ever more complex molecules, by the trillion. One of the commonest elements is carbon, very promiscuous at hooking up with other atoms to create elaborate combinations.

Eventually one of those had the property of duplicating itself, by glomming other chemical bits floating by, or by splitting. Maybe that was an extremely improbable fluke. But realize it need only have happened once. Because each copy would go on to make more, and soon they’d be all over the place.

However, the copying would not have been perfect; there’d be occasional slight variations; with some faulty but also some better at staying intact and replicating. Those would spread more widely, with yet more variations, some yet more successful. Developing what biologist Richard Dawkins, in The Selfish Gene, called “survival machines.” Such as a protective coating or membrane. We’ve discovered a type of clay that spontaneously forms such membranes, which moreover divide upon reaching a certain size. So now you’ve got the makings of a primitive cell.

Is this a far-fetched story? To the contrary, given early Earth’s conditions, it actually seems inevitable. It’s hard to imagine it not happening. The 1952 Miller-Urey experiment reproduced those conditions in a test tube and the result was the creation of organic compounds, the “building blocks of life.”

That’s how evolution began. The duplicator molecules became genes (made of DNA). Their “survival machines” became organisms. That’s what we humans really are, glorified copying machines. A chicken is just an egg’s way to make another egg.

Of course DNA and genes, and Nature itself, do nothing with conscious purpose. Replicators competing with each other is simply math. Imagine your computer screen with one blue and one red dot. And a program saying every three seconds the blue dot will make another blue dot; but the red one will make two. Soon your screen will be all red.
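That thought experiment can be run literally; here’s a minimal sketch, using the same setup as above (one dot of each color, blue copying itself once per tick, red copying itself twice):

```python
# Red-vs-blue replicator sketch: each tick, every blue dot makes one more
# blue dot (so blue doubles) while every red dot makes two more (so red
# triples). Red's share of the screen races toward 100% even though both
# populations keep growing.

blue, red = 1, 1
for tick in range(20):
    blue *= 2   # blue population doubles each tick
    red *= 3    # red population triples each tick

red_share = red / (red + blue)
print(f"after 20 ticks, red dots are {red_share:.2%} of the screen")
```

After just twenty ticks red makes up more than 99.9% of all the dots, which is the whole point: a slightly better replicator takes over completely.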

A parable: A king wishes to bestow a reward, and invites the recipient to suggest one. He asks for a single rice grain — on a chessboard’s first square — then two on the second — and so on. The king, thinking he’s getting away cheaply, readily agrees. But before even reaching the final square, it’s all the rice in the kingdom.
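The parable checks out arithmetically; one line of code totals the board:

```python
# Chessboard rice parable: 1 grain on square 1, doubling on each of the
# 64 squares. The grand total is 2**64 - 1 grains.

total = sum(2 ** square for square in range(64))
print(f"total grains: {total:,}")  # 18,446,744,073,709,551,615
```

That’s about 18.4 quintillion grains, vastly more rice than any kingdom ever held.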

This is the power of geometric multiplication. The power of genes replicating, in vast numbers, over vast time scales. (A billion years is longer than we can grasp.) And recall how genes are effectively in competition because occasionally their copies are imperfect (“mutations”), so no two organisms are exactly identical, and some are better at surviving and reproducing. Those supplant the others, just like red supplanted blue on your computer screen. But the process never stops, and in the fullness of time, new varieties evolve into new species. It’s propelled by ever-changing environments, requiring that organisms adapt by changing, or perish. This is evolution by natural selection.

Fossils provide indisputable proof. It’s untrue that there are “missing links.” In case after case, fossils show how species (including humans) have changed and evolved over time. (The horse is a great example. My illustration is from a website actually denying horse evolution, arguing that each of the earlier versions was a stand-alone species, unrelated to one another!)

We even see evolution happening live. Antibiotics changed the environment for bacteria. So drug-resistant bacteria rapidly evolved. Once-rare mutations enabling them to survive antibiotics have proliferated while the non-resistant are killed off.
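A toy simulation, with made-up numbers purely for illustration, shows how fast such selection works. Suppose an antibiotic kills 90% of susceptible bacteria each generation but none of the resistant mutants, and the survivors of both types then double:

```python
# Toy model of selection for antibiotic resistance (illustrative numbers,
# not real microbiology): start with one rare resistant mutant among a
# million susceptibles. Each generation the antibiotic kills 90% of the
# susceptibles, then both types reproduce (double).

susceptible, resistant = 1_000_000.0, 1.0
for generation in range(30):
    susceptible *= 0.1   # antibiotic kills 90% of susceptibles
    susceptible *= 2     # surviving susceptibles double
    resistant *= 2       # resistant bacteria double, unharmed

fraction = resistant / (resistant + susceptible)
print(f"resistant fraction after 30 generations: {fraction:.6f}")
```

Within a few dozen generations — mere hours, for bacteria — the once-vanishingly-rare mutant is essentially the entire population.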

Note that evolution doesn’t mean inexorable progression toward ever more complex or “higher” life forms. Again, the only thing that matters is gene replication (remember that red computer screen). Whatever works at causing more copies to be made is what will evolve. Humans evolved big brains because that happened to be a very successful adaptation. If greater simplicity works better, then an animal will evolve in that direction. There are in fact examples of this.

Another false argument against evolution is so-called “irreducible complexity.” Biochemist Michael Behe claimed something like an eye could never have evolved without a designer, because an incomplete, half-formed eye would be useless, conferring no advantage on an organism. In fact eyes did evolve through a long process, beginning with light-sensitive cells that were primitive motion detectors, not at all useless. They conferred a survival advantage, albeit a small one, but it multiplied over eons and was improved by gradual incremental tweaks. So the eye, far from rebutting evolution, beautifully illustrates how evolution actually proceeds, and refutes any idea of intelligent design.

In fact, because our eyes evolved in the undirected way they did, they’re very sub-optimal. A competent designer would have done far better. He would not have put the wiring in front of the light-sensitive parts, blocking some light, nor bunched the optic nerve fibers to cause a blind spot. So we can’t see well in dim light. Some other animals (like squids) have much better eye design. And wouldn’t a really intelligent design include a third eye in the back?

Evolution by natural selection is the one great fact of biology. Not merely the best explanation for what we see in Nature, but the only possible rational explanation, and one that explains everything. As the geneticist Theodosius Dobzhansky said, “Nothing in biology makes sense except in the light of evolution.”

Consciousness, Self, and Free Will

January 29, 2019

What does it really mean to be conscious? To experience things? To have a self? And does that self really make choices and decisions?

I have wrestled with these issues numerous times on this blog. Recently I gave a talk, trying to pull it all together. Here is a link to the full text: http://www.fsrcoin.com/freewill.html. But here is a condensed version:

It might seem that the more neuroscience advances, the less room there is for free will. We’re told it’s actually an illusion; that even the self is an illusion. But Daniel Dennett, in 2003, wrote Freedom Evolves, arguing that we do have a kind of free will after all.

The religious say evil exists because God gave people free will. But can you really have free will if God is omniscient and knows what you will do? This is tied to the concept of causation, of determinism. Laplace was a French thinker who posited that if a mind (“Laplace’s demon”) could know every detail of the state of the Universe at a given moment, it would know what will happen next. But Dennett says this ignores the factor of random chance. And quantum mechanics tells us that, at the subatomic level at least, things do happen randomly, without preceding causes.

Nevertheless, the deterministic argument against free will says that everything your brain does and decides is a result of causes beyond conscious control. That if you pick chocolate over vanilla, it’s because of something that happened among your brain neurons, whose structure was shaped by your biology, your genes, by everything that happened before. Like a computer program that cannot “choose” how it behaves.

Schopenhauer said, “a man can do what he wants but cannot will what he wants.” In other words, you can choose chocolate over vanilla, but can’t choose to have a preference for chocolate. Or: which gender to have sex with.

And what does the word “you” really mean? This is the problem of the self, of consciousness, entwined with the problem of free will. We all know what having a conscious self feels like. Sort of. But philosopher David Hume said no amount of introspection enabled him to catch hold of his self.

Another philosopher, Rene Descartes, conceived mind as something existing separately from our physical bodies. This “Cartesian dualism” is a false supernatural notion. Instead, mind and self can only be produced by (or emerge from) physical brain activity. There’s no other rational possibility.

Let’s consider how we experience vision. We not only see what’s before us, but also things we remember, or even things we imagine. All of it could be encoded (like in a computer) into 1s and 0s — zillions of them. But then how do “you” see that as a picture? We imagine what’s been called a “Cartesian theatre” (from Descartes again), with a projection screen, viewed by a little person in there (a “homunculus”). But how does the homunculus see? Is there another smaller one inside his brain? And so on endlessly?

A more helpful concept is representation, applicable to all mental processing. Nothing can be experienced directly in the brain. If it’s raining it can’t be wet inside your brain. But your brain constructs a representation of the rain. Like an artist painting a scene. And how exactly does the brain do that? We’re still working on that.

Similarly, what actually happens when you experience something like eating a cookie, or having sex? The experience isn’t mainly in the mouth or genitals but in the mind. By creating (from the sensory inputs) a representation. But then how do “you” (without a homunculus) see or experience that representation? Why, of course, by means of a further representation: of yourself having that experience.

And according to neuroscientist Antonio Damasio, in his book Descartes’ Error, we need yet another, third order representation, so that you not only know it’s raining, but know you know it. Still further, the mind also must maintain a representation of who “you” are. Including information like knowledge of your past, and ideas about your future, which must be constantly refreshed and updated.

All pretty complicated. Happily, our minds — just like our computer screens — hide from us all that internal complexity and give us a smooth simplified interface.

 

A totally deterministic view might make our lives seem meaningless. But Dennett writes that we live in an “atmosphere of free will” — “the enveloping, enabling, life-shaping, conceptual atmosphere of intentional action, planning and hoping and promising — and blaming, resenting, punishing and honoring.” This is all independent of whether determinism is true in some physical sense.

Determinism and causality are actually tricky concepts. If a ball is going to hit you, but you duck, would Laplace’s demon have predicted your ducking, so you were never going to be hit? In other words, whatever happens is what had to happen.

Dennett poses the example of a golfer missing a putt who says, “I could have made it.” What does that really mean? Repeat the exact circumstances and the result must be the same. However, before he swung, was it possible for him to swing differently than he wound up doing? Or was it all pre-ordained? Could he have, might he have, swung differently?

Martin Luther famously said, “Here I stand, I can do no other.” Was he denying his own free will? Could he have done otherwise? Or was his stand indeed a supreme exercise of personal will?

Jonathan Haidt, in his book The Righteous Mind, likened one’s conscious self to a rider on an elephant, which is the unconscious. We suppose the rider is the boss, directing the elephant, but it’s really the other way around. The rider’s role is just to come up with rationalizations for what the elephant wants. (This is a key factor in political opinions.)

And often we behave with no conscious thought at all. When showering, I go through an elaborate sequence of motions as if on autopilot. My conscious mind might be elsewhere. And how often have I (consciously) deliberated over whether to say a certain thing, only to hear the words pop suddenly out of my mouth?

A famous experiment, by neurologist Benjamin Libet, seemingly proved that a conscious decision to act is actually preceded, by some hundreds of milliseconds, by an unconscious triggering event in your brain. This has bugged me no end. I’ll try to beat it by, say, getting out of bed exactly when I myself decide, bypassing Libet’s unconscious brain trigger. I might decide I’ll get up on a count of three. But where did that decision come from?

However, even if the impetus for action arises unconsciously, we can veto it. If not free will, this has been called “free won’t.” It comes from our ability to think about our thoughts.

There’s a fear that without free will, there’s no personal responsibility, destroying the moral basis of society. Illustrative was a 2012 article in The Humanist magazine arguing against punishing Anders Breivik, the Norwegian mass murderer, because the killings were caused by brain events beyond his control. But “free won’t” is a helpful concept here. Psychiatrist Thomas Szasz has argued that we all have antisocial impulses, yet to act upon them crosses a behavioral line that almost everyone can control. So Breivik was capable of choosing not to kill 77 people, and can be held responsible for his choice.

As his book title suggests, Dennett maintains that evolution produced our conscious self with free will. But those were unnecessary for nearly all organisms that ever existed. As long as the right behavior was forthcoming, there was no need for it “to be experienced by any thing or anybody.” However, as the environment and behavioral challenges grow more complex, it becomes advantageous to consider alternative actions. In developing this ability, Dennett says a key role was played by communication in a social context, with back-and-forth discussion of reasons for actions, highly enhanced by language. Recall the importance of representation. I mentioned the artist and his canvas. Our minds don’t have paints, but create word pictures and metaphors, multiplying the power of representation.

Another book by Dennett, in 1991, was Consciousness Explained. It said that the common idea of your self as a “captain at the helm” in your mind is wrong. It’s really more like a gaggle of crew members fighting over the wheel. A lot of neurons sparking all over the place. And what you’re thinking at any given moment is a matter of which gang of neurons happens to be on top.

Yet in Freedom Evolves, Dennett now winds up insisting that we can and do use rationality and deliberation to resolve such internal conflicts, and that “there is somebody home” (the self) after all, to take responsibility and be morally accountable. This might sound like positing a sort of homunculus in there. But let me offer my own take.

When the crewmen battle over the wheel, to say the outcome is deterministically governed by a long string of preceding causes is too simplistic. Instead, everything about that competition among neuron groups embodies who you are, your personality and character, constructed over years. Shaped by many deterministic factors, yes — your biology, genes, upbringing, experiences, a host of other environmental influences, etc. But also, importantly, shaped by all your past choices and decisions. We are not wholly self-constructed, but we are partly self-constructed. Your past history reflects past battles over the wheel, but in all those too, personality and character factors came into play.

They can change throughout one’s life, even sometimes from conscious efforts to change. And no choice or decision is ever a foregone conclusion. Even if most people, most of the time, do behave very predictably, it’s not like the chess computer that will play the same move every time. Causation is not compulsion. People are not robots.

Nothing is more deterministically caused than a smoker’s lighting up, a consequence of physical addiction on top of psychological and behavioral conditioning, and even social ritual. Seemingly a textbook case of B.F. Skinner’s deterministic behaviorism. Yet smokers quit! Surely that’s free will.

Now, you might say the quitting itself actually has its own deterministic causes — predictable by Laplace’s demon — whatever happens is what had to happen. But this loads more weight upon the concept of determinism than it can reasonably be made to carry. In fact, there’s no amount of causation, biological or otherwise, that predicts behavior with certainty. There are just too many variables. Including the “free won’t” veto power.

And even if Libet was right, and a decision like exactly when to move your finger (or get out of bed) really is deterministically caused — how is that relevant to the choices and decisions that really matter? By the time I got to college, I’d been programmed my whole life to become a doctor. But one night I thought really hard about it and decided on law instead. Concerning a decision like that, the Libet experiment — the whole concept of determinism — tells us nothing.

This is compatibilism: a view of free will that’s actually compatible with causation and determinism.

We started with the question: how can you have free will if an omniscient God knows what you’ll do? Well, the answer is, he cannot know. But even if God — or Laplace’s demon — could (hypothetically) predict what your self will do, so what? It’s still your self that does it. A different self would do differently. And you’re responsible (at least to a considerable degree) for your self. That’s my view of free will.

 

No, Virginia, there is no Santa Claus

December 22, 2018

We gave our daughter the middle name Verity, which actually means truth, and tried to raise her accordingly.

About the Easter Bunny and the Tooth Fairy, she wised up pretty early, as a toddler. About Santa she was skeptical, but she brought scientific reason to bear. One Christmas brought a big unwieldy rocking horse, which she doubted could have gotten into the house without Santa’s help. That convinced her — for a while at least.

Recently a first grade teacher was fired for telling students there is no Santa (nor any other kind of magic). This reality dunk was considered a kind of child abuse: puncturing children’s illusions was deemed cruel, with plenty of time for that when they grow up. The problem, however, is that a lot of people never do get with reality. As comedian Neal Brennan said (on The Daily Show), belief in Santa Claus may be harmless but is a “gateway drug” to other, more consequential delusions.

People do usually give up belief in Santa. But not astrology, UFOs, and, of course (the big ones) God and Heaven. The only thing making those illusions seemingly more credible than Santa Claus is the fact that so many people still cling to them.

America is indeed mired in a pervasive culture of magical beliefs, not just with religion, but infecting the whole public sphere. Like the “Good guy with a gun” theory. Like climate change denial. And of course over 40% still believe the world’s worst liar is somehow “making America great again.” (History shows even the rottenest leaders always attract plenty of followers.)

Liberals are not immune. Beliefs about vaccines and GM foods being harmful are scientifically bunk. In fact it’s those beliefs that do harm.

I’ve written repeatedly about the importance of confirmation bias — how we love information that seemingly supports our beliefs and shun anything contrary. The Economist recently reported on a fascinating study in which people had to choose between reading and responding to eight arguments supporting their own views on gay marriage, or eight against — and choosing the former could cost them money. Yet almost two-thirds of Americans (on both sides of the issue) still opted against exposure to unwelcome advocacy! In another study, nearly half of voters who were made to hear why others backed the opposing presidential candidate likened the experience to having a tooth pulled.

And being smarter doesn’t help. In fact, smarter people are better at coming up with rationalizations for their beliefs and at dismissing countervailing information.

Yet a further study reported by The Economist used an MRI to scan people’s brains while they read statements for or against their beliefs. Based on what brain regions lit up, the study concluded that major beliefs are an integral part of one’s sense of personal identity. No wonder they’re so impervious to reality.

Remarkably, given the shitstorm so totally perverting the Republican party, not a single Republican member of Congress has renounced it.

The Economist ended by saying “accurate information does not always seem to have much of an effect (but we will keep trying anyway).”

So will I.