Archive for the ‘Science’ Category

UFO Abductions and America’s Reality Crisis

January 23, 2023

People on America’s right are in thorough reality denial. Headlined of course by the 2020 “stolen election” lie. False beliefs about Covid and vaccines cost many lives, perhaps hundreds of thousands. There’s much more. And the left is not immune.

How do we know what’s true? (This is called epistemology.)

At a recent social gathering of humanist friends — ordinarily a respite from all the craziness out there — one very intelligent guy, author of numerous published books (and a man of the left), brought up a UFO abduction story. In 1989, a woman was wafted out of a 12th floor New York apartment window, escorted by aliens — witnessed by a whole motorcade in the street below, including a UN Secretary-General.

The woman returned to tell her tale. She was abducted multiple times; other family members were abducted too. Leading my friend to suggest the aliens must be keeping tabs on them. He displayed a book, Witnessed, by Budd Hopkins, documenting all this.

Wow. How could a skeptic like me respond to these seemingly verified facts?

Occam’s (or Ockham’s) razor, also known as the principle of parsimony, says that to explain any phenomenon, the simplest, least complex answer is most likely.

Here, there are two basic possibilities:

1) The book’s story is true, however mind-blowing and confounding of one’s prior understandings; or

2) It’s simply untrue.

Number 2 is overwhelmingly more probable. People make stuff up all the time; lie; get things wrong; or experience delusions. That amply explains all alien abduction reports; none has ever been proven true.

Later, quick googling produced a lengthy point-by-point debunking of Hopkins’s narrative, indicating that it too never happened. Including the supposed UN chief’s testimony.

My friend, unfazed, disparaged my “methodology” with talk about primary versus secondary sources. Well, “primary sources” can lie. It’s vastly more plausible that this abduction story was a product of human confabulation. Tellingly, people in our group were puzzled that they’d never before heard about this event. Which would have shaken the world — if real.

Religious folks deem the Bible an authoritative primary source — with the ultimate credible author. “Budd Hopkins said it; I believe it; that settles it??” I noticed that most reviewers on Amazon gave Hopkins’s book high marks — yet most were unpersuaded by its tall tale.

And which is more plausible? (1) That the 2020 election was stolen, despite Biden’s margin being 7 million; Republicans participated everywhere in overseeing elections; voters had ample reasons to reject Trump; his 60 lawsuits all went nowhere; not a single Biden ballot was proven fraudulent; indeed, the Republican-orchestrated Arizona audit raised Biden’s vote total —

OR (2) That Trump, the biggest liar in political history, simply lied because his sick psyche could not face the humiliation of losing.

Most Republicans go with #1.

And which is more plausible? (1) Most other people are nuts, or (2) I am.

Evolutionarily, the human brain was our “killer app” enabling our species to survive and prosper. Essential to that app is the ability to perceive reality. An early human who could perceive a lion lurking in the bushes had a survival advantage, and got to pass along his genes.

Moreover, to think there’s a lion and be wrong was better than the reverse. The former mistake carried a small penalty; the latter, a huge one. So humans grew very good at seeing lions even where there are none. This explains a lot of our epistemological problems. Why we are so prone to believe election lies, UFO abduction tales, conspiracy theories, and other ridiculous things. Those are lions that aren’t there.

But our evolution-derived brain software still actually serves us extremely well. We’re still very good at seeing real lions — that is, facts about reality that affect our lives. Without that, we could not even function on a day-to-day basis (especially given modern life’s complexities compared to what our distant forebears faced). We certainly could not, for example, drive cars; without a very firm grasp of realities on the roads, you’d quickly be dead.

But matters like election lies and UFOs are different. False beliefs about them seem to carry no real-life consequences. They are perceptual freebies — we can relax our guard, indulge ourselves, and believe the wildest conspiracy theories, seemingly with no cost.

Though there was a cost for many Covid conspiracy believers. That’s one indicator that our indulgence for seeming belief freebies has gotten way out of hand. And even where such false beliefs ostensibly carry no penalty for the individual holding them, for society at large they do. We are, intellectually and cognitively, drowning in a flood of nonsense. How can we be responsible citizens, members of communities, under such conditions? True understanding of the world, of reality, is essential. Furthermore, Trump’s stolen election lie, and others, have very grave consequences for our democracy, undermining trust in our institutions, setting us against each other, tearing apart our social fabric itself.

That’s a lion in the bushes too few see.

An Artificial Intelligence Rebuts Me!

December 30, 2022

I’ve been rebutted by an Artificial Intelligence bot.

My nephew, Ben Broderick Phillips, works for Microsoft, with Artificial Intelligence (AI) within his wheelhouse. For Christmas, he fed one of my recent blog posts — “Ai Weiwei and Bono on Art and Capitalism”* — to an AI program (“ChatGPT”), asking it for a summary, and then a rebuttal. (Note the synchrony between the name “Ai” and AI.)

The AI was operating not just on my blog post’s bare words, but bringing into play a ton of information it had previously been fed about the world, presumably including many other polemics. That is the kind of “deep learning” making AI powerful.

So, here is its summary:

“This blog post discusses Ai Weiwei’s critique of capitalism in the art world as expressed in an essay he wrote for The Economist. The author of the blog post disputes Ai Weiwei’s assertion that contemporary art is primarily focused on entertainment and wealth accumulation, rather than addressing important social and political issues. The author also discusses Bono’s belief that capitalism is the solution to poverty and asserts that it is the default mode for human interactions. The blog post suggests that while capitalism has contributed to economic growth and the lifting of some out of poverty, it is not the only solution and can often perpetuate or exacerbate inequality and exploitation. It is important to consider alternative economic systems and how they might better serve the needs and interests of artists and the broader society.”

It’s quite well written — better, I daresay, than most humans would do, reading smoothly, intelligently and concisely. I think it passes the “Turing test” — Alan Turing (1912-54), the Brit who really invented the modern concept of the computer, posited a test in which a computer’s answers to questions would convince a human interlocutor that he’s talking to another human.

The first three of the AI’s five sentences are spot-on, recapping what I wrote. However, the last two sentences strangely detour, reading like something the AI might imagine the blogger saying, but it’s not what I actually said. (Maybe a tribute to my independent thinking.)

The AI’s rebuttal is rather longer, six paragraphs. (Interestingly, I’m told the AI initially produced only the first two, and added more on further prompting.) Here’s the full text:

“Ai Weiwei’s critique of capitalism in the art world is not unfounded. While it is true that art has always been a means of expression and personal development, it is also a reflection of the society in which it exists. In a capitalist society, it is not uncommon for the commercialization and commodification of art to take precedence over its intrinsic value. This often leads to the neglect of important social and political issues in favor of wealth accumulation.

“It is not fair to say that all contemporary art is solely focused on entertainment and detached from spiritual life. Many artists, including Ai Weiwei, use their work to address important issues and bring attention to injustices. However, it is important to recognize that the capitalist structure of the art world can often hinder the ability of artists to fully explore these issues and have their work recognized and valued. It is also worth considering alternative economic systems and how they might impact the art world. While capitalism has certainly led to economic growth and the lifting of some out of poverty, it has also contributed to inequality and exploitation. It is important to consider the ways in which different economic systems might enable artists to more fully express themselves and address important issues without the constraints of capitalist values.

“Furthermore, it is not fair to dismiss Ai Weiwei’s critique of capitalism in the art world as simply self-promotion. While it is true that he is an artist who addresses important social and political issues in his work, it does not necessarily follow that his critique is motivated solely by self-interest.

“In addition, Bono’s assertion that “commerce” and “entrepreneurial capitalism” are the only ways to lift people out of poverty is reductionist and ignores the complex factors that contribute to poverty. While economic growth can certainly play a role in reducing poverty, it is not the only solution and can often perpetuate or exacerbate inequality and exploitation.

“It is important to recognize that the art world, like any other industry, is shaped by the economic and political systems in which it exists. By acknowledging the ways in which capitalism can impact the art world, we can better understand and address its limitations and explore alternative systems that might better serve the needs and interests of artists and the broader society.

“Additionally, it is important to recognize that the art world is not a monolithic entity and the experiences of different artists will vary. While some may find success and financial stability within the capitalist system, others may struggle to have their work recognized and valued. It is important to consider the ways in which the capitalist structure of the art world may disproportionately impact marginalized artists and communities.

“In conclusion, Ai Weiwei’s critique of capitalism in the art world is not without merit and it is important to consider the ways in which the economic and political systems in which we live shape the art world and the experiences of artists. While capitalism has certainly contributed to economic growth and the lifting of some out of poverty, it is not the only solution and can often perpetuate or exacerbate inequality and exploitation. It is important to consider alternative economic systems and how they might better serve the needs and interests of artists and the broader society.”

[End of rebuttal]

Note first that this essay does not (as the summary did) miscast what I said — it’s indeed a pointed response to it. It may be a bit repetitive, and a trifle wooden; yet it’s a fairly coherent, even glib essay, sounding judicious, reasonable, and persuasive. (Its talk of alternatives to capitalism is facile — as my blog post said, capitalism’s critics never really offer a workable alternative.)

Thus the rebuttal demonstrates the benefit of all the AI’s prior deep learning — spitting out a point of view that actually mirrors what a lot of people might think in response to my blog post. Though most would not be able to express it so nicely!

This shows the huge power of AI, how far it has come. Prompting, at our holiday gathering, some discussion of uses for it — someone mentioned writing grant proposals. But the potential is far vaster. How many human jobs can be done — and done better — by AI? Another example: we already know AI does better than human doctors at making diagnoses from X-rays. The world of the future is going to be very different.

The question arises — when does AI become conscious? It’s hard to avoid thinking (what does that word really mean?) that the AI that rebutted my blog is, on some elusive level, sentient.

Futurist Ray Kurzweil has foreseen a “singularity” when machines become smarter than people, and thereafter propel their own further enhancement. Leaving us in the dust? In my seminal 2013 Humanist magazine article — The Human Future: Upgrade or Replacement?** — I envisioned a convergence between biological and non-biological aspects of humanity.

* https://rationaloptimist.wordpress.com/2022/12/13/ai-weiwei-and-bono-on-art-and-capitalism/

** https://rationaloptimist.wordpress.com/2013/07/07/the-human-future-upgrade-or-replacement/

What Does Ancestry Mean?

December 7, 2022

My wife was intrigued by a statistician’s writing that if you go back 3400 years, we’re all related. Not actually surprising if you think about it. After all, you’ve got two parents, four grandparents, eight great-grandparents . . . that’s an exponential progression, and exponentials are mathematically powerful.

My previous partner’s nine-times-great-grandfather was Roger Williams (founder of Rhode Island). But she had a lot of forebears in that generation — over 2,000. Go back twenty generations and it’s over a million. That’s only around 500 years. Go back a few centuries more and the number of your ancestors exceeds the entire human population.*
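The arithmetic is easy to check. Here is a minimal Python sketch (my illustration, assuming roughly 25 years per generation, a world population of about 8 billion and, for the moment, no overlap among ancestors):

```python
# Doubling ancestors per generation, assuming ~25 years each and
# (unrealistically, as explained below) no overlap among them.
WORLD_POPULATION = 8_000_000_000

n = 0
while 2 ** n <= WORLD_POPULATION:
    n += 1
print(f"2^{n} = {2 ** n:,} nominal ancestors")  # 2^33, about 8.6 billion
print(f"only about {n * 25} years back")        # ~825 years

# The footnote's claim: at 100 generations the count runs to 31 digits.
print(len(str(2 ** 100)))                       # 31
```

By this naive count, within roughly nine centuries your nominal ancestors already outnumber everyone alive.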

How could that be?

Family trees are not rigid lineages separated from each other. To the contrary, they are all tangled together. Your ancestors were of course not yours alone, but the ancestors of countless other people. And those long-ago ancestors with innumerable modern descendants likewise share those descendants with similarly huge numbers of other forebears.

That suggests you are indeed related to every other human being; a cousin many times removed.

But you may have to go pretty far back for that link. Because while our lineages are tangled together, the tangling isn’t random: there is a lot of segregation, notably geographic, among genealogies.

Though there has of course been mixing of disparate segments of humanity, for most of history people in a given geographic locale had limited opportunities for mating with foreigners. So someone like me, with European Jewish ancestry, might have a hard time finding a common ancestor with a Bornean. Yet on the other hand, with each of us having millions of ancestors, a single match is not implausible.

Humanity’s more distant antecedents also show our relatedness. There were many different “homo” species, but all except one went extinct. And the environmental challenges that defeated all those others nearly did us in too. Apparently at some point there was a “bottleneck” that only a very small group managed to scrape through — ancestors of all modern humans. In fact, scientific DNA analysis suggests we may all have descended from a single woman in that band. Her name was Eve.

Going back further, our closest related species is the chimpanzee, with whom we shared a common ancestor around six million years ago. Our DNA is 99% identical to chimp DNA. Among all humans DNA is 99.9% the same.

We are in fact related to every other living thing. Mouse DNA is around 90% identical to ours. Go back to your millions-of-times-great-grandpa and he’s a fish.

DNA tests give ethnicity percentages. For American Blacks, there’s typically a high percentage of West African, but also a significant percentage of northern European. For obvious reasons. I never did a test because I’m pretty sure it would come back almost 100% Ashkenazi Jewish. I’d be shocked if it said something like 12% Cherokee. Though again, somewhere along the line, some other DNA might have crept in there.

My wife’s forebears all came from Ireland. But she queries what it really means to say she’s “Irish.” Questioning whether there’s really any such thing, given Viking incursions and so forth, and again that all our DNA is 99.9% the same anyway. But calling someone “Irish” can mean merely that their not-too-distant forebears were born there.

As to that 99.9% human DNA identicality — the variations within any human subgroup (like “Blacks”) actually outstrip variations between such groups. Yet DNA — which is a string of billions of molecular units of which there are just four variants, labelled A, C, T and G — does contain sequences which can be identified as unique to particular subgroups.

Thus if I am (mostly) genetically Ashkenazi, that’s a biological difference from a person who has little or no Ashkenazi DNA. Likewise for someone “Irish.” But it’s very important to say that it’s entirely up to us what significance, if any, we place on such differences. Perhaps the answer should be “very little,” given again the 99.9%.

But human life is not that simple, and maybe an even better option is to make the differences something positive. Culture is more important than biology. It’s the cultural differences that really matter; and we can embrace, even celebrate, our human cultural diversity, enriching and strengthening us. That’s how I see America.

I call myself an American rather than a Jew. I don’t follow the Jewish religion, nor even see myself as part of the related culture. Rather, always steeped in history, I see myself as embedded in the great global human project, as my prime source of meaning. And yet my particular ancestry does have a part of that meaning. I’m mindful how it fits in the bigger picture and illuminates it. And how it shaped my own life. For my grandparents and mother in Nazi Germany, Jewish identity was not something they could set aside.

* At 100 generations — before you even hit the 3400 year mark — the number would contain 31 digits.

Epicureanism for Today: Freedom and Happiness

December 1, 2022

At a humanist meeting there was some pamphlet including a list of worthy thinkers. My friend Peter Delivorias remarked upon the omission of Epicurus. A strange omission indeed; Peter’s noting it impressed upon me his intellectual discernment.

Epicurus (341-270 BC) was the best of ancient philosophers. He operated when Greek civilization was still fairly new, and thinkers were feeling their way through virgin territory. Like Plato, oft seen as the very father of philosophy. Plato was Epicurus’s bête noire; Epicurus’s own work was a total rejection of his. To me Plato’s writings are full of pernicious nonsense; Epicurus’s are full of wisdom.

Human beings have always striven to understand existence, but reading a book about Epicurus* illuminates how far the ancients still had to go, handicapped by fundamental knowledge gaps. Thus might Plato’s errors be forgiven, though I think he was just a nasty character. Again in contrast to Epicurus, who speaks to the human heart — and who, despite the epistemological deficiencies of his time, got a lot right.

My favorite Epicurus story (possibly apocryphal) concerns his viewing a display of portraits of sailors who in storms prayed to the gods, and survived. “But where,” said he, “are the pictures of those who prayed and drowned?”

Thus the rationalist. However, Epicurus did not actually put human reason on a pedestal; he subordinated it to nature. But he did liken reason to a judge, weighing evidence: the testimony of the senses. And of course we use our reason to understand nature. Thus Epicurus differed greatly from Plato, with the latter’s notion of perfect “forms” existing somewhere ethereally while what we see on Earth are just imperfect corrupted shadows. For Epicurus, what we see is what we get; that’s all there is.

So his two feet were planted in reality. Yet he did profess belief in the gods, even urging performance of all the attendant rituals, as being right and proper from a social standpoint. Deeming faith a principal virtue. However, he was unusual in holding that the gods could not be messing about with earthly matters (too much work, incompatible with their perfect happiness) — hence no one should fear the gods.

Consistent with putting nature above reason, Epicurus held that knowledge of the gods was instilled in people by nature as a “given” of existence. And he spun quite elaborate theories justifying this (as full of absurdities as any religious apologia). “The gods” were not some abstract picture, but highly specific, with names and backstories and everything. Yet even if nature told us about gods, could anyone know such concrete details? It all seems contrary to Epicurus otherwise being such a clear-eyed materialist. Perhaps god belief was so deeply embedded in his society that not even an Epicurus could break free of it. Or — given that so much of his philosophy already contravened contemporary sensibilities — he didn’t dare so complete a breach as atheism would entail. Epicurus, before founding his school in Athens, had already experienced being run out of town (from Mytilene).**

Epicurus deemed pleasure the purpose of life — widely misunderstood as shallow hedonism. His actual stance accords with my own oft-repeated bedrock idea that the only thing that can matter is the feelings of beings capable of feeling. Those feelings can be divided, most fundamentally, between pleasure and pain. The more pleasure there is in the world, and the less pain, the better. That’s the essence of Epicureanism.

Here again Epicurus took issue with Plato, who deemed some pleasures good and others bad. Such censoriousness has persisted into modern times. (Certainly true in Christianity.)

Epicurus did not tell us to go out and load up on sensual “hedonic” pleasures. Rather, his concern was happiness. That’s something experienced over time; ideally, a lifetime. Whereas a pleasure (like food or sex) is durationally restricted. Experiencing such pleasures (and, I would add, anticipating them) does not constitute happiness but does contribute to it.

Epicurus actually preached a simple diet, rather than indulgence in rich foods, as more conducive to health, which is a key ingredient in happiness. Yet at his Athens school, there was a monthly lavish feast. Epicurus said this made those foods more pleasurable than if they were everyday experiences.

Also rejected were quests for wealth, power, and glory. Thus he urged against a political career. He did recognize the value of wealth, particularly as enabling one to help out friends when needed — and Epicurus considered friendship absolutely central to a happy life. The problem with power and glory (or fame), however, is their dependency on how other people see you, making you beholden to their fickleness.

Thus conflicting with what Epicurus considered the real key to happiness: freedom. That is, the ability to control your own life, by controlling, to the degree possible, its circumstances. In this he was going against the prevailing ethos regarding fate or fortune or luck, of which most people thought we are playthings. The Romans had a goddess, Fortuna, appearing on many coins, holding a rudder, meaning that she steers us. Epicurus recognized no such force; instead dividing circumstances between those beyond our control and those we can control. With happiness built upon expanding one’s ambit of control — defying fate.

Note that this also argues against unbridled hedonism — that is, letting your appetites and passions control you rather than you controlling them. Not a recipe for true happiness.

The watchword here too was safety. The main aim of controlling your circumstances was to make you safer. That might seem a timorous, cramped idea of happiness; however, life in those times was a lot more perilous and contingent than it is for modern Americans. So the safer you could feel, the happier you’d be.

Hand in hand with safety is the idea of peace, which Epicurus also advocated for the sake of promoting human happiness. And he was also arguing here against Plato’s prescription for an authoritarian state. Plato’s ideal polity would be North Korea. Epicurus in contrast believed the state that governs best is the one that governs least. That is, protecting the safety of its citizens, not threatening it.

His physics was grounded in there being only stuff (made of atoms) and void — so the gods had to be corporeal. This also left no room for an incorporeal soul (two millennia before Descartes!) — so Epicurus ruled out any life after death. This was integral to his identifying pleasure as the purpose (telos) of life — since life’s purpose could only play out between birth and death. Actually then, life itself was what mattered most (indeed, solely); the supreme good.

Verified for Epicurus by one’s greatest fear being death, and greatest joy being an escape from it. Both being embedded by nature — thus again exemplifying his putting nature above reason.

Epicurus wasn’t happy about mortality, but he was, well, philosophical about it. It falls within the realm of things we cannot ultimately control. But we can control how we think about it. Epicurus seems to have been of the “where I am, death is not; where death is, I am not” school. I’ve never found that logic very comforting. The idea of nonexistence is terrifying. But the Epicurean control I exercise is to avoid focusing on it. I’m a believer in worrying about things only when I must. So I’ll deal with nonexistence when I get there.

* DeWitt, Epicurus and His Philosophy, written in 1954 by an academic, and unfortunately reading like it.

** DeWitt does not explore what might really have been going on with Epicurus and religion. He defends him against ancient critics, writing as though endorsing Epicurus’s theology. I infer DeWitt was a Christian; he sees Epicurus as prefiguring much of Christian thinking.

How to Play With Your Food

November 25, 2022

Who could resist that book title found at a yard sale? More, the authors were Penn & Teller — renowned magicians and outspoken advocates for reason against superstition.

We’ve all been scolded, “Don’t play with your food.” Well, food is good to eat, but also fun to play with. Where’s the problem? As the saying goes, you can have your cake (to play with) and eat it too. No?

The book is a how-to guide for tricks involving food. Like pretending to stab your eye with a fork, making a flood of white gunk spew out. Shock your dinner companions.

Most are fairly simple tricks involving sleight of hand and misdirection — or, as the authors put it, “lying.” Lying is wrong as a general moral principle, but only if you owe the lyee the truth. You don’t owe the Gestapo the truth about Jews in your attic. Penn and Teller would have excelled at that game.

And they do have moral scruples. One chapter is “How to Get Your Ethical-Vegetarian Friends to Eat Veal.” The “trick” is simple and obvious. But then they say don’t do it — it would be wrong.

Not everything in the book involves magic, exactly. Penn relates an encounter with non-aesthete truckers at a Nebraska eatery, menacingly picking a fight with him. He lifted his milkshake and poured it over his own head. That so confuzzled the truckers that they backed off and skedaddled. A food trick, I guess. Handy to know.

While magic is mostly fun and entertainment, the authors take a dim view of frauds who actually purport to be on the level. Like with spoon bending and other paranormal nonsense. They observe that if any such were really possible, then it wouldn’t be “paranormal.” So too with “supernatural;” anything real is natural.

There’s a nod to James (“the Amazing”) Randi who tirelessly exposed frauds like spoon bender Uri Geller. And Penn and Teller make this killer point: if someone actually had the kind of mental powers that could bend spoons — why waste them bending spoons?! Likewise regarding “psychics” — why are they hustling suckers for chump change when their abilities (if real) should easily make them rich?

I was reminded of Isaac Asimov’s “Foundation” trilogy. One character, “The Mule,” was a rare mutant who really and truly could read and sway minds. So he wound up conquering much of the galaxy.

Penn and Teller are merciless against all irrational beliefs. One chapter is headed “Salt in the Wounds of Credulous Fools.” A side box highlights “How many times can we say ‘extraordinary claims require extraordinary evidence,’ ‘you can’t prove a negative,'” and several other truisms of rationality.

The food trick here involves using what’s actually mere table salt to “cure” a fake blister, calling it a “homeopathic” remedy, conning homeopathic suckers to buy some. (Salt couldn’t actually qualify as “homeopathic” which, the authors do correctly note, means there’s nothing in it except plain water; but never mind.) They end here with “make sure you tell them it cures herpes.” Adding, “we are the lowest of the low.”

Climate: We’re Cooked

November 13, 2022

Like the proverbial frog in the pot whose temperature slowly rises.

Yes (sigh) this is about climate change. But please read it anyway, it may provide some clarity.

There’s another big global climate talk-fest going on now in Egypt. The 2015 Paris agreement set an ambitious goal of limiting Earth’s temperature rise to 1.5 degrees Celsius above preindustrial levels. That was a big victory for poorer nations, which stood to be harmed most by warming (being less equipped to cope with it). However, Paris included no commitments for specific action to achieve the goal.

Since then, the 1.5 degree goal has become a totemic gospel, dominating climate discussion. But — as argued in a recent analysis in The Economist, aptly titled “An Inconvenient Truth” — the chances of achieving 1.5 are zero (and have been for quite some time). It would have required massive reductions in carbon emissions that simply are not happening. Rather than biting the bullet, we’ve barely been licking it. Consequently, at this point, 1.5 would require, going forward, reductions even more draconian. Which won’t happen either.

Because there’s no way to develop and deploy, fast enough, the technological fixes that would be required to reduce emissions enough without huge dislocations to our way of life, for which there is no public or political will. We’re talking here about the burning of fossil fuels, as in power generation, industrial processes, car and air travel; and there are many further ways we put carbon into the atmosphere, another big one being agriculture. Cow burps (methane) are actually a significant factor.

The 1.5 target was adopted even though 1.5 would entail pretty severe climate effects — but that seemed the outer limit for both what might be achievable and what might be more or less tolerable. Now it looks like 2 degrees is about the best we can hope for. And the difference between 1.5 and 2 is the difference between bad and very bad. While blowing past 2 looks increasingly likely.

What are the bad effects? A lot of ice will melt, dumping more water into the oceans, raising sea levels, and flooding low-lying coastal cities (and some island countries). More and worse heat waves, obviously; a lot of places becoming simply uninhabitable. More and worse weather events, like hurricanes. More floods, droughts, forest fires. Big disruptions to agriculture and food production. All of which will send vast numbers of people on the move.

Part of the problem is feedback effects: warming creating conditions that cause more warming. For example, ice reflects a lot of sunlight back into space; less ice means less of that. And permafrost melting would release a lot of methane — a potent greenhouse gas — into the atmosphere. There’s danger of a tipping point, causing runaway warming. That’s apparently what happened to Venus, whose temperature now averages a toasty 867 degrees Fahrenheit.
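A toy calculation shows why feedbacks are so worrying: if each degree of direct warming triggers feedbacks adding a further fraction f of a degree, the total converges to direct/(1 - f), and as f approaches 1 it blows up. Here is a Python sketch, with the feedback fractions invented purely for illustration:

```python
# Toy feedback amplification: total = direct * (1 + f + f^2 + ...)
#                                   = direct / (1 - f)  for f < 1.
# The feedback fractions below are invented, purely for illustration.
def total_warming(direct_degrees: float, f: float) -> float:
    assert f < 1, "f >= 1 means the series diverges: runaway warming"
    return direct_degrees / (1 - f)

for f in (0.0, 0.3, 0.6, 0.9):
    print(f"feedback f={f:.1f}: {total_warming(1.5, f):.1f} degrees total")
# 1.5, 2.1, 3.8, 15.0 degrees -- and at f = 1, no finite answer (Venus).
```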

I have argued forever that the zealots were misguided to insist on emissions reductions exclusively, because reducing them enough was a pipe dream. And even if we cut emissions to zero tomorrow, rising temperatures would still be baked in, due to the carbon already in the atmosphere.

We have three main other options. One is carbon capture and storage — sucking it out of the atmosphere. The technology exists. So far, the amount being done is piddling. However, scaling this up to where it would make a difference would be a colossal and colossally costly undertaking.

Second, there’s geoengineering — action to actually lower temperatures. The best known method would mimic the effect of volcanoes — which do periodically reduce temperatures (remember 1816, the “year without a summer”) by throwing a lot of particles into the upper atmosphere that deflect sunlight. This would be problematical and controversial for a host of reasons, and it too would be a gargantuan undertaking.

Both carbon removal and geoengineering would take many years, if not decades, to be deployed at anything near the scale needed.

That leaves the third course — adaptation. Measures to anticipate and cope with higher temperatures. Like building sea walls to protect cities against rising waters. Some places (Venice, for example; the Netherlands, historically) already do this. I’m skeptical that makes sense in the long term; but there are many other things we can do. The Economist article shows how much is actually being done already, although much more is needed.

The idea that humanity is suicidally wrecking the planet is over-the-top. What we have done is what we had to do, utilizing the planet’s resources in order to make ever better lives for generations of people. Of course it was no free lunch, and now we must pay the price. We will pay it.

We will not go extinct. We are the most adaptable of species. Coming out of steamy Africa, humans accommodated to living in the Arctic, and a vast array of other different climates. And that was without the benefit of all the scientific knowledge and technology we’ve acquired since. We will cope with a warmer planet.

As long as it’s not another Venus.

Consciousness Revisited

July 29, 2022

At the used book sale, I explained, “I’m buying this because I debated this author on this subject.”

It was David Gelernter, Yale professor and computer science guru.* His book is The Tides of Mind – Uncovering the Spectrum of Consciousness. At a local appearance I had challenged his assertion that no artificial system could ever be conscious. I said what the brain does, in creating mind, is not magic; an artificial system replicating its functions could replicate the results. Gelernter insisted consciousness comes from neurons and neurons only; no neurons, no consciousness. Yet neurons are physical objects, not magical either; in principle they’re reproducible.

His position that there’s something ineffable about consciousness that bars an artificial version strikes me as a sort of nonscientific mysticism. Evocative of how old-time science, baffled to understand what life is, had recourse to the notion of an inexplicable “elan vital.” Today we know better.

Gelernter is religious. Early on he says, “The scientist explains the origins of the Universe with a logical argument. The religious believer tells a story . . . Only the logical argument has predictive power. Only the story has normative moral content. Only a fool would pronounce one superior.”

Here’s the problem with that. Science’s power in explaining reality is unarguable. But the “normative moral content” of any given religious belief is highly arguable. I view the moral stories told by conventional religions as hopelessly muddled, being based on false premises. So, yes, I do pronounce the scientific perspective superior.

The book’s key concept, as per the subtitle, is that consciousness operates along a spectrum. The top level entails high focus, with memory use disciplined, thought being rational, reflection and self-awareness strong. The mid-level is less focused, memory use ranges freely and occasionally wanders; “thought seeks experience;” emotions and daydreams emerge. At the lower level, “memory takes off on its own,” thought drifts, reflection and self-awareness are weak; emotions bloom; we fall asleep.

Sure; we all experience these varied sorts of mental states. But Gelernter makes far too much of his hierarchy and applies it far too rigidly.

He posits that the top of the spectrum governs early in the day, when one is sharp, and it’s basically downhill from there. I myself feel my brain does work best in the morning. And I can plunge down the spectrum fairly fast, especially late in the day. But we spend very little time at Gelernter’s lowest level; basically just while falling asleep. (Sleep itself, in his system, is something apart.)

He seems to say that at the top of the spectrum emotions are held at bay. That’s nonsense. There is never a time when a normal human being is not experiencing emotions. And Gelernter’s fundamental mistake here is drawing a dichotomy between emotion and reason. They’re inextricably entwined; it’s emotion that supplies the impetus for using reason. While I’m writing this, my rational functioning is in the foreground, but there’s always a substrate of emotion humming along. I wouldn’t be writing this otherwise.

Here’s an example of the didactic way Gelernter applies his system. Referring to John von Neumann, he suggests that a “first rate mathematical genius soars higher in his logical thought than nearly anyone else,” being “in the region of ‘exceptional wide awakeness.'” Serious mathematics does require bouts of intense concentration. But so, in their many varied ways, do many other human undertakings. The idea that von Neumann ascended to some higher level, breaking through the ceiling of Gelernter’s spectrum, strikes me as nonsensical.

Right after this, he quotes a young Napoleon saying he does “a thousand projects every night as I fall asleep.” From that meagre crumb, he contends Napoleon did the opposite of von Neumann, expanding the spectrum at the bottom; “the need for sleep isn’t felt until farther than usual in the down-spectrum trip,” which “keeps a mind afloat and awake that would otherwise have long since sunk into sleep.”

But maybe Napoleon merely suffered from insomnia. I sometimes have a similar “thousand projects” night not because I’m expanding the spectrum’s bottom but because my mind just won’t shut up.

More broadly, Gelernter thinks there are high-focus and low-focus people. The former tune out all the “noise” that distracts the latter. But there’s another side to that coin. “Keats,” he goes on to say, “had a different kind of low-spectrum genius. He was able to reach a state of perfect quiet watching, of near-pure experience where the mind, perfectly dilate, floods with being. The average person is nearly asleep at the point of reaching such a state. But Keats was able to be (just be), yet remain awake and aware.” (His emphasis) This is nonsensical pseudo-profundity.

Gelernter does write endlessly about that low spectrum level when one transitions to sleep. Though again that’s a tiny part of one’s day. Further, he repeatedly describes the mind’s workings there as entailing some coherence; though bizarre, making a certain sense, telling a sort of story. Supplying an example from his own experience, involving eight sequential images, all anchored in reality, with an explanation for each. My own experience is diametrically different. Trying to fall asleep, I’ll sometimes make a conscious (!) effort to stop thinking thoughts altogether. And I’ll start seeing images so random, so meaningless, sometimes grotesque, they obviously were not consciously produced. “Good,” I’ll think; that signals I’m falling asleep. Thus, oddly, I am still awake. But not for long.

This is not a science book; nor exactly a philosophy book. It’s about the workings of mind, consciousness, self, human psychology, all entwined. An effort to supply the insight we’d wish introspection could, but cannot. One cannot look inward because one is already there.

Gelernter’s bête noire is “computationalism” — analogizing the brain to a computer’s hardware with the mind as software running on it — which he calls the most intellectually destructive analogy in at least the last century. Yet Gelernter seems to forget it is indeed an analogy, not a description of reality. And the analogy is useful in debunking Cartesian dualism — the idea that mind and brain are separate. Now that’s a destructive idea that has bedeviled thought for many centuries. No, minds don’t work exactly like computers. Yet (as Ray Kurzweil’s book, How to Create a Mind, explained via neuroscience) there are many parallels between the workings of brains and computers.

At the book’s end, Gelernter says (his emphasis) “[t]he spiritually minded person experiences something: the unity of many people, objects, or events — or of everything in the cosmos.” He stresses this is not a belief in underlying unity, but the direct experience of it — “a far more formidable thing. Cosmic unity becomes an emotion.” It makes some “feel the presence of God.” This too is not a (mere) belief —”one can be argued out of a belief, but never out of a feeling.”

That seems flatly untrue. A “belief,” by definition, has to be based on something (even if wrong). It’s “feeling” that should carry the modifier “mere.” A feeling can be based on nothing at all. Surely it should not trump a “belief.”

Applying his spectrum theory of mental functioning, Gelernter argues that ancient people operated lower on the spectrum more often than most moderns, and “spiritually minded people were more common.” As was the “spiritually inspiring feeling of cosmic unity.” And people were “more emotional” (his quote marks) than “we cold fish.” Thus they “would have been more ‘plugged into’ each other, more apt to feel each other’s feelings.”

As a student of ancient history, I find this bunk. If ancients were better at feeling each other’s feelings, how come they so often practiced shocking barbarity? They did have much human connectedness — within the confines of a tribe or band. Evolution programmed us to stick together with our mates, but to regard all others as threats. Only in modern times have most of us (apart from Russians) grown beyond that, our ambit of sympathy widened to encompass more people less like us. And so man’s inhumanity to man has lessened.

And I don’t buy theories that earlier people had mental lives fundamentally different from ours. I’ve written refuting Julian Jaynes’s notorious “bicameral mind” theory that the modern sort of consciousness only suddenly emerged around 1000 BC. Modern humans evolved tens of thousands of years earlier with minds functioning exactly as ours do now. If anything, they’d have been forced to operate more at the spectrum’s higher end, because it was much more challenging just to stay alive.

The “cosmic unity” idea might sound like an elevated “spiritual” one. But what exactly does “cosmic unity” mean? Gelernter writes of “a transcendent unity among far-flung objects and events . . . which often (though not always [!]) suggests one creator who stands outside his creation.” Not to me it don’t. Indeed, it’s quite a wild leap. Gelernter also says (his emphasis) a “feeling of cosmic unity can make a person feel outside of — over and against — creation.”

All this, if actually saying anything at all, is moonshine.

Is everything in the cosmos interconnected? Well, yes, in all deriving from a single event, the Big Bang; and being embedded in Einsteinian space-time, all made of the same particles, all following unwavering laws of physics. Is there something “spiritual” there? The word seems meaningless. If anything, the facts bespeak an ultimate materialism. Everything in and about the cosmos is anchored in a physical reality. Does any of it suggest a God? Certainly not. God seems wholly superfluous. (As Laplace told Napoleon, “I have no need of that hypothesis.“)

But is it awesome? Yes. Now that’s a word that does have meaning. The vastness of the cosmos is awesome to contemplate. As are those facts about it I recited. And the fact that I came into existence with a mind to contemplate them. Meanwhile reality’s deepest truths still elude us. Either it had a beginning, or didn’t. Is it infinite, and if not, what lies beyond? Neither conundrum can our minds encompass. Likewise the final mystery: why is there something and not nothing?

Call all this “spiritual” if you like. I prefer to say simply: it is what it is.

* I had another connection to Gelernter: the Unabomber tried to blow him up, and the Unabomber’s brother had been to my house.

Sleep and Body Rhythms

July 20, 2022

Sleep’s important role in health and longevity has grown increasingly apparent. Sleep well nightly and you put off the Big Sleep.

I was a sickly kid. But now, at 74, my health is great, with no meds. I’ve also been fortunate to always follow a very regular sleep pattern. The two are evidently related.

We all know we’ve got built-in body clocks. But how they work, exactly, has been a tough scientific problem. I recently read a book by Steven Strogatz, Sync – The Emerging Science of Spontaneous Order, with a most interesting chapter on sleep.

Experiments have put volunteers in isolation rooms with no time clues. They’d sleep whenever. One researcher (Michel Siffre, in 1972) nearly went nuts partway in, begging to be let out. His collaborators outside disregarded this — dubious ethics, I think.

Anyhow, such experiments have shown our body clocks are not exactly 24 hours — typically a bit longer. But the subjects would not necessarily get into a sleep schedule resembling “normality.” Sometimes staying awake longer, and also sometimes sleeping longer. But here’s the interesting thing. The longer sleeps didn’t typically follow the longer wake intervals. Instead, a longer time awake is often followed by a shorter sleep. There seemed no rhyme or reason to this.

Our natural rhythms also include temperature fluctuations. Body temperature rises and falls during the day, seemingly separately from the body clock governing sleep. However, experiments have now actually revealed that the two are not unconnected. And our biological signal for hitting the sack is not feeling tired or sleepy — it’s when body temperature peaks. Going to sleep at that point in the cycle means sleeping long, with temperature now falling into a trough. When temperature starts rising again, that’s the wake-up alarm.
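To make the timing concrete, here is a toy model (mine, not Strogatz’s math). It assumes a sinusoidal temperature rhythm with its trough at 5 a.m., plus the rule just described: sleep lasts until temperature next turns upward.

```python
import math

TROUGH = 5.0  # assumed temperature minimum, 5 a.m. (the wake-up signal)

def relative_temp(hour: float) -> float:
    # Sinusoidal toy rhythm: minimum at TROUGH, maximum 12 hours later.
    return -math.cos(2 * math.pi * (hour - TROUGH) / 24)

def sleep_hours(bedtime: float) -> float:
    # The rule described above: sleep until temperature next starts
    # rising, i.e. until the next trough.
    return (TROUGH - bedtime) % 24

for bed in (17, 23, 3):  # at the peak, late evening, the wee hours
    print(f"bed at {bed:02d}:00 -> ~{sleep_hours(bed):.0f} h sleep "
          f"(relative temp {relative_temp(bed):+.2f})")
# ~12 h, ~6 h, ~2 h: staying up longer yields a *shorter* sleep,
# just as the isolation experiments found.
```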

So even if you were tired after long wakefulness, if you go to bed when temperature will soon rise, that will wake you regardless. This is also the time when cortisol (a hormone) is being pumped out, raising alertness.

This pattern explains a lot of accidents, which tend to occur when people are at work in the wee hours, fighting their body thermometers, with brains not operating optimally. Thus TMI, Chernobyl, Bhopal, Exxon Valdez.

Ever notice how, if you stay awake for a long stretch, you become groggy? But if you push through it and keep awake, the grogginess dissipates and you get a “second wind.” What’s really happening is your body temperature rising back up. Likewise many of us feel drowsy in mid-afternoon. Guess what? Another temperature trough. Temperature is on a regular cycle.

“REM” refers to rapid eye movement, during sleep; it means we’re dreaming. Typically the longest and most intense dreams occur later during the night, before waking. But here again it’s been found that REM sleep does not follow the overall sleep time-picture. Instead, it too is governed by the temperature cycle — occurring just after the body is coldest. That’s why we often seem to wake up from a vivid dream.

But again the question is — how does this work? Do we have some sort of internal clock regulating it? Strogatz says that rather than having a clock, a person might be a clock. That is, such time-keeping is built into every component of the body. Body parts removed and kept alive in a dish still exhibit circadian rhythms.

Yet there seems to be a master clock regulating the whole system, apparently in the part of the brain called the hypothalamus. But how exactly that brain module performs that function remains something of a mystery.

Note that body temperature typically has peaked and is starting to fall just a couple of hours before a typical late evening bedtime. That’s what Strogatz calls a “forbidden zone” where it’s hard to fall asleep. Hence if you go to bed early — for instance, knowing you’ll have to get up early — it doesn’t work. This also accounts for a lot of insomnia — people trying to sleep at the “wrong” times given their body cycles.

But what about light and dark, you say? You’re right. Daylight is indeed a powerful cue that keeps our body clocks constantly adjusting to the outside environment. This is especially important because, as noted, our circadian rhythms are typically set on a schedule slightly longer than 24 hours. Why, is unclear. But without constant readjustment, we’d be haywire. Which in fact afflicts blind people, 80% of whom experience chronic sleep disorders. And the other 20% are apparently not so blind that their photoreceptors can’t register any light at all — even if they cannot “see” it.

Steven Pinker on Rationality

July 8, 2022

(This was my July 5 Albany Library book talk; slightly condensed)

Steven Pinker is a Harvard Professor of Psychology; a 600-pound intellectual gorilla of our times; author of a string of blockbuster books. His latest is Rationality – What It Is, Why It Seems Scarce, Why It Matters.

In 2011, he wrote The Better Angels of Our Nature: Why Violence Has Declined. And I recall a radio interviewer being, like: Pinker, are you out of your mind? Violence declining? But of course that was well supported by evidence.

So now it’s Rationality. And many will similarly say, Pinker, are you out of your mind?

Evidence for human irrationality does abound. And this might seem the worst of times for a book celebrating rationality, with two big elephants in the room stomping on it.

One is American politics. Some voters have always behaved irrationally, yet the system functioned pretty well nevertheless. But now the inmates have taken over the asylum. Or at least one of our two parties; and recalling Yeats’s line: the best lack all conviction, the worst are full of passionate intensity.

Then there’s the international sphere. The Better Angels book emphasized three quarters of a century without wars among major nations. Russia’s Ukraine war blows that up. An assault on rationality.

But maybe, with the world seemingly gone mad, this book on rationality is actually timely.

The core of rationality is logic. Pinker gives the example of a logic puzzle, involving four coins. I’ll omit details; most people get it wrong. But Pinker says we’re better at applying logic when it “involves shoulds and shouldn’ts of human life rather than arbitrary symbols” like in the puzzle. He calls this our “ecological rationality,” our horse sense (though horses don’t have it to anything like our degree).

Here’s a simple logic problem that even many mathematicians, including Paul Erdős, have gotten wrong: the famous Monty Hall problem. On “Let’s Make a Deal,” there are three doors, one hiding a car and two hiding goats. You pick Door #1. Then Monty opens Door #3 to reveal a goat. Should you switch to Door #2? Most people say the one-in-three odds haven’t changed. Wrong! Monty opened Door #3 knowing it had a goat. He didn’t open #2 — which, you therefore now know, has a 2 in 3 chance of hiding the car. So you should switch.
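If the logic still feels slippery, simulation settles it. Here is a minimal Monte Carlo sketch (door positions randomized rather than fixed, but it is the same game):

```python
import random

def play(switch: bool, trials: int = 100_000) -> float:
    wins = 0
    for _ in range(trials):
        car = random.randrange(3)   # where the car is
        pick = random.randrange(3)  # your initial choice
        # Monty opens a door that is neither your pick nor the car.
        opened = next(d for d in range(3) if d != pick and d != car)
        if switch:                  # switch to the one remaining door
            pick = next(d for d in range(3) if d != pick and d != opened)
        wins += (pick == car)
    return wins / trials

print(f"stay:   {play(switch=False):.3f}")  # ~0.333
print(f"switch: {play(switch=True):.3f}")   # ~0.667
```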

Pinker emphasizes that rationality is goal oriented, saying “Do you want things or don’t you? If you do, rationality is what allows you to get them.” This entails using knowledge, which he defines as “justified true belief.” People are again logical and rational (generally) in everyday life, but too often fall down on the “justified true belief” thing.

Pinker suggests that seeking an ultimate philosophical reason for reason is misguided. Any postmodernist’s attempt to argue against reason implicitly concedes that rationality is the standard by which any arguments, even arguments against rationality itself, stand or fall. (Similarly, the assertion that nothing is really true would — if correct — apply to that assertion itself.)

And rationality is not just one among many alternative ways of seeing things. Not, as Pinker puts it, “a mysterious oracle that whispers truths in our ear.” Indeed, “reason is the only way we can know anything about anything.”

There’s a common idea that reason and emotion are separate, at odds with each other. Pinker quotes David Hume that reason is, and should be, “the slave of the passions.” While neuroscientist Antonio Damasio has shown that emotions give us the motivations for deploying reason, so the two are inextricably linked. Then Pinker notes that some of our goals can conflict with others; and “you can’t always get what you want.”

We furthermore have goals we don’t even choose, programmed into our genes by evolution. One rational goal may be a slim, healthy body; also making you more sexually attractive, thus advancing a gene-based goal of reproducing. While conflicting with a desire to eat a delicious dessert — which also serves an ancestral genetic goal, to load up on calories when you can. We use our reasoning minds to mediate among conflicting goals.

But of course not with perfect rationality. Scientific work, notably by Kahneman and Tversky, has revealed many seemingly irrational human cognitive biases. Also programmed into us by evolution, during our long hunter-gatherer past. For example, we fear potential losses more than we value gains. So we may pass up a chance to win $5 if it means an equal chance to lose $4. Sounds irrational. But for our early ancestors, a “loss” could well mean death. And Pinker poses the question: what could happen to you today making you better off? What could happen making you worse off? A lot worse off? So maybe our loss avoidance bias is not so irrational.

And even if, in isolation, some of our ancestral cognitive biases still seem irrational, realize that we’re talking about short-cut heuristics enabling us to make quick intuitive decisions about stuff coming at us every hour of the day. If you had to think your way rationally through all of it, you couldn’t even function. But using that repertoire of innate heuristics, we do function quite well. Making their use quite rational in a broader overall perspective.

Now, what about morality? Hume famously said you can’t get an ought from an is — in other words, how things are (i.e., facts) can’t tell us how they should be (moral laws). Thus there can be no true moral laws, only opinions. Some solve this by invoking God as the source of morality. But that was knocked down by Socrates, in Euthyphro, asking whether something is moral because God says so, or does he say so because it is moral? If the former, why submit to his arbitrary edicts? But if God does have reasons for his moral rules, why not just embrace those reasons and skip the middleman?

Meantime, Pinker says morality is all about how we behave in relation to others. And there we can rationally recognize everyone’s right not to be unjustifiably messed with. If you feel free to bash others, you can’t say they cannot bash you. Thus Pinker posits impartiality as key — nobody’s personal perspective can override those of others. Which is basically the golden rule.

And note that this does not mean self-sacrifice. It’s actually rational from the standpoint of self-interest. Because it makes you feel good about yourself, and also makes a world that’s better for everyone, including you.

There’s a chapter on critical thinking. Pinker catalogs a host of traps we fall into: the straw man argument, moving the goal posts, what-aboutism, ad-hominem arguments, and so forth. Alas such things “are becoming the coin of the realm” in modern intellectual life. And Pinker quotes Leibniz in the 1600s envisioning a world where all arguments would be resolved by saying, “let us calculate.” Lyndon Johnson liked to quote, “Come let us reason together.” Yet Pinker comments that life is not that simple, and doesn’t work by formal logic. We know what words mean, but applying them in the real world can be challenging. You can get in a lot of trouble nowadays trying to define the word “woman.”

Another chapter deals with probability and randomness. Many people have only a vague sense of what probability really entails. Do you fault the weatherman who said there’s a 10% chance of rain, and you get soaked? Or the political analyst who gave Hillary a 70% probability of winning? And we tend to judge an event’s probability by the availability heuristic, another Kahneman-Tversky cognitive bias. That is, we judge how likely something is by how easily examples come to mind. Like with plane crashes.

In a state of nature, lacking better information, that’s not necessarily irrational. But modernity does give us better information, telling us plane crashes are much rarer than car crashes. Yet many people operate on the opposite assumption. And the availability heuristic scares people off from nuclear power — we vividly recall a few high-profile accidents (which actually killed very few) — while ignoring the tens of thousands of deaths caused annually by air pollution from conventional power plants. They don’t come to our attention. Pinker calls the news media an “availability machine,” serving up stories which feed our impression of what’s common in a way that’s sure to mislead. (It’s why people always think crime is rising.)

The book goes through many examples of how we commonly misjudge probabilities. For example, it’s reported that a third of fatal accidents occur at home. Does that mean homes are very dangerous? No; it’s just that we spend a lot of time there. We confuse the probability that a given fatal accident occurred at home with the probability that a fatal accident will occur while at home. Two very different things.
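A toy calculation makes the distinction concrete. All the exposure numbers here are invented for illustration:

```python
# Hypothetical numbers, purely for illustration.
share_of_fatal_accidents_at_home = 1 / 3   # the reported statistic
hours_per_day_at_home = 16                 # assumed exposure times
hours_per_day_elsewhere = 8

# P(at home | fatal accident) = 1/3 sounds scary. But the relevant
# risk is fatal accidents per hour of exposure:
rate_at_home = share_of_fatal_accidents_at_home / hours_per_day_at_home
rate_elsewhere = (1 - share_of_fatal_accidents_at_home) / hours_per_day_elsewhere

print(f"Relative risk per hour at home:   {rate_at_home:.3f}")    # ~0.021
print(f"Relative risk per hour elsewhere: {rate_elsewhere:.3f}")  # ~0.083
```

On these made-up numbers, an hour away from home is four times riskier, even though a third of fatal accidents happen at home.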

Or how about this? A majority of bicycle accidents involve boys. Does that suggest boys ride more recklessly? Or — that boys ride more than girls?

We also overrate the significance of coincidences. I’m often at my computer typing, with the radio on. Is it spooky when I hear a word on the radio just as I’m typing the same word? Not really. I type a lot of words, and hear a lot of words. So such coincidences are bound to occur regularly. Even sometimes with obscure words. My favorite instance: Equatorial Guinea mentioned on the radio just as I was working up a coin from that country. What are the odds? Well, finite.
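A back-of-the-envelope sketch, with every number invented, shows why such matches should happen regularly:

```python
# Invented numbers: in an hour you type ~2,000 words while the radio delivers
# a steady stream, drawn from a shared working vocabulary of ~20,000 words.
# Call it a "coincidence" if one of the ~10 radio words heard within a few
# seconds of typing a word matches that word.
vocabulary = 20_000
radio_words_in_window = 10
typed_words_per_hour = 2_000

p_match_per_typed_word = radio_words_in_window / vocabulary        # 0.0005
p_no_match_all_hour = (1 - p_match_per_typed_word) ** typed_words_per_hour
print(f"Chance of at least one match per hour: {1 - p_no_match_all_hour:.0%}")  # ~63%
```

And since real word frequencies are heavily skewed toward common words, matches are even likelier than this uniform model suggests.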

There’s a chapter on Bayesian reasoning, named for Thomas Bayes, an 18th-century thinker. It’s all about how added information should modify our predictions. Like in the Monty Hall problem: his opening one door added information. A key concept is the “base rate.” Suppose 1% of women have a certain disease. There’s a test for it, 90% accurate. Suppose a woman tests positive. What is the chance she has the disease? Most people, including doctors, give it a high probability — forgetting the base rate, which is again only 1%. Bayesian math here tells us that with a disease that rare, a test 90% accurate will produce about ten times more false positives than true ones. So the gal’s likelihood of having the disease is only about 9%. In Bayesian lingo, the 1% is the “prior” — prior information giving us expectations we modify with further information — the test.
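The arithmetic is easy to verify. A minimal sketch, assuming “90% accurate” means both 90% sensitivity and 90% specificity:

```python
prior = 0.01         # base rate: 1% of women have the disease
sensitivity = 0.90   # P(positive test | disease)
specificity = 0.90   # P(negative test | no disease); my assumption, "accuracy" is vague

true_positives = prior * sensitivity               # 0.009 of all women
false_positives = (1 - prior) * (1 - specificity)  # 0.099, about ten times as many

posterior = true_positives / (true_positives + false_positives)
print(f"P(disease | positive test) = {posterior:.1%}")  # ~8.3%, Pinker's "about 9%"
```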

One of the most hated theories of our time, Pinker says, is “rational choice theory.” Associated with Homo Economicus, the idea that people act to maximize self-interest. Well, of course we know they do; yet don’t always. Pinker cites an experiment where money-filled wallets were dropped, and most got returned. However — was that really against self-interest? Again, most people feel good about themselves when doing the right thing; ashamed and guilty otherwise. And what is life about, if not feelings? Pinker comments that rational choice theory “doesn’t so much tell us how to act in accord with our values as how to discern our values by observing how we act.”

So far I’ve talked about making decisions and choices for ourselves. But it’s another thing when dealing with someone else who’s also trying to maximize their self-interest. This is game theory, which Pinker says explains a lot of behavior that might seem irrational. He mentions the game of chicken, which I once wrote a poem about:

Here’s the trick to playing chicken:

You just keep driving straight,

And don’t swerve, ever.

The other guy will always swerve first.

You’ve got to be crazier than the other guy.

And if the other guy is crazier than you,

And doesn’t swerve,

And you’re killed in a fiery crash,

So be it.

The classic illustration for game theory is “the prisoner’s dilemma.” Two partners in crime are interrogated separately. Each is told that if he rats on the other, he’ll go free, and the other gets ten years. If both talk, each gets six years. If neither talks, each gets six months. So collectively they’re better off staying mum, but only if both do, and neither knows what the other will do. Self-interest for each says talk. And if both talk, they’re screwed with six-year sentences.

There’s seemingly no good solution. But if the game is repeated, it turns out the best strategy is tit-for-tat — betraying a partner only if they previously betrayed you. And in fact much of human social life resembles this. We indeed behave toward others like it’s a repeated series of prisoner’s dilemmas; and that’s why social cooperation tends to prevail. We still can get “the tragedy of the commons,” where individual self-interest ruins things for everybody. But that’s not actually so common. People mostly restrain themselves.
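Here's a minimal simulation of the repeated game (my sketch, using the sentences above as costs), showing tit-for-tat sustaining cooperation with a fellow cooperator while limiting the damage from a chronic betrayer:

```python
# Payoffs as years in prison (lower is better), matching the story above:
# both silent: 0.5 each; both talk: 6 each;
# one talks while the other stays silent: talker 0, silent partner 10.
COSTS = {
    ("silent", "silent"): (0.5, 0.5),
    ("talk",   "talk"):   (6.0, 6.0),
    ("talk",   "silent"): (0.0, 10.0),
    ("silent", "talk"):   (10.0, 0.0),
}

def tit_for_tat(opponent_history):
    # Stay silent first; thereafter copy the opponent's last move.
    return opponent_history[-1] if opponent_history else "silent"

def always_talk(opponent_history):
    return "talk"

def play(strategy_a, strategy_b, rounds=20):
    history_a, history_b = [], []
    total_a = total_b = 0.0
    for _ in range(rounds):
        move_a = strategy_a(history_b)  # each player sees only the other's moves
        move_b = strategy_b(history_a)
        cost_a, cost_b = COSTS[(move_a, move_b)]
        total_a += cost_a
        total_b += cost_b
        history_a.append(move_a)
        history_b.append(move_b)
    return total_a, total_b

print(play(tit_for_tat, tit_for_tat))  # (10.0, 10.0): steady cooperation
print(play(tit_for_tat, always_talk))  # (124.0, 114.0): suckered once, then mutual defection
```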

Next topic: correlation does not mean causation. The concept of causation, says Pinker, is at the heart of science — figuring out the true causes of things. So we can do something about them.

Pinker likes to put some humor in his books. A husband couldn’t satisfy his wife in bed. They consult a rabbi. He suggests they hire a buff young man to wave a towel over them in bed. It doesn’t work. So next the rabbi suggests switching: the young man shtups the wife while the husband waves the towel. And lo, great results. So the husband declares to the young man: “Schmuck! Now that’s how you wave a towel.”

And that’s how Pinker illustrates the concept of causation.

So finally he gets to the question: what’s wrong with people? Saying we have a “pandemic of poppycock.” Belief in Satan, miracles, ESP, ghosts, astrology, UFOs, homeopathy, QAnon, 2020 election fraud, replacement theory. And when science produced one of its greatest near-miracles — Covid vaccines — a lot of Americans said no thanks.

Pinker acknowledges that all the logical and cognitive pitfalls he discussed play some role. But none of that could have predicted QAnon. He also won’t blame social media, pointing out that conspiracy theories and viral falsehoods are probably as old as language. Look at the Bible — talk about fake news. Meantime, even the most flagrant conspiracy mongers still behave, in mundane day-to-day life, with great rationality. So what indeed is going on?

For one thing, rationality can be a nuisance, producing unwelcome answers. Pinker quotes Upton Sinclair: it’s hard to get someone to understand something if their income depends upon their not understanding it. So we use motivated reasoning to reach a preferred conclusion. Indeed, Pinker says the true adaptive function of our reasoning ability may be to win arguments: “We evolved not as intuitive scientists but as intuitive lawyers.” Thus confirmation bias: we embrace any supposed information that confirms a cherished belief, while dismissing or disregarding anything discordant.

However, Pinker suggests the rational pursuit of goals needn’t necessarily encompass “an objective understanding of the world.” Which might conflict with, for example, a goal of fitting in with your peer group (a big propellant for confirmation bias). Pinker calls this “expressive rationality” — adopting beliefs based not on truth but as expressions of a person’s moral and cultural identity. (A related word perhaps strangely doesn’t appear in the book: groupthink.)

Pinker focuses here on our political polarization, between what have really become “sociocultural tribes.” Resembling religious sects “held together by faith in their moral superiority and contempt for opposing sects.” True of the woke left, but especially Republicans, now epitomizing members of a religious cult — whose sense of selfhood depends upon their not understanding that their deity is a stinking piece of shit.

But most Americans actually consider themselves less susceptible to cognitive biases than the average person. That’s the Dunning-Kruger effect — people with deficient thinking skills lack the thinking skill to recognize their own deficiency.

So Pinker says the paradox of how we can be both so rational and so irrational lies in self-aggrandizing motivation. Just as the core of morality is impartiality, likewise with rationality, one must transcend self-interest. I try to apply an ideology of reality — shaping my beliefs on the facts I see — rather than letting my beliefs shape the facts I see. But that does not come naturally to most people.

As Pinker notes, the most obvious counter-example is religion. Yet this book about rationality has relatively little to say about religion. Perhaps Pinker feared turning too many people away from his message. But “faith,” as Mark Twain put it, means believing what you know ain’t so. Believing things despite lack of evidence; even in defiance of evidence. And Pinker does say, “I don’t believe in anything you have to believe in.”

But what does it really mean to believe something anyway? An interesting question. Many religious people believe they’re going to Paradise. Yet few are in any hurry to depart. Pinker distinguishes beliefs consciously constructed versus intuitive convictions we feel in our bones. And we divide the world into two zones: hard factual reality, where our beliefs tend to be accurate and we act rationally; and a zone where reality is more elusive, a zone of mythology, not undermining our day-to-day functioning. There, even holding a false belief can be rational in the sense of serving certain goals — making one feel good, tribal solidarity again, or avoiding fear of death.

Pinker does fault our society for failing to sufficiently inculcate some of science’s foundational principles (which contradict religion): that the universe is indifferent to human concerns, that everything is governed by basic laws and forces, that the mind is something happening in the brain. Thus ruling out an immortal soul.

But, ever the optimist, he also reminds us how much rationality is actually out there. Some people distrust vaccines, but not antibiotics (and so much else in modern medicine and science). And culture can evolve. Ours has evolved tremendously; a lot of what was acceptable not so long ago is no longer acceptable. (There may be some overcorrection.)

It’s a battle against what Pinker sees as a “tragedy of the rationality commons.” Wherein self-interested and self-motivated argumentation gobbles up all the space. Yet he thinks the greater community can mobilize against this; for example, internet media in particular have awakened to the problems, roused by two big recent alarm bells: misinformation about Covid, threatening public health, and about the 2020 election result, threatening our democracy.

The final chapter is titled “Why Rationality Matters.” As if that still needs answering. Pinker presents a whole catalog of how common mistakes of rationality cause concrete harm. He cites one study identifying 368,000 people killed between 1970 and 2009 from blunders in critical thinking. I said to myself: really? Only 368,000? And of course countless Americans died from Covid irrationality.

Yet still, immense technological progress, improving quality of life (and its length), has been achieved through rationality. Likewise our moral progress, in a great roll-back of cruel and unjust practices. Pinker says that in researching this, his greatest surprise was how often the first domino was reasoned argument. Very powerful after all.

I would add that globally speaking, a huge factor propelling human rationality has been the spread of education (the Dunning-Kruger effect notwithstanding).

Well, it might seem like I’ve veered back and forth between positive and negative. But I’ll conclude with the book’s final words: “The power of rationality to guide moral progress is of a piece with its power to guide material progress and wise choices in our lives. Our ability to eke increments of well-being out of a pitiless cosmos and to be good to others despite our flawed nature depends on grasping impartial principles that transcend our parochial experience.”

That is, rationality. Humanity’s best idea.

My book talk Tuesday: Pinker on Rationality

June 29, 2022

On Tuesday, July 5, at noon, I will present a review of Steven Pinker’s new book, Rationality, at the Albany Public Library, 161 Washington Avenue. This is a live, in-person event!

The book covers all aspects of how people employ reason — and how and why, often, we don’t.