Archive for the ‘Science’ Category

Coronavirus realities

March 24, 2020

Trump, having previously said the economic shutdown could last till August, now wants a return to normalcy much sooner. (Much sooner than medical experts recommend.)

Actually we’re only just beginning to see how bad things are. The Economist’s latest issue (as usual) provides much clarity.

COVID-19 is very contagious, and the containment measures look too little too late because the virus is already very widespread. The swiftly rising number of reported cases is likely just the tip of an iceberg. Many infected people don’t show symptoms right away, if ever, but meantime can infect others.

Our efforts might, in a couple of weeks, appear to bend the curve down. But the problem is that a majority of the population won’t have been infected, hence won’t have developed immunity, and the virus won’t have disappeared from the landscape. This means that after Trump declares victory and restrictive measures are relaxed, the virus will likely spike back up — necessitating a reimposition of restrictions. “This on-off cycle,” says The Economist, “must be repeated until either the disease has worked through the population or there is a vaccine which could be months away, if one works at all.”

This virus, while new, is not a fundamentally different creature from others of its ilk, so in principle previous methods to create vaccines should succeed. But before then, most of our population could contract the illness. As we know, most would have only minor symptoms, or none. But even a death rate below 1% could still be expected to kill a million or two.
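To put rough numbers on that (purely illustrative assumptions on my part, not figures from The Economist): with a U.S. population of about 330 million, an eventual attack rate of 60%, and an infection fatality rate of 0.7%,

$$330{,}000{,}000 \times 0.6 \times 0.007 \approx 1.4 \text{ million deaths}.$$

Nudge the assumed rates within plausible ranges and you land anywhere in that "million or two" range.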

Of course, besides a vaccine, a medicine to treat the illness would change everything. While some candidates are being tested, we don’t have a treatment yet.

Note that — barring the virus's complete eradication (practically impossible) — the more effective a shutdown is in preventing infections, the worse will be the second wave after the relaxation, because the virus will have so many potential new victims without immunity. Imperial College London built a set of models (reported by The Economist) showing this effect after five months of restrictions. If the restrictions include school closures, the modeled second wave is even more severe. (China may soon be putting this to the test.) Governments need to be candid about this prospect, instead of encouraging us to imagine the whole thing will just go away in due course.
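To make the on-off dynamic concrete, here is a minimal toy SIR sketch in Python (my own illustration with made-up parameters, not the Imperial College model). It shows how infections rebound into a second wave when restrictions are lifted while most of the population is still susceptible.

```python
# Toy SIR model: s, i, r are the fractions susceptible, infected, recovered.
def sir(beta_of_day, gamma=0.1, days=400, i0=1e-4):
    s, i, r = 1.0 - i0, i0, 0.0
    infected = []
    for day in range(days):
        beta = beta_of_day(day)              # transmission rate in force that day
        new_inf = beta * s * i
        new_rec = gamma * i
        s, i, r = s - new_inf, i + new_inf - new_rec, r + new_rec
        infected.append(i)
    return infected

# Unmitigated spread vs. a strict five-month lockdown (days 30-180) that is then relaxed.
unmitigated   = sir(lambda d: 0.3)                               # R0 = 3 throughout
with_lockdown = sir(lambda d: 0.08 if 30 <= d < 180 else 0.3)    # epidemic shrinks only while locked down

print(f"peak share infected, unmitigated:           {max(unmitigated):.1%}")
print(f"peak share infected after restrictions end: {max(with_lockdown[180:]):.1%}")
```

In a run like this the second-wave peak lands not far below the unmitigated one, because the lockdown leaves most of the population still susceptible.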

I have argued that we really have no choice but to accept severe economic pain to avoid a nightmare scenario of a health system unable to handle a flood of illnesses so that many thousands die simply from lack of care. That’s starting to look likely despite our best efforts. Realize not just coronavirus victims will be affected — hospitals won’t be able to treat accidents, heart attacks, anything else. And, says The Economist, “the bitter truth is that [those containment efforts] may be economically unsustainable. After a few iterations governments might not have the capacity to carry businesses and consumers. Ordinary people might not tolerate the upheaval. The cost of repeated isolation, measured by mental well-being and the long-term health of the rest of the population, might not justify it.”

An agonizing dilemma. But The Economist also says it can be mitigated by a massive testing regime and use of technology to trace contacts and identify who really needs quarantining. As South Korea and China have done.

Trump keeps patting himself on the back for his early restrictions on travel from China and, later, Europe. That may indeed have helped slow the virus’s spread. However, it was already underway before the travel bans, so it was delusional to think they solved the problem. What was really needed was what South Korea did — again, massive testing, right away.

But even to this day, we’re still not doing that. Still only starting to ramp up toward it.

As The Economist’s “Lexington” columnist (on American affairs) writes, this testing inadequacy stems at least partly from the Trump administration’s “decision to scrap the NSC’s dedicated pandemic unit” (established under Obama). He also points to its “sticking with a faulty viral test when the WHO could have provided a working alternative.” (As South Korea used. The tests mostly in use here now, still way too few, can also take up to ten days to return results — almost useless in this fast-moving pandemic.) Lexington also points to overall White House dysfunctionality, and concludes: “a stunning catalog of failure.”

Add in Trump’s fountain of false and misleading information, which delayed most Americans’ taking the problem seriously. Last Wednesday he belatedly invoked the Defense Production Act, enabling government to require industries to produce stuff needed in an emergency. We’re desperately short on respirators and protective gear. But just signing an order, with Trump’s posturing flamboyance, actually produces nothing, absent follow-through. And it is absent. Trump seems to imagine he’ll nevertheless make the needed items magically appear.

Trump (never able to admit error) now claims he knew very early this would be a pandemic. Contradicting his own previous statements. And raising the question: if he knew so early, why was our response, particularly on testing, so dilatory?

The harsh truth: South Korea’s infection began at exactly the same time as ours. Had we done what South Korea did, we might have avoided the need for economic restrictions as extreme as those now in force, which may well fail anyway. And avoided literally trillions in costs and losses and untold human suffering. And of course a vast number of deaths soon to occur.

Trump bears terrible blame for this catastrophe. As do Americans who voted for such a person.

Suppose there were some disease that would somehow disproportionately take out Republicans. Well, here it is. They do tend to be much older on average. But moreover, many Trump fans who took on board his early pooh-poohing of the virus still treat it less seriously than even he does now; thus are more likely to expose themselves to infection and death.

On the other hand, this thing is bollixing up voting, and Republicans will take advantage to make casting ballots harder — especially for Democrats. We must be vigilant lest our democracy be another casualty of COVID-19.

Artificial Intelligence and our ethical responsibility

March 16, 2020

(A virus-free and Trump-free post. (At least until I added this.))

Artificial Intelligence (AI) was originally conceived as replicating human intelligence. That turns out to be harder than once thought. What is rapidly progressing is deep machine learning, with resulting artificial systems able to perform specific tasks (like medical diagnosis) better than humans. That’s far from the integrated general intelligence we have. Nevertheless, an artificial system achieving that kind of general intelligence may yet arrive. Some foresee a coming “singularity” when AI surpasses human intelligence and then takes over its own further evolution. Which changes everything.

Much AI fearmongering warns this could be a mortal threat to us. That superior AI beings could enslave or even eliminate us. I’m extremely skeptical toward such doomsaying; mainly because AI would still be imprisoned under human control. (“HAL” in 2001 did get unplugged.) Nevertheless, AI’s vast implications raise many ethical issues, much written about too.

One such article, with a unique slant, was by Paul Conrad Samuelsson in Philosophy Now magazine. He addresses our ethical obligations toward AI.

Start from the question of whether any artificial system could ever possess a humanlike conscious self. I’ve had that debate with David Gelernter, who answered no. Samuelsson echoes my position, saying “those who argue against even the theoretical possibility of digital consciousness [disregard] that human consciousness somehow arises from configurations of unconscious atoms.” While Gelernter held that our neurons can’t be replicated artificially, I countered that their functional equivalent surely can be. Samuelsson says that while such “artificial networks are still comparatively primitive,” eventually “they will surpass our own neural nets in capacity, creativity, scope and efficiency.”

And thus attain consciousness with selves like ours. Having the ability to feel — including to suffer.

I was reminded of Jeremy Bentham’s argument against animal cruelty: regardless of whatever else might be said of animal mentation, the dispositive fact is their capacity for suffering.

Samuelsson considers the potential for AI suffering a very serious concern. Because, indeed, with AI capabilities outstripping the human, the pain could likewise be more intense. He hypothesizes a program putting an AI being into a concentration camp, but on a loop with a thousand reiterations per second. Why, one might ask, would anyone do that? But Samuelsson then says, “Picture a bored teenager finding bootlegged AI software online and using it to double the amount of pain ever suffered in the history of the world.”

That may still be far-fetched. Yet the next passage really caught my attention. “If this description does not stir you,” Samuelsson writes, “it may be because the concept of a trillion subjects suffering limitlessly inside a computer is so abstract to us that it does not entice our empathy. But this itself shows us” the problem. We do indeed have a hard time conceptualizing an AI’s pain as remotely resembling human pain. However, says Samuelsson, this is a failure of imagination.

Art can help here. Remember the movie “Her”? (See my recap: https://rationaloptimist.wordpress.com/2014/08/07/her-a-love-story/)

Samantha, in the film, is a person, with all the feelings people have (maybe more). The fact that her substrate is a network of circuits inside a computer rather than a network of neurons inside a skull is immaterial. If anything, her aliveness did finally outstrip that of her human lover. And surely any suffering she’s made to experience would carry at least equal moral concern.

I suspect our failure of imagination regarding Samuelsson’s hypotheticals is because none of us has ever actually met a Samantha. That will change, and with it, our moral intuitions.

AI rights are human rights.

Coronavirus/Covid19: Don’t panic, it’s just flu

March 9, 2020

It may or may not be a pandemic, but it is certainly a panic. A huge chunk of Italy, including Milan and Venice, is locked down, as is much of Washington State. Financial markets have freaked out, anticipating economic damage (mostly not from disease but from measures combating it).

Our federal government’s response so far is shambolic. Test kits: too little too late. Moronic Trump spews misinformation and utilizes the occasion to bash enemies.

China’s draconian restrictions on freedom seem to have gotten the spread under control. One worries about countries with governments even less competent than Trump’s. (Yes, there are many.)

A problem is that an infected person is symptomless for a while, so can infect many others before detection.

Okay. Now let’s please get a grip.

So far, coronavirus has caused something over 100,000 illnesses and 3000 deaths worldwide. It’s an ailment much like ordinary flu, so most cases are relatively mild and clear up by themselves. Both illnesses kill mostly people already in frail health.

In the U.S. alone, ordinary common flu this season has thus far caused at least 32 million illnesses, 310,000 hospitalizations, and 20,000 deaths.

Coronavirus does seem to have a somewhat higher death rate, but it’s still a very small percentage and the vast majority of victims recover. Coronavirus also does seem somewhat more infectious. On both measures, researchers are still trying to get an accurate fix. But it’s clear that though, on a case-by-case basis, coronavirus is more dangerous, it is not dramatically more dangerous.

And even if coronavirus is more contagious than ordinary flu, your chances of catching the latter, in the U.S., are hundreds of times greater simply because there are vastly more carriers. That could conceivably change, but coronavirus would have to metastasize humongously before it would actually be a U.S. health threat rivaling ordinary flu.
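Stated roughly, and on the simplifying assumption that per-encounter transmissibility is broadly comparable,

$$\frac{P(\text{catch seasonal flu})}{P(\text{catch coronavirus})} \approx \frac{\text{number of active flu carriers}}{\text{number of active coronavirus carriers}},$$

and right now that ratio of carriers in the U.S. is enormous.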

So why the panic over coronavirus, but not ordinary flu?*

As ever, human psychology is very bad at rationally gauging threats. After 9/11, millions felt safer driving than flying, though the risk on the roads was hugely greater (even counting the terrorism factor). People feel safer driving because they imagine they have control, unlike on an airplane. In the case of flu, the control factor is represented by vaccines, though in reality their effectiveness is limited. Another factor is familiarity. Driving, and seasonal flu, are thoroughly familiar. Unfamiliarity makes airplane terrorism, and coronavirus, seem more scary.

So we have TSA, and drastic efforts to contain coronavirus. Similarly strong measures could prevent tens of thousands of deaths annually from car crashes and ordinary flu, not to mention guns, but most Americans just yawn.

Government might do better at calming the coronavirus panic by calling it just “flu.”

* Actually, measures combating coronavirus will probably prevent larger numbers of flu deaths as a side effect.

What we eat: The Omnivore’s Dilemma (Part I)

January 2, 2020

Michael Pollan is a food thinker and writer. Not a restaurant reviewer; he looks at the big picture of what we eat in The Omnivore’s Dilemma. (Carnivores eat meat; herbivores eat plants; omnivores eat both.)

The book is a smorgasbord of investigative reporting, memoir, analysis, and argument. Pollan does have a strong point of view; cynics, pessimists and misanthropes will find much fodder here. But Pollan is no fanatical purist ideologue. We saw him on a TV piece summing up with this core advice: “Eat real food, not too much, mostly plants.” Seems pretty reasonable.

He’s a lovely writer. Here’s a sample, concluding the first of the book’s three parts, talking (perhaps inevitably) about McDonald’s:

“The more you concentrate on how it tastes, the less like anything it tastes. I said before that McDonald’s serves a kind of comfort food, but after a few bites I’m more inclined to think they’re selling something more schematic than that — something more like a signifier of comfort food. So you eat . . . hoping somehow to catch up to the original idea of a cheeseburger, or French fry, as it retreats over the horizon. And so it goes, bite after bite, until you feel not satisfied exactly, but simply, regrettably full.”

I might disagree with his evaluation, but man, this guy can write.

That first third of the book is all corn. In fact, if “you are what you eat,” we are all corn (well, mostly). Don’t think you eat much corn? Think again. As Pollan explains, a high proportion of our food is derived from corn; even our meat, the animals being mostly corn-fed. Pollan argues that, rather than humans domesticating corn, corn domesticated us. Viewed biologically, that species exploits us to spread itself and increase its population.

Pollan sees food industry economic logic driving us toward a kind of craziness. When the government started intervening in farm produce markets, the aim was to support prices by preventing overproduction. Remember farmers paid not to grow stuff? But in the 1970s that reversed, with the system now incentivizing ever higher yields, aided by technological advances. The resulting glut, in a free market, should drive prices down, signaling producers to cut back. However, if farm prices fall below a certain floor, the feds give farmers checks to make up the difference. Thus their incentive now is to just grow as much as possible, no matter what.

But, even with that government guarantee, Pollan shows, most farmers can barely eke out a living after costs; the bulk of the profit from corn is actually swallowed by big middleman corporations like ADM and Cargill.

Meantime it’s a challenge to market all that corn. That’s why so much goes to animal feed. The industry has also cajoled the government to require using some in gasoline (ethanol), which actually makes neither economic, operational, nor environmental sense. But it does eat up surplus corn.

Part of the marketing challenge is that while for most consumer goods you can always (theoretically at least) get people to buy more, there’s a limit to how much a person can eat. So with U.S. population growth only around 1%, it’s hard for the food industry to grow profits by more than that measly percentage. But, in Pollan’s telling, it’s been fairly successful in overcoming that obstacle. This contributes, of course, to an obesity epidemic.

The abundance and consequent (governmentally subsidized) cheapness of corn figures large here. It goes into a lot of foods like soft drinks (yes, full of corn too!) that also attract us by their sweetness. Unsurprisingly, lower income consumers in particular go for such tasty fare that’s also cheap — buying what provides the most calories per budgetary dollar.

But the main driver of obesity is simple biology. We evolved in a world of food scarcity, hence with a propensity to load up when we could, against lean times sure to come. Thus programmed to especially crave calorie-rich sweet stuff. But it being no longer scarce, indeed ubiquitous, no wonder many get fat.

Pollan extensively discusses “organic” food. Largely a victim of its own success. “Organic” is a brilliant marketing ploy; it sounds so good. And farming that conforms to the original purist vision of what “organic” should mean may be environmentally cuddlier than conventional farming (though there are tradeoffs, one being greater acreage required). However, in practice, stuff in stores labeled “organic” is not produced all that differently. A key reason is that once “organic” took off and became big business, producers had to use many of the same large-scale industrial practices of conventional farming. Small operators can’t compete. Another is that producers lobbied hard over the USDA rules for “organic” labeling, winning themselves more leeway. Pollan cites, for example, a rule saying cows must have “access to pasture.” Sounds nice, but if you think about it, what does it really mean? If anything? Here, and in much of the rulebook, there aren’t real rules.*

Pollan muses that salad might seem our most natural kind of eating. But it gives him cognitive dissonance when considering the complex industrial processes that actually put it on our plates. An organic salad mix takes 57 calories of fossil fuel energy for every calorie of food. If grown conventionally, it would be just 4% more. Bottom line: by and large, “organic” is a pretty meaningless label. (Wifey take note.)

However, Pollan chronicles his stint at one actual farm that might be called beyond organic. This read to me like one of those old-time utopia novels. And that farm is actually extremely efficient. But its model doesn’t seem scalable to the industrial level needed to feed us all. Also, it’s extremely labor- and brain-intensive. Few farmers today are up for that.

The farmer profiled there opined that government regulation is the single biggest impediment to spreading his approach. It gives USDA inspectors conniptions. Pollan shows how the whole government regulatory recipe is geared to bigness. One example: a slaughtering facility must have a restroom reserved for the government inspector alone.

The book also delves deeply into the ethics of eating animals, a fraught issue. I will address that separately soon.

* Well, there are some, like no antibiotics. Today’s organic farming is a sort of kludge — Pollan likens it to trying to practice industrial agriculture with one hand tied behind your back.

Probability, coincidence, and the origin of life

November 30, 2019

The philosopher Epicurus was shown a wall of pictures — told, reverently, they portrayed sailors who, in storms, prayed to the gods and were saved. “But where,” he asked, “are the pictures of those who prayed and drowned?”

He was exposing the mistake of counting hits and ignoring misses. It’s common when evaluating seemingly paranormal, supernatural, or even miraculous occurrences. Like when some acquaintance appears in a dream and then you learn they’ve just died. Was your dream premonitory? But how often do you dream of people who don’t die? As with Epicurus, this frequently applies to religious “miracles” like answered prayers. We count the hits and ignore the many more unanswered prayers.

I usually work with the radio on. How often do you think I’ll write a word while hearing the same word from the radio? (Not common words, of course, like “like” or “of course.”) In fact it happens regularly, every few days. Spooky? Against astronomical odds? For a particular word, like “particular,” the odds would indeed be very small. But the open-ended case of any word matching is far less improbable. Recently it was “Equatorial Guinea!” Similarly, the odds of any two people’s birthdays matching are about one in 365. But how many must there be in a room before two birthdays likely match? Only 23! This surprises most folks — showing we have shaky intuitions regarding probability and coincidence. Most coincidences are not remarkable at all, but expectable, like my frequent radio matches.
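The birthday figure is easy to check; here’s a quick sketch (assuming 365 equally likely birthdays and ignoring leap years):

```python
# Probability that at least two of n people share a birthday.
def shared_birthday(n):
    p_all_distinct = 1.0
    for k in range(n):
        p_all_distinct *= (365 - k) / 365   # person k+1 must avoid the k birthdays already taken
    return 1 - p_all_distinct

print(f"{shared_birthday(23):.1%}")   # about 50.7% -- better than even odds with just 23 people
```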

So what does all this have to do with the origin of life? I recently began discussing Dawkins’s book, The Blind Watchmaker, and life’s having (almost certainly) begun with a fairly simple molecular structure, naturally occurring, with the characteristic of self-duplication. Dawkins addresses our intuition that that’s exceedingly improbable.

The essence of evolution by natural selection is, again, small incremental steps over eons of time, each making beneficiaries a bit likelier to survive and reproduce. The replicator molecule utilized by all life is DNA,* which maybe can’t be called “simple” — but Dawkins explains that DNA could itself have evolved in steps, from simpler precursors — non-living ones.

Indeed, non-living replication is familiar to us. That’s how crystals form. They grow by repeating a molecular structure over and over. (I’ve illustrated one we own — trillions of molecules creating a geometrical object with perfectly flat sides.) Dawkins writes of certain naturally occurring clays with similar properties, which could plausibly have been a platform for evolving the more elaborate self-replicators that became life.

Maybe this still seems far-fetched to you. But Dawkins elucidates another key insight relevant here.

Our brains evolved (obviously) to navigate the environment we lived in. Our abilities to conceptualize are tailored accordingly, and don’t extend further (which would have been a waste of biological resources). Thus, explains Dawkins, our intuitive grasp of time is grounded in the spectrum of intervals in our everyday experience — from perhaps a second or so at one end to a century or two at the other. But that’s only a tiny part of the full range, which goes from nanoseconds to billions of years. We didn’t need to grasp those. Likewise, our grasp of sizes runs from perhaps a grain of sand to a mountain. Again, a tiny part of the true spectrum, an atom being vastly smaller, the galaxy vastly larger. Those sizes we never needed to imagine — and so we really can’t.

This applies to all very large (or small) numbers. Our intuitions about probability are similarly circumscribed.

If you could hypothetically travel to early Earth, might you witness life beginning — as I’ve explained it? Of course not. Not in a lifetime. The probability seems so small it feels like zero. And accordingly some people just reject the idea.

Suppose it’s so improbable that it would only occur once in a billion years. But it did have a billion years to happen in! Over that span, a once-in-a-billion-years event is hardly unlikely.
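Reading “once in a billion years” as a one-in-a-billion chance in any given year, the probability of it happening at least once somewhere in a billion years is

$$1 - \left(1 - 10^{-9}\right)^{10^{9}} \approx 1 - e^{-1} \approx 0.63,$$

better than even odds.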

The odds against winning the lottery are also astronomical. Our human capacity to grasp such probabilities is, again, so limited that many people play the lottery with no clue about the true smallness of their chances. Yet people win the lottery. And I had my “Equatorial Guinea” coincidence.

And what’s the probability that life did not evolve naturally, along general lines I’ve suggested, but was instead somehow deliberately created by a super-intelligent being of unimaginable power — whose existence in the first place nobody can begin to account for?

Surely zero; a childishly absurd idea. As Sherlock Holmes said, once you eliminate the impossible, whatever remains, howsoever improbable, must be the truth. But the Darwinian naturalistic theory of life is not at all improbable or implausible. There’s tons of evidence for it. And even if there weren’t, Dawkins observes, it would still be the only concept capable of explaining life. Not only is it true, it must be true.

* That all living things use the same DNA code makes it virtually certain that all had a common ancestor. Your forebears were not, actually, monkeys; but the ancestors of all humans, and of all monkeys, were fish.

Evolution: The Blind Watchmaker and the bat

November 24, 2019

What Is It Like to Be a Bat? is a famous essay (one I keep coming back to) by philosopher Thomas Nagel. Its point is our difficulty in grasping — that is, constructing an intuitively coherent internal model of — the bat experience. Because it’s so alien to our own.

Biologist Richard Dawkins, though, actually tackles Nagel’s question in his book The Blind Watchmaker. The title refers to William Paley’s 1802 Natural Theology, once quite influential, arguing for what’s now called “intelligent design.” Paley said if you find a rock in the sand, its presence needs no explanation; but if you find a watch, that can only be explained by the existence of a watchmaker. And Paley likens the astonishing complexity of life forms to that watch.

I’ve addressed this before, writing about evolution. Paley’s mistake is that a watch is purpose-built, which is not true of anything in nature. Nature never aimed to produce exactly what we see today. Instead, it’s an undirected process that could have produced an infinitude of alternative possibilities. What we have are the ones that just happened to fall out of that process — very unlike a watch made by a watchmaker.

However, it’s not mere “random chance,” as some who resist Darwinism mistakenly suppose. The random chance concept would analogize nature to a child with a pile of lego blocks, tumbling them together every which way. No elegant creation could plausibly result. But evolution works differently, through serial replication.

It began with an agglomeration of molecules, a very simple naturally occurring structure, but having one crucial characteristic: a tendency to duplicate itself (using other molecules floating by). If such a thing arising seems improbable, realize it need only have occurred once. Because each duplicate would then be making more duplicates. Ad infinitum. And as they proliferate, slight variations accidentally creeping in (mutations) would make some better at staying in existence and replicating. That’s natural selection.

Dawkins discusses bats at length because the sophistication of their design (more properly, their adaptation) might seem great evidence for Paleyism.

Bats’ challenge is to function in the dark. Well, why didn’t they simply evolve for daytime? Because that territory was already well occupied, and there was a living to be made at night — for a creature able to cope with it.

Darkness meant usual vision systems wouldn’t work. Bats’ alternative is echolocation — sonar. They “see” by emitting sound pulses and using the echoes to build, in their brains, a model of their outside environment. Pulses are sent between ten and 200 times per second, each one updating the model. Bat brains have developed the software to perform this high speed data processing and modeling, on the fly.

Now get this. Their signals’ strength diminishes with the square of the distance, both going out and coming back. So the outgoing signals must be quite loud (fortunately beyond the range of human hearing) for the return echoes to be detectable. But there’s a problem. To pick up the weak return echoes, bat ears have to be extremely sensitive. But such sensitive ears would be wrecked by the loudness of the outgoing signals.
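A back-of-envelope way to see why the outgoing chirp must be so loud (ignoring the target’s size, reflectivity, and absorption): if intensity spreads out as the inverse square of distance on each leg of the trip, the echo returning from a target at distance r scales as

$$I_{\text{echo}} \propto \frac{1}{r^{2}} \cdot \frac{1}{r^{2}} = \frac{1}{r^{4}},$$

so doubling the distance cuts the returning signal to one-sixteenth.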

So what to do? Bats turn off their ears during each outgoing chirp, and turn them on again to catch each return echo. Ten to 200 times a second!

Another problem: Typically there’s a zillion bats around, all creating these echoes simultaneously. How can they distinguish their own from all those others? Well, they can, because each has its own distinctive signal. Their brain software masters this too, sorting their own echoes from all the background noise.

The foregoing might suggest, a la Nagel, that the bat experience is unfathomable. Our own vision seems a much simpler and better way of seeing the world. But not so fast. Dawkins explains that the two systems are really quite analogous. While bats use sound waves, we use light waves. However, it’s not as though we “see” the light directly. Both systems entail the brain doing a lot of processing and manipulation of incoming data to build a model of the outside environs. And the bat system does this about as well as ours.

Dawkins imagines an alien race of “blind” batlike creatures, flabbergasted to learn of a species called humans actually capable of utilizing inaudible (!) rays called “light” to “see.” He goes on to describe our very complex system for gathering light signals, and transmitting them into the brain, which then somehow uses them to construct a model of our surroundings which, somehow, we can interpret as a coherent picture. Updated every fraction of a second. (Their Nagel might write, “What is it like to be a human?”)*

A Paleyite would find it unimaginable that bat echolocation could have evolved without a designer. But what’s really hard for us to imagine is the immensity of time for a vast sequence of small changes accumulating to produce it.

Dogs evolved (with some human help) from wolves over just a few thousand years; indeed, with variations as different as Chihuahuas and Saint Bernards. And we’re scarcely capable of grasping the incommensurateness between those mere thousands of years and the many millions over which evolution operates.

Remember what natural selection entails. Small differences between two species-mates may be a matter of chance, but what happens next is not. A small difference can give one animal slightly better odds of reproducing. Repeat a thousand or a million times and those differences grow large; likewise a tiny reproductive advantage also compounds over time. It’s not a random process, but nor does it require an “intelligent designer.”

Dawkins gives another example. Imagine a mouse-sized animal, where females have a slight preference for larger males. Very very slight. Larger males thus have a very very slight probability of leaving more offspring. The creature’s increasing size would be imperceptible during a human lifetime. How long would it take to reach elephant size? The surprising answer: just 60,000 years! An eyeblink of geological time. This would be considered “sudden” by normal evolutionary standards.**
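The compounding arithmetic behind that surprise, using purely illustrative figures (roughly 20 grams for a mouse, two tonnes for an elephant, one generation per year; these are my stand-in numbers, not Dawkins’s):

```python
# How much average body mass must grow per generation to go from
# mouse-sized to elephant-sized in 60,000 generations.
mouse_g, elephant_g, generations = 20, 2_000_000, 60_000
per_generation = (elephant_g / mouse_g) ** (1 / generations)
print(f"required growth per generation: {per_generation - 1:.4%}")  # about 0.02%
```

A size increase of a couple hundredths of a percent per generation would be utterly invisible to any observer, yet it compounds into a 100,000-fold change.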

Returning to vision, a favorite argument of anti-evolutionists is that such a system’s “irreducible complexity” could never have evolved by small steps — because an incomplete eye would be useless. Dawkins eviscerates this foolish argument. Lots of people in fact have visual systems that are incomplete or defective in various ways, some with only 5% of normal vision. But for them, 5% is far better than zero!

The first simple living things were all blind. But “in the country of the blind the one-eyed man is king.” Even just having some primitive light-sensitive cells would have conferred a survival and reproductive advantage, better enabling their possessors to find food and avoid becoming food. And such light detectors would have gradually improved, by many tiny steps, over eons; each making a creature more likely to reproduce.

Indeed, a vision system — any vision system at all — is so advantageous that virtually all animals evolved one, not copying each other, but along separate evolutionary paths, resulting in a wide array of varying solutions to the problem — including bat echolocation, utilizing principles so different from ours.

But none actually reflects optimized “intelligent” design. Not what a half-decent engineer or craftsman would have come up with. Instead, the evolution by tiny steps means that at each stage nature was constrained to work with what was already there; thus really (in computer lingo) a long sequence of “kludges.” For example, no rational designer would have routed our optic nerve fibers across the front of the retina, with the nerve exiting through it and creating a blind spot.

You might, if you still cling to an imaginary “designer,” ask her about that. And while you’re at it, ask why no third eye in the back of our heads?

(To be continued)

* Some blind humans are actually learning to employ echolocation much like bats, using tongue clicks.

** This is not to say evolution entails slow steady change. Dawkins addresses the “controversy” between evolutionary “gradualists” and “punctuationists” who hypothesize change in bursts. Their differences are smaller than the words imply. Gradualists recognize rates of change vary (with periods of stasis); punctuationists recognize that evolutionary leaps don’t occur overnight. Both are firmly in the Darwinian camp.

Greta Thunberg is wrong

October 1, 2019

Greta Thunberg, the 16-year-old Swedish climate warrior, berates the world (“How dare you?”) for pursuing a “fairy tale” of continued economic growth — putting money ahead of combating global warming. A previous local newspaper commentary hit every phrase of the litany: “species decimation, rainforest destruction . . . ocean acidification . . . fossil-fuel-guzzling, consumer-driven . . . wreaked havoc . . . blind to [the] long-term implication . . . driven by those who would profit . . . our mad, profligate  . . . warmongering . . . plasticization and chemical fertilization . . . failed to heed the wise admonition of our indigenous elders . . . .”

The litany of misanthropes hating their own species and especially their civilization.

Lookit. There’s no free lunch. Call it “raping the planet” if you like, but we could never have risen from the stone age without utilizing as fully as possible the natural resources available. And if you romanticize our pre-modern existence (“harmony with nature” and all), well, you’d probably be dead now, because most earlier people didn’t make thirty. And those short lives were nasty and brutish. There was no ibuprofen.

This grimness pretty much persisted until the Industrial Revolution. Only now, by putting resource utilization in high gear, could ordinary folks begin to live decently. People like that commentator fantasize giving it up. Or, more fantastical, our somehow still living decently without consuming the resources making it possible.

These are often the same voices bemoaning world poverty. Oblivious to how much poverty has actually declined — thanks to all the resource utilization they condemn. And to how their program would deny decent lives to the billion or so still in extreme poverty. Hating the idea of pursuing economic growth may be fine for those living in affluent comfort. Less so for the world’s poorest.

Note, as an example, the mention of “chemical fertilization.” This refers to what’s called the “green revolution” — revolutionizing agriculture to improve yields and combat hunger, especially in poorer nations. It’s been estimated this has saved a couple billion lives. And of course made a big dent in global poverty.

But isn’t “chemical fertilization,” and economic development more generally, bad for the environment? Certainly! Again, no free lunch. In particular, the climate change we’re hastening will, as Thunberg says, likely have awful future impacts. Yet bad as that is, it’s not actually humanity’s biggest challenge. The greater factors affecting human well-being will remain the age-old prosaic problems of poverty, disease, malnutrition, conflict, and ignorance. Economic growth helps us battle all those. We should not cut it back for the sake of climate. In fact, growing economic resources will help us deal with climate change too. It’s when countries are poor that they most abuse the environment; affluence improves environmental stewardship. And it’s poor countries who will suffer most from climate change, and will most need the resources provided by economic growth to cope with it.

Of course we must do everything reasonably possible to minimize resource extraction, environmental impacts, and the industrial carbon emissions that accelerate global warming. But “reasonably possible” means not at the expense of lower global living standards. Bear in mind that worldwide temperatures will continue to rise even if we eliminate carbon emissions totally (totally unrealistic, of course). Emission reductions can moderate warming only slightly. That tells us to focus less on emissions and more on preparing to adapt to higher temperatures. And more on studying geo-engineering possibilities for removing greenhouse gases from the atmosphere and otherwise re-cooling the planet. Yet most climate warriors actually oppose such efforts, instead obsessing exclusively on carbon reduction, in a misguided jihad against economic growth, as though to punish humanity for “raping the planet.”

Most greens are also dead set against nuclear power, imagining that renewables like solar and wind energy can fulfill all our needs. Talk about fairy tales. Modern nuclear power plants are very safe and emit no greenhouse gases. We cannot hope to bend down the curve of emissions without greatly expanded use of nuclear power. Radioactive waste is an issue. But do you think handling that presents a bigger challenge than to replace the bulk of existing power generation with renewables?

I don’t believe we’re a race of planet rapists. Our resource utilization and economic development has improved quality of life — the only thing that can ultimately matter. The great thing about our species, enabling us to be so spectacularly successful, is our ability to adapt and cope with what nature throws at us. Climate change and environmental degradation are huge challenges. But we can surmount them. Without self-flagellation.

Thinking like a caveman

September 18, 2019

 

What is it like to be a bat? That famous essay by philosopher Thomas Nagel keeps nagging at us. What is it like to be me? Of this I should have some idea. But why is being me like that? — how does it work? — are questions that really bug me.

Science knows a lot about how our neurons work. Those doings of billions of neurons, each with very limited, specific, understandable functions, join to create one’s personhood. A leap we’re only beginning to understand.

Steven Mithen’s book, The Prehistory of the Mind, takes the problem back a step, asking how our minds came to exist in the first place. It’s a highly interesting inquiry.

Of course the simple answer is evolution. Life forms have natural variability, and variations that prove more successful in adapting to changing environments proliferate. This builds over eons. Our minds were a very successful adaptation.

But they could not have sprung up all at once. Doesn’t work that way. So by what steps did they evolve? The question is problematical given our difficulty in reverse-engineering the end product. But Mithen’s analysis actually helps toward such understanding.

He uses two metaphors to describe what our more primitive, precursor minds were like. One is a Swiss Army knife. It’s a tool that’s really a tool kit. Leaving aside for the moment the elusive concept of “mind,” all living things have the equivalent of Swiss Army knives to guide their behavior in various separate domains. A cat, for example, has a program in its brain for jumping up to a ledge; another for catching a mouse; and so forth. The key point is that each is a separate tool, used separately; two or more can’t be combined.

Which brings in Mithen’s other metaphor for the early human mind: a cathedral. Within it, there are various chapels, each containing one of the Swiss Army knife tools, each one a brain program for dealing with a specific type of challenge. The main ones Mithen identifies are a grasp of basic physics in connection with tool-making and the like; a feel for the natural world; one for social interaction; and language arts, related thereto.

This recalls Howard Gardner’s concept of multiple intelligences. Departing from an idea that “intelligence” is a single capability that people have more or less of, Gardner posited numerous diverse particularized capabilities, such as interpersonal skills, musical, spatial-visual, etc. A person can be strong in one and weak in another.

Mithen agrees, yet nevertheless also hypothesizes what he calls “general intelligence.” By this he means “a suite of general-purpose learning rules, such as those for learning associations between events.” Here’s where his metaphors bite. The Swiss Army knife doesn’t have a general intelligence tool. That’s why a cat is extremely good at mousing but lacks a comprehensive viewpoint on its situation.

In Mithen’s cathedral, however, there is general intelligence, situated right in the central nave. However, the chapels, each containing their specific tools, are closed off from it and from each other. The toolmaking program doesn’t communicate with the social interaction program; none of them communicates with the general intelligence.

Does this seem weird? Not at all. Mithen invokes an analogy to driving while conversing with a passenger. Two wholly separate competences are operating, but sealed off from each other, neither impinging on the other.

This, Mithen posits, was indeed totally the situation of early humans (like Neanderthals). Our own species arose something like 100,000 years ago, but for around half that time, it seems, we too had minds like Neanderthals, like Mithen’s compartmentalized cathedral, lacking pathways for the various competences to talk to each other. He describes a “rolling” sort of consciousness that could go from one sphere to another, but was in something of a blur about seeing any kind of big picture.

Now, if you were intelligently building this cathedral, you wouldn’t do it this way. But evolution is not “intelligent design.” It has to work with what developed previously. And what it started with was much like the Swiss Army knife, with a bunch of wholly separate competences that each evolved independently.

That’s good enough for most living things, able to survive and reproduce without a “general intelligence.” Evolving the latter was something of a fluke for humans. (A few other creatures may have something like it.)

The next step was to integrate the whole tool kit; to open the doors of all the chapels leading into the central nave. The difference was that while a Neanderthal could be extremely skilled at making a stone tool, while he was doing it he really couldn’t ponder about it in the context of his whole life. We can. Mithen calls this “cognitive fluidity.”

The way I like to put it, the essence of our consciousness is that we don’t just have thoughts, we can think about our thoughts. That’s the integration Mithen talks about — a whole added layer of cognition. And it’s that layering, that thinking about our thinking, that gives us a sense of self, more powerfully than any other creature.

I’ve previously written too of how the mind makes sense of incoming information by creating representations. Like pictures in the mind, often using metaphors. And here too there’s layering; we make representations of representations; representations of ourselves perceiving those representations. That indeed is how we do perceive — and think about what we perceive. And we make representations of concepts and beliefs.

All this evolved because it was adaptive — enabling its possessors to better surmount the challenges of their environment. But this cognitive fluidity, Mithen says, is also at the heart of art, religion, science — all of human culture.

Once we achieved this capability, it blew the doors off the cathedral, and it was off to the races.

“Science for Heretics” — A nihilistic view of science

August 10, 2019

Physicist Barrie Condon has written Science for Heretics: Why so much of science is wrong. Basically arguing that science cannot really understand the world, and maybe shouldn’t even try. The book baffles me.

It’s full of sloppy mistakes (many misspelled names). It’s addressed to laypeople and does not read like a serious science book. Some seems downright crackpot. Yet, for all that, the author shows remarkably deep knowledge, understanding, and even insight into the scientific concepts addressed, often explaining them quite lucidly in plain English. Some of his critiques of science are well worth absorbing. And, rather than the subtitle’s “science is wrong,” the book is really more a tour through all the questions it hasn’t yet totally answered.

A good example is the brain. We actually know a lot about its workings. Yet how they result in consciousness is a much harder problem.

Condon’s first chapter is “Numbers Shmumbers,” about the importance of mathematics in science. His premise is that math is divorced from reality and thereby leads science into black holes of absurdity, like . . . well, black holes.* He starts with 1+1=? — whose real world answer, he says, is never 2! Because that answer assumes each “1” is identical to the other, while in reality no two things are ever truly identical. For Condon, this blows up mathematics and all the science incorporating it.

But identicality is a red herring. It’s perfectly valid to say I have two books, even if they’re very different, because “books” is a category. One book plus one book equals two books.

Similarly, Condon says that in the real world no triangle’s angles equal 180 degrees because you can never make perfectly straight lines. Nor can any lines be truly parallel. And he has fun mocking the concepts of zero and infinity.

However, these are all concepts. That you can’t actually draw a perfect triangle doesn’t void the concept. This raises the age-old question (which Condon nibbles at) of whether mathematics is something “out there” as part of the fabric of reality, or just something we cooked up in our minds. My answer: we couldn’t very well have invented a mathematics with 179 degree triangles. The 180 degrees (on flat surfaces!) is an aspect of reality — which we’ve discovered.

A key theme of the book is that reality is complex and messy, so the neat predictions of scientific theory often fail. A simplified high school picture may indeed be too simple or even wrong (like visualizing an atom resembling the solar system). But this doesn’t negate our efforts to understand reality, or the value of what we do understand.

Modern scientific concepts do, as Condon argues, often seem to violate common sense. Black holes for example. But the evidence of their reality mounts. Common sense sees a table as a solid object, but we know from science that it’s actually almost entirely empty space. In fact, the more deeply we peer into the atomic and even sub-atomic realms, we never do get to anything solid.

Condon talks about chaos theory, and how it messes with making accurate predictions about the behavior of any system. Weather is a prime example. Because the influencing factors are so complex that a tiny change in starting conditions can mean a big difference down the line. Fair enough. But then — exemplifying what’s wrong with this book — he says of chaos theory, “[t]his new, more humble awareness marked a huge retreat by science. It clearly signaled its inherent limitations.” Not so! Chaos theory was not a “retreat” but an advance, carrying to a new and deeper level our understanding of reality. (I’ve written about chaos theory and its implications, very relevantly to Condon’s book: https://rationaloptimist.wordpress.com/2017/01/04/chaos-fractals-and-the-dripping-faucet/)
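A standard textbook illustration of that sensitivity (the logistic map; my example, not Condon’s): two runs whose starting values differ by one part in a billion track each other for a while, then diverge completely.

```python
# Logistic map x -> r*x*(1-x), a simple system that behaves chaotically at r = 4.
def logistic(x0, r=4.0, steps=40):
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1 - xs[-1]))
    return xs

a = logistic(0.2)
b = logistic(0.2 + 1e-9)   # starting condition shifted by one part in a billion
for t in (0, 10, 20, 30, 40):
    print(t, f"{a[t]:.6f}", f"{b[t]:.6f}")   # near-identical early on, unrecognizably different by the end
```

The divergence isn’t a rounding bug; it’s the defining feature of chaotic systems.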

After reading partway, I was asking myself, what’s Condon really getting at? He’s a very knowledgeable scientist. But if science is as futile as he seems to argue — then what? I suspected Condon might have gone religious, so I flipped to the last chapter, expecting to find a deity or some other sort of mysticism. But no. Condon has no truck with such stuff either.

He does conclude by saying “we need to profoundly re-assess how we look at the universe,” and “who knows what profound insights may be revealed when we remove [science’s] blinkers.” But Condon himself offers no such insights. Instead (on page 55) he says simply that “we are incapable of comprehending the universe” and “there are no fundamental laws underlying the universe to begin with. The universe just is the way it is.” (My emphasis)

No laws? Newton’s inverse square law of gravitation is a pretty good descriptor of how celestial bodies actually behave. A Condon might say it doesn’t exactly explain the orbit of Mercury, which shows how simple laws can fail to model complex reality. But Einstein’s theory was a refinement to Newton’s — and it did explain Mercury’s orbit.
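For reference, that law says the attractive force between two masses m1 and m2 separated by distance r is

$$F = \frac{G\,m_{1} m_{2}}{r^{2}},$$

and the small deviation of Mercury’s orbit from what it predicts is exactly what Einstein’s refinement accounted for.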

So do we now know everything about gravitation? Condon makes much of how galaxies don’t obey our current understanding, if you only count visible matter; so science postulates invisible “dark matter” to fix this. Which Condon derides as a huge fudge factor. And I’m actually a heretic myself on this, having written about an alternate theory that would slightly tweak the laws of gravitation making “dark matter” unnecessary (https://rationaloptimist.wordpress.com/2012/07/23/there-is-no-dark-matter/). But here is the real point. We may not yet have gravitation all figured out. But that doesn’t mean the universe is lawless.

Meantime, you might wonder how, if our scientific understandings were not pretty darn good, computers could work and planes could fly. Condon responds by saying that actually, “our technology rarely depend[s] on scientific theory.” Rather, it’s just engineering. “Engineers have learnt from observation and experience,” and “[u]nburdened by theory they were . . . simply observing regularities in the behavior of the universe.”**

And how, pray tell, do “regularities in the behavior of the universe” differ from laws? In fact, a confusion runs through the book between science qua “theory” (Condon’s bete noire) and science qua experimentation revealing how nature behaves. And what does it mean to say, “the universe just is the way it is?” That explains nothing.

But it can be the very first step in a rational process of understanding it. Recognizing that it is a certain way, rather than some other way (or lawless). That there must be reasons for its being the way it is. Reasons we can figure out. Those reasons are fundamental laws. That’s science.

And, contrary to the thrust of Condon’s book, we have gained a tremendous amount of understanding. The very fact that he could write it — after all, chock full of science — and pose all the kinds of questions he does — testifies to that understanding. Quantum mechanics, for example, which Condon has a field day poking fun at, does pose huge puzzles, and some of our theories may indeed need refinement. Yet quantum mechanics has opened for us a window into reality, at a very deep level, that Aristotle or Eratosthenes could not even have imagined.

Condon strangely never mentions Thomas Kuhn, whose seminal The Structure of Scientific Revolutions characterized scientific theories as paradigms, a new one competing against an old one, and until one prevails there’s no scientific way to choose. You might thus see no reason to believe anything science says, because it can change. But modern science doesn’t typically lurch from one theory to a radically opposing one. Kuhn’s work was triggered by realizing Aristotle’s physics was not a step toward modern theories but totally wrong. However, Aristotle wasn’t a scientist at all, did no experimentation; he was an armchair thinker. Science is in fact a process of homing in ever closer to the truth through interrogating reality.

Nor does Condon discuss Karl Popper’s idea of science progressing by “falsification.” Certitude about truth may be elusive, but we can discover what’s not true. A thousand white swans don’t prove all swans are white, but one black swan disproves it.

And as science thusly progresses, it doesn’t mean we’ve been fools or deluded before. Newton said that if he saw farther, it’s because he stood on the shoulders of giants. And what Newton revealed about motion and gravity was not overturned by Einstein but instead refined. Newton wasn’t wrong. And those who imagine Darwinian evolution is “just a theory” that future science may discard will wait in vain.

Unfortunately, such people will leap upon Condon’s book as confirmation for their seeing science (but not the Bible) as fallible.*** Thinking that because science doesn’t know everything, they’re free to disregard it altogether, substituting nonsense nobody could ever possibly know.

Mark Twain defined faith as believing what you know ain’t so. Science is not a “faith.” Nor even a matter of “belief.” It’s the means for knowing.

*But later he spends several pages on the supposed danger of the Large Hadron Collider creating black holes (that Condon doesn’t believe in) and destroying the world. Which obviously didn’t happen.

**But Condon says (misplaced) reliance on theory is increasingly superseding engineering know-how, with bad results, citing disasters like the Challenger with its O-rings. Condon’s premise strikes me as nonsense; and out of literally zillions of undertakings, zero disasters would be miraculous.

***While Condon rejects “intelligent design,” he speculates that Darwinian natural selection isn’t the whole story — without having any idea what the rest might be.

Fantasyland: How America Went Haywire

July 3, 2019

(A condensed version of my June 18 book review talk)

In this 2017 book Kurt Andersen is very retro; he believes in truth, reason, science, and facts. But he sees today’s Americans losing their grip on those. Andersen traces things back to the Protestant Reformation, which preached that each person decides what to believe.

Religious zealotry has repeatedly afflicted America. But in the early Twentieth Century that, Andersen says, seemed to be fizzling out. Christian fundamentalism was seen as something of a joke, culminating with the 1925 Scopes “monkey” trial. But evangelicals have made a roaring comeback. In fact, American Christians today are more likely than ever to be fundamentalist, and fundamentalism has become more extreme. Fewer Christians now accept evolution, and more insist on biblical literalism.

Other fantasy beliefs have also proliferated. Why? Andersen discusses several factors.

First he casts religion itself as a gateway drug. Such a suspension of critical faculties warps one’s entire relationship with reality. So it’s no coincidence that the strongly religious are often the same people who indulge in a host of other magical beliefs. The correlation is not perfect. Some religious Americans have sensible views about evolution, climate change, even Trump — and some atheists are wacky about vaccination and GM foods. Nevertheless, there’s a basic synergy between religious and other delusions.

Andersen doesn’t really address tribalism, the us-against-them mentality. Partisan beliefs are shaped by one’s chosen team. Climate change denial didn’t become prevalent on the right until Al Gore made climate a left-wing cause. Some on the left imagine Venezuela’s Maduro regime gets a bum rap.

Andersen meantime also says popular culture blurs the line between reality and fantasy, with pervasive entertainment habituating us to a suspension of disbelief. I actually think this point is somewhat overdone. People understand the concept of fiction. The problem is with the concept of reality.

Then there’s conspiracy thinking. Rob Brotherton’s book Suspicious Minds: Why We Believe Conspiracy Theories says we’re innately primed for them, because in our evolution, pattern recognition was a key survival skill. That means connecting dots. We tend to do that, even if the connections aren’t real.

Another big factor, Andersen thinks, was the “anything goes” 1960s counterculture, partly a revolt against the confines of rationality. Then there’s post-modernist relativism, considering truth itself an invalid concept. Some even insist that hewing to verifiable facts, the laws of physics, biological science, and rationality in general, is for chumps. Is in fact an impoverished way of thinking, keeping us from seeing some sort of deeper truth. As if these crackpots are the ones who see it.

Then along came the internet. “Before,” writes Andersen, “cockamamie ideas and outright falsehoods could not spread nearly as fast or widely, so it was much easier for reason and reasonableness to prevail.” Now people slurp up wacky stuff from websites, talk radio, and Facebook’s so-called “News Feed” — really a garbage feed.

Andersen considers “New Age” spirituality a new form of American religion. He calls Oprah its Pope, spreading the screwball messages of a parade of hucksters, like Eckhart Tolle, and the “alternative medicine” promoter Doctor Oz. Among these so-called therapies are homeopathy, acupuncture, aromatherapy, reiki, etc. Read Wikipedia’s scathing article about such dangerous foolishness. But many other mainstream gatekeepers have capitulated. News media report anti-scientific nonsense with a tone of neutrality if not acceptance. Even the U.S. government now has an agency promoting what’s euphemized as “Complementary and Integrative Health”; in other words, quackery.

Guns are a particular focus of fantasy belief. Like the “good guy with a gun.” Who’s actually less a threat to the bad guy than to himself, the police, and innocent bystanders. Guns kept to protect people’s families mostly wind up shooting family members. Then there’s the fantasy of guns to resist government tyranny. As if they’d defeat the U.S. military.

Of course Andersen addresses UFO belief. A surprising number of Americans report being abducted by aliens, taken up into a spaceship to undergo a proctology exam. Considering that the nearest star is literally 24 trillion miles away, would aliens travel that far just to study human assholes?

A particularly disturbing chapter concerns the 1980s Satanic panic. It began with so-called “recovered memory syndrome.” Therapists pushing patients to dredge up supposedly repressed memories of childhood sexual abuse. (Should have been called false memory syndrome.) Meantime child abductions became a vastly overblown fear. Then it all got linked to Satanic cults, with children allegedly subjected to bizarre and gruesome sexual rituals. This new witch hunt culminated with the McMartin Preschool trial. Before the madness passed, scores of innocent people got long prison terms.

A book by Tom Nichols, The Death of Expertise, showed how increasing formal education doesn’t actually translate into more knowledge (let alone wisdom or critical thinking). Education often leads people to overrate their knowledge, freeing them to reject conventional understandings, like evolution and medical science. Thus the anti-vaccine insanity.

Another book, Susan Jacoby’s The Age of American Unreason, focuses on our culture’s anti-intellectual strain. Too much education, some people think, makes you an egghead. And undermines religious faith. Yet Jacoby also notes how 19th Century Americans would travel long distances to hear lecturers like Robert Ingersoll, the great atheist, and Huxley the evolutionist. Jacoby also vaunts 20th century “Middlebrow” American culture, with “an affinity for books; the desire to understand science; a strong dose of rationalism; above all, a regard for facts.”

Today in contrast there’s an epidemic of confirmation bias: people embracing stuff that supports pre-existing beliefs, and shutting out contrary information. Smarter folks are actually better at confabulating rationalizations for that. And how does one make sense of the world and of new information? Ideally by integrating it with, and testing it against, your body of prior knowledge and understanding. But many Americans come short there — blank slates upon which rubbish sticks equally well as truth.

I also think reality used to be more harsh and unforgiving. To get through life you needed a firm grip on reality. That has loosened. The secure, cushy lives given us by modernity — by, indeed, the deployment of supreme rationality in the age of science — free people to turn their backs on that sort of rationality and indulge in fantasy.

Andersen’s subtitle is How America Went Haywire. As if that applies to America as a whole. But we are an increasingly divided nation. Riven between those whose faith has become more extreme and those moving in the opposite direction; which also drives political polarization. So it’s not all Americans we’re talking about.

Still, the haywire folks are big shapers of our culture. And there are real costs. Anti-vaccine hysteria undermines public health. The 1980s child threat panic ruined lives. Gun madness kills many thousands. And of course they’ve given us a haywire president.

Yet is it the end of the world? Most Americans go about their daily lives, do their jobs, in a largely rational pragmatic way (utilizing all the technology the Enlightenment has given). Obeying laws, being good neighbors, good members of society. Kind, generous, sincere, ethical people. America is still, in the grand sweep of human history, an oasis of order and reasonableness.

Meantime religious faith is collapsing throughout the advanced world, and even in America religion, for all its seeming ascendancy, is becoming more hysterical because it is losing. The younger you are, the less religious you are likely to be. And there are signs that evangelical Christianity is being hurt by its politicization, especially its support for a major moral monster.

I continue to believe in human progress. That people are capable of rationality, that in the big picture rationality has been advancing, and it must ultimately prevail. That finally we will, in the words of the Bible itself, put childish things away.