Archive for the ‘Philosophy’ Category

Bible Babble

September 17, 2016

The Massachusetts Bible Society is conducting “The Great Bible Experiment” – discussion forums in “America’s least Bible-minded cities.” Strangely, Albany tops that list (maybe it’s all the students; Boston comes second). A radio blurb said a humanist would be on the panel. So I went.

My wife wouldn’t come, expecting just a sales pitch for Bibles. Actually none were on sale, and the event seemed more or less sincerely aimed at dialog.

I was first handed a questionnaire, asking me to pick six words from a long list to reflect my view of the Bible. Most words were positive, yet I was able to find six: words like dangerous, mis-used, scary, weird.

Attendees were encouraged to submit questions. After one panelist, Father Warren Savage, an African-American Catholic priest, said he believed the Bible is all about love, my question was, “Why did God command the Israelites to kill every man, woman, and child in the cities of Canaan?” The moderator combined it with a similar question citing the story of Abraham and Isaac.

Panelist Tom Krattenmaker responded that he simply disregards the Bible’s less appetizing parts. He was the advertised “humanist,” actually with Yale Divinity School, and author of Confessions of a Secular Jesus Follower. He sounded like a Jeffersonian – Jefferson cut up his Bible, making his own book containing only what he deemed Jesus’s words of wisdom, throwing away all the rest. Krattenmaker said he “does not subscribe to the factual existence of God.”

Rev. Anne Robertson, MBS’s head, was an articulate and engaging speaker. She said she’d started as a “God said it, I believe it, that settles it” type, a real Biblical literalist, but she’d repented. She spoke of how hard it was for her to first utter “the four words” – “I might be wrong.” Robertson stressed the difference between fact and truth, saying the Bible is not a book of facts, yet conveys truths. And she quoted another Bible bit: “we see through a glass darkly.”

The Abraham and Isaac story, Robertson argued, must be viewed in historical context: it’s an extremely old story dating from a time when child sacrifice was common. And the important thing about Abraham-and-Isaac is how its outcome differs from that cultural paradigm.*

Another question was why the Bible is losing sway. Father Savage answered, “the hypocrisy of Christians who don’t practice what the Bible teaches.” Krattenmaker said he inhabits a culture wherein gays are seen as just ordinary humans, and when Bible-thumpers go around crying “abomination!” it makes the Bible “radioactive.”

As for living Biblical teachings, my second question asked: “The Bible teaches I may own slaves, as long as they’re from foreign countries. Does this include Canadians?” But time ran out before that question could be reached.

And I refrained from submitting a further question: if you think the Bible is somehow divinely inspired, how do you know? How could anyone know? (“Faith” can’t be the answer, merely begging the question, what’s the basis for the faith?)

*While the historicity of child sacrifice in the ancient Near East is widely accepted, a lot of that is traceable to what the Bible says – hardly an objective source. Even distant Carthage’s famous child sacrifice comes to us from its Roman conquerors – not an unbiased source either.

The animal that came in from the cold (My Labor Day tribute to work)

September 5, 2016

There’s a cynical misanthropic mentality seeing humanity as a curse upon the planet, and modern life as a snakepit of psychic malaise. I don’t buy it.

Recently I traveled from Albany to New York and all along the way was struck not just by how humankind has thoroughly transformed the landscape, but by the stupendous amount of work it took. Whether it was the roads with all their vehicles, all the buildings and other infrastructure, the farmlands with endless rows of cultivation – how many man-hours of toil!

And did you ever stop to ponder how much metal we use, everywhere? And where it comes from – all the mining and milling and processing and fabrication? And don’t forget what it took for people to figure out how to do all this. Likewise all the buildings – every brick had to be manufactured, transported, cemented. Again, the colossal amount of sheer effort boggles the mind.

And what’s it all for? Quite simply, so we can live with less pain and more comfort and reward. We’re the animal that came in from the cold. We arrived on this planet with nothing, literally naked. Everything we’ve done, we’ve done ourselves. It wasn’t easy. To me it’s a veritable miracle.

This is Man’s fundamental nature. Believing (despite all religion) not that things are up to some God, or fate, but up to us. Not to accept, but to strive. Not to submit, but to prevail.

Faster: the pace of modern life

August 27, 2016

I picked up James Gleick’s book Faster and read it slowly – something it says people rarely do anymore.

The subtitle is The Acceleration of Just About Everything. I was hoping for some insight into the human condition as affected by modernity; our lives are radically different from what we evolved for. But the book reads more like a Seinfeld monologue than a sociology essay – a string of quickie observations, never connected into some over-arching theory or viewpoint. I was reminded of Churchill saying of a dessert: “This pudding has no theme.”

Yes, in many ways, life has gotten faster. We all know that. But what does it really mean for us? Gleick seems unsure, ambivalent – the book’s tone is bemusement.

He even contradicts himself at times. One chapter (“Short Term Memory”) starts, “As the flow of information accelerates, we may have trouble keeping track of it all.” Gleick explains that the media on which information is recorded quickly become obsolete. Tons of data are on floppy disks and microfilms – but can you find the machines today to read them? Et cetera. This is indeed a real problem. Yet then Gleick says: “amnesia doesn’t seem to be [our] worst problem. This new being just can’t throw anything away . . . It has forgotten that some baggage is better left behind. Homo Sapiens has become a packrat.”

But perhaps such contradictoriness really is the essence of this book, in exploring our modern relationship with time. Gleick returns repeatedly to the concept of “saving time,” and how slippery it is. Talking about the genre of self-help books on time-saving, he says this (his emphasis):

“[The authors] reveal confusion about what it means to save time. They flip back and forth between advertising a faster and a slower life. They offer more time, in their titles and blurbs, but they are surely not proposing to extend the 1,440-minute day, so by ‘more’ do they mean fuller or freer time? Is time saved when we manage to leave it empty, or when we stuff it with multiple activities, useful or pleasant? . . . when we seize it away from a low-satisfaction activity, like ironing clothes, and turn it over to a high-satisfaction activity, like listening to music? What if we do both at once? If you can choose between a thirty minute train ride, during which you can read, and a twenty minute drive, during which you cannot, does the drive save ten minutes? . . . What if you can listen to [an] audiotape . . . ? Are you saving time, or employing time that you have saved elsewhere . . . ?”

But Gleick doesn’t really philosophize about the nature of time. In physics, it is indeed a tricky and elusive concept. There seem to be fundamental particles of matter, and maybe of space, but not of time; no smallest unit limits how finely time can be divided. And while we all think we know what time’s passage is, we actually don’t experience it as a sequence of moments; “living in the moment” is impossible because as soon as a moment occurs it’s already in the past. The “now” sandwiched between anticipation and retrospect never actually exists as something we can experience.

Time is the one thing which, once lost, can never be replaced. That might not matter much once we achieve immortality (or near-immortality); but as long as we know our allotment is limited, we value every minute. While people may have a lot of mis-judged values, the quest to save time is not one of them.

In my coin business, I used to mail out price lists (very laborious); then take down all the orders on the phone (even more laborious). Now I post the list on the web, and print out the orders. The time saved is great.

One point the book makes is that time has become a commodity, and a lot of our economy concerns its allocation. A business often tries to get customers to pay not just with money but with their time – “some assembly required” – thereby relieving the business of some costs. Buffet restaurants are another example, the customer doing some of the work theretofore done by restaurant staff. There’s a new buffet concept in Japan – instead of “all you can eat” for a fixed price, or charging by the ounce, this eatery charges by the minute. Diners punch a time clock, then rush to the buffet, and wolf down their food as fast as they can. Conversation among dining companions is a casualty (though they can eat with eyes glued to phones). The advantage to the restaurant is obvious – without gourmands lingering over their repasts, many more of them can be served. Yet the scheme is quite popular, Gleick reports; Tokyo residents wait in line for the opening gun.

Speaking of eyes glued to phones, Gleick quotes economist Herbert Stein: “It is the way of keeping contact with someone, anyone, who will reassure you that you are not alone . . . deep down you are checking on your existence. I rarely see people using cellphones on the sidewalk when they are in the company of other people.”

Reading this made me check the book’s publication date: 1999. Seems like ancient history now. Today many folks are fixated on their phones 24/7 – oblivious to people around them.

This is, again, a mode of existence radically different from our evolutionary antecedents. Some see it as dystopian; yet its extreme popularity tells us that it satisfies human needs in a very deep way. People always had a profound yearning for what their phones provide – but until recently, they just didn’t know it.

Mind and – or versus – brain – and “neuropsychoanalysis”

August 18, 2016

This book was a gift from my wife: In the Mind Fields by Casey Schwartz. I’ve written before how the concept of the self bugs me. I keep pondering: what, really, is this feeling that I’m me? David Hume identified why this is so hard – it’s using the self to look for itself.

The book is subtitled Exploring the New Science of Neuropsychoanalysis. But it wasn’t persuasive that there even is such a thing.

It’s a tale of two disciplines. Psychoanalysis, the whole Freudian thing, tries to demystify the workings of the mind. Neuroscience tries to understand the workings of the brain. It’s interested in figuring out how the brain creates the mind. But once you have a mind, the thoughts it produces are no concern of neuroscience. That’s psychology, the province of psychoanalysis. And, in turn, psychoanalysis isn’t much interested in the nuts and bolts of brain function that neuroscience explores.

Indeed, as the book says, psychoanalysts are so fixated on the mind that they tend to forget it’s produced by the brain. They’re often actually somewhat hostile to neuroscience, seeing it as aridly divorced from the reality of human experience, as lived through the psychology they are concerned with. While neuroscientists tend to look down on psychoanalysis as unscientific, non-rigorous, subjective psychobabble.

Neuropsychoanalysis (as the name implies) seeks to bridge this chasm, by bringing the findings of neuroscience into the practice of psychoanalysis. However, while its leading prophet, Mark Solms, does use the word, the book left me unclear how, if at all, this marriage actually works in practice.

Eventually, the author comes around to focusing intensively on one case: Harry, and his psychoanalyst, David Silvers. A normal, athletic man, Harry had a stroke in his thirties that partly crippled him and left him aphasic – i.e., largely speechless. (He fully understood language, but couldn’t put thoughts into words.) Unable to continue his tutoring business, Harry’s life became a cycle of medical appointments.

Now, this was quintessentially a neuroscience case. Harry’s problem was not psychological; his brain was physically damaged. Of course, he did have some psychological difficulty adjusting to his loss and new circumstances but that was certainly not mental illness. At one point, though, Silvers labels him “depressed.” That diagnosis seemed superciliously offhand. Depression is a particular pathology, apparently caused by brain chemistry effects. Harry was not “depressed,” he was responding to a rotten break, as any normal person would. If anything, he seemed pretty cheerful under the circumstances.

So what was Harry doing in psychoanalysis at all? It works by talking through issues with the analyst. But the supreme irony here is that Harry’s problem was his inability to talk! He did manage to communicate, somewhat, sort of. But Silvers acknowledged that his sessions with Harry did not resemble his usual interactions with patients.

The book flap states that Harry “nevertheless benefits from Silvers’s analytic technique.” This assertion is key to the whole book. Yet I could not see how Harry benefited, therapeutically. He and Silvers did establish a human bond, which Harry seemed to value. But Silvers’s psychoanalysis did nothing to improve his situation. In fact, Harry was actually in worse shape at the end.

Nor could I see how Silvers’s efforts could be labeled “neuropsychoanalysis.” He had no neuroscience training, and nowhere did he appear to be using neuroscientific insights to help Harry. This evokes the old saw, “if your only tool is a hammer, every problem looks like a nail.” Silvers’s psychoanalytic toolkit was simply mismatched to Harry’s case.

Freud, who figures prominently throughout this book, had a lot to say about the self and its behavior – some of it wrong, though he himself would have acknowledged the tentativeness of his theories – but he had no clue what makes a self. Someday neuroscience may crack this very hard problem. Then maybe I can finally know who and what I am.

The Shakers: happiness in an ant colony?

June 27, 2016
Round barn at Hancock Shaker Village


My humanist group recently toured Hancock Shaker Village, in Massachusetts. The Shakers were a religious sect that set up communes of a sort, preaching equality and, famously, celibacy. They pretty much died out. No surprise.

Unlike some sects (notably the Amish), the Shakers loved new technology and were often clever in applying it; they were efficiency freaks. They lived dormitory-style, men and women separate. However, during the work day, they intermingled, though no touching was allowed. They ate together – but there was no talking either. A family joining the sect would be separated.

The attempt to write sexuality out of human life has many antecedents. At least the Shakers did not go as far as Russia’s Skoptsy, a religious sect whose answer for controlling sexuality was castration. But sex being dirty is a big theme in many religious contexts. It’s partly down to the abominable story of Adam and Eve, who were “pure,” and didn’t know from sex, until they “sinned” and started doing it – staining not only themselves but all their descendants, until Jesus got himself tortured to death to expiate the “sin.” Yet not even that sanitized sex for future generations.

With the Shakers, it may really have been rooted in their founder Ann Lee’s bad experience with marriage and procreation. She was not the only 18th century woman who felt that way, and that women would be better off free of it all. But that notion was very politically incorrect – better to cloak celibacy in a mantle of religious justification. So the Shakers were told celibacy enabled them to get closer to God. Or something like that.

Anyhow, one can see why women might buy into this. Less so for men. And indeed, over time, more men than women abandoned Shakerdom, which became mostly female.

But not only was celibacy a fundamental denial of human nature; so was the equality fetish. As our tour guide explained, that would be attractive to people who were not being treated equally in the outside world. However, the thirst for status is deeply rooted in the human psyche (by evolution – higher status meant more mating opportunities, so genes for status-seeking proliferated). And Shakerdom was down on the whole idea of self-actualization. Everything you did was supposed to be for the good of the community, and to please God – not yourself.

It all sounded to me like living in an ant colony. So why would people do that voluntarily? Only at gunpoint did people join communes in Soviet Russia or China. And even there sex was allowed.

It helps to remember that the concept of happiness is a modern invention. People in earlier times did not think that way. The point of life was not to be happy, but just to get through it. That was a hard enough challenge. (And entering a Shaker commune freed you of worry over your next meal.)

Of course people always craved pleasure and shunned suffering. Only a robot wouldn’t. But what differed was how they thought about it – indeed, that mostly they didn’t. Many today seem to obsessively measure their happiness temperature. Doing so would never have occurred to our 18th century forebears. Moreover, like the Shakers, most people were brainwashed into the paradigm that whatever they did was to please not themselves but God. Even the most rapacious would strive to rationalize that he was pleasing God. Fear of Hell was very real.

Hancock Shaker Village was enjoyable to visit. But I was glad to return to modern life.

Something horrible is happening: reading the obituary page

May 1, 2016

Something horrible is happening. Dozens of local people die every day. It’s a holocaust.

I read the obituary page, and feel bad for everyone there. What’s happened to them is the worst thing that can happen to anyone. (And someday it will happen to me.)

It’s gotten worse since the local paper went to full color printing. Now the people pictured in obituaries seem more real to me.

Dying at, like, 83, is uninteresting. But I’m always drawn to those listing younger ages. “Passed away suddenly,” “died at home,” etc. – it makes me wonder what could have happened. It’s a reminder of life’s fragility. Though actually such wording – especially, “died unexpectedly” – can be a euphemism for suicide. Tragic how common that is.

Speaking of euphemism, of course most obituaries avoid words like “died.” Some read as though the person merely moved away – to a better neighborhood, at that.

What I like is obituaries with high ages. “Sally Jones, 103.” I say to myself, way to go, old Sal! Made it to 103! It gives me hope. And for centenarians I’ll glance over the details, to see what a person did in such a long life. It seems that high achievers in the age department are often high achievers in other ways.

One recent obit was for a Vera Lister, 100. I read it. Said she was a “homemaker for most of her life.” Zzzz. But also that, in the British navy in WWII, she participated in breaking the German Enigma code. Holy smoke!

There are some amazing people among us, and we don’t always know it. One local acquaintance, the most unassuming of men, I recently learned worked on the Manhattan Project.

Of course, a big reason for checking the obits is to look for names I know. I’m not very social, yet it’s amazing how many folks one has encountered in half a century in Albany. Seeing someone on that page can be a shocker. Not long ago, a guy I knew from work; younger than me; a lively fellow, in rude health when I’d seen him just shortly before. Died in some stupid accident. Another memento mori.

Sometimes merely the age is a shocker. Just saw the obit of a young feller I once knew slightly. He was eighty. How could that be? Time gets away from us.

"Hap" Hazzard


Yet the obituary page – occasionally – offers some yuks too. One recently made me laugh out loud. Guy’s name was Harold Hazzard. The obit included his nickname: Harold “Hap” Hazzard. He must have had a sense of humor.

But this holocaust must stop. And we’re working on it. This is what medical science is ultimately all about. It’s not enough to cure illness when people must die in the end anyway. But aging and death too are medical problems. A key factor is telomeres, little extensions on the ends of chromosomes. When cells divide, telomeres get shorter. And, when you’re out of telomeres – you’re out.
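The telomere mechanism lends itself to a toy sketch (every number here is invented for illustration; real telomere biology is far messier):

```python
# Toy model of the telomere mechanism: each cell division trims the
# telomere, and when it's exhausted the cell line stops dividing.
# All quantities are arbitrary, purely illustrative.

def divisions_until_senescence(telomere_length: int, loss_per_division: int) -> int:
    """Count how many divisions occur before the telomere runs out."""
    divisions = 0
    while telomere_length >= loss_per_division:
        telomere_length -= loss_per_division
        divisions += 1
    return divisions

print(divisions_until_senescence(10_000, 100))  # -> 100 divisions, then "you're out"
```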

There’s an enzyme called telomerase that can replenish them. Unfortunately, ramping up telomerase also tends to promote cancer. But maybe we can fix that.

And someday, you’ll turn to the obituary page, and it will say: no deaths to report.

Could a machine ever feel emotion? – David Gelernter

April 15, 2016

I recently heard a talk by Yale Professor David Gelernter, notable guru of computer science and artificial intelligence.* His new book is The Tides of Mind. That’s his metaphor for human consciousness cycling between varying states: early in the day we’re full of energy, seeing the world differently from later, when attention shifts from the external to the internal realm, and insistence of memory crowds out use of reason. After reaching a mid-afternoon low point, one cycles back upward somewhat before cycling back down again toward sleep. (I’ve always felt sharpest, doing my best work, in the morning; I’m drafting this at 5 AM in an airport; in mid-afternoon I’m soporific.)

Gelernter spoke of his project to emulate these workings of the mind in a computer program. He said the spectrum’s “top edge,” where rationality predominates, is easiest to model; it gets harder lower down, where we become less like calculating machines and more emotive. And Gelernter said – categorically – that no artificial system would ever be able to feel like a human feels.

This I challenged in the question period, suggesting that everything a human mind does must emerge out of neurons’ information processing – admittedly a massively complex system – but if such a system could be mimicked artificially, couldn’t all its effects, including consciousness and emotion, arise therein? I referenced the movie Her.

Gelernter replied at great length. He said that some man-made systems already approach that degree of complexity (actually, I doubt this), but nobody imagines they’re conscious. He quoted Paul Ziff that a computer can do nothing that’s not a performance – a simulation of mind functioning, not the real thing.

Making notes, I wrote the words “Chinese Room” before Gelernter spoke them. This refers to John Searle’s famous thought experiment: a person in a room, using a set of rules, can respond to incoming messages in Chinese, thus appearing to understand Chinese, without actually understanding Chinese. Likewise a computer, using programmed rules, could appear to converse and understand, without actually understanding.
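The point of the thought experiment is easy to caricature in code – a sketch with a made-up rulebook, showing how a pure symbol-matching system can appear to converse in Chinese while understanding nothing:

```python
# A toy "Chinese Room": a rulebook maps incoming messages to canned
# replies. The "person in the room" just matches symbols against rules,
# with zero understanding of what any of them mean.
# The rules and phrases below are invented for illustration.

RULES = {
    "你好": "你好！",              # "hello" -> "hello!"
    "你会说中文吗？": "会一点。",   # "do you speak Chinese?" -> "a little."
}

def room_reply(message: str) -> str:
    """Apply the rulebook; no meaning involved, only symbol matching."""
    return RULES.get(message, "请再说一遍。")  # default: "please say that again."

print(room_reply("你好"))  # looks fluent from outside the room
```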

Gelernter contrasted the view of “computationalists” like Daniel Dennett who – consistent with my question – regard the mind as basically akin to a computer – the brain is the hardware, the mind is the software. Gelernter acknowledged this is a majority view. It says that while a single neuron can do nothing, nor can a thousand, when a brain has trillions of interconnections, mind emerges. But this Gelernter dismissed, analogizing that a single grain of sand can do nothing, but a trillion can’t either.

Gelernter asserted that computationalists actually have no evidence for their stance, and it boils down to being an axiom – an assumption, like Euclid’s axiom that parallel lines never meet (though never meeting is the definition of parallel lines, which is something different).

I found none of this persuasive. Someone later asked me what’s the antithesis of “computationalism.” I said “magicalism.” Because Gelernter seemed to posit something magical that creates mind, above and beyond mechanistic neural processing. This argument has been going on for centuries. But it’s really Gelernterists who engage in axioms – that is, assuming something must be true, albeit unprovable. And I call the opposing view materialism – that all phenomena must be explicable rationally – and the mind must arise from what neurons physically do – because there is no other possibility. I do not believe in magic.

When I talked with Gelernter afterward, he offered a somewhat better argument – that to get a mind from neurons, you need, well, neurons. That their specific characteristics, with all their chemistry, are indispensable, and their effects could not be reproduced in a system made, say, of plastic. He analogized neurons to the steel girders holding up a building – thanks to steel’s particular characteristics – and girders made of something else, like potato chips, wouldn’t do.

But I still wasn’t persuaded. Gelernter had said, again, that computer programs can only simulate human mind phenomena; for example, a program that “learns” is simulating learning but not actually learning as a human does. I think that’s incorrect – and exemplifies Gelernter’s error. What does “learning” mean? Incorporating new information to change the response to new situations – becoming smarter from experience. Computer programs now do exactly this.
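That definition – incorporating new information to change future responses – is exactly what even the simplest machine-learning programs do. A minimal sketch (nothing here comes from Gelernter’s talk; it’s the textbook perceptron, learning the logical OR function from repeated experience):

```python
# An online perceptron: after each example it sees, it adjusts its
# weights, so its responses to future inputs genuinely change with
# experience - the definition of learning given above.

def train_step(weights, features, label, lr=0.1):
    """Predict, then nudge weights in proportion to the error."""
    prediction = 1 if sum(w * x for w, x in zip(weights, features)) > 0 else 0
    error = label - prediction
    return [w + lr * error * x for w, x in zip(weights, features)]

# Learn logical OR; the first feature is a constant bias input.
examples = [([1, 0, 0], 0), ([1, 0, 1], 1), ([1, 1, 0], 1), ([1, 1, 1], 1)]
weights = [0.0, 0.0, 0.0]
for _ in range(20):                       # repeated experience
    for features, label in examples:
        weights = train_step(weights, features, label)

# The trained responses now match the target behavior.
for features, label in examples:
    assert (1 if sum(w * x for w, x in zip(weights, features)) > 0 else 0) == label
```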

Neuronal functioning is very special and sophisticated, and would be very hard to truly reproduce in a system not made from actual neurons. But not impossible, because it’s not magical. I still see no reason, in principle, why an artificial system could not someday achieve the kind of complex information processing that human brains do, which gives rise to consciousness, a sense of self, and feelings.**

Those who’ve said something is impossible have almost always proven wrong. And Arthur C. Clarke said any sufficiently advanced technology is indistinguishable from magic.

* In 1993 he survived an attack by the Unabomber, whose brother, David Kaczynski, has been to my house (we had an interesting discussion about spirituality) – my three degrees of separation to Gelernter.

** See my famous article in The Humanist magazine: The Human Future: Upgrade or Replacement.

 

Grannies killed by college exams

January 24, 2016

It’s true. College exams are deadly for students’ grandmothers. A study determined that granny death rates spike tenfold before a midterm, and nineteen times before a final exam. One theory is that grannies’ health is undermined by anxiety and stress when their grandchildren face exams. Indeed, the study found that failing students are fifty times likelier to lose a grandmother in the run-up to an exam, compared to non-failing students.

This is reported in Dan Ariely’s book, The (Honest) Truth About Dishonesty. Ariely is a professor of psychology and behavioral economics at Duke.

But seriously, what’s really going on is that students commonly make up grandmother deaths as a pretext for requesting exam postponements. Shocking.

The book’s main theme is that we all lie and cheat. But that doesn’t make us sociopaths. In fact, we tend to lie and cheat only so much that we can still look in the mirror and see an honest ethical person. We sometimes lie to ourselves.

Ariely invokes numerous laboratory experiments. In a typical case, test subjects are asked to solve a set of puzzles within a time limit, earning a payment for each one solved. But on an honor system: they self-report their performance. Most fudge it upward, but only by a little.

I found much of this suspiciously artificial and unlike real life. In another example, people were asked to gauge whether more dots appeared to the right or left of a line. Sometimes it was obvious, sometimes not. But when told they’d be paid substantially more for saying “right” than “left,” the answers skewed rightward. This Ariely called dishonesty. I disagree. If told I’d be paid more simply for saying “right” rather than “left,” I’d shrug and say “right” every time. That’s just a rational response to the rules.

Perhaps I’m quibbling. But most of Ariely’s lab tests entailed honesty along a gradient, falling in shades of gray. Whereas in everyday life ethical questions are often either-or. For instance, in my coin business, I normally send out orders before payment. Perhaps if, Ariely lab style, customers calculated their own bills, there might be some fudging. But when it’s just paying versus not paying, over 99% pay. Some even correct errors made in their favor.

This bespeaks honesty of a high order. Maybe my customers are not a representative cross-section, but I don’t think collectively they’re that unusual. Nor is my business. Most of the world’s commerce proceeds on a basis of mutual trust between trading partners; it’s our default assumption. I once got an e-mail from a stranger in Africa selling coins. I gave him a substantial order. He didn’t know me, but assumed that an American businessperson would likely pay. And I did pay him after receiving the package. That’s how it works.

This basic level of trust is a fundamental underpinning of civilization. Of course we know we must watch out for violators; we lock our doors. Yet still you assume the average person whose path you cross won’t bash your head in and grab your stuff. Or that a store won’t sell you defective goods. And so forth. Otherwise civilization could not function.

A recent poll found a significant decline in the percentage agreeing that most people are trustworthy. There’s no evidence we’ve actually become less trustworthy – only that we think people have. Ariely seems to think so, pointing to scandals like Enron. But were businesses more ethical in bygone times? I doubt it; indeed, it’s harder to get away with scams in today’s interconnected media world of constant scrutiny and exposure. Yet that parade of exposures – Volkswagen is a recent example – does make people believe misfeasance has become rampant, compared to a romanticized past. I also suspect that decreased face-to-face personal interaction undermines our acculturation to the idea that people are generally trustworthy. But if that makes us less trusting, the decline in perceived trustworthiness can become a self-fulfilling prophecy.

What do we live for?

January 3, 2016

“God, make me chaste – but not yet.”

Augustine


That was Saint Augustine, famously wrestling between his worldly desires and desire for holiness. He’s profiled in David Brooks’s book, The Road to Character.

Brooks’s theme is that a truly good life requires controlling, even sacrificing, personal desires — but it’s an advantageous trade-off. This is what Augustine struggled over. He knew his pursuit of worldly success, pleasures, sex, wasn’t making him happy. But could he change?

Brooks profiles people he feels did resolve the dilemma and hence did live good lives.

Marshall


George Marshall, for example, a model of soldierly devotion to duty and country. In WWII, Marshall ached to lead the D-Day invasion, and believed he’d earned the prize. But he forbade himself from ever putting personal desires first, and when FDR asked him point blank if he wanted it, Marshall could not utter the word “yes.” So it went to Eisenhower.

Eisenhower too is profiled in the book, along with Dorothy Day, A. Philip Randolph, George Eliot, Frances Perkins, and Samuel Johnson; all certainly admirable characters. Each made sacrifices for the sake of a higher good, exercising self-control over personal impulses which might have entailed transient rewards but which conflicted with larger goals. The key is understanding what is really important, and the strength of will to put that first.

This again was Augustine’s struggle. But, unlike the others profiled, his greater good was not to achieve something in the human realm. While Ike and Marshall served their country, Randolph the cause of equality, Day and Perkins the downtrodden, etc., for Augustine it was God. It was to get right with God that Augustine finally summoned the will to reorder his life.

The others were serving something real; Augustine, something imaginary. So what is the moral lesson there? Brooks’s chapter on Augustine is all theological mumbo-jumbo, convoluted and false; indeed, absurd. You cannot live a truly meaningful life if the whole thing is grounded in delusion. Only when you overcome false ideas about existence, and grapple with the world as it really is, can you live a life of authentic meaning and virtue.

In concluding his chapter on Augustine, Brooks speaks of “faith against pure rationalism.” Mark Twain defined faith as believing what you know ain’t so. My rationalism isn’t “pure,” since humans are imperfect. But we must try.

Brooks talks of a broad cultural shift from an ethos of “moral realism,” controlling the self in service to some larger good (a la Marshall), to one of self-actualization, “be all you can be,” or condensed to “the big Me.” And like others who put things in such terms, Brooks is censorious, albeit mildly; he thinks the shift has gone too far, and we’re losing a deeper kind of virtue.

Here’s my take. For most of human history, conditions of life were unforgivingly harsh, such that Brooksian “moral realism” was not just a virtue but a necessity. Of course selfishness and greed always operated too, yet survival required individuals to conform to societal strictures. That’s what has changed. No longer will a little free-spirited self-indulgence throw us back to living in caves. Modern advanced societies have at last mastered the problem of subsistence, freeing us to seek personal fulfillment in whatever ways feel nourishing to us, without having to be George Marshall about it.

Most of us still do try to serve others, and a larger good. But it’s not the only way to live meaningfully. In a utilitarian calculus of increasing the world’s sum total of human happiness, seeing to your own needs and desires is at least equal in importance to worrying about someone else’s. Indeed, you have a special duty to yourself, and you are the one person best positioned to know what’s good for you.

As Garrison Keillor has said, if one’s purpose in life is to serve others, then what purpose is served by the existence of those others?

In his summing up, Brooks’s point number one is: “We don’t live for happiness, we live for holiness.” But the explanatory paragraph actually says nothing of God; it’s about moral ambition. If we live for such “holiness,” why so? Ultimately it’s always about personal fulfillment – doing that which makes us feel good. The ascetic starving himself in a cave does it because, on a level most important to him, the suffering makes him feel good about himself. “Happiness” is a suitable word for this concept. It is what everyone lives for.

The sense of grievance: a personal lesson

December 30, 2015

One factor motivating Islamic radicals is a deep sense of grievance – a feeling that Muslims are victims of injustice, disrespected, a grievance crying out for expression and expiation. Humans have a pre-installed injustice detector (mine is set on “high”). These are powerful feelings.

We traveled as usual to my wife’s family for the holiday. My daughter flew in from Jordan. On Christmas eve I got left at the hotel, waiting for my wife to fetch me around 2 PM. Well, two came, then three, and four, and the next one. I could have called her but somehow got it in my head that she should call me. So instead I chose to wait and nurture a grievance, feeling disrespected. This grew to prodigious proportions by the time she arrived at 5:20.

Turned out she’d had a very rough day, chauffeuring people through terrible traffic. Oh, and by the way – the previous day had been her mother’s funeral. But none of that trumped my sense of grievance. I expected my wife to fall on her knees in contrition. When instead she pointed out what I should have done, my umbrage multiplied.

I think of myself as cool, rational, reasonable. And while I fumed, I did carefully analyze whether my intense feelings were truly justified. Yup, they were, I concluded.

But my truculence was making my beloved wife very upset, and finally, remorse for that overcame my sense of grievance, fortunately before it could ruin Christmas. And once the boil was thusly lanced, in the cold light of reason I could see how unreasonable and petty I had been. Indeed, I was kind of shocked at how such a demon of fierce feeling had seized control of my brain. While in its grip, no mitigating factor mattered.

It made me think of Muslims and Palestinians and the sense of grievance. And of the late Edward Said, whose all-encompassing “blame the West” perspective on the Middle East remains influential. I could grasp in a new, personal way just how powerful such emotions can be – how impervious to reason – and to any other considerations, least of all consideration for the other side. Without dismissing Muslim and Palestinian grievances, there is indeed a lot to be said on the other side; and the grievance mindset can betray one’s own best interests. But when that demon gets hold of you – as it did me, briefly at least – it won’t listen to reason. This is how you get suicide bombers.

Well, my wife likes to chide my supposed belief in rationality, and this episode certainly scored one for her. But of course I don’t believe humans are always rational. Rather, it’s that we are capable of rationality (as I was, in the end). And (go ahead, cynics, have fun scoffing) I believe we are getting better at being rational — and thusly making a better world.