A Predilection for Extinction?

There appears to be a lot of concern about extinctions nowadays -everything from spotted owls to indigenous languages peppers the list. Things around us that we took for granted seem to be disappearing before we even get to know or appreciate them. One has to wonder whether this is accompanied by furtive, yet anxious, glances in the mirror each morning.

Extinction. I wonder what it would be like -or can we even imagine it? If we could, then presumably we’re not extinct, of course -but our view of history is necessarily a short one. Oral traditions aside, we can only confidently access information from the onset of written accounts, and many extinctions require a longer time-frame to detect… although perhaps even that is changing as we become more aware of the disappearance of less threatening -less obvious- species. Given our obsessive proclivity for expanding our knowledge, someone somewhere is bound to have studied issues that have simply not occurred to the rest of us.

And yet, it’s one thing to comment on the absence of Neanderthals amongst us and tut-tut about their extinction, but quite another to fail to fully appreciate the profound changes in climate that are gradually occurring. Could the same fate that befell the Neanderthals be foreshadowing our own demise -a refashioning of the Cassandra myth for our self-declared Anthropocene?

It would not be the first time we failed to see our own noses, though, would it? For all our perceived sophistication, we often forget the ragged undergarments of hubris we hide beneath our freshly-laundered clothes.

Religion has long hinted at our ultimate extinction, of course -especially the Christian one with which those of us in the West are most familiar- with its talk of the End of Days. But if you think about it more closely, this is predicted to occur at the end of Time; extinction, on the other hand, occurs -as with, say, the dinosaurs- within Time. After all, we are able to talk about it, measure its extent, and determine how long ago it happened.

And yet, for most of us, I suspect, the idea of the extinction of our own species is not inextricably linked to our own demise. Yes, each of us will cease to exist at some point, but our children will live on after us -and their children, too. And so on for a long, long time. It is enough to think that, since we are here, our children will continue on when we are not. Our species is somehow different from our own progeny…

Darwin, and the subsequent recognition of the evolutionary pressures that favour the more successfully adapted, no doubt planted some concerns; but it was an essay in Aeon by Thomas Moynihan (who completed his PhD at Oxford) that set the issue of extinction in a more historical context for me: https://aeon.co/essays/to-imagine-our-own-extinction-is-to-be-able-to-answer-for-it

Moynihan believes that only after the Enlightenment (generally attributed to the philosophical movement from the late 17th to the 19th century) did the idea of human extinction become an issue for consideration. ‘It was the philosopher Immanuel Kant who defined ‘Enlightenment’ itself as humanity’s assumption of self-responsibility. The history of the idea of human extinction is therefore also a history of enlightening. It concerns the modern loss of the ancient conviction that we live in a cosmos inherently imbued with value, and the connected realisation that our human values would not be natural realities independently of our continued championing and guardianship of them.’

But, one may well ask, why was there no serious consideration of human extinction before then? It would appear to be related to what the American historian of ideas Arthur Lovejoy called the Principle of Plenitude, which seems to have been believed in the West from the time of Aristotle right up until the time of Leibniz (who died in 1716): things as they are could be no other way. It would be meaningless to think of any species (even the human one) not continuing to exist, because they were meant to exist. Period. I am reminded -as I am meant to be- of Voltaire’s satirical novel Candide and its characters’ uncritical espousal of Leibniz’s belief that they were all living in ‘the best of all possible worlds’ -despite all proof to the contrary.

I realize that in our current era, this idea seems difficult to accept, but Moynihan goes on to list several historical examples of the persistence of this type of thinking -including those that led ‘Thomas Jefferson to argue, in 1799, in the face of mounting anatomical evidence to the contrary, that specimens such as the newly unearthed Mammuthus or Megalonyx represented species still extant and populous throughout the unexplored regions of the Americas.’

Still, ‘A related issue obstructed thinking on human extinction. This was the conviction that the cosmos itself is imbued with value and justice. This assumption dates back to the roots of Western philosophy… Where ‘being’ is presumed inherently rational, reason cannot itself cease ‘to be’… So, human extinction could become meaningful (and thus a motivating target for enquiry and anticipation) only after value was fully ‘localised’ to the minds of value-mongering creatures.’ Us, in other words.

And, of course, the emerging findings in geology and archeology helped to increase our awareness of the transience of existence. So too, ‘the rise of demography [the statistical analysis of human populations] was a crucial factor in growing receptivity to our existential precariousness because demography cemented humanity’s awareness of itself as a biological species.’

Having set the stage, Moynihan’s argument is finally ready: ‘And so, given new awareness of the vicissitude of Earth history, of our precarious position within it as a biological species, and of our wider placement within a cosmic backdrop of roaming hazards, we were finally in a position to become receptive to the prospect of human extinction. Yet none of this could truly matter until ‘fact’ was fully separated from ‘value’. Only through full acceptance that the Universe is not itself inherently imbued with value could ‘human extinction’ gain the unique moral stakes that pick it out as a distinctive concept.’

And interestingly, it was Kant who, as he aged, became ‘increasingly preoccupied with the prospect of human extinction… During an essay on futurology, or what he calls ‘predictive history’, Kant’s projections upon humanity’s perfectibility are interrupted by the plausibility of an ‘epoch of natural revolution which will push aside the human race… Kant himself characteristically defined enlightening as humanity’s undertaking of self-responsibility: and human rationality assumes culpability for itself only to the exact extent that it progressively spells out the stakes involved… This means that predicting increasingly severe threats is part and parcel of our progressive and historical assumption of accountability to ourselves.’

So, I don’t see this recognition of the possibility of human extinction as a necessarily bad thing. The more we consider the prospect of our disappearance, the more we become motivated to do something about it. Or, as Moynihan points out, ‘The story of the discovery of our species’ precariousness is also the story of humanity’s progressive undertaking of responsibility for itself. One is only responsible for oneself to the extent that one understands the risks one faces and is thereby motivated to mitigate against them.’ That’s what the Enlightenment was all about: humanity’s assumption of self-responsibility.

Maybe there is still hope for us… well, inshallah.

Illeism, or Sillyism?

Who would have thought that it might be good to talk about yourself in the third person? As if you weren’t you, but him? As if you weren’t actually there, and anyway, you didn’t want yourself to find out you were talking about him in case it seemed like, well, gossip? I mean, only royalty, or the personality-disordered, are able to talk like that without somebody phoning the police.

Illeism, it’s called -from the Latin ille, ‘he’- and it’s an ancient rhetorical technique that was used by various equally ancient personages -like, for example, Julius Caesar in the accounts he wrote about his exploits in various wars. It’s still in occasional use, apparently, but it stands out like a yellow McDonald’s arch unless you’re a member of a small cabal, sworn to secrecy.

Now that I mention it, I remember trying it once when I was very young, and blamed our cat for scattering cookies all over the floor; but I suppose that doesn’t count because my mother instantly realized I was actually using the third-person-singular in its grammatical sense, and sent me to my room for fibbing -without the cat. I didn’t even get a hug for my clever use of ancient rhetoric.

The episode kind of put me off third-personism until I read a little more about it in an article by David Robson, originally published by The British Psychological Society’s Research Digest and adapted for Aeon. Robson is a science journalist and a feature writer for the BBC: https://aeon.co/ideas/why-speaking-to-yourself-in-the-third-person-makes-you-wiser

It seems illeism can be an effective tool for self-reflection. And although you may be tempted to opt for simple rumination -‘the process of churning your concerns around in your head’- be warned: ‘research has shown that people who are prone to rumination also often suffer from impaired decision making under pressure, and are at a substantially increased risk of depression…’

Robson was intrigued by the work of the psychologist Igor Grossmann at the University of Waterloo in Canada, writing in PsyArXiv, which suggests that third-person thinking ‘can temporarily improve decision making… [and] that it can also bring long-term benefits to thinking and emotional regulation’ -presumably related to the change of perspective allowing the user to bypass -or at least appreciate- their previously held biases.

Grossmann, it seems, studies wisdom, and ‘[w]orking with Ethan Kross at the University of Michigan in the United States… found that people tend to be humbler, and readier to consider other perspectives, when they are asked to describe problems in the third person.’

Hmm…

He read the article with a fair soupçon of wariness. Might this not, he wondered, be academic legerdemain? It managed to fool Robson, but surely not he who reads without even a hatchet to grind. He, after all, is only a retired GYN, accustomed to addressing freshly delivered newborns and their unique anatomical appendages with the appropriate third-person labels. It’s hard to do otherwise with the unnamed. Indeed, it had always seemed situationally germane, given the circumstances. To turn that on himself, however, might be contextually confusing -as well as suspicious.

So, his days as an accoucheur long past, he decided there would be little harm in trying it out in front of a mirror before he unleashed his full third-person on an unsuspecting face in Starbucks.

It seemed… unusual at first: he knew the individual in the reflection as well as himself, and addressing him as ‘he’ felt rude -creepy, actually. He did manage to get around the vertigo by pretending he was actually talking to a younger version of his brother, though, and ignored the fact that his brother was moving his lips at the same time and apparently not listening.

“Your brother,” he started, “is wondering if he should use the third-person approach when he is anxious about whether to order the sausage and egg bagel or just a cookie for breakfast at Starbucks.” A sudden thought occurred to him: “He could pretend he was sent to order it for his friend who is currently guarding a table for the two of them.”

He stared at the image in the mirror and frowned, suddenly remembering the cat-and-cookie incident.

He was uncertain where this was going. Was he supposed to ask what he -that is ‘himself’- thought about the idea? And who, exactly, would be answering? The whole thing seemed like an endless hall of mirrors, an infinite regression of Matryoshka dolls.

“Okay,” he added, to assuage the guilt he assumed he would have fibbing to the barista, “He is just trying an experiment in non-gendered, non-directional conversation to solve a personal decisional paralysis. So, he is not trying to be weird or anything. He is actually just asking for your advice: would bagel or cookie be a better breakfast?”

Suddenly, an unexpected epiphany -maybe produced by the comparative ‘better’, but nonetheless apparent in the way the third person had phrased his question. Of course the bagel, with its protein-rich contents, was the ‘better’ breakfast! He was pretty sure that First-person-singular would never have seen that with such clarity -could never have seen it. Only by divorcing himself from his stomach, and mentioning it as if he were discussing a friend, did it become clear.

He stepped away from his brother at the mirror and smiled to himself. He’d discovered a way of distancing himself from himself long enough to see who he was from an outside perspective. Still, there was a nagging question that kept tugging at his sleeve: who was he when he asked those questions? And did he risk permanently closing the door to the person he used to be, or was it sort of like losing himself in a story and then swapping realities when he closed the book…? But, what if he preferred what he was reading to what he was living…?

Whoa -pretty heavy stuff, that.

You know, it’s harder coming back to First-person than closing the book after a while, and I found myself switching back and forth for the longest time. I really wonder how thoroughly Grossmann and Kross had thought this through. And I wonder if Robson got caught up in their web as well. Nobody mentioned anything about collateral damage -but of course, they wouldn’t, would they?

All I can say is be careful, readers -there might be a third-person Minotaur at the end of the labyrinth.

Who’s afraid of the Deodand?

Sometimes Philosophy hides in plain sight; interesting questions emerge, unbidden, when you least expect them. A few months ago I was waiting in a line to order a coffee in a poorly-lit shop, when the woman behind me bumped into me as she struggled to read the menu posted on the wall over the counter.

“They don’t make it easy in here, do they?” she grumbled in a token apology.

I turned and smiled; I’d been having the same difficulty. “I should have brought a flashlight,” I added, trying to make light of it.

“Photons should be free,” she mumbled. “It’s not like we should have to carry them with us to get a coffee…” She looked at me with a mischievous grin creeping across her shadowed face. “I mean they don’t have to pay by the pound for them like bananas, or anything…”

I chuckled. “Photons have mass…? I didn’t realize they were Catholic.” It was a silly thing to say, I suppose, but it just popped out.

She actually laughed out loud at that point. “That’s very clever…” she said, and despite the dim light, I could feel her examining me with more interest.

But I found myself standing in front of the barista at that point, so I ordered my coffee, and headed for a table in the corner. A moment later, the woman from the lineup surfaced out of the darkness and sat beside me under a feeble wall light at the next table.

“Do you mind if I sit here?” she asked, not really waiting for my reply.

I smiled pleasantly in response, but in truth, I had been looking forward to the solitude usually offered by a dark coffee-shop corner.

“I’m sorry,” she said, immediately sensing my mood. “It’s just that you cheered me up in that horrid line, and I wanted to thank you…”

“It was a bit of a trial, wasn’t it?”

She nodded as she sipped her coffee. “Your comment on the mass of photons was hilarious -I’m a Science teacher at the Mary Magdalene Women’s College, so I enjoyed the reference to Catholics. My students will love it.”

I looked at her for a moment and shrugged. “I’m afraid it’s not original, but thank you.”

She chuckled at my honesty and picked up her coffee again. “I don’t recognize it,” she added after a moment’s reflection, still holding her steaming cup in front of her and staring at it like a lover.

“I think maybe it was one of my favourite comedians who said it…” But I wasn’t sure.

“Oh? And who might that be?” she asked, smiling in anticipation of a shared interest.

I thought about it for a moment. “I don’t know… Woody Allen, perhaps.”

She put down her cup with a sudden bang on the table and stared at me. Even in the dim light, I could feel her eyes boring into my face. “A horrid man!” she said between clenched teeth. “How could you ever think that anything he said was funny?” she muttered.

I was beginning to find her eyes painful. I was aware of the controversies about Woody, of course, but I suppose I was able to separate them from his humour. And yet, I have to admit that when the woman reminded me of his behaviour, I felt guilty -as if by laughing at his jokes, I was tacitly approving of his other activities.

It’s a puzzling, and yet fascinating, relationship we have with things used by, or even owned by, people we consider evil: deodands. The word, once used in English Common Law, comes from the Medieval Latin Deo dandum -a thing to be given to God. The idea was that if an object had caused a human death, it had to be forfeited to the Crown, and its value given as compensation to charity or to the family of the victim.

The question, though, is why we feel such revulsion for something that, through no fault of its own, was used in the commission of a crime. It could have been any knife, say, that was used in a stabbing, so why is this particular knife somehow different? Does the aura of what it did cling to it? Haunt it…? Would Woody Allen’s unrelated jokes -or, for that matter, Bill Cosby’s- be funny if we didn’t know their sources?

I have to admit that humour is a lot more reflective of the personality that created it than, for example, an assassin’s gun or a criminal’s knife; but in isolation -i.e. divorced from context- is there really any difference? I certainly have no answer, but I was pleasantly surprised to find that the issue was not one I was puzzling over on my own. I came across an essay in Aeon by Paul Sagar, a lecturer in political theory at King’s College London, that looked at first as if it might be helpful: https://aeon.co/essays/why-do-we-allow-objects-to-become-tainted-by-chance-links

He wrote that ‘It is not uncommon to find that one’s enjoyment of something is irrevocably damaged if that thing turns out to be closely connected to somebody who has committed serious wrongs…  knowledge of somebody – or something – having done a bad thing can deeply affect how we view the status of the thing itself.’ But why should that be?

Obviously, the answer is not easily obtained, and in a roundabout way he throws himself on the mercy of the 18th-century Scottish Enlightenment thinker Adam Smith, and his first book, The Theory of Moral Sentiments (1759). ‘Smith thought it undeniable that we assess the morality of actions not by their actual consequences, but by the intentions of the agent who brings them about.’ And yet, if a person were to throw a brick over a wall and hit someone accidentally, he would also be judged by the consequences even though he hadn’t intended to injure anyone. ‘Smith thought that our moral sentiments in such cases were ‘irregular’. Why do we respond so differently to consequences that have bad outcomes, when those outcomes are purely a matter of luck? Smith was confident that, although he could not explain why we are like this, on balance we should nonetheless be grateful that we are indeed rigged up this way.’

Have patience -this may slowly lead us to a sort of answer. First of all, ‘if, in practice, we really did go around judging everybody solely by their intentions, and not by the actual consequence of their actions, life would be unliveable. We would spend all our time prying into people’s secret motivations, fearing that others were prying into ours, and finding ourselves literally on trial for committing thought crimes.’ Only a god on Judgement Day should be allowed that privilege.

Also, it is good to be bothered by consequences rather than just by hidden intentions, for social reasons: you have to actually do good things to get praise, not just intend to do them. And conversely, you have to actually do the bad things to get the punishment. Uhmm… Well, okay, but that doesn’t really explain deodands, or anything.

At this point, Sagar kind of gives up on Smith’s attempts at moral philosophy and heads off on his own wandering trail to find an answer. ‘It is good that we feel aversion to artifacts (be they physical objects, films, records or whatever) associated with sex crimes, murders and other horrors – even if this is a matter of sheer luck or coincidence – because this fosters in us not only an aversion to those sorts of crimes, but an affirmation of the sanctity of the individuals who are the victims of them.’ Somehow that makes us less likely to act the same way? Whoaa…

In the last paragraph, he essentially throws up his hands in frustration (or maybe those were my hands…) and as good as admits he doesn’t know why we would even think about deodands.

And me? How should I have responded to the woman in the coffee shop? Well, probably not by talking about Adam Smith -changing the subject might have been a good first step, though…

Too cute for words?

I love cute as much as anyone else, I suppose, although it’s not a quality I have possessed since early childhood, I’m afraid. Many things are cute, though: puppies, babies, toddlers… and they all seem to have certain attributes in common: large or prominent eyes, a larger-than-expected head, and so on -neoteny, it’s called. They all seem vulnerable, and deserving of protection. Needing cuddling.

And yet, apart from agreeing affably with doting parents describing their newborns, or singles obsessing over their new puppies, I have to admit I never gave cuteness any extra thought. I mean, cute is, well, cute. There didn’t seem to be any need to dwell on the features, or deify their appearance. But an older article in Aeon about cuteness, by Joel Frohlich, then a PhD student at UCLA, did pique my interest: https://aeon.co/ideas/how-the-cute-pikachu-is-a-chocolate-milkshake-for-the-brain

Perhaps it was the etymology of the word that initially intrigued me. ‘The word emerged as a shortened form of the word ‘acute’, originally meaning sharp, clever or shrewd. Schoolboys in the United States began using cute to mean pretty or attractive in the early 19th century. But cuteness also implies weakness. Mignon, the French word for cute or dainty, is the origin of the English word ‘minion’, a weak follower or underling… It was not until the 20th century that the Nobel laureates Konrad Lorenz and Niko Tinbergen described the ‘infant schema’ that humans find cute or endearing: round eyes, chubby cheeks, high eyebrows, a small chin and a high head-to-body-size ratio. These features serve an important evolutionary purpose by helping the brain recognise helpless infants who need our attention and affection for their survival.’

In other words, ‘cute’ was a mechanism to elicit protection and caring. Indeed it seems to be neurologically wired. MRI studies of adults presented with infant faces revealed that the ‘brain starts recognising faces as cute or infantile in less than a seventh of a second after the face is presented.’ These stimuli activate ‘the nucleus accumbens, a critical piece of neural machinery in the brain’s reward circuit. The nucleus accumbens contains neurons that release dopamine.’

But the system can be tricked: in a cartoon, say, ‘baby-like features might exceed those of real infants, making the character a supernormal stimulus: unbearably adorable, but without the high maintenance of a real baby.’ So, is cuteness in these circumstances actually a Trojan Horse? An interesting thought.

Cuteness is situational -or at least, should be. Cuteness out of context can be frightening, and even grotesque. Think of the clown in Stephen King’s novel It, for example. Imitation, when recognized as such, seems out of place. Wrong. Cute is a beginning -an early stage of something that will eventually change as it grows up. Its transience is perhaps what makes it loveable. At that stage it is genderless, asexual, and powerless. It poses no threat -in fact, it solicits our indulgence. Think what would happen if it were a trick, however: our guard would be down and we would be vulnerable.

But there’s a spectrum of cuteness; there must be, because it -or its homologues- seems to be appearing in situations that don’t remotely suggest innocence, youth, or vulnerability. Think of the proliferation of cutesy emojis. As Simon May, a visiting professor of philosophy at King’s College London, points out in an essay (also in Aeon: https://aeon.co/ideas/why-the-power-of-cute-is-colonising-our-world ): ‘This faintly menacing subversion of boundaries – between the fragile and the resilient, the reassuring and the unsettling, the innocent and the knowing – when presented in cute’s frivolous, teasing idiom, is central to its immense popularity… Cute is above all a teasing expression of the unclarity, uncertainty, uncanniness and the continuous flux or ‘becoming’ that our era detects at the heart of all existence, living and nonliving. In the ever-changing styles and objects that exemplify it, it is nothing if not transient, and it lacks any claim to lasting significance. Plus it exploits the way that indeterminacy, when pressed beyond a certain point, becomes menacing – which is a reality that cute is able to render beguiling precisely because it does so trivially, charmingly, unmenacingly. Cute expresses an intuition that life has no firm foundations, no enduring, stable ‘being’.’

Perhaps that’s what makes non-contextual cute so inappropriate, so menacing. ‘This ‘unpindownability’, as we might call it, that pervades cute – the erosion of borders between what used to be seen as distinct or discontinuous realms, such as childhood and adulthood – is also reflected in the blurred gender of many cute objects… Moreover, as a sensibility, cute is incompatible with the modern cult of sincerity and authenticity, which dates from the 18th century and assumes that each of us has an ‘inner’ authentic self – or at least a set of beliefs, feelings, drives and tastes that uniquely identifies us, and that we can clearly grasp and know to be truthfully expressed. Cute has nothing to do with showing inwardness. In its more uncanny forms, at least, it steps entirely aside from our prevailing faith that we can know – and control – when we are being sincere and authentic.’

Whoa -that takes cute down a road I don’t care to travel: it’s an unnecessary detour away from the destination I had intended. Away from attraction, and into the Maelstrom of distraction. Contrived cute is uncute, actually. It is a tiger in a kitten’s mask.

I think there is a depth to beauty which -as ephemeral and idiosyncratic as it may seem- is missing from cute. Both hint at admirability, and yet in different ways: one describes a surface, the other a content -much as if the value of a wallet could be captured by its outward appearance alone.

For some reason, cute reminds me of a Latin phrase from Virgil’s Aeneid -although in a different context, to be sure: Timeo Danaos et dona ferentes -‘I fear the Greeks, even bearing gifts’- a reference to the prolonged Greek war against Troy and their attempt to gain entrance to the city with their gift of the infamous Trojan Horse. Cute is a first impression, not a conclusion. I suppose it’s as good a place to start as any, but I wonder if, in the end, it’s much ado about nothing…