Does everything have meaning?

What is the meaning of rain? No, really -what, if anything, does it mean? If we ask the same question of Life, we understand immediately the type of answer required, so what is different about rain? Both are processes, of sorts, although rain has the added advantage of also being a thing -both palpable and visible- I suppose. But should that disqualify it from having meaning?

Meaning is something that stretches beyond the thing described, and expands it in ways perhaps not obvious at first glance: beyond just descriptive definition, beyond attempts at capturing it with a synonym -those are mere tautologies and add little clarity beyond finding other words to say the same thing.

It would be all too tempting to resort to simply describing rain’s cause -its meteorological significance; or suggesting its value in the sustenance of Life -but these would only describe its purpose -what it does- not its meaning. There is surely more to rain than water falling from the sky, just as there is more to Life than growth, reproduction, and change.

No, it seems to me that meaning points to something else, and a rhetorical equivalent might be something like a metaphor.

I suspect it was an essay in Aeon by Jeremy Mynott, an emeritus fellow at Wolfson College in Cambridge, that rekindled my wonder about meaning in the world around us: https://aeon.co/essays/the-ancient-world-teemed-with-birds-now-we-think-with-them

As he suggests, ‘Sometimes you need to look at things from outside to see them more clearly.’ And history can do that for many things -birds, for example. Before the days of over-population with its attendant pollution and habitat destruction, the much smaller aggregations of humanity were more intimately exposed to the perils -and beauty- that surrounded them.

‘The Mediterranean world of 2,500 years ago would have looked and sounded very different. Nightingales sang in the suburbs of Athens and Rome; wrynecks, hoopoes, cuckoos and orioles lived within city limits, along with a teeming host of warblers, buntings and finches; kites and ravens scavenged the city streets; owls, swifts and swallows nested on public buildings. In the countryside beyond, eagles and vultures soared overhead, while people could observe the migrations of cranes, storks and wildfowl. The cities themselves were in any case tiny by modern standards – ancient Athens, for example, had a population of about 120,000 at the height of its power in the 5th century BC.’

Things in nature impressed their physical presence on people’s daily lives to a degree now hard to imagine. ‘Not surprising either, therefore, that they also populated people’s minds and imaginations and re-emerged in their culture, language, myths and patterns of thought in some symbolic form.’ Some things -birds in his essay, at least- acquired a meaning beyond their mere physical presence.

Because Mynott is writing about the ‘meaning’ of birds, he goes on to describe how they became metaphors -there is ‘a simple progression from a descriptive fact about a bird (swallows migrate here the same time every spring), to a human comparison (that’s when we change what we wear, too) and then, in a natural and almost imperceptible segue, to making the bird represent something other than itself (a sign of spring, a reminder to start gardening, a valued guest). That is, a metaphor, something that ‘carries us across’ from one dimension of meaning to another.’

I think there is a very obvious parallel with other aspects of the natural world, too -rain, for example. And where he supplies proverbs to bolster his contention that the idea of birds has migrated into the realm of metaphor (‘One swallow doesn’t make a summer’), there are equivalent rain proverbs that do the same: ‘You need to save for a rainy day’, or ‘Rain does not fall on one roof alone’.

Metaphors work by having one thing stand symbolically for another, and by so doing, achieve a meaning far larger than the original.

When my children were young and beginning to learn the intricacies of language, they sounded very literal -so much so that at times it was difficult to explain things to them without endlessly searching for another word to use for clarification: definition again. And yet, often they seemed to be searching for something more than description -and the perpetual ‘Why?’ questions that dog every parent are testament to that. No matter the skillfulness of the answer, it is seldom enough to satisfy their inner quest.

I’m not suggesting that this is necessarily indicative of children’s innate need for meaning so much as simple curiosity born of insufficient exposure to the world -or perhaps incipient mischievousness- but it is interesting that it seems to be a search for more than just a cursory explanation. Perhaps it is a developing awareness that there is more to reality than surface -an early, and tentative, exploration of Philosophy.

“Why does it rain, daddy?” my little daughter once asked. I remember the question because of her drive to understand more about rain.

“Well,” I started, unsure of the answer, to be honest, “… you know how sometimes the air around you feels wet in the summer?” I was on shaky ground already, but I pressed on when she nodded her head enthusiastically. “And sometimes if you look really hard you can see little water droplets on the window glass?”

I have to admit I was making it up as I went along, but her little face seemed so eager for more, I embellished it a bit. “Well, those drops appear when wet air touches something cool like the glass in the window. It’s called condensation,” I added, but more for my sake than hers, I think.

“So, is that where rain comes from, daddy?” She was obviously confused -windows didn’t usually rain, after all.

“Uhmm, no, but it was just a way of explaining that wet air sometimes condenses on cold things, and it’s really cold way up in the sky…”

“So…” I could almost see her processing the information behind her eyes. “So, are there windows up in the sky…?” That didn’t seem right to her, I could tell.

“No, but there are little particles of dust up there, and they’re really cold, so water droplets condense on them. And when there are a lot of them, you see them as clouds…” I was way beyond my depth, so I rather hoped she’d be satisfied with that. But I could see by her face that the machinery inside was still churning.

“So, clouds are rain before it falls…” There, I had told her all I knew about rain -more than I knew, in fact.

Suddenly, a large smile grew on her face, and her eyes twinkled mischievously. “You’re just kidding me, aren’t you daddy?”

My heart sank. We were walking along a trail in the woods at the time, and had stopped to rest in a little clearing; I hadn’t thought to bring an encyclopedia. I can still remember the flowers peeking through the grass like children thinking they could hide in plain sight. I shrugged to hide my embarrassment. “What do you mean, sweetheart?”

She grabbed my hand and looked up at my face. “There’s more to rain than clouds, daddy…”

I tried to look like the wise parent, but she was having none of it.

“Why do you say that, sweetie?” I said and held my breath.

She sighed and rolled her eyes like she’d seen me do so often. Then she pointed to an enormous fluffy cloud that was floating lazily just over our heads. “Miss Janzen at kindergarten says that rain happens when clouds cry…”

I didn’t know whether to nod in agreement -it was a kind of vindication of my explanation- or stay still, in case it was a trap.

She suddenly blinked and stared at the cloud. “You can tell that cloud doesn’t have any rain in it…” I smiled and waited for the explanation. “It looks happy, doesn’t it…?”

I’m not sure, but I suspect my daughter already knew about metaphors, even if she’d never heard the word… and perhaps she’d grasped the meaning of rain, as well…

Must we stop and smell the flowers?

Apart from passive receptivity, I have had no more opportunity to experience perfumes than any other nose in the average crowd. And even in that chaos, the scents seem to be equally admixed with whatever else clings to us -not all of it encouraging. But I have to believe that the ability to notice different smells, let alone be repulsed or attracted to them, must serve more purpose than merely warning us off things that might be harmful. Why dedicate an entire organ just to avoid rotting carcasses or to pick pleasing flowers -as socially useful as that might be?

Indeed, there may be an entire chemical vocabulary entrusted to smells that enriches the umwelt of the otherwise utilitarian world of the animal kingdom. Given our common origins, why would we be any different?

Of course, except through their behaviour, animals are unable to communicate what they are reading in an odour, and until the very recent identification of olfactory receptor genes, even variations in humans were, by and large, a mystery. And yet their importance is signalled by the finding that these genes appear to constitute the largest gene family in all mammalian genomes.

The problem, perhaps, has always been in the attribution. As with animals, if we don’t know that odours are responsible for an action, we wouldn’t think to credit them. If a dog, for example, marks a spot after smelling it, we have no idea what that means. Does it merely suggest that the dog simply likes whatever it was it smelled, or something more? Is the dog leaving a message other than ‘I was here, too’?

You see the difficulty: an odour may engender an action, but neither the signal nor the response can be reliably categorized as anything other than a generic stimulus/response. And given the size of the olfactory receptor gene family, a purposeless, or motiveless, reflexive response seems unlikely.

So, how have we made use of this prowess historically? Well, for one thing, we have used odours to mask odours -a rather recursive, circular activity, it seems. The fact that bathing was at one time frowned upon -or perhaps difficult to achieve with any regularity for any but the wealthy- usually demanded olfactory disguise. And the need to remedy the resultant smell in itself suggests a nascent awareness of a message, camouflaged as it might be in societal norms.

And think of the now discredited Miasma theory: that many diseases -the Bubonic Plague springs to mind- were caused by ‘bad air’: smells, in other words. One can certainly understand how the odour of, say, rotting meat -and the sickness that might follow ignoring the message inherent in its telltale reek- came to be conflated with the idea that the smell itself was the cause. Only when germs, and not the air around them, were identified as the culprit did smell become merely an indicator of disease, not its cause.

But there’s a hint of a more useful and atavistic function of odours in the discovery of their importance in the initial bonding and identification of human mothers with their newborn offspring. I suppose it should have been obvious for millennia, though: an orphan lamb is often rejected by an unrelated lactating ewe unless the strange lamb is made to smell like her.

So, where am I going with this? Well, first of all, to the findings of a recent study [lead author Casey Trimmer, PhD] published online in advance of print in the Proceedings of the National Academy of Sciences: https://www.sciencedaily.com/releases/2019/04/190430164208.htm

‘Humans have about 400 different types of specialized sensor proteins, known as olfactory receptors, in their noses. One odor molecule can activate several different olfactory receptors, while any given receptor can be activated by several different odor molecules. In a process that remains to be decrypted, the olfactory system somehow interprets these receptor activation patterns to recognize the presence, quality (does it smell like cherry or smoke?) and intensity of millions, maybe even trillions, of different smells… Small differences in olfactory receptor genes, which are extremely common in humans, can affect the way each receptor functions. These genetic differences mean that when two people smell the same molecule, one person may detect a floral odor while another smells nothing at all… Because most odors activate several receptors, many scientists thought that losing one receptor wouldn’t make a difference in how we perceive that odor. Instead, our work shows that is not the case…  A change in a single receptor was often sufficient to affect a person’s odor perception… olfactory receptors in the nose encode information about the properties of odors even before that information reaches the brain.’
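The many-to-many logic in that passage is easy to lose in prose, so here is a toy sketch of it -purely illustrative, with invented receptor and odour names, and not any model from the Trimmer study:

```python
# A toy sketch (not the study's model): each odour molecule activates a
# set of receptors, the brain reads the overall activation pattern, and
# a single missing receptor can therefore change the pattern -and, on
# this picture, the percept. All names below are invented.
PERSON_A = {"OR1", "OR2", "OR3", "OR4"}   # hypothetical receptor repertoire
PERSON_B = {"OR1", "OR3", "OR4"}          # the same, minus one receptor

# One odour molecule activates several receptors; several odours can
# share a receptor (hypothetical mapping).
ODOURS = {
    "cherry-like": {"OR2", "OR3"},
    "smoky": {"OR1", "OR4"},
}

def activation_pattern(odour, repertoire):
    """Return the subset of a person's receptors this odour switches on."""
    return ODOURS[odour] & repertoire

for person, repertoire in (("A", PERSON_A), ("B", PERSON_B)):
    for odour in ODOURS:
        print(person, odour, sorted(activation_pattern(odour, repertoire)))
# Person B's 'cherry-like' pattern collapses from {OR2, OR3} to {OR3}:
# one genetic difference, a different signal reaching the brain.
```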

Why the receptor complexity if odours are mainly simple social adjuncts? Or, is there more going on than meets the nose? Obviously we seem to have less appreciation of the panoply of chemicals around us than, say, the average dog, but because we do not ‘smell’ them with equal facility, does that mean they have less of an effect on us? As we have begun to appreciate in terms of mother/infant recognition, not all odours reach conscious awareness. Not all smells are nameable.

Some perfume manufacturers maintain that their products contain pheromones (chemical signals) which might activate aphrodisiac-like behaviour in humans, but so far the evidence is tenuous, to say the least. Given our common evolutionary history with animals who do produce and react to pheromones, and our own incredible biological investment in olfactory receptors, however, I suspect it is just a matter of time before similar chemicals and effects are identified and utilized in us.

What brought this whole subject to mind, though, was a titillating article in the Smithsonian Magazine about Cleopatra’s perfume: https://www.smithsonianmag.com/smart-news/may-be-what-cleopatra-smelled-180972854/

‘Back in 2012, the archaeologists uncovered what was believed to be the home of a perfume merchant, which included an area for manufacturing some sort of liquid as well as amphora and glass bottles with residue in them… The researchers took their findings to two experts on Egyptian perfume, Dora Goldsmith and Sean Coughlin, who helped to recreate the scents following formulas found in ancient Greek texts.’

And, no, there’s no proof that what was recreated was what Cleopatra used -in fact, ‘It’s believed she had her own perfume factory and created signature scents instead of wearing what would be the relative equivalent of putting on a store-bought brand.’ But still, it’s a smell that Cleopatra might have worn…

There’s a legend that she believed so fervently in her perfume’s allure that she soaked the sails of her royal ship in it – so much, in fact, that Marc Antony could smell her coming all the way from shore when she visited him at Tarsus.

There’s got to be something in that: where there’s smoke there’s fire, eh?

Tomorrow, and tomorrow, and tomorrow

What is Time, if not a river flowing ever onwards from now -or from an ill-remembered ‘then’ to the same now? Of course, we all know the quotation attributed to Saint Augustine: What then is time? If no one asks me, I know what it is. If I wish to explain it to him who asks, I do not know – but that doesn’t get us very far. It neither allows events to be situated in time, nor allows us to appreciate its passage. Perhaps that’s unfair to ask of a denizen of the fourth century -saint, or no- but an orderly historical conception of time’s progression began long before his birth.

To more fully acknowledge the extent of time, one must be able to measure it -not so much mechanically, as calendrically. And, as Paul J. Kosmin pointed out in an article in Aeon, ‘from earliest recorded history right up to the years after Alexander the Great’s conquests in the late 4th century BCE, historical time – the public and annual marking of the passage of years – could be measured only in three ways: by unique events, by annual offices, or by royal lifecycles.’ https://aeon.co/essays/when-time-became-regular-and-universal-it-changed-history

‘In ancient Mesopotamia, years could be designated by an outstanding event of the preceding 12 months’ -presumably this would make the time frame more easily memorable. But the more distant in time events occurred, the more difficult it would be to appreciate any surrounding context. And a year named that way would be meaningful only to those living in that country or region -uninterpretable by others, unless they had been drawn in by conquest or a shared natural disaster.

Finally, though, ‘In the chaos that followed the death of Alexander the Great in Babylon in 323 BCE, all this changed. One of Alexander’s Macedonian generals, who would go on to win an enormous kingdom stretching from Bulgaria to Afghanistan, introduced a new system for reckoning the passage of time. It is known, after him, as the Seleucid Era. This was the world’s first continuous and irreversible tally of counted years. It is the unheralded ancestor of every subsequent era system, including the Christian Anno Domini system, our own Common Era, the Jewish Era of Creation, the Islamic Hijra, the French Revolutionary Era, and so on… For the first time in history, historical time was marked by a number that never restarted, reversed or stopped… Most importantly, as a regularly increasing number, the Seleucid Era permitted an entirely new kind of predictability… to confidently and accurately conceive, name and hold in the imagination a date several years, decades or centuries into the future.’

So, no matter what else happened, the year and all that happened in it was stable -and traceable. Nowadays that may not seem so amazing, but if you think about it, the perception of time itself changes when that happens: ‘Every event must be chained to its place in time before it becomes an available object of historical articulation. And the modes by which we date the world, by which we apprehend historical duration and the passage of time, frame how we experience our present, conceive a future, remember the past, reconcile with impermanence, and make sense of a world far wider, older and more enduring than any of us.’

Of course that’s not to imply the ancients had no concept of travelling through time, but with only remembered time posts as a guide, it made the journey more fraught -more circumscribed. And for many, it must have seemed like they were confined in a room whose walls were events they may not themselves have witnessed. Kosmin quotes a paragraph written by the Norwegian author Karl Ove Knausgård about the introduction of numeric time that describes it nicely: ‘It was as if a wall had been removed in the room they inhabited. The world no longer enveloped them completely. There was suddenly an opening … Their glance no longer met any resistance, but swept on and on through more of the same.’ We take the view for granted.

I remember visiting my grandmother in her final days in hospital. She was approaching her 100th year, and becoming increasingly lost as she wandered along the ever winding trail she’d taken through time. It was often difficult for her to pin down the order of some things, and yet her memory of other details seemed impeccable.

She recounted tales about her early life I had never heard before, but at times they seemed metaphorical -believable only in translation. She meant well, but I suspect that because she found dates elusive, she was trying to compensate with word pictures and comparisons to tell her story. And then, as in pre-Seleucid times, she would necessarily tack her story to past events.

When ‘Do you remember when…?’ didn’t work because the incident was well before my time, she would resort to things like ‘When your mother was a little girl…’ or ‘The year Joe and I got married…’. Then, with no need to worry about correction, she would recount her version of what had happened.

I found it a delightful, albeit opaque, excursion along her personal timeline, but one I could never even hope to verify without considerable effort. Her description of their journey across the country to the west coast on a ‘pioneer train’, as she called it, was a good example.

“Whenever the train would stop to pick up water for the engine,” she said, “it was our signal for the men to jump off the cars and search for firewood…”

I remember her eyes twinkling at the memory. “There were stoves in each car for cooking, so while the men were away, the women would rummage around in their trunks for the rice and beans we were told to pack for the trip.”

I remember being surprised at them having stoves on a moving train -I come from a railway family, and I’d never heard of such a thing. “When did you travel across the country, grandma?” I remember asking her, thinking maybe she meant the train would stop long enough for people to cook at the station, or wherever.

Her eyes looked inward for a while -whether to remember, or relive the experience, it was difficult to tell. “I remember seeing soldiers wandering around on some of the platforms, so maybe it was during the war…” And she shrugged.

That didn’t sound right. “But grandma, soldiers would have been going to the east coast -to Montreal or Halifax -not west to Vancouver…”

Her eyes cleared for a moment and she sent them to reprimand my face for its expression, all the while shaking her head at my inability to follow her story. “The soldiers didn’t get on our train. They were waiting for the next one, G… Try to pay attention, eh?” she added and then sighed like the woman I used to visit when I was young.

I shrugged, embarrassed for doubting her. “So, I suppose that was early in the First World War,” I said, mostly to myself, I suppose. I was trying to establish a time frame that made sense to me -a picture that I could pass on to my own children about her life.

Her eyes, though, were the real storytellers, and at times they seemed impatient as they watched from their increasingly bony redoubt. “I don’t remember the year, G, but you asked me to describe our journey across the country; the year’s not as important as the story, is it?”

Then, she smiled at the scolding and the grandmother of my childhood returned briefly. “You have to open a book to see what’s in it -the cover it’s wrapped in is irrelevant…”

I think my grandmother would have done just fine without the encumbrance of the irreversible tally of counted years.

Words, when there aren’t any

Here’s a thought: What are you thinking – right now? Can you describe what is happening inside your head at any moment you are asked? If you can, is it in a decipherable stream of words… or in something else? And, further, if it is something else, then how could you ever describe it in words?

When I consider such a subject, I find that I am reminded of the Buddhist koan that asks the disciple to imagine the sound of one hand clapping. It is an endless labyrinth, in which it is all too easy to think of Dante’s Divine Comedy and the words he describes inscribed on the entrance gate to Hell: Abandon all hope, ye who enter here.

But you see what is happening already: a flight of ideas, some of which can be described in words after the fact, and yet the journey -and indeed, the destination- are fluid, and wordless. Much like watching a Fellini film in a darkened movie theatre, and then emerging, confused, into a noontime street outside where different rules, different realities apply.

It happened again, didn’t it? Right now -the activity inside my head somewhere… I have just attempted to describe it in words, and yet there weren’t any while it was going on… But nonetheless it was happening. If we can remember them, dreams can be like that sometimes, can’t they? Wordless, and yet often transcribable; there is usually an emotional overlay, and yet when we emerge into daylight reality we struggle for descriptors if we are asked to remember. Is consciousness merely the translator, hired for the job?

I suspect these ruminations are not common in our everyday lives, which expect us to be able to explain something -everything?- when asked. It is, after all, the mandate of Science to subject the world and everything in it to scrutiny. But can we ever hope to describe our interior machinations in words, if the world in there is not primarily verbal? If journeys inside are not even always pictorial? Evocative? Is there even a language that does not depend on features we would characterize as consciously recognizable? Translatable? Can we, in other words, understand our minds? We all want to, don’t we…?

Despite the fascinating venue, even deciding where to start any such attempt eluded me. An article in BBC Future started me wondering again, though: http://www.bbc.com/future/story/20190819-what-your-inner-voice-says-about-you

Kelly Oakes, a freelance writer for the BBC, starts out by suggesting, ‘Interrogating what’s going on inside our own minds doesn’t seem like it should be a difficult task. But by trying to shine a light on those thoughts, we’re disturbing the very thing we want to measure in the first place.’ She goes on to describe the attempts of the psychologist Russell Hurlburt at the University of Nevada to get around the problem that questions about our inner thoughts prompt us to translate the inner activity into words -and hence to report more of it as inner speech than is actually the case. So he uses a technique he calls Descriptive Experience Sampling (DES), which involves carrying a device that beeps randomly, but only occasionally, throughout the day. The beep is the prompt to tune into whatever was in your mind just before it sounded. At the end of the day, you are debriefed and expected to describe ‘what form it took: words, pictures, an emotion, a physical sensation, or something else.’ And, not surprisingly, it varies.
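For what it’s worth, the sampling protocol itself is simple enough to caricature in a few lines of code -a minimal sketch only, assuming nothing about Hurlburt’s actual device beyond what the article states, and using the categories Oakes lists:

```python
# A minimal sketch of Descriptive Experience Sampling, as described above:
# beeps arrive at random, occasional moments; each beep is a cue to note
# the form of whatever was in your mind just before it. The timing values
# here are arbitrary placeholders.
import random
import time

CATEGORIES = ["words", "pictures", "an emotion",
              "a physical sensation", "something else"]

def sample_day(n_beeps=3, min_gap=2.0, max_gap=8.0):
    """Deliver a few randomly spaced prompts and collect self-reports."""
    reports = []
    for _ in range(n_beeps):
        time.sleep(random.uniform(min_gap, max_gap))  # unpredictable gap
        print("*beep* What was in your mind just before this beep?")
        for i, category in enumerate(CATEGORIES, 1):
            print(f"  {i}. {category}")
        choice = int(input("choose a number > "))
        reports.append(CATEGORIES[choice - 1])
    return reports  # the end-of-day 'debrief' would tally these forms

if __name__ == "__main__":
    print(sample_day())
```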

It’s not ideal, I suppose, but it does attempt to characterize something evanescent and amorphous and translate it into meaningful categories. But even if we were to concentrate on one form of activity -inner speech- there are still imponderables that have to be sorted out.

Is it an inner dialogue, or monologue? Indeed, how could it be a dialogue with only one brain involved? Or, for that matter, to whom would a monologue be addressed? Maybe Freud, with his division of the mind into Ego, Id, and Superego, was on to something…

But Oakes mentions a description, written by someone after recovering from a stroke, that is both existentially chilling and also helpful in understanding some of our inner processing: ‘After neuroanatomist Jill Bolte Taylor recovered from a stroke she suffered aged 37, she wrote in My Stroke of Insight [my italics] about what it was like to experience a “silent mind” without inner speech for several weeks: “What a daunting task it was to simply sit there in the centre of my silent mind…’ It wasn’t just the absence of words that was occurring; it was the absence of anything. Although I haven’t read the book, I assume that her mind was also empty of -what?- pictures, emotions, sensations -even identity. So maybe you either get everything -the melange- or nothing.

I find that a really sobering thought, for some reason: that in our brains -our minds- the way we process input from the outside -or activities happening on the inside- is more a jumble than a formula. I’m sure it doesn’t actually work that way, but just as it’s difficult to accurately render a poem, a metaphor, or a Weltanschauung into a different culture and language, there are similar problems in translating the inner language into the outer one we need to use.

In our constant quest to understand, and master the unknown, I sometimes wonder if we expect too much of our questions. But maybe that’s just my outer voice that speaks -the one that translates for the me that lives inside. How do I know if it’s even on the right path?

Perhaps it takes a poet to interpret what’s really going on. My mind drifts to the words of Kahlil Gibran: For thought is a bird of space, that in a cage of words may indeed unfold its wings but cannot fly.

The colour of truth is gray

It’s back again… Well, actually I suppose it never left. We still seem to be obsessed with the genderization of colours -as if it were an established biological given; as if it were as obvious as handedness, or as necessary as the assignment of gender at birth. ‘Pink is for girls and Blue is for boys’ -its self-evidence is once again being called into question; it seems an endless, pointless cycle.

There have been many attempts to link gendered colour preference to Weltanschauungen, genetic atavisms, and of course, persistent, market-savvy fashion manipulation (even I attempted a commentary in a previous essay: https://musingsonwomenshealth.com/2013/12/06/nature-versus-princess-nurture/) -but none seem adequate explanations for its persistence in our culture. Indeed, those studies that have sought to resolve the issue seem to have canvassed opinions from predominantly western cultures. And apart from the probable sampling bias, there are other factors that likely come into play, as suggested in a 2015 article in Frontiers in Psychology: ‘… red symbolizes good luck in China, Denmark, and Argentina, while it means bad luck in Germany, Nigeria, and Chad (Schmitt, 1995; Neal et al., 2002). White is a color of happiness and purity in the USA, Australia, and New Zealand, but symbolizes death in East Asia (Ricks, 1983; Neal et al., 2002). Green represents envy in the USA and Belgium, while in Malaysia it represents danger or disease (Ricks, 1983; Hupka et al., 1997).’ In other words, ‘this variation in the symbolism of color could lead to variation on color preference between cultures.’ We’d best choose our colours carefully.

But, I suppose what got me interested again in this perpetual, gendered debate was a rather lengthy and thoughtful article (extracted from her book Gender and Our Brains) in Aeon by Gina Rippon, an emerita professor of cognitive neuroimaging at Aston University in Birmingham, UK: https://aeon.co/essays/is-pink-for-girls-and-blue-for-boys-biology-or-conditioning

I have to say I was lured into reading the entire article when she quickly introduced me to the dreadful concept of ‘gender reveal’ parties. They apparently come in two varieties: in one, the pregnant woman for whom the party is held does not know the sex of her fetus, but the organizers do (the ultrasound result, by agreement, has been sent only to them) -it is guarded in a sealed envelope, as is the colour motif; in the second variety, the mother knows, and reveals it with all the appropriately coloured hoopla at the party.

And why, apart from the puerile attempts to colourize the event, do I find it so disagreeable? Well, as Rippon suggests, ‘20 weeks before little humans even arrive into it, their world is already tucking them firmly into a pink or a blue box. And… in some cases, different values are attached to the pinkness or blueness of the news.’

I also read further, in hopes that the author had some convincing insights as to whether the colour assigned to each gender was biologically or culturally determined. Unfortunately, the evidence she cites seems able to support either -or neither- side. One study, however, did make some progress in resolving the problem: ‘American psychologists Vanessa LoBue and Judy DeLoache tracked more closely just how early this preference emerges. Nearly 200 children, aged seven months to five years, were offered pairs of objects, one of which was always pink. The result was clear: up to the age of about two, neither boys nor girls showed any kind of pink preference. After that point, though, there was quite a dramatic change, with girls showing an above-chance enthusiasm for pink things, whereas boys were actively rejecting them. This became most marked from about three years old onwards.’ This suggests a cultural rather than biological explanation: ‘once children learn gender labels, their behaviour alters to fit in with the portfolio of clues about genders and their differences that they are gradually gathering.’

But why, then, the cultural preference? There was recently what may be an Urban Legend suggesting that, at one time, the gendered colour preferences were actually reversed -and ‘that any kind of gender-related colour-coding was established little more than 100 years ago, and seems to vary with fashion, or depending on whether you were reading The New York Times in 1893 [pink for a boy]… or the Los Angeles Times in the same year [pink for a girl].’

But, at least in our current milieu, the issue is not so much the colour as what it has come to suggest, consciously or not: ‘Pink has become a cultural signpost or signifier, a code for one particular brand: Being a Girl. The issue is that this code can also be a ‘gender segregation limiter’, channelling its target audience (girls) towards an extraordinarily limited and limiting package of expectations, and additionally excluding the non-target audience (boys).’

Of course, as Rippon points out, the fact that Pink may be a signifier of what is acceptable to females, allows it to bridge the gender gap: colour a toy truck pink, and it becomes acceptable for a girl to play with it. Unfortunately, the other side of the permission can be that ‘pinkification is all too often linked with a patronising undertow, where you can’t get females to engage with the thrills of engineering or science unless you can link them to looks or lipstick, ideally viewed through – literally – rose-tinted glasses.’ And viewed through prevailing stereotypes as well, I might add.

And yet, what determines what constitutes a ‘boy toy’? Is it what the child sees -or what their parents and grandparents saw in the world in which they grew up? In the world today, women drive trucks, operate diggers, become doctors and lawyers -not just secretaries, teachers, and nurses.

There is also a danger to pandering to ill-conceived remedies, of course. Take Rippon’s example of the STEM Barbie doll (STEM -for the older, more naïve readers like me- stands for Science, Technology, Engineering, and Mathematics -traditionally male-dominated fields, apparently): ‘efforts to level the playing field get swamped in the pink tide – Mattel has produced a STEM Barbie doll to stimulate girls’ interest in becoming scientists. And what is it that our Engineer Barbie can build? A pink washing machine, a pink rotating wardrobe, a pink jewellery carousel.’

Only in the penultimate and final paragraphs of the article does Rippon come close to answering the question on the reader’s lips from the beginning of her 4,500-word document: ‘It is clear that boys and girls play with different toys. But an additional question should be – why?… The answer to these questions could lie in our new understanding of how, from the moment of birth (if not before), our brains drive us to be social beings – to understand social scripts, social norms, social behaviour – to make sure we understand the groups we should belong to and how we can fit in… our brains are scouring our world for the rules of the social game – and if that world is full of powerful messages about gender, helpfully flagged by all sorts of gendered labelling and gendered colour-coding, our brains will pick up such messages and drive their owners to behave ‘appropriately.’’

Perhaps Rippon is correct, but I wonder if it’s more accurate to say that we have merely been stuck with gendered colours; I think there is room for hope: what the child sees when she looks around is changing. So I am instead inclined to the view of André Gide, the French author who won the Nobel Prize in Literature in 1947: ‘The colour of truth is gray,’ he wrote.

May we all be free to mix our own colours…

A Day at the Beach

I like to go to the shore from time to time, albeit a different shore from the one that encircles the island where I live. After a summer of relative peace, I long for the rough and unpredictable weather of the extreme west coast of Vancouver Island, with its waves that shoulder their way along rocks, or crash headlong into others that dare to stand their ground. I like to stare into wind that has picked up strength and salt along its thousand-mile fetch and tries to force me to turn away. The roar of breaking waves and a gale howling through the bending foreshore trees is an anodyne to my usual existence -and yet the turbulence is, like everything else out there, unpredictable.

Sometimes though, no matter the season, the sea is relatively calm. Sunlight strokes the surface of the waves and sparkles off the jagged surface of the massive, dark igneous rocks that bulge like warts from the sandy shore, or squat like castle walls to guard the line of anxious, hunchbacked trees that watch kyphotically from a safe distance inland.

On days like that, I scramble over the craggy rocks pretending it is effortless, very aware that I am no longer in my salad days, and that my balance, adequate perhaps for mainland forest roots and level trails, grows hesitant and disoriented with the constant motion of the nearby roiling tide. And the tidal pools embedded in their stone enclosures, each with miniature versions of marine life safe from pounding seas, distract me enough that I misjudge the distance to nearby footholds and teeter on the brink of trauma.

Of course, I seldom seek the safety of my hands, because that’s not how a younger me moved along the rocks; some things -like age, and pride- do not allow for compromise, so injury is always just a step away.

I imagine that’s why I remember a trip there last year so well. The day was flawless, and the waves merely licked the shore-chained rocks like puppies in a kennel greeting visitors. Usually damp and foreboding, the outcrop beckoned me from the storm-tossed log redoubt along the shore and I scrambled up the nearest crag without a thought. It had been a year since I had last attempted a run along them -a year further away from youth, a year to forget my waning judgement.

Of course I fell, but at least I made it further along them than last year before I scraped my knee and twisted my ankle. And, as I lay writhing on the sand, I decided I had fulfilled my pilgrimage for another year, and limped painfully back to the logs.

The beach was remarkably empty for a sunny November day; there were only a mother watching her child run along the sand, and an elderly couple strolling, shoes in hand, through sand wetted by the incoming tide. I decided to sit and read in the still, warm sun.

I suppose I must have dozed briefly, but I was awakened by a pair of eyes watching me from a nearby log.

“Is your ankle better?” asked a little voice when its eyes noticed mine were open.

I looked up to see the little boy, walking along a driftwood log towards me, his balance not at all in question. He couldn’t have been much more than 6 or 7 years old, and was dressed in baggy blue shorts, a dirty white tee shirt, and black running shoes. His rumpled auburn hair danced lightly in the onshore breeze.

“We saw you fall,” he explained when he was closer still, and jumped nimbly off the log to stand in front of me. “My mom wondered if you needed any help.”

I moved my ankle back and forth and it seemed to be all right. My knee was stiff, but I doubted it would be a problem, so I smiled and thanked him for asking.

He shrugged, but didn’t turn away. “You should use your hands when you climb those rocks, mister,” he said, after staring at me for a moment. “My dad taught me how to do it,” he added, in case I was puzzled how he knew.

I broadened my smile. “Does your family live here?” I asked, wondering if his mother had decided to come to the beach on her day off, or maybe just to while away a morning when her husband was at work.

He nodded, but I could see his face change. “My dad is in the hospital in Vancouver, though…”

I waited for him to continue, but the subject seemed a painful one for him, and I could see him fighting back tears.

“Want me to show you how to do it?” he added, but more to change the subject than anything else, I think.

I nodded and watched him run across the sand towards the rocks, all the while waving at his mother. But then, just before he started to climb, he turned and signalled for me to join him.

“You have to look for hand-holds to grab,” he said as soon as I was at his side. “That way, if your foot slips, you won’t fall…” He glanced at me, very much the teacher in a class. “There,” he said, spying a perfect place on the rough surface for his fingers. “Try it.”

I obliged and he followed behind, pointing out new hand-holds for me to grab. We only stopped when we reached a little tidal pool in a recess on the ocean side of the rock.

“Have a look in here,” he said, kneeling down beside the pool. “See those dark things on the edge of the rock?” He reached down and touched one of them. “Those are mussels, and those little ones are barnacles,” he added, pointing to a cluster of small warty creatures.

“And look in the little pool.” He reached down into the crystal water, and attempted to touch some little fish, clearly delighted when they were too fast for him to catch.

“Come on,” he shouted, as he darted excitedly away and down the sea side of the rock. “I’ll show you the sea stars stuck to the edge…”

But before I could reach him, his mother appeared behind us, climbing nimbly up towards us from the sand. “I hope Chess is not bothering you,” she said, smiling like a proud mother. “He gets so excited on the beach.” She stared out to sea for a moment before continuing. “His father is a marine biologist,” she explained in a voice barely audible above the soft murmur of the wind and the gentle lapping of the waves against the rocky ramparts. “He always gets excited down here, so I think it rubbed off on Chester…”

She settled her eyes on my face for a moment and then sent them out to search the sea again. “His dad would be proud to think his son was so much like him…” And her voice trailed off as if it had been whisked away by the ocean breeze.

I thought of them when I came across an article in The Conversation that pointed out the value of the beach as a learning experience for children: https://theconversation.com/a-day-at-the-beach-deep-learning-for-a-child-121785

It was written by Lotje Hives, a Research Collaborator, and Tara-Lynn Scheffel, an Associate Professor, both at the Schulich School of Education at Nipissing University. But, you know, I didn’t really have to read the essay -I didn’t need much convincing about the educational value a beach affords for parent and child alike.

I can still see Chester and his mother on that massive rock on Long Beach -the one scampering over the crags and gullies, poking his fingers into crevasses, and pointing excitedly at what he’d found, while the other, as still as an outcrop, watched him with undisguised pride.

A Predilection for Extinction?

There appears to be a lot of concern about extinctions nowadays -everything from spotted owls to indigenous languages peppers the list. Things around us that we took for granted seem to be disappearing before we even get to know or appreciate them. One has to wonder whether this is accompanied by furtive, yet anxious, glances in the mirror each morning.

Extinction. I wonder what it would be like -or can we even imagine it? If we could, then presumably we’re not extinct, of course, but our view of history is necessarily a short one. Oral traditions aside, we can only confidently access information from the onset of written accounts; many extinctions require a longer time-frame to detect… although, perhaps even that is changing as we become more aware of the disappearance of less threatening -less obvious- species. Given our obsessive proclivity for expanding our knowledge, someone somewhere is bound to have studied issues that have simply not occurred to the rest of us.

And yet, it’s one thing to comment on the absence of Neanderthals amongst us and tut-tut about their extinction, but quite another to fail to fully appreciate the profound changes in climate that are gradually occurring. Could the same fate that befell the Neanderthals be forecasting our own demise -a refashioning of the Cassandra myth for our self-declared Anthropocene?

It would not be the first time we failed to see our own noses, though, would it? For all our perceived sophistication, we often forget the ragged undergarments of hubris we hide beneath our freshly-laundered clothes.

Religion has long hinted at our ultimate extinction, of course -especially the Christian one with which those of us in the West are most familiar- with its talk of End-of-Days. But, if you think more closely about it, this is predicted to occur at the end of Time; extinction, on the other hand, occurs -as with, say, the dinosaurs- within Time. After all, we are able to talk about it, measure its extent, and determine how long ago it happened.

And yet, for most of us, I suspect, the idea of extinction of our own species is not inextricably linked to our own demise. Yes, each of us will cease to exist at some point, but our children will live on after us -and their children, too. And so on for a long, long time. It is enough to think that since we are here, our children will continue on when we are not. Our species somehow seems different from our own progeny…

Darwin, and the subsequent recognition of the evolutionary pressures that favour the more successfully adapted, no doubt planted some concerns, but an essay in Aeon by Thomas Moynihan (who completed his PhD at Oxford) set the issue of Extinction in a more historical context for me: https://aeon.co/essays/to-imagine-our-own-extinction-is-to-be-able-to-answer-for-it

Moynihan believes that only after the Enlightenment (generally attributed to the philosophical movement between the late 17th and early 19th centuries) did the idea of human extinction become an issue for consideration. ‘It was the philosopher Immanuel Kant who defined ‘Enlightenment’ itself as humanity’s assumption of self-responsibility. The history of the idea of human extinction is therefore also a history of enlightening. It concerns the modern loss of the ancient conviction that we live in a cosmos inherently imbued with value, and the connected realisation that our human values would not be natural realities independently of our continued championing and guardianship of them.’

But, one may well ask, why was there no serious consideration of human extinction before then? It would appear to be related to what the American historian of ideas, Arthur Lovejoy, has called the Principle of Plenitude that seemed to have been believed in the West since the time of Aristotle right up until the time of Leibniz (who died in 1716): things as they are, could be no other way. It would be meaningless to think of any species (even human) not continuing to exist, because they were meant to exist. Period. I am reminded -as I am meant to be- of Voltaire’s satirical novel Candide and the uncritical espousal of Leibniz’ belief that they were all living in ‘the best of all possible worlds’ -despite proof to the contrary.

I realize that in our current era, this idea seems difficult to accept, but Moynihan goes on to list several historical examples of the persistence of this type of thinking -including those that led ‘Thomas Jefferson to argue, in 1799, in the face of mounting anatomical evidence to the contrary, that specimens such as the newly unearthed Mammuthus or Megalonyx represented species still extant and populous throughout the unexplored regions of the Americas.’

Still, ‘A related issue obstructed thinking on human extinction. This was the conviction that the cosmos itself is imbued with value and justice. This assumption dates back to the roots of Western philosophy… Where ‘being’ is presumed inherently rational, reason cannot itself cease ‘to be’… So, human extinction could become meaningful (and thus a motivating target for enquiry and anticipation) only after value was fully ‘localised’ to the minds of value-mongering creatures.’ Us, in other words.

And, of course, the emerging findings in geology and archeology helped to increase our awareness of the transience of existence. So too, ‘the rise of demography [the statistical analysis of human populations] was a crucial factor in growing receptivity to our existential precariousness because demography cemented humanity’s awareness of itself as a biological species.’

Having set the stage, Moynihan’s argument is finally ready: ‘And so, given new awareness of the vicissitude of Earth history, of our precarious position within it as a biological species, and of our wider placement within a cosmic backdrop of roaming hazards, we were finally in a position to become receptive to the prospect of human extinction. Yet none of this could truly matter until ‘fact’ was fully separated from ‘value’. Only through full acceptance that the Universe is not itself inherently imbued with value could ‘human extinction’ gain the unique moral stakes that pick it out as a distinctive concept.’

And interestingly, it was Kant who, as he aged, became ‘increasingly preoccupied with the prospect of human extinction…  During an essay on futurology, or what he calls ‘predictive history’, Kant’s projections upon humanity’s perfectibility are interrupted by the plausibility of an ‘epoch of natural revolution which will push aside the human race… Kant himself characteristically defined enlightening as humanity’s undertaking of self-responsibility: and human rationality assumes culpability for itself only to the exact extent that it progressively spells out the stakes involved… This means that predicting increasingly severe threats is part and parcel of our progressive and historical assumption of accountability to ourselves.’

So, I don’t see this recognition of the possibility of human extinction as a necessarily bad thing. The more we consider the prospect of our disappearance, the more we become motivated to do something about it. Or, as Moynihan points out, ‘The story of the discovery of our species’ precariousness is also the story of humanity’s progressive undertaking of responsibility for itself. One is only responsible for oneself to the extent that one understands the risks one faces and is thereby motivated to mitigate against them.’ That’s what the Enlightenment was all about: humanity’s assumption of self-responsibility.

Maybe there is still hope for us… well, inshallah.

Illeism, or Sillyism?

Who would have thought that it might be good to talk about yourself in the third person? As if you weren’t you, but him? As if you weren’t actually there, and anyway, you didn’t want yourself to find out you were talking about him in case it seemed like, well, gossip? I mean, only royalty, or the personality-disordered, are able to talk like that without somebody phoning the police.

Illeism, it’s called -from the Latin ille, ‘he’- and it’s an ancient rhetorical technique that was used by various equally ancient personages -like, for example, Julius Caesar in the accounts he wrote about his exploits in various wars. It’s still in occasional use, apparently, but it stands out like a yellow McDonald’s arch unless you’re a member of a small cabal, sworn to secrecy.

Now that I mention it, I remember trying it once when I was very young, and blamed our cat for scattering cookies all over the floor; but I suppose that doesn’t count because my mother instantly realized I was actually using the third-person-singular in its grammatical sense, and sent me to my room for fibbing -without the cat. I didn’t even get a hug for my clever use of ancient rhetoric.

The episode kind of put me off third-personism until I read a little more about it in an article by David Robson, originally published in The British Psychological Society’s Research Digest and adapted for Aeon. Robson is a science journalist and a feature writer for the BBC: https://aeon.co/ideas/why-speaking-to-yourself-in-the-third-person-makes-you-wiser

It seems illeism can be an effective tool for self-reflection -and a better one than simple rumination, ‘the process of churning your concerns around in your head’. Indeed, ‘research has shown that people who are prone to rumination also often suffer from impaired decision making under pressure, and are at a substantially increased risk of depression…’

Robson was intrigued by the work of the psychologist Igor Grossmann at the University of Waterloo in Canada, writing in PsyArXiv, which suggests that third-person thinking ‘can temporarily improve decision making… [and] that it can also bring long-term benefits to thinking and emotional regulation’ -presumably because the change in perspective allows the user to bypass -or at least appreciate- previously held biases.

Grossmann, it seems, studies wisdom, and ‘[w]orking with Ethan Kross at the University of Michigan in the United States… found that people tend to be humbler, and readier to consider other perspectives, when they are asked to describe problems in the third person.’

Hmm…

He read the article with a fair soupçon of wariness. Might this not, he wondered, be academic legerdemain? It managed to fool Robson, but surely not he who reads without even a hatchet to grind. He, after all, is only a retired GYN, accustomed to addressing freshly delivered newborns and their unique anatomical appendages with the appropriate third-person labels. It’s hard to do otherwise with the unnamed. Indeed, it had always seemed situationally germane, given the circumstances. To turn that on himself, however, might be contextually confusing -as well as suspicious.

So, his days as an accoucheur long past, he decided there would be little harm in trying it out in front of a mirror before he unleashed his full third-person on an unsuspecting face in Starbucks.

It seemed… unusual at first: he knew the individual in the reflection as well as himself, and addressing him as ‘he’ felt rude -creepy, actually. He did manage to get around the vertigo by pretending he was actually talking to a younger version of his brother, though, and ignored the fact that his brother was moving his lips at the same time and apparently not listening.

“Your brother,” he started, “is wondering if he should use the third-person approach when he is anxious about whether or not to order the sausage and egg bagel or just a cookie for breakfast at Starbucks.” A sudden thought occurred to him: “he could pretend he was sent to order it for his friend who is currently guarding a table for the two of them.”

He stared at the image in the mirror and frowned, suddenly remembering the cat-and-cookie incident.

He was uncertain where this was going. Was he supposed to ask what he -that is ‘himself’- thought about the idea? And who, exactly, would be answering? The whole thing seemed like an endless hall of mirrors, an infinite regression of Matryoshka dolls.

“Okay,” he added, to assuage the guilt he assumed he would have fibbing to the barista, “He is just trying an experiment in non-gendered, non-directional conversation to solve a personal decisional paralysis. So, he is not trying to be weird or anything. He is actually just asking for your advice: would bagel or cookie be a better breakfast?”

Suddenly, an unexpected epiphany -maybe produced by the comparative ‘better’, but nonetheless apparent in the way in which the third person had phrased his question. Of course the bagel, with its protein-rich contents, was the ‘better’ breakfast! He was pretty sure that First-person-singular would never have seen that with such clarity -could never have seen it. Only by divorcing himself from his stomach, and mentioning it as if he were discussing a friend, did it become clear.

He stepped away from his brother at the mirror and smiled to himself. He’d discovered a way of distancing himself from himself long enough to see who he was from an outside perspective. Still, there was a nagging question that kept tugging at his sleeve: who was he when he asked those questions? And did he risk permanently closing the door to the person he used to be, or was it sort of like losing himself in a story and then swapping realities when he closed the book…? But, what if he preferred what he was reading to what he was living…?

Whoa -pretty heavy stuff, that.

You know, it’s harder coming back to First-person than closing the book after a while, and I found myself switching back and forth for the longest time. I really wonder how thoroughly Grossmann and Kross had thought this through. And I wonder if Robson got caught up in their web as well. Nobody mentioned anything about collateral damage -but of course, they wouldn’t, would they?

All I can say is be careful, readers -there might be a third-person Minotaur at the end of the labyrinth.

Who’s afraid of the Deodand?

Sometimes Philosophy hides in plain sight; interesting questions emerge, unbidden, when you least expect them. A few months ago I was waiting in line to order a coffee in a poorly lit shop when the woman behind me bumped into me as she struggled to read the menu posted on the wall over the counter.

“They don’t make it easy in here, do they?” she grumbled in a token apology.

I turned and smiled; I’d been having the same difficulty. “I should have brought a flashlight,” I added, trying to make light of it.

“Photons should be free,” she mumbled. “It’s not like we should have to carry them with us to get a coffee…” She looked at me with a mischievous grin creeping across her shadowed face. “I mean they don’t have to pay by the pound for them like bananas, or anything…”

I chuckled. “Photons have mass…? I didn’t realize they were Catholic.” It was a silly thing to say, I suppose, but it just popped out.

She actually laughed out loud at that point. “That’s very clever…” she said, and despite the dim light, I could feel her examining me with more interest.

But by then I found myself standing in front of the barista, so I ordered my coffee and headed for a table in the corner. A moment later, the woman from the lineup surfaced out of the darkness and sat down at the next table, beside me under a feeble wall light.

“Do you mind if I sit here?” she asked, not really waiting for my reply.

I smiled pleasantly in response, but in truth, I had been looking forward to the solitude usually offered by a dark coffee-shop corner.

“I’m sorry,” she said, immediately sensing my mood. “It’s just that you cheered me up in that horrid line, and I wanted to thank you…”

“It was a bit of a trial, wasn’t it?”

She nodded as she sipped her coffee. “Your comment on the mass of photons was hilarious -I’m a Science teacher at the Mary Magdalene Women’s College, so I enjoyed the reference to Catholics. My students will love it.”

I looked at her for a moment and shrugged. “I’m afraid it’s not original, but thank you.”

She chuckled at my honesty and picked up her coffee again. “I don’t recognize it,” she added after a moment’s reflection, still holding her steaming cup in front of her and staring at it like a lover.

“I think maybe it was one of my favourite comedians who said it…” But I wasn’t sure.

“Oh? And who might that be?” she asked, smiling in anticipation of a shared interest.

I thought about it for a moment. “I don’t know… Woody Allen, perhaps.”

She put her cup down on the table with a sudden bang and stared at me. Even in the dim light, I could feel her eyes boring into my face. “A horrid man!” she said through clenched teeth. “How could you ever think that anything he said was funny?” she muttered.

I was beginning to find her stare painful. I was aware of the controversies about Woody, of course, but I suppose I was able to separate them from his humour. And yet I have to admit that when the woman reminded me of his behaviour, I felt guilty -as if, by laughing at his jokes, I were tacitly approving of his other activities.

It’s a puzzling, yet fascinating, relationship we have with things used -or even owned- by people we consider evil: deodands. The word, once used in English Common Law, comes from the Medieval Latin Deo dandum -‘a thing to be given to God’. The idea was that if an object had caused a human death, it was forfeited to the Crown, and its value paid out as compensation to charity or to the family of the victim.

The question, though, is why we feel such revulsion for something that, through no fault of its own, was used in the commission of a crime. It could have been any knife, say, that was used in a stabbing, so why is this particular knife somehow different? Does the aura of what it did cling to it? Haunt it…? Would Woody Allen’s unrelated jokes -or, for that matter, Bill Cosby’s- still be funny if we didn’t know their sources?

I have to admit that humour is a lot more reflective of the personality that created it than, for example, an assassin’s gun or a criminal’s knife, but in isolation -i.e. divorced from context- is there really any difference? I certainly have no answer, but I was pleasantly surprised to find that the issue was not one I was puzzling over on my own. I came across an essay in Aeon by Paul Sagar, a lecturer in political theory at King’s College London, that looked at first as if it might be helpful: https://aeon.co/essays/why-do-we-allow-objects-to-become-tainted-by-chance-links

He wrote that ‘It is not uncommon to find that one’s enjoyment of something is irrevocably damaged if that thing turns out to be closely connected to somebody who has committed serious wrongs… knowledge of somebody – or something – having done a bad thing can deeply affect how we view the status of the thing itself.’ But why should that be?

Obviously, the answer is not easily obtained, and in a roundabout way he throws himself on the mercy of the 18th-century Scottish Enlightenment thinker Adam Smith, and his first book, The Theory of Moral Sentiments (1759). ‘Smith thought it undeniable that we assess the morality of actions not by their actual consequences, but by the intentions of the agent who brings them about.’ And yet, if a person were to throw a brick over a wall and hit someone accidentally, he would also be judged by the consequences even though he hadn’t intended to injure anyone. ‘Smith thought that our moral sentiments in such cases were ‘irregular’. Why do we respond so differently to consequences that have bad outcomes, when those outcomes are purely a matter of luck? Smith was confident that, although he could not explain why we are like this, on balance we should nonetheless be grateful that we are indeed rigged up this way.’

Have patience -this may slowly lead us to a sort of answer. First of all, ‘if, in practice, we really did go around judging everybody solely by their intentions, and not by the actual consequence of their actions, life would be unliveable. We would spend all our time prying into people’s secret motivations, fearing that others were prying into ours, and finding ourselves literally on trial for committing thought crimes.’ Only a god on Judgement Day should be allowed that privilege.

Also, for social reasons, it is good to be bothered by consequences rather than just by hidden intentions: you have to actually do good things to earn praise, not just intend to do them. And conversely, you have to actually do the bad things to earn the punishment. Uhmm… Well, okay, but that doesn’t really explain deodands, or anything.

At this point, Sagar kind of gives up on Smith’s attempts at moral philosophy and heads off on his own wandering trail to find an answer. ‘It is good that we feel aversion to artifacts (be they physical objects, films, records or whatever) associated with sex crimes, murders and other horrors – even if this is a matter of sheer luck or coincidence – because this fosters in us not only an aversion to those sorts of crimes, but an affirmation of the sanctity of the individuals who are the victims of them.’ Somehow that makes us less likely to act the same way? Whoa…

In the last paragraph, he essentially throws up his hands in frustration (or maybe those were my hands…) and as good as admits he doesn’t know why we would even think about deodands.

And me? How should I have responded to the woman in the coffee shop? Well, probably not by talking about Adam Smith -changing the subject might have been a good first step, though…

Too cute for words?

I love cute as much as anyone else, I suppose, although it’s not a quality I have possessed since early childhood, I’m afraid. Many things are cute, though: puppies, babies, toddlers… and they all seem to have certain attributes in common: large or prominent eyes, a larger-than-expected head, and so on -neoteny, it’s called. They all seem vulnerable, and deserving of protection. Needing cuddling.

And yet, apart from agreeing affably with doting parents describing their newborns, or singles obsessing over their new puppies, I didn’t give cuteness any extra thought, I have to admit. I mean, cute is, well, cute. There didn’t seem to be any need to dwell on the features, or deify their appearance. But an older article about cuteness in Aeon by Joel Frohlich, then a PhD student at UCLA, did pique my interest: https://aeon.co/ideas/how-the-cute-pikachu-is-a-chocolate-milkshake-for-the-brain

Perhaps it was the etymology of the word that initially intrigued me. ‘The word emerged as a shortened form of the word ‘acute’, originally meaning sharp, clever or shrewd. Schoolboys in the United States began using cute to mean pretty or attractive in the early 19th century. But cuteness also implies weakness. Mignon, the French word for cute or dainty, is the origin of the English word ‘minion’, a weak follower or underling… It was not until the 20th century that the Nobel laureates Konrad Lorenz and Niko Tinbergen described the ‘infant schema’ that humans find cute or endearing: round eyes, chubby cheeks, high eyebrows, a small chin and a high head-to-body-size ratio. These features serve an important evolutionary purpose by helping the brain recognise helpless infants who need our attention and affection for their survival.’

In other words, ‘cute’ was a mechanism to elicit protection and caring. Indeed it seems to be neurologically wired. MRI studies of adults presented with infant faces revealed that the ‘brain starts recognising faces as cute or infantile in less than a seventh of a second after the face is presented.’ These stimuli activate ‘the nucleus accumbens, a critical piece of neural machinery in the brain’s reward circuit. The nucleus accumbens contains neurons that release dopamine.’

But it can be tricked, so ‘baby-like features might exceed those of real infants, making the character a supernormal stimulus: unbearably adorable, but without the high maintenance of a real baby.’ So, is cuteness, in these circumstances, actually a Trojan Horse? An interesting thought.

Cuteness is situational -or at least, it should be. Cuteness out of context can be frightening, even grotesque. Think of the clown in Stephen King’s novel It, for example. Imitation, when recognized as such, seems out of place. Wrong. Cute is a beginning -an early stage of something that will eventually change as it grows up. Its transience is perhaps what makes it loveable. At that stage it is genderless, asexual, and powerless. It poses no threat -in fact, it solicits our indulgence. Think what would happen if it were a trick, however: our guard would be down, and we would be vulnerable.

But there’s a spectrum of cuteness; there must be, because it -or its homologues- seems to be appearing in situations that don’t remotely suggest innocence, youth, or vulnerability. Think of the proliferation of cutesy Emojis. As Simon May, a visiting professor of philosophy at King’s College London, points out in an essay (also in Aeon: https://aeon.co/ideas/why-the-power-of-cute-is-colonising-our-world ) ‘This faintly menacing subversion of boundaries – between the fragile and the resilient, the reassuring and the unsettling, the innocent and the knowing – when presented in cute’s frivolous, teasing idiom, is central to its immense popularity… Cute is above all a teasing expression of the unclarity, uncertainty, uncanniness and the continuous flux or ‘becoming’ that our era detects at the heart of all existence, living and nonliving. In the ever-changing styles and objects that exemplify it, it is nothing if not transient, and it lacks any claim to lasting significance. Plus it exploits the way that indeterminacy, when pressed beyond a certain point, becomes menacing – which is a reality that cute is able to render beguiling precisely because it does so trivially, charmingly, unmenacingly. Cute expresses an intuition that life has no firm foundations, no enduring, stable ‘being’.’

Perhaps that’s what makes non-contextual cute so inappropriate, so menacing. ‘This ‘unpindownability’, as we might call it, that pervades cute – the erosion of borders between what used to be seen as distinct or discontinuous realms, such as childhood and adulthood – is also reflected in the blurred gender of many cute objects… Moreover, as a sensibility, cute is incompatible with the modern cult of sincerity and authenticity, which dates from the 18th century and assumes that each of us has an ‘inner’ authentic self – or at least a set of beliefs, feelings, drives and tastes that uniquely identifies us, and that we can clearly grasp and know to be truthfully expressed. Cute has nothing to do with showing inwardness. In its more uncanny forms, at least, it steps entirely aside from our prevailing faith that we can know – and control – when we are being sincere and authentic.’

Whoa -that takes cute down a road I don’t care to travel: it’s an unnecessary detour away from the destination I had intended. Away from attraction, and into the Maelstrom of distraction. Contrived cute is uncute, actually. It is a tiger in a kitten’s mask.

I think there is a depth to beauty which -as ephemeral and idiosyncratic as it may seem- is missing from cute. Both hint at admirability, yet in different ways: one describes the surface, the other the contents -much as if the value of a wallet could be captured by its outward appearance alone.

For some reason cute reminds me of a Latin phrase from Virgil’s Aeneid -although in a different context, to be sure: Timeo Danaos et dona ferentes -‘I fear the Greeks, even bearing gifts’- a reference to the prolonged Greek war against Troy and their attempt to gain entrance to the city with their gift of the infamous Trojan Horse. Cute is a first impression, not a conclusion. I suppose it’s as good a place to start as any, but I wonder if, in the end, it’s much ado about nothing…