Remember Plato’s Cave allegory? In his Republic he describes a scenario in which some people have spent their lives chained in a cave so that they can only see the wall in front of them. A fire behind them casts shadows on the wall, and they have no way of knowing that these are only shadows. For these people, the shadows are their reality. There seem to be many versions of what happens next, but the variation I prefer is that one of the people escapes from the cave and discovers the sun outside for the first time; he realizes that what he had assumed was real -the shadows- was just that: merely shadows cast by the actual objects themselves.

Sometimes we become so accustomed to seeing things a certain way that we find it difficult, if not impossible, to believe there could be an alternative view. Or we assume that, even if there were another version, it must be wrong. But can we be sure that we are evaluating the alternative fairly and without prejudice? Can we assess it with sufficient objectivity to allow a critical analysis? Or are we unavoidably trapped in the prevailing contemporary Weltanschauung? It’s an interesting question to be sure, and one that begs for examination, if only to explore the world behind the mirror.

I stumbled upon an essay by Julie Reshe, a psychoanalyst and philosopher, who, after recovering from a bout of depression, began to wonder whether depression itself was actually the baseline state, and one that allowed a more accurate view of how things actually are.

I have to admit that I had to temporarily divorce myself from my usually optimistic worldview to be able to fathom her argument, and I found it rather uncomfortable. But sometimes it’s instructive -even valuable- to look under a rock.

As a philosopher, Reshe felt the need to examine both sides of an argument critically, putting aside preconceptions and biases. ‘Depressogenic thoughts are unpleasant and even unbearable,’ she writes, ‘but this doesn’t necessarily mean that they are distorted representations of reality. What if reality truly sucks and, while depressed, we lose the very illusions that help us to not realise this? What if, to the contrary, positive thinking represents a biased grasp of reality? … What if it was a collapse of illusions – the collapse of unrealistic thinking – and the glimpse of a reality that actually caused my anxiety? What if, when depressed, we actually perceive reality more accurately? What if both my need to be happy and the demand of psychotherapy to heal depression are based on the same illusion?’ In other words, what if I am actually not a nice person? What if there’s a reason people don’t like me?

Whoa! Suppose this is not a counterfactual? After all, other philosophers have wondered about this. Arthur Schopenhauer, for example, whose deeply pessimistic writings about the lack of meaning and purpose of existence I have never understood; or the equally famous German philosopher Martin Heidegger, who felt that anxiety was the basic mode of human existence. ‘We mostly live inauthentically in our everyday lives, where we are immersed in everyday tasks, troubles and worries, so that our awareness of the futility and meaninglessness of our existence is silenced by everyday noise… But the authentic life is disclosed only in anxiety.’ My god, where’s my knife?

And even Freud wasn’t optimistic about the outcome of psychotherapeutic treatment, and was ‘reluctant to promise happiness as a result.’ He felt that ‘psychoanalysis could transform hysterical misery into “common unhappiness”’. Great…

And then, of course, there’s the philosophical tradition called ‘depressive realism’ which holds that ‘reality is always more transparent through a depressed person’s lens.’ And just to add more poison to the cake, ‘the Australian social psychologist Joseph Forgas and colleagues showed that sadness reinforces critical thinking: it helps people reduce judgmental bias, improve attention, increase perseverance, and generally promotes a more skeptical, detailed and attentive thinking style.’

All of which is to say, I suppose, ‘The evolutionary function of depression is to develop analytical thinking mechanisms and to assist in solving complex mental problems. Depressive rumination helps us to concentrate and solve the problems we are ruminating about… depressive rumination is a problem-solving mechanism that draws attention to and promotes analysis of certain problems.’

I have presented these deeply troubling ideas merely as an exercise in perspective, I hasten to add. Sometimes it is valuable to try to grasp the Umwelt of the stranger on the other side of the door before we open it. We can only help if we are willing to understand why they are there.

Part of the solution may lie in puzzling out Reshe’s counterfactuals. She seems to want to assign meaning to her former depression, as have many of the other people she mentions, to buttress her point. She also seems to feel that there was a time when that point of view might have seemed more mainstream. That nowadays there is just too much expectation of happiness -unrealistic expectation, by and large- which presents a problem in and of itself. If we constantly expect to achieve a goal but, like a prairie horizon, it remains temptingly close and yet just out of reach, we are doomed to frustration -a self-fulfilling prophecy.

And yet, it seems to me that resigning oneself to unhappiness, or its cousin depression, doesn’t represent a paradigm shift, but rather a rationalization that it must be the default position -and therefore must serve some useful evolutionary purpose; a position benighted and stigmatized because it advertises the owner’s failure to achieve the goal that others seem to have realized.

I’m certainly not disparaging depression, but neither am I willing to accept that it serves any evolutionary strategy except that of a temporary, albeit necessary, harbour until the storm passes. And to suggest that positive emotions -happiness, contentment, joy, or pleasure, to name just a few- however short-lived, are illusory and unrealistic expectations is merely to excuse and perhaps justify an approach to depression that isn’t working. A trail that only wanders further into the woods.

I’m certainly cognizant of the fact that there is a spectrum of depressions, from ennui to psychosis, and that some are more refractory to resolution than others, but that very fact argues against leaving them to strengthen, lest they progress to an even more untenable and dangerous state.

Perhaps we need to comfort ourselves with the ever-changing, ever-contrasting nature of emotions, and not expect of them a permanence they likely never evolved to achieve.

Goldilocks, it seems to me, realized something rather profound when she chose the baby bear’s porridge after finding papa bear’s porridge too hot, and mamma bear’s too cold: it was just right…

Imagination bodies forth the forms of things unknown

The poet’s eye, in fine frenzy rolling,
Doth glance from heaven to earth, from earth to heaven;
And as imagination bodies forth
The forms of things unknown, the poet’s pen
Turns them to shapes and gives to airy nothing
A local habitation and a name.
Such tricks hath strong imagination.
Theseus, in Shakespeare’s A Midsummer Night’s Dream

Shakespeare had a keen appreciation of the value of imagination, as that quote from A Midsummer Night’s Dream suggests. But what is imagination? Is it a luxury -a chance evolutionary exaptation of some otherwise less essential neural circuit- or a purpose-made system to analyse novel features in the environment? A mechanism for evaluating counterfactuals -the what-ifs?

A quirkier question, perhaps, would be to ask if it might predate language itself -be the framework, the scaffolding upon which words and thoughts are draped. Or is that merely another chicken versus egg conundrum drummed up by an overactive imagination?

I suppose what I’m really asking is why it exists at all. Does poetry or its ilk serve an evolutionary purpose? Do dreams? Does one’s Muse…? All interesting questions for sure, but perhaps the wrong ones with which to begin the quest to understand.

I doubt that there is a specific gene for imagination; it seems to me it may be far more global than could be encompassed by one set of genetic instructions. In what we would consider proto-humans it may have involved more primitive components: such non-linguistic features as emotion -fear, elation, confusion- but also encompassed bodily responses to external stimuli: a moving tickle in that interregnum between sleep and wakefulness might have been interpreted as spider and generated a muscular reaction whether or not there was an actual creature crawling on the skin.

Imagination, in other words, may not be an all-or-nothing feature unique to Homo sapiens. It may be a series of adaptations to the exigencies of life that eventuated in what we would currently recognize as our human creativity.

I have to say, it’s interesting what you can find if you keep your mind, as well as your eyes, open. I wasn’t actively searching for an essay on imagination -although perhaps on some level, I was… At any rate, on whatever level, I happened upon an essay by Stephen T Asma, a professor of philosophy at Columbia College in Chicago, and his approach fascinated me.

‘Imagination is intrinsic to our inner lives. You could even say that it makes up a ‘second universe’ inside our heads. We invent animals and events that don’t exist, we rerun history with alternative outcomes, we envision social and moral utopias, we revel in fantasy art, and we meditate both on what we could have been and on what we might become… We should think of the imagination as an archaeologist might think about a rich dig site, with layers of capacities, overlaid with one another. It emerges slowly over vast stretches of time, a punctuated equilibrium process that builds upon our shared animal inheritance.’

Interestingly, many archaeologists seem to conflate the emergence of imagination with the appearance of artistic endeavours –‘premised on the relatively late appearance of cave art in the Upper Paleolithic period (c38,000 years ago)… [and] that imagination evolves late, after language, and the cave paintings are a sign of modern minds at work.’

Asma sees the sequence rather differently, however: ‘Thinking and communicating are vastly improved by language, it is true. But ‘thinking with imagery’ and even ‘thinking with the body’ must have preceded language by hundreds of thousands of years. It is part of our mammalian inheritance to read, store and retrieve emotionally coded representations of the world, and we do this via conditioned associations, not propositional coding.’

Further, Asma supposes that ‘Animals appear to use images (visual, auditory, olfactory memories) to navigate novel territories and problems. For early humans, a kind of cognitive gap opened up between stimulus and response – a gap that created the possibility of having multiple responses to a perception, rather than one immediate response. This gap was crucial for the imagination: it created an inner space in our minds. The next step was that early human brains began to generate information, rather than merely record and process it – we began to create representations of things that never were but might be.’ I love his idea of a ‘cognitive gap’. It imagines (sorry) a cognitive area where something novel could be developed and improved over time.

I’m not sure that I totally understand all of the evidence he cites to bolster his contention, though -for example, the view of philosopher Mark Johnson at the University of Oregon that there are ‘deep embodied metaphorical structures within language itself, and meaning is rooted in the body (not the head).’ Although, ‘Rather than being based in words, meaning stems from the actions associated with a perception or image. Even when seemingly neutral lexical terms are processed by our brains, we find a deeper simulation system of images.’ But at any rate, Asma summarizes his own thoughts more concisely, I think: ‘The imagination, then, is a layer of mind above purely behaviourist stimulus-and-response, but below linguistic metaphors and propositional meaning.’

In other words, you don’t need to have language for imagination. But the discipline of biosemantics tries to envisage how it might have developed in other animals. ‘[Primates] have a kind of task grammar for doing complex series of actions, such as processing inedible plants into edible food. Gorillas, for example, eat stinging nettles only after an elaborate harvesting and leave-folding [sic] sequence, otherwise their mouths will be lacerated by the many barbs. This is a level of problem-solving that seeks smarter moves (and ‘banks’ successes and failures) between the body and the environment. This kind of motor sequencing might be the first level of improvisational and imaginative grammar. Images and behaviour sequences could be rearranged in the mind via the task grammar, long before language emerged. Only much later did we start thinking with linguistic symbols. While increasingly abstract symbols – such as words – intensified the decoupling of representations and simulations from immediate experience, they created and carried meaning by triggering ancient embodied systems (such as emotions) in the storytellers and story audiences.’ So, as a result, ‘The imaginative musician, dancer, athlete or engineer is drawing directly on the prelinguistic reservoir of meaning.’

Imagination has been lauded as a generator of progress, and derided as idle speculation throughout our tumultuous history, but there’s no denying its power: ‘The imagination – whether pictorial or later linguistic – is especially good at emotional communication, and this might have evolved because emotional information drives action and shapes adaptive behaviour. We have to remember that the imagination itself started as an adaptation in a hostile world, among social primates, so perhaps it is not surprising that a good storyteller, painter or singer can manipulate my internal second universe by triggering counterfactual images and events in my mind that carry an intense emotional charge.’

Without imagination, we cannot hope to appreciate the Shakespeare who also wrote, in his play Richard III:

Princes have but their titles for their glories,
An outward honor for an inward toil,
And, for unfelt imaginations,
They often feel a world of restless cares.

Personally, I cannot even imagine a world where imagination doesn’t play such a crucial role… Or can I…?


Does the love of heaven make one heavenly?

Why do I find myself so attracted to articles about religion? I am not an adherent -religion does not stick to me- nor am I tempted to take the famous wager of the 17th-century philosopher Pascal: dare to live life as if God exists, because you’ve got nothing to lose if He doesn’t, and everything to gain if He does.

Perhaps I’m intrigued by the etymological roots that underpin the word: Religare (Latin, meaning ‘to bind’) is likely the original tuber of the word. But is that it -does it bind me? Constrain me? I’d like to think not, and yet… and yet…

Even many diehard atheists concede that religion has a use, if only for social cohesion -Voltaire was probably thinking along those lines when he wrote: ‘If God did not exist, it would be necessary to invent him’. Or Karl Marx: ‘Religion is the sigh of the oppressed creature, the heart of a heartless world, and the soul of soulless conditions. It is the opium of the people’.

And then, of course, there’s Sigmund Freud, an avowed Jewish atheist, who, for most of his life, thought that God was a collective neurosis. But in his later years, when he was dying of cancer of the jaw, he suggested (amongst other, much more controversial things) in his last completed book, Moses and Monotheism, that monotheistic religions (like Judaism) think of God as invisible. This necessitates incorporating Him into the mind to be able to process the concept, and hence likely improves our ability for abstract thinking. It’s a bit of a stretch perhaps, but an intriguing idea nonetheless.

But, no matter what its adherents may think about the value of the timeless truths revealed in their particular version, or its longevity as proof of concept, religions change over time. They evolve -or failing that, just disappear, dissolve like salt in a glass of water. Consider how many varieties and sects have arisen just from Christianity alone. Division is rife; nothing is eternal; Weltanschauungen are as complicated as the spelling.

So then, why do religions keep reappearing in different clothes, different colours? Alain de Botton, a contemporary British philosopher, argues in his book Religion for Atheists, that religions recognize that their members are children in need of guidance and solace. Although certainly an uncomfortable opinion, there is a ring of truth to his contention. Parents, as much as their children, enjoy ceremonies, games, and rituals and tend to imbue them with special significance that is missing in the secular world. And draping otherwise pragmatic needs in holy cloth creates the impression that they were divinely inspired; ethics and morality thus clothed, rather than being perceived as arbitrary, wear a spiritual imprimatur. A disguise: the Emperor’s New Clothes.

Perhaps, then, there’s more to religion than a set of Holy caveats whose source is impossible to verify. But is it really just something in loco parentis? A stand-in? I found an interesting treatment of this in a BBC Future article written by Sumit Paul-Choudhury, a freelance writer and former editor-in-chief of the New Scientist. He was addressing the possible future of religion.

‘We take it for granted that religions are born, grow and die – but we are also oddly blind to that reality. When someone tries to start a new religion, it is often dismissed as a cult. When we recognise a faith, we treat its teachings and traditions as timeless and sacrosanct. And when a religion dies, it becomes a myth, and its claim to sacred truth expires… Even today’s dominant religions have continually evolved throughout history.’

And yet, what is it that allows some to continue, and others to disappear despite the Universal Truth that each is sure it possesses? ‘“Historically, what makes religions rise or fall is political support,”’ says Linda Woodhead, professor of sociology of religion at the University of Lancaster in the UK, ‘“and all religions are transient unless they get imperial support.”’ Even the much-vaunted staying power of Christianity required the Roman emperor Constantine and his Edict of Milan in 313 CE to grant it legal status, and again the emperor Theodosius and his Edict of Thessalonica in 380 CE to make Christianity the official religion of the Roman Empire.

The first university I attended was originally founded by the Baptists and, at least for my freshman year, there was a mandatory religious studies course. Thankfully, I was able to take a comparative religion course, but in retrospect, I would have liked an even broader treatment of world religions. I realize now that I was quite naïve in those times; immigration had not yet exposed many of us to the foreign customs and ideas with which we are now, by and large, quite familiar. So the very notion of polytheism, for example, where there could be a god dedicated to health, say, and maybe another that spent its time dealing with the weather, was not only fascinating, but also compelling. I mean, why not? The Catholics have their saints who intercede for particular causes, so apart from the intervener numbers, what makes Hinduism, with its touted 33 million gods, such an outlier in the West (wherever that is)?

It seems to me that most of us have wondered about the meaning of Life at one time or other, and most of us have reflected on what happens after death. The answers we come up with are fairly well correlated with those of our parents, and the predominant Zeitgeist in which we swim. But as the current changes, each of us is swept up in one eddy or another, yet we usually manage to convince ourselves it’s all for the best. And perhaps it is.

Who’s to say that there needs to be a winner? Religions fragment over time and so do societies; their beliefs, however sacrosanct in the moment, evolve and are in turn sacralized. And yet our wonder of it all remains. Who are we really, and why are we here? What happens when we die? These questions never go away, and likely never will. So maybe, just maybe, we will always need a guide. Maybe we will always need a Charon to row us across the River Styx…

Is the thing translated still the thing?

When I was a student at university, translated Japanese haiku poetry was all the rage; it seemed to capture the Zeitgeist of the generation to which I had been assigned. I was swept along with others by the simple nature images, but -much like the sonnet, I suppose- I failed to realize how highly structured it was. In fact, I can’t really remember all of its complex requirements -but maybe that’s the beauty of its seeming simplicity in English. However, the word’s contracted origin -from the Japanese haikai no ku, meaning ‘light verse’- belies the difficulty of translating the poetry into a foreign language while still conserving its structure, its meaning, and also its beauty.

It seems to me that the ability to preserve these things in translation, while still engaging the interest of the reader, requires no less genius than that of the original creator. In poetry as well as in narrative, the authors’ ideas, images, plots and metaphors are an intrinsic part of the whole, yet sometimes the concepts are difficult to convey to a foreign culture. So, what to do with them to maintain the thrust of the original while not altering the charm? And when does the translation actually become a different work of art and suggest the need for a different attribution?

Given my longstanding love for poetry and literature, I have often wondered whether I could truly understand the poetry of, say, Rumi, who wrote mainly in Persian but also in Turkish, Greek and Arabic; or maybe the more contemporary Rilke’s German-language poetry. I speak none of those languages, nor do I pretend to understand the Umwelten of their times, so how do I know what it is that attracts me, apart from the beauty of their translations? Is it merely the universality of their themes, and perhaps my mistaken interpretations of the images and metaphors, or is there something else that seeps through, thanks to -or perhaps in spite of- the necessary rewording?

Since those heady days in university, I have read many attempts to explain, and even to justify, various methods of translation, and they all seem to adhere to one or both of the only two available procedures: paraphrasing, or metaphrasing (translating word for word). And no matter which is used, I have to wonder if the product is always the poor cousin of the original.

In one of the seminars from university, I remember learning that as far back as St. Augustine and St. Jerome, there was disagreement about how to translate the Bible -Augustine favoured metaphrasis, whereas Jerome felt that there was ‘a grace of something well said’. Jerome’s appealing phrase has stayed with me all these years. Evidently, the problem of translation goes even further back in history though, and yet the best method of preserving the author’s intention is still no closer to being resolved.

In my abiding hope for answers, I still continue to search. One such more recent forage led me to an essay in the online publication Aeon by the American translator and author Mark Polizzotti (who, among other honours, is a Chevalier of the Ordre des Arts et des Lettres, the recipient of a 2016 American Academy of Arts and Letters award for literature, and a publisher and editor-in-chief at the Metropolitan Museum of Art in New York).

He writes, ‘as the need for global communication grows by proverbial leaps, the efficiency of machine-based translation starts looking rather attractive. In this regard, a ‘good’ translation might simply be one that conveys the requisite bytes of information in the shortest time. But translation is about more than data transmission, and its success is not always easy to quantify. This becomes particularly true in the literary sphere: concerned with delivering artistic effect more than facts simple and straight.’

So, ‘We might think that the very indeterminacy of literary translation would earn it more leeway, or more acceptance.’ And yet, ‘many sophisticated readers view translation as no more than a stopgap… it would be disingenuous to claim that the reader of a translation is truly experiencing, in all its aspects, the foreign-language work it represents, or that in reading any text transposed from one language into another there isn’t a degree of difference (which is not the same as loss). The heart of the matter lies in whether we conceive of a translation as a practical outcome, with qualities of its own that complement or even enhance the original, or as an unattainable ideal, whose best chance for relative legitimacy is to trace that original as closely as possible.’

Polizzotti goes on to catalogue various approaches and views of translation and then suggests what I, at least, would consider the best way to think of translation and the obvious need it attempts to fulfil: ‘If instead we take translators as artists in their own right, in partnership with (rather than servitude to) their source authors; if we think of translation as a dynamic process, a privileged form of reading that can illuminate the original and transfer its energy into a new context, then the act of representing a literary work in another language and culture becomes something altogether more meaningful. It provides a new way of looking at a text, and through that text, a world. In the best of cases, it allows for the emergence of an entirely new literary work, at once dependent on and independent of the one that prompted it – a work that neither subserviently follows the original nor competes with it, but rather that adds something of worth and of its own to the sum total of global literatures. This does not mean taking undue liberties with the original; rather, it means honouring that original by marshalling all of one’s talent and all of one’s inventiveness to render it felicitously in another language.

‘To present a work as aptly as possible, to recreate it in all its beauty and ugliness, grandeur and pettiness, takes sensitivity, empathy, flexibility, knowledge, attention, caring and tact. And, perhaps most of all, it takes respect for one’s own work, the belief that one’s translation is worth judging on its own merits (or flaws), and that, if done well, it can stand shoulder to shoulder with the original that inspired it.’

Polizzotti has nailed it. There’s a spirit inherent in good translation -one that inspires a confidence that the original intent of the author is appropriately, and befittingly displayed.

One of the reasons I was drawn to Polizzotti’s essay was a recent book I read (in translation): The Elegance of the Hedgehog, by Muriel Barbery and translated from the original French by Alison Anderson. So seamless was the narrative, and so apt were the translated dialogues, I have to confess that I had difficulty believing the book had not originally been written in English. And as it stands, it is one of the most rewarding books I have experienced in years. I’m sure that Ms Barbery is well content with Anderson’s translation, not the least because their efforts earned it accolades from various critics, including a posting on the New York Times best-seller list.

It seems to me that one could not expect more from a translator than that.

To hold, as it were, a mirror up to Nature

Who am I? No, really -where do I end and something else begin? That’s not really as silly a question as it may first appear. Consider, for example, my need to remember something -an address, say. One method is to internalize it -encode it somehow in my brain, I suppose- but another, no less effective, is to write it down. So, if I choose the latter, is my pen (or keyboard, for that matter) now in some sense a functional part of me? Is it an extension of my brain? The result is the same: the address is available whenever I need it.

Ever since my university days, when I discovered the writings of the philosopher Alan Watts, I have been intrigued by his view of boundaries, and whether to consider them as things designed to separate, or to join. Skin was one example I remember he discussed -does it define my limits, and enclose the me inside, or is it actually my link with the outside world? I hadn’t really thought much about it until then, but in the intervening years it has remained an idea that continues to fascinate me.

Clearly Watts was not alone in his interest about what constitutes an individual, nor in his speculations about the meaning of whatever identities individuals think they possess by virtue of their boundaries. There was an insightful article in Aeon by Derek Skillings, a biologist and philosopher of science at the University of Pennsylvania entitled ‘Life is not easily bounded’:

‘Most of the time the living world appears to us as manageable chunks,’ he writes, ‘We know if we have one dog or two.’ Why then, is ‘the meaning of individuality … one of the oldest and most vexing problems in biology? …  Different accounts of individuality pick out different boundaries, like an overlapping Venn diagram drawn on top of a network of biotic interactions. This isn’t because of uncertainty or a lack of information; rather, the living world just exists in such a way that we need more than one account of individuality to understand it.’ But really, ‘the problem of individuality is (ironically enough) actually composed of two problems: identity and individuation. The problem of identity asks: ‘What does it mean for a thing to remain the same thing if it changes over time?’ The problem of individuation asks: ‘How do we tell things apart?’ Identity is fundamentally about the nature of sameness and continuity; individuation is about differences and breaks.’ So, ‘To pick something out in the world you need to know both what makes it one thing, and also what makes it different than other things – identity and individuation, sameness and difference.’

What about a forest -surely it is a crowd of individual trees? Well, one way of differentiating amongst individuals is to think about growth -a tree that is growing (in other words, continuing as more of the same)- and contrasting it with producing something new: as in reproduction. And yet even here there is a difficulty: determining the individual identities of any trees that also grew from the original roots -for example, from a ‘nurse’ tree lying on the ground with shoots and saplings sprouting from it.

But it’s not only plants that confuse the issue. If reproduction -i.e. producing something new- counts as a different entity, then what about entities like bacteria? ‘These organisms tend to reproduce [albeit] by asexual division, dividing in half to produce two clones… and, failing mutation and sub-population differentiation, an entire population of bacteria would be considered a single individual’ -whatever ‘individual’ might therefore mean.

And what about us, then? Surely we have boundaries, surely we are individuals created as unique entities by means of sexual reproduction. Surely we have identities. And yet, what of those other entities we carry with us through our lives -entities that not only act as symbionts, but are also integrated so thoroughly into our metabolism that they contribute to such intimate functions as our immune systems, our weight and health, and even function as precursors for our neurotransmitters and hence our moods? I refer, of course, to the micro-organisms inhabiting our bowels -our microbiome. Clearly ‘other’ and yet essential to the functioning person I regard as ‘me’.

And yet, our gut bacteria are mostly acquired from the environment -including the bacteria colonizing our mother’s vagina and probably her breast milk- and so are not evolutionarily prescribed, nor thereby hereditarily transmitted. So, am I merely a we -picking up friends along the way? Well, consider mitochondria -the powerhouses of our cells. They were once free-living bacteria that adapted so well inside our cells that they, too, are integral to cell functioning but have lost the ability to survive separately; they are transmitted from generation to generation. So they are me, right…?

Again I have to ask: just who is me? Or is the question essentially meaningless put like that? Given that I am a multitude, and more like a city than a single house, shouldn’t the question be who are we? The fact that all of us, at least in Western cultures, consider ourselves to be distinct entities -separate individuals with unique identities- makes me wonder about our evolutionary history.

Was there a time when we didn’t make the distinctions we do nowadays? A time when we thought of ourselves more as members of a group than as individuals? When, perhaps sensing that we were constantly interacting with things outside and inside us, the boundaries were less important? Is that how animals would say they see the world if they were able to tell us?

Does our very ability to communicate with each other with more sophistication, create the critical difference? Is that what created hubris? In Greek tragedy, remember, hubris -excess pride and self-confidence- led inexorably to Nemesis, retributive justice. Were poets in that faraway time, trying to tell people something they had forgotten? Is that what this is all about?

I wonder if Shakespeare, as about so many things, was also aware of our ignorance: ‘pride hath no other glass to show itself but pride, for supple knees feed arrogance and are the proud man’s fees.’

Plus ça change, eh?