Oh coward Conscience, how dost thou afflict me!

Every once in a while, buried in all the fake news and confirmation biases, I find something that rings true. Something that transcends the routine moral admonishments that usually find me wanting. It’s not that I don’t aspire to morality, or whatever, it’s just that I’m sometimes not very good at it: I forget things from time to time, and yell at other people, or the dog.

And anyway, being good only exists in contrast to something else, so it’s important to keep other stuff around so you know where you sit. I do not know any moral saints, you understand -they must run with a different crowd- but then again, I’m not sure we’d get along as friends. The American philosopher Susan Wolf defines these ‘saints’ as people whose every action is as morally good and worthy as possible, and she writes in her essay ‘Moral Saints’: ‘I don’t know whether there are any moral saints. But if there are, I am glad that neither I nor those about whom I care most are among them.’

It turns out that her essay is the subject of another essay, this time by Daniel Callcut in Aeon rather than The Journal of Philosophy, so I felt less of a stranger in a strange land in reading it: https://aeon.co/essays/why-it-is-better-not-to-aim-at-being-morally-perfect

Wolf seems to be suggesting that the moral saint would likely never give you a break if you weren’t constantly altruistic, so I enjoyed Callcut’s paraphrase: ‘The problem with extreme altruism, as Oscar Wilde is reported to have said about socialism, is that it takes up too many evenings.’

‘If you don’t have enough time for friendship or fun, or works of art or wildlife, then you are missing out on what Wolf calls the non-moral part of life. Wolf does not mean to suggest that non-moral equals immoral: just because something doesn’t have anything to do with morality (playing tennis, for instance) it does not follow that it is therefore morally bad. The point is that morality is, intuitively, focused on issues such as treating others equally, and on trying to relieve suffering. And good things these are: but so is holidaying with a friend, or exploring the Alaskan rain forest, or enjoying a curry. Moral goodness is just one aspect of the good things in life and, if you live as if the moral aspect is the only aspect that matters, then you are likely to be very impoverished in terms of the non-moral goods in your life.’

I am taken with Callcut’s summary of Aristotelian ethics: ‘Aristotle, most notably, held views of ethics that encouraged neither selfishness nor selflessness: the best kind of life would be concerned with others, and involve pleasurable engagement with others’ lives, but it would not require impartial dedication to the needs of strangers. Ethics is more concerned with the question of how to be a good friend than it is the question of how to save the world. And, as with good friendships, ethics is both good for you and good for other people. At the heart of Aristotle’s ethics is the ultimate win-win. The best ethical life simply is the most desirable life, and the fulfilment of our social nature consists in living in mutual happiness with others.’

However, some of Callcut’s arguments -and especially Wolf’s- go deeper than what most of us non-philosophers would likely accept, let alone understand. What I took from the essay was that ‘a line has to be drawn between what is morally required of you and that which is morally praiseworthy but not morally required… Morality doesn’t require you to have no other interests besides morality.’ And ‘The fact that you are not morally perfect doesn’t make you a bad person.’ Most of us walk the middle ground.

I remember one cold day a few years ago when I was in town -fairly close to Christmas, I think. The street was full of shoppers, charity Santas, and on every block, Salvation Army volunteers with their little pots slowly filling with money. The contrast with the street people among them was jarring -especially the old man and his dog sitting on a busy corner. Everybody passed the two of them without a glance. He had no cup, and he looked too cold to leave his hat down on the sidewalk for donations. Perhaps in his sixties or seventies, he was unshaven, dressed in a torn, mud-stained grey-brown overcoat, and huddled close to his dog, his hands searching for warmth in his pockets while his feet sought refuge under the animal. A rumpled blue toque, obviously too small for him, was pulled over his head, but it wasn’t large enough to cover his ears, and he was visibly shivering.

I had just bought a few presents and could feel some change jangling in one pocket, and my conscience in another, so I decided to empty both of them in the Salvation Army pot nearby.

I glanced at the man and his dog as I walked over to the pot.

The volunteer saw me looking at the man. “I’ve tried to convince him to come to the shelter,” he explained, before I had a chance to empty my pocket. “But he won’t…”

“Can he bring his dog with him?” I asked. The dog was obviously important to him.

The volunteer nodded. “But, only to our shelter on the other side of the city, unfortunately -too far away from where he lives in the park.” He smiled at the old man. “He says he’s waiting for some friends, although I haven’t seen them in a couple of days…” We both stared at the old man. “He just got out of hospital -actually, I think he probably discharged himself. He was worried about the dog.”

“But look at him,” I said. “He’s cold now; he’s going to freeze tonight!”

The volunteer sighed. “He refuses to go back to the hospital, so I offered to drive him and the dog to the shelter in my van, but…” He shrugged.

“Let me talk to him,” I said and walked over to where he sat. I started to extend my hand to greet him, but the dog growled protectively.

“Is there anything I can help you with, sir?” I asked, being careful not to approach any closer.

A pair of sad, rheumy eyes slowly emerged from under the curtain of his lids and stared vacantly at me. His thin, chapped lips twitched and I wondered if he was talking softly to me. His skin was sallow and bruised; he didn’t look at all well. “Dog,” he seemed to be saying, although I couldn’t be sure. But even the effort of whispering seemed too much for him.

“Dog…?” I said, to help him out.

His head slowly nodded. “Dog,” formed on his trembling lips, and then his eyes receded again into his skull, and his head fell forward onto his chest.

I hurried back to the Salvation Army man. “He’s really sick,” I said, and dialled 911 for the ambulance. But before they arrived, the dog began to whine and lick the man’s face.

When the paramedics arrived, it was too late -the old man had died- but the dog wouldn’t let them take the body away, so they had to call the SPCA to restrain it.

“What will happen to the dog?” I asked as the official bundled it into his van.

The SPCA man shrugged. “Usually put them down, eh?”

“But…” I struggled for words. “Can’t it be kept for adoption?”

“Too many of ‘em,” he explained, his eyes sad. “And this one’s a bit old…”

I stared at him with disbelief. “But… But the last thing he said to me was… well, he wanted to make sure the dog was taken care of, I think.”

The driver was obviously a kind man. “You can donate some money for a kennel…” he said, and produced a card with a phone number. “Who knows, maybe someone will want an older dog… It’s Christmas, eh?”

I nodded and took the card. The man smiled as if relieved. “I hate it when we have to put ’em down,” he said, closing the door to the van. “Thank you, sir,” he added, getting into his seat behind the wheel. “I’ll tell them you’re going to phone.”

The Salvation Army man walked over to me as the ambulance drove away and the crowd that had gathered thinned. “You know,” he said, smiling at me and shaking my hand, “that was the most meaningful donation I’ve seen this Christmas…”

And I think it was the most meaningful gift I’ve ever given…


To wear an undeserved dignity

Lately, I’ve been worried about dignity -not my own, you understand, although I’m sure that could use a little work. I’m more concerned that what I assumed was an inherent quality possessed -if not always demonstrated- by us all, may not be as innate as I thought. An essay in the online magazine Aeon by Remy Debes, an associate professor of philosophy at the University of Memphis, entitled Dignity is Delicate, helped me to understand some of the issues: https://aeon.co/essays/human-dignity-is-an-ideal-with-remarkably-shallow-roots?

The word itself is derived from the Latin dignus, meaning ‘worthy’, but as with most words, it can be used in different ways, each with slightly different meanings. ‘Dignity has three broad meanings. There is an historically old sense of poise or gravitas that we still associate with refined manners, and expect of those with high social rank… Much more common is the family of meanings associated with self-esteem and integrity, which is what we tend to mean when we talk of a person’s own ‘sense of dignity’… Third, there is the more abstract but no less widespread meaning of human dignity as an inherent or unearned worth or status, which all human beings share equally.’

This latter aspect, which Debes calls the ‘moralized connotation’, ‘is the kind of worth everyone has, and has equally, just because we are persons.’ As Immanuel Kant wrote in his Groundwork for the Metaphysics of Morals in 1785: ‘whatever is above all price, and therefore admits of no equivalent, has a dignity.’ He also argued that we have a duty to treat other humans ‘always at the same time as an end, never merely as a means’ -with respect, in other words. Unfortunately, ‘the Groundwork wasn’t professionally translated until 1836. And even that translation wasn’t easily available until a revised edition appeared in 1869.’

So, in terms of its moral and ethical aspects, the concept of dignity is a recent one. ‘[U]ntil at least 1850, the English term ‘dignity’ had no currency as meaning anything like the ‘unearned worth or status of humans’, and very little such currency well into the 1900s. When the Universal Declaration of Human Rights (1948) used the terminology of human dignity to justify itself, this turned out to be a conceptual watershed.’

What am I missing here? As Debes illustrates in his essay, ‘the idea of human dignity is beset by hypocrisy. After all, our Western ethos evolved from, and with, the most violent oppression. For 200 years, we’ve breathed in the heady aspirations of liberty and justice for all, but somehow breathed out genocide, slavery, eugenics, colonisation, segregation, mass incarceration, racism, sexism, classism and, in short, blood, rape, misery and murder.’ So what is going on? Debes thinks ‘The primary way we have dealt with this shock and the hypocrisy it marks has been to tell ourselves a story – a story of progress… the story’s common hook is the way it moves the ‘real’ hypocrisy into the past: ‘Our forebears made a terrible mistake trumpeting ideas such as equality and human dignity, while simultaneously practising slavery, keeping the vote from women, and so on. But today we recognise this hypocrisy, and, though it might not be extinct, we are worlds away from the errors of the past.’

Of course, a still different way of explaining our abysmal lack of dignity is to suggest, not that we are getting better, but that we are getting worse -that there was a time when it was not so, and we need to try going back to that ‘better time’.

Uhmm, they can’t both be correct. Perhaps, like me, you have noticed the presence of gerunds (verbs functioning as nouns, with -ing endings), or implied gerunds, in these descriptions: from the Latin gerundum -‘that which is to be carried on’. In other words, that which is not yet completed, or is still in the process of happening -and hopefully will continue- into the indefinite future. As Debes writes, ‘facing up to the hypocrisy in our Western ethos requires resisting the temptation to scapegoat both the past and the present. We must not divorce ourselves from the fact that the present is possible only because of our past, the one we helped to create. Likewise, the existential question isn’t, are we really who we say we are? The question is, have we ever been?’

But why is everything so viscid? Humans have always been seen as valuable -the concept evolving through time. ‘The chorus in Sophocles’ Antigone, for example, praises man as the most ‘wondrous’ thing on Earth, a prodigy cutting through the natural world the way a sailor cuts through the ‘perilous’, ‘surging seas’ that threaten to engulf him.’ The word ‘dignity’ was not used, but it seems to me Sophocles was on the right track, although perhaps not in the sense that mankind’s value was incommensurable and couldn’t be exchanged for other kinds of worth, as Kant later concluded.

Or how about Aristotle: ‘Dignity does not consist in possessing honours, but in deserving them.’

Even Shakespeare’s Hector says to Troilus, about whether Helen of Troy is worth going to war for: ‘Value dwells not in particular will; it holds his estimate and dignity as well wherein ’tis precious of itself as in the prizer.’ In other words, value -dignity- isn’t merely in the eye of the prizer; it is also intrinsic.

So what has kept us from believing in that ‘inherent or unearned worth or status, which all human beings share equally’? Admittedly we are children of our era, and very few of us can escape from the Weltanschauung of our time, let alone the political and social ethos in which we find ourselves embedded. There is much that conspires to homogenize and temper our views, I suspect.

Maybe it was as simple as a fear of the unknown, and of disruption, that kept the lid on the pot -better the devil we know than the devil we don’t. Moral dignity -ethical dignity- did not accord with the status quo: the keeping of slaves, a class system that offered wealth and status to the powerful, women trapped in a never-ending cycle of pregnancies and children and so essentially biologically enslaved… A clock will not work unless all of the parts are in their proper places.

So many levels: civilization -well, at least culture– has always been a matryoshka doll –‘a riddle wrapped in a mystery inside an enigma’, as Winston Churchill so famously said about Russia. But maybe, concealed inside the innermost layer, the sanctum sanctorum of the inner doll, a flower lives, not a minotaur.

We can only hope.


Should We Bell the Cat?

What should you do at a dinner party if the hostess, say, declares that she believes something that you know to be inaccurate -or worse, that you consider repellent? Abhorrent? Should you wait to see how others respond, or take it upon yourself to attempt to correct her belief? If it is merely a divergence of opinion, it might be considered a doctrinaire exercise -a Catholic-versus-Protestant type of skirmish- and likely unwinnable.

But, suppose it is something about which you are recognized to have particular credentials so your response would not be considered to be merely an opinion, but rather a statement of fact? Should that alter your decision as to whether or not to take issue with her pronouncement? Would your silence imply agreement -acquiescence to a view that you know to be not only wrong, but offensive? And would your failure to contradict her, signal something about her opinion to the others at the table? If it is an ethical issue, should you attempt to teach?

It is a difficult situation to be sure, and one that is no doubt difficult to isolate from context and the responsibilities incumbent upon a guest. Still, what should you do if, uncorrected, she persists in promulgating her belief? Should you leave the table, try to change the topic, or merely smile and wait to see if she is able to sway those around you to her views?

I can’t say that the situation has arisen all that often for me, to tell the truth -we tend to choose our friends, and they theirs, on the basis of shared values- but what risks might inhere in whatever course of action I chose? I happened upon an insightful and intriguing article in Aeon, an online magazine, that touched on that very subject: https://aeon.co/ideas/should-you-shield-yourself-from-others-abhorrent-beliefs It was written by John Schwenkler, an associate professor of philosophy at Florida State University.

He starts by pointing out that ‘Many of our choices have the potential to change how we think about the world. Often the choices taken are for some kind of betterment: to teach us something, to increase understanding or to improve ways of thinking. What happens, though, when a choice promises to alter our cognitive perspective in ways that we regard as a loss rather than a gain?’

And further, ‘When we consider how a certain choice would alter our knowledge, understanding or ways of thinking, we do this according to the cognitive perspective that we have right now. This means that it’s according to our current cognitive perspective that we determine whether a choice will result in an improvement or impairment of that very perspective. And this way of proceeding seems to privilege our present perspective in ways that are dogmatic or closed-minded: we might miss the chance to improve our cognitive situation simply because, by our current lights, that improvement appears as a loss. Yet it seems irresponsible to do away entirely with this sort of cognitive caution… And is it right to trust your current cognitive perspective as you work out an answer to those questions? (If not, what other perspective are you going to trust instead?)’

You can see the dilemma: is the choice or opinion you hold based on knowledge, or simply belief? And here he employs a sort of thought experiment: ‘This dilemma is escapable, but only by abandoning an appealing assumption about the sort of grasp we have on the reasons for which we act. Imagine someone who believes that her local grocery store is open for business today, so she goes to buy some milk. But the store isn’t open after all… It makes sense for this person to go to the store, but she doesn’t have as good a reason to go there as she would if she didn’t just think, but rather knew, that the store were open. If that were the case, she’d be able to go to the store because it is open, and not merely because she thinks it is.’

But suppose you fear that, by allowing an argument -an opinion, say- to be aired frequently or uncontested, you might eventually be convinced by it? It’s how propaganda endeavours to convince, after all. What then? Do you withdraw, or smile and smile and see a villain (to paraphrase Hamlet)? ‘If this is on the right track, then the crucial difference between the dogmatic or closed-minded person and the person who exercises appropriate cognitive caution might be that the second sort of person knows, while the first merely believes, that the choice she decides against is one that would be harmful to her cognitive perspective. The person who knows that a choice will harm her perspective can decide against it simply because it will do so, while the person who merely believes this can make this choice only because that is what she thinks.’

This is philosophical equivocation -and, to his credit, Schwenkler seems aware of it.

As much as I enjoy the verbiage and logical progression of his argument, I have to admit to being a little disappointed in the concluding paragraph of the article, which seems to admit that he has painted himself into a corner: ‘What’s still troubling is that the person who acts non-knowingly and from a mere belief might still believe that she knows the thing in question: that climate change is a hoax, say, or that the Earth is less than 10,000 years old. In that case, she’ll believe that her choices are grounded in the facts themselves, and not just in her beliefs about them. She will act for a worse sort of reason than the sort of reason she takes herself to have. And what could assure us, when we exercise cognitive caution in order to avoid what we take to be a potential impairment of our understanding or a loss of our grip on the facts, that we aren’t in that situation as well?’

But I think what this teaches me is the value of critical analysis, not only of statements but also of context. First of all, obviously, to be aware of the validity of whatever argument is being aired, but then to decide whether an attempted refutation would contribute anything to the situation, or merely further entrench the individual in their beliefs, if only to save face. And it’s important, as well, to step back for a moment and assess the real reason I am choosing to disagree. Is it self-aggrandizement, dominance, or an incontestable conviction -incontestable based on knowledge, or on unprovable belief…?

I realize this is pretty confusing stuff -and, although profound, not overly enlightening- but sometimes we need to re-examine who it is we have come to be. In the words of the poet Kahlil Gibran: ‘The soul walks not upon a line, neither does it grow like a reed. The soul unfolds itself like a lotus of countless petals.’

He’s mad that trusts in the tameness of a wolf

I am an obstetrician, not a neuropsychiatrist, but I feel a definite uneasiness with the idea of messing with the brain -especially from the inside. Talking at it, sure -maybe even tweaking it with medications- but it seems to me there is something… sacrosanct about its boundaries. Something akin to black-boxhood -or pregnant-wombhood, if you will- where we have a knowledge of the inputs and outputs, but the internal mechanisms remain too complex and interdependent to be interrogated other than from without.

I suppose I have a fear of the unintended consequences that seem to dog science like afternoon shadows -a glut of caution born of reading about well-meaning enthusiasms in my own field. And yet, although I do not even pretend to such arcane knowledge as might tempt me to meddle with the innards of a clock let alone the complexities of a head, I do watch from afar, albeit through a glass darkly. And I am troubled.

My concern bubbled to the surface with a November 2017 article from Nature that I stumbled upon: https://www.nature.com/news/ai-controlled-brain-implants-for-mood-disorders-tested-in-people-1.23031 I recognize that the report is dated, and merely scratches the surface, but it hinted at things to come. The involvement of DARPA (the Defense Advanced Research Projects Agency of the U.S. military) did little to calm my fears, either –they had apparently ‘begun preliminary trials of ‘closed-loop’ brain implants that use algorithms to detect patterns associated with mood disorders. These devices can shock the brain back to a healthy state without input from a physician.’

‘The general approach —using a brain implant to deliver electric pulses that alter neural activity— is known as deep-brain stimulation. It is used to treat movement disorders such as Parkinson’s disease, but has been less successful when tested against mood disorders… The scientists behind the DARPA-funded projects say that their work might succeed where earlier attempts failed, because they have designed their brain implants specifically to treat mental illness — and to switch on only when needed.’

And how could the device know when to switch on and off? How could it even recognize the complex neural activity in mental illnesses? Well, apparently, ‘electrical engineer Omid Sani of the University of Southern California in Los Angeles — who is working with Chang’s team [a neuroscientist at UCSF] — showed the first map of how mood is encoded in the brain over time. He and his colleagues worked with six people with epilepsy who had implanted electrodes, tracking their brain activity and moods in detail over the course of one to three weeks. By comparing the two types of information, the researchers could create an algorithm to ‘decode’ that person’s changing moods from their brain activity. Some broad patterns emerged, particularly in brain areas that have previously been associated with mood.’

Perhaps this might be the time to wonder whether ‘broad patterns’ can adequately capture the complexities of any mood, let alone a dysphoric one. Another group, this time in Boston, is taking a slightly different approach: ‘Rather than detecting a particular mood or mental illness, they want to map the brain activity associated with behaviours that are present in multiple disorders — such as difficulties with concentration and empathy.’ If anything, that sounds even broader -even less likely to hit the neural bullseye. But, I know, I know -it’s early yet. The work is just beginning… And yet, if there ever was a methodology more susceptible to causing collateral damage and unintended, unforeseeable consequences, or one more likely to fall afoul of a hospital’s ethics committee, I can’t think of it.

For example, ‘One challenge with stimulating areas of the brain associated with mood … is the possibility of overcorrecting emotions to create extreme happiness that overwhelms all other feelings. Other ethical considerations arise from the fact that the algorithms used in closed-loop stimulation can tell the researchers about the person’s mood, beyond what may be visible from behaviour or facial expressions. While researchers won’t be able to read people’s minds, “we will have access to activity that encodes their feelings,” says Alik Widge, a neuroengineer and psychiatrist at Harvard University in Cambridge, Massachusetts, and engineering director of the MGH [Massachusetts General Hospital] team.’ Great! I assume they’ve read Orwell for some tips.

It’s one of the great conundrums of Science, though, isn’t it? When one stretches societal orthodoxy and approaches the edge of the reigning ethical paradigm, how should one proceed? I don’t believe that the mere assumption that someone else, somewhere else, sometime else, will undoubtedly forge ahead with the same knowledge is a sufficient reason to proceed. It seems to me that in the current climate of public scientific skepticism, it would be best to tread carefully. Science succeeds best when it is funded, fêted, and understood, not obscured by clouds of suspicion or plagued by doubt -not to mention mistrust. Just look at how genetically modified foods are regarded in many countries. Or vaccinations. Or climate change…

Of course, the rewards of successful and innovative procedures are great, but so is the damage if they fail. A promise broken is more noteworthy, more disconcerting, than a promise never made.

Time for a thought experiment. Suppose I’ve advertised myself as an expert in computer hardware and you come to me with a particularly vexing problem that nobody else seemed able to fix. You tell me there is a semi-autobiographical novel about your life that you’d been writing in your spare time for years, stored somewhere inside a laptop that you can no longer access. Nothing was backed up elsewhere -you never thought it would be necessary- and now, of course, it’s too late for that. The computer won’t even work, and you’re desperate.

I have a cursory look at the model and the year, and assure you that I know enough about the mechanisms in the computer to get it working again.

So you come back in a couple of weeks to pick it up. “Were you able to fix it?” is the first thing you say when you come in the door.

I smile and nod my head slowly. Sagely. “It was tougher than I thought,” I say. “But I was finally able to get it running again.”

“Yes, but does it work? What about the contents? What about my novel…?”

I try to keep my expression neutral, as befits an expert talking to someone who knows nothing about how complex the circuitry in a computer can be. “Well,” I explain, “it was really damaged, you know. I don’t know what you did to it… but a lot of it was beyond repair.”

“But…”

“But I managed to salvage quite a bit of the function. The word processor works now –you can continue writing your novel.”

You look at me with a puzzled expression. “I thought you said you could fix it -the area where my novel is…”

I smile and hand you back the computer. “I did fix it. You can write again -just like before.”

“All that information… all those stories… They’re gone?”

I nod pleasantly, the smile on my face broadening. “But without my work you wouldn’t have had them either, remember. I’ve given you the opportunity to write some more.”

“But… But it was stored in there,” you say, pointing at the laptop in front of you on the counter. “How do I know who I am now?”

“You’re the person who has been given the chance to start again.”

Sometimes that’s enough, I suppose…


Noceboes? How Cute.

I have always been fascinated by neologisms -new words that substitute for more commonly used ones. They can be clever, rude, or just plain silly, but often their point is to get noticed -or perhaps to draw attention to their inventors. There was a time -before social media, at least- when we used to applaud people like Shakespeare for turning nouns into verbs, or adjectives into more active participants. And it was a time when elders, if they forgot the word for which they were searching, would simply come up with a new one. Of course, they still do, but it is often lost in the ebb and flow of media utterage (pardon the neologism). I have written about this before in another context, but the subject continues to intrigue me: https://musingsonretirementblog.com/2016/05/22/what-did-you-say/

This time, however, I was more interested in the clever contrast of nocebo with its counterpart, placebo, reported in an article in the CBC health news: http://www.cbc.ca/news/health/nocebo-effect-greater-expensive-drugs-1.4358664

I suspect we’re all acquainted with the placebo effect: the ability of a harmless, inactive substitute to have a beneficial effect if it is believed to be the treatment. Again, I have covered this in a previous essay: https://musingsonwomenshealth.com/2016/04/20/rethinking-placebos/

But there seems to be no end to our ability to fool ourselves, and the concept of ‘noceboes’ is yet another illustration. ‘The opposite of the placebo effect — perceived improvement when no active medicine is given — nocebo is the perception of negative side-effects from a benign “medication” in a blind trial.’

The article reports on a study published in the journal Science, which suggests that ‘Expensive medicines can seem to create worse side-effects than cheaper alternatives.’ This particular investigation ‘focused on the pain perceptions of patients who were treated with creams they believed had anti-itch properties but actually contained no active ingredients.’ And, as one could no doubt predict from the title of their publication -Nocebo effects can make you feel pain- ‘Though the scientists ensured the temperatures applied to the two creams were consistent, those who received the expensive cream rated their pain as nearly twice as intense as those who received the cheaper cream. The study suggested that patient expectations related to price can trigger brain responses resulting in higher perception of pain, said Alexandra Tinnermann, a co-author of the study and neuroscientist at University Medical Center Hamburg-Eppendorf.’

‘Tinnermann’s team used a functional MRI scanner to identify areas along the spinal cord that were activated during participants’ experience of side-effects. They also pinpointed two brain regions that were more stimulated among participants who believed they received the expensive drug.’

The ethics of using placebos -tricks- is one thing, but what about the ethics of choosing between several recognized and approved medications where the only difference is the price? On the surface, it might seem to be a saving for all concerned: if the data hold up in further studies, why prescribe new and probably higher-cost medications if they’re more likely to have side effects?

Unfortunately, the very ethics that require medical practitioners to discuss the possible side effects of any medication are also known to influence the experience. Knowledgeable patients report more side effects than those who, for whatever reasons, are blissfully unaware of what to expect. Perhaps it’s more a question of which of Pandora’s boxes the practitioner should open -a zero-sum game either way.

I was sitting on a park bench in the shade of a tree one sunny summer day, trying to finish a book a friend had loaned to me. It wasn’t very interesting, despite her recommendations, and although I was determined to discover what she had liked about it, I found my mind looking for excuses to put it down. My ears soon found a distraction. Two little boys had abandoned their bikes on the grass nearby and were busily scaling the leafy tower of what I had assumed was my own special shade tree. Hidden by several bouquets of leaves fluttering gently in the afternoon breeze, I suppose they thought they were invisible in their private redoubt.

“Thought you were sick, Jay,” one of them said, as if he wondered if he was in danger of catching whatever Jay had.

“I’m on antibiotics, Jordan,” the other answered defensively.

They were silent for a few moments, although I could hear them grunting as they climbed ever higher.

“My mother doesn’t believe in them,” a voice, probably Jordan’s, said very firmly.

“Why?” was Jay’s surprised reply.

Jordan was silent for a moment, clearly trying to remember. “She says they can make you sick.” Even from my position far beneath them, I could almost feel Jordan’s italics.

“How?”

Another, grunt-filled silence as they switched branches. “She says they can make your skin go red…” He hesitated for a minute while he combed through his memory. “And give you… make you wanna throw up.”

Jay seemed to hesitate before answering. “Well, I’m not red or anything, but… uhmm, sometimes I do feel a little like throwing up, I guess. Anyway I have to go to the toilet a lot, so it’s hard to tell.”

“She says that’s what happens with them too, Jay. It’s why I just take vitamin pills.”

“My mother says those don’t usually work… People only think they do.” Jay felt a need to defend his antibiotics. “Mom says we imagine things sometimes…”

“Like what?” Jordan sounded sceptical. For a while, I could only hear the leaves rustling, so I wasn’t sure if they’d climbed too high for me to hear them.

“Like… Like that vitamins can keep us from getting sick.” I could hear one of them shifting somewhere above as a branch cracked. “And she says some people won’t take antibiotics because they’re afraid of, uhmm…” He hesitated, while he searched for the right word. “…the side-stuff.”

“You mean ‘side-defects’?” Jordan pronounced the words carefully, condescension fairly dripping from each syllable.

“Yeah. She says if they hear about the defects, they figure they’ll get them.”

“Well, my mom says doctors have to tell people about them, though, Jay… It’s the law,” he added smugly.

Jay seemed to think about it for a second. “Then no wonder, eh?” he said, as if he finally understood.

“No wonder what?”

“No wonder people get ‘em,” Jay answered, triumphantly.

From the mouths of babes.

How Ethical is Ethical Compromise?

What to do with a minefield? Once it is there, is it sufficient to avoid it while we investigate and map it -mark it off as terra incognita- or must we act immediately to attempt to remove all the mines, even if we do not fully understand their distribution or destructive capabilities? Even if we may miss some, and our initial enthusiasm comes to be deemed naïve?

This is an admittedly inadequate metaphor when applied to ethics, to be sure, but in many ways is illustrative of the pitfalls of being too quick to judge; or, alternatively, of assuming there is only one approach –and that the one chosen is perforce the correct and appropriate one.

Unfortunately, majority opinion often quietly assumes the mantle of indisputability in a culture, no matter its importance or suitability elsewhere. And even to question the legitimacy of an assertion is to question the legitimacy of the social norms to which the culture’s members unconsciously adhere. The questioner may not necessarily intend to negate those norms, or overtly dispute them, but by subjecting them to investigation, may seem to disparage their sanctity.

It is difficult to step out of our societally condoned patterns of thought and our long-hallowed mores; it is troubling to observe customs that seem to violate what to us are ingrained standards of morality. It is difficult indeed, to accept that we may not be in sole possession of moral rectitude –that there may be alternate truths, alternate moralities, even alternate equally valid perspectives.

I raise this with regard to the increasing awareness and condemnation of female genital mutilation (FGM). To be clear from the start, I do not condone FGM nor feel that it should be perpetuated; indeed I have to confess that I have great difficulty viewing it as anything other than a culturally-imposed abomination -misogyny writ large. I was, however, intrigued by a paper published in the Journal of Medical Ethics that sought to assess the issue in a more critically constructive fashion than I have seen before: http://jme.bmj.com/content/early/2016/02/21/medethics-2014-102375.full  It is really a very thoughtful and enlightening paper and I would strongly suggest that it is worth reading –if only to learn more about FGM and its cultural significance stripped of any pre-loaded societal baggage.

I was impressed by several things, in fact. The authors sought to classify the procedures in terms of degree, medical issues, the ethical underpinnings of FGM, cultural sensitivity, and whether or not any form of the procedure would constitute gender discrimination or a violation of human rights. I will let the reader judge how thoroughly these fields were covered, but caution against our self-imposed wall of confirmation bias, which often precludes a dispassionate consideration of views that don’t fully accord with what we ‘know’ to be the correct ones… http://www.cbc.ca/news/health/female-genital-mutilation-legal-1.3459379 -this brief article from the CBC is perhaps a more assimilable and balanced -albeit nuanced- summary of the arguments.

I suppose the issue is not so much whether the practice should ever be acceptable -although neonatal male circumcision seems to have made it through the gate- as whether, by outlawing it, the procedure will be driven underground, as seems to be happening currently. If it is so important to a culture -whether justified by mores or religion- that there seems to be an imperative to have it performed in order to confirm an individual’s acceptability in the community, then wouldn’t it be better to acknowledge this, but mitigate the harm?

The authors have attempted a classification of FGM into five categories, the first two of which are thought to have minimal if any permanent effects on the girl -no effects on sexual pleasure, functioning, or reproduction. And, of course, if accepted, they could be performed under anaesthetic rather than as a test of courage. Their acceptance could serve to assuage the cultural imperatives while essentially eliminating the more severe and mutilating forms of the practice. It would be an intermediate -and hopefully temporary- step on the road to complete elimination of the procedure.

To be sure, the objection often raised is that of argumentum ad temperantiam -the fallacy of assuming that the truth -the resolution- can be found in the middle ground between two conflicting opinions. The problem, of course, lies in the validity of the opposing claims. Should one really be looking for the middle ground between information and mis- (or dis-) information? Sometimes the distinction is easy, but sometimes it is the minefield I described above. Primum non nocere -first of all, do no harm- is the guide. As the authors state: ‘… analysis of issues in medical ethics generally regards principles as being prima facie in nature, rather than absolute. Therefore, important emotional and social considerations can trump minor medical considerations.’ In fact, because of the extreme and negative connotations of the term female genital mutilation, the authors even propose an alternative, less pejorative name: FGA (female genital alteration).

Without trying to push the concept and its acceptance too strongly, let me quote the summary of their intent: ‘Since progress in reducing FGA procedures has been limited in states where they are endemic and the commitment of people from these cultures to these procedures has led to their persistence [even] in states where they are legally discouraged, alternative approaches should be considered. To accommodate cultural beliefs while protecting the physical health of girls, we propose a compromise solution in which liberal states would legally permit de minimis [a level of risk too small to be of concern] FGA in recognition of its fulfilment of cultural and religious obligations, but would proscribe those forms of FGA that are dangerous or that produce significant sexual or reproductive dysfunction.’

Compromises are always difficult; no one gets all they want, and yet each gets something. I raise the issue of female genital mutilation/alteration mainly for information but also for discussion. Sometimes, we need to know something about what we oppose. Always, in fact…