He’s mad that trusts in the tameness of a wolf

I am an obstetrician, not a neuropsychiatrist, but I feel a definite uneasiness with the idea of messing with brains – especially from the inside. Talking at it, sure – maybe even tweaking it with medications – but it seems to me there is something… sacrosanct about its boundaries. Something akin to black-boxhood – or pregnant-wombhood, if you will – where we have knowledge of its inputs and outputs, but the internal mechanisms are still too complex and interdependent to be other than interrogated from without.

I suppose I have a fear of the unintended consequences that seem to dog science like afternoon shadows – a glut of caution born of reading about well-meaning enthusiasms in my own field. And yet, although I do not even pretend to such arcane knowledge as might tempt me to meddle with the innards of a clock, let alone the complexities of a head, I do watch from afar, albeit through a glass darkly. And I am troubled.

My concern bubbled to the surface with a November 2017 article from Nature that I stumbled upon: https://www.nature.com/news/ai-controlled-brain-implants-for-mood-disorders-tested-in-people-1.23031 I recognize that the report is dated and merely scratches the surface, but it hinted at things to come. The involvement of DARPA (the Defense Advanced Research Projects Agency of the U.S. military) did little to calm my fears, either – they had apparently ‘begun preliminary trials of “closed-loop” brain implants that use algorithms to detect patterns associated with mood disorders. These devices can shock the brain back to a healthy state without input from a physician.’

‘The general approach —using a brain implant to deliver electric pulses that alter neural activity— is known as deep-brain stimulation. It is used to treat movement disorders such as Parkinson’s disease, but has been less successful when tested against mood disorders… The scientists behind the DARPA-funded projects say that their work might succeed where earlier attempts failed, because they have designed their brain implants specifically to treat mental illness — and to switch on only when needed.’

And how could the device know when to switch on and off? How could it even recognize the complex neural activity in mental illnesses? Well, apparently, ‘electrical engineer Omid Sani of the University of Southern California in Los Angeles — who is working with Chang’s team [a neuroscientist at UCSF] — showed the first map of how mood is encoded in the brain over time. He and his colleagues worked with six people with epilepsy who had implanted electrodes, tracking their brain activity and moods in detail over the course of one to three weeks. By comparing the two types of information, the researchers could create an algorithm to “decode” that person’s changing moods from their brain activity. Some broad patterns emerged, particularly in brain areas that have previously been associated with mood.’
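The ‘closed-loop’ logic the article describes – learn a mapping from recorded brain activity to reported mood, then stimulate only when the decoded mood crosses some threshold – can be caricatured in a few lines. What follows is purely an illustrative sketch with invented numbers; it bears no resemblance to the actual decoders these teams built, and that is rather the point:

```python
# A toy caricature of 'closed-loop' stimulation: fit a decoder from one
# (hypothetical) neural feature to self-reported mood, then trigger
# stimulation only when the decoded mood falls below a threshold.
# All data and parameters here are invented for illustration.

def fit_decoder(features, moods):
    """Fit a one-feature linear decoder by ordinary least squares."""
    n = len(features)
    mean_x = sum(features) / n
    mean_y = sum(moods) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(features, moods))
    var = sum((x - mean_x) ** 2 for x in features)
    slope = cov / var
    intercept = mean_y - slope * mean_x
    return slope, intercept

def closed_loop_step(feature, decoder, threshold=-0.5):
    """Decode mood from a neural feature; 'stimulate' only if mood is low."""
    slope, intercept = decoder
    mood = slope * feature + intercept
    return mood, mood < threshold  # (decoded mood, stimulate?)

# Hypothetical training data: a neural feature tracked alongside
# self-reported mood over a week or two.
decoder = fit_decoder([0.1, 0.4, 0.6, 0.9], [-1.0, -0.2, 0.3, 0.8])
mood, stimulate = closed_loop_step(0.2, decoder)
```

Even this toy version makes the worry concrete: everything downstream – including the decision to shock the brain – hinges on how faithfully a decoder fitted to a short window of data can stand in for something as layered as a mood.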

Perhaps this might be the time to wonder whether ‘broad patterns’ can adequately capture the complexities of any mood, let alone a dysphoric one. Another group, this time in Boston, is taking a slightly different approach: ‘Rather than detecting a particular mood or mental illness, they want to map the brain activity associated with behaviours that are present in multiple disorders — such as difficulties with concentration and empathy.’ If anything, that sounds even broader – even less likely to hit the neural bullseye. But, I know, I know – it’s early yet. The work is just beginning… And yet, if there ever was a methodology more susceptible to collateral damage and unintended, unforeseeable consequences, or one more likely to fall afoul of a hospital’s ethics committee, I can’t think of it.

For example, ‘One challenge with stimulating areas of the brain associated with mood … is the possibility of overcorrecting emotions to create extreme happiness that overwhelms all other feelings. Other ethical considerations arise from the fact that the algorithms used in closed-loop stimulation can tell the researchers about the person’s mood, beyond what may be visible from behaviour or facial expressions. While researchers won’t be able to read people’s minds, “we will have access to activity that encodes their feelings,” says Alik Widge, a neuroengineer and psychiatrist at Harvard University in Cambridge, Massachusetts, and engineering director of the MGH [Massachusetts General Hospital] team.’ Great! I assume they’ve read Orwell for some tips.

It’s one of the great conundrums of Science, though, isn’t it? When one stretches societal orthodoxy and approaches the edge of the reigning ethical paradigm, how should one proceed? I don’t believe that merely assuming someone else, somewhere else, sometime else, will undoubtedly forge ahead with the same knowledge is sufficient reason to proceed. It seems to me that in the current climate of public scientific skepticism, it would be best to tread carefully. Science succeeds best when it is funded, fêted, and understood – not obscured by clouds of suspicion, plagued by doubt, or dogged by mistrust. Just look at how genetically modified foods are regarded in many countries. Or vaccinations. Or climate change…

Of course, the rewards of successful and innovative procedures are great, but so is the damage if they fail. A promise broken is more noteworthy, more disconcerting, than a promise never made.

Time for a thought experiment. Suppose I’ve advertised myself as an expert in computer hardware, and you come to me with a particularly vexing problem that nobody else seemed able to fix. You tell me there is a semi-autobiographical novel about your life that you’d been writing in your spare time for years, stored somewhere inside a laptop you can no longer access. Nothing was backed up elsewhere – you never thought it would be necessary – and now, of course, it’s too late for that. The computer won’t even work, and you’re desperate.

I have a cursory look at the model and the year, and assure you that I know enough about the mechanisms in the computer to get it working again.

So you come back in a couple of weeks to pick it up. “Were you able to fix it?” is the first thing you say when you come in the door.

I smile and nod my head slowly. Sagely. “It was tougher than I thought,” I say. “But I was finally able to get it running again.”

“Yes, but does it work? What about the contents? What about my novel…?”

I try to keep my expression neutral, as befits an expert talking to someone who knows nothing about how complex the circuitry in a computer can be. “Well,” I explain, “it was really damaged, you know. I don’t know what you did to it… but a lot of it was beyond repair.”

“But…”

“But I managed to salvage quite a bit of the function. The word processor works now –you can continue writing your novel.”

You look at me with a puzzled expression. “I thought you said you could fix it – the area where my novel is…”

I smile and hand you back the computer. “I did fix it. You can write again -just like before.”

“All that information… all those stories… They’re gone?”

I nod pleasantly, the smile on my face broadening. “But without my work you wouldn’t have had them either, remember. I’ve given you the opportunity to write some more.”

“But… But it was stored in there,” you say, pointing at the laptop in front of you on the counter. “How do I know who I am now?”

“You’re the person who has been given the chance to start again.”

Sometimes that’s enough, I suppose…


The Black Sewing Box

I love mysteries, and if they involve finding buried treasure, so much the better. Thoughts of treasure chests used to conjure up maps and pirates hiding valuable things in faraway and largely inaccessible places. I suppose that shows my age, because nowadays the more likely proxy for a treasure chest in the popular imagination is a flight data recorder – a black box – submerged beneath thousands of meters of ocean or buried under rocks on the side of a faraway mountain. Hidden wealth, for sure.

The myth of faraway, or at least elusive, treasure is an ancient one; think of the Greek myth of Jason in quest of the Golden Fleece – the golden wool of a ram that symbolized authority. There is something enticing about that which we do not have, but might obtain with sufficient diligence. And information seems to be the treasure most prized in the modern era. Information is Power. Information is Knowledge.

And yet, despite the cache of data contained in the almost magically endowed black box, and despite its reputation as the only solution to an otherwise insoluble problem, we forget its other, earlier, and less forthcoming incarnation –its perhaps even more obscure aspect. In computational and engineering models, a black box is something we can use, but don’t understand. For every input, there is an output, but like a magician’s sleeve, we don’t know why. The brain is still a black box. You and I are, for all intents and purposes, black boxes. And that is what is so appealing to me: that none of us are completely knowable. Predictable. We are all magician’s hats…

A short article in the August 2015 issue of the Canadian Medical Association Journal stirred the coals of my easily invoked imagination: http://www.cmaj.ca/content/187/11/794.full It likens a research project involving operating rooms at a Toronto hospital to the ‘black box’ used in aviation. ‘The technology involves several cameras and microphones, along with sensors to document physiological data and key aspects of the environment, such as temperature.’ But this foray into the sacred chambers of the OR is not merely another frivolous time-and-motion study, so beloved of factories and corporations everywhere. No, as the article puts it: ‘The intent of the new technology is to enhance health team performance, pinpoint errors and missteps (human and otherwise), and subsequently identify ways to prevent and address those issues.’

Having spent a good part of my career as a surgeon in the OR, I appreciate the need to improve performance and prevent mistakes. In a teaching hospital, much of our time in surgery goes to passing on our skills and honing the competence and judgement of the resident doctors in the program. We become the monitors. But, as hinted in the old fable of mice deciding that the best way to detect the approach of a cat would be to hang a bell around its neck, who will bell the cat? In other words, how do we know that the surgeon –or whoever- is not passing along bad habits? Faulty techniques in need of improvement?

One way tried in recent times has been to have another surgeon in the OR as an observer, with a later meeting to debrief and discuss ways to address any issues identified. Unfortunately, not all of us are open to suggestions about our skill-sets, and other opinions are sometimes seen as criticisms. Ego and the fear of losing one’s reputation likely figure prominently in the equation, even though the findings are kept private. Only if this practice of observation and subsequent discussion were made universal, however, would it have a chance of thriving as a learning tool.

Another method – although for some an equally uncomfortable one – of improving performance in the OR would be to have a more junior surgeon, say, scrub with a more experienced colleague as part of a mandated hospital policy for quality assurance – much as hospitals now require yearly performance and outcome reviews for reappointment. Personally, I like this approach. It is an easy way to learn and see new techniques in a less stressful environment than if I were in charge of the case. And I think we can also learn from the residents we are teaching, who have studied in other hospitals and with other surgeons. There are many ways to improve our skills if we don’t allow ourselves to become encased in habit, focussed only on our own clothes. As Isaac Newton might have put it, ‘If I have been able to see as far as others, it is by standing on the shoulders of colleagues.’ Well, okay, perhaps he said it better, but our options to improve seem to be either carrot or stick.

There is a trend creeping into public media of assessing and rating doctors on their outcomes. How many patients benefitted from the surgery? How many had complications? How many surgeries has the doctor performed? What about her colleagues? The publication of these data sets may seem reasonable, but unfortunately it leaves many contributing factors in the shadows – or even unreported. Unconsidered. For example, perhaps the surgeon in question has a high complication rate because, as the most experienced, she gets the most difficult cases – maybe the ones that have failed other treatments.

All things considered, perhaps the black box approach has more compelling merit than first meets the eye. If the public were assured that procedures were monitored and recorded, this might go a long way toward assuaging their suspicions of incompetence or malpractice. And as the article suggests, ‘Data recorded by the black box system could well speak for patients unable to speak for themselves because they were under anaesthesia or unfamiliar with hospital procedures and protocol.’ Let’s face it: ‘black box’ monitoring certainly helps to instill a level of confidence in airplanes – just knowing that, after a difficult or problematic flight, experts can discover what actually happened and correct it for the future.

There is a problem with the black box method, however –an obvious one for surgeons: ‘the data in an operating room black box could be used as evidence in medical malpractice suits unless precluded by legislation — in much the same way morbidity and mortality assessments made by hospitals and staff for the purpose of quality assurance and improvements are exempt from being used in court.’ We all learn from our mistakes –and from the mistakes of others. We must, otherwise the errors will be repeated. And most of these issues are not the result of malpractice or incompetence. They are potentially teachable moments, if you will.

In fact, one lawyer commenting on the black box idea felt that ‘the data could also help surgeons who are being sued. “With the black box, critical procedures and techniques could be objectively assessed by peer surgeons when a poor outcome occurs. From the surgeon’s point of view, the data would be confirmation that all was done right but the poor outcome was beyond their control.”’

So, in a way, it’s prudent to swallow unsweetened medicine now to ward off disease down the road. In the words of Tolkien, ‘It will not do to leave a live dragon out of your plans, if you live near one.’