He’s mad that trusts in the tameness of a wolf

I am an obstetrician, not a neuropsychiatrist, but I feel a definite uneasiness with the idea of messing with brains – especially from the inside. Talking at it, sure – maybe even tweaking it with medications – but it seems to me there is something… sacrosanct about its boundaries. Something akin to black-boxhood – or pregnant-wombhood, if you will – where we have a knowledge of its inputs and outputs, but the internal mechanisms are still too complex and interdependent to be other than interrogated from without.

I suppose I have a fear of the unintended consequences that seem to dog science like afternoon shadows – a surfeit of caution born of reading about well-meaning enthusiasms in my own field. And yet, although I do not even pretend to such arcane knowledge as might tempt me to meddle with the innards of a clock, let alone the complexities of a head, I do watch from afar, albeit through a glass darkly. And I am troubled.

My concern bubbled to the surface with a November 2017 article from Nature that I stumbled upon: https://www.nature.com/news/ai-controlled-brain-implants-for-mood-disorders-tested-in-people-1.23031. I recognize that the report is dated, and merely scratches the surface, but it hinted at things to come. The involvement of DARPA (the Defense Advanced Research Projects Agency of the U.S. military) did little to calm my fears, either – they had apparently ‘begun preliminary trials of ‘closed-loop’ brain implants that use algorithms to detect patterns associated with mood disorders. These devices can shock the brain back to a healthy state without input from a physician.’

‘The general approach —using a brain implant to deliver electric pulses that alter neural activity— is known as deep-brain stimulation. It is used to treat movement disorders such as Parkinson’s disease, but has been less successful when tested against mood disorders… The scientists behind the DARPA-funded projects say that their work might succeed where earlier attempts failed, because they have designed their brain implants specifically to treat mental illness — and to switch on only when needed.’

And how could the device know when to switch on and off? How could it even recognize the complex neural activity in mental illnesses? Well, apparently, ‘electrical engineer Omid Sani of the University of Southern California in Los Angeles — who is working with Chang’s team [a neuroscientist at UCSF] — showed the first map of how mood is encoded in the brain over time. He and his colleagues worked with six people with epilepsy who had implanted electrodes, tracking their brain activity and moods in detail over the course of one to three weeks. By comparing the two types of information, the researchers could create an algorithm to ‘decode’ that person’s changing moods from their brain activity. Some broad patterns emerged, particularly in brain areas that have previously been associated with mood.’

Perhaps this might be the time to wonder whether ‘broad patterns’ can adequately capture the complexities of any mood, let alone a dysphoric one. Another group, this time in Boston, is taking a slightly different approach: ‘Rather than detecting a particular mood or mental illness, they want to map the brain activity associated with behaviours that are present in multiple disorders — such as difficulties with concentration and empathy.’ If anything, that sounds even broader – even less likely to hit the neural bullseye. But, I know, I know – it’s early yet. The work is just beginning… And yet, if there was ever a methodology more susceptible to collateral damage and unintended, unforeseeable consequences, or one more likely to run afoul of a hospital’s ethics committee, I can’t think of it.

For example, ‘One challenge with stimulating areas of the brain associated with mood … is the possibility of overcorrecting emotions to create extreme happiness that overwhelms all other feelings. Other ethical considerations arise from the fact that the algorithms used in closed-loop stimulation can tell the researchers about the person’s mood, beyond what may be visible from behaviour or facial expressions. While researchers won’t be able to read people’s minds, “we will have access to activity that encodes their feelings,” says Alik Widge, a neuroengineer and psychiatrist at Harvard University in Cambridge, Massachusetts, and engineering director of the MGH [Massachusetts General Hospital] team.’ Great! I assume they’ve read Orwell for some tips.

It’s one of the great conundrums of Science, though, isn’t it? When one stretches societal orthodoxy and approaches the edge of the reigning ethical paradigm, how should one proceed? I don’t believe that merely assuming someone else, somewhere else, sometime else, will undoubtedly forge ahead with the same knowledge is sufficient reason to proceed. It seems to me that in the current climate of public scientific skepticism, it would be best to tread carefully. Science succeeds best when it is funded, fêted, and understood – not obscured by clouds of suspicion or plagued by doubt, not to mention mistrust. Just look at how genetically modified foods are regarded in many countries. Or vaccinations. Or climate change…

Of course, the rewards of successful and innovative procedures are great, but so is the damage if they fail. A promise broken is more noteworthy, more disconcerting, than a promise never made.

Time for a thought experiment. Suppose I’ve advertised myself as an expert in computer hardware, and you come to me with a particularly vexing problem that nobody else has been able to fix. You tell me there is a semi-autobiographical novel about your life that you’d been writing in your spare time for years, stored somewhere inside your laptop, which you can no longer access. Nothing was backed up elsewhere – you never thought it would be necessary – and now, of course, it’s too late for that. The computer won’t even work, and you’re desperate.

I have a cursory look at the model and the year, and assure you that I know enough about the mechanisms in the computer to get it working again.

So you come back in a couple of weeks to pick it up. “Were you able to fix it?” is the first thing you say when you come in the door.

I smile and nod my head slowly. Sagely. “It was tougher than I thought,” I say. “But I was finally able to get it running again.”

“Yes, but does it work? What about the contents? What about my novel…?”

I try to keep my expression neutral, as befits an expert talking to someone who knows nothing about how complex the circuitry in a computer can be. “Well,” I explain, “it was really damaged, you know. I don’t know what you did to it… but a lot of it was beyond repair.”

“But I managed to salvage quite a bit of the function. The word processor works now –you can continue writing your novel.”

You look at me with a puzzled expression. “I thought you said you could fix it -the area where my novel is…”

I smile and hand you back the computer. “I did fix it. You can write again -just like before.”

“All that information… all those stories… They’re gone?”

I nod pleasantly, the smile on my face broadening. “But without my work you wouldn’t have had them either, remember. I’ve given you the opportunity to write some more.”

“But… But I was stored in there,” you say, pointing at the laptop in front of you on the counter. “How do I know who I am now?”

“You’re the person who has been given the chance to start again.”

Sometimes that’s enough, I suppose…

Whether ’tis Nobler in the Mind

I may have inadvertently stumbled upon something important. I may have found a boundary marker that potentially distinguishes New Age from Old Age. Of course, definitionally I could be way out of my league – New Age being construed as anything that happened after I left university – but considered as a panoply, I think it works, if only conceptually.

While scrolling through my phone, I happened upon an article in the CBC news app that struck me as interesting: http://www.cbc.ca/1.4302866 – perhaps because I had never thought about technology in those terms, and perhaps because I felt embarrassed at having been caught doing just that.

The premise was that we seem to turn to various apps on our devices for problem solving of many sorts – everything from comparing shopping prices, to trends in fashion, to the latest news. And, as we are increasingly discovering, these digital peregrinations revisit us in the form of targeted advertisements hoping to cash in on our whimsical journeys. Nothing is thrown away in the digital world – even our whims are stored, categorized, and pragmatically redistributed. And if whims, then it seems a small step to include moods. Emotions – positive or otherwise – should be equally trackable.

In fact, I learned that ‘Google announced it now offers mental-health screenings when users in the U.S. search for “depression” or “clinical depression” on their smartphones. Depending on what you type, the search engine will actually offer you a test. […] And Facebook is working on an artificial intelligence that could help detect people who are posting or talking about suicide or self-harm.’

Perhaps this is where I feel the shadow of a boundary issue. There seems little question that mood disorders transcend age and gender; what is more problematic, however, is whether there may be a generational divide in confiding those emotions digitally, or even in believing that solace could lie therein. The problem is not so much in putting these issues in writing – diaries and correspondence, after all, have long been a rich retrospective source for biographers. The difference, it seems to me, is the intent of the disclosure – diaries have traditionally been personal, meant not as a way of communicating but as a way of sorting out thoughts. Private thoughts. Letters, as well, were directed to particular individuals – often trusted confidants – and not meant for circulation outside that circle. Has the older generation – Generation R, for example (Retirement, to attach a label) – been sufficiently swept up in the digital river to feel comfortable clinging to its flotsam like their children?

I’m certainly not gainsaying the efforts of the internet giants to expand into the mental health realm –it seems a natural progression, so perhaps this is a start… and yet it’s one thing to key in on various words like ‘depression’ and have the algorithm kick in with a screening test, but another to sift through the context to determine the appropriateness of offering the test. I suppose random screening like that may be helpful for some, but as Dr. John Torous, the co-director of the digital psychiatry program at Harvard Medical School and chair of the American Psychiatric Association’s workgroup on smartphone apps, observes, ‘”One of the trickiest things is that language is complex … and there’s a lot of different ways that people can phrase that they’re in distress or need help.”’ Amen to that.

Quite apart from translational difficulties and the more abstract, culturally fraught issues of changing metaphors and societal expectations, there are other language problems – even within a country’s dominant language: shifting vocabularies, local argot, and misspellings, to name only a few.

To state that human culture is complex is a trope, and to believe that artificial intelligence will be able to keep up with its multifaceted, ever-changing face anytime soon is probably naïve. And, as the article points out, privacy – no matter the promises of the internet provider or the app producer – is another weak link in the chain. Quite apart from malicious hacking, or innocent and trusting confidence in the potential for help, ‘Our phones already collect a tremendous amount of personal data. They know where we are and who we’re speaking and texting with, as well as our voice, passwords, and internet browsing activities. “If on top of that, we’re using mental-health services through the phone, we may actually be giving up a lot more data than people realize,” Torous says. He also cautions that many of the mental-health services currently available in app stores aren’t protected under federal privacy laws [at least in the United States], so you’re not afforded the same privacy protections as when you talk to a doctor.’

In a very real –if mainly age-related- sense, I am relieved I did not grow up in the digital age. I am fortunate that Orwell’s prescient ‘1984’ was available, not as a quaint attempt at predicting the future, but as a warning about a creeping surveillance that seemed so malevolently unrealistic when it was written –it was first published in 1949, remember. And when I read it, the date was still sufficiently far in the future that it seemed more science fiction than predictive. Yet, as the years wore on, and society changed in unexpected ways, the horrors of the theme, for me at least, became more and more uncomfortable. More and more possible, despite the reassuring smoke blown in our eyes by those eager for progress, and mesmerized by the possibilities.

I mention this not to suggest that I was unique in this discomfort – I was obviously not – nor to imply that what we are now experiencing is evil, or even threatening, but merely to explain the hesitation of many of those my age in accepting, unreservedly, the digitally wrapped gifts so readily proffered. It is not a venue to which I would likely turn for health issues, or emotional sustenance.

For me, there is something more reassuring about an eye-to-eye encounter with another member of the same species – someone able to understand the vagaries of language, and to compare the nuanced phrasing of my words with the expression on my face. Perhaps I’ll change – perhaps I’ll have to – and yet… and yet I’d still feel better dealing with an entity – a person – able to experience the heart-ache and the thousand natural shocks that flesh is heir to. And yes, someone who has read and understood what Shakespeare meant.