Thursday, October 15, 2009

Potential Mascot

Why face confrontation when you can curl up like a ball and bounce down a rocky mountainside instead?
Adaptation is a beautiful thing.



Read more about the Venezuelan pebble toad here.

Tip o' the hat to Doctor Professor.

Wednesday, October 7, 2009

A "Cure" for Writing

In my last post, “Where the Problem Fixes You,” I wrote about an article that examined the potential benefits of depression. One of the claims in the article (and the study on which it’s based) is that writing promotes quicker resolution of depressive symptoms by allowing the writer to gain objectivity, insight, and control through the abreactive process of writing about one's troubles. This idea (commonly known as the “writing cure”) is not quite as new as the adaptationist theories in which it plays a role.

The writing cure is a variant of the “talking cure,” a concept generally regarded as the basis of psychoanalysis. The term was coined, not by Freud, but by Bertha Pappenheim (“Anna O.” in the case histories), who was a young patient of Freud’s colleague, Josef Breuer. In 1880, Bertha and Breuer discovered together that regular discussions about her condition had the effect of reducing the severity of her symptoms. Through words, the limbic intelligence of the “underbrain” acquired the possibility of negotiation outside the self. In the analyst’s office, speech became the vehicle through which unconscious pressures could be shifted into consciousness, where they could then be examined, dealt with, and eventually mastered. Bertha Pappenheim dubbed this “the talking cure” or, in jest, “chimney sweeping.”

D. H. Lawrence might’ve come close to articulating the writing cure in 1913 when he famously wrote to a friend that “one sheds one’s sicknesses in books — repeats and presents again one’s emotions, to be master of them.” To shed sickness in a book confers a purpose on sickness that it might otherwise lack, and that’s certainly remarkable. But I’d prefer to argue that the first person to write about the writing cure might well have been Virginia Woolf. The Woolfs’ Hogarth Press had been publishing Freud’s papers since 1924, but Woolf herself spent several years pouring scorn on the man before reading him in earnest (and meeting him!) in 1939, the same year she wrote the autobiographical essay “A Sketch of the Past.” In a much-discussed passage of the essay, Woolf describes how the experience of writing To the Lighthouse allayed her obsessive fixation on her mother’s death; she writes,

I suppose that I did for myself what psychoanalysts do for their patients. I expressed some very long felt and deeply felt emotion. And in expressing it I explained it and then laid it to rest.

However deceptively cut-and-dried Woolf’s account might be here, this is the basic machinery of the writing cure. It makes some sense that people who write a lot are more likely to stumble upon a particular kind of writing that helps them — at least temporarily — alleviate pain. And yet, why is it that some of the strongest testimonies to the palliative effects of writing tend to come from writers who ultimately commit suicide? If writing is such a cure, why does it fail consistently enough to have earned its own term? It’s possible (in fact, likely) that participants in writing-cure studies and those who choose to write for a living are birds of a very different feather; but I’m going to go ahead and sit on that point for a minute.

James Kaufman, who coined the term “Sylvia Plath Effect,” coauthored a 2006 article in the Review of General Psychology titled “Why Doesn’t the Writing Cure Help Poets?” (note: link goes straight to a PDF file), which cites twenty-some studies on the way to the assertion that “writers are, generally, more likely to be mentally ill and die young than others in the arts,” with poets essentially blowing the curve thanks to “higher rates of mental illness, suicide, and mortality than other writers.” (The New York Times covered the same topic in 2004, in an article titled “Going Early into That Good Night.”) Belief in the therapeutic power of writing, in light of such strong evidence against it, seems to come packaged with the notion that the majority of poets (and many writers) do it wrong.

Kaufman and Sexton, the authors of the 2006 article, argue that the key elements of writing-cure writing, done right, are expressivity and narrative; that is, in order to feel better, the writer needs to make an emotional investment and tell a coherent story. Language-level details might also help to clarify what’s ticking behind allegedly palliative writing. Linguistic-analysis software allows researchers to examine connections between the frequency with which a writer uses certain categories of words and the health benefits that the writer may or may not report (or demonstrate) after having written. Such categories might include, for example, “cognitive” words (“because,” “reason,” “result”), big words, self-references, interrogatives, and positive and negative emotions. Linguistic analysis allows psychologists to make some pretty remarkable observations. For example:

Another finding is that writers who shift in their use of the first person singular (e.g., I, me, my) to third person (e.g., we, us, them) are better off than those who continue to use the first person singular (Stirman & Pennebaker, 2001). This suggests that a shift in perspective is an important element and is consistent with the idea of storytelling. The picture that emerges is that the healthy writer is telling an evolving story and using emotion while doing it. . . . Conversely, a shift toward usage of the first-person singular may indicate a change in mental health. An analysis of the works of Kurt Cobain, John Cheever, and Cole Porter revealed that as their fame increased, all three writers used more first-person singular in both their creative work (song lyrics and stories) and in their private diaries and journals (Schaller, 1997). As this increase took place, so did an increase in self-destructive behaviors (e.g., excessive drinking) and depression (and for Cobain, an eventual suicide).

A version of the widely used text-analysis software designed by Pennebaker, Booth, and Francis is available to play with online, for free (if you don’t mind contributing to their corpus with your sample) at http://www.liwc.net. (The current version counts words that refer to sex, eating, or religion, too.) For what it’s worth, I’d offer that the delightful blog Dear Thyroid is a perfect example of what therapeutic-writing exercises can offer, and if I weren’t averse to pasting in stuff that doesn’t belong to me, I’d run some of that stuff through LIWC just to see what it says.
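
Out of curiosity about the mechanics, here is a minimal sketch in Python of what that kind of word-category counting looks like. The category lists below are invented placeholders for illustration only; LIWC’s actual dictionaries are proprietary and far more extensive.

import re
from collections import Counter

# Toy word-category dictionaries. Illustrative stand-ins only;
# these are not LIWC's real (and much larger) category lists.
CATEGORIES = {
    "first_person_singular": {"i", "me", "my", "mine", "myself"},
    "first_person_plural": {"we", "us", "our", "ours", "ourselves"},
    "cognitive": {"because", "reason", "result", "think", "know"},
    "negative_emotion": {"sad", "grief", "hurt", "fear", "alone"},
    "positive_emotion": {"happy", "love", "joy", "hope", "calm"},
}

def category_rates(text):
    """Return each category's share of total words, as a percentage."""
    words = re.findall(r"[a-z']+", text.lower())
    if not words:
        return {name: 0.0 for name in CATEGORIES}
    counts = Counter(words)
    return {
        name: 100.0 * sum(counts[w] for w in vocab) / len(words)
        for name, vocab in CATEGORIES.items()
    }

sample = ("I suppose that I did for myself what psychoanalysts "
          "do for their patients.")
for name, rate in category_rates(sample).items():
    print(f"{name}: {rate:.1f}%")

Run on the Woolf sentence quoted earlier, it counts 3 of the 13 words (“I,” “I,” “myself”) as first-person singular, which is the kind of raw frequency signal that pronoun-shift analyses like Stirman and Pennebaker’s start from.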

Certainly, there is no lack of evidence or research supporting the argument that certain types of writing are beneficial to mental health, even if the longevity and universal applicability of those benefits might be debatable. The Scientific American article I wrote about last time suggested that depression, in itself, is an evolutionary advantage that should perhaps not be thought of (or treated) as a mental disorder. A week ago, the New York Times ran an article about the anxious mind that seemed to resonate with the ideas described in the Scientific American piece; as Robin Marantz Henig relates,

“Our culture has this illusion that anxiety is toxic,” Kagan said. But without inner-directed people who prefer solitude, where would we get the writers and artists and scientists and computer programmers who make society hum? Kagan likes to point out that T. S. Eliot suffered from anxiety . . . “That line ‘I will show you fear in a handful of dust’ — he couldn’t have written that without feeling the tension and dysphoria he did,” Kagan said.

In view of all of this, here’s what I finally think: Depression probably offers a greater benefit to writing than writing does to depression. Contrary to the arguments expressed in the articles I wrote about in my previous post, analytical rumination may be better for your writing than it is for you. For Virginia Woolf, it was “the shock-receiving capacity” that made her a writer; it didn’t make her a happy person; it didn’t make her well-adjusted; but it did (by her account) make her see things more clearly, and she was able to bring that clarity of vision to her work. A high threshold for “shock” is perhaps the only way to explain how she could weather the eleven turbulent years (beginning, for her, at age thirteen) that brought the deaths of her mother, half-sister, father, and brother. I think it’s safe to suggest, with regard to extreme emotional disturbance, that the endgame of analytical rumination may be to say, essentially, “This insoluble problem is an integral part of who I am; and to the extent that I am happy with that, the trauma has a reason for being, and I accept it.” Such is adaptation.

Two weeks ago, over the text “near-unspeakable questions,” I linked to a four-word question in “A Sketch of the Past” that I think about a lot, and which appears in the excerpt below:

If to be aware of the insecurity of life, to remember something gone, to feel now and then, overwhelmingly, . . . a passionate fumbling fellowship—if it is a good thing to be aware of all this at fifteen, sixteen, seventeen, by fits and starts —if, if, if—. But was it good? Would it not have been better (if there is any sense in saying good and better when there is no possible judge, no standard) to go on feeling, as at St. Ives, the rush and tumble of family life?. . . . would this not have been better than to have had that protection removed; to have been tumbled out of the family shelter; to have had it cracked and gashed; to have become critical and skeptical of the family—? Perhaps to have remained in the family, believing in it, accepting it, as we should, without those two deaths, would have given us greater scope, greater variety, and certainly greater confidence. On the other hand, I can put another question: Did those deaths give us an experience that even if it was numbing, mutilating, yet meant that the Gods (as I used to phrase it) were taking us seriously, and giving us a job which they would not have thought it worthwhile to give—say, the Booths or the Milmans. . . . So I came to think of life as something of extreme reality.

The “near-unspeakable” question is a matter of examining consequences and asking whether one would (albeit impossibly) trade away that Thing so terrible that one wishes it had never happened, that Thing that forges a person, for something easier, for a life taken less seriously. Perhaps to catch oneself unwilling to trade that trauma for the blithe “rush and tumble” is to catch oneself confessing a preference for the trauma (and the hard-won wisdom that comes with it). It might mean thinking such admittedly complicated and disturbing things as, for example, “It is good that I was abused,” “It is good that I have lost my loved ones,” or, paraphrasing Mizuta Masahide, “It is good that my house has burned down. Now I can see the moon.”

Sunday, September 20, 2009

Where the Problem Fixes You

Last month, Scientific American featured an article by two scientists contending that depression is not a mental disorder but rather an evolutionary adaptation designed to resolve the very issues responsible for depressive episodes in the first place. In contradiction to the conventional belief that depression is a deleterious malfunction, that the depressed mind needs “fixing” (often to the tune of heady prescription drugs), psychologists Andrews and Thomson argue that depression is fixing, that the depressed state of mind conduces to a productive style of analysis in which information is processed slowly, methodically, and with greater attention to detail — all in the service of self-preservation. The anatomy of the brain, they suggest, offers a biochemical complement: natural selection has preserved a specialized receptor that allows the energy-draining process of depressive rumination to “continue uninterrupted with minimal neuronal damage.” The emotional and behavioral traits associated with depression are therefore cast as evolved responses to the same types of complex dilemmas that humans have been facing since the dawn of the species. The Scientific American article raised a lot of questions for me, so I decided to chase down the primary source.

In "The Bright Side of Being Blue: Depression as an Adaptation for Analyzing Complex Problems," Andrews and Thomson note that depression (affecting some 121 million people on the planet) is the primary psychological condition for which help is sought. The DSM-IV (the standard reference manual for the diagnosis of mental disorders) names the presence of “clinically significant impairment or distress” among its general criteria for determining whether a psychological condition amounts to a biological dysfunction, inviting the psychologists to question whether a heightened capacity for analysis truly constitutes “impairment.” Andrews and Thomson make no distinction between clinical and subclinical depression. They maintain that clinical-significance criteria contribute to the overdiagnosis of depressive disorders and, further, that the threshold between minor episodic/situational depression and major chronic/intransitive depression is fairly arbitrary regardless. To them, depression is directive — at all points along the continuum, where there is illness, there is teleology.

The “analytical rumination hypothesis” put forward in the article holds that depression (1) activates neurological mechanisms that direct optimized focus toward problem-related analysis and (2) promotes bodily and behavioral changes in the interest of reducing exposure to stimuli that could disrupt the analytical reasoning requisite to resolving the “triggering problem.” (In short: Better thinking through misery.) The caveat, however, is that problem-solving processes are only improved insofar as they relate to the depressed person’s actual problem(s). Prior studies have shown that depressed people do very poorly on laboratory tasks (which typically have nothing to do with the subject of their ruminations). Even when lab tasks are designed to resemble real-life quandaries, the intensification of depressive symptoms may make it “progressively more difficult for depressed people to attend to anything but the specifics of their problems.” Though it cuts against the authors’ point there, I think it bears mentioning that the deeper the obsession with the problem, the wider the net of “specifics” relevant to it (and, likewise, the wake of associated skills). For example, depressed individuals generally outperform the nondepressed in low- or zero-contingency scenarios, i.e., when the participant’s action hardly changes, or does not change, the probability of the outcome (a toy sketch of this kind of task follows the excerpt below). In such situations, nondepressed people are more likely to err in their judgment by overestimating their degree of control, whereas their depressed counterparts are more inclined to judge cause–effect patterns correctly. Along similar lines, depressed people are less likely to make the fundamental attribution error, a bias which Andrews and Thomson relate explicitly to the process of critical thinking:

For instance, people make the FAE when they attribute a pro-Castro stance to the writer of a pro-Castro essay even when they know that the writer wrote the essay as part of a class assignment and was assigned the pro-Castro stance by the course instructor. . . . Avoiding the FAE requires multiple processing steps in which an initial attribution is made based on the actor’s behavior (the pro-Castro stance), and then a correction is made based on the situational context (the assignment of the stance by the course instructor). This approach is cognitively effortful. People are less likely to use situational information and are more likely to make the FAE under conditions of cognitive load. Moreover, those who avoid the FAE take longer on the task.


It takes longer to read closely, analyze slowly, and process with care. These tasks demand patience, thoroughness, and resilience to brain burnout, all of which are allegedly abetted by depressive affect.
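
To make the zero-contingency setup concrete, here is a toy version of how such a task can be scored. The ΔP index below (the probability of the outcome given a response, minus the probability given no response) is a standard measure of actual contingency; the trial counts are invented, and this is my own simplification rather than the protocol of any study the article cites.

def delta_p(trials):
    """trials: list of (responded, outcome_occurred) boolean pairs."""
    with_resp = [out for resp, out in trials if resp]
    no_resp = [out for resp, out in trials if not resp]
    # P(outcome | response) minus P(outcome | no response)
    return sum(with_resp) / len(with_resp) - sum(no_resp) / len(no_resp)

# A zero-contingency condition: the outcome occurs 75% of the time
# whether or not the participant responds.
trials = ([(True, True)] * 30 + [(True, False)] * 10 +
          [(False, True)] * 30 + [(False, False)] * 10)

print(delta_p(trials))  # 0.0 -- responding changes nothing

Here the correct judgment of control is zero; the “illusion of control” finding is that nondepressed participants tend to report a degree of control well above it, while depressed participants tend to land nearer the true value.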

That depression should so resemble a learning experience is perhaps not surprising. You never learn so much about a system as when it breaks down, and illness certainly provides the opportunity to know oneself better. But its offerings are hard truths — knowledge of the kind most would prefer not to have. Avoidance responses to unwanted knowledge are a huge part of what is signified by the word “depression.” It goes without saying that avoidant action typically produces avoidable stressors of its own, creating an infinite loop of depressive potential. Perhaps avoidance is what allows Andrews and Thomson to make no distinction between clinical and subclinical depression, given that the “neurological orderliness” of the condition varies much less than an individual’s own strategies of escape (e.g., engagement in distractions, drug/alcohol use, somnolence, or the most avoidant behavior of all, suicide). What we may readily consider the symptoms of depression may in fact be avoidance-of-depression symptoms, that is, behaviors that address the traits — but not the causes — of the illness. It’s possible, too, that the environments that furnish the greatest number of opportunities to engage in avoidant behavior are also those in which depression is most common.

To regard depressive rumination as a conflict-resolution strategy has significant implications with regard to treatment, especially considering that more people seek help for depression than for any other psychological condition. If depression is truly engineered to resolve its own causes (as adaptationist hypotheses like Andrews and Thomson’s propose), then anything that circumvents the process of learning how to endure persistent painful feelings while negotiating possible solutions to them constitutes “avoidance.” This would include treatment with prescription drugs — or even sugar pills, according to the unpublished trials of pharmaceutical companies. The suggestion that a placebo can, in all but the most severe cases, match the effectiveness of prescription antidepressants does indeed imply that most human brains are pretty well equipped to deal with depression in one way or another. (Whether the mind agrees is another story, hence rumination.)

The idea that painful feelings draw attention to problems and motivate problem-solving behavior does not make those feelings any less painful, though. In an age when the interval between desire and gratification grows smaller by the day, our tolerance for struggle has probably shrunk accordingly. (Ideas to that end were the subject of an article in Scientific American last year.) Our modern lifestyles may predispose us toward the instant gratification that a temporary deferral of our problems can provide, but gratification, distraction, and deferral don’t really solve anything. Depression is a natural response to emotionally challenging pains and losses in life, and to “treat” it impatiently is tantamount to cheating oneself out of the potential for development that a formidable obstacle can supply.

Of the infinite number of possible problems generated by the human condition, not one is entirely unique. If life is hard, it’s because it should be. It takes a prodigious effort to turn rumination into a reward. Some might argue that certain problems (loss and trauma, especially) are so insoluble that no amount of mental activity can “resolve” them. At that point, I think we're talking about a more profound sense of adaptation; specifically, the ability to earnestly ask near-unspeakable questions and candidly accept curious conclusions. And even then, "reconciling the insoluble" still isn't necessarily synonymous with "surviving your depression." But as the authors of “The Bright Side of Being Blue” conclude, “the extended nature of depressive pain is useful. Without it, people would not be motivated to engage in the extended effort required to solve complex problems. . . . [L]earning how to endure and utilize emotional pain may be part of the evolutionary heritage of depression, which may explain venerable philosophical traditions that view emotional pain as the impetus for growth and insight into oneself and the problems of life.”

One final note: Of particular interest to me, and also to self-professed neurotic and wonderful writer Maud Newton, is the article’s pitch that expressive writing facilitates the resolution of depressive symptoms. Maud later updated her post to append the response of a reader who offered the cases of Virginia Woolf and David Foster Wallace as counterexamples to the analytical rumination/"writing cure" hypothesis. On that point, Woolf and Wallace are in considerable company.

More on that next time.