In Defense of Antidepressants and All Other Seriously Researched Medical Products


Recently, a number of articles have suggested that antidepressants are no more effective than placebos. Just last month, in an essay in The New York Review of Books, Marcia Angell, former interim editor in chief of The New England Journal of Medicine, favorably entertained the premise that “psychoactive drugs are useless.”

Earlier, a USA Today piece about a study done by the psychologist Robert DeRubeis had the headline, “Antidepressant lift may be all in your head,” and shortly after, a Newsweek cover piece discussed research by the psychologist Irving Kirsch arguing that the drugs were no more effective than a placebo.

To address the controversy surrounding the effectiveness of antidepressants used by Americans, Peter D. Kramer, a clinical professor of psychiatry at Brown University, wrote an article in The New York Times defending antidepressants.

In Defense of Antidepressants

Kramer starts out by asserting that “antidepressants work — ordinarily well, on a par with other medications doctors prescribe.” He noted that “certain researchers who have questioned their efficacy in particular areas” have done so “on the basis of shaky data.”

The problem with these researchers using shaky data is that the notion that antidepressants “aren’t effective in general is influencing treatment.”

Kramer first offers support that antidepressants work from a study in France of more than 100 people with a particular kind of stroke. Along with physiotherapy, half received Prozac, and half a placebo. Members of the Prozac group recovered more of their mobility. Antidepressants are good at treating post-stroke depression and good at preventing it. They also help protect memory. In stroke patients, antidepressants look like a tonic for brain health.

What was surprising to Kramer was that a friend of his who suffered a stroke was not put on antidepressants until after Kramer himself emailed his friend’s doctor. And even then, according to Robert G. Robinson of the University of Iowa’s department of psychiatry, a leading researcher in the field, neurologists say they “don’t use an antidepressant unless a patient is suffering very serious depression” because “they’re influenced by reports that say that’s all antidepressants are good for.”

Kramer asserts that, “the serious dispute about antidepressant efficacy has a limited focus. Do they work for the core symptoms (such as despair, low energy and feelings of worthlessness) of isolated episodes of mild or moderate depression? The claim that antidepressants do nothing for this common condition — that they are merely placebos with side effects — is based on studies that have probably received more ink than they deserve.”

Evidence?

Kramer points out that the most widely publicized debunking research — the basis for the Newsweek and New York Review pieces — is drawn from data submitted to the Food and Drug Administration (FDA) in the late 1980s and the 1990s by companies seeking approval for new drugs. This research led to its share of scandal when a study in The New England Journal of Medicine found that the trials had been published selectively. Papers showing that antidepressants work had found their way into print; unfavorable findings had not.

In his book “The Emperor’s New Drugs: Exploding the Antidepressant Myth,” Dr. Kirsch, a psychologist at the University of Hull in England, analyzed all the data. He found that while the drugs outperformed the placebos for mild and moderate depression, the benefits were small. The problem with the Kirsch analysis — and none of the major press reports considered this shortcoming — is that the FDA material is ill suited to answer questions about mild depression.

Kramer explained that as a condition for drug approval, the FDA requires drug companies to demonstrate a medicine’s efficacy in at least two trials. Trials in which neither the new drug nor an older, established drug is distinguishable from a placebo are deemed “failed” and are disregarded or weighed lightly in the evaluation. Consequently, companies rushing to get medications to market have had an incentive to run quick, sloppy trials.

Often subjects who don’t really have depression are included — and (no surprise) weeks down the road they are not depressed. People may exaggerate their symptoms to get free care or incentive payments offered in trials. Other, perfectly honest subjects participate when they are at their worst and then spontaneously return to their usual, lower level of depression. Kramer explained that these problems are an “artifact of the recruitment process.” As a result, he noted that if many subjects labeled mildly depressed in the FDA data don’t have depression, this would explain why they might respond to placebos as readily as to antidepressants.

Research

Kramer explained how there are two sorts of studies that are done on drugs: broad trials and narrow trials. Broad trials, like those done to evaluate new drugs, can be difficult these days, because many antidepressants are available as generics.

Narrow studies, done on those with specific disorders, tend to be more reliable. Recruitment of subjects is straightforward; no one’s walking off the street to enter a trial for stroke patients. Narrow studies have identified many specific indications for antidepressants, such as depression in neurological disorders, including multiple sclerosis and epilepsy; depression caused by interferon, a medication used to treat hepatitis and melanoma; and anxiety disorders in children.

New ones regularly emerge. The June issue of Surgery Today features a study in which elderly female cardiac patients who had had emergency operations and were given antidepressants experienced less depression, shorter hospital stays and fewer deaths in the hospital.

Broad studies tend to be most trustworthy when they look at patients with sustained illness. A reliable finding is that antidepressants work for chronic and recurrent mild depression, the condition called dysthymia. More than half of patients on medicine get better, compared to less than a third taking a placebo. (This level of efficacy — far from ideal — is typical across a range of conditions in which antidepressants outperform placebos.) Similarly, even the analyses that doubt the usefulness of antidepressants find that they help with severe depression.

In fact, antidepressants appear to have effects across the depressive spectrum. Scattered studies suggest that antidepressants bolster confidence or diminish emotional vulnerability — for people with depression but also for healthy people. In the depressed, the decrease in what is called neuroticism seems to protect against further episodes. Because neuroticism is not a core symptom of depression, most outcome trials don’t measure this change, but we can see why patients and doctors might consider it beneficial.

Similarly, in rodent and primate trials, antidepressants have broad effects on both healthy animals and animals with conditions that resemble mood disruptions in humans.

One reason the FDA manages to identify useful medicines is that it looks at a range of evidence. It encourages companies to submit “maintenance studies.” In these trials, researchers take patients who are doing well on medication and switch some to dummy pills. If the drugs are acting as placebos, switching should do nothing. In an analysis that looked at maintenance studies for 4,410 patients with a range of severity levels, antidepressants cut the odds of relapse by 70 percent. These results, rarely referenced in the antidepressant-as-placebo literature, hardly suggest that the usefulness of the drugs is all in patients’ heads.

The other round of media articles questioning antidepressants came in response to a seemingly minor study engineered to highlight placebo responses. One effort to mute the placebo effect in drug trials involves using a “washout period” during which all subjects get a dummy pill for up to two weeks. Those who report prompt relief are dropped; the study proceeds with those who remain symptomatic, with half getting the active medication. In light of subject recruitment problems, this approach has obvious appeal.

Kramer went on to argue that the study conducted by Dr. DeRubeis, an authority on cognitive behavioral psychotherapy, was problematic in part because it was built around his own research. Overall, the medications looked best for very severe depression and had only slight benefits for mild depression — but this study, looking at weak treatments and intentionally maximized placebo effects, could not quite meet the scientific standard for a firm conclusion. And yet the publication of the no-washout paper produced a new round of news reports that antidepressants were placebos.

Conclusion

In the end, Kramer asserted that, “the much heralded overview analyses look to be editorials with numbers attached. The intent, presumably to right the balance between psychotherapy and medication in the treatment of mild depression, may be admirable, but the data bearing on the question is messy.”

As for the news media’s uncritical embrace of debunking studies, Kramer suggested that this trend is based on a number of forces, including “misdeeds — from hiding study results to paying off doctors — that have made Big Pharma an inviting and, frankly, an appropriate target.”

That said, the result that the debunking analyses propose remains implausible: antidepressants help in severe depression, depressive subtypes, chronic minor depression, social unease and a range of conditions modeled in mice and monkeys — but uniquely not in isolated episodes of mild depression in humans.

Accordingly, Kramer noted that better-designed research may tell us whether there is a point on the continuum of mood disorder where antidepressants cease to work. Nevertheless, he recognized that “it is dangerous for the press to hammer away at the theme that antidepressants are placebos” because they are not, and to give the impression that they are is to cause needless suffering.

As for other products, the effort that goes into researching a prescription drug or medical device is remarkable. Perhaps some of that effort should be considered when judging such products.

Comments
  1. Robert DeRubeis says

    Dr. Kramer’s piece addressed a timely and extremely important topic, the suitability of antidepressants as treatment for a broad range of mental health and other conditions. Dr. Kramer pointed out that academic and public discourse surrounding these topics has been highly charged, to the detriment of patient care. His goals were to educate the reader about the issues, and to bring a balance to the ongoing dialogue. Dr. Kramer’s efforts are to be admired, as is the Times’ willingness to provide a forum for such pieces.
    Unfortunately, the article contains numerous assertions, meant to support Dr. Kramer’s thesis, that are demonstrably false. He seems to imply that he possesses knowledge he could not possibly possess, and about which he is wrong. Moreover, his writing contains unsubstantiated suggestions that scientists who have addressed these questions, including our research group, have engaged in misrepresentation when reporting on their work.
    Although some of the errors Dr. Kramer makes are subtle, such that they might be recognized only by experts who conduct the relevant research and review the research of others in academic forums, other comments are blatant and could easily have been checked by him for accuracy. In my view, the presence of so many distortions and so much misleading information prevents the article from achieving its goals. Indeed, I suspect Dr. Kramer’s opinion piece has exacerbated precisely the problem that it was intended to address. In the process it has, I fear, misled readers, including Mr. Sullivan, into thinking that they would do well to dismiss a host of research findings, many of which have passed the highest levels of scientific scrutiny. As a result, the public’s trust in the relevant science has been dealt an unfortunate blow.
    In the following I present excerpts from the opinion piece in quotes, interspersed with my comments.
    Referring to a study we published in JAMA in January of 2010, Dr. Kramer characterized it this way:
    “…a seemingly minor study (that was) engineered to highlight placebo responses.”
    The methods we used were selected so that placebo responses could be compared to the responses to the medications, all things equal. The method favored by the pharmaceutical industry (the placebo-washout method) is used precisely to minimize the placebo effect, on the understanding that its use increases the odds that the drug under study will be shown to be superior to placebo. This is not a secret. This is not a conspiratorial way of describing the method. This is the stated aim of it. Placebo-washout studies are engineered (to use Dr. Kramer’s terminology) so as to minimize the costs of drug evaluation; fewer subjects are required to obtain a significant result, an additional effect of which is that the number of patients exposed to placebo for the duration of the (typically 6-8 week) trial is minimized. It may be that Dr. Kramer does not understand the logic of the placebo-washout experiments, and therefore the limits of the conclusions that can be drawn from them. I have no doubt many if not most of his readers are confused.
    He continues:
    “From a large body of research, they discarded trials that used washouts, as well as those that focused on dysthymia or subtypes of depression. The team deemed only six studies, from over 2,000, suitable for review.”
    In our paper we laid out carefully the criteria for the inclusion of studies to be analyzed. We only deemed studies suitable for inclusion if they met these criteria. The process for this is described in the JAMA paper, and depicted in Figure 1 of the paper.
    “An odd collection they were. Only studies using Paxil and imipramine “made the cut” and other research had found Paxil to be among the least effective of the new antidepressants. One of the imipramine studies used a very low dose of the drug.”
    These statements are unsubstantiated and, according to my colleagues who co-authored the paper, and who are experts on these topics, false.
    “The largest study Dr. DeRubeis identified was his own. In 2005, he conducted a trial in which Paxil did slightly better than psychotherapy…”
    The claim that Paxil did slightly better than psychotherapy in this study is of no relevance to this piece as it concerns the effects of antidepressants when compared to placebo. And the claim is false.
    “…and significantly better than a placebo … but apparently much of the drug response occurred in sicker patients… Building an overview around your own research is problematic. Generally, you use your study to build a hypothesis; you then test the theory on fresh data.”
    Dr. Kramer implies knowledge of how we built the overview, with no stated basis for the knowledge. His conjecture about the process that led to the analysis, if correct, would form a basis for his, and his readers’, skepticism about the substance of the work. His conjecture, once again, is incorrect. We did not examine our data to see if this phenomenon held in it.
    Indeed, one requires more data than can come from even a relatively large study in order to gain enough precision for the kind of analysis we published. Rather, we knew that the pharmaceutical industry studies have excluded the roughly 70% of patients who fall into the mild or moderate range of depression, because it was known but not publicized that including the mild and moderate cases made it very difficult to find an advantage of drug over placebo. We believed it was time, after more than 50 years of research on the effects of antidepressants, for someone to collect all available data on this point, and publish the findings. Our own study was simply one of the six studies that met the stated criteria.
    “Critics questioned other aspects of Dr. DeRubeis’s math.”
    This unsubstantiated statement appears to be designed to lead readers to judge that I am either incompetent or dishonest. This is irresponsible, at best.
    “In a re-analysis using fewer assumptions, Dr. DeRubeis found that his core result (less effect for healthier patients) now fell just shy of statistical significance.”
    The re-analyses employed different, not fewer, assumptions. We conducted them in response to questions raised by a Letter to the Editor of JAMA.
    We clarified that those alternative ways of analyzing the data were not the most appropriate ones; we had already published on those in the JAMA paper, which had undergone the most thorough scrutiny of any paper I have been involved in.
    The fact that the results fell just shy of statistical significance, as we stated in the published response presumably referenced by Dr. Kramer, was unsurprising given that the reanalyses were conducted with methods that are inferior to those we used for the primary analysis.
    In any event, Dr. Kramer then chose not to disclose that these less appropriate methods yielded results that, if anything, pointed to an even more striking difference in the specific drug effect. (They are complicated, one consequence of their having been obtained with less appropriate methods than our primary analyses; see p. 1599 of JAMA, April 28, 2010; vol. 303, no. 16.)
    “In fact, (with) antidepressants…in the depressed, the decrease in what is called neuroticism seems to protect against further episodes.”
    The finding Dr. Kramer cites here was published in the December 2009 issue of the Archives of General Psychiatry by my group. Had he acknowledged the provenance of the work, it would have made it very difficult for him to include, a few paragraphs later, this unfortunate assertion:
    “…the much heralded overview analyses look to be editorials with numbers attached.”
    It would strain the readers’ credulity that a scientist who aimed to publish an editorial (with numbers) that raised questions about the limitations of antidepressant medications would, one month prior, publish on data that are so favorable to antidepressant therapy that Dr. Kramer would include that study among his examples of the drugs’ benefits. The readers were kept from the information about the authorship of the work, which set up the accusation that we had written an editorial, not a work of clinical science that met the highest standards.
    “The intent, presumably to right the balance between psychotherapy and medication in the treatment of mild depression…”
    Dr. Kramer’s presumption is wrong. As he knows, we emphasized in the JAMA article, as well as in all of the media write-ups that included interviews with me, that it is out of concern for our ignorance of the effectiveness of these medicines in the vast majority of depressed patients that I embarked on this research. My co-author, psychologist Steven Hollon, has since published findings of a similar sort in regard to psychotherapies: namely, that they fail to show much greater benefit than control procedures for patients with milder forms of depression, yet they do show a substantial advantage among the more severely depressed.
    “Overall, the medications looked best for very severe depression, and had only slight benefits for mild depression…”
    This is correct.
    “…but this study, looking at weak treatments and intentionally maximized placebo effects, could not quite meet the scientific standard for a firm conclusion.”
    This is wrong in every way. We did not look at weak treatments. We did not use a method that maximizes placebo effects. We did meet the scientific standard for a firm conclusion.
    Robert J. DeRubeis
    Samuel H. Preston Term Professor in the Social Sciences
    Professor and Chair, Department of Psychology
    School of Arts and Sciences, University of Pennsylvania

