JAMA Mild Depression Meta-Analysis Unraveled in the New York Times

A recent study published in the Journal of the American Medical Association (JAMA) questioned the effectiveness of antidepressant drugs. The research found that although the drugs are “useful in cases of severe depression,” for patients “with mild to moderate cases, the most commonly used antidepressants are generally no better than a placebo.”

According to Richard Friedman, MD, professor of psychiatry at Weill Cornell Medical College, such findings are contrary to the evidence found in “hundreds of well-designed trials, not to mention considerable clinical experience, showing antidepressants to be effective for a wide array of depressed patients.”

As a result, a New York Times editorial asserted that the published study did “not stand up to a mountain of earlier evidence” because of the way it was conducted.

The JAMA study was not based on a new clinical trial; instead, it combined the analyses of previous studies, a technique known as meta-analysis. This approach has the potential to uncover drug effects that may have been missed in smaller studies “by aggregating the data from many studies and detecting broad patterns” in the statistical information, but it also has problems.
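To give a rough sense of what that aggregation involves, below is a minimal sketch of fixed-effect (inverse-variance) pooling in Python. It is a generic illustration of how a meta-analysis combines trial results, not the JAMA authors' actual method, and the trial figures in it are hypothetical.

    # Fixed-effect (inverse-variance) pooling: each trial's effect estimate is
    # weighted by its precision (1 / standard error squared), then averaged.
    # The numbers below are hypothetical and are NOT from the JAMA analysis.
    import math

    # Each tuple: (effect estimate from one trial, its standard error)
    trials = [
        (0.30, 0.15),  # hypothetical trial 1
        (0.10, 0.20),  # hypothetical trial 2
        (0.25, 0.10),  # hypothetical trial 3
    ]

    weights = [1.0 / (se ** 2) for _, se in trials]
    pooled = sum(w * est for (est, _), w in zip(trials, weights)) / sum(weights)
    pooled_se = math.sqrt(1.0 / sum(weights))

    # One pooled estimate and its uncertainty, dominated by the most precise trials
    print(f"pooled effect = {pooled:.3f}, 95% CI half-width = {1.96 * pooled_se:.3f}")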

For example, with hundreds of studies on antidepressants, how did the JAMA researchers decide which studies to use?

According to the Times, the JAMA “authors identified 23 studies (out of several hundred clinical trials) that met their criteria for inclusion. Of those 23, JAMA could get access to data on only 6, with a total of 718 subjects. Three trials tested the antidepressant Paxil (a selective serotonin reuptake inhibitor, in the same class as Prozac) and three used an older drug, imipramine, in the class known as tricyclics.”

With so few subjects in the analysis and such limited access to data, it should come as no surprise that even one of the JAMA study’s authors, Robert J. DeRubeis, a professor of psychology at the University of Pennsylvania, noted that the results cannot be “generalized to other medications.”

Another weakness in the JAMA study was that “the authors of the new analysis decided to exclude a whole class of studies, those that tried to correct for the so-called placebo response.” By leaving these studies out of their analysis, the JAMA authors “showed a comparatively small average difference between drug treatment and placebo treatment,” a result that would likely have been different had those studies been included.

In fact, if the JAMA authors had used “randomized clinical trials that try to correct, or wash out, the placebo effect,” they would probably have seen “patients with mild to moderate depression respond to antidepressants at rates nearly identical to patients with severe depression (who tend to have a much lower response to placebos).”

In addition to these limitations, the JAMA study is problematic because it based its conclusions on only two antidepressants, when there are 25 or so on the market. That the results of such a narrow study could be generalized to a whole class of medications is shocking, considering that “the Food and Drug Administration investigated the safety of antidepressants by analyzing data from some 300 clinical trials, with nearly 80,000 patients, involving about a dozen antidepressants.”

Ultimately, while in science “landmark studies can come along and overturn ideas about a particular treatment,” the JAMA study is not one of them. Its reliance on a small amount of data, on too few drugs and patients, and on questionable methodology should discourage no one from taking their antidepressants; continued use of antidepressants is crucial for millions of patients in preventing relapse and reducing symptoms.

The study’s authors should acknowledge the limitations of their research cited here, and consider how to help the public make the right decisions about antidepressants based on “solid scientific evidence” rather than a small sample.

The editor should consider that scaring the public into not taking their medication is bad medicine!

 

Comments (1)
  • Jordan

    News articles concerning prominent research findings almost always oversimplify the findings, or overstate the findings as intended by the researchers. To understand the study, one must read the actual study, readily available on the JAMA website.