A recent article in Nature Medicine presented an “Analysis of Retractions” in medical journals. Citing two new studies, the report found that “About half of the medical papers retracted over the past few decades were pulled because of misconduct rather than an innocent mistake…Yet although drug companies are often portrayed by the popular press as the source of all evil in biomedical publishing, just 4% of retractions due to misconduct had declared pharmaceutical sponsorship.”
These findings suggest that the overwhelming majority of concerns about integrity and conflicts of interest in industry-funded medical papers are exaggerated.
Accordingly, these studies underscore the need to continue industry collaboration with physicians and academic medical centers, including contributions to continuing medical education (CME) programs.
Karen Woolley, a co-author of one of the new studies and chief executive officer of the professional medical writing company ProScribe, believes that industry should still be scrutinized. But she points out that nonfinancial conflicts of interest are a much bigger problem:
“But why are we always pointing our finger over there? There’s an elephant in the room, and that’s the nonfinancial conflicts of interest in academia.”
Woolley was joined by Liz Wager, who presented the second related study at the Sixth International Congress of Peer Review and Biomedical Publication in Vancouver, British Columbia. Both researchers believe that, regardless of the results, we need to look at conflicts of interest (COI) in their totality: “there are huge pressures in academia to get promotions, funding, grants and status, and this should not be forgotten against the backdrop of increased scrutiny of industry-funded research.”
This push to scrutinize industry-physician relationships is misguided precisely because physicians at academic medical centers have a legal and ethical obligation to patients to pursue funding and grants. Moreover, such endeavors merely represent a commonality of interests.
Other critics of industry involvement with academia, such as Ginny Barbour, chief editor of the journal PLoS Medicine, believe that “there are still undisclosed conflicts of interest in pharmaceutical-sponsored studies” (let’s just ignore the evidence and focus on our agenda) and that industry takes advantage of this fact and other pressures to spread its own messages.
A problem with these studies, however, is that the researchers did not know “how many of the retracted papers had undisclosed connections with the pharmaceutical industry.” Without such information, any call to continue scrutinizing industry, when only 4% of retracted papers are associated with it, is misguided.
The article also notes that regardless of any misconduct or unethical reasons for retraction, “many journals still take extra care in reviewing papers with industry ties.” Additionally, many of the retracted papers Wager studied related to basic biomedical research, which isn’t usually sponsored by the pharmaceutical industry, rather than to clinical trials.
Spin
Another section of the article discussed the inappropriate ‘spin’ of biomedical results in journal articles, claiming that the practice “makes a drug look better than the data really supports” and is common. In response to such claims, researchers at the Sixth International Congress of Peer Review and Biomedical Publication this September called for a ‘propaganda index’.
For example, Doug Altman of the Centre for Statistics in Medicine believes that some articles “choose language or selective emphasis of certain parts of the data.” Other critics assert that papers focus on “secondary conclusions or massage the data to reach some other statistically significant conclusion.”
To back up this assertion, Isabelle Boutron conducted a study on the use of inappropriate ‘spin’. Examining 72 clinical trial reports, Boutron found that approximately half of the papers used spin in the conclusion, and more than 40% had spin in two or more sections, such as the methods or discussion.
For example, Boutron and her colleagues would flag spin when a paper said that results “approached but did not reach significance” or “would have been statistically significant if we had a bigger sample”; both phrasings mean the same as, but sound more positive than, “our results were not statistically significant.”
Ultimately, claims that articles are retracted or spun because of industry involvement or sponsorship are incomplete. The methodology for determining whether spin was used is itself problematic, especially when the data alone speak for themselves to physicians. While it is important that articles and authors disclose ties to industry and portray data accurately, researchers and physicians must be allowed the freedom to continue sharing information without worrying about unnecessary restrictions.