Wednesday, November 30, 2011

Obesity, Peer Review, and Questions

I had commented a while back on a NEJM paper that claimed to establish a relationship between obesity and one's associated groups, such as Facebook friends. At the time I thought it was nonsense.

A paper by Lyons debunks that paper and its sequelae quite well. As Russ Roberts states:

Here is the summary:

We begin by summarizing the major problems with C&F’s studies:

1. The data are not available to others.
2. The unavailable data are sparse for friendships.
3. The models used to analyze the sparse data contradict the data and the conclusions.
4. The method used to estimate the dubious models does not apply.
5. The statistical significance tests from the questionable estimates do not show the proposed differences.
6. The wrongly proposed differences do not distinguish among homophily, environment, and induction.
7. Associations at a distance are better explained by homophily than by induction.


Or in other words: bad paper, meaningless results. It’s not an easy article to follow (and neither was the original work by Christakis and Fowler). The point on statistical significance is pretty clear, though, and pretty deadly.
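To see why it is deadly, here is a minimal sketch in Python (with hypothetical numbers, not C&F’s actual estimates) of the fallacy at issue: concluding that two effects differ because one is statistically significant and the other is not, instead of testing the difference directly.

from math import sqrt, erfc

def p_value(est, se):
    # Two-sided p-value against a zero effect, normal approximation.
    z = est / se
    return erfc(abs(z) / sqrt(2))

# Hypothetical effect estimates and standard errors (illustration only).
est_a, se_a = 0.50, 0.20
est_b, se_b = 0.20, 0.20

print("p(A)     =", round(p_value(est_a, se_a), 3))  # ~0.012: "significant"
print("p(B)     =", round(p_value(est_b, se_b), 3))  # ~0.317: not significant

# The correct comparison tests the difference A - B itself
# (the standard error here assumes the two estimates are independent).
se_diff = sqrt(se_a**2 + se_b**2)
print("p(A - B) =", round(p_value(est_a - est_b, se_diff), 3))  # ~0.289

Here effect A is “significant” and effect B is not, yet the direct test of the difference is far from significant. The pattern “A significant, B not” simply does not establish that A differs from B, which is the sort of claimed difference Lyons shows the tests do not support.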


However, the paper also demonstrates the problems with Peer Review.

Lyons states:

Both of C&F’s first two papers were published in the world’s top medical journal, the New Engl. J. Med. Their third paper was published in BMJ, another very highly respected medical journal. Their fourth paper was published in the J. Pers. Soc. Psychol., a top journal and the flagship journal of the American Psychological Association. After we had completed our analysis of those four papers, two more based on the same data appeared: the fifth (Rosenquist, Murabito, Fowler, and Christakis, 2010) in Ann. Intern. Med., again a very highly respected journal, and the sixth (Rosenquist, Fowler, and Christakis, 2011) in Mol. Psychiatry, a top journal in psychiatry. We leave as an exercise to the reader to spot in these last two papers the same errors we have recounted here.

Given the fundamental errors we have described, what can we conclude about the process of peer review at these top journals? Altman (1998), currently the senior statistics editor at BMJ, gave a personal account as a statistical reviewer of submissions to medical journals, as well as a table summarizing some studies on the quality of statistics in published medical articles. His bleak assessment: “The main reason for the plethora of statistical errors is that the majority of statistical analyses are performed by people with an inadequate understanding of statistical methods. They are then peer reviewed by people who are generally no more knowledgeable. Sadly, much research may benefit researchers rather more than patients, especially when it is carried out primarily as a ridiculous career necessity.”

Problems with peer review have long been known and several remedies have been proposed. One remedy has even been shown to fail (see Fidler, Thomason, Cumming, Finch, and Leeman, 2004). We propose a new solution below, based partly on our experiences in getting the present critique published. One can find several anecdotal reports on the web about the policies of top scientific journals regarding critiques, but we are not aware of any study of the issue. Our experiences matched the anecdotes we saw and seem informative.


We first submitted our critique to the New Engl. J. Med., but it was rejected without peer review. The journal declined to give a reason when asked. We next submitted to BMJ, but it was again rejected without peer review. This journal did, however, volunteer that “We decided your paper was probably better placed in a more specialist journal.” It is interesting to note that the same issue of BMJ that published Fowler and Christakis (2008a) also published the critique Cohen-Cole and Fletcher (2008a). The cover of that issue, in fact, was devoted to those two articles. In contrast to BMJ’s decision, the general-interest online newsmagazine Slate published an article by Johns (2010) on our critique the same month we submitted our paper. A delightful coda is that a few months later, BMJ published an editorial by Schriger and Altman (2010) called “Inadequate post-publication review of medical research”.

After these rejections by the New Engl. J. Med. and BMJ, we approached three top journals that did not publish any of C&F’s studies: JAMA, Lancet, and Proc. Natl. Acad. Sci. All were uninterested in our critique since they do not publish critiques of articles they did not originally publish. The section of J. Pers. Soc. Psychol. that published Cacioppo et al. (2009) does not publish critiques even of papers it has published, unless accompanied by new data.

Following on this educational venture, we submitted to a statistics journal that specializes in reviews, Stat. Sci. Five months later they had three referee reports. The first two recommended publication after revisions (e.g., “an important critique” and “well worth publishing”), while the third, though agreeing with our critiques, said that C&F’s work was insufficiently important to warrant publication of a critique in Stat. Sci. Two months after getting these reports, the editor made his decision: rejection, allowing for resubmission if we made the tone more neutral and changed the focus, perhaps to “editorial decision making standards in medical journals”, as suggested by the third referee.



The above is one of the best arguments against Peer Review ever. It demonstrates the closing of ranks. It shows that Kuhn and his paradigm shift can at times be too simplistic an explanation for lapses in scientific integrity. I rarely include such a lengthy reference, but Lyons has done a service.

The Internet has dramatically changed the essence of peer review. It allows material to be presented and evaluated as is. In my recent experience, peer-reviewed journals are all too often a closed environment, closed to certain points of view. During the health care debate, NEJM in my opinion showed an undue balance in favor of the law. It published articles, often from Administration advisers, as if they were purely academic research analyses. They were in fact propaganda.

Thus, congratulations to Lyons for a superb piece of work; hopefully more of this can be delivered. I would especially like to see the same scrutiny applied to the less-than-reasonable analyses of PSA testing.