The same day, Paul Knoepfler, professor in the Department of Cell Biology at UC Davis and one of the 50 most influential people in the stem cell field, wrote a critical post on his blog explaining Obokata's findings. At the same time he raised six key questions that, according to him, needed to be answered before the importance of the study could be judged. A few days later the two publications began to face serious scrutiny; we summarized the developments during this phase in an earlier post.
Since then, Knoepfler has run an informal poll on his blog, inviting fellow stem cell experts to vote on the question "Do you believe in STAP stem cells?". Participation in the poll increased steadily from 400 votes to over 1000 in the latest edition as the topic grew more controversial over the past months. We have aggregated the results from his polls in the graphic below.
He emphasizes that this polling is obviously not scientific, but it may reflect dynamic changes in people's judgement as the discussion around the papers evolves. Without a formal post-publication peer review system in place, a form of open peer review emerged spontaneously as research labs around the world tried to reproduce the findings and shared their results openly with the community. The papers might be retracted in the end, but the stem cell field has certainly benefited from this community-driven review.
Knoepfler, in one of his latest posts, wrote about what can be learned from this case, not only for the stem cell field but for biomedical science in general. Cellular autofluorescence and contamination might be issues restricted to certain fields, but a few points, we think, offer important lessons for peer-reviewed research in general. Below is a trimmed list of these points made by Knoepfler:
To be a good reviewer, data should always trump big names in importance. One of the problems exemplified by the STAP papers is that big name authors can sometimes sway reviewers inappropriately to be lenient on papers. In the end, as a good reviewer, you have to keep focused on the data, not the reputation of the authors.
To editors, be extra-cautious about those “sexy” papers. A paper like either of the STAP ones is certainly exciting on first read and could have big impact. […] As with the reviewer caution above, editors should not be swayed by big name authors if the story seems too good to be true and if anything, the more excited an editor is about a paper the more cautious they should be in how they handle it. Paradoxical? Perhaps, but I think it’s true.
To journals, give all manuscripts a thorough automated checkup. EMBO now reportedly has an automated screening process for manuscripts for image issues and EMBO editors have indicated that the STAP papers would not have passed. […] Clearly this kind of automated manuscript checkup should be standard procedure for all journals.
Check the hype. There is nothing wrong with being excited about a paper or its potential impact, but be cautious about crossing the line to outright hype. Not everything is a “breakthrough” and that’s OK. Good, strong science doesn’t have to be a stunning breakthrough to have a positive impact. Scientists, journals, and institutions need to walk a fine line between advocating for our work publicly (which is needed) and overstating its importance, especially to the public or reporters. Many media folks are prone to hyping science as well. I believe that STAP was hugely hyped by many of the parties involved.
As the pressures on academics to publish increase, it may seem logical that academics would be more inclined to 'fudge' the numbers in order to make their results look better. Many journalists and academics have raised this point and are concerned that fraud and academic misconduct are a growing problem within the academic community, and that peer review is failing to catch it. But are the concerns justified?
Below is a graph showing the number of retractions from PubMed. Retractions peaked at 373 in 2011. Since then there has been a decline in the number of retractions, although the number seemed to be on the rise again in 2013.
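For readers who want to check these numbers themselves, counts like this can be pulled from PubMed's public E-utilities API. The sketch below is one possible approach, not the method used for the graph above: it queries the "Retracted Publication" publication type by year, and the exact search term and resulting counts may differ from those behind our figure.

```python
import json
import urllib.parse
import urllib.request

ESEARCH = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi"


def retraction_query_url(year):
    """Build an E-utilities esearch URL counting PubMed records
    tagged 'Retracted Publication' for a given publication year."""
    term = f'"Retracted Publication"[PT] AND {year}[PDAT]'
    params = urllib.parse.urlencode(
        {"db": "pubmed", "term": term, "retmode": "json", "rettype": "count"}
    )
    return f"{ESEARCH}?{params}"


def retraction_count(year):
    """Fetch the count for one year (performs a network request)."""
    with urllib.request.urlopen(retraction_query_url(year)) as resp:
        data = json.load(resp)
    return int(data["esearchresult"]["count"])
```

Looping `retraction_count` over a range of years reproduces a retractions-per-year series; note that counts shift over time as new retraction notices are indexed, so recent years always look artificially low.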
Here is a selection of audience and Twitter discussions before, during and after last week’s debate. There were hundreds of tweets which used the #prwdebate hashtag, so if you spot something we missed, let us know in the comments!
To get an idea of what it is like going through the peer review process as a paper’s author, I spoke to physicist Joe Goodwin, who recently had his first paper reviewed before publication in Nature Communications.
Q: How long did the reviewing process take, from submission to a published paper?
A: My paper in Nature Communications was first submitted in May, and was published in October. Half of that delay was at our end, but Nature Communications publishes so many hundreds of papers per year that everything takes a while.
To get a better understanding of why a publisher like Nature that relies on subscriptions would get involved with a scheme that disseminates their content for free, I interviewed Jonathan Griffin, deputy CEO and head of business development for PLS (Publishers Licensing Society), and Jessica Rutt, Rights and Licensing Manager at Nature Publishing Group.
Researchers from the School of Economics, Finance and Management at the University of Bristol studied the system and presented a new model that improves on the current peer review system. They used a mathematical model to understand the behaviour of scientists when undertaking a review.