Interview with Anne Færch Nielsen

Anne Færch Nielsen is a molecular biologist from Denmark and an editor at The EMBO Journal. She received her PhD from Aarhus University, and RNA research took her from there to Vienna and finally to Heidelberg. As an editor she handles manuscripts on the molecules and processes inside the cell. She has been following the debate about 'STAP' (stimulus-triggered acquisition of pluripotency) cells that has rocked the stem cell field since the beginning of the year. We asked her for her thoughts on the developments and their connection to post-publication peer review. All opinions are her own, not EMBO's.

 


At the end of January, two papers in Nature by Obokata et al. claimed a new method to create pluripotent stem cells in a surprisingly simple way. What was the first reaction in your professional environment to the big news?

We had some discussions about it: surprise at the findings and excitement about the possible implications.

Since then, the stem cell community has joined efforts to reproduce the results and study the papers in detail. Are we seeing a public post-publication peer review?

Yes, I would definitely say so, but at the same time I think two parallel discussions partly turned into one: (1) are STAP cells real and can their generation be reproduced and (2) was there image manipulation in the papers? Obviously, the detection of manipulated images is a serious matter that could warrant suspicion about the overall validity of the findings reported, but in principle there could be more benign explanations as well. In my view this aspect of the story gained so much attention that it almost overshadowed the actual question whether STAP cells exist or not.

Do you think papers, especially the ones that claim big breakthroughs and are potentially controversial, gain from this open peer review?

Hmm, whether the paper itself gains or not is probably a case-by-case matter depending on the eventual outcome and extent of the controversy. I do think it's helpful for the field to have this discussion in an open forum, but there is the very real danger that the whole thing turns into a witch-hunt rather than a constructive scientific debate.

Do you think an open post-publication peer review would benefit essentially every publication/study or are there cases where it is not beneficial or even harmful?

As mentioned above, there is the danger that very loud (and dominant) voices in a given field would steer the discussion in their direction. This could be particularly relevant for, say, a paper from a junior PI that goes up against the current dogma of an established field. From my side, the concern for the authors would also be that with a system of ubiquitous post-publication review the peer-review process never stops, which would essentially mean that authors would have to continuously address attacks and comments for years after publication. Of course some of these may be relevant criticisms, but given how often scientific disagreements drag on for years, these may be matters that are very difficult to settle, even through post-publication comments and review.

If open post-publication peer review were to happen across all sciences, where should it happen: should journals provide a platform, or should scientists on their blogs be the driving force?

I would prefer to have such a discussion hosted in a central repository rather than on diverse blogs (again, to maintain a level of neutrality and limit the risk of personal agendas). Systems like PubPeer and PubMed Commons aim to provide such venues, but one issue that remains very much under discussion (also within our editorial team) is whether this type of comment should be allowed to be anonymous.

Paul Knoepfler at UC Davis is one of the most active scientists in the debate around the Obokata papers. He started a poll asking ‘Do you believe in STAP cells?’. What do you think of the concept of involving the expert community with essentially a ‘thumbs up or down?’ question?

I think Knoepfler has been taking a sound approach in this matter, but I'm not sure the polling option is necessarily the best solution. Depending on the current hype of the discussion (especially with negative attention), people may be swayed to vote in one direction only (and you are most likely to get only the very opinionated scientists involved).

We visualized the opinion shift reflected in Knoepfler's polls in a recent blog post on peerreviewwatch. It doesn't seem that the community is reaching one conclusion but rather dividing into two camps. How do you interpret this?

There will always be a level of disagreement and skepticism regarding data interpretation across all fields; that's in the nature of scientific discovery. As such, I think this distribution illustrates that even with post-publication review, you will never reach unified agreement. That's also where the danger arises that the debate turns into a very extended discussion between two entrenched camps, neither of which is likely to be swayed by the arguments from the other side. That being said, I think it's still helpful to have this type of disagreement highlighted in a public forum, especially for people who work on the periphery of a given field and who may not be familiar with the history of any controversial publications in that area.

Do you think the open discussion of the Obokata papers might encourage data sharing in the stem cell community and foster greater collaboration in future research?

It certainly gets people talking and has probably also prompted more researchers to test the procedure for independent validation (and to talk more openly about the results obtained). On the other hand (and especially with the general media getting involved), there is the danger that this type of discussion leads to the conclusion that even the tiniest irregularity in any image is immediately equated with scientific fraud. At EMBO we conduct extensive screening for image quality prior to manuscript acceptance. The vast majority of alterations that we detect at this level derive from sloppy figure assembly or benign data beautification; these can be resolved by the authors in a satisfactory manner, and in such cases we have no reason to suspect scientific misconduct.
