The following is an interview with Adam Etkin, Managing Director of PRE-Score – a peer review evaluation system. Adam is based in New York, so he followed the #prwdebate “Peer review is broken, how can we fix it?” on Twitter and YouTube. I asked him about his thoughts on the debate, and the topics that we covered.
Did you watch the debate live using our Google hangout?
I was travelling at the time, so was unable to watch the live debate, but I did view the recording afterwards via YouTube.
Did you follow the debate on Twitter using the hashtag #prwdebate?
I sure did! I responded to a few tweets when time permitted and it made me wish I was there.
What was your opinion on the debate as a whole?
I really enjoyed it. It’s important to have these types of real-world, interactive conversations. Sometimes it’s hard to really convey and discuss these ideas via social media. There was a lot I agreed with whole-heartedly and some stuff I’m skeptical about, but I think the one thing everyone had in common was a desire to improve the current system.
I was actually pleasantly surprised to see that the large majority of folks responding to the informal poll felt that peer review is NOT broken. Granted, it was a small sample, but it confirmed my personal feeling that the large, large majority of those in the scholarly community value peer review and feel it’s needed.
In no particular order, I wanted to comment on a few things which came up during the debate:
- In general I understand why some feel that there is no correlation between Impact Factor and quality of peer review, but I do think that high-quality peer review usually results in a better final product. Having said that, what about new or younger journals which don’t yet have an IF? What about lower-ranked journals, or those in niche fields, which still work really hard at peer review? PRE-score will make it possible to see these correlations, not just between journals and Impact Factor but with other metrics as well. PRE-score is both an article-level and a journal-level analysis, so over time it will be possible to see whether a highly cited article, or a paper which got a lot of traction via altmetrics, was also one which had a good PRE-score.
- I think Impact Factor does not need to be abolished. As pointed out during the event, it is just one metric among many which can be useful when used properly. The problem is not with IF, but with how it has evolved to be used for something it was not intended for. That’s not the fault of publishers, or Dr. Garfield, or Thomson Reuters. That falls squarely on the shoulders of the academic community.
- I respectfully disagree with the idea that we need to tear it all down and “start from scratch.” The traditional pre-publication system has been around for so long because it works pretty well. Have we abolished automobiles, electricity, etc. because they’ve been around for many years? Of course not; these things are still necessary and valuable. But over time things evolve and improve. The same applies to peer review. The system has evolved over time, and with widespread use we’ve seen new practices which improve things, such as new disclosure rules, the use of technical and statistical reviewers, a more international mix of participants enabled by technology, variations on single-blind and double-blind reviewing, and so forth. Peer review has never been one thing, and that’s still the case today. There’s no one “right” approach, and we should always try to make it better. PRE-score is part of that effort.
- There are valid arguments to be made for open review vs. blinded review. Both can work depending on subject matter and audience. I’d caution those who think there’s never any downside to open review, however. The desire to remain anonymous does not always, or usually, equate with something nefarious. In fact, anonymity allows peers to speak to the intellectual quality without fear of reprisals or career consequences. This is good for peer review in most disciplines. But in some disciplines, more open approaches may make sense. The beauty of PRE-score is that it supports all approaches.
- While I think that post-publication review is very valuable, I strongly disagree with putting everything up online and allowing everyone to review with no type of pre-publication filter. This strikes me as a “throw it all up against the wall and see what sticks” approach. We’re struggling to filter out weak material among millions of papers now. The solution is not to publish more with no standards! Nikolaus Kriegeskorte was clearly very passionate during the debate about his views on openness and how the system might change, but I found it interesting that in a follow-up interview after the event (http://youtu.be/mmkobAeKPzw) his vision for a new peer review model is really not that different from the current system.
- I was not surprised to hear the standard Clay Shirky “publish then filter” quote again. That quote, along with “publishing is a button,” are, in my opinion, two of the most overused and misguided arguments against the role of publishers in scholarly publishing. “Publish then filter” might be perfectly fine if you’re self-publishing what you think might be the next “Hunger Games” novel, but not for science. I say “filter, filter again, publish, then filter some more.” With PRE-score widely adopted, post-publication reviewers will have more information than they have now, and can make even more insightful comments. As for “publishing is a button,” just the other day Zen Faulkes posted something which addresses that perfectly, so I encourage others to read it if they’ve not already (http://neurodojo.blogspot.com/2014/04/publishing-may-be-button-but-publishing.html).
- I think one of the best moments of the debate occurred at the very end, when an audience member mentioned Google as a “publisher” in relation to the internet and filters. To me, that’s the role publishers and journals play. Journals are focused on a specific topic; they bring together like-minded individuals who are qualified to evaluate, discuss and point to material of interest. Related to this, I was also happy to hear that almost everyone who participated acknowledged the high value editors bring to the process. More and more “megajournals” are emerging which eliminate the role of the EIC and lack that kind of focus. As I tell my kids, too much of anything can be bad.
Do you think peer review is broken, and why?
I’m firmly in the “not broken” camp. I think traditional pre-publication peer review is a simple, logical, effective system when it is utilized CORRECTLY. I also think that in the large majority of cases journals, editors and reviewers do have the right approach. As pointed out during the debate, it is easy to hold up the high-profile instances when things go wrong, but let’s not lose sight of the big picture. That’s a point I myself try to make when discussing peer review and PRE-score with people. There are MILLIONS of peer-reviewed articles published every year. During the debate it was stated a couple of times that the number is around 1.4–1.5 million; I think the number was closer to 2 million in 2013. Statistically speaking, I think peer review gets it right. That’s not to say it’s perfect and can’t be improved upon. I liked the statement by Richard Van Noorden of Nature that it is “chipped, not broken.”
Please can you explain PRE-score?
PRE-score started as an algorithm which attempted to reflect the level of peer review a paper underwent prior to publication. Over the years it’s expanded to be more than just the metric. That’s still very important to what we’re doing, but overall I’d describe PRE-score as a suite of tools and services which exist to support readers, authors, libraries, journals and publishers who value and are committed to ethical, rigorous peer review. We want to encourage and celebrate those journals and publishers who use best practices which, in the end, benefit everyone.
What drove you to invent PRE-score?
Honestly, I was growing frustrated with hearing the same complaints about peer review over and over again, while no one seemed to be doing anything to address the problems. Fast forward a few years, and it seems that there is a growing movement around legitimate peer review and ethical scholarly publishing by several people and organizations. We now have COPE, Sense About Science, CrossRef’s CrossCheck and CrossMark initiatives, and others (including Peer Review Watch) who are doing great work, helping to move the discussion along in a positive way and educating the community about the value of peer review and the various approaches.
How do you hope it is going to help science publishing?
Our goal is to give the research community a new, unique additional way to filter through the increasing amount of material being published. We’ll do this in a few ways:
1. Supporting journals and publishers who work very hard to publish the best material available in their respective fields.
2. Validating that journals are really conducting peer review in the manner they say they are.
3. Weeding out the obviously unethical, scam “predatory” journals which continue to pop up.
4. Increasing the transparency of the review process.
5. Providing a leading indicator, as opposed to the lagging indicators which are presently available.