Picture: Review. Credit: Markus Winkler (Pexels).
by Kim Reid
After writing two papers, it was my turn to swap to the other side and review my first paper. The topic fitted into the niche sub-sub-field I had been working on for the first half of my PhD, so I felt comfortable accepting, knowing I was up to date on the recent literature in that area.
Despite that, I felt woefully unprepared.
Little 24-year-old me was being asked to review the work of some of the biggest names in the field. Without giving too much away, two of the co-authors are the co-directors of the international project related to the paper topic. It’s like a CLEX student being asked to review a paper by Todd and Andy.
I opened the manuscript, then opened a new tab and googled “how to review a journal article”. After six years of university education, I’d had one class on writing a review (by the amazing Jen Martin), where I had learnt to refer to specific examples when making critiques and to use the ‘shit sandwich’ method to deliver negative comments. Being a communication subject, that class focused on literally writing the review. As for knowing where to begin or how to spot a problem with a manuscript, I was in the dark.
Thankfully, Wiley Publishing had written a thorough step-by-step guide.
My usually healthy self-confidence suddenly took a holiday. On the one hand, I wanted to do my job as a reviewer well and give useful feedback to improve the manuscript. But, on the other hand, these were senior, experienced scientists who knew way more than me, so how could I possibly add anything useful? I concluded that as long as I adequately referenced any criticisms, that was okay. The authors could always refute my comments and, thankfully, I’d be anonymous.
I recently read a debate on Twitter about anonymity in peer review. It appears to be a recurring debate in academia. The authors of the linked article were strongly against anonymity in peer review, but as a young, female student, I am glad peer review is anonymous. Because why would anyone listen to me when I tell them their method has an error? Why would senior scientists, who have been working in the field longer than I’ve been alive, listen to me when I tell them I can’t recommend ‘accept’ for their manuscript because I think they have broken a rule of statistics? We like to think of scientists as objective thinkers, but scientists are humans first, and humans have unconscious biases – they don’t like being told they’re wrong by someone half their age. If it weren’t for anonymity in peer review, I wouldn’t want to review because it would be far too intimidating.
In the end, I did find what I thought was a methodological error. In summary, the study included a plot of the standard deviation at each grid point of a global ensemble; the authors had found a high standard deviation in the tropics, which they couldn’t conclusively explain. I had recently read the paper that described the method for one of the ensemble members, so I knew that that ensemble member actually masked the tropics. I commented that I thought the authors should exclude that ensemble member before taking the standard deviation, as it could artificially increase the standard deviation in the tropics. The authors responded by saying they still wanted to include that ensemble member in the study despite the mask, and therefore didn’t make any changes. In reading the response to reviews, I noticed the other reviewer had not made the same point – this made me really doubt myself. Do I press this point? Is it a big deal? Do I let it slide? Am I even right? If the other reviewer didn’t comment on this, did that mean I was wrong?
All I could think about was Dietmar Dommenget’s statistical climate lectures (a graduate class for Melbourne and Monash students) where he passionately pointed out examples of statistical fallacies that had made it into peer-reviewed literature and made us promise not to make the same mistakes.
I spent hours searching through statistics textbooks and journal articles trying to find a mathematical proof that explained why the authors shouldn’t take the standard deviation of a highly skewed distribution. In truth, I was looking for the proof for myself as much as for the authors – proof that I was capable. I spoke to my supervisors and they agreed that it was worth making my point in the second review. I even called my dad, who majored in econometrics, for advice. In the end, I did decide to reiterate my point in the second review and recommended the authors either remove the outlier, or keep the outlier and use the inter-quartile range instead of the standard deviation.
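The intuition behind that recommendation can be sketched in a few lines of Python. (To be clear, the numbers below are made up for illustration – they are not from the paper – and the `iqr` helper is my own; the point is just that one extreme value inflates the standard deviation dramatically while barely moving the inter-quartile range.)

```python
import statistics

# Toy "ensemble spread" at one grid point (invented values for illustration).
values = [1.0, 1.2, 0.9, 1.1, 1.0, 1.05, 0.95]
# The same values plus one extreme member, standing in for the masked
# ensemble member that distorted the spread in the tropics.
with_outlier = values + [10.0]

def iqr(data):
    """Inter-quartile range: Q3 - Q1, a robust measure of spread."""
    q1, _, q3 = statistics.quantiles(data, n=4)
    return q3 - q1

# The standard deviation blows up with the outlier...
print(statistics.stdev(values), statistics.stdev(with_outlier))
# ...while the inter-quartile range barely changes.
print(iqr(values), iqr(with_outlier))
```

Running this, the standard deviation is roughly thirty times larger once the outlier is included, while the IQR shifts only slightly – which is exactly why the IQR is the safer summary of spread for a skewed or contaminated distribution.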
The paper was accepted after the second review but isn’t online yet, so I didn’t know whether the authors had taken my advice…until today (thanks to a virtual conference where the authors presented that very work). They did do what I suggested. I’m happy to say the new inter-quartile range plot looks great. The key points of the paper still remain, but the enhanced standard deviation in the tropics that couldn’t really be explained disappears. Despite the initial crippling self-doubt, I’m glad I stuck to my guns.