The big news among climate change skeptics these days is “Climategate” – an incident in which years of internal emails and documents from a well-known climate research group at a British university were leaked to the Internet. Skeptics claim that quotes from the emails and Fortran source code offer proof that the researchers regularly manipulated data to produce the desired conclusions of global warming, and aggressively attempted to suppress research in the academic community that did not agree with their views. Supporters counter that the quotes are being taken out of context and in fact just represent standard research methodology.
From what I’ve seen of the leaked content, the latter seems more likely. The quotes seem purposely misinterpreted by those with well-documented agendas, and while much of the leaked content is unfriendly and at times sloppy, it is consistent with what I’ve seen of the world of academia. But that makes me sad.
(note: all of what I’m about to comment on comes from my fairly limited experience in the academic research world… feel free to comment if you know better)
The goal of the average researcher, particularly in a university setting, is to get published. The more the better – both your current and future bosses will judge you largely based on what you’ve published. Most useful publications come in two forms: a journal article (think a magazine article, but longer and more boring), or a conference paper (a shorter article that trades the extended journal review process for a live presentation/Q&A). Which forum is favored depends on the subject area, and certain conferences or journals are more prestigious than others. For example, getting a paper you wrote published in ‘Nature’ could be the high point of your career, while presenting at a random conference in Maui may actually hurt your credibility.
“Publish or perish,” the saying goes.
The determination of what gets published in a refereed scientific venue is called the ‘peer review’ process. The editor chooses several experts in the field in question to review submissions. These submissions are usually articles submitted by researchers, but are sometimes just abstracts, with a promise to write the rest of the article later. The experts review the submissions and provide their opinion as to whether the research makes sense as described. The editor considers these reviews, and uses them to choose the most credible research to include.
Unfortunately, the process is, in all but a few cases, horribly broken. Reviewers have their own agendas, usually based around their own ability to get published, or simply their planet-sized egos. This means that the decision to accept or reject a paper is based not on the research itself, but rather on the names of the authors, on whether the research agrees with the reviewers’ personal opinions, or on whether their own work is referenced.
Even if you assume the basic honesty and competence of the reviewers, and consider only lengthy full-text journal article reviews, the fact remains that the review is cursory. At best there is enough time to check for obvious logical flaws, or for a few key related works that either duplicate or conflict with the research. There is simply neither the time nor the motivation to seriously examine and test the research claims as part of the peer review process.
Then the paper gets published. For the vast majority of published content, that’s the end of it. The paper’s popularity can be roughly gauged by how many other papers reference it (which, given the issues in the peer review process, doesn’t mean much), but otherwise nobody will care to critically consider the conclusions. Because, in the end, why challenge someone else’s conclusions when you can just publish your own?
My question is: given that the scientific method’s bread and butter is validating hypotheses, why don’t we bother to verify our own publications?
My thoughts on improving the system involve adding feedback from the research community at large. Realistically, the research the average master’s student does is not going to significantly advance the curtain of knowledge… maybe at best poke it a bit. Rather than have them do bitchwork for PhDs and write nominally useful papers, why not encourage them to publish “verification papers”? 1) choose one specific paper to analyze, 2) recreate its conditions/technique, 3) verify that the results match the stated findings of the paper, 4) whether they match or not, publish the findings in a refereed journal specifically designed for such content. The point is specifically NOT to introduce new evidence, but only to evaluate the existing evidence as presented. This gives even junior researchers something they should excel at with light supervision, dramatically increases the number of refereed publications they author, and improves their understanding of the body of knowledge.
This verification in turn benefits every stage of the process:
- Papers will necessarily have to be written with sufficient detail to reproduce the results. Any paper that doesn’t provide it is liable to be set aflame by eager verifiers.
- Reviewers have a level of accountability, as their recommendation of a paper is as much up for review as the paper itself. A reviewer that recommends a paper that gets discredited is unlikely to be a popular choice for reviewer in the future.
- Readers can have confidence in a paper they read, knowing that others have evaluated and confirmed the results. While a body of research with similar conclusions provides similar confidence, it requires a level of analysis of its own (the source of the ever-popular ‘literature survey’), and even then only confirms the common elements.
- Authors have an avenue for publication that allows them to discuss existing work without having to package that discussion as half-baked new ideas or conflicting views. If they disagree with an idea, they have an ideal venue to voice specific concerns.
- Authors are rewarded for thorough and well-documented research through positive public feedback.
For such a system to be effective, it must be extensive. It would reasonably require most publications to receive multiple verification reviews within a year of publication. Insufficient feedback reduces the rewards for excellence and encourages the same egocentric approach that plagues peer review.
If such a system were in place, Climategate would be a non-event. It wouldn’t matter what schemes the researchers were plotting, nor what shortcuts were taken in their methods. You’d have an entire body of independent research already in place validating the accuracy of every last number, and backing every conclusion, point for point.