
August 30, 2005

Comments

John

Although I do think that reviewers are lying down on the job too much, what I worry most about is the results that aren't published because they don't fit with prevailing wisdom. In this era of government funding, publication bias can really skew one's view of the natural world. If I remember correctly, the field of the mechanics of muscle contraction was long dominated by strong personalities who wouldn't allow dissenting views to be published. They aren't the only ones, though.

Holly

My reaction to this is, "Of course most published scientific research findings are wrong." In non-theoretical science, novel research involves coming up with a theory and performing some basic research and then publishing. Often that research is either drawn from previous research designed to answer another question, but even when the research is fresh from the lab, it is a relatively short term study of limited size, at least before that initial publishing. So the results are published, valid as far as they go, but they have not accounted for all variables and/or they just didn't track a large enough sample size to really cover random chance. So the next guy sees the cool idea and reruns with different conditions and different issues and learns that the results aren't quite so clear cut. It is how progress is made. One paper may be seminal to a field, but it is never trusted until other people have confirmed the results for themselves.

And the situation is even worse in theoretical science areas, where the proofs may be mathematically robust, but the underlying model is itself shifting and unproven.

As for computer science being science, I think what you are saying is that theoretical computer science, with its mathematical arguments and no practical implementations, doesn't count as science. But mathematical proof does count as proof for science (see also the vast majority of theoretical physics). Plus you have the advantage of having as your model something built for the task: you know that the model of computing you are working against is the model of computing that is actually out there.
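Holly's point about sample sizes too small to cover random chance can be made concrete with a quick simulation. This is only a rough sketch with made-up numbers (studies of 20 subjects each, only 10% of tested hypotheses actually true, and only statistically significant results treated as publishable), but under those assumptions a sizable share of the published positive findings turn out to be false even though every study was run honestly.

# Rough sketch: many small studies, most tested hypotheses false,
# only "significant" results get published. All numbers are illustrative.
import random
import statistics

random.seed(0)
N_PER_STUDY = 20      # subjects per study (assumed small sample)
N_STUDIES = 5000      # number of simulated studies
TRUE_EFFECT = 0.5     # effect size when the hypothesis really holds
P_TRUE = 0.1          # assumed share of tested hypotheses that are true

false_pos = true_pos = 0
for _ in range(N_STUDIES):
    is_real = random.random() < P_TRUE
    effect = TRUE_EFFECT if is_real else 0.0
    sample = [random.gauss(effect, 1.0) for _ in range(N_PER_STUDY)]
    mean = statistics.mean(sample)
    se = statistics.stdev(sample) / N_PER_STUDY ** 0.5
    if mean / se > 1.96:              # crude "statistically significant" cutoff
        if is_real:
            true_pos += 1
        else:
            false_pos += 1

published = true_pos + false_pos
print(f"'Significant' (publishable) findings: {published}")
print(f"Fraction of those that are actually false: {false_pos / published:.2f}")

With these particular numbers, roughly a quarter of the "publishable" results are false positives, and the picture only gets worse as the studies get smaller or the tested hypotheses get less likely a priori.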

Brian

My reaction to this is that Sol Snyder is a bit of a scientific dilettante, in that he's made big initial contributions to a number of fields, and then moved on to something completely different. So it's no surprise that he's looking for ideas over facts...

I think it's a mistake, however, to get too hung up on the word "wrong" here. In the interest of interpretive charity, I think he means wrong in the sense of something that turns out not to be supported by other data or subsequent experimentation, and not wrong in the sense of something that's totally mistaken and shouldn't have gotten past the reviewers. He's right that one shouldn't read a paper like a textbook. It's entirely possible, inevitable even, that good hypotheses will be superseded as more data are collected, not to mention the outright blunders that some reviewer should have caught!

To me, 10% of papers not testing their hypotheses seems way too high, although I suppose there are instances where that's not a gross dereliction of scientific duty (maybe many more than I realize). A restrictive (arguably over-restrictive) definition of science would require that all hypotheses be tested.

Holly makes some excellent points.
