Update (4/25): Lightly edited in response to Uriel's comments
Every blog on the planet has picked up the story of SCIgen. MIT students Jeremy Stribling, Max Krohn, and Dan Aguayo wrote an automatic CS paper generator and used its output [pdf] to Sokalize the infamous computing spamference WMSCI 2005. Someone sent a sharply worded inquiry to the conference organizer, "Professor" Nagib Callaos. Callaos wrote back with a mind-boggling, rambling rationalization [pdf], written in full-bore buzzwordia academica. Most, but not all, of the letter now appears on the WMSCI web site. (All spelling and grammatical errors in the following quotations are in the original PDF letter.)
In our acceptance e-mails we were very explicit about the reasons we had in accepting a small percentage of papers as “non-reviewed papers”. [...] We felt it was not fair, not even ethical, to refuse a paper, which refusal was not suggested by its reviewers.
In most computer science conferences and journals (APS notwithstanding), it's considered unfair, even unethical, to accept a paper whose acceptance was not suggested by its reviewers.
Personally I find it difficult to believe that no one on the absotively geenormous WMSCI program committee was willing to read each paper that was submitted. More to the point, Callaos seems unaware of the differences between “review” and “sanity check”. It's one thing to accept some submissions without review; it's quite another to accept papers without reading them at all.
Mathematics conference talks are typically accepted based on a 50-word abstract (and the reputation of the speaker). It is common and accepted practice for presentations to describe preliminary results, which are not assumed to be correct until a paper is vetted by the community. These abstracts are by no means “reviewed”. Unlike in computer science, math talk abstracts rarely appear in proceedings, and they do not seriously figure in hiring and tenure decisions. Nevertheless, someone always checks the abstract to make sure it fits within the scope of the conference, or at least that the submitter hasn't gone off his meds. And yes, sometimes junk gets through (even at the most tightly reviewed conferences), but the audience at least expects that the organizers made a good-faith effort to weed out nonsense.
Not so in this case. By accepting the fake paper—by allowing papers to be accepted without human intervention of any kind—the WMSCI organizers have destroyed any expectation that the papers they accept are anything but crap. Anyone who has published a paper in WMSCI now has a large albatross in their CV. No doubt some of those authors naïvely submitted to WMSCI in good faith, expecting that they would get something more than 20 minutes in front of an audience for their $370, as they would for any reputable computer science conference.
The author(s) of a fake paper accepted as a non re-reviewed one has complete responsibility on the content of their paper.
Well, sure. But that's also true of non-fake papers that have undergone formal peer review. It doesn't matter whether a paper has been formally refereed by armies of experts; if it's wrong, it's the author's fault, not the reviewers'.
If you check the web you can find many conferences accepting reviewed and non-reviewed papers.
I'm not aware of any computer science conference that publishes papers in its proceedings with no reviewing whatsoever. (And if such conferences do exist, so what? If all your friends jumped off a cliff yadda yadda?) Nor am I aware of conferences that charge a separate registration fee for each accepted paper, or that require a fee to publish a paper even if the author does not actually attend the conference. (Admittedly, I do know of conferences that allow each registrant at most one talk, but exceptions are not for sale, and they don't publish proceedings.)
Different kinds of reasoning can be found in the specialized literature on the subject, explaining why non-reviewed papers might, and even should, accepted. Robin and Burke (1987, Peer review in medical journals, 91(2), 252-255), for example, affirm with regards to journals, that “Editors should reserve space for articles... that receive poor review...they should publish unreviewed material...” (In A. C. Weller, 2001 Editorial Peer Review, Its Strengths and weaknesses, p.371)
(Gotta love those ellipses. What are they hiding, I wonder?)
Somewhat to my surprise, the book Callaos cites actually exists, but Callaos is seriously misrepresenting its content. Ann Weller's book is a systematic review of editorial review practices, primarily in the medical sciences, discussing several alternatives to traditional peer review, but never suggesting that scientific work should be published without review of any kind. In a letter responding to a review of her book in Medscape General Medicine, Weller writes:
The Netprints statement quoted by Anderson, that "[c]asual readers should not act on findings" while the manuscript is on the Web site during the peer-review process is, I think, problematic. I believe that medical findings should not be published (made public) until they have been subjected to peer review. To quote from my monograph: "There are those who suggest that the traditional role of editorial peer review in the publication process be eliminated. If eliminated, there would be no system of quality control and this important point should not be lost on those who want to replace peer review and support an open, nonvetted system of communication." (page 321). I was pleased to discover, in my analysis of studies of electronic publishing, that "[t]he little data available indicate that peer review of ejournals is similar to the traditional process of editorial peer review." (page 303) [...] If new electronic models of peer review do surface, any new model must maintain the integrity of science and scholarly communication. In short, medical studies, whether electronic or print, need to be subjected to peer review prior to publication.
Even if the citation were honest, citing a study of peer review in medical journals to justify an acceptance policy at a computer science conference seems a bit specious. Computing and medicine have very different cultural standards for scholarly communication. But Callaos appears to be bolstering his argument by citing an authority that takes exactly the opposite position from the one he argues. Shame, shame, shame.
(To be fair, I have not read Weller's book. My picture of its content comes from the multiple book reviews and other commentary by Weller sprinkled around the web. It's also true that Callaos isn't citing Weller herself, but what appears to be a minority opinion. Frankly, this smells a bit like citing a quotation by a creationist in a book by Stephen Jay Gould. But I repeat, I haven't read the book itself.)
So, we are making the commitment to send submitted papers to reviewers, but we cannot assure that the reviewers will make their reviews on time, because this is not in our hands.
What exactly is the program committee's job, then?
As you and most scholars know, and as it has been repetitively written in specialized books and articles on the subject, the reviewing process is formal for journals by non-formal, or informal for conference proceedings, because the timeliness of the proceedings publications and because they represent a place to publish before sending a paper to a journal.
This point is correct but irrelevant. The informality of the reviewing process is not the issue, but rather its non-existence.
After examining several definitions of the phrase “peer-reviewed journal”, Weller (2002, Editorial Peer Review, p. 16) states that “These definitions contain a common element in that they each require some type of review of a manustript other than the editor. Some definitions are more presciptive than others, incorporating the number of processes and requirement. These definitions do not adress such issue as the percentage of material in a journal that should be peer reviewed, or many other details of the process.”
The self-serving irrelevance of this quotation is breathtaking. Again, Weller is discussing publication of medical research, not computer science research. In computer science, the phrase “refereed journal” has a straightforward, unambiguous meaning: Without exception, every paper published in a refereed computer science journal is refereed.
On the WMSCI web page, after complaining about the conference site being hacked, Callaos adds more specious rationalization:
The quality level of WMSCI conferences has been tested for about 9 years by their participants who, in many cases congratulated the Organization and the Program Committee. How could you otherwise explain the conference attendance increase from 45 to a range of 900-1150 attendees in the last years?
So there are a lot of people desperate for publications. The popularity of the conference is no indication of its quality; if anything, the correlation is negative. A better (but still problematic) metric is citations—How often are WMSCI papers actually cited in the computing literature? Not very often.
DeBakey (1990, Journal peer reviewing. Anonymity or disclosure? Archives of Ophthalmology, 108(3), 345-349) asked “is a reviewer of a manuscript…always a peer: a person who has equal standing with another, as in rank, class or age?” So, according to this definition of peer (equal standing of academic rank, for example) we are definitely not making “peer reviews”, and this kind of “peer reviews” is definitely not the base of our paper acceptance policy. We have no feasible way of knowing if the reviewers have the same academic ranks as those of the authors of the paper being reviewed.
What definition? DeBakey isn't offering a definition; he's asking a question, and the answer to the question is OF COURSE NOT, YOU IDIOT! followed by a smack to the back of the head. The idea that age, class, or academic rank could play a role in the paper review process is mind-boggling. There is no expectation in computer science that papers by full professors are reviewed by full professors, or that papers by graduate students are reviewed by graduate students, or (Loki forbid) that papers by 30-something upper-middle-class Republicans are only reviewed by 30-something upper-middle-class Republicans. Even if matching the academic ranks of submitters and reviewers were desirable, it would be impossible. Academic ranks have very different meanings in different countries, and even among different universities in the same country. “Peer” means anyone who, in the opinion of the editor or program committee, has the expertise to make an informed judgement about the quality of the paper. (I'd like to see Callaos argue in court that “a jury of his peers” means twelve Venezuelan former professors of computer science.)
The most charitable interpretation of Callaos' responses is that he and his cohorts earnestly believe that they are providing a much-needed outlet to publish papers without the benefit of peer review of any kind. Perhaps they really believe that their actions are justified by their references to irrelevant literature and non-standard definitions of common English words. Perhaps they are not malicious, but merely gullible.
Perhaps they really do have the best of intentions. But we all know where that road leads.
But the most likely interpretation is that they are running a straightforward scam to exchange money for CV bullets. After all, Callaos still gives himself the title "Professor" even though he appears to have retired from his former faculty position at Simon Bolivar University; his conference charges separate registration/publication fees for each accepted paper; the conference has a history of accepting fake papers; the acceptance letters are sent from a Venezuelan email address, but include an office building in Orlando as a return address; the conference web site is full of meaningless pseudo-intellectual gibberish; and most tellingly, they send ridiculous amounts of spam to attract submissions. These are not the actions of a reputable publisher.
Okay, I've kicked the dead horse long enough. But I can't resist one last quotation, this time from the first editorial of the Journal of Systemics, Cybernetics and Informatics, of which Professor Callaos is the Editor-in-Chief. Read the whole thing, and be amazed.
Our methodological strategy will be a systemic, not a systematic one. To organize the editorial process and to manage the publishing operational activities will be done with an open, adaptable and evolutionary methodological system. It will have the flexibility required to adapt the journal, its editorial policy, its organizational process and its management to the dynamics of its related areas and disciplines, to changes produced by the inherent learning process involved, and to the uncertainty of the environment. It would be a matter of applying Ashby’s Requisite Variety principle, concepts related to Prigogine’s dissipative structures and other basic principles found in General Systems Theory, General Systems Methodology and Cybernetics. Consequently, we will not have a deterministic and a completely pre-conceived systematic editorial methodology, nor completely pre-determined and static editorial policy, but, in both cases, they will be open, flexible, adaptable and evolutionary.