Over at the Geomblog, an anonymous commenter complains about incestuousness in the SODA selection process:
I have noticed that most of the accepted papers in SODA/STOC/FOCS seem to be from one of the IVY league schools or from one of the established Labs. Of course, one can argue that it is natural that top schools are the only ones that are undertaking research in theory/algorithms. But in my personal opinion, I have seen papers with big names or big organizations getting accepted very often, in spite of their mediocre content. Is it absolutely true that the review process is blind? I don't believe it to be so. Also, why in the world do we see the same set of names on program committees with little/no permutations. If you just happen to look at, say, X's webpage, you can see that they have been in pretty much every other theory conference's program committee. I believe this creates an atmosphere for internal politics in reviewing papers. I know a lot of people who deserve to be in program committees. On talking to them, I even heard that they pretty much gave up on SODA/STOC/FOCS and they are rather submitting their papers to lesser known theory conferences.
Suresh argues against some of these complaints by sampling the papers and past committees. (“Ivy League”?!) Not surprisingly, the affiliations seem to follow a Zipf's Law distribution, modulo the tiny sample size. Most SODA/STOC/FOCS PC members are first- or second-timers, but that distribution also seems to have a long tail.
The review process is not blind. Reviews are anonymous by default, although there are exceptions, either because reviewers reveal themselves or because, for some papers, there are really only a few obvious reviewers. But, unlike many other areas of computer science, theory conferences don't ask for (or even accept?) anonymous submissions.
It is quite true that well-established authors have a better chance of getting their papers accepted than novices. One reason is natural selection—rejected authors tend to stop submitting to that conference. But another is that reviewers rely on authors' reputations to guide their decisions. An author with a good track record is much more likely to get the benefit of the doubt than someone the committee has never heard of. And yes, sometimes this means that mediocre papers by famous people are accepted when they shouldn't be.
This isn't (necessarily) politics; it's more likely simple human laziness.
“Elitism” at conferences is by no means restricted to theoretical computer science. I've heard precisely the same complaint levelled against SIGGRAPH and SIGMOD, much higher-profile (richer) conferences than SODA, with similar concentrations of authorship. Closer to home, Canadian computational geometers created CCCG when they got sick of being rejected from SOCG (a.k.a. “Sharir-fest”). The solid modeling community has consistently ignored our overtures at co-locating SOCG with their ACM conference, in part, I suspect, because of our dismal track record at accepting “applied” papers. And there have been several years when it was hard to get a computational geometry paper into SODA because they “don't appeal to a broad enough audience”, or more brutally, because “they already have their own conference”.
This is regrettable, but natural, human behavior. Every community, research or otherwise, develops priorities and social standards that eventually evolve into (mostly) unconscious snobbery. Despite a long history of successful collaboration, algorithms and database folks tend to look down on each other, just like geeks and jocks, or MDs and PhDs, or the French and Americans. Good papers are rejected from SODA or STOC or FOCS, not because they're incorrect or shallow or incremental, but because they don't meet community standards.
These standards can be profoundly unintuitive, even counterproductive. Lance Fortnow touched on this issue a few months ago:
Given the same theorem, the community benefits from a simple proof over a complicated proof. [STOC/FOCS p]rogram committees look for hard results, so if they see a very simple proof, it can count against you.
As Lance put it, if you want your paper accepted to STOC or FOCS, “You have to play the game.” I reacted rather negatively to Lance's suggestion at the time—somehow it just seems wrong to obfuscate a paper in the hopes that the PC will like it more—but as a more general piece of advice, I have to agree. If you want to be accepted by any community, research or otherwise, you have to play its games.
The trick, then, is to find a research community whose standards/priorities/interests/ethics/talents/personality/whatever best mesh with your own. (I consider myself extraordinarily lucky in this respect!) Not perfectly, of course—that's impossible. Like any other healthy relationship, you have to make compromises, but not so many that you lose your own identity.
But the compromise has to go both ways. To stay healthy, research communities have to acknowledge the unintended effects of their “standards” and work toward changing them if they do more harm than good. As Adam Buchsbaum points out at the end of Suresh's post,
It thus seems to me that the perception of incestuousness among FOCS, STOC, and SODA PCs is in fact a mis-perception. Still, we have to address the mis-perception by ongoing action, not simply historical statistics.
I couldn't agree more.
The same comments may also be made about other conferences, but some -- e.g., SIGMOD -- have been double blind for a while now, meaning that anonymous [and anonymized] submissions are enforced. This at least makes it harder to argue that the identity of the authors is directly influencing decisions (unless one invokes some paranoid conspiracy theories). Perhaps SODA should experiment with this approach. Of course, the difficulty with such "experiments" is that it is very hard to run any meaningful "control" to compare against.
graham
Posted by: | September 13, 2004 at 09:36 AM
Double blind submissions are overrated. Take a look at Luc Devroye's page for a screed on this (admittedly written for journals, but some of the same principles apply). Double blind submission is like checking for nail files at the airport: it pretends to address the issue while really not doing so at all.
Posted by: Suresh | September 13, 2004 at 04:29 PM