Principles for SIGCOMM

Thoughts On SIGCOMM Reviewing

As we mentioned in the previous note, there is much agreement between the report document and the principles and proposal documents, and there are many topics where the report document opines and the other two are silent. However, the one topic of stark disagreement is reviewing—a disagreement that stems from differing assumptions about what problem we should be solving.

In our view, while there are some papers where the decision to accept or reject turns on specific technical issues (e.g., shortcomings in measurement methodologies or fatal technical errors), the fate of many (if not most) papers comes down to inherently subjective judgments about the importance of the problem, the utility of the solution, and the required level of validation. On such issues, reviewers are peers with their own opinions, not purveyors of infallible judgments. Moreover, on current SIGCOMM PCs, these opinions are strongly shaped by what PC members perceive as the prevailing community standards—i.e., what kinds of papers are most easily accepted by SIGCOMM. We think the single biggest problem with SIGCOMM is the overly narrow view our community holds of what constitutes an important problem, and the unnecessarily harsh view of the utility of solutions and their validation when the work does not target well-defined, easily quantifiable, and urgent issues in today’s networks.

This is a cultural issue of how we change the prevailing community standards about what is considered important, useful, and valid, not a purely scientific issue of whether the results and methodology are correct. It must therefore be addressed with measures that will, over time, make our community culture more open-minded on these matters. We hope our proposal to identify and defer to champions will empower positive voices and gradually evolve the dynamics of the PC. Currently, the negative voices on the PC often hold effective “veto power,” and the advocates for acceptance must convince them to change their minds. Our proposal shifts the burden of persuasion, requiring the advocates for rejection to change the minds of the advocates for acceptance. This will alter the kinds of papers that are accepted, which can then change our sense of what is considered important, useful, and valid; this, in turn, can change the research we do and the questions we ask.

This last point is by far the most important: we fear that our community’s current focus on near-term problems will blind us to longer-term issues. As noted in the CCR editorial, “We must avoid becoming the community that, having nurtured the modern Internet, finds itself unable to imagine what comes next.” While the proposed mechanism of accepting papers with a single advocate will likely lead to more acceptances, at least in the short term, we view increasing the number of accepted papers as merely a means to the end of creating a culture that is more open-minded about which questions are deemed important and which solutions are deemed useful, not as an end in itself.

In contrast to this focus on cultural change, much of the report document discusses the reviewing process in more scientific terms. For instance, the report states that “We need stronger analytics for reviewing”, but suggests no identifiable metrics for bad reviews except lateness, and doesn’t grapple with the far more subtle issues concerning differences of opinion about importance, utility, and validation. In addition, the section on “Reviewing should make the assumption the paper will be accepted” argues that we should see reviewing as a collaboration between authors and reviewers to achieve the improvements needed for acceptance. This laudable perspective is appropriate if the main barriers to acceptance are fatal technical flaws that can be corrected in revision, but does not help sort out cases where there are honest differences of opinion about what is interesting and useful. More generally, the report document is silent on the distorting impact of current reviewing trends on the field. In fact, the proposal to use crowdsourcing to scale reviewing will likely reinforce, rather than alleviate, the field’s current conservative practices.

The issue of reviewing is central to the intellectual future of the SIGCOMM conference, as it directly affects which papers are accepted, which in turn shapes what kinds of papers will be written in the future. We urge the community, when considering changes to the SIGCOMM conference, to focus first on broadening our perspective of what is interesting, useful, and valid. To that end, we support the measures in the report (such as providing reviewing guidelines to the community and enabling pushback against toxic reviews) as positive steps toward this goal.


Jul 24, 2023