
7 December 2025

Reviewer-Author Collusion Rings and How to Fight Them

by Andreas Zeller

In 2012, I attended a physical meeting of the program committee responsible for selecting the best scientific papers for the ESEC/FSE 2013 conference in Saint Petersburg, Russia. This meeting was particularly memorable because a few hours in, we discovered that one PC member had apparently given all submissions with Russian authors the highest possible grade, and all other submissions the lowest grade. He was excluded from the meeting on the spot, and all of us spent the night re-reviewing the papers originally assigned to him.

Today, such manipulation would be much more difficult. For one, our conferences routinely employ double-blind reviewing, so reviewers are unaware of the authors’ identities. For another, such behavior is easily detectable as a statistical anomaly. If you want to manipulate the system today, you have to be cleverer. Recently, I again heard rumors of a collusion ring in Software Engineering research – a group of authors and reviewers who agree to favorably review each other’s papers. I have no evidence that such a ring actually exists, but if I were to set one up, here’s how:

  1. Use a messaging app like WhatsApp or WeChat to create a user group.
  2. Give the group an innocuous name, say “Researchers in XYZ”, where XYZ is some currently popular research topic.
  3. Post papers you wrote, discuss research results, complain about reviewers – in short, talk about whatever researchers talk about with each other.

So, what could be wrong with this? The important thing is the implicit understanding that all group members will review each other’s papers favorably. Of course, this would never be admitted or discussed in my group: the first rule of Fight Club is you do not talk about Fight Club. If someone were to check the group, it would look like just a regular chat group of XYZ lovers, nothing to see here. And if someone were to find out that all members of my group tend to favorably review numerous “XYZ” submissions, I’d declare that this is easily explained by all of us in the group working in the area and appreciating it (note: the area, not the members). The concept of plausible deniability is crucial for maintaining appearances – it is a standard feature of organized crime, which often operates behind legitimate-looking businesses. Go and prove that I’m doing anything wrong!

Any participation in such a scheme would be highly unethical for any researcher. The research community and its institutions impose severe sanctions on such cheaters: their reputation and career will be over, plain and simple. However, this requires that such behavior be detected in the first place. As recent research has shown, it is practically impossible to detect such collusion rings solely by observing the review process. Consequently, such rings may already be operating under the radar.
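
Note the contrast with the 2012 incident above: crude bias leaves a trace right in the review scores, whereas a collusion ring does not. As a minimal sketch of such an anomaly check – assuming hypothetical score records of the form (reviewer, author group, score) – a few lines of Python suffice to flag a reviewer whose grades split along some author attribute:

```python
from collections import defaultdict
from statistics import mean

def score_gaps(reviews):
    """reviews: (reviewer, group, score) tuples.
    Returns, per reviewer, the gap between their highest and lowest
    mean score across author groups."""
    by_reviewer = defaultdict(lambda: defaultdict(list))
    for reviewer, group, score in reviews:
        by_reviewer[reviewer][group].append(score)
    gaps = {}
    for reviewer, by_group in by_reviewer.items():
        if len(by_group) >= 2:  # needs at least two groups to compare
            means = [mean(scores) for scores in by_group.values()]
            gaps[reviewer] = max(means) - min(means)
    return gaps

# Hypothetical grades: R1 splits perfectly along the group attribute
reviews = [("R1", "A", 5), ("R1", "A", 5), ("R1", "B", 1), ("R1", "B", 1),
           ("R2", "A", 3), ("R2", "B", 4)]
print(score_gaps(reviews))  # -> {'R1': 4, 'R2': 1}
```

A reviewer handing one group straight top grades and another straight bottom grades stands out immediately; a ring whose members simply champion each other’s papers does not.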

Should we now go and search for such collusion rings? Not unless we have actual evidence for their existence! The Software Engineering research community highly values its culture of mutual trust and respect, and we should do our very best not to endanger it with an unmotivated witch hunt. However, there are a few measures we can take to prevent such collusion in the first place, and to mitigate it where it occurs.

  1. Eliminate bidding for papers. Collusion rings work because reviewers can control which papers they will review. This happens through “bidding”: reviewers express their interest in reviewing a specific paper. The idea is that this leads to more competent and engaged reviewers being assigned to a paper. However, it also opens the door for manipulation: as a reviewer, placing the highest possible bid (say, +100 in HotCRP) is a proven way to get that paper assigned. Instead, assign papers to reviewers based solely on their expertise and past work (see the matching sketch after this list).
  2. Alternatively, introduce discrete bidding. Short of eliminating bidding completely, change it to use a small number of discrete values. USENIX 2025, for instance, allowed values ranging from 3 for a strong preference to –1 for being unsuitable to review a paper. 90–98% of the assignments (depending on the round and cycle) were based on bid values of 2 or 3.
  3. Since declared expertise and declared representative papers can also be used to steer submissions towards particular reviewers – say, through carefully chosen representative papers – both should be checked by PC and area chairs. The Toronto Paper Matching System (TPMS), which does such matches automatically based on representative papers (so far provided by the reviewers themselves), could instead be filled automatically – say, with the five most cited and five most recent peer-reviewed papers.
  4. Maximize diversity across reviewers and between reviewers and authors. You still want reviewers to be competent for the papers they review, but as collusion rings would typically be organized around people who know and trust each other, make sure there is also diversity in nationality and geographic location, as well as in co-author relationships. This cannot be an absolute rule: if a conference has, say, 40% of authors from country C, you cannot have all reviewers from outside of C – but you can still try to maximize it. Also, no-one said that all reviewers of a paper on XYZ have to be XYZ experts – some external perspective on a work (or a field) surely won’t hurt?
  5. Protect the anonymity of reviewers. Except for PC and area chairs, no-one should see which reviewers are assigned to a paper. To facilitate discussions and decision making, a paper’s co-reviewers can (and IMHO should) be de-anonymized to a reviewer R, but only after R’s first own review is in.
  6. Check for suspicious access patterns. As PC chair, check access logs for uncommon patterns of reviewer behavior, including (a) submitting empty reviews (to find out co-reviewer identities and stances), (b) accessing papers or reviews one is not assigned to, (c) placing large numbers of low bids, or (d) declaring lots of conflicts; a sketch of such a log check follows after this list. If a PC member shows suspicious behavior, you don’t have to publicly kick them out, but you can quarantine them, giving them fewer or no papers to review.
  7. Improve rules on conflicts of interest. Reviewers who know the authorship of a submission before publication (say, from within a paper discussion group) must declare a conflict with this work. Be sure to verify declared conflicts, say against DBLP co-authorship data (see the check after this list). In 2025, USENIX kicked out 15 PC members who had a large number (up to 131) of undeclared conflicts.
  8. Improve rules on contacting reviewers. Currently, authors are not allowed to contact reviewers about their submissions; doing so typically results in the submission being desk-rejected. Such rules must be enforced and extended to cover groups in which reviewers might be present. PC members must report such contact attempts; whether they do so could be tested using decoys.
  9. Have a separate chair at a conference whose role would be to ensure compliance with good scientific practice, including the above rules.
  10. Consider a journal review model. Here, reviewers have no say at all about what they’d like to review. Instead, an associate editor picks the best researchers for a paper’s topic and asks them for a review. Obviously, editors should be carefully vetted and selected.
  11. The standard way of revealing collusion is through whistleblowers, who report the actions and participants to an authority. The Security community, for instance, has a committee where anyone can report unethical behavior. Making whistleblowers exempt from sanctions would increase the chances of breaking collusion rings.
  12. Finally, the root of all evil: academic promotion should be based on quality, not quantity. The whole point of a collusion ring is to maximize the number of publications; remove the demand, and the crime dries up. How to achieve this is another discussion, but if collusion rings are a consequence of quantity-based evaluation, there is something amiss.
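
To make measures 1 and 3 concrete, here is a minimal sketch of bid-free assignment, matching submissions against reviewer profiles by plain bag-of-words similarity. All names and texts are hypothetical, and a real system (such as TPMS) would use topic models or embeddings over the reviewers’ actual publication record:

```python
import math
import re
from collections import Counter

def word_vector(text):
    """Bag-of-words vector over lowercased word tokens."""
    return Counter(re.findall(r"[a-z]+", text.lower()))

def cosine(a, b):
    """Cosine similarity between two sparse word-count vectors."""
    dot = sum(a[w] * b[w] for w in a.keys() & b.keys())
    norm = math.sqrt(sum(v * v for v in a.values())) \
         * math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def assign_reviewers(submission, profiles, conflicts, k=3):
    """Pick the k most similar non-conflicted reviewers for a submission."""
    similarity = {name: cosine(word_vector(submission), word_vector(profile))
                  for name, profile in profiles.items()
                  if name not in conflicts}
    return sorted(similarity, key=similarity.get, reverse=True)[:k]

# Hypothetical reviewer profiles, e.g. concatenated abstracts of past papers
profiles = {
    "R1": "fuzzing grammar-based test generation input coverage",
    "R2": "automated program repair fault localization debugging",
    "R3": "static analysis taint tracking web security",
}
print(assign_reviewers("a grammar-based fuzzer for protocol testing",
                       profiles, conflicts={"R2"}, k=2))
# -> ['R1', 'R3']
```

The key property is that reviewers provide no input at assignment time: their profile is derived from their published record, not from their bids.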
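
For the access-pattern checks of measure 6, a script along the following lines could run over conference-system logs. The event format and the threshold are assumptions, not actual HotCRP output; adapt them to whatever your system exports:

```python
from collections import defaultdict

def flag_suspicious(events, assignments, max_low_bids=20):
    """events: (reviewer, action, paper) tuples, e.g. from system logs;
    assignments: reviewer -> set of assigned paper IDs."""
    low_bids = defaultdict(int)
    flags = defaultdict(list)
    for reviewer, action, paper in events:
        if action == "view" and paper not in assignments.get(reviewer, set()):
            flags[reviewer].append(f"accessed unassigned paper {paper}")
        elif action == "empty_review":
            flags[reviewer].append(f"submitted empty review for paper {paper}")
        elif action == "low_bid":
            low_bids[reviewer] += 1
    for reviewer, count in low_bids.items():
        if count > max_low_bids:
            flags[reviewer].append(f"placed {count} low bids")
    return dict(flags)

# Hypothetical log: R1 peeks at paper 7 without being assigned to it
events = [("R1", "view", 7), ("R1", "empty_review", 3), ("R2", "view", 5)]
assignments = {"R1": {3}, "R2": {5}}
print(flag_suspicious(events, assignments))
# -> {'R1': ['accessed unassigned paper 7',
#            'submitted empty review for paper 3']}
```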
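
And for measure 7, declared conflicts can be cross-checked against public co-authorship data. The sketch below queries DBLP’s publication search API for papers matching both names; since name-based matching is ambiguous, treat hits as leads for a manual check, not as proof of an undeclared conflict:

```python
import requests

DBLP_API = "https://dblp.org/search/publ/api"

def have_coauthored(name_a, name_b):
    """True if DBLP's publication search finds at least one paper
    matching both names. Name-based matching is approximate."""
    response = requests.get(DBLP_API,
                            params={"q": f"{name_a} {name_b}",
                                    "format": "json"},
                            timeout=10)
    response.raise_for_status()
    hits = response.json()["result"]["hits"]
    return int(hits["@total"]) > 0

def undeclared_conflicts(reviewer, authors, declared):
    """Authors the reviewer has apparently co-published with,
    but did not declare as conflicts."""
    return [author for author in authors
            if author not in declared and have_coauthored(reviewer, author)]
```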

Working around the above measures requires some sophistication, and such sophistication may also leave more potential evidence. However, I tend to believe that the alleged Software Engineering collusion ring is much less sophisticated, and that its members simply exchange their paper numbers and conference names. Since authors are the only ones to know their paper numbers, a single screenshot would show they’re complicit. Such evidence can be used against them today – or in twenty years, at the height of their career. What kind of pressure must these researchers endure to run such risks?

Acknowledgments. Marcel Böhme, Lars Grunske, Giancarlo Pellegrino, Michael Pradel, and Ben Stock provided valuable insights, lessons learned, and important feedback on earlier versions of this post. Thanks a lot!

tags: popular