I regularly skim through digests of comp.risks, a newsgroup devoted to the risks of computing, as it’s directly related to my research and, above all, pretty entertaining. I was recently catching up, and the subject line “A risk of static analysis tools — and refereeing” understandably caught my eye. I reproduce it shamelessly below without attribution, as I don’t know its original source, though it’s included in comp.risks 25.02 (January 14, 2008):

Interesting anecdote: Some years ago a paper on a simple static code analysis tool was submitted to a conference. Two of the three reviewers, both of whom were extremely careful programmers, ran the tool on their own code just to see what would happen.

The tool produced 100% false positives (not 95%, not 99%, but fully 100%). As a result, the paper wasn’t accepted.

The same paper was later submitted to another conference, whose reviewers didn’t try this; there it was accepted and won the best paper award.
