The Importance of Being Wrong (and in the Right Way)

When I was an undergraduate, one of my math professors, Theodore Faticoni, explained to us the difference between a useful wrong proof and a useless wrong proof. When attacking an unsolved problem such as the Riemann hypothesis, a useful wrong proof was wrong, but for reasons nobody expected. Finding the flaw in the proof taught you things about the problem you didn’t previously know. By contrast, a useless wrong proof was wrong for obvious reasons. It didn’t teach you anything new about the problem.

In fact, a useful wrong proof can be far more valuable than a prosaic right proof. For instance, Yves Hellegouarch’s discovery in the 1970s that Fermat’s last theorem was closely related to elliptic curves was far more interesting and fruitful than the eventual use of that knowledge by Wiles and Taylor to prove the theorem. Similarly, and much earlier, Euler’s “proof” of the theorem for the special case with exponent 3 was wrong, but it nonetheless suggested many avenues of attack on the problem for the next couple of centuries. Writing software can be the same.

Fixing bugs isn’t as hard as proving Fermat’s Last Theorem (at least, I’ve never seen a bug that hard), but the principle still applies. Sometimes a wrong fix can be just as useful as, or even more useful than, a correct fix, if it teaches you new things about the problem. For example, a patch that introduces a new algorithm but fails on some edge cases may be a lot more interesting than a correct one that merely fixes a simple fencepost error.

I was recently reminded of this while working on Jaxen. Dominic Krupp had found a bug in a certain XPath query that involved namespaces. I had isolated the bug to the org.jaxen.jdom package, but otherwise I hadn’t been able to figure out where the bug was, much less how to fix it. Then Krupp proposed a patch. The patch didn’t work: it fixed his bug, but it broke about 28 other things. However, the fact that it fixed his bug was interesting. I didn’t initially understand his patch, but it nonetheless told me where to look for the cause of the bug, and where to set the breakpoints in the debugger. It was a useful wrong answer.

Some further investigation deepened the mystery. Once I understood the patch, it looked like it should work. Why didn’t it? Surprisingly, the answer turned out to lie in something I myself had implemented in JDOM some years ago. JDOM uses the flyweight design pattern to manage namespaces, including the “no namespace” namespace. Krupp’s patch was testing for null instead of Namespace.NO_NAMESPACE. A change of less than one line corrected the patch so that everything worked, and I committed the fix. However, I never would have found the problem if I hadn’t first seen Krupp’s incorrect patch. His patch was wrong, but it was a far more fundamental contribution to the eventual fixing of the bug than the small tweak I made.
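The distinction that tripped up the patch can be sketched in a few lines. JDOM’s real Namespace class is more involved than this; the miniature below is a hypothetical illustration (not JDOM’s actual code) of why, under the flyweight pattern, testing against null fails while testing against the shared NO_NAMESPACE instance succeeds.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// A simplified, hypothetical flyweight namespace class, loosely modeled
// on JDOM's Namespace API. One shared instance exists per URI.
public final class Namespace {
    private static final Map<String, Namespace> cache = new ConcurrentHashMap<>();

    // The flyweight representing "no namespace" -- a real object, never null.
    public static final Namespace NO_NAMESPACE = getNamespace("");

    private final String uri;

    private Namespace(String uri) { this.uri = uri; }

    // Factory method: returns the one shared instance for a given URI.
    public static Namespace getNamespace(String uri) {
        return cache.computeIfAbsent(uri, Namespace::new);
    }

    public String getURI() { return uri; }

    public static void main(String[] args) {
        Namespace ns = getNamespace("");
        // Correct test: compare against the flyweight instance.
        System.out.println(ns == NO_NAMESPACE);  // true
        // Incorrect test: an element with no namespace still has a
        // Namespace object, so a null check never matches.
        System.out.println(ns == null);          // false
    }
}
```

With flyweights, identity comparison (`==`) against the shared instance is the idiomatic check; a null test silently takes the wrong branch, which is exactly the kind of bug that survives a quick reading of the patch.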

Of course, not all wrong answers are this useful. Faticoni also told us the cautionary tale of a famous mathematician who’d spent five years working on the Riemann hypothesis only to publish a mistaken proof. Worse, the proof was flawed in an obvious way almost from the first page. Most importantly, the “proof” didn’t break any new ground: everyone already knew that was what the problem was about. Most failed patches and fixes are more like that. They’re broken, and obviously so on a quick inspection. But sometimes we need to pay a little more attention to the wrong answers and not ignore them simply because they don’t work. Sometimes the wrong answer points you straight to the right one.

2 Responses to “The Importance of Being Wrong (and in the Right Way)”

  1. John Cowan Says:

    “Give me a fruitful error any time, full of seeds, bursting with its own corrections. You can keep your sterile truth to yourself.”

    (Vilfredo Pareto on Kepler, quoted by Stephen Jay Gould in Hen’s Teeth and Horse’s Toes)

  2. Gabe Says:

    Debugging is generally a binary search proposition. The best programmer is the one who can most neatly divide the potential problem space in half.

    For particularly challenging problems, running a number of these tests builds up a collection of forensic information that can be interpreted. The trick is being able to interpret all those bits of information simultaneously. I don’t know how many times the solution to a bug should have been obvious from my combined testing, but I wasn’t keeping all the evidence organized in my head.
