This is a discussion between John Ousterhout and Robert Martin, who advocated in "Clean Code" omitting comments and splitting code into extremely small functions. Ousterhout takes him to task by asking Martin to explain an algorithm which Martin presented in "Clean Code", an algorithm that generates a list of prime numbers. It turns out that Martin essentially does not understand his own code because of the way it is written, and his rewrite even introduces a performance regression!
Ousterhout: Do you agree that there should be comments to explain each of these two issues?
Martin: I agree that the algorithm is subtle. Setting the first prime multiple as the square of the prime was deeply mysterious at first. I had to go on an hour-long bike ride to understand it.
[...] The next comment cost me a good 20 minutes of puzzling things out.
[...] I refactored that old algorithm 18 years ago, and I thought all those method and variable names would make my intent clear -- because I understood that algorithm.
[Martin presents a rewrite of the algorithm]
Ousterhout: Unfortunately, this revision of the code creates a serious performance regression: I measured a factor of 3-4x slowdown compared to either of the earlier revisions. The problem is that you changed the processing of a particular candidate from a single loop to two loops (the increaseEach... and candidateIsNot... methods). In the loop from earlier revisions, and in the candidateIsNot method, the loop aborts once the candidate is disqualified (and most candidates are quickly eliminated). However, increaseEach... must examine every entry in primeMultiples. This results in 5-10x as many loop iterations and a 3-4x overall slowdown.
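For readers who have not seen the algorithm, here is a minimal sketch of the scheme under discussion, in Java. The class and method names (PrimeSketch, firstPrimes) and the exact structure are my own reconstruction under assumptions, not Martin's, Knuth's, or Dijkstra's code; it is only meant to show the two subtleties: why each prime's stored multiple starts at its square, and why a single loop with an early abort is fast.

```java
import java.util.ArrayList;
import java.util.List;

public class PrimeSketch {

    // Returns the first n primes using the "running multiples" scheme.
    static List<Integer> firstPrimes(int n) {
        List<Integer> primes = new ArrayList<>();
        List<Long> multiples = new ArrayList<>();   // multiples.get(k) holds the current
                                                    // odd multiple of primes.get(k + 1)
        if (n >= 1) primes.add(2);

        long candidate = 1;
        while (primes.size() < n) {
            candidate += 2;                         // examine odd candidates only

            // Subtlety 1 (the "square of the prime"): a prime p only starts to matter
            // once candidate reaches p*p, because any smaller composite already has a
            // smaller prime factor. So p's entry is initialised to p*p, not to p.
            int next = multiples.size() + 1;        // index of the next odd prime to activate
            if (next < primes.size()) {
                long p = primes.get(next);
                if (candidate == p * p) {
                    multiples.add(p * p);
                }
            }

            // Subtlety 2 (the performance issue): a single loop advances each stored
            // multiple just far enough and aborts as soon as the candidate is hit.
            // Most composites are rejected after very few iterations; splitting this
            // into a full "increase all multiples" pass plus a separate test pass is
            // what caused the 3-4x slowdown Ousterhout measured.
            boolean isPrime = true;
            for (int k = 0; k < multiples.size(); k++) {
                long p = primes.get(k + 1);
                long m = multiples.get(k);
                while (m < candidate) m += 2 * p;   // skip even multiples
                multiples.set(k, m);
                if (m == candidate) {
                    isPrime = false;
                    break;                          // early abort
                }
            }
            if (isPrime) primes.add((int) candidate);
        }
        return primes;
    }

    public static void main(String[] args) {
        System.out.println(firstPrimes(10));        // [2, 3, 5, 7, 11, 13, 17, 19, 23, 29]
    }
}
```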
It gets even more hilarious when one considers where Martin took the algorithm from, and who originally designed it:
Martin took it from Donald E. Knuth's seminal 1984 article on Literate Programming:
http://www.literateprogramming.com/knuthweb.pdf
In this article, Knuth explains that the source code of a program should ideally be understood as a by-product of an explanation directed at humans, one that lays out the reasoning, design, invariants, and so on. He presents a system which can automatically extract and assemble the program source code from such a text.
Even more interesting, the algorithm was not invented by Knuth himself. It was published in 1970 by Edsger Dijkstra in his "Notes on Structured Programming" (with a second edition in 1972).
In this truly fascinating and timeless text, Dijkstra writes about software design by top-down problem decomposition, about proving properties of program modules by analysis, about using invariants to compose larger programs from smaller algorithms and to design new data types, and about how all of this makes software maintainable. He uses the prime number generation algorithm as an extended example, and he stresses multiple times that both the architecture and the invariants need to be documented in their own right to make the code understandable. (If you want the feeling of standing on the shoulders of giants, you should read what Dijkstra, Knuth, and also Tony Hoare and Niklaus Wirth wrote.)
So, Robert Martin is proven wrong here. He does not even understand, and could not properly maintain, the code from his own book. Nor did he realize that his code is hard for others to understand.
(I would highly recommend Ousterhout's book.)
Fair, but it's one that the typical tools for finding bugs (tests and static analysis) cannot actually help with.
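To make that concrete, here is a hypothetical correctness check (it reuses the firstPrimes sketch from earlier; the names are my own) that passes equally well against the fast single-loop revision and the slow two-loop one, because it only asserts what is computed, not how much work the computation takes. Neither such a test nor a typical static analyser would flag the regression Ousterhout measured; only a human reading or profiling the code would.

```java
import java.util.List;

// Hypothetical check against a naive reference; assumes the PrimeSketch class
// from the earlier sketch is available in the same package.
public class PrimeRegressionBlindSpot {

    // Slow but obviously correct reference: trial division.
    static boolean isPrimeNaive(int x) {
        if (x < 2) return false;
        for (int d = 2; (long) d * d <= x; d++) {
            if (x % d == 0) return false;
        }
        return true;
    }

    public static void main(String[] args) {
        List<Integer> primes = PrimeSketch.firstPrimes(1000);
        int expected = 2;
        for (int p : primes) {
            while (!isPrimeNaive(expected)) expected++;   // advance to the next true prime
            if (p != expected) throw new AssertionError("unexpected prime: " + p);
            expected++;
        }
        // Both a fast and a slow revision of the generator satisfy this check;
        // a 3-4x slowdown is invisible to it.
        System.out.println("first 1000 primes are correct");
    }
}
```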