This is a guest article written by friend and colleague, Ian Cottam. For part 1, see https://www.walkingrandomly.com/?p=5435
So why do computer scientists use (i != N) rather than the more common (i < N)?
When I said the former identifies “computer scientists” from others, I meant programmers who have been trained in the use of non-operational formal reasoning about their programs. It’s hard to describe that in a sentence or two, but it is the use of formal logic to construct-by-design and argue the correctness of programs or fragments of programs. It is non-operational because the meaning of a program fragment is derived (only) from the logical meaning of the statements of the programming language. Formal predicate logic is extended by extra rules that say what assignments, while loops, etc., mean in terms of logical proof rules for them.
A simple, and far from complete, example is the role the guard of a while/for loop in C plays in such reasoning.
for (i= 0; i != N; ++i) {
/* do stuff with a[i] */
}
without further thought (i.e. I just use the formal rule that says on loop termination the negation of the loop guard holds), I can now write:
for (i= 0; i != N; ++i) {
/* do stuff with a[i] */
}
/* Here: i == N */
which may well be key to reasoning that all N elements of the array have been processed (and no more). (As I said, lots of further formal details omitted.)
Consider the common form:
for (i= 0; i < N; ++i) {
/* do stuff with a[i] */
}
without further thought, I can now (only) assert:
for (i= 0; i < N; ++i) {
/* do stuff with a[i] */
}
/* Here: i >= N */
That is, I have to examine the loop to conclude the condition I really need in my reasoning: i==N.
Anyway, enough of logic! Let’s get operational again. Some programmers argue that i < N is more “robust” – in some, to me, strange sense – against errors. This belief is nonsense, and yet it is widely held.
Let’s make a slip in our code (for an example where the constant N is 9) when initialising the loop variable i.
for (i= 10; i != N; ++i) {
/* do stuff with a[i] */
}
Clearly the body of my loop is entered, executed many, many times, and will quite likely crash the program. (In C we can’t really say what will happen, as “undefined behaviour” means exactly that, but you get the picture.)
My program fragment breaks as close as possible to where I made the slip, greatly aiding me in finding it and making the fix required.
Now… the popular:
for (i= 10; i < N; ++i) {
/* do stuff with a[i] */
}
Here, my slip up in starting i at 10 instead of 0 goes (locally) undetected, as the loop body is never executed. Millions of further statements might be executed before something goes wrong with the program. Worse, it may even not crash later but produce an answer you believe to be correct.
I find it fascinating that if you search the web for articles about this, the i < N form is often strongly argued for on the grounds that it avoids a crash or undefined behaviour. Perhaps, like much of probability theory, this is one of those bits of programming theory that is far from intuitive.
Giants of programming theory, such as David Gries and Edsger Dijkstra, wrote all this up long ago. The most readable account (to me) came from Gries, building on Dijkstra’s work. I remember paying a lot of money for his book – The Science of Programming – back in 1981. It is now freely available online. See page 181 for his wording of the explanation above. The Science of Programming is an amazing book. It contains both challenging formal logic and such pragmatic advice as: in languages that use the equals sign for assignment, use an asymmetric layout to make assignments stand out. In C we would write
var= expr;
rather than
var = expr; /* as most people still do */
The visible signal I get from writing var= expr has stopped me from ever making the = for == mistake in C-like languages.