We programmers put a lot of thought into our code, but it is still shaped by blind, groping luck. It is probably inevitable that our software will become dependent on, and coupled to, the behavior of other code in its own process and its execution environment, including the bugs. Ask yourself: how many developers are likely to put their code through the wringer when it seems to be working just fine?
This dependency is especially obvious if your code has parameters that have been tuned. When I was working on a handwriting recognition program, I found and fixed a bug in our “skeletonization” process (the process that converts shapes into lines), which in some cases would cause recognizable lines to be reduced to dots. Imagine my disappointment when character recognition performance took a hit after I implemented the fix.
Sometimes the problem is that your bugs simply go undetected. I have seen that on UNIX, you can frequently get away with writing to heap memory that has already been freed. As a result, you often find dynamic memory bugs only when you port a UNIX program to Windows, which is much less tolerant of heap violations.
This is not the same as “bug compatibility”, in which you knowingly write code that violates a protocol because you have learned that this is what a peer needs and expects. Instead, bug dependence is unconsciously relying on errors because you have been able to get away with it, at least so far.
It seems to me that there are many parallels between this tendency in software and processes in the living world. For example, a principle of natural selection is that species will evolve to fill every available ecological niche. Similarly, software tends to exercise every behavior that does not cause a crash.
If your routine causes problems when a bug is fixed or an algorithm is improved, then it is flawed. But how can you write modules that don’t depend on bugs you may not even have known were present?
My hunch is that building code from small, limited-scope modules, each with a single concern, would help. Static and dynamic code analyzers should also help you find problems you never knew you had. Even code reviews can contribute, since a reviewer is not focused on getting the code to work, but on its assumptions and side effects. None of this is a guarantee, though: I think we have to be constantly vigilant and must constantly scrutinize our own work. “No news is good news” should not be grounds for warm and fuzzy feelings.
This is probably true for many crafts, but I’m certain it’s true for software: if you are going to excel, you must become your harshest critic.