> Do you have a pointer to what you consider the best definition or
> description of TDD?
The book /Test Driven Development/ says...
When you develop, use a test runner that provides some kind of visual
feedback at the end of the test run. Either use a GUI-based test runner that
displays a Green Bar on success or a Red Bar on failure, or a console-based
test runner that displays "All tests passed" on success or a cascade of
diagnostics on failure. Then engage each action in this algorithm (a code
sketch follows the list):
* Locate the next missing CODE ABILITY you want to add.
* WRITE A TEST that will pass if the ability is there.
* Run the test and ensure it FAILS FOR THE CORRECT REASON.
* Perform the MINIMAL EDIT needed to make the test pass.
* When the tests pass and you get a Green Bar, INSPECT THE DESIGN.
* While the design (anywhere) is poor, REFACTOR it.
* Only after the design is squeaky clean, PROCEED TO THE NEXT ABILITY.
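Here's a minimal sketch of one pass through the cycle, in Python's
unittest; the names add and TestArith are mine, invented for illustration:

    import unittest

    # Step 4's MINIMAL EDIT. Before this function existed, the test
    # below failed for the correct reason: NameError on 'add'.
    def add(a, b):
        return a + b

    # Step 2's new test: one ability, one assertion.
    class TestArith(unittest.TestCase):
        def test_add(self):
            self.assertEqual(5, add(2, 3))

    if __name__ == '__main__':
        unittest.main()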
That algorithm needs more interpretation. Looking closely into each CAPS
item reveals a field of nuances. Each action carries its own intent, and
those intents often pull directly against the intents of the other actions,
so our behavior during each action differs in opposing ways. Repeated edits
with opposing intents anneal our code in a way that one must experience to
fully appreciate. Suspend disbelief, try the cycle, and report your results
to a newsgroup near you.
A CODE ABILITY, in this context, is the current coal face in the mine that
our picks swing at. It's the location in the program where we must add new
lines, or edit existing ones. Typically, this location is near the bottom of
our most recent function. If we can envision one more line to add there, or
one more edit to make there, then we must perforce be able to envision the
complementary test that will fail without that line or edit.
"WRITE A TEST" can mean to write a test case, and get as far as one
assertion. Alternately, take an existing test function, and add new
assertions to it. To re-use scenarios and bulk up on assertions, we'll
prefer the latter.
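For instance - a sketch of my own, with a made-up Cart class - an existing
test function can absorb a new assertion and re-use its scenario:

    import unittest

    class Cart:
        def __init__(self):
            self.items = []
        def add(self, name, price):
            self.items.append((name, price))
        def count(self):
            return len(self.items)
        def total(self):
            return sum(price for _, price in self.items)

    class TestCart(unittest.TestCase):
        def test_add_item(self):
            cart = Cart()                       # existing scenario, re-used
            cart.add('apple', price=3)
            self.assertEqual(1, cart.count())   # existing assertion
            self.assertEqual(3, cart.total())   # new assertion, bulking up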
If the new test lines assume facts not in evidence - if, for example, they
reference a class or method name that does not exist yet - run the test
anyway and predict a compiler diagnostic. This test collects valid
information just like any other. If the test inexplicably passes, you may
discover that the new class name you were about to write conflicts with an
existing one.
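In Python, for instance, the diagnostic arrives when the test runs. A
sketch of my own, where Parser is the class that does not exist yet:

    import unittest

    class TestParser(unittest.TestCase):
        def test_parse_splits_tokens(self):
            # Predict the diagnostic before running:
            #   NameError: name 'Parser' is not defined
            # If the test passes instead, some Parser already exists
            # somewhere, and the name was about to collide.
            parser = Parser()
            self.assertEqual(['a', 'b'], parser.parse('a b'))

    if __name__ == '__main__':
        unittest.main()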
Work on the assertion and the code's structure (but not behavior) until the
test FAILS FOR THE CORRECT REASON. If it passes, inspect the code to ensure
you understand it, and ensure the test passed for the correct reason; then
proceed to the next feature.
All this work prepares you to make that MINIMAL EDIT. Write that line which
you have been anxious to get out of your system for the last four steps.
The EDIT is MINIMAL because until the Bar turns Green we live on borrowed
time. Correct behavior and happy tests are slightly more important than our
design quality. We might pass the test by cloning a method and changing one
line in it. If that's the minimum number of edits, do it. Or we might
re-write from scratch a method very similar to an existing one.
Alternately, the simplest edit might naturally produce a clean design that
won't need refactoring.
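A sketch of my own of that cloning move:

    class PriceList:
        def cheapest(self, prices):
            best = prices[0]
            for p in prices:
                if p < best:
                    best = p
            return best

        # The MINIMAL EDIT: a clone of cheapest() with one changed
        # line. It turns the Bar Green now; the duplication becomes
        # a known design flaw, queued for the REFACTOR step.
        def dearest(self, prices):
            best = prices[0]
            for p in prices:
                if p > best:    # the one changed line
                    best = p
            return best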
If the MINIMAL EDIT fails, and if the reason for failure is not obvious and
simple, hit the Undo button and try again. Anything is preferable to
debugging, and an ounce of prevention is worth a pound of cure.
Now that we have a Green Bar, we INSPECT THE DESIGN. Per the MINIMAL EDIT
rule, the most likely design flaw is duplication. To help us learn to
improve things, we tend to throw the definition of "duplication" as wide as
possible.
The book /Design Patterns/ says we improve designs when we address the
interface, and we "abstract the thing that varies". This is the reverse way
of saying "merge the duplication that does not vary". So if we start with a
MINIMAL EDIT, merging duplication together will tend to approach a Pattern.
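Continuing my PriceList sketch from above, merging the cloned loops
abstracts the comparison - the one thing that varies - and lands on a small
Strategy:

    class PriceList:
        def _best(self, prices, better):
            # the merged loop; 'better' is the thing that varied
            best = prices[0]
            for p in prices:
                if better(p, best):
                    best = p
            return best

        def cheapest(self, prices):
            return self._best(prices, lambda a, b: a < b)

        def dearest(self, prices):
            return self._best(prices, lambda a, b: a > b)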
To REFACTOR, we inspect our code, and try to envision a design with fewer
moving parts, less duplication, shorter methods, better identifiers, and
deeper abstractions. Start with the code we just changed, and feel free to
involve any other code in the project.
If we cannot envision a better design, we can proceed to the next step
anyway. Otherwise, identify MINIMAL EDITs that will either improve the
design outright or begin a series of similar edits that converge on an
improvement. Between each edit, run all the tests. If they fail, hit Undo
and start again.
The level of cleanness is important here. You may have code quality that
formerly would have passed as "good enough". Or you may become enamored of
some new abstraction that new code might use, possibly months from now, or
possibly minutes from now. Snap out of it. The path from cruft to new
features is always harder than the path from elegance to new features. Fix
the problems, including any speculative code, while they are still small.
We may add assertions at nearly any time: while REFACTORing the design, and
before proceeding to the next ability. Whenever we learn something new, or
realize there's something we don't know, we take the opportunity to write
new assertions that express this learning, or query the code's abilities. As
the TDD cycle operates, and individual abilities add up to small features,
we take time to collect information from the code about its current
operating parameters and boundary conditions.
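For instance - another sketch of my own, with a made-up parse_age() - pin
each lesson down as an assertion:

    import unittest

    def parse_age(text):
        return int(text)

    class TestParseAge(unittest.TestCase):
        def test_what_we_learned_today(self):
            # Learned while exploring: leading whitespace is already
            # handled; record that knowledge as an assertion.
            self.assertEqual(42, parse_age(' 42'))
            # Learned: non-numeric input is outside today's defined
            # range; record that boundary rather than leave it unstated.
            with self.assertRaises(ValueError):
                parse_age('forty-two')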
Boundary conditions are the limits between defined behavior and regions
where bugs might live. Set a routine's boundaries well outside the range in
which you know production code will call it. Research "Design by Contract" to
learn good strategies; these roll defined ranges of behaviors up from the
lower routines to their caller routines. Within a routine, simplifying its
procedure will most often remove discontinuities in its response.
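A sketch of my own of that style in Python, with plain assertions standing
in for contracts (monthly_payment is invented for illustration):

    def monthly_payment(principal, rate, months):
        # Preconditions set well outside the range production code
        # is known to use. Callers can rely on these ranges without
        # re-checking them; the ranges roll up the call chain.
        assert principal > 0
        assert 0 <= rate < 1          # annual rate as a fraction
        assert 1 <= months <= 1200    # far beyond any real loan
        if rate == 0:
            return principal / months
        m = rate / 12
        return principal * m / (1 - (1 + m) ** -months)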
Parameters between these limits now typically cause the code to respond
smoothly with linear variations. The odds of bugs occurring between the
boundaries are typically lower than elsewhere. For example, today's method
that takes 2, 3 and 5 and returns 10, 15 and 25, respectively, is unlikely
tomorrow to take 4 and return 301. Like algebraic substitutions reducing an
expression, duplication removal forces out special cases.
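That example as a test, assuming for illustration the method is
scale(x) = 5 * x:

    import unittest

    def scale(x):
        return 5 * x

    class TestScale(unittest.TestCase):
        def test_linear_between_boundaries(self):
            self.assertEqual(10, scale(2))
            self.assertEqual(15, scale(3))
            self.assertEqual(20, scale(4))   # smooth in between: 20, not 301
            self.assertEqual(25, scale(5))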
After creating a function, other functions now call it. Their tests engage
our function too. Our tests cover every statement in a program, and they
approach covering every path in a program. The cumulative pressure against
bugs makes them extraordinarily unlikely.
If your code does something unexpected, or you receive a bug report, always
WRITE A TEST. Then use what you learned to improve design, and write more
tests of this category. If you treat the situation "this code does not yet
have that ability" as a kind of bug, then the TDD cycle is nothing but a
specialization of the rule "test-away bugs".
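A sketch of my own of testing-away a reported bug (average() is invented
for illustration):

    import unittest

    def average(numbers):
        if not numbers:    # the MINIMAL EDIT that answered the report
            return 0
        return sum(numbers) / len(numbers)

    class TestReportedBug(unittest.TestCase):
        def test_average_of_empty_list(self):
            # The report: average([]) raised ZeroDivisionError.
            # First WRITE A TEST reproducing it, watch it fail for
            # the correct reason, then make the MINIMAL EDIT above.
            self.assertEqual(0, average([]))

    if __name__ == '__main__':
        unittest.main()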
> I suspect there may be a fair number of variants.
Why would you suspect that?