A Challenge for all TDD programmers, ICFP

A Challenge for all TDD programmers, ICFP

Post by Thaddeus L Olczy » Fri, 20 Jun 2003 12:01:55



Next weekend  the annual ICFP competition is taking place:
http://www.dtek.chalmers.se/groups/icfpcontest/

Now this year something else is taking place.
They say that you will not have to rely on the program installing
correctly on their machines; it will run on your computer. This concerns
me, because competitions where the application runs on your own
computer like this can be skewed for several reasons ( for
example, it runs on your computer but must be in constant
communication with another computer on the net, which
would be disadvantageous for people like me who rely
on dial-up because broadband has not come to their neighborhood [1] ).

But assuming that the competition does not have some bias, it should
prove to be a fair test of programmer ability.

Having looked at some entries, I notice very few use TDD ( none that
I have seen, but I haven't seen them all ). It has occurred to me that
if TDD is so great, then, given that few of the competitors use TDD,
a TDD entry should win.

Furthermore, at the conference they make the announcement:
"Language X is the programming tool of choice for discriminating
hackers."
where Language X is the language used by the winning entry.

Think of the prestige that would fall upon TDD if the conference
organisers instead announced ( which I believe they could be
convinced to do ):
"TDD is the programming tool of choice for discriminating hackers."

So here is my challenge:
Enter the competition. Write your tests. Save your tests someplace
public so it can be verified that they were written first. Then write
your solution.

For the most part I won't be looking at responses to this post. That
would mostly lead to a stupid flame war and dumb debate. It would
also yield a lot of lame excuses about why TDD advocates don't
have to accept this challenge without it tainting TDD.

To paraphrase Yoda, you either do it or you don't do it. ( In this
case "do it" refers to winning the competition. )

If you do it, then you have proven that TDD is a solid technique.
If you don't, then you have proven that TDD is so much shit.

[1] This is Chicago, where the local dictator tears up airport runways
in the middle of the night, costing the city 1/2 billion $ in income,
so that he could set up some tennis tables on the lake.
--------------------------------------------------
Thaddeus L. Olczyk, PhD
Think twice, code once.

 
 
 

A Challenge for all TDD programmers, ICFP

Post by Robert Wil » Fri, 20 Jun 2003 21:52:32



> Having looked at some entries I notice very few use TDD ( none that
> I see, but I haven't seen all ). It has occured to me that if TDD is
> so great, given that  few of the competitors do TDD, that a TDD entry
> should win.

There's one entrant who used automated testing during the contest;
his report is quite interesting:
http://www.ellium.com/~thor/icfp2001/k5-icfp.html

Robert

--
Eviter tout le mauvais, faire le bien et nettoyer son propre coeur:
c'est la doctrine des Buddhas.

 
 
 

A Challenge for all TDD programmers, ICFP

Post by Thaddeus L Olczy » Fri, 20 Jun 2003 23:50:56




>> Having looked at some entries I notice very few use TDD ( none that
>> I see, but I haven't seen all ). It has occured to me that if TDD is
>> so great, given that  few of the competitors do TDD, that a TDD entry
>> should win.

>There's one that used automated testing during the contest,
>his report is quite interesting:
>http://www.ellium.com/~thor/icfp2001/k5-icfp.html

Yes. I am *very* familiar with this document, and you should have
read it more closely.

First: if you want to use this as a test case for TDD ( which you
shouldn't ), then it is clearly a case against it, as the person
failed ( IIRC from the judging notes he placed somewhere in the 100s ).

Second: he does automated tests, not TDD. If you look, he uses a tool
called QuickCheck ( available for Haskell only, BTW ), which, given a
set of specs, generates a series of tests. That is not what TDD is
about. Shoot, QuickCheck generates tests randomly. Not exactly what
I would call "test everything that could possibly break".
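For readers who haven't seen it, the QuickCheck idea can be sketched in plain Python rather than Haskell: state one general property and check it against many randomly generated inputs, instead of hand-picking the cases that could break. The run-length coder below is a hypothetical stand-in, not taken from anyone's entry:

```python
import random

def encode(xs):
    """Run-length encode a list: [1, 1, 2] -> [(1, 2), (2, 1)]."""
    out = []
    for x in xs:
        if out and out[-1][0] == x:
            out[-1] = (x, out[-1][1] + 1)
        else:
            out.append((x, 1))
    return out

def decode(pairs):
    """Invert encode: expand each (value, count) pair."""
    return [x for x, n in pairs for _ in range(n)]

# The "property": decoding an encoding returns the original list.
# QuickCheck-style, we test it on many random inputs rather than
# on a handful of carefully chosen ones.
for _ in range(1000):
    xs = [random.randint(0, 3) for _ in range(random.randint(0, 20))]
    assert decode(encode(xs)) == xs
```

Note that the random inputs check the property broadly but blindly; they may or may not hit the specific cases a hand-written TDD test would target.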

Furthermore, he strongly suggests, if not outright states, that he
generates the tests at or near the time he writes the code.

BTW, I strongly believe that if you have a tool like QuickCheck,
which lets you generate tons of varied test cases quickly,
you should use it. However, I am smart enough not to confuse
the tests it generates with any systematic approach to testing.
( Also, there are no such tools for many industrial languages,
and where they exist they are often quite pricey. )
--------------------------------------------------
Thaddeus L. Olczyk, PhD
Think twice, code once.

 
 
 

A Challenge for all TDD programmers, ICFP

Post by John Rot » Sat, 21 Jun 2003 00:09:11






> >> Having looked at some entries I notice very few use TDD ( none that
> >> I see, but I haven't seen all ). It has occured to me that if TDD is
> >> so great, given that  few of the competitors do TDD, that a TDD entry
> >> should win.

> >There's one that used automated testing during the contest,
> >his report is quite interesting:
> >http://www.ellium.com/~thor/icfp2001/k5-icfp.html

> Second. He does automated tests, not TDD. If you look he uses a tool
> called QuickCheck ( available for Haskell only BTW ), which given a
> set of specs generates a serious of tests. That is not what TDD is
> about. Shoot QuickCheck generates tests randomly. Not exactly what
> I would call "test everything that could possibly break".

For once I have to agree with you. It's not TDD. Nothing wrong with
using any testing tool you can lay your hands on, of course. Most of
them will add something positive to the mix.

> Furthermore he strongly suggests if not outright states that he
> generates the tests on or near the time he writes the code.

I'm not sure why this is a "furthermore" to the point you were
making. One of the primary notions of TDD is to generate the
tests just before writing the code, in a very fine-grained fashion.

John Roth


> Thaddeus L. Olczyk, PhD
> Think twice, code once.

 
 
 

A Challenge for all TDD programmers, ICFP

Post by Dave Harr » Sat, 21 Jun 2003 19:32:00



> Enter the competition. Write your tests. Save your tests someplace
> public so it can be verified that they were written first. Then write
> your solution.

That wouldn't be test driven design, though. With TDD (as I understand
it), you write /one/ test and then write the code to make it pass /before/
you write the second test. You don't write all the tests first and then
write all the production code afterwards. Test and code are interleaved.
This makes it harder to publish something which would prove that TDD was
done.

Also, I believe the tests would be considered part of the intellectual
capital, and valuable, and publishing them could help opponent teams.

> But assuming that the competition does not have some bias, it should
> prove to be a fair test of programmer ability.

The teams that do well are often the ones with prior knowledge of the
problem domain. Having a /big/ team helps too. 1-person teams rarely do
well. There are many variables.

-- Dave Harris, Nottingham, UK

 
 
 

A Challenge for all TDD programmers, ICFP

Post by Jeff Gri » Sun, 22 Jun 2003 04:51:54



> > Enter the competition. Write your tests. Save your tests someplace
> > public so it can be verified that they were written first. Then write
> > your solution.

> That wouldn't be test driven design, though. With TDD (as I understand
> it), you write /one/ test and then write the code to make it pass /before/
> you write the second test. You don't write all the tests first and then
> write all the production code afterwards. [...]

You are 100% correct.

> > But assuming that the competition does not have some bias, it should
> > prove to be a fair test of programmer ability.

> Often the teams that do well tend to be ones with prior knowledge of the
> problem domain. Having a /big/ team helps too. 1-person teams rarely do
> well. There are many variables.

I read the posting of last year's winners.  They said that, aside from
successfully conforming to the rules, winning was mostly a matter of
luck.

When programming robots to fight, a la IBM's RoboCode and last year's
contest, a successful strategy depends heavily on guessing what all
your opponents' strategies will be.  And it also depends on luck.

TDD would do well in such a situation, but there's no particular
development technique that would give one a substantial advantage.

 
 
 

A Challenge for all TDD programmers, ICFP

Post by Thaddeus L Olczy » Sun, 22 Jun 2003 13:21:28





>> Enter the competition. Write your tests. Save your tests someplace
>> public so it can be verified that they were written first. Then write
>> your solution.

>That wouldn't be test driven design, though. With TDD (as I understand
>it), you write /one/ test and then write the code to make it pass /before/
>you write the second test. You don't write all the tests first and then
>write all the production code afterwards. Test and code are interleaved.
>This makes it harder to publish something which would prove that TDD was
>done.

Fine. All I suggest is that you keep some sort of public log to
demonstrate that you are indeed following the precepts of TDD.

>Also, I believe the tests would be considered part of the intellectual
>capital, and valuable, and publishing them could help opponent teams.

Valuable? It seems to me you are saying that TDD proponents are so
weak that they cannot afford to forgo a weekend's proceeds to
demonstrate the superiority of TDD. Proof of which should net them
some money in extra work.

As for helping opponent teams: midway through Saturday I wanted to
take a break, so I went online to see if any other teams were
reporting their success. Not only were they doing so, they were openly
discussing tactics ( using things like wikis ). One hint: you are
generally so busy with your own approach that you don't have time to
steal from others. ( I didn't even want to bother copying their
approaches for study after the competition, much less read them at
the time. )


>> But assuming that the competition does not have some bias, it should
>> prove to be a fair test of programmer ability.

>Often the teams that do well tend to be ones with prior knowledge of the
>problem domain. Having a /big/ team helps too. 1-person teams rarely do
>well. There are many variables.

Last year's problem was a variation on the traveling salesman problem.
The year before was to optimise pages written in an HTML/XML/SGML-like
language. The year before that, it was a ray-tracing program
( something widely taught in several different programming languages ).
I don't remember what the other two problems were, but I do know
they did not require much in the way of domain-specific knowledge.

I also don't remember any winners who had domain-specific knowledge.

As for size: last year the winning team was two people. So I guess not
only can you prove the effectiveness of TDD, you can also prove the
effectiveness of pair programming.

Like I said:

To paraphrase Yoda, you either do it or you don't do it. ( In this
case "do it" refers to winning the competition. )

If you do it, then you have proven that TDD is a solid technique.
If you don't, then you have proven that TDD is so much shit.

I should have added:
The really lame make excuses.
--------------------------------------------------
Thaddeus L. Olczyk, PhD
Think twice, code once.

 
 
 

A Challenge for all TDD programmers, ICFP

Post by Thaddeus L Olczy » Sun, 22 Jun 2003 13:21:31




>I read the posting of last year's winners.  They said that, aside from
>successfully conforming to the rules, winning was mostly a matter of
>luck.

You mean the losers?
Reading through the entries and attempted entries, I noticed two
principal reasons for poor performance.

1) Not finishing in the first place. I thought TDD was supposed
     to increase the probability of finishing.
2) Finishing, but having left so much to do in the last few hours
    that the team had to rush, resulting in either poor
    performance or buggy software. Again, something TDD is
    supposed to make less likely.

>When programming robots to fight, a la IBM's RoboCode and last year's
>contest, a successful strategy depends heavily on guessing what all
>your opponents' strategies will be.  And it also depends on luck.

Except for one thing.
The top entries all agreed that they paid little or no attention
to battle. Most agree that they should have paid more attention
to package delivery than to battle strategy.

As for battles, a quick examination will demonstrate that they are
easy to avoid. Unless you happen to be next to a water square or
caught in a corridor, it is very easy to avoid other robots.

Given this, producing a winning strategy is not a matter of luck:
optimise package delivery, make sure that your program functions
correctly, and produce at least some minimal package-delivery system.

Like I said, the really lame make excuses. Those who are especially
bad invent excuses out of thin air.

Especially when they don't even know what the problem will be.

--------------------------------------------------
Thaddeus L. Olczyk, PhD
Think twice, code once.

 
 
 

A Challenge for all TDD programmers, ICFP

Post by Tom Plunke » Sun, 22 Jun 2003 15:07:59


<phlip view="off">


> To paraphrase Yoda, you either do it or you don't do it. ( In this
> case "do it" reffers to winning the competition. )

Actually, he said, "Try not.  Do, or do not.  There is no 'try'."

In this case, he's actually talking about deciding to do
something or not deciding to do something, to contrast with
"trying" to do something which only sets one up for failure.
That TDDers don't do it doesn't mean they can't; it just means
they've decided not to let you pull their strings.

> If you do it, then you have proven that TDD is a solid technique.
> If you don't, then you have proven that TDD is so much shit.

If I eat sugar, I might learn that it is sweet.  If I don't eat
it, it isn't?

If I mow my lawn, I show that I can do so without killing it.  If
I do not, however, decide to mow it, then mowing it will kill it?

Please try to make valid arguments.  Thanks.

A valid argument would be one along the lines of:

"If a TDD team entered and won, it would mean that TDD can
produce robust software with non-trivial functionality in a short
amount of time.  If TDD sucks, then no TDD team will ever finish
the task, much less win the competition."

</phlip>

-tom!

 
 
 

A Challenge for all TDD programmers, ICFP

Post by Dave Harr » Sun, 22 Jun 2003 22:02:00



> > That wouldn't be test driven design, though.

> Fine.  All I suggest is that you keep some sort of public log to
> demonstrate that you are indead following the precepts of TDD.

I just think it's interesting that some of the people who criticise TDD
still don't seem to have the first clue what it means.

> >Also, I believe the tests would be considered part of the intellectual
> >capital, and valuable, and publishing them could help opponent teams.

> Valuable?

To other teams. It is a competition. Publishing your tests is not much
different to publishing your production code.

> One hint. You are generally so busy with your approach, you
> generally don't have time to steal from others.

That's one of the benefits of having a large team. If you have enough
people, you can have someone monitor other teams' discussions and
report back any good ideas.

Last year, at least some teams fell down through not considering test
cases - eg one guy never thought about the 1x1 board case.

> they were openly discussing tactics

Fine; some teams are more into fun than competitiveness.

> I also don;'t remember any winners who had domain specific knowledge.

I last looked into this just after the Ray Tracing one. As I recall, one
of the teams which did well had 7 members. Another winning team had
previous experience of writing ray tracers.

I've not looked into more recent competitions - actually, I just now
looked at last year's. Unfortunately, the results are not presented in
a very easy-to-understand way. I could not find a table listing teams
in finishing order, with team size, language, and other basic details,
and a link to each team's website. If you want to know what the top
ten teams did, or the bottom ten, you have to work at it.

> ( In this case "do it" reffers to winning the competition. )

> If you do it, then you have proven that TDD is a solid technique.
> If you don't, then you have proven that TDD is so much shit.

This is so fallacious (and over-stated) it hardly bears commenting on.

> I should have added:
> The really lame make excuses.

"Really lame" meaning me? I mostly wanted to point out your failure to
understand what TDD is about. I am not entering the competition mainly
because I don't have time. It would be fun to enter, and especially to
win, but I doubt it would settle any arguments.

-- Dave Harris, Nottingham, UK

 
 
 

A Challenge for all TDD programmers, ICFP

Post by scob » Tue, 24 Jun 2003 02:04:29




> > > That wouldn't be test driven design, though.

> > Fine.  All I suggest is that you keep some sort of public log to
> > demonstrate that you are indead following the precepts of TDD.

> I just think it's interesting that some of the people who criticise TDD
> still don't seem to have the first clue what it means.

Do you have a pointer to what you consider the best definition or
description of TDD?
I suspect there may be a fair number of variants.

Cheers,
Steve

 
 
 

A Challenge for all TDD programmers, ICFP

Post by Phli » Tue, 24 Jun 2003 07:44:02



> Do you have a pointer to what you consider the best definition or
> description of TDD?

The book /Test Driven Development/ says...

When you develop, use a test runner that provides some kind of visual
feedback at the end of the test run. Either use a GUI-based test runner that
displays a Green Bar on success or a Red Bar on failure, or a console-based
test runner that displays "All tests passed" or a cascade of diagnostics,
respectively.

Then engage each action in this algorithm:

 * Locate the next missing CODE ABILITY you want to add.
 * WRITE A TEST that will pass if the ability is there.
 * Run the test and ensure it FAILS FOR THE CORRECT REASON.
 * Perform the MINIMAL EDIT needed to make the test pass.
 * When the tests pass and you get a Green Bar, INSPECT THE DESIGN.
 * While the design (anywhere) is poor, REFACTOR it.
 * Only after the design is squeaky clean, PROCEED TO THE NEXT ABILITY.
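A minimal sketch of one turn of that cycle, here in Python with the standard unittest module ( the Stack class and its push ability are hypothetical illustrations, not from the book ):

```python
import unittest

# Steps 1-2: the next missing CODE ABILITY is Stack.push;
# WRITE A TEST that will pass only if the ability is there.
class TestStack(unittest.TestCase):
    def test_push_makes_item_the_top(self):
        s = Stack()
        s.push(42)
        self.assertEqual(s.top(), 42)

# Step 3: running now FAILS FOR THE CORRECT REASON - a NameError,
# because Stack does not exist yet.

# Step 4: the MINIMAL EDIT needed to make the test pass.
class Stack:
    def __init__(self):
        self._items = []

    def push(self, item):
        self._items.append(item)

    def top(self):
        return self._items[-1]

# Green Bar: INSPECT THE DESIGN, REFACTOR if it is poor, and only
# then PROCEED TO THE NEXT ABILITY.
if __name__ == "__main__":
    unittest.main()
```

The test class can reference Stack before its definition because the name is only resolved when the test actually runs.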

That algorithm needs more interpretation. Looking closely into each CAPS
item reveals a field of nuances. All are beholden to this algorithm and the
intent of each action. Each action leverages different intents; these often
conflict directly with other actions' intents. Our behaviors during each
action differ in opposing ways. Repeated edits with opposing intents anneal
our code in a way that one must experience to fully appreciate and begin to
understand. Suspend disbelief, try the cycle, and report your results to a
newsgroup near you.

A CODE ABILITY, in this context, is the current coal face in the mine that
our picks swing at. It's the location in the program where we must add new
lines, or edit existing ones. Typically, this location is near the bottom of
our most recent function. If we can envision one more line to add there, or
one more edit to make there, then we must perforce be able to envision the
complementing test that will fail without that line or edit.

"WRITE A TEST" can mean to write a test case, and get as far as one
assertion. Alternately, take an existing test function, and add new
assertions to it. To re-use scenarios and bulk up on assertions, we'll
prefer the latter.

If the new test lines assume facts not in evidence - if, for example, they
reference a class or method name that does not exist yet - run the test
anyway and predict a compiler diagnostic. This test collects valid
information just like any other. If the test inexplicably passes, you may
now understand you were about to write a new class name that conflicted with
an existing one.

Work on the assertion and the code's structure (but not behavior) until the
test FAILS FOR THE CORRECT REASON. If it passes, inspect the code to ensure
you understand it, and ensure the test passed for the correct reason; then
proceed to the next feature.

All this work prepares you to make that MINIMAL EDIT. Write that line which
you have been anxious to get out of your system for the last four
paragraphs.

The EDIT is MINIMAL because until the Bar turns Green we live on borrowed
time. Correct behavior and happy tests are slightly more important than our
design quality. We might pass the test by cloning a method and changing one
line in it. If that's the minimum number of edits, do it. We might re-write
from scratch a method very similar to an existing one. Alternately, the
simplest edit might naturally produce a clean design that won't need
refactoring.

If the MINIMAL EDIT fails, and if the reason for failure is not obvious and
simple, hit the Undo button and try again. Anything is preferable to
debugging, and an ounce of prevention is worth a pound of cure.

Now that we have a Green Bar, we INSPECT THE DESIGN. Per the MINIMAL EDIT
rule, the most likely design flaw is duplication. To help us learn to
improve things, we tend to throw the definition of "duplication" as wide as
possible.

The book Design Patterns says we improve designs when we address the
interface, and we "abstract the thing that varies". This is the reverse way
of saying "merge the duplication that does not vary". So if we start with a
MINIMAL EDIT, merging duplication together will tend to approach a Pattern.

To REFACTOR, we inspect our code, and try to envision a design with fewer
moving parts, less duplication, shorter methods, better identifiers, and
deeper abstractions. Start with the code we just changed, and feel free to
involve any other code in the project.

If we cannot envision a better design, we can proceed to the next step
anyway. Identify MINIMAL EDITs that will either improve the design or begin
a series of similar edits leading to an improvement. Between each edit,
run all the tests. If they fail, hit Undo and start again.

The level of cleanness is important here. You may have code quality that
formerly would have passed as "good enough". Or you may become enamored of
some new abstraction that new code might use, possibly months from now, or
possibly minutes from now. Snap out of it. The path from cruft to new
features is always harder than the path from elegance to new features. Fix
the problems, including any speculative code, while they are still small.

We may add assertions at nearly any time: while REFACTORing the design, and
before proceeding to the next ability. Whenever we learn something new, or
realize there's something we don't know, we take the opportunity to write
new assertions that express this learning, or query the code's abilities. As
the TDD cycle operates, and individual abilities add up to small features,
we take time to collect information from the code about its current
operating parameters and boundary conditions.

Boundary conditions are the limits between defined behavior and regions
where bugs might live. Set boundaries for a routine well outside the range
you know production code will call it. Research "Design by Contract" to
learn good strategies; these roll defined ranges of behaviors up from the
lower routines to their caller routines. Within a routine, simplifying its
procedure will most often remove discontinuities in its response.
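A sketch of that contract style in Python, using plain assertions ( the delivery function, its names, and its ranges are hypothetical illustrations, not from the book ):

```python
def schedule_delivery(packages: int, capacity: int) -> int:
    """Return how many trips a robot needs to deliver `packages`
    items, carrying at most `capacity` items per trip."""
    # Preconditions: define the valid range well outside what
    # today's production callers actually use.
    assert packages >= 0, "packages must be non-negative"
    assert capacity > 0, "capacity must be positive"

    trips = -(-packages // capacity)  # ceiling division

    # Postconditions: the result is consistent with the inputs,
    # so callers can roll this guarantee up into their own contracts.
    assert trips * capacity >= packages
    assert packages == 0 or (trips - 1) * capacity < packages
    return trips
```

The preconditions mark the boundary between defined behavior and the regions where bugs might live; the postconditions let caller routines inherit a known range without re-checking it.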

Parameters between these limits now typically cause the code to respond
smoothly with linear variations. The odds of bugs occurring between the
boundaries are typically lower than elsewhere. For example, today's method
that takes 2, 3 and 5 and returns 10, 15 and 25, respectively, is unlikely
tomorrow to take 4 and return 301. Like algebraic substitutions reducing an
expression, duplication removal forces out special cases.

After creating a function, other functions now call it. Their tests engage
our function too. Our tests cover every statement in a program, and they
approach covering every path in a program. The cumulative pressure against
bugs makes them extraordinarily unlikely.

If your code does something unexpected, or you receive a bug report, always
WRITE A TEST. Then use what you learned to improve design, and write more
tests of this category. If you treat the situation "this code does not yet
have that ability" as a kind of bug, then the TDD cycle is nothing but a
specialization of the rule "test-away bugs".

> I suspect there may be a fair number of variants.

Why would you suspect that?

--
  Phlip
    http://www.c2.com/cgi/wiki?TestFirstUserInterfaces

 
 
 

A Challenge for all TDD programmers, ICFP

Post by Jeff Gri » Tue, 24 Jun 2003 11:16:13



> >I read the posting of last year's winners.  They said that, aside from
> >successfully conforming to the rules, winning was mostly a matter of
> >luck.


> You mean the losers?

http://www.taplas.org/~oiwa/icfp-contest-2002/

Team Name: TAPLAS
The first place winners say...

"We could say our victory is a result of luck rather than the ability
of programming"

'the task involved much "uncertainty" in the sense that finding an
optimal solution was almost impossible'

"speculating the behavior of other robots (or, actually, the strategy
of their programmers) was also difficult"

 
 
 

A Challenge for all TDD programmers, ICFP

Post by John W. Wilkinso » Tue, 24 Jun 2003 16:40:30




> Next weekend the annual ICFP competition is taking place:
> http://www.dtek.chalmers.se/groups/icfpcontest/
[snip]
> Having looked at some entries I notice very few use TDD ( none that
> I see, but I haven't seen all ). It has occured to me that if TDD is
> so great, given that  few of the competitors do TDD, that a TDD entry
> should win.

Is TDD an appropriate technique for a project such as this contest where:

1. the requirements are fully known and fixed
2. the duration of the project will be very short, ie 3 days
3. there is no maintenance phase

The main benefit of TDD, so I am told, is to produce code that can be more
easily maintained/refactored? Hence its use in Agile methodologies that are
aimed at projects with uncertain or changing requirements.

Looking at last year's scoring process, points were awarded solely on how
well the programs functioned. There was no attempt to judge the
maintainability of the source code. This, I suppose, is necessary, as any
maintainability judgement would be highly subjective. However, scoring
purely on functionality is useless if we want to gain insights into the
pros and cons of various development techniques. The main skill in writing
good code is not producing a program that does what it is supposed to do,
that's the easy bit, but producing code that can also be maintained.

John

 
 
 

A Challenge for all TDD programmers, ICFP

Post by Phli » Tue, 24 Jun 2003 22:46:56



> Is TDD an appropriate technique for a project such as this contest where:

> 1. the requirements are fully known and fixed
> 2. the duration of the project will be very short, ie 3 days
> 3. there is no maintenance phase

> The main benefit of TDD, so I am told, is to produce code that can be more
> easily maintained/refactored? Hence its use in Agile methodologies that are
> aimed at projects with uncertain or changing requirements.

Turn the question around: What conceivable reason can an engineer give for
not writing tests before writing code?

TDD's design abilities are the long-term benefit. The short-term one is all
the tests.

--
  Phlip
    http://www.c2.com/cgi/wiki?TestFirstUserInterfaces

 
 
 
