A Challenge for all TDD programmers, ICFP

Post by Paul Campbell » Wed, 02 Jul 2003 17:18:37




> > tested because I write the test first.  When I defer the tests, I tend to
> > miss things.

> What you are really stating is that diligence is necessary when software is
> developed. You in particular find that TDD imposes such diligence. That is
> not disputed. That it occurs does not make it a benefit. Any disciplined
> development approach will achieve the same effect.

A primary goal of any engineering process is to prefer automatic checking or
verification over reliance on human diligence - even the most diligent
individual or group *will* have lapses. In this respect TDD is obviously
superior with respect to functional coverage of tests, since it must be 100%
unless a clear and blatant breach of the "test first" rule occurs.
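
As a minimal sketch of what "test first" looks like in practice - this is
hypothetical Python/unittest code, not anything from this thread - every
behaviour below had a failing test before the function existed, so the
coverage follows from the process rather than from anyone's diligence:

import unittest


def leap_year(year):
    """Return True for leap years in the Gregorian calendar."""
    return year % 4 == 0 and (year % 100 != 0 or year % 400 == 0)


class LeapYearTests(unittest.TestCase):
    # Each test was written (and seen to fail) before the code above existed.
    def test_ordinary_year_is_not_leap(self):
        self.assertFalse(leap_year(2003))

    def test_year_divisible_by_four_is_leap(self):
        self.assertTrue(leap_year(1996))

    def test_century_is_not_leap(self):
        self.assertFalse(leap_year(1900))

    def test_fourth_century_is_leap(self):
        self.assertTrue(leap_year(2000))


if __name__ == "__main__":
    unittest.main()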

Paul C.


A Challenge for all TDD programmers, ICFP

Post by David Lightstone » Wed, 02 Jul 2003 20:49:16






Quote:> > > tested because I write the test first.  When I defer the tests, I
> > > tend to miss things.

> > What you are really stating is that diligence is necessary when software
> > is developed. You in particular find that TDD imposes such diligence.
> > That is not disputed. That it occurs does not make it a benefit. Any
> > disciplined development approach will achieve the same effect.

> A primary goal of any engineering process is to prefer automatic checking
> or verification over reliance on human diligence - even the most diligent
> individual or group *will* have lapses.

Is this the same human who formulates the tests upon which TDD is
dependent?

The primary goal of an engineering process is to obtain the result. Automatic
testing is the preferred managerial strategy (eliminate the unpredictability
of employees; eliminate them too, if possible).

Quote:> In this respect TDD is obviously superior with respect to functional
> coverage of tests, since it must be 100% unless a clear and blatant breach
> of the "test first" rule occurs.

So long as you restrict yourself to functional coverage and actually
achieve your claimed 100%, great. I am not interested (at this time) in
discussing whether functional coverage is adequate, and by implication
whether the 100% implies superiority.


Quote:

> Paul C.



A Challenge for all TDD programmers, ICFP

Post by Michael Feathers » Thu, 03 Jul 2003 00:06:51






> > > If you wish to compare apples with sand, who am I to stop you from
> > > eating the sand. I am simply trying to indicate that - to convince
> > > others to adopt something (and it is all about adopting technology) -
> > > the benefit has to be a real benefit. Not some remarketing of something
> > > that is already available.

> > What is that other thing?  Desk checking?  I remember you talking about
> > it, but I don't see what you find similar in it.

> You have become confused; desk checking is something most people no longer
> need to do (certainly I no longer even consider doing it). It was a simple
> necessity of a past long forgotten.

I was just trying to figure out what you were talking about here where you
were relating TDD and desk checking:


> > I have been doing TDD in one form or another (different names for the
> > same strategy) since 1967. In the good old days of punch cards it was
> > called desk checking (because turnaround time was so poor, you had to
> > simulate the example test data thru the algorithms. Examples just don't
> > pop out of thin air).

> The aspect which most people seem to have forgotten relates to the
> analysis needed in order to determine the test cases which one would use.
> How does such an analysis differ from that upon which TDD is based?

Some of it is significantly different.  Other parts are the same.  Often you
are using the tests to specify what to do next, not to figure out whether
what you've done is right.
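
As a small illustration of using a test to specify what to do next - a
hypothetical Python/unittest sketch, not code from this thread - each test
name states the next behaviour to build, and the code that follows is just
enough to make it pass:

import unittest


class Stack:
    def __init__(self):
        self._items = []

    def push(self, item):
        self._items.append(item)

    def pop(self):
        return self._items.pop()

    def peek(self):
        # Added only after the peek test below existed and failed;
        # the test specified this behaviour before the code was written.
        return self._items[-1]


class StackSpecification(unittest.TestCase):
    def test_pop_returns_the_most_recently_pushed_item(self):
        stack = Stack()
        stack.push("first")
        stack.push("second")
        self.assertEqual(stack.pop(), "second")

    def test_peek_shows_the_top_item_without_removing_it(self):
        stack = Stack()
        stack.push("only")
        self.assertEqual(stack.peek(), "only")
        self.assertEqual(stack.pop(), "only")


if __name__ == "__main__":
    unittest.main()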

Quote:> If you wish to state that performance of a good analysis is no longer
> being done, I will not disagree. The need for a good analysis has not
> vanished, and will not.

Well, I agree that the reasoning involved in understanding existing code
well enough to test it is an important skill.  Definitely something we need
to develop in the industry.

Michael Feathers
www.objectmentor.com


A Challenge for all TDD programmers, ICFP

Post by Universe » Thu, 03 Jul 2003 01:30:28




>> The aspect which most people seem to have forgotten relates to the
>> analysis needed in order to determine the test cases which one would use.
>> How does such an analysis differ from that upon which TDD is based?

> Some of it is significantly different.  Other parts are the same.  Often
> you are using the tests to specify what to do next, not to figure out
> whether what you've done is right.

"Other parts are the same.  Often you are using the tests to specify what
to do next, not to figure out whether what you've done is right."

Test-driven design: design intentionally "pulled away" from being, as it
should be, *led* and *driven* by analytical investigation and discovery;
a coding process that is inherently "nickel and dime", inherently
piecemeal with respect to the overall analysis conceptualization and to an
overall technical solution framework rooted in that overall analysis
conceptualization.

I.e. once again xp/oma demonstrate firm commitment to the foregoing
essential tenets of *hackery*.

Quote:>> If you wish to state that performance of a good analysis is no longer
>> being done, I will not disagree. The need for a good analysis has not
>> vanished, and will not.

Hear, hear!

Quote:> Well, I agree that the reasoning involved in understanding existing code
> well enough to test it is an important skill.  Definitely something we need
> to develop in the industry.

Huh?  How does that address the core of the matter?  Viz. xp/oma's
deprecation of an SDLC whose foundation is goal, scheduling, tradeoff and
structure "commanded" by Analysis.

Elliott
--
OO software rests upon class abstractions expressed in class method
*behaviors*.  The sum of class method behaviors is the overall class
*role*.  Class role should have primary responsibility for managing class
data such that the impact of class data is driven by the operation of
class methods.
         ~*~   Get with OO fundamentals, get OO.  ~*~
-
      We Don't Need US Fat Cat's Geopolitical Hegemony!
           People of the World Demand US Stop Now!
-
               * http://www.radix.net/~universe *


A Challenge for all TDD programmers, ICFP

Post by William Tanksley Google » Thu, 03 Jul 2003 04:44:30


"David Lightstone" <david._NoSpamlightst...@prodigy.net> wrote:
>"William Tanksley Google" <wtanksle...@cox.net> wrote:
[et cetera]
>>>>>My intent was to indicate that TDD does not provide code structure
>>>>>as a benefit, because you can improve the code structure of the
>>>>>code independent of TDD (test first or otherwise).
>>>>You *can* improve the code structure independent of TDD, and you can
>>>>draw straight lines freehand. Many people find it pleasant and
>>>>convenient to do so. But that doesn't mean that straightedges don't
>>>>provide line structure, nor that TDD doesn't provide code structure.
>>>>While you're using a straightedge it's hard to get away from drawing
>>>>a straight line; when you're using TDD it's hard to get away from
>>>>writing cohesive modules which are loosely coupled.
>You are comparing a very simple tool (a straightedge) to one which some
>claim to be sophisticated. A more appropriate comparison would be to
>something more sophisticated. Something that can readily produce a curve.
>In Euclidean geometry there is no such thing as a curved line.
>Non-Euclidean geometry is another story.

This is pointless quibbling. Look at the question, not the analogy:
you claim that IF there are other ways to produce good code structure,
THEN TDD can't claim good code structure as a benefit.

I point out that straight lines are a benefit of using a straightedge,
even though there are other ways of getting straight lines. In the
same way, good code structure would be a benefit of TDD regardless of
whether there were other ways of producing it.

I wonder whether you're trying to question something else about TDD,
and we're simply failing to communicate your actual questions. I'll
put some words into your mouth, but to avoid attacking a strawman I
won't answer any of them until you claim one or more of them for your
own.

1. Resolved: TDD does NOT result in (i.e. has no correlation to) good
code structure.
2. Resolved: TDD costs more than other practices which have the same
or better code structure results.

I can answer either one of these; if you've been trying to communicate
something else, please speak up.

>>>Obtaining cohesive and loosely coupled code is of course the holy grail
>>>in software development. There is no doubt about it. I take it you wish
>>>to imply that the holy grail of software development is a benefit of
>>>TDD.
>Definitely not.

You're replying to yourself and directly disagreeing. This is
something I haven't seen often on Usenet.

>But as you claim in your (dubious) analogy - it is hard to avoid
>obtaining good coupling and cohesion

I don't claim that; I don't make any claims about difficulty, or even
about good coupling and cohesion (except indirectly). I only claim
that a benefit is still a benefit even if there are other ways of
achieving it.

>>But more to the point: yes, I'm stating explicitly that TDD produces
>>cohesive and loosely coupled designs. I'm pretty sure I stated that
>>before, but I'm doing it again now. Lest I be mistaken for a solitary
>>wild-eyed silver-bullet espousing maniac, though (who me?), let me
>>point out that I have some good company in making that claim: you. You
>>claim that "much thought for architecture and/or design" can reliably
>>result in cohesive and loosely coupled designs.
>>How much thought is unspecified. It has to be _much_, though.
>and how, pray tell, would that differ from any other approach to
>developing software. How many refactorings will be needed? etc.

I can't tell you how much refactoring -- but I can tell you precisely
when the refactoring stops, and I can break the refactoring into small
chunks which are directly related to actual deliverables. Up-front
"thought", on the other hand, doesn't have any concrete tie to the
deliverable; you don't know how good your thought was until the
deliverable is ready.

>>>You have made a few assumptions here as to the nature of the
>>>initially produced code. Most important of which are:
>>>(1) Poor cohesion
>>>(2) Excessive coupling
>>>(3) Lack of observability (so that testing is relatively painless)
>>>To me that means the code was thrown together without much thought
>>>for architecture and/or design (ie spaghetti or macaroni type code)
>>You asked me to gedankenexperiment without TDD; you didn't specify
>>anything else.
>If you wish to compare apples with sand, who am I to stop you from
>eating the sand.

No, I wish to compare apples with a lack of apples. If TDD gave no
benefits, then doing nothing would be as good as doing TDD.

Come on, this is FUNDAMENTAL. You can't do an experiment without a
control group!

>I am simply trying to indicate that - to convince others to adopt
>something (and it is all about adopting technology) - the benefit has
>to be a real benefit. Not some remarketing of something that is already
>available.

Your definition of "real benefit" should really be clarified. It seems
that you believe that nothing is beneficial if there's some other way
to gain the same result. This seems trivially false, as demonstrated
by my example of the benefits of a straightedge.

>>In that case, let me state the outcomes of this new experiment.
>>Terrible -- there'll be bugs in primitives, and debugging will require
>>that I assume that the bug could be hiding /anywhere/, and normal
>>common sense won't serve as a reliable guide of when I can stop
>>testing (because I don't have any way of enumerating the tests I'll
>>need).
>I take it you believe that the above are consistent with appropriate
>cohesion and coupling.

No, but they're consistent with untested cohesion and coupling.

Bugs happen. They happen in cohesion and coupling as often as, or MORE
often than, they happen in functionality (because programmers are
usually directly watching functionality, and often not really checking
cohesion and coupling thanks to at-the-time-reasonable assumptions).

>>I'll have made a number of false starts in design that an
>> elementary test would have revealed the uselessness of.
>Does that ever not occur?

Yes, when you've made the elementary test before you made the design,
and when you implemented the design before you went on to the next
step in design. In short, when you're doing TDD.

So you yourself have now identified something which you believe is a
good thing, and which no methodology that you know of can avoid; the
next step is to show that TDD can achieve that. If I can do that, this
debate is over.

Agreed?

>>My testing
>>will reveal some shortcomings in the observability of the design, so
>>I'll hack in some viewports to let my tests in, but each one will
>>require some more conditionally compiled code, and will increase the
>>number of different ways to build the software.
>How does all this differ from a need to refactor somehow?

In a sense it doesn't -- that's the brilliant thing about TDD. If you
need to refactor, do you want to discover the need while you have a
little bit of code which is all completely tested, or do you want to
discover the need after you have all the code, most of it untested?

>>>If you must assume the quality of the initial code which you write is
>>>that poor, perhaps it should be rewritten/refactored (depending upon
>>>your preferred buzzword) before you start testing.
>>How can I refactor code without tests? How can I assume the
>>testability is low if I haven't written any tests?
>The code transformation strategies are independent of the performance of
>the testing.

Refactoring formally requires testing, to ensure that the change
doesn't disturb existing functionality. Refactoring informally is a
bad idea.
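
As a rough illustration of refactoring under a test - a hypothetical
Python/unittest sketch, not code from this thread - the test pins the
existing behaviour, so an internal rewrite can be checked against it
immediately instead of informally:

import unittest


def total_price(quantities, unit_prices):
    # Refactored body: an earlier (hypothetical) version used an explicit
    # index loop; the behaviour-pinning test below is unchanged either way.
    return sum(q * p for q, p in zip(quantities, unit_prices))


class TotalPriceTests(unittest.TestCase):
    def test_total_is_the_sum_of_quantity_times_price(self):
        self.assertEqual(total_price([2, 1, 3], [1.50, 4.00, 0.50]), 8.50)


if __name__ == "__main__":
    unittest.main()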

>You know the code is bad, and being an experienced developer you
>know the drill

Handwaving.

You only find that the code is bad by attempting to test it.
Seriously, if I *knew* it was bad I wouldn't have written it.

>>Why, after all, did I wait this long before I started testing, when I
>>could have written each test as soon as I saw the design point? If I'd
>>done that, each design point would be tested already, and my design
>>would _have_ to be observable! If my design was tested with each
>>design point in isolation, then of course my design would be
>>decoupled. And if each design point had only enough code implemented
>>to merely pass its test, then each point is cohesive (although without
>>refactoring this doesn't promise cohesion on any other scale).

My point, re-quoted.

-Billy


A Challenge for all TDD programmers, ICFP

Post by David Lightstone » Thu, 03 Jul 2003 10:40:11


One adopts something new when there are perceived new benefits. Benefits
thus far absent from those already available.

Your use of benefit is in an absolute sense. A bullet that can be added to
the checklist of attributes that serve to justify adoption. In that sense
the claimed characteristic is a benefit. Whether the claimed benefit
actually is obtained is a separate issue.

One can also claim the trivial benefit. It is a method.


A Challenge for all TDD programmers, ICFP

Post by William Tanksley Google » Fri, 04 Jul 2003 03:10:50



>One adopts something new when there are perceived new benefits.
>Benefits thus far absent from those already available.

Thank you for taking the time to clear this up.

Okay, so you weren't trying to claim that TDD has no benefits; you
were trying to claim that TDD offers no benefits not already provided
by other practices.

I would say this is false, based on two claimed benefits:

1. Quick bug feedback for both design and code
2. Mutability of design and code

#1 simply isn't available when your test isn't written before your
code. #2 is possible to a lesser extent, but because the programmer
tests in TDD completely express the intended operation of the code, a
successful test run gives much higher confidence in the validity of a
mutation.
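
As a hedged sketch of benefit #2 - hypothetical Python/unittest code, not
anything from this thread - the tests express the intended behaviour, so the
implementation can be swapped wholesale and a green run supports the change:

import unittest


def fibonacci(n):
    # An iterative implementation swapped in for an earlier (hypothetical)
    # recursive one; the tests below did not change during the swap.
    a, b = 0, 1
    for _ in range(n):
        a, b = b, a + b
    return a


class FibonacciTests(unittest.TestCase):
    def test_base_cases(self):
        self.assertEqual(fibonacci(0), 0)
        self.assertEqual(fibonacci(1), 1)

    def test_a_later_term(self):
        self.assertEqual(fibonacci(10), 55)


if __name__ == "__main__":
    unittest.main()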

Based on these and other benefits, we also claim that TDD is easier to
adopt and carry out; easier even than the old hack'n'crash "method" of
program design. But that's not a "benefit" by your definition, since
it's merely an allegedly greater magnitude of something already
possessed by other methods.

Quote:>Whether the claimed benefit actually is obtained is a separate issue.

Agreed.

Quote:>One can also claim the trivial benefit. It is a method.

Correct.

-Billy


A Challenge for all TDD programmers, ICFP

Post by Costin Cozianu » Fri, 04 Jul 2003 03:28:29




>>>I was just wondering if you have contrary TDD experience or if
>>>your other experiences lead you to avoid trying TDD.

>>I would reverse that question.
>>Why do *you* think XP/TDD is worth a try ?

> Because I've tried it, and I've tried not doing it; I've studied it
> and gained an understanding; I've studied the solid, tried and true
> methodologies and seen how they work with it; and I've studied the
> arguments against and decided they lack weight.

>>I am unconvinced if it's worth a try or not. There are lots of things we
>>could and sometimes we should try, but life's too short; a newbie has to
>>be convinced that somehow XP is special and is worth the risk of trying.

> Then why are you wasting our time posing as an expert on Usenet? If
> you're not even going to put any effort into understanding TDD, your
> opinion is far from special.

I am not posing at all; it is a bunch of XPers that posed as experts in
this thread. One guy posted a note asking how XP would fare in the ICFP
contest. Then some XPers jumped in to denigrate the contest, and some other
XPers claimed that they "know" XP would fare better than other things.

It is a contest for *unlimited bragging rights*. XPers considered it
proper to brag about their favorite pet without actually earning the
bragging rights.

In the end "my opinion is far from special" because I haven't expressed any
opinion; I pointed out some facts. You're free to form any opinion you want.


A Challenge for all TDD programmers, ICFP

Post by David Lightstone » Fri, 04 Jul 2003 03:45:59





> >One adopts something new when there are perceived new benefits.
> >Benefits thus far absent from those already available.

> Thank you for taking the time to clear this up.

> Okay, so you weren't trying to claim that TDD has no benefits; you
> were trying to claim that TDD offers no benefits not already provided
> by other practices.

The original question related to the challenge. Someone claimed that the
challenge could not exhibit the "real" benefits of TDD. I just wondered if
there were any benefits other than the "real" benefits. The perceived
problem was that "real" implies that there was something other than that
which the challenge could not test.

Quote:

> I would say this is false, based on two claimed benefits:

> 1. Quick bug feedback for both design and code
> 2. Mutability of design and code

> #1 simply isn't available when your test isn't written before your
> code.

Subject to dispute. You can always write tests after you write code. The
only question is how much code you write before you write tests. So long as
it is not extremely large it will not matter. Whether it is more or less
effective is an entirely separate question. A question that regrettably is
not answerable.

The real concealed issue is the individual's problem analysis strategy. I
doubt that one size fits all. TDD gives every indication of requiring an
analysis style not necessarily present, i.e. how one discovers a proof to a
mathematical problem, how one partitions the argumentation tasks.

Quote:>#2 is possible to a lesser extent, but because the programmer
> tests in TDD completely express the intended operation of the code, a
> successful test run gives much higher confidence in the validity of a
> mutation.

> Based on these and other benefits, we also claim that TDD is easier to
> adopt and carry out; easier even than the old hack'n'crash "method" of
> program design. But that's not a "benefit" by your definition, since
> it's merely an allegedly greater magnitude of something already
> possessed by other methods.

> >Whether the claimed benefit actually is obtained is a separate issue.

> Agreed.

> >One can also claim the trivial benefit. It is a method.

> Correct.

> -Billy


A Challenge for all TDD programmers, ICFP

Post by panu » Fri, 04 Jul 2003 22:41:16



> ... two claimed benefits:

>1. Quick bug feedback for both design and code
>2. Mutability of design and code

>#1 simply isn't available when your test isn't written before your
>code. #2 is possible to a lesser extent, but because the programmer
>tests in TDD completely express the intended operation of the code, a
>successful test run gives much higher confidence in the validity of a
>mutation.

What if the initial intention is wrong?  I find this often to be the
case.  Then I have to *rewrite* the tests, *and* the implementation.
Whereas if I first write the code, until it gets to reflect the 'right
intention', and only afterwards write the tests - then I only need to write
the tests once.

In this case not writing tests first would offer even faster
feedback, since no time will be wasted writing and rewriting the wrong
tests.

I've tried writing the tests first, and it feels a bit like focusing on
too narrow a point of view to start with.  I would rather spend some time
thinking about the big picture, meddling with the design and
implementation, thinking about the whole *set of interfaces* and their
effect on the design, instead of writing a test for a single method
and then focusing solely on its implementation, forgetting about
everything else for the time being.

The approach taken may depend on your programming environment: I work
with Smalltalk, where it is easy to add or refactor methods, and test
them immediately *afterwards*.  Typically when I write a method, I
decide whether it is complicated enough to justify some test cases for
it, and if so, I add them within a comment at the end of the method.
/This/ gives me rapid feedback - faster than if I spend time trying to
specify every test, and all the things that *need* a test, beforehand.  If
the method's functioning seems highly dependent on other parts of the
system, I may write some regression tests for it as well.
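
A rough Python analogue of keeping small test cases next to the method
itself - using doctest here, since the paragraph above describes Smalltalk
method comments; the function and its examples are purely illustrative:

def median(values):
    """Return the median of a non-empty sequence of numbers.

    >>> median([3, 1, 2])
    2
    >>> median([4, 1, 3, 2])
    2.5
    """
    ordered = sorted(values)
    mid = len(ordered) // 2
    if len(ordered) % 2:
        return ordered[mid]
    return (ordered[mid - 1] + ordered[mid]) / 2


if __name__ == "__main__":
    import doctest
    doctest.testmod()  # runs the examples embedded in the docstring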

I just don't get why writing tests *first* should lead to a better
design, faster, in every conceivable situation; in my experience it
doesn't.

The real reason - I suspect - behind advocating test-first is that,
presumably, if it's not instituted as a strict principle, most
programmers might not write any tests afterwards either.

- Panu Viljamaa


A Challenge for all TDD programmers, ICFP

Post by panu » Sat, 05 Jul 2003 07:04:20



> ... TDD or not TDD, contests like these cannot prove anything related
> to developing full-scale professional applications.

Is it possible to come up with a contest that actually *does prove the
benefits of TDD*, over other approaches?

Scientifically speaking: is the proposition "TDD is better, in specific
circumstances" testable?

I would urge the TDD proponents to devise and organize - or even just
*define* - such a competition.  A clear win by users of TDD will surely
end the discussion for good. Not only will it prove that "TDD is
better", but it will also show that "TDD is better in these specific
circumstances".  This would be a truly valuable piece of information for
us all.

On the other hand, if the proposition above is *untestable*, the most we
can say is that "It may be possible that TDD is better, in some
circumstances".  The same is true of too many (to mention) other things
as well.

There seem to be a lot of people spending energy trying to get us to adopt
their *belief* that "TDD is better (regardless of the circumstances)".
That energy would be better spent trying to find out - and demonstrate -
the exact circumstances in which it is better. By "energy better spent"
I mean it would serve *us all* better. Information has value, and so do
conjectures and informed guesses.  But when someone claims an unproven
conjecture as truth, this typically has negative value to the community
as a whole.

Thanks
-Panu  Viljamaa


A Challenge for all TDD programmers, ICFP

Post by Dale King » Wed, 09 Jul 2003 02:30:46




> > ... TDD or not TDD, contests like these cannot prove anything related
> > to developing full-scale professional applications.

> Is it possible to come up with a contest that actually *does prove the
> benefits of TDD*, over other approaches?

> Scientifically speaking: is the proposition  "TDD is better, in specific
> circumstances"  testable?

I would say the word "prove" is a bit too strong, as there are many
variables. I think it is certainly testable, but such a test would probably
be pretty impractical (i.e. expensive to carry out).

Quote:> I would urge the TDD proponents to devise and organize - or even just
> *define* - such a competition.  A clear win by users of TDD will surely
> end the discussion for good. Not only will it prove that "TDD is
> better", but it will also show that "TDD is better in these specific
> circumstances".  This would be a truly valuable piece of information for
> us all.

How do you propose to fund a group of developers co-located to work on a
significant piece of software with adequate control groups?

Quote:> On the other hand, if the proposition above is *untestable*, the most we
> can say is that "It may be possible that TDD is better, in some
> circumstances".  The same is true of too many (to mention) other things
> as well.

Until someone funds such experiments, I think all we have is anecdotal
testimony.

--
 Dale King