Experiment about Test-first programming

Post by Phlip » Mon, 30 Jun 2003 08:56:59



[Here's a repost from 2003 Feb 1, when I first reported to this group about
the Mueller/Hagner paper "Experiment about Test-first programming". Others
pointed out that the experiment should have run for more than one iteration;
that's where the benefit shows up.]

Extremists:

Here's a controlled experiment (from 2001, I think) about test-first. It was
before the TDD book came out, so of course someone tried to use it to refute
the TDD book.

 http://www.ipd.uka.de/~muellerm/publications/ease02.pdf

It would be nice if the experiment addressed TDD instead of just Test-First.
TDD is Test-First plus Simple Design plus Refactoring, but the paper dismisses
Simple Design as "another XP practice" and rejects it to maintain purity.

Next, the "without test-first" team runs without unit tests period. But both
teams must pass an acceptance test. This is not incremental (a real team
would >start< with the acceptance test). but of course a "real team"
wouldn't have been experimental.

The paper concludes that the test-first group was less reliable (significant
at p = 0.03) because they failed the acceptance test more often.

The test-first group implemented and then passed the acceptance tests a little
faster overall.

The test-first group implemented much faster, in almost half the time.

The non-test-first group passed the acceptance tests faster.

Of course a troll can decide that one data point out of three "against" TDD
means we should throw TDD away. The experiment itself is very interesting, and
one can easily surmise that the slower group spent more time thinking about
what the AT might do instead of writing their own tests. And, of course, the
test-first group came out with many more tests at the end ;-)

"The test-first group had less errors when reusing an existing method more
than once." That's >the point< here. The tested method was already re-used
once, in the test.
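Here's a sketch of what that means in code (again Java/JUnit, with a
hypothetical Money class): the unit test is the method's first reuse, so it
pins down the argument order, the units, and the return value before any
production code reuses the method a second time.

    import org.junit.Test;
    import static org.junit.Assert.assertEquals;

    public class MoneyTest {
        // First reuse of convert(): the test nails down that the rate
        // multiplies the amount and that a new Money comes back.
        @Test
        public void convertAppliesExchangeRate() {
            Money tenBucks = new Money(10);
            assertEquals(15, tenBucks.convert(1.5).amount());
        }
    }

    class Money {
        private final int amount;

        Money(int amount) { this.amount = amount; }

        int amount() { return amount; }

        // Any second or third caller in production code now reuses an
        // interface that has already been exercised once, by the test.
        Money convert(double rate) {
            return new Money((int) Math.round(amount * rate));
        }
    }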

The paper leaves open the idea that this simple effect compounds, over and
over again as the project gets larger, constraining the design space smaller
and smaller. So a bigger experiment should show this contraction at work.

--
  Phlip
        http://www.greencheese.org/PeaceAndCalm
  --  MCCD - Microsoft Certified Co-Dependent  --

 
 
 

Experiment about Test-first programming

Post by Gerold Keefe » Mon, 30 Jun 2003 18:44:09


Reposting what was nonsense on the 1st of February on the 28th of June
still yields nonsense:
is there any reason for your statement "It was before the TDD book came
out, so of course someone tried to use it to refute the TDD book." beyond
the fact that the experiment did not show the superiority of test-first?

your statements further tell me more about yourself than about the paper and
for that reason are not worth further discussion.

regards,

gerold


> [Here's a repost from 2003 Feb 1, when I first reported to this group about
> the Mueller Hagner paper "Experiment about Test-first programming". Others
> pointed the test should have run more than 1 iteration; that's the benefit.]

> Extremists:

> Here's a clinical experiment (from 2001, I think) about test-first. It was
> before the TDD book came out, so of course someone tried to use it to refute
> the TDD book.

>  http://www.ipd.uka.de/~muellerm/publications/ease02.pdf

> It would be nice if the experiment address TDD instead of Test-First. TDD is
> TF + Simple Design plus Refactoring. But the paper dismisses Simple Design
> as "another XP practice" and hence rejects it to maintain purity.

> Next, the "without test-first" team runs without unit tests period. But both
> teams must pass an acceptance test. This is not incremental (a real team
> would >start< with the acceptance test). but of course a "real team"
> wouldn't have been experimental.

> The paper concludes that the test-first team was less reliable (with a
> significance p = 0.03) because they failed the acceptance test more often.

> The test-first group implemented, and then passed the acceptance tests a
> little faster.

> The test-first group implemented way faster; almost only half the time.

> The no-first group passed the acceptance tests faster.

> Of course a troll can decide that 1 data point out of 3 "against" TDD means
> we should throw away TDD. The experiment itself is very interesting, and one
> can easily surmise how the slower team spent more time thinking about what
> the AT might do instead of writing their own tests. And, of course, the
> test-first group came out with much more tests at the end ;-)

> "The test-first group had less errors when reusing an existing method more
> than once." That's >the point< here. The tested method was already re-used
> once, in the test.

> The paper leaves open the ideal that this simple effect compounds itself
> over and over again, as the project gets larger, to constrain the design
> space smaller and smaller. So a bigger test should show this contraction at
> work.

> --
>   Phlip
>         http://www.greencheese.org/PeaceAndCalm
>   --  MCCD - Microsoft Certified Co-Dependent  --


 
 
 

Experiment about Test-first programming

Post by Phlip » Tue, 01 Jul 2003 00:31:16


Quote:
> your statements further tell me more about yourself than about the paper and
> for that reason are not worth further discussion.

You are a very emotionally needy person, Gerold. That's what you are telling
us; what you repress from yourself. XP has nothing to do with that.

If you don't like XP, ignore it. Don't lie awake at night thinking of vacuous
ways to detract from it.

If I weren't right about this, you would have actually tried to write tests
before writing tested code, just to see what happens. Why haven't you yet,
after all these years of self-abuse? What do you have to lose?

Quote:
> > --
> >   Phlip
> >         http://www.greencheese.org/PeaceAndCalm
> >   --  MCCD - Microsoft Certified Co-Dependent  --

 
 
 

Experiment about Test-first programming

Post by Greg Bac » Tue, 01 Jul 2003 05:32:48




: [...] Why haven't you yet, after all these years of self-abuse? [...]

Isn't this a family newsgroup? :-) :-)

Greg
--
The predatory pricing fiction allows competitors of efficient firms
to substitute competition in the courtroom for competition in the
marketplace.
    -- Clyde Wayne Crews Jr.

 
 
 

1. Experiment about Test-first programming

2. HP690c printer

3. Why Test-First Programming Results in High Coverage

4. Study equipment for grabs

5. How to learn test-first programming from scratch

6. Anyone have any comments on the alph telecom uta120?

7. Why Test-First Programming Results in High Coverage

8. How much test-first to do at first?

9. Test-first & black/white box tests

10. Test first programming - how far do you go?

11. Unit tests for JUnit (Was: Unit tests for GUI programming?)