Code generation of JUnit test cases

Post by situ » Fri, 31 Jan 2003 01:18:02



Hi,

I'd like to know about people's experiences with automatic(?)
code generation of JUnit test cases. I'm referring to projects which do
not work the XP way but still would like a high percentage of unit
testing (meaning, unit test code is written towards the end of coding),
and to projects that need to deal with legacy code.

Specifically,
(a) What has been your experience with tools which auto-generate test
cases that give a good percentage of code coverage when measured with
code coverage tools such as Clover?

I've worked with
- open source options mentioned in
http://www.junit.org/news/extension/testcase_generation/index.htm, but
these only generate the skeleton of the methods

- JTest, which generates pre-determined values for primitives, and
does nothing about Java value object classes, and expects the
programmer to add in the "other" test cases - which in effect turns
out to be a lot.
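For concreteness, the skeleton generators mentioned above typically emit something like the following (a hypothetical example; the class and method names are invented). Real generators produce a class extending junit.framework.TestCase; the TestCase dependency is omitted here so the sketch stands alone. The point is that only the method shells appear - inputs, expected outputs, and assertions are all left to the programmer:

```java
// Hypothetical output of a skeleton generator for a class "Account".
// (Real generated code would extend junit.framework.TestCase.)
class AccountTest {
    // One empty shell per public method of Account; the generator
    // supplies no inputs, expected values, or assertions.
    public void testDeposit() {
        // TODO: exercise Account.deposit() and assert on the result
    }

    public void testWithdraw() {
        // TODO: exercise Account.withdraw() and assert on the result
    }
}
```
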

I guess what I'm looking for is a tool that "understands" the code and
generates unit test cases for all the branches and conditions in the
code, in order to achieve maximum code coverage. Is there any such
beast out there? :)

(a1) Has anyone worked with TogetherJ/XPTest
(http://www.extreme-java.de) for generating JUnit test cases? I didn't
see this one highlighted much on the TogetherSoft site...
What have been your experiences?

(a2) On a related note, does anyone know of XMI-compatible JUnit
generators, which would remove the dependency on the UML tool?

(b) What has been your experience with mock objects? Have they been
too cumbersome a process for the average project size (say, 100
classes, with 10 value object classes)?

(c) Building functional test cases: I've heard of JFunc; what are the
other options, and how do they compare?

Thank you very much,
situ

 
 
 

Code generation of JUnit test cases

Post by Ilja Preuß » Fri, 31 Jan 2003 18:33:23


Quote:> I guess what I'm looking for is a tool that "understands" the code and
> generates unit test cases for all the branches and conditions in the
> code, in order to achieve maximum code coverage. Is there any such
> beast out there? :)

Why are you looking for maximum code coverage? Shouldn't you be looking for
maximum probability of finding bugs?

It seems to me that a tool which "understands your code" and generates tests
according to the existing code, would always produce tests which don't fail.
It would simply duplicate your bugs in the tests. How would that serve
you???

Curious, Ilja

 
 
 

Code generation of JUnit test cases

Post by situ » Sat, 01 Feb 2003 13:52:57


Hi Ilja,

You're right. I would definitely want maximum probability of finding
bugs. But again, if the unit test cases that I write do not cover all
the branches in the code, am I not lowering the probability of finding
bugs? What if someone runs into this code branch during functional
testing, or on production?

I have read thru a few articles on unit testing, and here's what I
understand:
a) 100% code coverage does not guarantee 100% quality, because you
still don't know whether you've actually fulfilled the user's
requirements. This is where functional testing needs to come in.
b) However, having less than 100% code coverage in your unit testing
effectively means that there are untested areas in your codebase which
could bite you sometime. The "old" way to make sure your codebase
works correctly has been manual code reviews. Unit testing is a more
formal and scientific approach, which abstracts the implementation of
the class from the person doing the unit testing - ideally he should
just know the inputs and the expected outputs for each class method.
Code coverage tools like Clover complete this scientific approach, by
ensuring that every part of your code is tested.
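To make point b) concrete, here is a minimal sketch (with an invented Discount class): a method with a single branch needs at least two unit tests before a coverage tool like Clover will report both arms as covered.

```java
// Minimal branch-coverage illustration (hypothetical class).
class Discount {
    // Two branches: orders of 100 or more get 10% off, others don't.
    static int apply(int amount) {
        if (amount >= 100) {
            return amount - amount / 10;  // branch 1: discounted
        }
        return amount;                    // branch 2: unchanged
    }
}

// Two JUnit-style test methods, one per branch. With only one of
// them, a coverage tool would flag the untested arm as an untested
// area of the codebase.
class DiscountTest {
    void testLargeOrderGetsDiscount() {
        if (Discount.apply(200) != 180) throw new AssertionError();
    }

    void testSmallOrderUnchanged() {
        if (Discount.apply(50) != 50) throw new AssertionError();
    }
}
```
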

I'm looking at whether we can have a process on the following lines:
a) test cases can be auto-generated by looking at the code
b) we review those auto-generated test cases, specifically the inputs
and expected results, to check whether they make sense

On a slightly related note, I have looked at mock-objects, but I
somehow am not totally convinced with this approach. The reason is
that the unit test case for a class now "knows" something about the
layers beneath it, i.e. the classes/libraries being used by the class
being tested. For example, if a class uses 10 classes from a specific
library (say, Xerces), we would need 10 different mock objects. If,
tomorrow, the class is changed to use a different library than
Xerces, we would need to rewrite the unit test cases, EVEN THOUGH the
inputs and the expected outputs would remain the same. What are your
views on this approach?

situ


Quote:> > I guess what I'm looking for is a tool that "understands" the code and
> > generates unit test cases for all the branches and conditions in the
> > code, in order to achieve maximum code coverage. Is there any such
> > beast out there? :)

> Why are you looking for maximum code coverage? Shouldn't you be looking for
> maximum probability of finding bugs?

> It seems to me that a tool which "understands your code" and generates tests
> according to the existing code, would always produce tests which don't fail.
> It would simply duplicate your bugs in the tests. How would that serve
> you???

> Curious, Ilja

 
 
 

Code generation of JUnit test cases

Post by Ilja Preuß » Sat, 01 Feb 2003 17:51:53


Quote:> I'm looking at whether we can have a process on the following lines:
> a) test cases can be auto-generated by looking at the code
> b) we review those auto-generated test cases, specifically the inputs
> and expected results, to check whether they make sense

I would expect this process to be notably more costly and error-prone
(besides being very boring) than manually writing the tests - if it would
even be possible to automatically generate them. After all, how do you
know whether your design is testable without writing the tests?

Did you take a look at Test-Driven-Development?

Quote:> On a slightly related note, I have looked at mock-objects, but I
> somehow am not totally convinced with this approach. The reason is
> that the unit test case for a class now "knows" something about the
> layers beneath it, i.e. the classes/libraries being used by the class
> being tested. For e.g. if a class uses 10 classes from a specific
> library (say, Xerces), we would need 10 different mock-objects. If
> tomorrow, the class is changed to use a different library other than
> Xerces, we would need to re-write the unit test cases, EVEN THOUGH the
> inputs and the expected outputs would remain the same. What are your
> views on this approach?

I think I don't fully understand the question. What problem are you trying
to solve? What alternatives to mock-objects do you see?

Hope this helps,

Ilja

--
++ wir optimieren Ihre Informations- und Kommunikationsprozesse ++
++ disy Cadenza: Informationsintegration am i*net Arbeitsplatz ++
++ disy conference: The easy way of audio conferencing ++
++ disy Call-Back Office: Call-Back-Buttons für Kundenservice im Web ++


     disy Informationssysteme GmbH               http://www.disy.net
     Stephanienstr. 30                           Tel: +49 721 1 600 624
     D-76133 Karlsruhe, Germany                  Fax: +49 721 1 600 605

++ Weitere Informationen finden Sie unter http://www.disy.net ++

 
 
 

Code generation of JUnit test cases

Post by situ » Tue, 04 Feb 2003 14:15:32


Hi Ilja,


Quote:> > I'm looking at whether we can have a process on the following lines:
> > a) test cases can be auto-generated by looking at the code
> > b) we review those auto-generated test cases, specifically the inputs
> > and expected results, to check whether they make sense

> I would expect this process to be notably more costly and error-prone
> (besides being very boring) than manually writing the tests - if it would
> even be possible to automatically generate them. After all, how do you
> know whether your design is testable without writing the tests?

I'm not sure it would be error-prone, and it definitely won't be
boring. We are currently writing the test cases manually, by inspecting
the code and figuring out relevant inputs and expected outputs for
each method. If this tedious task can be automated, it will simply
require review by the programmer(s), to ensure that the input/output
parameters generated are valid, necessary and sufficient to test the
given method. The key is that the programmer works on reviewing the
test cases rather than writing them, which is the tedious part for
huge code bases.

Quote:

> Did you take a look at Test-Driven-Development?

No, that's something we need to look at. I'm not yet comfortable with
the refactoring that happens. I only hope it's the code-level refactoring
that everyone talks about, and not design refactoring.

Also, TDD is not an option for legacy code - what would you do if you
had to maintain a huge product/project? Would you manually write test
cases for the legacy code, so that you start with a clean slate?

My basic rant is this:
a) We typically cannot account for writing JUnit test cases for huge
codebases upfront. We cannot ALWAYS convince the customer that this is
a required step before we can say we're confident of the codebase.
b) Even writing JUnit test cases only guarantees the quality of the
unit, and not the whole. Functional testing is still required to check
whether the customer's requirements have been fulfilled. With this in
mind, manually writing JUnit test cases is a very expensive task
indeed, especially for a legacy codebase.

Quote:

> > On a slightly related note, I have looked at mock-objects, but I
> > somehow am not totally convinced with this approach. The reason is
> > that the unit test case for a class now "knows" something about the
> > layers beneath it, i.e. the classes/libraries being used by the class
> > being tested. For e.g. if a class uses 10 classes from a specific
> > library (say, Xerces), we would need 10 different mock-objects. If
> > tomorrow, the class is changed to use a different library other than
> > Xerces, we would need to re-write the unit test cases, EVEN THOUGH the
> > inputs and the expected outputs would remain the same. What are your
> > views on this approach?

> I think I don't fully understand the question. What problem are you trying
> to solve? What alternatives to mock-objects do you see?

The problem I'm trying to solve is this:

Assume C1 calls C2, which in turn calls C3, and that there is a single
method "execute" in each of these classes that gets called.

C1.execute() -> C2.execute()
C2.execute() -> C3.execute()

Now, with the mock objects approach, if I had to test C1's execute
method, I would need to simulate the functioning of C2.execute() with
a mock object.

If tomorrow, I replace the logic in C1.execute as follows:
C1.execute() -> C4.execute()
C1.execute() -> C5.execute()
C4.execute() -> C4a.execute()
C5.execute() -> C5a.execute()
i.e. C1.execute() calls C4.execute, followed by C5.execute. C4 and C5
in turn call C4a and C5a's execute methods respectively.

In this case, I would need to change the mock objects, and therefore
the JUnit test cases, to "mock" C4 and C5 in order to test C1. I'm not
comfortable with that approach, because the JUnit test case becomes
"implementation-dependent", whereas it shouldn't be the case.
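One common way to soften this dependence (a sketch of one approach, not the thread's prescription) is to have C1 depend on a small interface rather than on concrete collaborators. The test then mocks the interface with a hand-rolled mock (in the style of the era's mock-object libraries), and swapping C2 out for C4/C5 behind the interface need not touch the test as long as the contract stays the same. The Step interface and all names below are invented for illustration:

```java
// Sketch: C1 depends on an interface, not on concrete C2/C4/C5.
interface Step {
    String execute(String input);
}

class C1 {
    private final Step step;

    C1(Step step) { this.step = step; }

    // C1's own logic; the collaborator behind Step can change freely
    // without the test for C1 having to change.
    String execute(String input) {
        return "C1:" + step.execute(input);
    }
}

// A hand-rolled mock of the interface: it records the call it
// received and returns a canned value.
class MockStep implements Step {
    String lastInput;

    public String execute(String input) {
        lastInput = input;
        return "mocked";
    }
}

class C1Test {
    void testExecuteDelegatesToStep() {
        MockStep mock = new MockStep();
        C1 c1 = new C1(mock);
        String result = c1.execute("data");
        if (!"C1:mocked".equals(result)) throw new AssertionError();
        if (!"data".equals(mock.lastInput)) throw new AssertionError();
    }
}
```

The test still "knows" C1 uses a Step, so it is not fully implementation-independent, but the knowledge is reduced to one interface instead of one mock per concrete library class.
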

Thanks,
situ


 
 
 

Code generation of JUnit test cases

Post by Phlip » Tue, 04 Feb 2003 15:39:24



>> Did you take a look at Test-Driven-Development?

> No, that's something we need to look at. I'm not yet comfortable with
> the refactoring that happens. I only hope it's the code-level refactoring
> that everyone talks about, and not design refactoring.

Refactoring is not re-work. The point is to make "no design" temporarily
safe, so that a design good enough to support the simplest code to pass all
tests can emerge. This works at all scales, from lines in a function to
modules in a package.

This takes practice, but once it clicks in it's second nature, and doing
things the old-fashioned way feels much more uncomfortable. Practice it on
disposable projects, such as in-house tools, before going online with it.

Quote:> Also, TDD is not an option for legacy code - what would you do if you
> had to maintain a huge product/project? Would you manually write test
> cases for the legacy code, so that you start with a clean slate?

It depends on what you need to do - maintain, remove bugs, add minor
features, or add entire systems. Either way, instrumenting the legacy code
with aggressive assertions (Design by Contract) can make the code safer.
DbC can pin down the design too, but if the code works then its design must
at least be adequate.
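A minimal sketch of what "aggressive assertions" on legacy code might look like (the class and its contract are invented; plain checks are used rather than Java's assert statement, which needs -ea at runtime):

```java
// Hypothetical legacy method instrumented with DbC-style checks:
// preconditions document what callers already rely on, and a
// postcondition checks the state change against the contract.
class LegacyLedger {
    private int balance;

    void deposit(int amount) {
        // Precondition: deposits must be positive.
        if (amount <= 0) throw new IllegalArgumentException("amount must be positive");
        balance += amount;
    }

    int withdraw(int amount) {
        // Preconditions: enforce assumptions the old code left implicit.
        if (amount <= 0) throw new IllegalArgumentException("amount must be positive");
        if (amount > balance) throw new IllegalStateException("insufficient funds");
        int before = balance;
        balance -= amount;
        // Postcondition: the balance dropped by exactly the amount.
        if (balance != before - amount) throw new IllegalStateException("postcondition violated");
        return balance;
    }
}
```

Once instrumented this way, ordinary use of the system exercises the checks, so violations surface close to their cause instead of far downstream.
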

--
  Phlip
   http://www.greencheese.org/PhilosophyBrethrenThree
  --  Now collecting votes for a new group:

 
 
 

Code generation of JUnit test cases

Post by Ilja Preuß » Tue, 04 Feb 2003 19:05:56


Quote:> No, that's something we need to look at. I'm not yet comfortable with
> the refactoring that happens. I only hope it's the code-level refactoring
> that everyone talks about, and not design refactoring.

I don't understand the difference, sorry.

Quote:> Also, TDD is not an option for legacy code - what would you do if you
> had to maintain a huge product/project? Would you manually write test
> cases for the legacy code, so that you start with a clean slate?

The problem with legacy code is that it's most often untestable at the unit
level.

Quote:> My basic rant is this:
> a) We typically cannot account for writing JUnit test cases for huge
> codebases upfront.

That's not what you are doing when you do test-first development. In
test-first, writing the tests is closely amalgamated with writing the
production code. It would be virtually impossible to account for it
separately. BTW, do you account separately for debugging?

Quote:> b) Even writing JUnit test cases only guarantees the quality of the
> unit, and not the whole. Functional testing is still required to check
> whether the customer's requirements have been fulfilled.

Yes.

Quote:> With this in
> mind, manually writing JUnit test cases is a very expensive task
> indeed, especially for a legacy codebase.

I don't understand how the existence of functional testing makes unit
testing expensive.

Quote:> > I think I don't fully understand the question. What problem are you
> > trying to solve? What alternatives to mock-objects do you see?

> The problem I'm trying to solve is this:

> Assume C1 calls C2, which in turn calls C3, and that there is a single
> method "execute" in each of these classes that gets called.

> C1.execute() -> C2.execute()
> C2.execute() -> C3.execute()

> Now, with the mock objects approach, if I had to test C1's execute
> method, I would need to simulate the functioning of C2.execute() with
> a mock object.

Yes. Sometimes this is more feasible than not mocking C2. More often you
probably don't need to mock it. Mocking isn't an all-or-nothing thing - it's
a tool you use when it makes testing easier.

Quote:> If tomorrow, I replace the logic in C1.execute as follows:
> C1.execute() -> C4.execute()
> C1.execute() -> C5.execute()
> C4.execute() -> C4a.execute()
> C5.execute() -> C5a.execute()
> i.e. C1.execute() calls C4.execute, followed by C5.execute. C4 and C5
> in turn call C4a and C5a's execute methods respectively.

> In this case, I would need to change the mock objects, and therefore
> the JUnit test cases, to "mock" C4 and C5 in order to test C1. I'm not
> comfortable with that approach, because the JUnit test case becomes
> "implementation-dependent", whereas it shouldn't be the case.

Well, JUnit test cases often are white-box tests, and therefore
implementation-dependent. I understand why this thought makes you feel
uncomfortable. But I'd urge you to try it - you will probably find that it
doesn't present as big (or as frequent) a problem as you fear, and solves so
many other problems that it in fact becomes very practical.

Regards, Ilja

 
 
 

Code generation of JUnit test cases

Post by John Roth » Tue, 04 Feb 2003 22:10:57



> Hi Ilja,




Quote:> > > I'm looking at whether we can have a process on the following lines:
> > > a) test cases can be auto-generated by looking at the code
> > > b) we review those auto-generated test cases, specifically the inputs
> > > and expected results, to check whether they make sense

> > I would expect this process to be notably more costly and error-prone
> > (besides being very boring) than manually writing the tests - if it would
> > even be possible to automatically generate them. After all, how do you
> > know whether your design is testable without writing the tests?

> I'm not sure it would be error-prone, and it definitely won't be
> boring. We are currently writing the test cases manually, by inspecting
> the code and figuring out relevant inputs and expected outputs for
> each method. If this tedious task can be automated, it will simply
> require review by the programmer(s), to ensure that the input/output
> parameters generated are valid, necessary and sufficient to test the
> given method. The key is that the programmer works on reviewing the
> test cases rather than writing them, which is the tedious part for
> huge code bases.

> > Did you take a look at Test-Driven-Development?

> No, thats something we need to look at. I'm not yet comfortable with
> re-factoring that happens. I only hope its code-level refactoring that
> everyone talks about, and not design re-factoring.

> Also, TDD is not an option for legacy code - what would you do if you
> had to maintain a huge product/project? Would you manually write test
> cases for the legacy code, so that you start with a clean slate?

Assuming the code was more or less functional - that is, it was
installed and doing useful work - I don't see taking the time to write
tests across the board, in the absence of some kind of change request,
as cost-justified.

What I would do when faced with a change request (either a feature or
a defect, it doesn't matter) is write the necessary tests to provide a
safety net for the changes I'm about to make. These would go into the
permanent test base. I might run a coverage analyzer periodically to
check where I needed to firm up tests.
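A safety net of this kind often starts by pinning down what the legacy code currently does, before the change, rather than what it should ideally do (a practice sometimes called a characterization test - a sketch with an invented class, not something from this thread):

```java
// Hypothetical legacy routine that a change request is about to touch.
class LegacyFormatter {
    static String format(String name, int qty) {
        return name + " x" + qty;
    }
}

// Safety-net test: the expected value is whatever the current code
// produces, captured by running it once before making the change.
// If the change request alters this behaviour, the test is updated
// deliberately rather than the behaviour drifting by accident.
class LegacyFormatterTest {
    void testCurrentBehaviourIsPinnedDown() {
        if (!"widget x3".equals(LegacyFormatter.format("widget", 3)))
            throw new AssertionError();
    }
}
```
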

Quote:> My basic rant is this:
> a) We typically cannot account for writing JUnit test cases for huge
> codebases upfront. We cannot ALWAYS convince the customer that this is
> a required step before we can say we're confident of the codebase.

As I said above, I wouldn't even attempt to justify writing unit
tests for an existing code base. What I would tell the customer is
that I'd write tests as I go, and that those tests aren't separable from
the rest of the work. This is much easier to sell.


Quote:> b) Even writing JUnit test cases only guarantees the quality of the
> unit, and not the whole. Functional testing is still required to check
> whether the customer's requirements have been fulfilled. With this in
> mind, manually writing JUnit test cases is a very expensive task
> indeed, especially for a legacy codebase.

> > > On a slightly related note, I have looked at mock-objects, but I
> > > somehow am not totally convinced with this approach. The reason is
> > > that the unit test case for a class now "knows" something about the
> > > layers beneath it, i.e. the classes/libraries being used by the class
> > > being tested. For example, if a class uses 10 classes from a specific
> > > library (say, Xerces), we would need 10 different mock objects. If,
> > > tomorrow, the class is changed to use a different library than
> > > Xerces, we would need to rewrite the unit test cases, EVEN THOUGH the
> > > inputs and the expected outputs would remain the same. What are your
> > > views on this approach?

Sometimes you have to use mocks, but most of the time when you
think you do with legacy code, what you really need to do is change
the code to reduce coupling. In fact, this is always a good question
to ask if you find you need to introduce a mock object: "How could I
reduce coupling so that I didn't need this object?"

Another way to look at mock objects is that they help to clarify
the design: you now have two (or more) implementations of the
same design element. When you only have one implementation,
lots of coupling tends to creep in when you aren't thinking. The same
is true of unit tests: they are a second user, so they help to clarify
the interfaces simply by their existence.


John Roth
 
 
 

Code generation of JUnit test cases

Post by Keith Ray » Wed, 05 Feb 2003 00:39:01




[...]
Quote:> That's not what you are doing when you do test-first development. In
> test-first, writing the tests is closely amalgamated with writing the
> production code. It would be virtually impossible to account for it
> separately. BTW, do you account separately for debugging?

[...]

Here's an experiment: keep track of how much time you spend debugging,
writing tests, running tests, compiling, writing new code, and
refactoring.

In my experience, writing tests reduces the time that has to be spent
debugging, and refactoring reduces the time that has to be spent writing
new code. Some kinds of refactorings can reduce compile time as well.
--
C. Keith Ray

<http://homepage.mac.com/keithray/xpminifaq.html>

 
 
 

Code generation of JUnit test cases

Post by Mike Stockdale » Wed, 05 Feb 2003 01:39:01



> The problem I'm trying to solve is this:

> Assume C1 calls C2, which in turn calls C3, and that there is a single
> method "execute" in each of these classes that gets called.

> C1.execute() -> C2.execute()
> C2.execute() -> C3.execute()

> Now, with the mock objects approach, if I had to test C1's execute
> method, I would need to simulate the functioning of C2.execute() with
> a mock object.

> If tomorrow, I replace the logic in C1.execute as follows:
> C1.execute() -> C4.execute()
> C1.execute() -> C5.execute()
> C4.execute() -> C4a.execute()
> C5.execute() -> C5a.execute()
> i.e. C1.execute() calls C4.execute, followed by C5.execute. C4 and C5
> in turn call C4a and C5a's execute methods respectively.

> In this case, I would need to change the mock objects, and therefore
> the JUnit test cases, to "mock" C4 and C5 in order to test C1. I'm not
> comfortable with that approach, because the JUnit test case becomes
> "implementation-dependent", whereas it shouldn't be the case.

Mock objects are great for separating the architectural layers of the
system for testing, e.g. if C1 is business logic and C2 is data access
or middleware, then a mock C2 makes it much easier to test C1.  If C1
and C2 are both part of the same layer, I would use the real C2 when
testing C1.  Of course, this assumes C2 already exists when C1 is being
tested.

We did do a release where we tried unit-testing every class in
isolation, i.e., with mocks for every other class.  We found the effort
to write all the mocks was not worth it, so after that we were more
selective in the use of mocks.

 
 
 

Code generation of JUnit test cases

Post by situ » Wed, 05 Feb 2003 16:10:06



Quote:> > No, that's something we need to look at. I'm not yet comfortable with
> > the refactoring that happens. I only hope it's the code-level refactoring
> > that everyone talks about, and not design refactoring.

> I don't understand the difference, sorry.

The difference is this:

We follow the traditional spiral methodology for each release that we
do - we do an architecture blueprint, review it, we do a high level
design, review it, a detailed design, review it, then code, then unit
test it, review code, then functionally test each module, then do
integration and perform integration testing.

We do a detailed design that identifies all the methods in the code
and spells out the specifics of each method that we write. We then start
coding each method.

What I understand by code-level refactoring is changing the logic of a
method - say, by pulling common logic up into something
like a util class. However, IMHO, design-level refactoring would be
bigger, in the sense that it could result in new patterns being
introduced, "bad" patterns being thrown out, and whole new
components/modules being introduced or reworked, etc.

My rant is that I would like to avoid such major changes at the time
of "re-factoring" the code. I would rather freeze the design after
reviewing the detailed design. I understand that this is against the
XP way of working, where the design is not frozen until you have
completed the test cases and the refactoring you do as you code. However,
freezing the design early gives greater control and predictability
over the coding phase, and emphasizes getting the design correct the
first time. Besides, the design review phase is expected to
provide any inputs towards design refactoring. Similarly, code review is
expected to ensure code refactoring has been done.

Any of you having experiences with mixing XP with the traditional
waterfall/spiral methodology of project development? Please let me
know your thoughts on the same.

Quote:

> > Also, TDD is not an option for legacy code - what would you do if you
> > had to maintain a huge product/project? Would you manually write test
> > cases for the legacy code, so that you start with a clean slate?

> The problem with legacy code is that it's most often untestable at the unit
> level.

> > My basic rant is this:
> > a) We typically cannot account for writing JUnit test cases for huge
> > codebases upfront.

> That's not what you are doing when you do test-first-development. In
> test-first, writing the tests is closely amalgamated with writing the
> production code. It would be virtually impossible to account for it
> seperately. BTW, do you account separately for debugging?

No, we don't account separately for "debugging" per se. However, we do
have time set aside for integration testing and debugging at
the end of each release. I would guess that's unavoidable for XP-based
projects too - you've tested the individual modules so far, and now
need to get on with integrating all of them together and testing them
functionally.

With respect to each module, the development team does a bit of unit
testing, and more of functional testing, before handing over to the
quality people.

Quote:

> > b) Even writing JUnit test cases only guarantees the quality of the
> > unit, and not the whole. Functional testing is still required to check
> > whether the customer's requirements have been fulfilled.

> Yes.

> > With this in
> > mind, manually writing JUnit test cases is a very expensive task
> > indeed, especially for a legacy codebase.

> I don't understand how the existence of functional testing makes unit
> testing expensive.

How can we justify the costs of writing unit test cases, if at the end
of writing all those unit test cases [even assuming they cover 100% of
the code], we still do not guarantee, for obvious reasons, that the
application fulfills the customer's requirements?

That brings me back to the reason I first posted on this thread [and the
thread's gone into refactoring and other XP thoughts, but I've loved
it]:
I'm looking for an automated solution for generating unit tests, which
would take the tedious task of writing unit test cases off our backs,
and would allow us to focus on reviewing the generated test cases, and
maybe even on design-level refactoring the XP way.

Yes, this automated solution would look at my potentially buggy code
and would reproduce buggy test cases, but that's where the reviewer
comes in - he would look at the generated test cases and their
inputs/outputs, and would check whether they make sense. If not, he
would need to get into the code, correct the bugs, and then re-generate
the test cases so that the inputs/outputs make sense.

Is there any such solution out there?


Quote:

> > > I think I don't fully understand the question. What problem are you
> > > trying to solve? What alternatives to mock-objects do you see?

> > The problem I'm trying to solve is this:

> > Assume C1 calls C2, which in turn calls C3, and that there is a single
> > method "execute" in each of these classes that gets called.

> > C1.execute() -> C2.execute()
> > C2.execute() -> C3.execute()

> > Now, with the mock objects approach, if I had to test C1's execute
> > method, I would need to simulate the functioning of C2.execute() with
> > a mock object.

> Yes. Sometimes this is more feasible than not mocking C2. More often you
> probably don't need to mock it. Mocking isn't an all-or-nothing thing - it's
> a tool you use when it makes testing easier.

> > If tomorrow, I replace the logic in C1.execute as follows:
> > C1.execute() -> C4.execute()
> > C1.execute() -> C5.execute()
> > C4.execute() -> C4a.execute()
> > C5.execute() -> C5a.execute()
> > i.e. C1.execute() calls C4.execute, followed by C5.execute. C4 and C5
> > in turn call C4a and C5a's execute methods respectively.

> > In this case, I would need to change the mock objects, and therefore
> > the JUnit test cases, to "mock" C4 and C5 in order to test C1. I'm not
> > comfortable with that approach, because the JUnit test case becomes
> > "implementation-dependent", whereas it shouldn't be the case.

> Well, JUnit test cases often are white-box tests, and therefore
> implementation-dependent. I understand why this thought makes you feel
> uncomfortable. But I'd urge you to try it - you will probably find that it
> doesn't present as big (or as frequent) a problem as you fear, and solves so
> many other problems that it in fact becomes very practical.

Point taken. Thanks.


 
 
 

Code generation of JUnit test cases

Post by Phlip » Wed, 05 Feb 2003 17:06:05



> My rant is that I would like to avoid such major changes at the time
> of "re-factoring" the code. I would rather freeze the design after
> reviewing the detailed design. I understand that this is against the
> XP way of working, where the design is not frozen until you have
> completed the test cases and the refactoring you do as you code. However,
> freezing the design early gives greater control and predictability
> over the coding phase, and emphasizes getting the design correct the
> first time. Besides, the design review phase is expected to
> provide any inputs towards design refactoring. Similarly, code review is
> expected to ensure code refactoring has been done.

This is entirely and fundamentally a mindset issue. It's not a matter of
adding "refactoring", as a phase, to the existing process.

When you design up front, you often re-order the design, or simplify it, or
remove duplication. This is much the same as the refactoring cycle in TDD.

When you implement the code, you frequently adjust function lengths, and
variable names, and access controls on members. This is also an aspect of
refactoring.

When you test, and extract bugs (which TDD would never have permitted), and
then change the code >just a little<, this is like the fine-tuning aspect
of refactoring.

But the mindset problem is not against refactoring, it is pro-phase. If you
had no phases - actually micro-phases - then the above categories of
behavior would simply turn on their side.

You guys freeze the design under the assumption this will make the
implementation phase predictable, and reduce the bug rate at test time. But
by running in phases, with a very long amount of time between collecting
>real< feedback about the program's status, you are causing the very bugs
that make you think you need to freeze.

Again, this is a mindset issue. Refactoring won't cure it. A fully
incremental process will.

--
  Phlip
   http://www.greencheese.org/PerfideousDelinquency
  --  Will the bailiff please remove the juror who started the wave  --

 
 
 

Code generation of JUnit test cases

Post by Ilja Preuß » Wed, 05 Feb 2003 17:52:59


Quote:> When you design up front, you often re-order the design, or simplify it, or
> remove duplication. This is much the same as the refactoring cycle in TDD.

Yes, it's very similar - it's just based on less actual data.
 
 
 

Code generation of JUnit test cases

Post by Ron Jeffries » Wed, 05 Feb 2003 19:02:16



Quote:>However,
>freezing the design early gives greater control and predictability
>over the coding phase, and emphasizes getting the design correct the
>first time.

What comparative evidence causes you to believe that freezing the design early
gives greater control and predictability over the coding phase?

What incidence of design changes do you experience?

How do you know that the design is correct the first time?

--
Ronald E Jeffries
http://www.XProgramming.com
http://www.objectmentor.com
I'm giving the best advice I have. You get to decide whether it's true for you.