
TDD: Write Many Tests Together? Or One At A Time?


Pradyumn Sharma

May 09, 2017


One question that I am often asked during my training programs on Agile Software Development, as well as in consulting assignments, is the following:

While applying TDD (Test-Driven Development), which of the following two approaches should one use?

  • Identify and write (automate) as many test cases as you can, before writing the code to make the tests pass.
  • Or, begin with writing just one test case, and then writing the code to make it pass. And then add one more test case, and make that pass too. Repeat this until you can no longer think of any other test cases.

My short answer is: Choose whatever works well for you; personally I prefer to write many test cases at once (the first approach) more often than write one test case at a time (the second approach).

And now for a more detailed explanation. The “purists” almost always recommend the second approach: write one test; make it pass; refactor; write another test; and repeat.

For people who are new to TDD, it may be more comfortable to take one step at a time, just as when we learn a new programming language we start with the minimal first steps, often by writing a "Hello, World" program.

But as you gain confidence with a programming language and get ready to solve larger, more complex problems, you no longer need to write a program in tiny baby steps. You take bigger strides, covering more ground in one go.

This manner of thinking applies to TDD tasks as well. When you are new to TDD, it is a good idea to take baby steps, one at a time. But as you become more proficient, you can confidently identify and write multiple tests in one go that specify how your class or component will behave as a whole. You may still choose to implement your class or component to make the tests pass one at a time, without any risk.

Automated tests are a great way to ferret out the requirements and be explicit about them. When I have to implement some behavior in a class, I find it more efficient to think about all the expectations of the class upfront (even if I implement them one at a time), so that I get a full picture of what the class is supposed to do.

Writing one test case at a time usually makes me uncomfortable, because I worry that I may miss some scenarios as I move forward, remaining too engaged in the small details to consider the big picture. At the same time, I cannot claim that I will be able to identify all the required test cases upfront. And that is fine too. I do my best to think of as many scenarios as possible before starting the implementation. As I start implementing the class, additional scenarios often occur to me, and I keep adding those to the test cases.

Let’s consider an example, even if it appears to be a simple, academic algorithm problem. Suppose I am implementing a Stack class.

I would first ask myself this question: what is expected of a stack? The ability to add an element to it (push), remove the topmost object (pop), find out the number of elements in it (size), and perhaps examine the topmost object without removing it (peek).

Rather than think about just one test for the Stack class, I would now exercise my mind to write down as many possibilities as I can, such as:

  • A new stack should be empty (size == 0).
  • A push() operation should increase the size of the stack by one.
  • A pushed element should be at the top of the stack.
  • A pop() operation should remove the topmost element, return its value to the client, and reduce the size by one.
  • A pop() on an empty stack should throw an exception.
  • Successive pop() operations should remove and return the elements of a stack in reverse order (until the stack becomes empty).
  • A peek() operation should return a reference to the topmost element without removing it from the stack, so the size should remain unchanged.
  • … (and so on)

I feel reasonably comfortable with this approach, despite knowing that some scenarios may not have occurred to me yet. On the other hand, identifying only one test case at a time would leave me with a nagging fear that I may not be seeing the big picture, and that I may fail to even think of some important scenarios later.

Once I have the list of the scenarios that I can think of, I write all the test methods in my test class. As I write the test code, and the compiler complains about the missing push(), pop(), peek(), and other methods, I let the IDE generate them for me, but I don't bother with their implementation yet.

After I have written all the test methods that I could initially think of, I naturally turn my attention to implementing the class under test. Depending on my confidence, I may choose to implement the class one step at a time, or all of it, or something in between.
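Continuing the illustrative Python sketch, one minimal implementation (of many possible ones) that would turn those tests green is a list-backed stack:

```python
class Stack:
    """A minimal list-backed stack satisfying the scenarios above."""

    def __init__(self):
        self._elements = []

    def push(self, element):
        # Append to the end of the list; the end is the top of the stack.
        self._elements.append(element)

    def pop(self):
        # Popping an empty stack is an error, per the test list.
        if not self._elements:
            raise IndexError("pop from an empty stack")
        return self._elements.pop()

    def peek(self):
        # Return the topmost element without removing it.
        if not self._elements:
            raise IndexError("peek at an empty stack")
        return self._elements[-1]

    def size(self):
        return len(self._elements)
```

Whether this is written in one sitting or method by method, the pre-written test suite is what tells me when I am done.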

As an added advantage, the list of failing tests also serves as a task list for me, as well as an indicator of progress. If 4 out of 10 test cases are passing, it may be reasonable for me to think that about 40% of the work on the class has been completed. This also helps me estimate the effort remaining to complete the class. Of course, work completion and remaining effort can almost never be computed so precisely with a formula, but the list certainly helps.