ATDD - A principled approach

Automated testing, test frameworks and TDD are, much like they sound, complicated masses of stress and strife. They exist independently of your own company's processes, typically being bolted onto a project after it has already been exposed to the public, and as such they are never really experienced in a pure form.

I have built 6 such frameworks over the last 10 years, some more successful than others. The following are my responses to common criticisms and some notes on how you can use ATDD to achieve your own development goals.


What is ATDD?

Put simply, Automated Test Driven Development is a state of mind or process: I write tests that are executed automatically and that drive my development forward. Unfortunately, there are a lot of people who argue against this mindset...

The practice of Test Driven Development is impossible to maintain at any non-research-ready company.
- Dude @Work

Most of us get cornered at our non-office open-space desks, poked and prodded for estimates and plans, only to later find out that our estimates are being used as live data and delivery dates. Most of us don't have the time to breathe alongside that, let alone be fully TDD.
- Another Dude @Work

We have heard these silly things said by many people over the course of our careers, and they have hardened into an unfortunate fake truth we run into wherever we go. These dudes are lying to us all, and they are costing us bajillions of simoleons.

The truth is that automated testing has a cost, but not testing has a different, more insidious, cost...

Tested Code

  • Evidence-Based Proof of Correctness: "It Just Works"
  • Failures can come from undocumented behavior and edge cases
  • Tends to create code that has good separation of responsibilities
  • Refactoring is commonplace
  • Costs time up front - and almost none long term

Untested Code

  • Strong Belief of Correctness: "It Just Worked"
  • Failures can come from every code change
  • Tends to create code with strong interdependencies
  • Refactoring is troublesome and complicated
  • Costs time and money long term

These things are not just true, they have been documented the world over, time and again. This is not to say that true, full-blown TDD is the only way to find solace, only that testing is something we are already doing, and accepting it is the way forward.

What do I test, and why?

To answer this, I have to start at the use case. TDD is useless unless you are running your tests as code is being created. This implies a few things:

  • Tests have to be quick
  • Tests have to be independent
  • Tests have to be deterministic

Tests should be run on each compile and should report back to the developer, in a clear and well-understood way, a simple yes or no: everything is as it should be, or it is not.
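
The wiring for that can be tiny. Here is a minimal sketch in Python; the tests/fast directory and the idea of invoking this from a post-build step are assumptions for illustration, not a prescription. It discovers the quick tests, runs them quietly, and boils the outcome down to a single PASS or FAIL.

    import sys
    import unittest

    def run_fast_suite():
        # Discover only the quick, self-contained tests (hypothetical directory).
        suite = unittest.defaultTestLoader.discover("tests/fast")
        # Run them with minimal noise.
        result = unittest.TextTestRunner(verbosity=0).run(suite)
        # Reduce everything to the one answer the developer needs.
        print("PASS" if result.wasSuccessful() else "FAIL")
        return 0 if result.wasSuccessful() else 1

    if __name__ == "__main__":
        sys.exit(run_fast_suite())

Hooked into a post-build step, something like this gives the per-compile yes or no without any extra ceremony.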

Why are these three principles important?

If my tests are not quick, I will be unlikely to run them per build.

I compile my project and it takes 300ms to create a new dll; it takes me another 200ms to look over at a report and see that everything is in order. That is less time spent executing my tests than it takes to blink my eyelids. It does not always work out this way, though, and unfortunately when you move into the world of games or embedded hardware it is not exactly useful to test things independently of their running environment.

  • My test cases for Flathead execute in about 20 seconds
  • My main project at work has tests that take 30 seconds to execute
  • The full automated framework I built for a client takes on the order of 2 days to execute

This comes at a cost for sure, and there is a corollary. You can also look at this the way our flashy idols of the 90s put it - mo money mo problems - or, more appropriately to our needs, mo tests mo time. Thanks Ma$e, you are missed.

To manage this issue, we typically execute our tests in groups. We split out API and simple tests from the long-running, interdependent tests; we mock out our classes and interfaces to avoid hitting web services, databases, or file systems wherever possible; and we avoid running acceptance or integration tests until the internal tests are taken care of.
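
A sketch of that split in Python, where the pricing module, fetch_price, and formatted_price are hypothetical stand-ins for code that would otherwise hit a real web service:

    import os
    import unittest
    from unittest.mock import patch

    import pricing  # hypothetical module whose fetch_price() calls a real web service


    class PriceFormattingTests(unittest.TestCase):
        """Fast, self-contained tests: safe to run on every compile."""

        @patch("pricing.fetch_price", return_value=42.0)  # never touches the network
        def test_price_is_formatted_with_currency(self, mock_fetch):
            self.assertEqual(pricing.formatted_price("SKU-1"), "$42.00")
            mock_fetch.assert_called_once_with("SKU-1")


    @unittest.skipUnless(os.environ.get("RUN_SLOW_TESTS"), "integration pass only")
    class LivePricingTests(unittest.TestCase):
        """Slow tests against the real service: run once the fast suite is green."""

        def test_round_trip_against_staging(self):
            self.assertGreater(pricing.fetch_price("SKU-1"), 0)

The fast group runs on every compile; the slow group waits until it is explicitly asked for.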

This brings us to independence...

If my tests are not independent, there are hidden dependencies within my code.

This tautology becomes even more obvious, and more painful, once you add two notes:

  • At any point I should be able to execute any single test to ensure it is working
  • At any point I should be able to execute any group of tests to ensure they are working

Given that Test B depends on the state being exactly as Test A left it, how do I execute Test B without first executing Test A? In turn, how do I execute Test A after Test B has just completed?
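
To make the problem concrete, here is a deliberately bad sketch in Python (the Cart class is hypothetical): two tests coupled through module-level state.

    import unittest

    from cart import Cart  # hypothetical class with add(), count(), and total()

    shared_cart = Cart()  # hidden shared state that couples the tests below


    class CoupledTests(unittest.TestCase):
        def test_a_add_item(self):
            shared_cart.add("widget")
            # Breaks on a second run in the same process: the cart is never reset.
            self.assertEqual(shared_cart.count(), 1)

        def test_b_checkout_total(self):
            # Breaks when run on its own: it silently assumes test_a already ran.
            self.assertGreater(shared_cart.total(), 0)

Run test_b_checkout_total on its own and it fails; run the whole class twice in one process and test_a_add_item fails. The flow below is how we avoid building suites like this.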

Every single test must follow a similar flow:

  • Set up the world as needed to confirm state and state change
  • Assert that the world is as it should be before the change
  • Execute the test
  • Assert that the world is as it should be after the change
  • Clean up, returning the universe to its original state

Programming is absolutely an exercise in data caching, so everything we do can be tested with assertions about state change and data manipulation.
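
A minimal sketch of that flow in Python, with a trivial append_line helper defined inline so the example stays self-contained:

    import os
    import tempfile
    import unittest


    def append_line(path, text):
        """The unit under test: append one line of text to a file."""
        with open(path, "a") as handle:
            handle.write(text + "\n")


    class AppendLineTest(unittest.TestCase):
        def setUp(self):
            # 1. Set up the world needed to observe the state change.
            self.directory = tempfile.mkdtemp()
            self.path = os.path.join(self.directory, "log.txt")
            open(self.path, "w").close()

        def test_append_adds_exactly_one_line(self):
            # 2. Assert that the world is as it should be before the change.
            with open(self.path) as handle:
                self.assertEqual(handle.readlines(), [])
            # 3. Execute the behavior under test.
            append_line(self.path, "hello")
            # 4. Assert that the world is as it should be after the change.
            with open(self.path) as handle:
                self.assertEqual(handle.readlines(), ["hello\n"])

        def tearDown(self):
            # 5. Clean up, returning the universe to its original state.
            os.remove(self.path)
            os.rmdir(self.directory)

Run alone or in any order alongside other tests, it builds its own world, verifies it, acts, verifies again, and leaves nothing behind.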