Write Automated Tests

One of the devs I used to work with has been asking a lot of questions recently, all revolving around the principles I hold dear as a TDD champion. I don’t claim to be a guru, but I hope to start a conversation about the importance of automated testing, share some of those principles, and most of all engage you, the readers, so I can learn from your experiences and perhaps offer guidance when questions arise.

Let’s dive in.

Principle #1 – Write Automated Tests

Above all else, a test driven development cycle requires that tests actually get written – a tautology of sorts. Most of us, as ridiculous as it may sound, are already doing this. We think to ourselves – it would be great if … – and write code to meet that expectation. In our minds we already have an idea of what we expect to happen, and those mental notes constitute the full depth of our written test.
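
To see how thin the line is, here is what that mental note might look like once written down – a minimal sketch using Python’s built-in unittest, with a hypothetical apply_discount function standing in for whatever your “it would be great if …” happens to be:

    import unittest

    # Hypothetical code under test: the thing we decided "it would be great if..."
    def apply_discount(price, percent):
        """Return the price after applying a percentage discount."""
        return round(price * (1 - percent / 100), 2)

    class ApplyDiscountTest(unittest.TestCase):
        """The mental note, written down where it can be run on demand."""

        def test_ten_percent_off(self):
            # "It would be great if a 10% discount on 20.00 came out to 18.00."
            self.assertEqual(apply_discount(20.00, 10), 18.00)

        def test_zero_discount_leaves_price_unchanged(self):
            self.assertEqual(apply_discount(20.00, 0), 20.00)

    if __name__ == "__main__":
        unittest.main()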

Of course, keeping those expectations only in our heads is prone to the same issues every marriage has had to deal with…

  • We all forget things,
  • some of those things are incredibly important,
  • and apologies only go so far

To get around this, some of us write those tasks down in trackers like Trello or Asana and follow them as religiously as our time and commitments permit. This is the equivalent of a specification document – good enough for the first implementation, but rarely kept up to date or reviewed over time. It is a good start because it gets you into the realm of formalizing your agreements – hopefully your spouse will accept the new process.

By employing a tracker you gain more than formalized documentation, though. Most trackers provide a means of commenting and visualizing progress, but something is still missing. Over time most applications grow in complexity. This can be seen in software trackers everywhere in the form of increasing ticket numbers and bug reports. Any non-trivial application will grow in this fashion.

The first thing typically done is to split testing across multiple people. A strong QA engineer is worth her weight in gold! That works for a while, but it still feels lacking. You wrote code 14 seconds ago and you want to know whether it has broken anything. Well, there are 4000 tests in the form of requirements tickets that need to be exercised before you can convincingly say you have a) delivered what was asked, and b) not broken other commitments.

Even the most amazing QA tester is human and will suffer from fatigue and from the habit of trusting you to deliver – since you always have, right? On top of that, time is an expensive and finite resource: as complexity grows it is not only harder to avoid breaking the interconnected parts, it also takes longer to test everything by hand. It’s 8 minutes later and we still have 3990 requirements remaining, and our 20-person QA team is burning through them at a rate of almost 20 an hour – I’ll see you in May with the HR lady, the VP of Engineering, and your software lead to discuss the overtime coming out of your paycheck.

This is why we all need to write automated tests.

  • We don’t know the impact of our changes until they are actually implemented.
  • Subtle changes to data structures and algorithms have effects that may ripple through your repository – you are using version control, no?
  • We cannot be expected to keep the entire scope of a project in our heads when working on the implementation details of a specific portion, let alone when context switching between the many applications, tools, and responsibilities needed by a modern developer.
  • We cannot be expected to manually test such complicated systems, even with the help of our gigantic and completely perfect robotic QA team™.
  • Most of all, we cannot be expected to wait hours or days to see if our changes are appropriate.

Let’s say the next phase is to distill your requirements into specifically documented test cases. You work with your product owner, QA team, and colleagues to refine this…

As a player I need a clear interface for selecting and configuring my gun so that I can quickly mod my gun when in a firefight.

down to these…

  • Press I for inventory screen to be shown
  • Press Escape to hide inventory screen
  • Left-click an item in the inventory to select it
  • Right-click an item in the inventory to open a fly-out menu
  • Fly-out menu includes a button to open the configuration screen
  • Configuration screen provides a 3D representation of the gun
  • Left-click hard points of the weapon to update the list of configuration options
  • Etc.

In many ways this is better, because each of these things is concrete and thus easier to check off a list. The issue is that even though you have nailed many things down, the list has grown. Manual testing is now less subjective, but it still takes 30 years to complete. This is a non-starter!

Teams I have been on have addressed the ever-growing requirements list in many wonderful ways…

  1. Don’t. This is probably the most common solution for mod teams and companies that don’t particularly care whether their work withstands the test of time.
  2. Only test key portions. This works for a while but results in legacy issues when undocumented or untested code is executed… Whoops.
  3. Only test the new features. Regression testing is the most important part! How do you know the application still works the way your customers or clients expect without testing it?

So okay, how do we

  1. make sure we have completed a task,
  2. document features as well as progress towards them,
  3. not get fixated on key or new features, and
  4. have our tests complete in a short enough period of time to be useful, without breaking some laws of physics?

We automate the shit out of it!

For non-trivial applications that require robustness and still need to be tested, the only place to go is the realm of Automated Testing.
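
To give a flavor of what that looks like, here is a minimal sketch using Python’s built-in unittest. The InventoryScreen class and its methods are hypothetical stand-ins for whatever your real UI layer exposes; the point is that the first few test cases from the list above become checks a machine can run in milliseconds, every time you change something:

    import unittest

    # Hypothetical stand-in for the real UI layer – just enough to make the sketch runnable.
    class InventoryScreen:
        def __init__(self):
            self.visible = False
            self.selected_item = None

        def handle_key(self, key):
            # "Press I for inventory screen to be shown" / "Press Escape to hide inventory screen"
            if key == "I":
                self.visible = True
            elif key == "Escape":
                self.visible = False

        def left_click(self, item):
            # "Left-click an item in the inventory to select it"
            self.selected_item = item

    class InventoryScreenTest(unittest.TestCase):
        def setUp(self):
            self.screen = InventoryScreen()

        def test_pressing_i_shows_inventory(self):
            self.screen.handle_key("I")
            self.assertTrue(self.screen.visible)

        def test_pressing_escape_hides_inventory(self):
            self.screen.handle_key("I")
            self.screen.handle_key("Escape")
            self.assertFalse(self.screen.visible)

        def test_left_click_selects_item(self):
            self.screen.left_click("rifle_scope")
            self.assertEqual(self.screen.selected_item, "rifle_scope")

    if __name__ == "__main__":
        unittest.main()

Run the whole suite with python -m unittest and those 4000 requirement tickets start turning into a regression suite you can exercise on every change, instead of a month of manual clicking.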

Once you come to this understanding, the rest of the process is just working through the friction of a different mentality and finding the right tools to fit the constraints of your application.

I am not a zealot for TDD, and I certainly don’t always work on applications that need a rigorously tested API, but generally speaking I err on the side of caution and build testing into everything I can, because prototypes have a tendency to become production applications.

The plan is to continue discussing the principles I have learned to enjoy as a TDD champion in weekly installments, but life may have different plans for me. Let’s use this as a starting point and see where it goes.

The next principle – Use A Strong Testing Harness!

I hope you have enjoyed this. If you have any questions, comments, or corrections, please drop them in the comments.