I keep experiencing this: I’m unhappy with how we do things at work, and I really wish we could do <x> instead. Then I finally get to try <x>, and it turns out to be worse than the situation I was complaining about. I’ve complained about functions/methods being way too long (more than 5 lines each). I’ve complained about not having proper requirements before starting work. I’ve complained about not having dedicated specialist software testers.
Now that I’ve experienced all of these things, I’m not so convinced anymore. I find a codebase where nothing exceeds 5 lines frustrating to work with, especially for non-trivial software written by a large team. I find requirements documents counterproductive and demotivating (talk in Norwegian). And I’m starting to realize that software testers can actually make your code worse and get you to focus on entirely the wrong things.
The most useful testers, in my experience, are the people who actually need the product you’re making. They will often be quite happy to test your application too, as they are eager to see it finished, and finished in a way that suits their way of working. If they aren’t eager to test your application, you should really be asking yourself if you’re making the right product. When these people test your product, they will test if it makes their life easier. They test whether or not it gets the job done in a good way. These are the most important things to get right.
Professional software testers, on the other hand, have a different focus. They start out with the requirements document. They have no need for the application under development; what is important to them is that it complies with the requirements. This is the definition of quality, after all. After ensuring that the software meets the specifications, they do their best to make it fail, reporting any success they achieve along the way. If they were testing microwaves, they’d be putting animals in there, reporting all deaths as critical bugs to be fixed immediately. If they were testing coffee machines, they would be pouring hot coffee in their eyes and reporting how your product can cause blindness. They input nonsense and complain that the output isn’t useful.
I can’t really blame them. They are there full time, doing nothing but testing. Testing a product they have never needed, and are unlikely to ever use again after the project completes. They need something to do. And with shows like Jackass, Top Gear and Mythbusters so popular, who can blame these hard working souls for being inspired to do damage in as many creative ways as possible?
It is commonly observed that most of the code in any application is practically never executed. Quite a lot is actually NEVER executed even once in production. This code increases the overall complexity, increases the chances of bugs, and decreases maintainability, while giving very marginal benefit, if any.
These are the kinds of features that your software testers will spend 98% of their time finding for you. You’re paying good money for someone to find ways to make your software worse. Instead of focusing on how the software is going to be used, or on the overall technical structure of the application, you end up focusing on “fixing” “bugs” that no sane user will ever encounter, or even recognize as a bug. As an example, I once got the following bug report:
"When I click the context menu button on the keyboard [I didn’t even know there was such a button before I got this bug report!], a context menu appears with all menu items disabled. The context menu is only relevant for clicks on individual cells in the table shown, so no context menu should be shown when I click on the context menu button"
I swear that was a real bug report! We ended up spending 15 hours “fixing” this “bug”. Most of it was spent WTFing and asking ourselves and our managers whether this was really necessary. A couple of hours here, another few hours there: it all adds up. More importantly, though, the codebase fills up with ridiculous amounts of code that serves no important purpose at all.
My second concern with full-time software testers is that the developers themselves stop thinking about testing, since they know they have dedicated testers to do it for them. This leads to sloppy coding, and to code that is hard to test at all.
I once had the luck of working on a project where we had a domain expert on the team with us full time. He didn’t so much test our application as use it. He used it as we were building it, giving us invaluable feedback along the way, every day. When he reported a bug, it was always something that stopped him from doing his job. He was focused on how the application worked, not on whether it matched a specification, or whether it was physically possible to get the system to output nonsense.
I say let the coders do the monkey testing. Make it the programmers’ own responsibility to ensure the system handles reasonable input ranges, say. Send them to a TDD course, teach them FitNesse, even if you don’t plan on using either. It is important that your developers understand what it means to test for correctness, and how important it is. Once they understand this, and know how to do it, it doesn’t matter whether they do TDD or use an acceptance test framework (they could just stick to normal unit test frameworks). The point is that we programmers are more than capable of doing this kind of testing ourselves. Someone just needs to ensure that we take this responsibility seriously.
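To make this concrete, here is a minimal sketch of the kind of input-range testing a programmer can own themselves, using Python’s standard `unittest` module. The `parse_quantity` function and its limits are entirely hypothetical, invented for illustration; the point is that the developer, not a dedicated tester, writes down what “reasonable input” means and checks it.

```python
import unittest


def parse_quantity(text):
    """Parse an order quantity; accept 1..10000, reject everything else.

    Hypothetical example function: the name and the bounds are
    illustrative, not from any real codebase.
    """
    value = int(text)  # raises ValueError on non-numeric input
    if not 1 <= value <= 10000:
        raise ValueError("quantity out of range: %d" % value)
    return value


class ParseQuantityTest(unittest.TestCase):
    def test_accepts_reasonable_values(self):
        self.assertEqual(parse_quantity("1"), 1)
        self.assertEqual(parse_quantity("10000"), 10000)

    def test_rejects_out_of_range_values(self):
        for bad in ("0", "-5", "10001"):
            with self.assertRaises(ValueError):
                parse_quantity(bad)

    def test_rejects_nonsense(self):
        # The "hamster in the microwave" case, handled by the
        # programmer up front instead of reported by a tester later.
        with self.assertRaises(ValueError):
            parse_quantity("a microwave full of hamsters")
```

Run it with `python -m unittest` from the directory containing the file. Tests like these cost minutes to write, document the intended input range, and keep running on every build.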
One thing I did miss before I had it, and haven’t regretted for a second since, is automated testing. I love unit tests. I think most programmers worth keeping in your employment do too. With a team of programmers keen on testing, you don’t need a team of software testers, at least not full time. Invest that time in domain experts who actually need your product instead. If no domain experts are interested in helping you out, you might be building the wrong product. And if management actually stops you from getting access to and input from domain experts or real users… you’re probably doomed anyway.