Science at school was fun, wasn't it? Peering at cells through the microscope, igniting tiny lengths of magnesium. Even physics got pretty good once you got past over-stretching springs. The experiments were the best part, the practical act of discovering whether a hypothesis could be believed or not. Good old Scientific Method. It was drilled into us for years.
Then I went to University, learned to program, and it never occurred to me that I ought to apply the same principle to my code. Some years later, when I discovered test driven development (TDD), it was a revelation. I was a Born Again tester. I could write something, and then prove that it worked. I was a proper computer Scientist!
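That test-first rhythm has a simple shape: write a failing test, then write just enough code to make it pass. A minimal sketch using Python's `unittest` (the `slugify` function and its behaviour are my own illustration, not anything from the conference):

```python
import unittest

# Red: the test is written first. Run before slugify exists, it fails.
class TestSlugify(unittest.TestCase):
    def test_lowercases_and_hyphenates(self):
        self.assertEqual(slugify("Hello World"), "hello-world")

# Green: now write just enough code to make the test pass.
def slugify(title):
    return title.lower().replace(" ", "-")

# Run the suite programmatically (equivalent to `python -m unittest`).
suite = unittest.defaultTestLoader.loadTestsFromTestCase(TestSlugify)
result = unittest.TextTestRunner().run(suite)
```

The test doubles as a small, executable hypothesis about the code: run it, and you have your experiment.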
What's more, I could always be sure that some idle, careless mistake would not ruin the party later. And of course, testable code is always well-designed code. TDD became part of my religion, and rightly so. Scientific method, better design and a bulletproof guarantee: TDD for the win, right?
Well, not always. A theme running through a number of presentations at this year's QCon London conference was a challenge to our unthinking adoption of best practices, such as automated builds, component-based architectures and asynchronous IO, which have entered software development lore. Dan North's entertaining talk "Decisions Decisions" neatly exposed the trade-offs we make when using them. None of these trade-offs is complex or obscure; it is just that, as problem solvers, we tend to focus on the benefits and rarely on the costs. In short, we are not always very good scientists.
And TDD is unmistakably a trade-off. Well-tested code doesn't half take a long time to do properly (more so if you favour the super-readable Behaviour Driven Development, invented by Mr North himself). This is fine for mission-critical code, of course. But when does your code become mission-critical? By definition, it can't be until it is running in production, which will presumably be some time after you wrote it, and which rather conflicts with that Write-Tests-First mantra.
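To see where the hours go, consider what "super-readable" means in practice. A behaviour-style test reads like a specification sentence, with the Given/When/Then steps spelled out. This sketch is my own illustration of the style in plain Python, not Mr North's actual tooling, and the `ShoppingBasket` class is invented for the example:

```python
# An invented domain class, just enough to have behaviour worth specifying.
class ShoppingBasket:
    def __init__(self):
        self.items = []

    def add(self, name, price):
        self.items.append((name, price))

    def total(self):
        return sum(price for _, price in self.items)


# Behaviour-style tests: each name is a sentence, each body a Given/When/Then.
def test_an_empty_basket_totals_zero():
    basket = ShoppingBasket()          # Given an empty basket
    assert basket.total() == 0         # Then the total is zero

def test_adding_two_items_sums_their_prices():
    basket = ShoppingBasket()          # Given an empty basket
    basket.add("tea", 2.50)            # When two items are added
    basket.add("cake", 3.00)
    assert basket.total() == 5.50      # Then the total is their sum
```

Eloquent, certainly, but every well-named scenario is time spent formalising behaviour that may yet be thrown away.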
And what happens if I do release it to production, but nobody likes it? In that situation I'd like to have done as little work as possible on what amounted to a prototype. I would certainly prefer to not spend hours writing an eloquent suite of tests that formalise a feature nobody wants.
Oh, and thinking back, I have seen many examples of extensively tested code that was both horribly written and horribly tested. Encapsulation and dependency injection alone, it seems, are not enough to make well-designed software.
Daniel Worthington-Bodart's talk on 10-second builds revealed another major issue with heavily tested projects: they can take ages to build (especially when checking integration points), which can become a major problem in itself. If you follow another best practice of making many small, incremental changes, then waiting for a clean build after each one can seriously damage your effectiveness. The occasional coffee-run build time can be handy, but you don't want that sort of disruption for the whole day.
Rich Hickey's presentation on simplicity touched on our over-reliance on tools in general. Running a suite of automated tests against some code can and does provide a false sense of security. Code with high test coverage isn't necessarily (or ever) bug-free. Your change didn't break any existing tests, so it can't possibly have broken anything. Still... those new bugs in production... how did they happen?
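That false sense of security is easy to demonstrate. The hypothetical function below (my own illustration, not from the talk) achieves 100% line coverage from passing tests, and is still wrong:

```python
def is_leap_year(year):
    # Buggy: ignores the century rules of the Gregorian calendar entirely.
    return year % 4 == 0

# Both outcomes are exercised, so line and branch coverage are 100%,
# and both assertions pass.
assert is_leap_year(2020) is True
assert is_leap_year(2019) is False

# Yet 1900 was not a leap year: is_leap_year(1900) returns True.
# The suite is green; the code is wrong.
```

Coverage tells you which lines ran, not whether the right cases were ever asked about.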
None of these speakers set out to rail, Quixote-like, against these techniques. Despite their drawbacks they remain exactly what you'd expect: best practice. The point was to dismantle some of the religious fervour with which we sometimes adopt them and evangelise them to others. As programmers we enjoy patterns, and best practices provide general building blocks for writing decent code, which is definitely a good thing. However, as can be the case with Agile adoption, weeks or months pass, the practices become routine, then habit. Eventually the brain disengages and we degenerate into unthinking adherence.
My takeaway message from QCon was this: never stop thinking, and never stop questioning why you're doing something - especially when somebody else tells you to do it. Good programmers follow these principles, but better programmers always understand and remember their cost.