Agile Release & Testing Procedures

You've scored yourself an amazing team, you've all wholesale committed to some form of agile development model (Scrum, Crystal, FDD, etc.), and now you're in the thick of it - with a release coming up - and the question is: how do you test, what do you test, and when do you stop?

The underlying tenets of agile - short iterations, quick releases, faster feedback loops - create an inherent tension with many of the 'traditional' testing requirements. A quick Google search for 'types of software testing' turns up several dozen concepts, many of which are nothing short of unrealistic given a typical one- or two-week iteration. Of course, the value of any practice depends on its context, which is another way of saying: there are good practices in context, but there are no best practices. Hence, let's set the context: you're a small web development team (2-7 people), working on a live project (you have customers), with the expectation of delivering a functional slice of your backlog every two weeks.

Flickr, FriendFeed, and others...

Good software testing is a challenging process. Every project follows some logical path from under the developer's fingers into a production environment - granted, sometimes this process can be as simple as editing a file live (we're all guilty of it at times). In their early days, Flickr ran with pseudo 30-minute release cycles; more recently, FriendFeed has been showing several 1-2 day release iterations. Have either fully automated their testing and staging environments? I doubt it. In fact, Cal Henderson's remarks about the early days of Flickr clearly show a lack of such infrastructure, and yet, Flickr has always been known for an excellent user experience.

Discovering the formula

How far do you go? Unit testing falls out of test-driven development, integration testing requires a staging environment, user acceptance is the logical conclusion in Scrum - and what about the remainder? Given scarce resources - time, people, and money - most web startups seem to forgo the rest, or pick and choose from it. Perhaps this is a sign of immaturity and we're unconsciously hurting ourselves, or maybe the old model simply does not recognize modern requirements. (A minimal sketch of that first rung follows below.)
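To make that first rung concrete, here's a minimal sketch of the kind of unit test that falls out of test-driven development. The slugify helper and its test cases are hypothetical - a stand-in for any small slice of backlog functionality - but the shape is the point: the assertions get written first and drive the implementation.

    import unittest

    def slugify(title):
        """Turn a post title into a URL-friendly slug (hypothetical helper)."""
        return "-".join(title.lower().split())

    class TestSlugify(unittest.TestCase):
        # In TDD these tests exist before slugify() does; the code
        # is written to make them pass, and they stay as regression cover.
        def test_lowercases_and_hyphenates(self):
            self.assertEqual(slugify("Agile Release Procedures"),
                             "agile-release-procedures")

        def test_collapses_extra_whitespace(self):
            self.assertEqual(slugify("  quick   release  "),
                             "quick-release")

    if __name__ == "__main__":
        unittest.main()

Cheap to write, runs in milliseconds, and needs no staging environment - which is exactly why unit tests survive even in two-week iterations while the heavier layers get triaged.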

Intrigued by this question, I've recently engaged in a number of online and offline conversations with team leads and fellow entrepreneurs. The end result? A full spectrum of testing environments, including a few cases of a complete lack thereof - all the while, the projects in question are high-profile, high-traffic websites. For obvious reasons, the names will remain anonymous, but this raises an interesting question:

How do you test, what do you test, when do you (or your team) stop? Phrased differently, what are your release procedures?

Would love to hear your feedback, war stories, and rumors. Once the dust settles, I'll aggregate and report the results.

P.S. Testing is not an assurance of quality.
