From Automated Cloud Deployment to Progressive Delivery

By: David Eastman
1st October 2019

So, your team is on its Agile journey and you can more or less track a deployment from a story, through git commits, to an automated builder, to an artefact repository, into a container, and then onto the cloud. Or, in fact, any of the many other valid variants that lead you to believe your deployments are largely automated. What you can point to is that a business request comes in and the result is a service or a new app. Your purview seems to stop before your users can even respond, though; is that good enough?

Let's go back to the early idea of a release. It was a collection of all the features and fixes that the loudest stakeholders had persuaded the product owner to put at the top of the story list. A date was promised, but QA was late, and the persistent stakeholders squeezed a little more juice out of a story. Then it was stuffed into a ball and delivered to your servers overnight, lest anyone notice. The next day the developers had to deal with the fallout as the support queues grew with confused users.

While developers were getting the hang of the Agile notion of automated deployment, the release life cycle pretty much ended with a deployment going live. This gave certainty that what had been built and placed in the artefact repository was the same thing the testers had seen in the staging environment, and that it contained the changes Jira said it had.

This type of release was very much like a birthday present. Your uncle talked to your dad briefly, without your knowledge, and agreed that you really would benefit from a new pair of socks. These were then delivered on the day with the receipt in the bag, just in case you needed to take them back.

The first evolution came with the concept that a release wasn't exactly synonymous with a deployment. Finally, the end user's perspective nudged its way into corporate release strategy - updates could be deployed without necessarily impinging directly on every user. What if there were two identical release environments, with clever network switching meaning that only one was truly live? This type of flip-flop arrangement (sometimes referred to as blue/green deployment) allowed changes to be tested internally in a realistic environment before going live.
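
To make the flip-flop concrete, here is a minimal, hypothetical sketch in Python: two identical environments sit behind a tiny router, and a single piece of state decides which one receives live traffic. The environment hostnames and the switch_live function are purely illustrative, not any particular vendor's API; a real setup would do this at the load balancer or DNS layer.

    # A minimal blue/green switch, assuming two identical environments
    # reachable at hypothetical internal hostnames.
    ENVIRONMENTS = {
        "blue": "http://blue.internal.example:8080",
        "green": "http://green.internal.example:8080",
    }

    # The single piece of state that decides which environment is live.
    live_environment = "blue"

    def route_request(path: str) -> str:
        """Return the backend URL that should serve this request."""
        return f"{ENVIRONMENTS[live_environment]}{path}"

    def switch_live(target: str) -> None:
        """Flip live traffic to the other environment once it has been verified."""
        global live_environment
        if target not in ENVIRONMENTS:
            raise ValueError(f"unknown environment: {target}")
        live_environment = target

    # Deploy and test the new release on green, then flip:
    # switch_live("green")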

It was the advertising industry that prompted the idea of A/B testing. Instead of guessing which variation of a campaign might be superior, the surprisingly scientific method of showing a controlled sample and a variation to audiences was used to see which they preferred. In the digital age, it is possible to deploy two variations of the same release to different server sets. This might mean that each user session could be using either release. It would then be necessary to associate the resulting click-throughs, or whatever success measurement is chosen, with the specific release flavour. The worth of this kind of experimentation was only as good as the question you asked, but it was at least observing real user interaction.
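
A rough sketch of that association, assuming sessions are split deterministically by hashing a session identifier; the variant names and the in-memory counters are stand-ins for whatever analytics pipeline a real system would use.

    import hashlib
    from collections import Counter

    impressions = Counter()
    conversions = Counter()

    def assign_variant(session_id: str) -> str:
        """Deterministically place a session on release A or release B."""
        digest = hashlib.sha256(session_id.encode()).hexdigest()
        return "A" if int(digest, 16) % 2 == 0 else "B"

    def record_impression(session_id: str) -> None:
        impressions[assign_variant(session_id)] += 1

    def record_conversion(session_id: str) -> None:
        """Tie the click-through (or whatever success measure) back to the variant."""
        conversions[assign_variant(session_id)] += 1

    # After the experiment, compare conversion rates per release flavour:
    # rates = {v: conversions[v] / impressions[v] for v in ("A", "B") if impressions[v]}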

Moving on, DevOps environments began to make more use of feature flags, or toggles. These were code paths that could deliver feature changes on live servers, controlled through configuration changes. This reduced the need to re-deploy to make certain changes. Release-minded teams were coalescing around configuration, as opposed to just code. Whereas code, once compiled and built, was locked away in unreadable artefact files, a larger stakeholder community could administer configuration files, usually written in plain English. This helped to keep feature changes closer to the stakeholders and less likely to slip back into silos.
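
A minimal sketch of what a flag check might look like, assuming the flags live in a plain configuration file that non-developers can edit; the file name and flag key below are made up for illustration.

    import json

    def load_flags(path: str = "feature_flags.json") -> dict:
        """Read flags from a plain file; re-read at runtime instead of re-deploying."""
        try:
            with open(path) as f:
                return json.load(f)
        except FileNotFoundError:
            return {}

    def is_enabled(flags: dict, name: str) -> bool:
        return bool(flags.get(name, False))

    # feature_flags.json might contain: {"new_checkout_flow": true}
    flags = load_flags()
    if is_enabled(flags, "new_checkout_flow"):
        print("serving the new checkout flow")
    else:
        print("serving the existing checkout flow")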

As a more defined understanding of delivery arrived, so did a better understanding of the user community. It used to be the case that "a user" was a one-line entry in a database. Even firms whose business is not mining their users' data are well aware that there are different sets of users for products and services.

Traditionally, there have always been internal users or beta testers: those within the firm whose job is to check that the bits they understand, or are responsible for, work as expected. Then, outside the firewall, there is the tech-savvy community who actively want the latest updates. These users report bugs and often compare your product with the competition, perhaps using both. This audience will not lose their marbles if you deliver a bug to them; more to the point, they will notice rapidly, perhaps telling everyone on social media about your failings. Next come the disengaged users, who may only have the free version or tier of your product. These people are more likely to work on impressions and are best not disappointed. They are also unlikely to be keen on updates. Finally, there is the large core of solid users who pay their way, just want things to work, need a clear path to continuity, already appreciate the software, and will absorb the road map by osmosis. These will probably include your biggest users too. Of course, if you have no way of distinguishing your users, all of the above is moot.
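
If distinguishing users is the prerequisite, a very rough sketch might tag each account with a release cohort; the cohort names simply mirror the groups described above and are not a standard taxonomy, and the signals used to place a user (staff email domain, beta opt-in, plan) are hypothetical.

    # Illustrative cohorts, ordered from the smallest, most forgiving audience
    # out to the large paying core.
    COHORTS = ("internal", "early_adopter", "free_tier", "core")

    def cohort_for(user: dict) -> str:
        """Place a user record into a release cohort using whatever signals exist."""
        if user.get("email", "").endswith("@example.com"):  # hypothetical staff domain
            return "internal"
        if user.get("opted_into_beta"):
            return "early_adopter"
        if user.get("plan") == "free":
            return "free_tier"
        return "core"

    print(cohort_for({"email": "dev@example.com"}))               # internal
    print(cohort_for({"plan": "pro", "opted_into_beta": False}))  # core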

With services sitting on a public cloud and CDN-like edge services such as CloudFront, the ability to release by territory is much more straightforward. This is also increasingly necessary for legal reasons, EU regulations being one example. It allows for tactical releases to communities of different sizes and in different time zones. The practice of deploying changed services to a small group for risk mitigation, or "canarying", is another method that takes advantage of smart routing. It does, though, require the ability to quickly observe and roll back releases.
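
A canary split can be sketched as weighted routing with a quick way back. The weight, hostnames and rollback function below are placeholders; in practice the split and the rollback would usually live in the load balancer or service mesh rather than in application code.

    import random

    # Fraction of traffic sent to the canary release; start small.
    CANARY_WEIGHT = 0.05

    BACKENDS = {
        "stable": "http://stable.internal.example",
        "canary": "http://canary.internal.example",
    }

    def choose_backend() -> str:
        """Send a small, random slice of requests to the canary."""
        return BACKENDS["canary"] if random.random() < CANARY_WEIGHT else BACKENDS["stable"]

    def rollback() -> None:
        """Observed a problem? Stop sending any traffic to the canary."""
        global CANARY_WEIGHT
        CANARY_WEIGHT = 0.0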

Today, most software has some degree of "dark launch" that restricts the visibility of a release to the appropriate community until all is well. In this sense, progressive delivery (defined by James Governor as "continuous delivery with fine-grained control over the blast radius") can be seen as the virus-like spreading of your software or services from your development teams' laptops to the last user. Observing service use is simply part of the core concern of virtually every company in the tech space. From this perspective, progressive delivery changes the way you see your development team: from mere builders to user-fulfilment specialists, which, of course, they always were.
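
One way to picture that spread is a rollout that widens ring by ring, from the development team outwards, and only continues while observation says all is well. The ring names, error budget and the stubbed error_rate function below are illustrative assumptions, not a prescribed pipeline; a real system would query its own metrics.

    # Rings widen from the development team's own machines out to every user.
    RINGS = ["dev_laptops", "internal", "early_adopters", "everyone"]

    ERROR_BUDGET = 0.01  # hypothetical: halt if more than 1% of requests fail

    def error_rate(ring: str) -> float:
        """Stand-in for real observability; a live system would read its monitoring."""
        return 0.002

    def progressive_rollout() -> str:
        """Expand visibility one ring at a time, stopping where observation says stop."""
        released_to = ""
        for ring in RINGS:
            print(f"release now visible to {ring}")
            if error_rate(ring) > ERROR_BUDGET:
                print(f"rolling back from {ring}")
                break
            released_to = ring
        return released_to

    progressive_rollout()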

The future will undoubtedly involve increasingly customer-driven releases, which goes hand in hand with the increasing use of data science to study user communities and their data exhaust. The important takeaway is to make sure your team reaches further to the right-hand side of the product journey and makes the user's reaction to what you build a larger part of your sensory input. Think more about constant small changes and less about ceremonial releases.