
Sunday, October 2, 2016

Service Simulation rather than Integration Testing of Micro Services

Micro Services or Distributed Monolith

One of the main characteristics of a Micro Service architecture is that each Micro Service has its own lifecycle. This is very important, as it is the only way to break the monolith pattern. If the lifecycle of a Micro Service is tied to another entity, such as a Service, System or Solution, then we have a distributed monolith and not a Micro Service architecture.

I've found that almost everyone agrees in theory, but in practice almost everyone still wants to do integration testing before release. Integration Testing is so deep in our DNA; we have been doing it for years and don't really know any other way.

Let's explore the problem of Integration Testing a little bit. Say we have Micro Service X and it is consumed by Micro Services Y and Z. Then there is no denying there is a dependency between X, Y and Z. Part of a Micro Service architecture is API versioning and backward compatibility. But even with these practices and good test coverage on component X, it's hard to deny that the integration between X, Y and Z can still fail. If we rely on Integration Testing to find these failures, then we create a dependency between the teams developing Y and Z and the team developing X. That means X no longer has a lifecycle of its own.

Hence if we do rely on Integration Testing we have a distributed monolith, which imho is the only thing worse than a monolith.

Consumer Contract Testing


Martin Fowler talks about Contract Testing and Consumer Driven Contracts (http://martinfowler.com/articles/consumerDrivenContracts.html), which is a great pattern. In short, it means that the developers of Micro Services Y and Z provide contract tests for Micro Service X. The tests provided by the developers of Y and Z correspond to the parts of the X API they consume. These tests are executed as part of the test suite belonging to service X. This way the developers of X know if they broke any backwards compatibility.
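
To make this concrete, here is a minimal sketch of what such a consumer contract test could look like, assuming JUnit and plain Groovy. The endpoint, field names and class names are made up for illustration; they are not from any specific contract testing framework.

// A contract test contributed by the team behind consumer Y. It only pins down
// the parts of X's API that Y actually reads, so X is free to evolve everything else.
import groovy.json.JsonSlurper
import org.junit.Test

class OrderConsumerContractTest {

    @Test
    void orderByIdExposesTheFieldsConsumerYReads() {
        // Hypothetical endpoint of service X, started by X's own component test suite.
        def order = new JsonSlurper().parse(new URL("http://localhost:8080/api/orders/42"))

        assert order.id != null
        assert order.status in ["OPEN", "SHIPPED", "CANCELLED"]
        // Extra fields in the response are fine; the contract only covers what Y consumes.
    }
}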


I think this is a great pattern and one that helps developers maintain the right amount of backwards compatibility with a fast feedback mechanism. Still, I am not sure it is enough. At least in our organization everyone still wants to do Integration Testing.

There are still a lot of integration failures that can occur outside of the component tests of X. In production-like environments there can be deployments in different networks, service discovery and other factors playing in.

Service Simulation

One solution is to introduce Service Simulation. Instead of implementing test automation we can implement simulator bots. These bots continuously exercise our services. This creates a constant and even load on our solution. The monitoring of our services is used to notify the developers if something failed. No alarms for a given time period after a deploy means our solution still works, the redeployment of our Micro Service didn't break anything, and we can go on to deploy into the next environment.

It's also relatively easy to implement this as part of the continuous delivery build pipe. Deploy into an environment, check the alarms in that environment for five minutes, and if all is green then the Micro Service is verified and the Continuous Delivery implementation continues.
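
As an illustration, a deploy-verification step along those lines could look something like the sketch below. It assumes the AWS SDK for Java and that the bots' CloudWatch alarms share a per-environment naming prefix; the prefix and the five minute wait are assumptions made for the example, not our actual implementation.

import com.amazonaws.services.cloudwatch.AmazonCloudWatchClientBuilder
import com.amazonaws.services.cloudwatch.model.DescribeAlarmsRequest
import com.amazonaws.services.cloudwatch.model.StateValue

def cloudwatch = AmazonCloudWatchClientBuilder.defaultClient()

// Give the bots five minutes of traffic against the freshly deployed service.
sleep 5 * 60 * 1000

// Any alarm in ALARM state for this environment fails the verification step.
def firing = cloudwatch.describeAlarms(new DescribeAlarmsRequest()
        .withAlarmNamePrefix("qa-")                 // hypothetical per-environment prefix
        .withStateValue(StateValue.ALARM))
        .metricAlarms

if (firing) {
    throw new IllegalStateException("Deploy verification failed, alarms firing: ${firing*.alarmName}")
}
println "No alarms for five minutes, promoting to the next environment"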

Though it's not necessary to have an advanced Continuous Delivery Engine for this pattern to work; just alarm notifications to team channels on Slack are powerful by themselves.

Another important thing this solves is that the developers and testers start using production-grade monitoring to verify services. I think it's a very big obstacle in a DevOps transformation that Developers and Testers rely on Test Reports to understand a system while these aren't available in production. Developers and Testers are usually lost when it comes to production systems. This moves their understanding towards runtime operations of a system, which bridges the gap between Dev and Ops in a nice way.

Bots can be deployed into any environment, providing the same deploy-and-verify process in all environments. Personally I like the idea of having a Bot-workload-only environment as the first full-featured environment. The now obsolete Integration Testing environment could easily be converted into a Bot-only environment.

While a Bot can basically be implemented in a number of ways, I like the idea of using Function as a Service, such as AWS Lambda, to implement Service Simulation Bots. The scheduled nature and high specialization of a Bot make it a perfect FaaS workload.
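
A minimal sketch of such a bot is shown below, written in Groovy against the aws-lambda-java-core RequestHandler interface. The endpoint URL is made up, and the assumption is that the function is triggered on a schedule and that a CloudWatch alarm on the function's error metric feeds the team notifications.

import com.amazonaws.services.lambda.runtime.Context
import com.amazonaws.services.lambda.runtime.RequestHandler

class OrderSimulationBot implements RequestHandler<Map, String> {

    @Override
    String handleRequest(Map input, Context context) {
        // Exercise one realistic user flow; any exception marks the invocation as an
        // error, which the environment's alarms and team notifications pick up.
        def connection = new URL("https://qa.example.com/api/orders/42").openConnection()
        connection.connectTimeout = 5000
        connection.readTimeout = 5000
        if (connection.responseCode != 200) {
            throw new RuntimeException("Order lookup failed with HTTP ${connection.responseCode}")
        }
        return "OK"
    }
}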

Test Pyramid



  • Simulation - Bots executing in all environments, with Monitoring and Alarms verifying the application.
  • Component & Contract Tests - Deployed functional tests using mocks to stub consumed services, plus Contract Tests provided by the developers consuming this service.
  • Unit Tests - Well, nothing new there.


I think this is what we need to do in order to release our micro services with high confidence and with lifecycles of their own.

Thursday, January 7, 2016

Optimizing Runtime Cost through Continuous Delivery

We have always had the need to understand our application runtime costs. In legacy enterprises this has often been done at a finance level and someone in the Runtime Operations department has been responsible for that cost. There have been different ways of distributing the costs to the right organization. Either each application has had its own infrastructure and owned the full cost of it, or applications have shared infrastructure in order to save money, utilize large servers in a better way or just minimize the setup and maintenance work on the servers.

In many legacy enterprises there has been a huge disconnect between the application evolution and the infrastructure evolution. Application development has changed towards micro services, with a lot of small applications with lifecycles of their own. The legacy enterprise's runtime operations are largely unable to deliver on these changes due to processes, licenses and hardware leases of huge "enterprise scale servers" for the old monoliths. Requests for several servers per micro service have been waved off as unrealistic.

Several ways around this have been invented in order to co-host multiple applications on the same servers in a somewhat isolated way. Docker draws so much attention partially because legacy enterprises can keep their big old servers and still isolate the micro services on them. (Of course that is not the only good thing about Docker.)

Regardless of how this is solved, understanding runtime cost becomes exponentially harder with micro services on legacy enterprise infrastructure.

When transitioning to DevOps and Micro Services each team is responsible for delivering its services end to end in all environments with the right functionality, performance and, imho, at the right cost. So what is the right cost? Before even answering that question, let's start with what the cost is. The DevOps team needs to know the cost of its services. In the legacy enterprise that cost might, in the best case, arrive as some kind of cost split formula calculated by a finance guy based on how many micro services shared the servers, a cost split formula for any license costs and a cost split formula for the man hours to maintain the servers. At best a team would get this information once a month.

Part of the reason we want DevOps is so that the team has full competence and ability to improve its services. This needs to include the runtime costs of the services. A performance optimization of a service that cuts resource requirements by 25% should result in lower runtime costs for the service. In the legacy world of cost splits and financial formulas to calculate cost this just doesn't happen.

Thankfully we have cloud providers. Not only do we get the right resources to handle our services when we need them, but we also get billed real money for them.

With auto scaling, elasticity and all the other nice features that we get, there is also a risk. We get away with writing increasingly bad applications. Bad performance? Doesn't matter as long as it scales horizontally. We can IO-block ourselves to hell and back, and as long as we scale horizontally we get away with it. Well, as long as we don't have to take responsibility for the cost of our service. This is why it's so important for the DevOps team to take responsibility for the runtime cost of the service.

For a few years I have had a dream of being able to stamp a runtime cost on a load test and correlate the performance measured in the test with the cost of the runtime environment used to run the test. We are still not there with our applications, since we run on AWS Auto Scaling Groups and AWS bills us per started hour. This makes the billing data too blunt to give the correlation between performance and cost from a shorter test. With a Micro Service architecture that uses AWS services such as Lambda, DynamoDB and Kinesis this would actually be achievable today.

With this vision in mind we have been able to integrate the runtime costs of our Micro Services in AWS with our Continuous Delivery as a Service implementation (Delivery Engine) in another way. For us, DevOps Teams are the owners of services and Solutions are the drivers of the cost. So everything that we launch in AWS we tag with the name of the micro service, the owning team and the cost-driving solution.
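
As a rough illustration, tagging an EC2 instance at launch time could look like the sketch below, using the AWS SDK for Java from Groovy. The tag keys and values are assumptions made for the example, not our actual naming scheme.

import com.amazonaws.services.ec2.AmazonEC2ClientBuilder
import com.amazonaws.services.ec2.model.CreateTagsRequest
import com.amazonaws.services.ec2.model.Tag

def ec2 = AmazonEC2ClientBuilder.defaultClient()
def instanceId = "i-0123456789abcdef0"              // hypothetical instance from the launch step

ec2.createTags(new CreateTagsRequest()
        .withResources(instanceId)
        .withTags(new Tag("service", "order-service"),     // name of the micro service
                  new Tag("owner-team", "team-blue"),      // owning DevOps team
                  new Tag("cost-driver", "webshop")))      // cost driving solution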

So this allows our DevOps Teams to see the cost of their Services.



Here we provide a graph of total costs for the services owned by a DevOps team. The services are listed in a top list below the graph, along with a long-running trend of the cost for each component. This allows the Product Owners and Team Members to act on increasing costs. Links are provided to the Dashboard Page of each service.

The Service Dashboard provides the history of each version built in the Delivery Engine. Our Delivery Engine builds the service, black-box tests the deployed service, load tests it, bakes AWS AMIs for it and launches it into the right environments. On the dashboard we visualize the total cost of the service grouped by environment (delivery engine, exploratory testing, QA and prod environments).


From the same dashboard we visualize service usage. In the example below it's a visualization of CPU consumption across an auto scaling group.



This way we allow the team to ensure that it has the right scaling rules for its services and the right amount of resources in each environment.

By visualizing the cost of the runtime environment for each service and combining it with Continuous Delivery, Continuous Performance Testing and DevOps we allow our developers to constantly tweak and improve the performance and optimize the cost of our runtime environments. The lead time of the cost reporting is still at the "next day" level, as we report the cost per day and per month, but I still think this is a reasonable feedback loop when it comes to cost optimization.



Friday, February 27, 2015

Pipes as Code

Finally we have started to move away from having build pipes as chains of Jenkins jobs. A lot has been written on the subject of CI systems not being well suited to implement CD processes. Let me first give a short recap of why, before I get into how we now deliver our Pipes as Code.

First of all, pipes in CI systems have bad portability. They are usually a chain of jobs set up either through a manual process or through some sort of automation based on an API provided by the CI system. The inherent problem here is that the pipe executes in the CI system. This means that it is very hard to test and develop a pipe using Continuous Delivery. Yes, we need to use Continuous Delivery when implementing our Continuous Delivery tooling, otherwise we will not be able to deliver our CD processes in a qualitative, rapid and reliable way.

Then there is the problem of the data that we collect during the pipe. By default the data in a CI system is stored in that CI system, often on disk on that instance of the CI server. Adding insult to injury, navigation of the build data is often tied to the current implementation of the build pipe. This means that a change to the build pipe means that we can no longer access the build data.

For a few years now we have been offloading all the build data into different types of storage depending on what type of data it is. Metadata around the build we store in a custom database. Logs go to our ELK stack, metrics to Graphite and reports to S3.
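
For example, shipping a pipe metric to Graphite needs nothing more than its plaintext protocol; here is a minimal sketch where the host and metric path are made up.

// Graphite's plaintext listener takes "metric.path value epoch-seconds" lines, 2003 is the default port.
def timestamp = System.currentTimeMillis().intdiv(1000)
new Socket("graphite.example.com", 2003).withStreams { input, output ->
    output << "build.my-service.pipe.duration-seconds 312 ${timestamp}\n"
}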

Still we have had trouble delivering quality Pipes. Now that has changed.

We still use a CI Server to trigger the Pipe. On the CI server we now have one job, "DoIt". The "DoIt" job executes the right build pipe for every application. Let's talk a bit about how we pick the pipe.

Each git repo contains a YML file that says how we should build that repo. That's more or less the only thing that has to be in the repo for us to start building it. We listen to all the gerrit triggers and ignore all repos without the YML file.

The YML is pretty much just:

pipe: application-pipe
jdk: JDK8


We describe our build pipes in YML and implement our tasks in Groovy. Here is a simple definition. 

build:
    first:
     - do: setup.Clean
     - do: setup.Init
    main:
     - do: build.Build
     - do: test.Test
    last:
     - do: log.ReportBuildStatus
last:
   last:
     - do: notify.Email

Each task has a lifecycle of first, main and last. The first section is always executed and all of the "do"s in it are executed regardless of result. In the main section the "do"s are only executed if everything has gone well so far. The last section is always executed regardless of how things went.

The "do"s are references to Groovy classes with the first mandatory part of the package stripped. So there is a com.something.something.something.setup.Clean class.

A Context object is passed through all the execute methods of the "do"s. By setting context.mock=true the main executing process adds the suffix "Mock" to all "do"s. This allows us to unit test the build pipe in order to assert that all the steps that we expect to happen do happen in the correct order.
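
As an illustration of what such a pipe unit test could look like (the runner class and the way the mocks record themselves are assumptions made for the sketch, not our actual implementation):

class ApplicationPipeTest extends GroovyTestCase {

    void testPipeExecutesTheExpectedStepsInOrder() {
        // With mock=true every "do" is swapped for its *Mock counterpart, and each
        // hypothetical mock simply appends its own name to context.executed.
        def context = [mock: true, executed: []]

        new PipeRunner().run("application-pipe", context)   // hypothetical runner class

        assert context.executed == [
            "setup.CleanMock", "setup.InitMock",
            "build.BuildMock", "test.TestMock",
            "log.ReportBuildStatusMock",
            "notify.EmailMock"
        ]
    }
}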

When a lot of things start happening it's not really practical to have the build task be all that verbose, especially since we have multiple pipes that share the same build task. So we can create a "build.yml" and a "notify.yml" which we can then include like this.

build: 
    ref : build
last: 
   last:    
     - do: notify

So this is how our build pipes look, and we can unit test the pipe, the tasks and each "do" implementation.

Looking at a full pipe example we get something like this.

init:
    ref: init
build:
    parallel:
        build:
            ref: build.deployable
        provision:
            ref: provision.create-test-environment  
deploy:            
    ref: deploy.deploy-engine
test:            
    functional-test:
        ref: test.functional-tests
    load-test: 
        ref: test.load-tests
release: 
    parallel:
        release:
            ref: release.publish-to-nexus
        bake:
            ref: bake.ami-with-packer
last: 
    parallel:
        deprovision:
            ref: provision.destroy-test-environment
        end:
            ref: end

That's it!

This pipe builds, functional tests, load tests and publishes our artifacts, as well as baking images for our AWS environments. All the steps report to our metadata database, ELK, Graphite, S3 and Slack.

And of course we use our build pipes to build our build pipe tooling.

Continuous Delivery of Continuous Delivery through build Pipes as Code. High score on the buzzword bingo!

Monday, July 7, 2014

Continuous Deployment in the Cloud Part 2: The Pipeline Engine in 100 lines of code

As I talked about in my previous post in this series, we need to treat our Continuous Delivery process as a distributed system, and as part of that we need to move the Pipe out of Jenkins and make it a first class citizen of its own. Aside from the fact that a CI Tool is a very bad Continuous Delivery/Deploy orchestrator, I find the potential of executing my pipe from anywhere in the cloud very tempting.

If my pipe is a first class citizen of its own and executable on any plain old node, then I can execute it anywhere in the cloud. That means I can execute it locally on my laptop, on a simple minion in my own cloud or in one of the many managed CI services that have surfaced in the Cloud.

To accomplish this we need five very basic and simple things

  1. a pipeline configuration that defines what tasks to execute for that pipe
  2. a pipeline engine that executes tasks
  3. a library of task implementations
  4. a definition of what pipe to use with my artefact
  5. a way of distributing the pipeline engine to the node where I want to execute my pipeline
Let's have a look.

Define the pipeline

The pipeline is a relatively simple process that executes tasks. For the purpose of this blog series, and for simple, small deliveries, sequential execution of tasks can be sufficient, but at my work we run a lot of parallel sub pipes to improve the throughput of the test parts. So our pipe should be able to handle both.

We also want to be able to run the pipe to a certain stage, like from start to regression test, and then step by step launch in QA and launch in Prod. Obviously, if we want to do continuous deployment we don't need to worry too much about that capability, but I include it just to cover a bit more scope.

Defining pipelines is no rocket science and in most cases a few archetype pipelines will cover like 90% of the pipes needed for a large scale delivery. So I like to define a few flavours of pipes, which gives us the ability to distribute a base set of pipes ranging from CI to CD.

Once we have defined the base set of pipe flavours, each team should configure which pipe they want to handle their deliverables.

I define my pipes something like this.

name: Strawberry
pipe: 
  - do:
     - tasks:
       - name: Build
         type: mock.Dummy
  - do:
     - tasks:
       - name: Deploy A
         type: mock.Dummy
       - name: Test A
         type: mock.Dummy
     - tasks:
       - name: Deploy B
         type: mock.Dummy
       - name: Test B
         type: mock.Dummy
    parallel: true     
  - do:
     - tasks:
       - name: Publish
         type: mock.Dummy
  
A pipe named Strawberry builds our service, then deploys it in two parallel sub pipes where it executes two test suites, and finally publishes the artefacts in our artefact repo. At this stage each task is just executed with a Dummy task implementation.

The pipeline engine

We need a mechanism that understands our yml config and links it to our library of executable tasks.

I use Groovy to build my engine but it can just as easily be built in any language. I've intentionally stripped down some of the logging I do, but this is basically it. In about 80 lines of code we have an engine that loads the tasks defined in a yml, executes them in serial or parallel and has the capability to run all tasks, the tasks up to one point or a single task.

import groovy.util.logging.Log
import java.util.concurrent.atomic.AtomicInteger

@Log
class BalthazarEngine {

    def int start(Map context){
        def status = 0
        def definition = context.get "balthazar-pipe"

        for (def doIt : definition["pipe"]){
            status = executePipe(doIt, context)
        }
        return status
    }

    def int executePipe(Map doIt, Map context){
        def status = 0
        if (doIt.parallel == true){
            status = doItParallel(doIt, context)
        } else {
            status = doItSerial(doIt, context)
        }
        return status
    }

    def int doItSerial(def doIt, def context){
        def status = 0
        for (def tasks : doIt.do.tasks){
            status = executeTasks(tasks, context)
        }
        return status
    }

    def int doItParallel(def doIt, def context){
        //each sub pipe gets its own copy of the context and tasks so the threads
        //do not share mutable state, and any failing sub pipe fails the whole block
        def status = new AtomicInteger(0)
        def threads = []
        for (def tasks : doIt.do.tasks){
            def cloneContext = deepcopy(context)
            def cloneTasks = deepcopy(tasks)
            threads << Thread.start {
                def result = executeTasks(cloneTasks, cloneContext)
                if (result != BalthazarTask.TASK_SUCCESS){
                    status.set(result)
                }
            }
        }
        threads.each { it.join() }
        return status.get()
    }

    def int executeTasks(def tasks, def context){
        def status = 0
        for (def task : tasks){
            //execute if the run-task is not specified or if run-task equals this task
            if (!context["run-task"] || context["run-task"] == task.name){
                log.info "execute ${task.name}"
                context["this.task"] = task
                def impl = loadInstanceForTask task
                status = impl.execute context
            }

            if (status != BalthazarTask.TASK_SUCCESS){
                break
            }

            if (context["run-to"] == task.name){
                log.info "Executed ${context["run-to"]} which is the last task, done executing."
                break
            }
        }
        return status
    }

    def loadInstanceForTask(def task){
        //tasks are resolved by convention, e.g. type mock.Dummy -> balthazar.tasks.mock.Dummy
        def className = "balthazar.tasks.${task.type}"
        def forName = Class.forName className
        return forName.newInstance()
    }

    def deepcopy(orig) {
        //serialize and deserialize to get an independent copy of the context
        def bos = new ByteArrayOutputStream()
        def oos = new ObjectOutputStream(bos)
        oos.writeObject(orig); oos.flush()
        def bin = new ByteArrayInputStream(bos.toByteArray())
        def ois = new ObjectInputStream(bin)
        return ois.readObject()
    }
}

A common question I tend to get is "why not implement it as a lifecycle in Maven or Gradle?". Well, I want a process that can support building in Maven, Gradle or any other tool for any other language. Also, as soon as we use another tool to do our job (be it a build tool, a CI server or whatever) we need to adapt to its definition of how it executes its processes. Maven has its lifecycle stages quite rigidly defined and I find it a pita to redefine them. Jenkins has its pre, build and post stages, where it's a pita to share variables. And so on. But most importantly: use build tools for what they do well and CI tools for what they do well, and none of that is implementing CD pipes.

Task library.

We need tasks for our pipe engine to execute. The interface for a task is simple.

public interface BalthazarTask {
    int TASK_SUCCESS = 0; // success by convention; the Gradle task below returns the process exit value
    int execute(Map<String, Object> context);
}

Then we just implement them. For my purpose I package tasks as "balthazar.tasks.<type>.<task>" and just define the type and task in my yml.

Writing tasks in a custom framework rather than, say, jobs in Jenkins is a joy. You no longer need workarounds for tooling limitations for simple things such as setting variables during execution.

Anything you want to share you just put it on the context.

Here is an example of how two tasks share data.

  - tasks: 
    - name: Initiate Pipe
      type: init.Cerebro
  - tasks: 
    - name: Build
      type: build.Gradle
      command: gradle clean fatJar

I have two tasks. The first task creates a new version of the artefact we are building in my master data repository that I call Cerebro. (More on Cerebro in the next post). Cerebro is the master of all my build things and hence my version numbers come from there. So the init.Cerebro task takes the version from Cerebro and puts it on the context.

import groovy.util.logging.Log

@Log
class Cerebro implements BalthazarTask {
    @Override
    def int execute(Map<String, Object> context){
        def affiliation = context.get("cerebro-affiliation")
        def hero = context.get("cerebro-hero")
        def key = System.getenv("CEREBRO_KEY")
        //Cerebro creates the new version (reincarnation) and we put it on the context for later tasks
        def reincarnation = CerebroClient.createNewHeroReincarnation(affiliation, key, hero)
        context.put("cerebro-reincarnation", reincarnation)
        return TASK_SUCCESS
    }
}


My build.Gradle task takes the version number from Cerebro (called reincarnation) and sends it to the build script. As you can see I can use custom commands, and in this case I do, as fat jars are what I build. By default the task does gradle build. I can also define what log level I want my gradle script to run at.

import groovy.util.logging.Log

@Log
class Gradle implements BalthazarTask {
    @Override
    def int execute(Map<String, Object> context){
        def affiliation = context["cerebro-affiliation"]
        def hero = context["cerebro-hero"]
        def reincarnation = context["cerebro-reincarnation"]
        //fall back to a plain "gradle build" and default log level unless the yml says otherwise
        def command = context["this.task"]["command"] == null ? "gradle build" : context["this.task"]["command"]
        def loglevel = context["this.task"]["loglevel"] == null ? "" : "--${context["this.task"]["loglevel"]}"

        def gradleCommand = """${command} ${loglevel} -Dcerebro-affiliation=${affiliation} -Dcerebro-hero=${hero} -Dcerebro-reincarnation=${reincarnation}"""

        def proc = gradleCommand.execute()
        proc.waitFor()
        return proc.exitValue()
    }
}

This is how hard it is to build tasks (jobs) when it's done with code instead of configuring them in a CI tool. Sure, some tasks like building Amazon AMIs take a bit more code (j/k, they don't). But OK, a launch task that implements a rolling deploy on Amazon using an A/B release pattern does, but I will come back to that specific case.

Configure my repository

So I have a build pipe executor, pre built build pipes and tasks that execute in them. Now I need to configure my repository.

In my experience 90% of your teams will be able to use prefab pipes without you investing too much effort into building tons of prefabs. A few CI pipes, a few simple CD pipes and a few parallelized pipes should cover a lot of the demand, if you are good enough at putting an interface between the deploy tasks and the deploy tools, as well as between the test tasks and the test tools.

So in my repo I have a .balthazar.yml which contains.

balthazar-pipe: Strawberry

Distributing the pipeline engine and the task library

The first thing we need is a balthazar client that starts the engine using the configuration provided inside my repository. A simple Groovy script does the trick.

import groovy.util.logging.Log
import org.yaml.snakeyaml.Yaml

@Log
class BalthazarRunner {
    def int start(Map<String, Object> context){
        Yaml yaml = new Yaml()
        if (!context){
            //no context given, load the pipe configuration from the repo we are standing in
            def projectfile = new File(".balthazar.yml")
            if (projectfile.exists()){
                context = yaml.load projectfile.text
            } else {
                throw new Exception("No .balthazar.yml in project")
            }
        }
        //look up the pipe definition that the repo asked for and hand it to the engine
        def name = context.get "balthazar-pipe"
        def definition = yaml.load this.getClass().getResource("/processes/${name}.yml").text
        BalthazarEngine engine = new BalthazarEngine()

        context["balthazar-pipe"] = definition
        context["run-to"] = System.properties["run-to"]
        context["run-task"] = System.properties["run-task"]
        return engine.start(context)
    }
}
def runner = new BalthazarRunner()
runner.start([:])

Now we need to distribute the client, our engine and our library of tasks to the node where we want to execute the pipeline with our code repository. This can be done in many ways.

We can package balthazar as an installable package and install it using yum or a similar tool. This works quite well on build servers, but it does limit us a bit in where we can run it, as we need "that installer" to be available on the target environment. In many cases it really isn't a problem, because if you're a Debian shop then you have your deb everywhere and if you're a Redhat shop then you have your yum.

I personally opted for another way of distributing the client, partially because I'm lazy and partially because it works in a lot of environments. When I make my .balthazar.yml I also check out the balthazar client project as a git submodule.

So all my projects have a .balthazar.yml and a balthazar-client folder. In the client folder I have a balthazar.sh and a build.gradle file. I use gradle to fetch the latest artefacts from my repo and then the shell script does the java -jar part. Not all that pretty, but it works.
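
A minimal sketch of what that build.gradle could look like; the repository URL and artifact coordinates are assumptions made for the example.

// balthazar-client/build.gradle
repositories {
    maven { url "https://nexus.example.com/content/repositories/releases" }   // hypothetical repo
}

configurations {
    balthazar
}

dependencies {
    balthazar "com.example:balthazar-client:+"   // always resolve the newest published client
}

// balthazar.sh runs this task and then does: java -jar lib/balthazar-client-*.jar
task fetchClient(type: Copy) {
    from configurations.balthazar
    into "lib"
}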

Summary

So now on all my repos I can just do...

>. balthazar-client/balthazar.sh

... and I run the pipe configured in .balthazar.yml on that repo. Since all my tasks integrate with my Build Data Repository I get ONE view of all my pipe executions regardless of where they were executed.

CD made fun! Cheers!

Friday, June 27, 2014

Continuous Deployment in the Cloud Part1: The Distributed Continuous X process

This is the first part of the Blog Series "Continuous Deployment in the Cloud".

When we started doing Continuous Delivery, many of us built the process around a CI Server. Many of us ran into problems building our pipelines with Jenkins or other CI Tools. There are several reasons for these problems; these two blog posts, http://www.cloudsidekick.com/blog/stretch-armstrong.html and http://www.alwaysagileconsulting.com/pipeline-antipattern-deployment-build/, outline them really well.

A CI Server is a bad Continuous Delivery/Deployment Orchestrator

Personally, I'd like to boil the core of the problem down to lack of portability and lack of separation of concerns.

If you model the process in a CI Tool then the process will never ever be portable. Even if you can distribute the process across multiple instances of the CI Tool through different means of generating and publishing the process, you always need an instance of that tool to run the process. This makes development quite hard, since you need a local development instance of that tool.

In most CI Tools the data collected from each job is stored in the CI Tool itself. To make matters even worse, it's often stored in that particular instance of the CI Tool. This means that the only way to access the data gathered by the jobs of the pipe is by navigating that instance of the pipe on that instance of the CI Server. This makes it very hard to distribute the Continuous X process over multiple CI Servers; we can use master/slave setups, but the problem still persists with multiple masters.

Since the data is often tied to the implementation of the pipe it becomes very hard to visualise historical data. Even if the current layout of the pipe has changed, we still want to be able to visualise old pipes of our system.

Another problem that arises is historical data and data retention. As the data is tied to the CI Tool and the visualisation is tied to the CI Tool, we need to manage the disk space on the CI Tools. We don't want to mix runtime and historical data in the CI Tool.


Separating Process Implementation and Process Data Storage

So the first thing we have to deal with in order to distribute our Continuous X process is to move the data out of the process implementation.

In fact the first problem we encounter when distributing a Continuous X process is the Version Number.
What do we use as a version number and where do we get it?

  • Using the CI Server Build Number is extremely bad, as you can't even reset your Build Job without encountering problems. 
  • Using a property checked in to your source code repository, such as the version number in the Maven pom or similar, is almost worse. You will have a gap in time between the repo fetch, the update of the version and the commit back to the repository. If other jobs start in this time frame then they will get the same version number.

So the answer is that we get it from a central build data repository: a single database that keeps track of all our deliverables, their versions and their state. By delegating the responsibility for the version number to the Build Data Repository we ensure that the Version Number is created through an atomic update.
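
On the repository side that atomic update can be as simple as a database sequence; here is a minimal sketch with Groovy SQL, where the database choice, connection details and sequence name are all assumptions for illustration.

import groovy.sql.Sql

// One sequence per component makes handing out version numbers an atomic operation:
// two pipes asking at the same time can never receive the same number.
def db = Sql.newInstance("jdbc:postgresql://builddata.example.com/builds",
        "builds", "secret", "org.postgresql.Driver")
def version = db.firstRow("SELECT nextval('my_service_version_seq') AS version").version
println "This execution builds version ${version}"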

The first thing our Process Implementation does is get its version number from the Build Data Repository. Then everything we do during the Process Implementation is reported to the Build Data Repository. A sketch of what such a report could look like follows the list below.

So for example we report

  1. Init Pipe - Get Version Number
  2. Build Start - Environment, TimeStamp
  3. Unit Test Start - Environment, TimeStamp
  4. Unit Test Done - Environment, TimeStamp, Report
  5. Build Done - Environment, TimeStamp, Report
  6. Deploy to Test Start - Environment, TimeStamp
  7. Deploy to Test Done - Environment, TimeStamp, Report
  8. Test Start - Environment, TimeStamp
  9. Test Done - Environment, TimeStamp, Report
  10. Promote - Environment, TimeStamp, Promoted to PASSED_TEST
  11. Deployment Production Start - Environment, TimeStamp
  12. Deployment Production Done - Environment, TimeStamp, Report
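
A hedged sketch of what such an event report could look like from the Process Implementation's point of view, assuming the Build Data Repository exposes a simple REST API; the URL and JSON fields are illustrative only.

import groovy.json.JsonOutput

def repo = "http://builddata.example.com"        // hypothetical repository endpoint
def version = 42                                 // handed out by the Init Pipe step

// Every step reports its event against that version, wherever the pipe runs.
def event = new URL("${repo}/components/my-service/versions/${version}/events").openConnection()
event.requestMethod = "POST"
event.doOutput = true
event.outputStream << JsonOutput.toJson([
        step       : "Unit Test Done",
        environment: "laptop-tomas",
        timestamp  : System.currentTimeMillis(),
        report     : "128 tests, 0 failures"
])
assert event.responseCode < 300
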
Now we have a Continuous X process implementation that gets the version number from the same place regardless of where it's executed and reports all the data into one repository regardless of where it's executed. This means that the same process implementation can be executed on any CI Server instance as well as on any developer machine. This enables us to implement a portable and hence distributable Continuous X process.

There are several reasons we might want to run the process locally on our Dev Environment. We need the capability to develop the process implementation and we need to be able to do it without a CI Server. It also gives us a good mindset when making a distributed implementation: if it can run locally then it can run on any build environment. In some cases we may not even need a full-fledged build environment to run our Continuous X Process. Applications with few developers, few changes and a short pipeline execution time don't really need a remote build environment.

Another important reason is that we need to be able to push our code to our customers even if our Continuous X Service is down. Regardless of how high the SLA of the service (internal or cloud based) is, we will eventually run into situations where we need to execute it and it's down.

Though it's important to note that a pipeline should never allow execution of changes that aren't pushed to the SCM. Every execution has to be traceable to an SCM revision.



Process Visualization

One of the main problems with CI Tools such as Jenkins is the visualization. It's just not built for the purpose of Continuous Delivery/Deploy and, as I've mentioned above, it's based on local instances of job pipes. This makes it impossible to visualize a distributed process in a good way, and it requires the user to have an understanding of the CI Tool. A Continuous Delivery/Deploy process needs to be visualised so that non-technical employees can view it.

Good thing we have a Build Data Repository then. Based on the information in the Build Data Repository we can visualize our pipe, its status and its result no matter where it was executed. We can also visualise the pipe based on how it looked when it was executed, not how it's implemented now.


Logging, Monitoring and Metrics vs the Build Data Repository

What do we store in the Build Data Repository? Do we store everything in there, like logs from the test executions and metrics?

No. The Build Data Repository stores the reports from all events (builds, deployments, tests, etc.) but the actual runtime logging should go where it belongs: log files into a log repository such as Logstash, metrics into a metrics repository such as Graphite.

The Process Visualization could/should aggregate the data from the Build Data, Log and Metric repositories to provide a good view of our process and system.


Provisioning Environments and Executing Tests

It's really important that our Continuous X Process Implementation initiates the full provisioning of the test/prod environment. By ensuring that it does so, we ensure that all environmental changes go through our pipe.

What do I define as the Environment? Everything: load balancers, network rules, middleware nodes, caches, databases, scaling rules, etc.

This concept is a great extension of the Immutable Server pattern. By releasing images instead of artefacts out of our build step (actually build+bake) we enable the creation of Immutable Servers through the entire build pipe.

The process builds a new test environment for each black-box deployed test execution (basically anything beyond unit tests) and then destroys it again once the test is finished. When we deploy into our next environment, call it QA/PreProd or whatever, we do it by creating new middleware servers in that environment based on the same image and virtualisation spec as we had in our Test Environment. Once they are up and running we rotate them into our cluster/load balancer.

The big difference between our Test, QA and Prod environment deploys is that the first one is a create-and-destroy scenario while the others are update scenarios, as we cannot build new full environments for production. We cannot build new databases, bring up new load balancers, etc. for each production deploy. So preferably we separate the handling of mutable and immutable infrastructure within one environment, so that we can create and destroy our Immutable Servers/Clusters and mutate our databases.

So basically, a topology of a Distributed Continuous X implementation, at a very low level of detail, looks something like this.


This gives us the fundamental understanding of what we need to do in order to build a scalable, distributed system that handles the Continuous X processes in our company.

In my next post in this series I will go into an example of how to build a portable Continuous X Process Implementation that is independent of any CI and Build Tools.


Monday, June 2, 2014

Blog Series: Continuous Deployment in the Cloud

For my next few posts I am going to focus on writing a series of articles on how to do Continuous Delivery & Deployment in a cloud environment. I've always been a bit cautious when it comes to tutorial-style blogs, talks and articles. I usually find them to be too shallow, and they never reveal the true issues that need to be solved. This often leads to bad, premature and uninformed decisions made by the consumer of the tutorial.

So instead I am going to try to provide a much richer series of articles that focus on how to Architect, Test, Deploy and Deliver in a Cloud environment.

In my conference talks I often talk about the importance of having a build data repository that is separate from the build engine (CI Tool). More than once I have been asked if I can open source our tool. "Well, I'm not sure" has been my answer. So instead I've decided to implement a new, similar tool and open source it. I'm going to over-architect it a bit on purpose, as it's going to be the main example in the article series. The Process Implementation in this series is going to use that tool as its build data repo.

This is how the Process Implementation will look. I will go deeper into this picture as I move forward, but here are a few quick words about it.

The key to a scalable CD process is for it to be independent of its runtime environment. The CD process drawn here can be executed from a Build Environment and/or a Dev Environment. The dev can push from his or her own environment right into production, or let the build environment do it for him/her. Regardless of where the process is initiated, it will be executed in the same way and it will integrate with the build data repository, to which it reports any events that happen for that build and from which it also gets its version number.

Will I talk about tooling this time? Yes I will. This series will be based on Git as SCM, AWS as Test and Prod Runtime Environments, Travis CI as Build Environment and Gradle as Build Tool. I still have not decided upon a test tool; most likely it will be REST Assured.

This series will take some time to write and will mostly be done during the later part of this summer and the fall. If you are interested in this article series and/or the build data repository then please +1 this article to show me that there is interest.

Thanks

Wednesday, May 14, 2014

Tomorrow is the premiere of "Scaling Continuous Delivery" at GeeCon 2014

Tomorrow, on the 15th of May, I've got a talk at GeeCon in Krakow. It's the first outing of my new talk Scaling Continuous Delivery. The talk is an experience report on all the struggles we have had scaling our continuous delivery rollout. Hopefully the talk will provide insight into what we have done and the steps we have taken while scaling. Sometimes it's not just the end goal that is interesting but also the journey.

Hopefully it will be appreciated.

Here are the slides for the talk: http://www.slideshare.net/TomasRiha/scaling-continuous-delivery-geecon-2014



Tuesday, April 1, 2014

Upcoming talks

I've gotten the honor to speak at two fantastic conferences this spring.

The first one is PipelineConf on the 8th of April in London, where I will talk about the people side of Continuous Delivery. This is the talk I've given at the Netlight EDGE and JDays conferences, though it has been updated with the experiences from the last 6-8 months of working with Continuous Delivery.

The second one is GeeCon, 14th-16th of May in Krakow, Poland, where I will be speaking about Scaling Continuous Delivery. This is a new talk that focuses on lessons learned from our journey to scale continuous delivery from a team of 5 to an organization of hundreds.

If you are interested in hearing me speak at a conference, seminar or workshop, don't hesitate to contact me.

Monday, March 17, 2014

Portability

I've talked about portability of the CD process before, but it continuously becomes more and more evident to us how important it is. The closer the CD process comes to the developer, the higher the understanding of the process. Our increase in portability has gone through stages.

Initially we deployed locally in a way that was totally different from the way we deployed in the continuous delivery process. Our desktop development environments were not part of our CD process at all. Our deploy scripts handle stopping and starting servers, moving artifacts on the server, linking directories and running Liquibase to upgrade/migrate the database. We did all of this manually in the local environments. We ran Liquibase, but we ran it using the Maven plugin (which we don't do in our deploy scripts; there we run it using java -jar). We moved artifacts by hand or with other scripts.

Then we created a local bootstrap script which executed the CD process deploy scripts on a local environment. We built environment-specific support into the local bootstrap so that we supported Linux and Windows. Though in order to start JBoss and Mule we needed to add support for the local environment in the CD process deploy scripts as well. We moved closer to portability, but we diluted our code and increased our complexity. Still, this was an improvement, though the process was still not truly portable.

Recently we decided to shift our packaging of artifacts from zip files to rpms. All our prod and test environments are Redhat, so the dependency on technology is not really an issue for us here. What this gives us is the ability to manage dependencies between artifacts and infrastructure in a nice way. The war file depends on a JBoss version, which depends on a Java version, and all are installed when needed. This also finally gives us a clear separation between install and deploy. The yum installer installs files on the server; our deploy application brings the runtime online, configures it and moves the artifacts into the runtime.

In order for us to maintain portability to the development environment, this finally forced us to go all in and make the decision "development is done in a Linux environment". We won't be moving to Linux clients, but our local deploy target will be a virtual Linux box. This finally puts everything into place for us to create a fully portable model. It's important to understand that we still don't have a cloud environment in our company.


This image, created by my colleague Mikael, is a great visualization of the portability we can build in our environment now and when we get a cloud. By defining a Portability level and its interface we manage to build a mini cloud on each Jenkins slave and on a local dev machine, using the exact same process as we would for a QA or test deploy. The nodes above the Portability level can be local on the workstation/Jenkins slave or remote in a Prod Environment. The process is the same regardless of environment: Provision, Install and Deploy.


Friday, February 21, 2014

Scaling Continuous Delivery

It's been a while since I posted. The main reason is that we have been very focused on our main deliveries and feature development for the last six months. Whenever the feature train hits central station it's always work such as build, release and test automation that gets hit first.

Though there are upsides to not touching your Continuous Delivery process for a few months. If you just keep working on your backlog you don't get time to analyze the impact of the changes you just made. Several times we have realized that the number two and three items in the backlog have dropped significantly in priority once we fixed the most important issue, while others have risen fast in priority.

Now we have had time to analyse a lot of new issues and it's time for us to pick up the pace again.

Scaling the Organization 

The good thing, the awesome thing(!), is that during these six or so months our organization has changed and we have actually been able to create a line organization that owns and takes responsibility for the continuous delivery process.

One of the major bottlenecks we found in our process was our platform/tools team. The team was small and resources in that team were always the first to go when feature pressure increased. The team became just another "IT function" that didn't have time to be proactive due to all the reactive support work it had to do.

There were a few reasons behind this. First, it was the way the team had worked in the past: it actually built the pipes and processes for all the teams by hand, tailored to the custom needs of each team. On some teams there were individuals who picked up the work and kept configuring the Jenkins jobs to tailor them even more, but on some teams there was no interest whatsoever and their jobs degraded.

The result of this was that no one really knew how the pipes looked and how they should look. Introducing process change was a horribly slow process as it was all manual and dependent on the platform/tools team.

One of the first changes we made was to increase the bandwidth of the team and reduce the dependency on it. A great solution for this came up over a chat this summer: instead of the platform/tools team supporting the development teams, the development teams put resources into the platform/tools team. Each team was invited to add a 50% resource on a voluntary basis. This way the real-life issues got much better attention in the platform/tools team and competence about the Continuous Delivery process spread in a much more organic way.

This did not eliminate the bottleneck, but it gave us bandwidth to change the way we work and, in the long term, the ability to scale with the number of teams that use the process.

Scaling the Process

The main reason why we were a bottleneck was the way we worked. We preached Automate Everything, Test Everything, If it's hard do it more often, etc., but when it came to the Continuous Delivery process we didn't do what we were teaching.

We had ONE Jenkins Environment, so all the changes happened directly in production. Testing plugins and new configurations on a production environment isn't really the way to deliver stability, reliability and performance.

Manually created Jenkins pipes aren't really a way to create sustainable pace and continuous improvement.

Developing deploy scripts without explicit unit tests isn't really a good way of creating a stable process. We have been priding ourselves on our deployment being tested hundreds of times before the production deploy, which was true but very dumb. Implicit testing means that someone else takes the pain for my mistakes. Deployment scripts are applications and need to be treated as first class citizens.

This had to change.

The first thing we did was to use the extra bandwidth we had obtained to build a totally new way of delivering continuous delivery. Automate everything, obvious, huh?

We also decided to deliver one continuous delivery environment per development team and not have them all in one environment. So we started with automating the provisioning of Jenkins & Test environments. We don't have a cloud solution in our company at this time, so we work with a fake cloud, which is a huge pool of virtual servers. This pool we provision and maintain using Chef.

The second thing was to automate the build pipe setup. We built a simple little pipe generator which has pipe templates of 5-8 different layouts to support the different needs. We actually managed to get the development teams to adjust to a stricter Maven project naming convention in order to use the generated pipes, as everyone saw the benefits of this.

The pipes we have are basically typed by what they build, libs or deployable components, and by how they are tested, as we still need to initiate our Fitnesse tests a bit differently from our other tests.

We made it the responsibility of the platform/tools team to develop the pipe templates and the responsibility of the development teams to configure their generator to generate the pipes they needed for their components.

Getting to this stage was a lot of work and a lot of migration work for all the teams, but the results have been terrific. The support load on the platform/tools team has gone down a lot and each bug fix is rolled out to all the pipes within minutes.

We have also been able to take on new development teams very easily. Not all teams in our company are ready to do Continuous Delivery, but they are all heading in this direction and we can now provide environments and pipelines that match their maturity.

Summary

We have gone from a process developed as skunkworks to Continuous Delivery as a Service within our organization. We always run into new bottlenecks and challenges; this time the bottleneck was much more us than anything else. I assume that the next big bottleneck is going to be hardware and our inability to deliver a cloud solution, since we can now roll out to more and more teams. But who knows, I could be wrong; only time will tell.





Tuesday, June 18, 2013

Its about the people.

Last week I attended QCon New York. A fantastic conference as usual, and it was comforting to see that basically everyone was saying the same thing: "Continuous Delivery is not about the technology, it's about the people". Which also happens to be the title of my talk at Netlight's EDGE conference in September.

In his talk, Steve Smith (@agilestevesmith) talked about how 5% is technology and 95% is organization. While I agree with that, I think that the non-technical 95% can be divided into organization, change of role definitions and individual maturity. It's these three that my talk will cover.

Hopefully I will be able to give this talk in Gothenburg as well, as it has been submitted to JDays.

Monday, April 8, 2013

Talk at HiQ 24th of April

Continuous Delivery - Enabling Agile.

The key to agile development is a fast feedback loop. Continuous Delivery strives towards always having tested releases in a deliverable state. Continuous Delivery is not just a technical process but a change to the entire organization and the individuals within it. This presentation describes the principles of Continuous Delivery, gives a brief overview of how it can be implemented, and covers how it changes the organization and how it impacts the individuals.

The target audience for this presentation is Developers, Architects, Testers, Scrum Masters, Project Managers and Product Owners, in no particular order. The presentation is not rich in technical detail and is based on real-life experiences.

Please use this post to provide questions and feedback.

Welcome

Sunday, February 24, 2013

So it took a year.

When we first started building our continuous delivery pipe I had no idea that the biggest challenges would be non-technical. Well, I did expect that we would run into a lot of dev vs ops related issues and that the rest would be just technical issues. I was so naive.

We seriously underestimated how continuous delivery changes the everyday work of each individual involved in the delivery of a software service. It affects everyone: Developers, Testers, PMs, CMs, DBAs and Operations professionals. Really it shouldn't be a big shocker, since it changes the process of how we deliver software. So yes, everyone gets affected.

The transition for our developers took about a year. Just over a year ago we scaled up our development and added, give or take, 15-20 developers. All these developers have been of very high quality and very responsible individuals. Though none of them had worked in a continuous delivery process before and all were more or less new to our business domain.

When introducing them, everyone got the rundown of the continuous delivery process: how it works, why we have it and that they need to make sure to check in quality code. So off you go, write code, check in tested stuff, and if something still breaks you fix it. How hard can it be?

Much, much harder than we thought. As I said, all our developers are very responsible individuals. Still, it was a change for them. What once was considered responsible, like "if it compiles and unit tests pass, check it in so that it doesn't get lost", now leads to broken builds. Doing this before leaving early on a Friday becomes a huge issue because others have to fix the build pipe. And the same goes for a lot of things, like having to ensure that database scripts work all the time, that everything in the database is versioned, that rollbacks work, etc. So everyone has had to step up their game a notch or two.

Continuous delivery really forces the developer to test much more before he/she checks in the code. Even for the developers who like to work test-driven with their JUnit tests this is a step up. For many it's a change of behavior. Changing a behavior that has become second nature doesn't happen overnight.

We had a few highly responsible developers who took on this change seamlessly. These individuals had to carry a huge load during this first year. When responsibility was dropped by one individual, it was they who always ensured that the pipe was green. This has been the biggest source of frustration. I get angry, frustrated and mad when the lack of responsibility of one individual affects another individual. They get angry and frustrated as well, because they don't want to leave the pipe in a bad state, and their responsibility prevents them from going home to their families. I'm so happy that we didn't lose any of these individuals during this period.

Now, after about a year, things have actually changed: everyone takes much more responsibility and fixing the build pipe is much more of a shared effort. Which is so nice. But why did it take such a long time? I'd really like to figure out if this transition could have been made smoother and faster.

Key reasons why it took so much time.

A change to behavior.
Developers need to test much more, not just now and then but all the time. No matter how much you talk about "test before check in", "test", "test", "test", the day the feature pressure increases a developer will fall back on second-nature behavior and check in what he/she believes is done. We can talk lean, kanban, queues, push and pull all we want, but the fact is there will always be situations of stress. Not until a behavior change has become second nature do we do it under pressure.

Immature process.
Visibility, portability and scalability issues have made it hard to take responsibility. Knowing when, where and how to take responsibility is super important. Realizing that lack of responsibility is tied to these things took us quite some time. If it's hard to debug a test case, it's going to take a lot of time to figure out why things are failing and it's going to require more senior developers to figure it out. It's also hard to be proactive with testing if the portability between the development environment and the test environment is bad.

Lot of new things at once
When you tell a developer about a new system, a new domain and a new process, I'm quite sure the developer will always listen more to the system- and domain-specific talks.
The developer's head is full of "this system communicates with that system and it's that type of interface". Then I start going on about "Jira, bla bla bla, test bla, check-in bla bla, Jenkins bla, deploy bla, Fitnesse, test bla, bla" and the developer goes "Yeah yeah yeah, I'll check in and it gets tested, I hear you, sweet!".

I definitely think it's much easier for a developer to make the transition if the process is more mature, has optimized feedback loops, scales and is portable. Honestly, I think that could easily take 3-6 months off the learning curve. But it's still going to take time in the range of months if we don't become better at understanding behavioral change.

Today we go straight from intro session (slides or whiteboard) to live scenario in one step: here is the info, now go and use it. At least we are becoming better at mentoring, so there is help available to talk you through the process, and a new developer is usually not working alone, which they were a year ago. Still, I don't think it's enough.

Continuous Delivery Training Dojos

I think we really need to start having training dojos where we learn the process from start to finish. I also think this is extremely important when transitioning to acceptance test driven development, if only to get a feeling for the process: what is tested where and how, what happens when I change this or that, how I should test things before committing, and what should be done in which order.

I think that if we practiced this and worked on how to break and unbreak the process in a non-live scenario, the transition would go much faster. In fact, I don't think these dojos should be just for training new team members; they would also be an extremely effective way of sharing information and the consequences of process changes over time.


Sunday, February 3, 2013

Working the trunk

When my colleague Tomas brought up the idea of continuous delivery, the first thing that really caught my attention was "we do all work on the trunk". I've always hated branches. I've worked with many different branching strategies and honestly they have all felt wrong.

My main issue has always been that, regardless of branching strategy (release branches or feature branches), there is a lot of double testing, and debugging after a merge is always horrible. It's also hard to have a clear view of a "working system": what is the system, which branch do you refer to? Always having a clean and tested version of the trunk felt very compelling. No double work and a clear notion of "the system"! I'm game!

So we test everything all the time. How hard can it be?

Well, it has proven to be a lot harder than we thought, not the continuous testing but managing everyone's desire to branch. Somehow people just love branches. Developers want their feature branches where they can work in their sandbox. Managers want their branches so that they get nothing but that one explicit bug fix in their delivery and don't risk impact from anything else.

These are two different core problems: one is about taking responsibility and one is about trust.

Managers don't trust "Jenkins".

Managers don't trust developers, but somehow they do trust testers. It's interesting how much more credit a QA manager gets when he/she says "I've tested everything" than a blue light on Jenkins does. In fact, managers have MORE confidence in a manual regression test that has executed "most of the test cases on the current build" than in an automated process which executes "all the test cases on every build". I think the reasons are twofold: one is that the process is "something that the devs cooked up", and the other is that Jenkins can't look a manager in the eyes. It would be much easier if Jenkins were actually a person who had formal responsibility in the organisation and could be blamed, shouted at and fired if things went wrong.

It takes a lot of hard work to sell "everything we have promised works as we have promised it". For each new build that we push into user acceptance testing we have to fight the desire to branch the previous release. Each time we go through the same discussion:

"I just want my bug fix"
"you get the newest version"
"I don't want the other changes"
"Everything we have promised works as we have promised it"
"How can you guarantee that"
"We run the tests on each check in"
"Doesn't matter I don't want the other changes they can break something, I want you to branch"

I don't know how many times we have had this argument. Interestingly, we have yet to break something in a production deploy as a result of releasing a bug fix from the trunk (and hence including half-done features). We have, however, had a failed deploy due to giving in to the urge to branch. We made a bad call and caved to the branch pressure: we branched, but we didn't build a full pipe for the branch, which resulted in us not picking up an incompatible configuration change.
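As a hypothetical illustration of what a full pipe would have caught, even a check as simple as the one below (the property file and key names are made up) fails the build the moment a configuration key that deployed services still depend on is renamed or removed.

```java
// A hypothetical example of the kind of check a full build pipe runs on every
// build and that our branch pipe skipped. The property names are illustrative.
import static org.junit.Assert.assertNotNull;

import java.io.InputStream;
import java.util.Properties;
import org.junit.Test;

public class ConfigurationCompatibilityTest {

    @Test
    public void requiredPropertiesArePresent() throws Exception {
        Properties config = new Properties();
        try (InputStream in = getClass().getResourceAsStream("/application.properties")) {
            assertNotNull("application.properties is missing from the build", in);
            config.load(in);
        }
        // Keys the deployed services still read; removing or renaming one of
        // these is exactly the kind of incompatible change we failed to catch.
        for (String key : new String[] {"payment.service.url", "db.connection.pool.size"}) {
            assertNotNull("Missing required configuration key: " + key, config.getProperty(key));
        }
    }
}
```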

Developers love sandboxes

It's interesting: developers push for more releases and smaller work packages, yet they love their feature branches. I despise feature branches even more than release branches. The reason is that they make it very hard to refactor an application, and the merging process is very error prone. The design and implementation of a feature is based on a state of the "working system" which can be totally different from the system it's merged onto. It also defeats the intention of doing smaller work packages and testing them often; a merge is always bigger than a small commit.

The desire to feature branch comes from "all the repo updates we have to do all the time slow us down so much" and "we can't just check in stuff that doesn't work without breaking things". The latter isn't just developers wanting to be irresponsible; it's also a result of us running SVN and not Git. Developers do want to share code in a simple way, and small changes that two team mates want to share without touching the trunk is a legitimate concern. So the ability to micro branch would be nice. So yes, I do recommend Git if you can, but it's not a viable option for us. Though I'm quite sure that if we were using Git we would end up having problems with micro branches turning into stealth feature branches.

I think the complaint "all the repo updates we have to do all the time slow us down so much" is a much more interesting one. In general I think developers need to adopt more continuous integration patterns in their daily work, but this is actually a scalability issue. If you have too many developers working in the same part of the repo you are going to get problems. When developers do adopt good continuous integration patterns in their daily work and their productivity still drops, then there is an issue. This is one of the reasons why we have seen feature branches in the past.

Distribute development across the repository

When we started building our delivery platform we based it on an industry-standard pattern that clearly defines component responsibility. Early in the life cycle we saw no issues with developers contesting the same repository space, as we had just one or two working on each component. But as we scaled we started to see more and more of this. We also saw that some of the components were too widely defined in their responsibility, which made them bloated and hard to understand. So we decided to refactor the main culprit into several smaller, more well-defined components.

The result was very good. The less bloated a component is, the easier it is to understand and to test, which leads to increased stability. By creating sub-components we also spread the developers out across the repository. So we actually created stable, contextual sandboxes that are easy to understand and manage.

Obviously we shouldn't create components just to spread out our developers, but I think that if developers start stepping on each other it's a symptom of either bad architecture or overstaffing. If a component needs so many developers that they are in each other's way, then chances are quite good that the component does way too much, or that management is trying to meet feature demand by just pouring in more developers.

Backwards compatibility

Another key to working on the trunk has been our interface versioning strategy. Since we mostly provide REST services, we were actually forced into branching once or twice where we had no other option, and that was due to not being backwards compatible on our interfaces. We couldn't take the trunk into production because the changes were not backwards compatible and our tests had been changed to reflect the new reality. This is what led to our new interface strategy where we, among other things, never ever change interfaces or payloads; we just add new ones and deprecate old ones.

Everything that interfaces with the outside world needs to be kept backwards compatible, or program management and timing issues will inevitably force branching.
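To illustrate the strategy, here is a minimal sketch using JAX-RS; the resource, paths and payload fields are hypothetical and not our real API. The existing version keeps its path and payload untouched and is only marked deprecated, while new capability goes into a new version that merely adds fields.

```java
// A minimal sketch of "never change interfaces or payloads, just add new ones
// and deprecate old ones". Resource, paths and fields are made up.
import javax.ws.rs.GET;
import javax.ws.rs.Path;
import javax.ws.rs.PathParam;
import javax.ws.rs.Produces;

@Path("/orders")
@Produces("application/json")
public class OrderResource {

    // v1 stays exactly as it was shipped: same path, same payload.
    // Consumers built against it keep working with every trunk release.
    @Deprecated
    @GET
    @Path("/v1/{id}")
    public OrderV1 getOrderV1(@PathParam("id") String id) {
        return new OrderV1(id, "PAID");
    }

    // New capability goes into a new version instead of changing v1.
    @GET
    @Path("/v2/{id}")
    public OrderV2 getOrderV2(@PathParam("id") String id) {
        return new OrderV2(id, "PAID", "2013-02-03");
    }

    public static class OrderV1 {
        public final String id;
        public final String status;
        public OrderV1(String id, String status) {
            this.id = id;
            this.status = status;
        }
    }

    // The v2 payload only adds a field; nothing existing is renamed or removed.
    public static class OrderV2 {
        public final String id;
        public final String status;
        public final String lastUpdated;
        public OrderV2(String id, String status, String lastUpdated) {
            this.id = id;
            this.status = status;
            this.lastUpdated = lastUpdated;
        }
    }
}
```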

Not what I expected

When we first decided to work solely on the trunk I thought it was going to be all about testing. Testing is important, but people management has been a bigger investment (at least measured in mental energy drain), and the importance of good architecture was underrated.