Sunday, October 2, 2016

Service Simulation rather than Integration Testing of Micro Services

Micro Services or Distributed Monolith

One of the main characteristics of a Micro Service architecture is that each Micro Service has its own lifecycle. This is very important as this is the only way to break the monolith pattern. If the lifecycle of a Micro Service is tied to another entity such as Service, System or Solution then we have a distributed monolith and not a Micro Service architecture.

I've found that almost everyone agrees in theory, but in practice everyone still wants to do integration testing before release. Integration Testing is so deep in our DNA; we have been doing it for years and don't really know any other way.

Let's explore the problem of Integration Testing a little bit. Say we have Micro Service X and it is consumed by Micro Services Y and Z. Then there is no denying there is a dependency between X, Y and Z. Part of a Micro Service architecture is API versioning and backward compatibility. But even with these practices and good test coverage on component X, it's hard to deny that the integration between X, Y and Z can still fail. But if we rely on Integration Testing to find these failures then we create a dependency between the teams developing Y and Z and the team developing X. That means X no longer has a lifecycle of its own.

Hence if we do rely on Integration Testing we have a distributed monolith, which imho is the only thing worse than a monolith.

Consumer Contract Testing
Martin Fowler talks about Contract Testing and Consumer-Driven Contracts, which is a great pattern. In short it means that the developers of Micro Services Y and Z provide contract tests for Micro Service X. The tests provided by the developers of Y and Z correspond to the parts of the X API they consume. These tests are executed as part of the test suite belonging to service X. This way the developers of X know if they broke backwards compatibility.
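As a sketch of what such a test could look like (in Groovy, with a made-up endpoint and fields; the real thing would use whatever test framework the Y team prefers):

import groovy.json.JsonSlurper

// Consumer contract test owned by the team behind Y, executed in X's suite.
// The /orders endpoint, its fields and SERVICE_X_URL are all hypothetical.
class OrderContractFromY {
    static void main(String[] args) {
        def baseUrl = System.getenv('SERVICE_X_URL') ?: 'http://localhost:8080'
        def order = new JsonSlurper().parseText(new URL("${baseUrl}/orders/42").text)
        // Y only reads these fields, so this is all the contract needs to assert
        assert order.orderId != null
        assert order.status in ['OPEN', 'SHIPPED', 'CLOSED']
    }
}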

I think this is a great pattern and one that helps developers maintain the right amount of backwards compatibility with a fast feedback mechanism. Still, I am not sure it is enough. At least in our organization everyone still wants to Integration Test.

There are still a lot of integration failures that can occur outside of the component tests of X. In production-like environments there can be deployments in different networks, there is service discovery, and other factors come into play.

Service Simulation

One solution is to introduce Service Simulation. Instead of implementing test automation we can implement simulator bots. These bots continuously exercise our services. This creates a constant and even load on our solution, driven by our services. The monitoring of our services is used to notify the developers if something failed. No alarms for a given time period after a deploy means our solution still works, the redeployment of our Micro Service didn't break anything, and we can go on to deploy into the next environment.

It's also relatively easy to implement this as part of the continuous delivery build pipe. Deploy into an environment, check the alarms in that environment for five minutes, and if all green then the Micro Service is verified and the Continuous Delivery implementation continues.
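A minimal sketch of that verify step, assuming the AWS SDK for Java v1 and CloudWatch alarms (scoping the alarms to the deployed service is left out):

import com.amazonaws.services.cloudwatch.AmazonCloudWatchClientBuilder
import com.amazonaws.services.cloudwatch.model.DescribeAlarmsRequest

// Poll for firing alarms for five minutes after the deploy and
// fail the pipe if anything goes into the ALARM state.
def cw = AmazonCloudWatchClientBuilder.defaultClient()
def deadline = System.currentTimeMillis() + 5 * 60 * 1000
while (System.currentTimeMillis() < deadline) {
    def firing = cw.describeAlarms(new DescribeAlarmsRequest().withStateValue('ALARM')).metricAlarms
    if (firing) {
        throw new IllegalStateException("Deploy verification failed: ${firing*.alarmName}")
    }
    sleep 30_000
}
println 'No alarms for five minutes, promoting to the next environment'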

Though it's not necessary to have an advanced Continuous Delivery engine for this pattern to work; just alarm notifications to team channels on Slack are powerful by themselves.

Another important thing this solves is that the developers and testers start using production-grade monitoring to verify services. I think it's a very big obstacle in a DevOps transformation that Developers and Testers view Test Reports to understand a system while these aren't available in production. Developers and Testers are usually lost when it comes to production systems. This moves their understanding towards runtime operations of a system, which bridges the gap between Dev and Ops in a nice way.

Bots can be deployed into any environment, providing the same deploy-and-verify process in all environments. Personally I like the idea of having a bot-workload-only environment as the first full-featured environment. The now obsolete Integration Testing environment could easily be converted into a Bot-only environment.

While a Bot can basically be implemented in a number of ways, I like the idea of using Function as a Service, such as AWS Lambda, to implement Service Simulation Bots. The scheduled nature and high specialization of a Bot make it a perfect FaaS workload.
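A minimal sketch of such a bot as a JVM Lambda handler, assuming the AWS SDK v1; the endpoint, namespace and metric names are made up:

import com.amazonaws.services.cloudwatch.AmazonCloudWatchClientBuilder
import com.amazonaws.services.cloudwatch.model.MetricDatum
import com.amazonaws.services.cloudwatch.model.PutMetricDataRequest

// Scheduled Lambda handler that exercises one service endpoint and reports
// the outcome as CloudWatch metrics, which the team's alarms watch.
class OrderServiceBot {
    def handle(Object event) {
        def start = System.currentTimeMillis()
        def ok = false
        try {
            def conn = new URL('https://orders.internal.example/health').openConnection()
            conn.connectTimeout = 2000
            ok = (conn.responseCode == 200)
        } catch (ignored) {
            // a failed call is simply reported as a 0 below
        }
        AmazonCloudWatchClientBuilder.defaultClient().putMetricData(new PutMetricDataRequest()
            .withNamespace('SimulationBots')
            .withMetricData(
                new MetricDatum().withMetricName('OrderServiceSuccess')
                    .withValue(ok ? 1.0d : 0.0d),
                new MetricDatum().withMetricName('OrderServiceLatencyMs')
                    .withValue((System.currentTimeMillis() - start) as double)))
    }
}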

Test Pyramid

  • Simulation - Bots executing in all environments; monitoring and alarms verify the application.
  • Component & Contract Tests - Deployed functional tests using mocks to stub consumed services, plus contract tests provided by the developers consuming this service.
  • Unit Tests - Well, nothing new there.

I think this is what we need in order to release our micro services with high confidence and with lifecycles of their own.

Thursday, January 7, 2016

Optimizing Runtime Cost through Continuous Delivery

We have always had the need to understand our application runtime costs. In legacy enterprises this has often been handled at a finance level, and someone in the Runtime Operations department has been responsible for that cost. There have been different ways of distributing the costs to the right organization: either each application has had its own infrastructure and owned the full cost of it, or applications have shared infrastructure in order to save money, utilize large servers in a better way, or just minimize the setup and maintenance work of the servers.

In many legacy enterprises there has been a huge disconnect between the application evolution and the infrastructure evolution. Application development has changed towards micro services, with a lot of small applications with lifecycles of their own. The legacy enterprise's runtime operations are largely unable to deliver on these changes due to processes, licenses and hardware leases of huge "enterprise scale servers" for the old monoliths. Requests for several servers per micro service have been waved off as unrealistic.

Several ways around this have been invented in order to co-host multiple applications on the same servers in a somewhat isolated way. Docker draws so much attention partially because legacy enterprises can still keep their big old servers and isolate the micro services on them. (Of course, that is not the only good thing about Docker.)

Regardless of how this was solved, understanding runtime cost becomes exponentially harder with micro services on legacy enterprise infrastructure.

When transitioning to DevOps and Micro Services, each team is responsible for delivering its services end to end in all environments with the right functionality, performance and, imho, at the right cost. So what is the right cost? Before even answering that question let's start with what the cost is. The DevOps team needs to know the cost of its services. In the legacy enterprise that cost might in the best case arrive as some kind of cost split formula calculated by a finance guy based on how many micro services shared the servers, a cost split formula for any license costs, and a cost split formula for the man hours to maintain the servers. At best a team would get this information once a month.

Part of the reason we want DevOps is so that the team has full competence and ability to improve its services. This needs to include the runtime costs of the services. A performance optimization of a service that cuts resource requirements by 25% should result in lower runtime costs for the service. In the legacy world of cost splits and financial formulas to calculate cost, this just doesn't happen.

Thankfully we have cloud providers. Not only do we get the right resources to handle our services when we need them, but we get billed real money for them.

With auto scaling, elasticity and all the other nice features we get, there is also a risk: we get away with writing increasingly bad applications. Bad performance? Doesn't matter as long as it scales horizontally. We can IO-block ourselves to hell and back, and as long as we scale horizontally we get away with it. Well, as long as we don't have to take responsibility for the cost of our service. This is why it's so important for the DevOps team to take responsibility for the runtime cost of the service.

For a few years I have had a dream to be able to stamp a runtime cost on a load test and correlate the performance measured in the test with the cost of the runtime environment used to run the test. We are still not there with our applications, since we run on AWS Auto Scaling Groups and AWS bills us per started hour. This makes the billing data too blunt to give the correlation between performance and cost from a shorter test. With a Micro Service architecture that uses AWS services such as Lambda, DynamoDB and Kinesis this would actually be achievable today.

With this vision in mind we have been able to integrate the runtime costs of our Micro Services in AWS with our Continuous Delivery as a Service implementation (Delivery Engine) in another way. For us, DevOps Teams are the owners of services and Solutions are the drivers of the cost. So everything that we launch in AWS we tag with the name of the micro service, the owner team name and the cost-driving solution.
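The tagging itself is trivial for EC2-backed services. A sketch with the AWS SDK v1, where the instance id and tag values are just examples of our three dimensions:

import com.amazonaws.services.ec2.AmazonEC2ClientBuilder
import com.amazonaws.services.ec2.model.CreateTagsRequest
import com.amazonaws.services.ec2.model.Tag

// Tag a launched instance with service, owning team and cost-driving solution
// so the billing data can be grouped along all three dimensions.
def ec2 = AmazonEC2ClientBuilder.defaultClient()
ec2.createTags(new CreateTagsRequest()
    .withResources('i-0123456789abcdef0')
    .withTags(new Tag('service', 'checkout-api'),
              new Tag('team', 'team-blue'),
              new Tag('solution', 'webshop')))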

So this allows our DevOps Teams to see the cost of their Services.

Here we provide a graph of total costs for the services owned by a DevOps team. The services are listed in a top list below the graph, with a long-running trend of the cost for each component. This allows the Product Owners and Team Members to act on increasing costs. Links are provided to the Dashboard Page of each service.

This Service Dashboard provides the version history of each version built in the delivery engine. Our Delivery Engine builds, black-box tests the deployed service, load tests it, bakes AWS AMIs for it and launches it into the right environments. Here on the dashboard we visualize the total cost of the service grouped by environment (delivery engine, exploratory testing, qa and prod environments).

From the same dashboard we visualize service usage. In the example below it's a visualization of CPU consumption across an auto scaling group.

This way we allow the team to ensure that they have the right scaling rules for their services and the right amount of resources in each environment.

By visualizing the cost of the runtime environment for each service and combining it with Continuous Delivery, Continuous Performance Testing and DevOps, we allow our developers to constantly tweak and improve performance and optimize the cost of our runtime environments. The lead time in the cost reporting is still at the "next day" level, as we report the cost per day and per month, but I still think this is a reasonable feedback loop when it comes to cost optimizations.

Friday, February 27, 2015

Pipes as Code

Finally we have started to move away from having build pipes as a chain of Jenkins jobs. A lot has been written on the subject that CI systems aren't well suited to implement CD processes. Let me first give a short recap on why, before I get into how we now deliver our Pipes as Code.

First of all, pipes in CI systems have bad portability. They are usually a chain of jobs set up either through a manual process or through some sort of automation based on an API provided by the CI system. The inherent problem here is that the pipe executes in the CI system. This means that it is very hard to test and develop a pipe using Continuous Delivery. Yes, we need to use Continuous Delivery when implementing our Continuous Delivery tooling, otherwise we will not be able to deliver our CD processes in a qualitative, rapid and reliable way.

Then there is the problem of the data that we collect during the pipe. By default the data in a CI system is stored in that CI system, often on disk on that instance of that CI server. Adding insult to injury, navigation of the build data is often tied to the current implementation of the build pipe. This means that a change to the build pipe means that we can no longer access the build data.

For a few years now we have been offloading all the build data into different types of storage depending on what type of data it is. Metadata around the build we store in a custom database. Logs go to our ELK stack, metrics to Graphite and reports to S3.

Still we have had trouble delivering quality Pipes. Now that has changed.

We still use a CI server to trigger the pipe. On the CI server we now have one job, "DoIt". The "DoIt" job executes the right build pipe for every application. Let's talk a bit about how we pick the pipe.

Each git repo contains a YML file that says how we should build that repo. That's more or less the only thing that has to be in the repo for us to start building it. We ignore all repos without the YML file. So we listen to all the gerrit triggers and ignore the ones without it.

The YML is pretty much just:

pipe: application-pipe
jdk: JDK8
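A sketch of how the "DoIt" job could pick the pipe from this file; the file name pipe.yml and the PipeRunner class are hypothetical stand-ins for our internals:

import org.yaml.snakeyaml.Yaml

// Look for the repo's build YML; ignore the repo if it's missing,
// otherwise run the pipe it names.
def checkoutDir = new File(args[0])
def pipeFile = new File(checkoutDir, 'pipe.yml')
if (!pipeFile.exists()) {
    println "No pipe YML in ${}, ignoring"
    return
}
def config = new Yaml().load(pipeFile.text)
println "Building with ${config.pipe} on ${config.jdk}"
new PipeRunner().run(config.pipe, [jdk: config.jdk])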

We describe our build pipes in YML and implement our tasks in Groovy. Here is a simple definition. 

     - do: setup.Clean
     - do: setup.Init
     - do: build.Build
     - do: test.Test
     - do: log.ReportBuildStatus
     - do: notify.Email

Each task has a lifecycle of first, main, last. The "do"s in the first section are always executed regardless of result. In the main section the "do"s are only executed if everything has gone well so far. The last section is always executed regardless of how things went.
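A task with explicit lifecycle sections could look something like this (a sketch of the syntax; the exact keys are my assumption):

     first:
       - do: setup.Clean
     main:
       - do: build.Build
       - do: test.Test
     last:
       - do: log.ReportBuildStatus
       - do: notify.Email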

The "do´s" are references to groovy classes with the first mandatory part of the package stripped. So there is a com.something.something.something.setup.Clean class.

A Context object is passed through all the execute methods of the "do"s. By setting context.mock=true the main executing process adds the suffix "Mock" to all "do"s. This allows us to unit test the build pipe in order to assert that all the steps that we expect to happen do happen in the correct order.
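A pipe unit test then just runs the pipe with a mocked context and asserts on the recorded order. A sketch, where PipeRunner and the recording mechanism are hypothetical:

// With context.mock=true each do is swapped for its Mock twin, and the mocks
// record their names on the context so the test can assert on the order.
def context = [mock: true, executed: []]
new PipeRunner().run('application-pipe', context)
assert context.executed == ['setup.CleanMock', 'setup.InitMock',
                            'build.BuildMock', 'test.TestMock',
                            'log.ReportBuildStatusMock', 'notify.EmailMock']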

When a lot of things start happening it's not really practical to have a build task that verbose, especially since we have multiple pipes that share the same build task. So we can create a "build.yml" and a "notify.yml" which we can then include like this.

     - ref: build
     - ref: notify

So this is how our build pipes look, and we can unit test the pipe, the tasks and each "do" implementation.

Looking at a full pipe example we get something like this.

    - ref: init
    - ref: build.deployable
    - ref: provision.create-test-environment
    - ref: deploy.deploy-engine
    - ref: test.functional-tests
    - ref: test.load-tests
    - ref: release.publish-to-nexus
    - ref: bake.ami-with-packer
    - ref: provision.destroy-test-environment
    - ref: end

That's it!

This pipe builds, functional tests, load tests and publishes our artifacts as well as baking images for our AWS environments. All the steps report to our meta data database, elk, graphite, s3 and slack.

And of course we use our build pipes to build our build pipe tooling.

Continuous Delivery of Continuous Delivery through build Pipes as Code. High score on the buzzword bingo!

Monday, July 7, 2014

Continuous Deployment in the Cloud Part 2: The Pipeline Engine in 100 lines of code

As I talked about in my previous post in this series, we need to treat our Continuous Delivery process as a distributed system, and as part of that we need to move the Pipe out of Jenkins and into a first class citizen of its own. Aside from the fact that a CI Tool is a very bad Continuous Delivery/Deploy orchestrator, I find the potential of executing my pipe from anywhere in the cloud very tempting.

If my pipe is a first class citizen of its own and executable on any plain old node, then I can execute it anywhere in the cloud. That means I can execute it locally on my laptop, on a simple minion in my own cloud, or in any of the managed CI services that have surfaced in the cloud.

To accomplish this we need five very basic and simple things:

  1. a pipeline configuration that defines what tasks to execute for that pipe
  2. a pipeline engine that executes tasks
  3. a library of task implementations
  4. a definition of what pipe to use with my artefact
  5. a way of distributing the pipeline engine to the node where I want to execute my pipeline
Let's have a look.

Define the pipeline

The pipeline is a relatively simple process that executes tasks. For the purpose of this blog series, and for simple small deliveries, sequential execution of tasks can be sufficient, but at my work we run a lot of parallel sub pipes to improve throughput on the test parts. So our pipe should be able to handle both.

We also want to be able to run the pipe to a certain stage: like from start to regression test, and then step by step launch in QA and launch in Prod. Obviously, if we want to do continuous deployment we don't need to worry too much about that capability, but I include it just to cover a bit more scope.

Defining pipelines is no real rocket science, and in most cases a few archetype pipelines will cover something like 90% of the pipes needed for a large scale delivery. So I like to define a few flavours of pipes, which gives us the ability to distribute a base set of pipes ranging from CI to CD.

Once we have defined the base set of pipe flavours, each team should configure which pipe they want to handle their deliverables.

I define my pipes something like this.

name: Strawberry
  - do:
     - tasks:
       - name: Build
         type: mock.Dummy
  - do:
     - tasks:
       - name: Deploy A
         type: mock.Dummy
       - name: Test A
         type: mock.Dummy
     - tasks:
       - name: Deploy B
         type: mock.Dummy
       - name: Test B
         type: mock.Dummy
    parallel: true     
  - do:
     - tasks:
       - name: Publish
         type: mock.Dummy
A pipe named Strawberry, which builds our service, then deploys it in two parallel sub pipes where it executes two test suites, and finally publishes the artefacts in our artefact repo. At this stage each task is just executed with a Dummy task implementation.

The pipeline engine

We need a mechanism that understands our yml config and links it to our library of executable tasks.

I use Groovy to build my engine but it can just as easily be built in any language. I've intentionally stripped out some of the logging I do, but this is basically it. In about 80 lines of code we have an engine that loads tasks defined in a yml, executes them in serial or parallel, and has the capability to run all tasks, the tasks up to one point, or a single task.

import java.util.concurrent.atomic.AtomicInteger

class BalthazarEngine {

    def int start(Map context){
        def status = 0
        def definition = context.get("balthazar-pipe")
        for (def doIt : definition["pipe"]){
            status = executePipe(doIt, context)
            // stop the pipe as soon as a stage fails
            if (status != BalthazarTask.TASK_SUCCESS) break
        }
        return status
    }

    def int executePipe(Map doIt, Map context){
        def status = 0
        if (doIt.parallel == true){
            status = doItParallel(doIt, context)
        } else {
            status = doItSerial(doIt, context)
        }
        return status
    }

    def int doItSerial(def doIt, def context){
        def status = 0
        for (def tasks :{
            status = executeTasks(tasks, context)
            if (status != BalthazarTask.TASK_SUCCESS) break
        }
        return status
    }

    def int doItParallel(def doIt, def context){
        // each parallel sub pipe works on its own copy of context and tasks
        def status = new AtomicInteger(BalthazarTask.TASK_SUCCESS)
        def threads = []
        for (def tasks :{
            def cloneContext = deepcopy(context)
            def cloneTasks = deepcopy(tasks)
            threads << Thread.start {
                def result = executeTasks(cloneTasks, cloneContext)
                if (result != BalthazarTask.TASK_SUCCESS) status.set(result)
            }
        }
        threads*.join()
        return status.get()
    }

    def int executeTasks(def tasks, def context){
        def status = 0
        for (def task : tasks){
            // execute if run-task is not specified or if run-task equals this task
            if (!context["run-task"] || context["run-task"] =={
                println "execute ${}"
                context["this.task"] = task
                def impl = loadInstanceForTask(task)
                status = impl.execute(context)
            }
            if (status != BalthazarTask.TASK_SUCCESS){
                break
            }
            if (context["run-to"] =={
                println "Executed ${context["run-to"]} which is the last task, done executing."
                break
            }
        }
        return status
    }

    def loadInstanceForTask(def task){
        def className = "balthazar.tasks.${task.type}"
        return Class.forName(className).newInstance()
    }

    def deepcopy(orig) {
        // serialize/deserialize to give each thread an isolated copy
        def bos = new ByteArrayOutputStream()
        def oos = new ObjectOutputStream(bos)
        oos.writeObject(orig); oos.flush()
        def bin = new ByteArrayInputStream(bos.toByteArray())
        def ois = new ObjectInputStream(bin)
        return ois.readObject()
    }
}

A common question I tend to get is "why not implement it as a lifecycle in maven or gradle". Well, I want a process that can support building in maven, gradle or any other tool for any other language. Also, as soon as we use another tool to do our job (be it a build tool, a CI server or whatever) we need to adapt to its lifecycle definition of how it executes its processes. Maven has its lifecycle stages quite rigidly defined and I find it a pita to redefine them. Jenkins has its pre, build and post stages, where it's a pita to share variables. And so on. But most importantly: use build tools for what they do well and CI tools for what they do well, and none of that is implementing CD pipes.

Task library.

We need tasks for our pipe engine to execute. The interface for a task is simple.

public interface BalthazarTask {
    int TASK_SUCCESS = 0;
    int execute(Map<String, Object> context);
}

Then we just implement them. For my purposes I package tasks as "balthazar.tasks.<type>.<task>" and just define the type and task in my yml.

Writing tasks in a custom framework, compared to jobs in Jenkins, is a joy. You no longer need workarounds for tooling limitations for simple things such as setting variables during execution.

Anything you want to share you just put it on the context.

Here is an example of how two tasks share data.

  - tasks: 
    - name: Initiate Pipe
      type: init.Cerebro
  - tasks: 
    - name: Build
      type: build.Gradle
      command: gradle clean fatJar

I have two tasks. The first task creates a new version of the artefact we are building in my master data repository, which I call Cerebro. (More on Cerebro in the next post.) Cerebro is the master of all my build things and hence my version numbers come from there. So the init.Cerebro task takes the version from Cerebro and puts it on the context.

class Cerebro implements BalthazarTask {
    def int execute(Map<String, Object> context){
        def affiliation = context.get("cerebro-affiliation")
        def hero = context.get("cerebro-hero")
        def key = System.env["CEREBRO_KEY"]
        def reincarnation = CerebroClient.createNewHeroReincarnation(affiliation, key, hero)
        // put the new version on the context so downstream tasks can use it
        context["cerebro-reincarnation"] = reincarnation
        return TASK_SUCCESS
    }
}

My build.Gradle task takes the version number from Cerebro (called reincarnation) and sends it to the build script. As you can see I can use custom commands, and in this case I do, as fat jars are what I build. By default the task does gradle build. I can also define what log level I want my gradle script to run with.

class Gradle implements BalthazarTask {
    def int execute(Map<String, Object> context){
        def affiliation = context["cerebro-affiliation"]
        def hero = context["cerebro-hero"]
        def reincarnation = context["cerebro-reincarnation"]
        def command = context["this.task"]["command"] == null ? "gradle build" : context["this.task"]["command"]
        def loglevel = context["this.task"]["loglevel"] == null ? "" : "--${context["this.task"]["loglevel"]}"
        def gradleCommand = """${command} ${loglevel} -Dcerebro-affiliation=${affiliation} -Dcerebro-hero=${hero} -Dcerebro-reincarnation=${reincarnation}"""

        def proc = gradleCommand.execute()
        // waitFor blocks until the build finishes and returns the exit code;
        // exitValue() would throw if the process were still running
        return proc.waitFor()
    }
}

This is how hard it is to build tasks (jobs) when it's done with code instead of configured in a CI tool. Sure, some tasks, like building Amazon AMIs, take a bit more code. (j/k they don't). But ok, a launch task that implements a rolling deploy on Amazon using an A/B release pattern does, but I will come back to that specific case.

Configure my repository

So I have a build pipe executor, pre built build pipes and tasks that execute in them. Now I need to configure my repository.

In my experience 90% of your teams will be able to use prefab pipes without you investing too much effort into building tons of prefabs. A few CI, a few simple CD and a few parallelized pipes should cover a lot of demand, if you are good enough at putting an interface between the deploy tasks and the deploy tools, as well as the test tasks and the test tools.

So in my repo I have a .balthazar.yml which contains.

balthazar-pipe: Strawberry

Distributing the pipeline engine and the task library

First thing we need is a balthazar client that starts the engine using the configuration provided inside my repository. A simple Groovy script does the trick.

import org.yaml.snakeyaml.Yaml

class BalthazarRunner {
    def int start(Map<String, Object> context){
        Yaml yaml = new Yaml()
        if (!context){
            def projectfile = new File(".balthazar.yml")
            if (projectfile.exists()){
                context = yaml.load(projectfile.text)
            } else {
                throw new Exception("No .balthazar.yml in project")
            }
        }
        def name = context.get("balthazar-pipe")
        def definition = yaml.load(this.getClass().getResource("/processes/${name}.yml").text)
        BalthazarEngine engine = new BalthazarEngine()
        context["balthazar-pipe"] = definition
        // run-to and run-task come from the invocation; here I assume they are
        // passed as system properties (-Drun-to=..., -Drun-task=...)
        context["run-to"] = System.properties["run-to"]
        context["run-task"] = System.properties["run-task"]
        return engine.start(context)
    }
}
def runner = new BalthazarRunner()
runner.start(null)

Now we need to distribute the client, our engine and our library of tasks to the node where we want to execute the pipeline with our code repository. This can be done in many ways.

We can package balthazar as an installable package and install it using yum or a similar tool. This works quite well on build servers but it does limit us a bit on where we can run it, as we need "that installer" to be installed on the target environment. In many cases it really isn't a problem, because if you're a Debian shop then you have your deb everywhere and if you're a Red Hat shop then you have your yum.

I personally opted for another way of distributing the client, partially because I'm lazy and partially because it works in a lot of environments. When I make my balthazar.yml I also check out the balthazar client project as a git submodule.

So all my projects have a .balthazar.yml and a balthazar-client folder. In my client folder I have a gradle file and a shell script. I use gradle to fetch the latest artefacts from my repo and then the shell script does the java -jar part. Not all that pretty but it works.


So now on all my repos I can just do...

>. balthazar-client/

... and I run the pipe configured in .balthazar.yml on that repo. Since all my tasks integrate with my Build Data Repository, I get ONE view of all my pipe executions regardless of where they were executed.

CD made fun! Cheers!

Friday, June 27, 2014

Continuous Deployment in the Cloud Part1: The Distributed Continuous X process

This is the first part of the Blog Series "Continuous Deployment in the Cloud".

When we started doing Continuous Delivery, many of us started building the process around a CI Server, and many of us ran into problems building our pipelines with Jenkins or other CI Tools. There are several reasons for these problems; these two blog posts outline them really well.

CI Server is a bad Continuous Delivery/Deployment Orchestrator

Personally I'd like to boil down the core of the problem to lack of portability and separation of concerns.

If you model the process in a CI Tool then the process will never be portable. Even if you can distribute the process across multiple instances of the CI Tool, through different means of generating and publishing the process, you always need an instance of that tool to run the process. This makes development quite hard, since you need a local development instance of that tool.

In most CI Tools the data collected from each job is stored in the CI Tool itself. To make matters even worse, it's often stored in that instance of the CI Tool. This means that the only way to access the data gathered by the jobs of the pipe is by navigating that instance of the pipe on that instance of the CI Server. This makes it very hard to distribute the Continuous X process over multiple CI Servers; we can use master/slave setups but the problem still persists with multiple masters.

Since the data is often tied to the implementation of the pipe, it becomes very hard to visualise historical data. If the current layout of the pipe has changed, we still want to be able to visualise old pipes of our system.

Another problem that arises is historical data and data retention. As the data is tied to the CI Tool, and visualisation is tied to the CI Tool, we need to manage the disk space on the CI Tools, and we don't want to mix runtime and historical data in the CI Tool.

Separating Process Implementation and Process Data Storage

So the first thing we have to deal with in order to distribute our Continuous X process is to move the data out of the process implementation.

In fact the first problem we encounter when distributing a Continuous X process is the Version Number.
What do we use as a version number and where do we get it?

  • Using the CI Server Build Number is extremely bad, as you can't even reset your Build Job without encountering problems.
  • Using a property checked in to your source code repository, such as the version number in the maven pom or similar, is almost worse. There will be a gap in time between the repo fetch, the update of the version and the commit back to the repository. If other jobs start in this time frame then they will get the same version number.

So the answer is: we get it from a central build data repository. A single database that keeps track of all our deliverables, their versions and their state. By delegating the responsibility for the version number to the Build Data Repository we ensure that the Version Number is created through an atomic update.
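The atomic update can be as simple as incrementing and reading in a single statement in the Build Data Repository's database. A sketch with Groovy SQL, where the connection details, table and column names are made up:

import groovy.sql.Sql

// Allocate the next version number atomically; two concurrent pipes can
// never get the same number since increment and read are one statement.
def sql = Sql.newInstance('jdbc:postgresql://builddata/repo', 'builduser', 'secret', 'org.postgresql.Driver')
def row = sql.firstRow(
    'UPDATE deliverables SET version = version + 1 WHERE name = ? RETURNING version',
    ['checkout-api'])
println "Allocated version ${row.version}"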

The first thing our Process Implementation will do is get its version number from the Build Data Repository. Then everything we do during the Process Implementation, we report to the Build Data Repository (a sketch of such a reporting call follows the list below).

So for example we report

  1. Init Pipe - Get Version Number
  2. Build Start - Environment, TimeStamp
  3. Unit Test Start - Environment, TimeStamp
  4. Unit Test Done - Environment, TimeStamp, Report
  5. Build Done - Environment, TimeStamp, Report
  6. Deploy to Test Start - Environment, TimeStamp
  7. Deploy to Test Done - Environment, TimeStamp, Report
  8. Test Start - Environment, TimeStamp
  9. Test Done - Environment, TimeStamp, Report
  10. Promote - Environment, TimeStamp, Promoted to PASSED_TEST
  11. Deployment Production Start - Environment, TimeStamp
  12. Deployment Production Done - Environment, TimeStamp, Report
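Each of these reports is just a small document posted to the Build Data Repository. A sketch of what such a call could look like over HTTP; the endpoint and payload shape are assumptions:

import groovy.json.JsonOutput

// Post one pipe event to the build data repository.
def report = { String event, Map data ->
    def conn = new URL('http://builddata.internal/api/events').openConnection()
    conn.requestMethod = 'POST'
    conn.doOutput = true
    conn.setRequestProperty('Content-Type', 'application/json')
    conn.outputStream.withWriter {
        it << JsonOutput.toJson([event: event, timestamp: System.currentTimeMillis()] + data)
    }
    assert conn.responseCode in [200, 201]
}
report('unit-test-done', [environment: 'ci-1', report: 's3://reports/1234/unit.html'])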
Now we have a Continuous X process implementation that gets the version number from the same place regardless of where it's executed, and reports all the data into one repository regardless of where it's executed. This means that the same process implementation can be executed on any CI Server instance as well as on any developer machine. This enables us to implement a portable, and hence distributed, Continuous X process.

There are several reasons we might want to run the process locally in our Dev Environment. We need the capability to develop the process implementation, and we need to be able to do it without a CI Server. It also gives us a good mindset when making a distributed implementation: if it can run locally then it can run on any build environment. In some cases we maybe don't need a full-fledged build environment to run our Continuous X Process; applications with few developers, few changes and short pipeline execution times don't really need a remote build environment.

Another important reason is that we need to be able to push our code to our customers even if our Continuous X Service is down. Regardless of how high the SLA of the service (internal or cloud based) is, we will eventually run into situations where we need to execute it while it's down.

Though it's important to note that a pipeline should never allow execution of changes that aren't pushed to the SCM. Every execution has to be traceable to an SCM revision.

Process Visualization

One of the main problems with CI Tools such as Jenkins is the visualization. It's just not built for the purpose of Continuous Delivery/Deploy, and as I've mentioned above it's based on local instances of job pipes. This makes it impossible to visualize a distributed process in a good way, and it requires the user to understand the CI Tool. A Continuous Delivery/Deploy process needs to be visualised so that non-technical employees can view it.

Good thing we have a Build Data Repository then. Based on the information in the Build Data Repository we can visualize our pipe, its status and its result, no matter where it was executed. We can also visualise the pipe based on how it looked when it was executed, not how it's implemented now.

Logging, Monitoring and Metrics vs the Build Data Repository

What do we store in the Build Data Repository? Do we store everything in there, like execution logs from the test executions and metrics?

No. The Build Data Repository stores the reports from all events: builds, deployments, tests, etc. The actual runtime logging should go where it belongs: log files into a log repository such as Logstash, metrics into a metrics repository such as Graphite.

The Process Visualization could/should aggregate the data from the Build Data, Log and Metric repositories to provide a good view of our process and system.

Provisioning Environments and Executing Tests

It's really important that our Continuous X Process Implementation initiates the full provisioning of the full test/prod environment. By ensuring that it does so, we ensure that all environmental changes go through our pipe.

What do I define as Environment? Everything: load balancers, network rules, middleware nodes, caches, databases, scaling rules, etc.

This concept is a great extension of the Immutable Server pattern. By releasing images instead of artefacts out of our build step (actually build+bake) we enable the creation of Immutable Servers throughout the entire build pipe.

The process builds a new test environment for each black-box deployed test execution (basically anything beyond unit tests) and then destroys it again once the test is finished. When we deploy into our next environment, call it QA/PreProd or whatever, we do it by creating new middleware servers in that environment based on the same image and virtualisation spec as we had in our Test Environment. Once they are up and running we rotate them into our cluster/load balancer, whatever.
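A sketch of that rotation against a classic ELB, assuming the AWS SDK v1; in reality you would wait for the new instances to pass health checks before deregistering the old ones:

import com.amazonaws.services.elasticloadbalancing.AmazonElasticLoadBalancingClientBuilder
import com.amazonaws.services.elasticloadbalancing.model.DeregisterInstancesFromLoadBalancerRequest
import com.amazonaws.services.elasticloadbalancing.model.Instance
import com.amazonaws.services.elasticloadbalancing.model.RegisterInstancesWithLoadBalancerRequest

// Rotate freshly baked servers into the load balancer and the old ones out.
def newInstanceIds = ['i-0aaa111122223333a', 'i-0aaa111122223333b']  // from the launch step
def oldInstanceIds = ['i-0bbb111122223333a', 'i-0bbb111122223333b']
def elb = AmazonElasticLoadBalancingClientBuilder.defaultClient()
elb.registerInstancesWithLoadBalancer(new RegisterInstancesWithLoadBalancerRequest()
    .withLoadBalancerName('qa-cluster')
    .withInstances(newInstanceIds.collect { new Instance(it) }))
elb.deregisterInstancesFromLoadBalancer(new DeregisterInstancesFromLoadBalancerRequest()
    .withLoadBalancerName('qa-cluster')
    .withInstances(oldInstanceIds.collect { new Instance(it) }))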

The big difference between our Test, QA and Prod environment deploys is that the first one is a create-and-destroy scenario while the others are update scenarios, as we cannot build new full environments for production. We cannot build new databases, bring up new load balancers, etc. for each production deploy. So preferably we separate the handling of Mutable and Immutable infrastructure within an environment, so that we can create and destroy our Immutable Servers/Clusters and mutate our databases.

So basically, a very low level of detail topology of a Distributed Continuous X implementation looks something like this.

This gives us the fundamental understanding of what we need to do in order to build a scalable distributed system that handles our Continuous X processes in our company.

In my next post in this series I will go into an example of how to build a portable Continuous X Process Implementation that is independent of any CI and Build Tools.

Monday, June 2, 2014

Blog Series: Continuous Deployment in the Cloud

For my next few posts I am going to focus on writing a series of articles on how to do Continuous Delivery & Deployment in a cloud environment. I've always been a bit cautious when it comes to tutorial style blogs, talks and articles. I usually find them to be too shallow; they never reveal the true issues that need to be solved. This often leads to bad, premature and uninformed decisions made by the consumer of the tutorial.

So instead I am going to try to provide a much richer series of articles that focus on how to Architect, Test, Deploy and Deliver in a Cloud environment.

In my conference talks I often talk about the key of having a build data repository that is separate from the build engine (CI Tool). More than once I have been asked if I can open source this tool of ours. Well, I'm not sure, has been my answer. So instead I've decided to implement a new, similar tool and open source it. I'm going to over-architect it a bit on purpose, as it's going to be the main example in the article series. The Process Implementation in this series is going to use that tool as its build repo.

This is how the Process Implementation will look. I will go deeper into this picture as I move forward, but a few quick words about it.

The key to a scalable CD process is for it to be independent of its runtime environment. The CD process drawn here can be executed from a Build Environment and/or a Dev Environment. The dev can push from his own environment right into production, or he/she can let the build environment do it for him/her. Regardless of where the process is initiated, it will be executed in the same way, and it will integrate with the build data repository, to which it reports any events that happen on that build and from which it gets its version number.

Will I talk about tooling this time? Yes I will. This series will be based on Git as SCM, AWS as Test and Prod Runtime Environments, Travis CI as Build Environment and Gradle as Build Tool. I still have not decided upon a test tool; most likely it will be RESTAssured.

This series will take some time to write and will mostly be done during the later part of this summer and the fall. If you are interested in this article series and/or the build data repository, then please +1 this article to show me that there is interest.


Wednesday, May 14, 2014

Tomorrow is the premiere of "Scaling Continuous Delivery" at GeeCon 2014

Tomorrow, on the 15th of May, I've got a talk at GeeCon in Krakow. It's the first outing of my new talk, Scaling Continuous Delivery. The talk is an experience report on all the struggles we have had scaling our continuous delivery rollout. Hopefully the talk will provide insight into what we have done and the steps we have taken while scaling. Sometimes it's not just the end goal that is interesting but also the journey.

Hopefully it will be appreciated.

Here are the slides for the talk