
Monday, March 17, 2014

Portability

I've talked about portability of the CD process before, but it keeps becoming more and more evident to us how important it is. The closer the CD process comes to the developer, the better the process is understood. Our increase in portability has gone through several stages.

Initially we deployed locally in a way that was totally different from how we deployed in the continuous delivery process. Our desktop development environments were not part of our CD process at all. Our deploy scripts handle stopping and starting servers, moving artifacts on the server, linking directories and running Liquibase to upgrade/migrate the database. On the local environments we did all of this manually. We did run Liquibase, but through the Maven plugin (which we don't do in our deploy scripts, where we run it using java -jar). We moved artifacts by hand or by other scripts.
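For concreteness, here is a minimal sketch (in Python, using subprocess) of the kind of java -jar Liquibase invocation a deploy script can use, as opposed to the Maven plugin we ran locally. The jar name, changelog path and connection details below are made-up placeholders, not our actual configuration:

    # Sketch of a CD deploy step running Liquibase via "java -jar"
    # (locally we ran "mvn liquibase:update" instead).
    import subprocess

    def migrate_database():
        subprocess.check_call([
            "java", "-jar", "liquibase.jar",
            "--driver=org.postgresql.Driver",        # JDBC driver classpath omitted for brevity
            "--changeLogFile=db/changelog.xml",      # placeholder changelog
            "--url=jdbc:postgresql://localhost:5432/appdb",
            "--username=app_user",
            "--password=app_password",
            "update",
        ])

    if __name__ == "__main__":
        migrate_database()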

Then we created a local bootstrap script which executed the CD deploy scripts on a local environment. We built environment-specific support into the local bootstrap so that both Linux and Windows were supported. However, in order to start JBoss and Mule we needed to add support for the local environment in the CD deploy scripts as well. We moved closer to portability, but we diluted our code and increased our complexity. This was still an improvement, but the process was not yet truly portable.
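The bootstrap was essentially a thin wrapper along these lines; a hedged sketch only, where the script names, flags and layout are invented for illustration rather than copied from our actual scripts:

    # Hypothetical local bootstrap: it adds only the environment-specific parts
    # (OS detection, local paths) and then calls the same CD deploy script that
    # the pipeline uses. Names and flags are illustrative.
    import platform
    import subprocess
    import sys

    def local_deploy(artifact: str) -> int:
        if platform.system() == "Windows":
            env_name = "local-windows"
            deploy_cmd = ["cmd", "/c", "deploy.bat"]
        else:
            env_name = "local-linux"
            deploy_cmd = ["bash", "deploy.sh"]
        # Same deploy script as for QA/test, pointed at the local environment.
        return subprocess.call(deploy_cmd + ["--env", env_name, "--artifact", artifact])

    if __name__ == "__main__":
        sys.exit(local_deploy(sys.argv[1]))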

Recently we decided to shift our packaging of artifacts from zip files to RPMs. All our prod and test environments run Red Hat, so the technology dependency is not really an issue for us here. What this gives us is the ability to manage dependencies between artifacts and infrastructure in a clean way: the war file depends on a JBoss version, which depends on a Java version, and everything is installed when needed. It also finally gives us a clear separation between install and deploy. The yum installer puts files on the server; our deploy application brings the runtime online, configures it and moves the artifacts into the runtime.
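As a rough sketch of what that dependency chain can look like at the packaging level (hypothetical spec-file fragments; package names and version numbers are made up for illustration):

    # app-web.spec (illustrative fragment)
    Name:       app-web
    Version:    1.0.0
    Release:    1
    Requires:   jboss-eap >= 6.1

    # jboss-eap.spec (illustrative fragment)
    Name:       jboss-eap
    Version:    6.1.0
    Release:    1
    Requires:   java-1.7.0-openjdk

A single "yum install app-web" then pulls in the matching JBoss and Java packages, and the deploy application only has to configure and activate what yum has already put on the server.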

In order to maintain portability to the development environment, this finally forced us to go all in and decide that "development is done in a Linux environment". We won't be moving to Linux clients, but our local deploy target will be a virtual Linux box. This finally puts everything into place for a fully portable model. It's important to understand that we still don't have a cloud environment in our company.


This image, created by my colleague Mikael, is a great visualization of the portability we can build into our environment now and once we get a cloud. By defining a portability level and its interface we manage to build a mini cloud on each Jenkins slave and on a local dev machine, using exactly the same process as we would for a QA or test deploy. The nodes above the portability level can be local on the workstation/Jenkins slave or remote in a prod environment. The process is the same regardless of environment: Provision, Install and Deploy.
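To make the idea a bit more concrete, here is a minimal sketch of what such a portability interface could look like; the class and method names are invented for illustration and are not taken from our actual tooling:

    # Hypothetical portability level: everything above it talks to a node through
    # one small interface, so the process is identical for a local virtual box,
    # a Jenkins slave or a remote prod node. All names are invented.
    import subprocess
    from abc import ABC, abstractmethod

    class Node(ABC):
        @abstractmethod
        def run(self, command: str) -> None:
            """Execute a command on the node, wherever it happens to live."""

    class LocalVmNode(Node):
        def run(self, command: str) -> None:
            subprocess.check_call(command, shell=True)

    class RemoteNode(Node):
        def __init__(self, host: str):
            self.host = host
        def run(self, command: str) -> None:
            subprocess.check_call(["ssh", self.host, command])

    def provision_install_deploy(node: Node) -> None:
        # The same three steps regardless of where the node runs.
        node.run("./provision.sh")   # Provision the node
        node.run("./install.sh")     # Install (yum install of the RPMs)
        node.run("./deploy.sh")      # Deploy (configure and bring the runtime online)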


2 comments:

  1. Hi there Thomas,
    Interesting post - I've had similar experiences with CD myself, not identical mind you! Have you looked at how Docker could help you deploy to the same-shaped VMs, making for a consistent deploy and configure process all the way through your pipeline... even locally?

  2. Hi Gary,

    Not sure if you have read this post, http://continuous-delivery-and-more.blogspot.se/2014/04/environment-portability.html, which covers more on the subject of portability.

    Docker is interesting and we have considered it. Still, it does not really solve the problem I talk about in the linked post, which is making the complexity of the production environment portable. If you have a production cluster in, say, AWS with an ELB and an Auto Scaling Group, then you use Docker to deploy the nodes in the ASG (I just used AWS as an example rather than our actual setup). Each node can then be portable to your dev environment, but you don't get the cluster. So it doesn't solve that problem.

    The next problem for us is that we would have to introduce new technology into the production environment. That in itself is a political battle.

    My final issue is that I'm undecided as to the value it will add for us in the long run, as our long-term vision is to bake images in a bakery. We do a deploy once, bake that image, and just mount it in all environments from test to prod. Docker would help us do the deploy during the bake, but since that is only done once it doesn't add as much value.

    The reason behind baking is that there are just three operations involved in each deployment: Get Virtual Node + Mount Image + Init Server. Deploying using Docker, RPMs or whatever other custom zip involves Get Virtual Node + Mount Image + Transfer Artifact + Unpack Artifact + Init Server, which is two more operations and one more network dependency.

    The best way to make deployment as reproducible as possible is to do it just once. The main culprit is the Transfer Artifact step: if the Maven (or whatever) repo is down then you can't deploy, and if there is a hiccup on the network the deploy fails.

    So yes, I do think Docker is a good tool, but it really doesn't solve the main problems I see.
