
Thursday, January 7, 2016

Optimizing Runtime Cost through Continuous Delivery

We have always needed to understand our application runtime costs. In legacy enterprises this has often been handled at a finance level, with someone in the Runtime Operations department responsible for that cost. The costs have been distributed to the right organization in different ways: either each application has had its own infrastructure and owned the full cost of it, or applications have shared infrastructure in order to save money, utilize large servers better or simply minimize the setup and maintenance work of the servers.

In many legacy enterprises there has been a huge disconnect between the application evolution and the infrastructure evolution. Application development has moved towards micro services, with a lot of small applications with lifecycles of their own. Legacy enterprise runtime operations are largely unable to deliver on these changes due to processes, licenses and hardware leases of huge "enterprise scale servers" for the old monoliths. Requests for several servers per micro service have been waved off as unrealistic.

Several ways around this have been invented in order to co-host multiple applications on the same servers in a somewhat isolated way. Docker draws so much attention partially because legacy enterprises can still keep their big old servers and isolate the micro services on them. (That is of course not the only good thing about Docker.)

Regardless of how this is solved, understanding runtime cost becomes exponentially harder with micro services on legacy enterprise infrastructure.

When transitioning to DevOps and Micro Services each team is responsible for delivering its services end to end in all environments with the right functionality, the right performance and, imho, at the right cost. So what is the right cost? Before even answering that question let's start with what the cost is. The DevOps team needs to know the cost of its services. In the legacy enterprise that cost might, in the best case, arrive as some kind of cost split formula calculated by a finance guy based on how many micro services shared the servers, a cost split formula for any license costs and a cost split formula for the man hours to maintain the servers. At best a team would get this information once a month.

Part of the reason we want DevOps is so that the team has the full competence and ability to improve its services. This needs to include the runtime costs of the services. A performance optimization of a service that cuts resource requirements by 25% should result in lower runtime costs for the service. In the legacy world of cost splits and financial formulas this just doesn't happen.

Thankfully we have cloud providers. Not only do we get the right resources to handle our services when we need them, we also get billed real money for them.

With auto scaling, elasticity and all the other nice features we get there is also a risk: we get away with writing increasingly bad applications. Bad performance? Doesn't matter as long as it scales horizontally. We can IO-block ourselves to hell and back, and as long as we scale horizontally we get away with it. Well, as long as we don't have to take responsibility for the cost of our service. This is why it's so important for the DevOps team to take responsibility for the runtime cost of the service.

For a few years I have had a dream to be able to stamp a runtime cost on a load test and correlate the performance measured in the test with the cost of the runtime environment used to run the test. We are still not there with our applications, since we run on AWS Auto Scaling Groups and AWS bills us per started hour. This makes the billing data too blunt to give the correlation between performance and cost for a shorter test. With a Micro Service architecture that uses AWS services such as Lambda, DynamoDB and Kinesis this would actually be achievable today.

With this vision in mind we have been able to integrate the runtime costs of our Micro Services in AWS with our Continuous Delivery as a Service implementation (Delivery Engine) in another way. For us, DevOps Teams are the owners of services and Solutions are the drivers of the cost. So everything that we launch in AWS we tag with the name of the micro service, the owning team and the cost-driving solution.
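As a rough illustration, tagging at launch time can be as small as this Groovy sketch against the AWS SDK for Java; the tag keys and the instance id here are made up for the example, not our actual naming scheme.

import com.amazonaws.services.ec2.AmazonEC2Client
import com.amazonaws.services.ec2.model.CreateTagsRequest
import com.amazonaws.services.ec2.model.Tag

//hypothetical tag keys; the point is that every resource we launch carries
//the service name, the owning team and the cost driving solution
def tags = [
    new Tag("service",  "route-service"),
    new Tag("team",     "team-blue"),
    new Tag("solution", "training-analytics")
]

def ec2 = new AmazonEC2Client()
ec2.createTags(new CreateTagsRequest()
    .withResources("i-0123456789abcdef0")   //the instance we just launched
    .withTags(tags))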

So this allows our DevOps Teams to see the cost of their Services.



Here we provide a graph of total costs for the services owned by a DevOps team. The services are listed in a top list below the graph, together with a long running trend of the cost for each component. This allows the Product Owners and Team Members to act on increasing costs. Links are provided to the Dashboard Page of each service.

The Service Dashboard provides the history of each version built in the delivery engine. Our Delivery Engine builds, black box tests the deployed service, load tests it, bakes AWS AMIs for it and launches it into the right environments. Here on the dashboard we visualize the total cost of the service grouped by environment (delivery engine, exploratory testing, qa and prod environments).
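There is no magic behind that grouping: the cost allocation tags on each resource show up in the AWS billing data, so the dashboard essentially sums cost per tag value. A rough sketch of that aggregation, assuming a hypothetical CSV export with service, environment and cost columns (the real detailed billing report has far more columns, but the idea is the same):

//sum cost per service and environment from a (hypothetical) billing export
//with the header: service,environment,cost
def costPerServiceAndEnv = [:].withDefault { 0.0 }

new File("billing-export.csv").eachLine { line, number ->
    if (number == 1) return                          //skip the header row
    def (service, environment, cost) = line.split(",")
    costPerServiceAndEnv[[service, environment]] += cost.toBigDecimal()
}

costPerServiceAndEnv.each { key, total ->
    println "${key[0]} in ${key[1]}: ${total}"
}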


From the same dashboard we visualize service usage. In the example below it's a visualization of CPU consumption across an auto scaling group.



This way we allow the team to ensure that they have the right scaling rules for their services and that they have the right amount of resources in each environment.

By visualizing the cost of the runtime environment for each service and combining it with Continuous Delivery, Continuous Performance Testing and DevOps, we allow our developers to constantly tweak and improve performance and optimize the cost of our runtime environments. The lead time in the cost reporting is still at the "next day" level, as we report the cost per day and per month, but I still think this is a reasonable feedback loop when it comes to cost optimizations.



Monday, July 7, 2014

Continuous Deployment in the Cloud Part 2: The Pipeline Engine in 100 lines of code

As I talked about in my previous post in this series, we need to treat our Continuous Delivery process as a distributed system, and as part of that we need to move the Pipe out of Jenkins and make it a first class citizen of its own. Aside from the fact that a CI Tool is a very bad Continuous Delivery/Deploy orchestrator, I find the potential of executing my pipe from anywhere in the cloud very tempting.

If my pipe is a first class citizen of its own and executable on any plain old node, then I can execute it anywhere in the cloud. Which means I can execute it locally on my laptop, on a simple minion in my own cloud or in any of the managed CI services that have surfaced in the Cloud.

To accomplish this we need five very basic and simple things:

  1. a pipeline configuration that defines what tasks to execute for that pipe
  2. a pipeline engine that executes tasks
  3. a library of task implementations
  4. a definition of what pipe to use with my artefact
  5. a way of distributing the pipeline engine to the node where I want to execute my pipeline
Let's have a look.

Define the pipeline

The pipeline is a relatively simple process that executes tasks. For the purpose of this blog series and for simple small deliveries, sequential execution of tasks can be sufficient, but at my work we run a lot of parallel sub pipes to improve the throughput on the test parts. So our pipe should be able to handle both.

We also want to be able to run the pipe to a certain stage. Like from start to regression test, and then step by step launch in QA and launch in Prod. Obviously if we want to do continuous deployment we don't need to worry too much about that capability. But I include it just to cover a bit more scope.

Defining pipelines is no real rocket science and in most cases a few archetype pipelines will cover like 90% of the pipes needed for a large scale delivery. So I do like to define a few flavours of pipes that give us the ability to distribute a base set of pipes from CI to CD.

Once we have defined the base set of pipe flavours, each team should configure which pipe they want to handle their deliverables.

I define my pipes something like this.

name: Strawberry
pipe: 
  - do:
     - tasks:
       - name: Build
         type: mock.Dummy
  - do:
     - tasks:
       - name: Deploy A
         type: mock.Dummy
       - name: Test A
         type: mock.Dummy
     - tasks:
       - name: Deploy B
         type: mock.Dummy
       - name: Test B
         type: mock.Dummy
    parallel: true     
  - do:
     - tasks:
       - name: Publish
         type: mock.Dummy
  
A pipe named Strawberry which builds our service, then deploys it in two parallel sub pipes where it executes two test suites, and finally publishes the artefacts in our artefact repo. At this stage each task is just executed with a Dummy task implementation.

The pipeline engine

We need a mechanism that understands our yml config and links it to our library of executable tasks.

I use Groovy to build my engine but it can just as easily be built in any language. I've intentionally stripped down some of the logging I do, but this is basically it. In about 80 lines of code we have an engine that loads tasks defined in a yml, executes them in serial or parallel and has the capability to run all tasks, the tasks up to one point or a single task.

import groovy.util.logging.Log
import java.util.concurrent.atomic.AtomicInteger

@Log
class BalthazarEngine {

    def int start(Map context){
        def status = 0
        def definition = context.get "balthazar-pipe" 
        
        for (def doIt:  definition["pipe"] ){
            status = executePipe(doIt, context)
        }
        return status
    }
    def int executePipe(Map doIt, Map context){
        def status = 0
        if (doIt.parallel == true){
            status = doItParallel(doIt,context)
        } else {
            status = doItSerial(doIt,context)
        }
        return status
    }
    
    def int doItSerial(def doIt, def context){
        def status = 0
        for (def tasks : doIt.do.tasks){
            status = executeTasks(tasks, context)
        }
        return status
    }
    
    def int doItParallel(def doIt, def context){
        def status = new AtomicInteger(BalthazarTask.TASK_SUCCESS)
        def threads = []
        for (def tasks : doIt.do.tasks){
            //each parallel sub pipe gets its own copy of the tasks and context
            def cloneContext = deepcopy(context)
            def cloneTasks = deepcopy(tasks)
            threads << Thread.start {
                def result = executeTasks(cloneTasks, cloneContext)
                if (result != BalthazarTask.TASK_SUCCESS){
                    status.set(result)
                }
            }
        }
        //join all parallel sub pipes, not just the last one started
        threads.each { it.join() }
        return status.get()
    }   
    
    def int executeTasks(def tasks, def context){
        def status = 0
        for (def task : tasks){
            //execute if the run-task is not specified or if run-task equals this task
            if (!context["run-task"] || context["run-task"] == task.name){
                log.info "execute ${task.name}"
                context["this.task"] = task
                def impl = loadInstanceForTask task;
                status = impl.execute context 
            }
            
            if (status != BalthazarTask.TASK_SUCCESS){
                break
            }
            
            if (context["run-to"] == task.name){
                log.info "Executed ${context["run-to"]} which is the last task, done executing."
                break
            }
        }
        return status
    }
    def loadInstanceForTask(def task){
        def className = "balthazar.tasks.${task.type}"
        def forName = Class.forName className
        return forName.newInstance()
    
    }
    def deepcopy(orig) {
        def bos = new ByteArrayOutputStream()
        def oos = new ObjectOutputStream(bos)
        oos.writeObject(orig); oos.flush()
        def bin = new ByteArrayInputStream(bos.toByteArray())
        def ois = new ObjectInputStream(bin)
        return ois.readObject()
    }
}

A common question I tend to get is "why not implement it as a lifecycle in Maven or Gradle?". Well, I want a process that can support building in Maven, Gradle or any other tool for any other language. Also, as soon as we use another tool to do our job (be it a build tool, a CI server or whatever) we need to adapt to its lifecycle definition of how it executes its processes. Maven has its lifecycle stages quite rigidly defined and I find it a pita to redefine them. Jenkins has its pre, build and post stages where it's a pita to share variables. And so on. But most importantly: use build tools for what they do well and CI tools for what they do well, and none of that is implementing CD pipes.

Task library

We need tasks for our pipe engine to execute. The interface for a task is simple.

public interface BalthazarTask {
    int execute(Map<String, Object> context);
}

Then we just implement them. For my purpose I package tasks as "balthazar.tasks.<type>.<task>" and just define the type and task in my yml.
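The simplest possible implementation is the mock.Dummy task used in the Strawberry pipe above. A sketch of it, which just logs its name and reports success:

@Log
class Dummy implements BalthazarTask {
    @Override
    def int execute(Map<String, Object> context){
        //the engine puts the current task definition on the context as "this.task"
        log.info "dummy execution of ${context['this.task']?.name}"
        return TASK_SUCCESS
    }
}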

Writing tasks in a custom framework, rather than as jobs in Jenkins, is a joy. You no longer need to work around tooling limitations for simple things such as setting variables during execution.

Anything you want to share you just put it on the context.

Here is an example of how two tasks share data.

  - tasks: 
    - name: Initiate Pipe
      type: init.Cerebro
  - tasks: 
    - name: Build
      type: build.Gradle
      command: gradle clean fatJar

I have two tasks. The first task creates a new version of the artefact we are building in my master data repository that I call Cerebro. (More on Cerebro in the next post). Cerebro is the master of all my build things and hence my version numbers come from there. So the init.Cerebro task takes the version from Cerebro and puts it on the context.

@Log
class Cerebro implements BalthazarTask {
    @Override
    def int execute(Map<String, Object> context){
        def affiliation = context.get("cerebro-affiliation")
        def hero = context.get("cerebro-hero")
        def key = System.getenv("CEREBRO_KEY")
        def reincarnation =  CerebroClient.createNewHeroReincarnation(affiliation, key, hero)
        context.put("cerebro-reincarnation",reincarnation)
        return TASK_SUCCESS
    }
}


My build.Gradle task takes the version number from Cerebro (called reincarnation) and sends it to the build script. As you can see I can use custom commands, and in this case I do, as fat jars are what I build. By default the task does gradle build. I can also define what log level I want my gradle script to run at.

@Log
class Gradle implements BalthazarTask {
    @Override
    def int execute(Map<String, Object> context){
        def affiliation = context["cerebro-affiliation"]
        def hero = context["cerebro-hero"]
        def reincarnation = context["cerebro-reincarnation"]
        def command = context["this.task"]["command"] == null ? "gradle build": context["this.task"]["command"]
        def loglevel = context["this.task"]["loglevel"] == null ? "" : "--${context["this.task"]["loglevel"]}"
        
        def gradleCommand = """${command} ${loglevel} -Dcerebro-affiliation=${affiliation} -Dcerebro-hero=${hero} -Dcerebro-reincarnation=${reincarnation}"""

        def proc = gradleCommand.execute()
        //stream the build output so the process does not block on a full output buffer
        proc.consumeProcessOutput(System.out, System.err)
        proc.waitFor()
        return proc.exitValue()
    }
}

This is how hard it is to build tasks (jobs) when it's done with code instead of configuration in a CI tool. Sure, some tasks, like building Amazon AMIs, take a bit more code. (j/k they don't). But OK, a launch task that implements a rolling deploy on Amazon using an A/B release pattern does, but I will come back to that specific case.

Configure my repository

So I have a build pipe executor, pre built build pipes and tasks that execute in them. Now I need to configure my repository.

In my experience 90% of your teams will be able to use prefab pipes without investing too much effort into building tons of prefabs. A few CI pipes, a few simple CD pipes and a few parallelized pipes should cover a lot of demand, if you are good enough at putting an interface between the deploy tasks and the deploy tools as well as between the test tasks and the test tools.

So in my repo I have a .balthazar.yml which contains.

balthazar-pipe: Strawberry

Distributing the pipeline engine and the task library

The first thing we need is a balthazar client that starts the engine using the configuration provided inside my repository. A simple Groovy script does the trick.

import groovy.util.logging.Log
import org.yaml.snakeyaml.Yaml

@Log
class BalthazarRunner {
    def int start(Map<String, Object> context){
        Yaml yaml = new Yaml()
        if (!context){
            def projectfile = new File(".balthazar.yml")            
            if (projectfile.exists()){
                context = yaml.load projectfile.text
            } else {
                throw new Exception("No .balthazar.yml in project")
            }
            
        }
        def name = context.get "balthazar-pipe" 
        def definition = yaml.load this.getClass().getResource("/processes/${name}.yml").text 
        BalthazarEngine engine = new BalthazarEngine()
        
        context["balthazar-pipe"] =  definition
        context["run-to"] = System.properties["run-to"]
        context["run-task"] = System.properties["run-task"]
        return engine.start(context)
    }
}
def runner = new BalthazarRunner()
runner.start([:])

Now we need to distribute the client, our engine and our library of tasks to the node where we want to execute the pipeline, together with our code repository. This can be done in many ways.

We can package balthazar as an installable package and install it using yum or a similar tool. This works quite well on build servers, but it does limit us a bit on where we can run it, as we need "that installer" to be installed on the target environment. In many cases it really isn't a problem, because if you're a Debian shop then you have your deb everywhere and if you're a Redhat shop then you have your yum.

I personally opted for another way of distributing the client, partially because I'm lazy and partially because it works in a lot of environments. When I make my .balthazar.yml I also check out the balthazar client project as a git submodule.

So all my projects have a .balthazar.yml and a balthazar-client folder. In my client folder I have a balthazar.sh and a build.gradle file. I use Gradle to fetch the latest artefacts from my repo and then the shell script does the java -jar part. Not all that pretty but it works.

Summary

So now on all my repos I can just do...

>. balthazar-client/balthazar.sh

... and I run the pipe configured in .balthazar.yml on that repo. Since all my tasks integrate with my Build Data Repository I get ONE view of all my pipe executions regardless of where they were executed.

CD made fun! Cheers!

Friday, June 27, 2014

Continuous Deployment in the Cloud Part1: The Distributed Continuous X process

This is the first part of the Blog Series "Continuous Deployment in the Cloud".

When we started doing Continuous Delivery many of us started building the process around a CI Server, and many of us ran into problems building our pipelines with Jenkins or other CI Tools. There are several reasons for these problems; these two blog posts http://www.cloudsidekick.com/blog/stretch-armstrong.html and http://www.alwaysagileconsulting.com/pipeline-antipattern-deployment-build/ outline them really well.

A CI Server is a bad Continuous Delivery/Deployment Orchestrator

Personally I'd boil the core of the problem down to lack of portability and lack of separation of concerns.

If you model the process in a CI Tool then the process will never ever be portable. Even if you can distribute the process across multiple instances of the CI Tool through different means of generating and publishing the process, you always need an instance of that tool to run it. This makes development quite hard, since you need a local development instance of that tool.

In most CI Tools the data collected from each job is stored in the CI Tool itself. To make matters even worse, it's often stored in that particular instance of the CI Tool. This means that the only way to access the data gathered by the jobs of the pipe is by navigating that instance of the pipe on that instance of the CI Server. This makes it very hard to distribute the Continuous X process over multiple CI Servers; we can use master/slave setups, but the problem persists as soon as we have multiple masters.

Since the data is often tied to the implementation of the pipe it becomes very hard to visualise historical data. Even if the current layout of the pipe has changed, we still want to be able to visualise old pipe runs of our system.

Another problem that arises is historical data and data retention. Since the data and the visualisation are tied to the CI Tool, we need to manage the disk space on the CI Tools, and we don't want to mix runtime and historical data in the CI Tool.


Separating Process Implementation and Process Data Storage

So the first thing we have to deal with in order to distribute our Continuous X process is to move the data out of the process implementation.

In fact the first problem we encounter when distributing a Continuous X process is the Version Number.
What do we use as a version number and where do we get it?

  • Using the CI Server Build Number is extremely bad, as you can't even reset your Build Job without encountering problems. 
  • Using a property checked into your source code repository, such as the version number in the Maven pom or similar, is almost worse. There will be a gap in time between the repo fetch, the update of the version and the commit back to the repository. If other jobs start in that time frame then they will get the same version number.

So the answer is that we get it from a central build data repository: a single database that keeps track of all our deliverables, their versions and their state. By delegating the responsibility for the version number to the Build Data Repository we ensure that the Version Number is created through an atomic update.
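To make "atomic update" concrete, here is a minimal in-memory sketch of the version allocation contract. A real Build Data Repository would of course back this with a database, but the contract towards the pipe is the same: one call, one unique version, no read-modify-write gap.

import java.util.concurrent.ConcurrentHashMap
import java.util.concurrent.atomic.AtomicLong

//hypothetical sketch of the version allocation part of a Build Data Repository
class BuildDataRepository {
    private final ConcurrentHashMap<String, AtomicLong> counters = new ConcurrentHashMap<String, AtomicLong>()

    //atomically allocates the next version for a deliverable, so two pipes
    //starting at the same time can never get the same number
    long nextVersion(String deliverable) {
        counters.computeIfAbsent(deliverable) { new AtomicLong(0) }.incrementAndGet()
    }
}

def repo = new BuildDataRepository()
assert repo.nextVersion("route-service") == 1
assert repo.nextVersion("route-service") == 2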

The first thing our Process Implementation will do is get its version number from the Build Data Repository. Then everything we do during the process we report to the Build Data Repository.

So for example we report

  1. Init Pipe - Get Version Number
  2. Build Start - Environment, TimeStamp
  3. Unit Test Start - Environment, TimeStamp
  4. Unit Test Done - Environment, TimeStamp, Report
  5. Build Done - Environment, TimeStamp, Report
  6. Deploy to Test Start - Environment, TimeStamp
  7. Deploy to Test Done - Environment, TimeStamp, Report
  8. Test Start - Environment, TimeStamp
  9. Test Done - Environment, TimeStamp, Report
  10. Promote - Environment, TimeStamp, Promoted to PASSED_TEST
  11. Deployment Production Start - Environment, TimeStamp
  12. Deployment Production Done - Environment, TimeStamp, Report
Now we have a Continuous X process implementation that gets the version number from the same place regardless of where it's executed, and reports all the data into one repository regardless of where it's executed. This means that the same process implementation can be executed on any CI Server instance as well as on any Developer machine. This enables us to implement a portable and hence distributed Continuous X process.
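As a sketch, the reporting side of the process implementation can be a very thin client; the event names mirror the list above, while the endpoint and payload format here are made up for the example.

import groovy.json.JsonOutput

//hypothetical thin client that reports pipe events to the Build Data Repository
class BuildEventReporter {
    String repositoryUrl        //e.g. http://build-data-repo/events (made up endpoint)

    void report(String deliverable, long version, String event, Map details = [:]) {
        def payload = [deliverable : deliverable,
                       version     : version,
                       event       : event,
                       environment : InetAddress.localHost.hostName,
                       timestamp   : new Date().time] + details
        def connection = new URL(repositoryUrl).openConnection()
        connection.requestMethod = "POST"
        connection.doOutput = true
        connection.setRequestProperty("Content-Type", "application/json")
        connection.outputStream.withWriter { it << JsonOutput.toJson(payload) }
        assert connection.responseCode == 200
    }
}

new BuildEventReporter(repositoryUrl: "http://build-data-repo/events")
        .report("route-service", 42, "Unit Test Done", [report: "all green"])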

There are several reasons why we might want to run the process locally in our Dev Environment. We need the capability to develop the process implementation, and we need to be able to do it without a CI Server. It also gives us a good mindset when making a distributed implementation: if it can run locally then it can run on any build environment. In some cases we may not even need a full fledged build environment to run our Continuous X Process. Applications with few developers, few changes and short pipeline execution times don't really need a remote build environment.

Another important reason is that we need to be able to push our code to our customers even if our Continuous X Service is down. Regardless of how high the SLA of the service (internal or cloud based) is, we will eventually run into situations where we need to execute it and it's down.

Though it's important to note that a pipeline should never allow execution on changes that aren't pushed to the SCM. Every execution has to be traceable to an SCM revision.



Process Visualization

One of the main problems with CI Tools such as Jenkins is the visualization. It's just not built for the purpose of Continuous Delivery/Deploy and, as I've mentioned above, it's based on local instances of job pipes. This makes it impossible to visualize a distributed process in a good way and it requires the user to have an understanding of the CI Tool. A Continuous Delivery/Deploy process needs to be visualised so that non-technical employees can view it.

Good thing we have a Build Data Repository then. Based on the information in the Build Data Repository we can visualize our pipe, its status and its result no matter where it was executed. We can also visualise the pipe based on how it looked when it was executed, not how it's implemented now.


Logging, Monitoring and Metrics vs the Build Data Repository

What do we store in the Build Data Repository? Do we store everything in there? Like execution logs from the test executions, and metrics?

No. The Build Data Repository stores the reports from all events (builds, deployments, tests, etc.) but the actual runtime logging should go where it belongs: log files go into a log repository such as Logstash, metrics go into a metrics repository such as Graphite.

The Process Visualization could/should aggregate the data from the Build Data, Log and Metric repositories to provide a good view of our process and system.


Provisioning Environments and Executing Tests

It's really important that our Continuous X Process Implementation initiates the full provisioning of the full test/prod environment. By making sure that it does so, we ensure that all environmental changes go through our pipe.

What do I define as Environment? Everything: load balancers, network rules, middleware nodes, caches, databases, scaling rules, etc.

This concept is a great extension of the Immutable Server pattern. By releasing images instead of artefacts out of our build step (actually build+bake) we enable the creation of Immutable Servers through the entire build pipe.

The process builds a new test environment for each black box deployed test execution (basically anything beyond unit tests) and then destroys it again once the test is finished. When we deploy into our next environment, call it QA/PreProd or whatever, we do it by creating new middleware servers in that environment based on the same image and virtualisation spec as we had in our Test Environment. Once they are up and running we rotate them into our cluster/load balancer.

The big difference between our Test, QA and Prod environment deploys is that the first one is a create-and-destroy scenario while the others are update scenarios, as we cannot build new full environments for production. We cannot build new databases, bring up new load balancers, etc. for each production deploy. So preferably we separate the handling of mutable and immutable infrastructure in an environment, so that we can create and destroy our Immutable Servers/Clusters and mutate our databases.
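A launch task along those lines boils down to something like the sketch below. The CloudClient is a hypothetical wrapper around the cloud provider's API, not real code, but it shows the order of operations in the rotation of immutable servers:

//hypothetical wrapper around the cloud provider API, only to show the flow
interface CloudClient {
    List<String> launchServers(String environment, String imageId, int count)
    boolean serversHealthy(List<String> serverIds)
    void rotateIntoLoadBalancer(String environment, List<String> serverIds)
    void rotateOutOfLoadBalancer(String environment, List<String> serverIds)
    void terminate(List<String> serverIds)
}

//A/B style rotation: bring up the new generation from the baked image,
//verify it, rotate it in and then retire the old generation
boolean deploy(CloudClient cloud, String environment, String imageId, List<String> oldServers, int count) {
    def newServers = cloud.launchServers(environment, imageId, count)
    if (!cloud.serversHealthy(newServers)) {
        cloud.terminate(newServers)        //roll back, the old generation still serves traffic
        return false
    }
    cloud.rotateIntoLoadBalancer(environment, newServers)
    cloud.rotateOutOfLoadBalancer(environment, oldServers)
    cloud.terminate(oldServers)
    return true
}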

So basically, at a very low level of detail, the topology of a Distributed Continuous X implementation looks something like this.


This gives us the fundamental understanding of what we need to do in order to build a scalable distributed system that handles our Continuous X processes in our company.

In my next post in this series I will go into an example of how to build a portable Continuous X Process Implementation that is independent of any CI and Build Tools.


Monday, June 2, 2014

Blog Series: Continuous Deployment in the Cloud

For my next few posts I am going to focus on writing a series of articles on how to do Continuous Delivery & Deployment in a cloud environment. I've always been a bit cautious when it comes to tutorial style blogs, talks and articles. I usually find them too shallow; they never reveal the true issues that need to be solved. This often leads to bad, premature and uninformed decisions made by the consumer of the tutorial.

So instead I am going to try to provide a much richer series of articles that focus on how to Architect, Test, Deploy and Deliver in a Cloud environment.

In my conference talks I often talk about the importance of having a build data repository that is separate from the build engine (CI Tool). More than once I have been asked if I can open source our tool. "Well, I'm not sure" has been my answer. So instead I've decided to implement a new, similar tool and open source it. I'm going to over-architect it a bit on purpose, as it's going to be the main example in the article series. The Process Implementation in this series is going to use that tool as its build repo.

This is how the Process Implementation will look. I will go deeper into this picture as I move forward, but a few quick words about it.

The key to a scalable CD process is for it to be independent of its runtime environment. The CD process drawn here can be executed from a Build Environment and/or a Dev Environment. The dev can push from his own environment right into production, or he/she can let the build environment do it for him/her. Regardless of where the process is initiated it will be executed in the same way and it will integrate with the build data repository, to which it reports any events that happen on that build and from which it also gets its version number.

Will I talk about tooling this time? Yes I will. This article series will be based on Git as SCM, AWS as Test and Prod Runtime Environments, Travis CI as Build Environment and Gradle as Build Tool. I still have not decided on a test tool; most likely it will be RESTAssured.

This series will take some time to write and will mostly be done during the later part of this summer and the fall. If you are interested in this article series and/or the build data repository then please +1 this article to show me that there is interest.

Thanks

Sunday, February 24, 2013

Architect to re-Architect

We spend so much time trying to make the right decisions. It's one of the downsides of working on a next generation platform. "You better get it right this time!". We have all been there, when a current generation solution just doesn't cut it anymore. Implementing that next requirement is going to be so expensive that we might just as well rewrite the whole thing. Thing is, they also tried to "get it right this time!".

Why does it "always" go wrong? Why do we always run into dead ends with systems? Sure, not always, but always when an application is exposed to a lot of changes and new requirements.

Select technology then abstract and isolate it in the architecture.

Historically we have put a lot of thought into the selection of technology when we build something new. It's important not to get it wrong, so we think a lot about getting it right. We also think a lot about patterns so that we can replace tech A with tech B if the decision has to be reversed. Who hasn't written hundreds of DAOs so that we can one day change our database? How often do we change database? Historically, well, I have never done it. Changing from Oracle to DB2 or whatever other SQL database has never been the reason for a major rewrite. In fact I've been part of more than one rewrite that has thrown out everything but the data layer.

In the future we will see more database changes due to NoSQL, but if and when we do that, do we really want to keep our DAO interfaces? If we do then we sure aren't going to accomplish much with our rewrite. If we change then we change because we need to solve a bottleneck problem. In order to solve it we need to make an optimization using a niche product. So we need to write and query our data differently.

The cause of a major rewrite is either lack of scalability or customer requirements that are too hard, too expensive or too high risk to implement. The latter almost always happens when everything has become so interconnected that the change can no longer be done in a safe and isolated way. We need to refactor so much in order to make the change possible that it's cheaper to rewrite.

A distributed system can still be a monolith.

In standard monolithic design we monolithize everything, not just the components of the system but also the data model and the business logic. By normalizing our data model and constantly striving towards decreasing code redundancy we entangle all the services of our application into a huge ball of concrete. It's when we end up with our services entangled in a solid ball of concrete that we need to blow it up, all of it, in order to rewrite it. It doesn't matter how well we modeled our database, how nice our DAOs are or how much inversion of control we use. If we don't treat our services independently we will run into trouble down the road.

Decoupling the monolith into subsystems doesn't necessarily help either. If we still normalize our data and strive towards reusing as much code as possible within the components, then all we have done is distribute the monolith. Chances are quite high that you will need to rewrite multiple components when the requirement change appears.

Let's take an example.


We have a training application aimed at running and cycling. We have users, training sessions and races. Training sessions and races are really the same thing; they both contain a number of users, equipment, time, distance and a route. We provide views of user training sessions, user races and race results by race. We sell the application to race organizers and it's free for users. We have an agreement to keep the race results highly available and to keep all history from previous years.

So we have a simple data model: users and sessions with a many-to-many relationship and a type defining whether it's a race or a training session. Simple. Done. Delivered.
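In code terms the first model is roughly this (a sketch with made up names):

//the naive, fully normalized model: one Session type shared by races and training
enum SessionType { RACE, TRAINING }

class User {
    String name
    List<Session> sessions = []          //many-to-many: a session also has many users
}

class Session {
    SessionType type
    Date date
    BigDecimal distanceKm
    List<User> participants = []
    List<String> route = []              //simplified: a list of GPS points
}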


Now the application becomes really popular as a training application among users, so we start gaining a lot of data. This data is mostly written, since no one other than the user really cares about it. Though it does impact our race data, since people tend to look at that more.

Someone realizes that all the training data is interesting since we also added a heart rate integration. So we build queries on the training data to provide to medical studies. Sweet extra income that the sales dudes came up with. It's no real issue performance-wise, as we run the queries once a year and that's done over Christmas.

Now someone sells our services of race data, training and fitness trending to UCI (the cycling union) as a tool for their fight against doping. We just need to add a query to correlate our sweet training reports with race results; how hard can that be? We develop for a sprint or two and go live. So now we get a serious tonnage of data and we run our queries more often. *gag* It doesn't work: we can't scale and we can't add a new query without totally killing our SLAs with the other race organizers. We need to rewrite.

Components are not the silver bullet.

Components don't really help us
Having our system distributed into a user repository, session storage and an integration component providing REST services to our GUI component wouldn't help us all that much. Sure, we have separated users and their equipment from the sessions, but it's the queries on the sessions that are the problem, and they are killing our SLAs with the other race organizers.




Design by Services 

So what we really need is to move the race result service into a service of its own. We need to isolate it, even though all the data is identical to the race data held by the user. Then we need to separate the integration code for the race organizer service into a service of its own so that we can deploy it separately.

Services do help us
Doing this when hitting the wall is hard, costly and risky. Just the database split is a nightmare if the data has grown big.

If we had done this from the get-go we could just have re-architected the user race and training session service. We could have moved that from our MySQL to a big table database or whatever without affecting our race-by-organizer service. But doing this upfront feels so awkward: we would have had duplicate tables and redundant code.

Define and isolate services in the architecture.

If we focus on isolating services across our components instead of isolating technology, then we can actually re-architect our bottlenecks. In fact, in our example we could just have added a UCI service that duplicates the other services, and if it ran into performance issues we could have re-architected just that. But that would have forced us to duplicate more upfront and to increase our initial development costs.
Services can be extremely similar and yet be different services

It's hard to "get it right" when the right thing can go against everything you have been taught for years. What we must learn to understand better is how we define and isolate services, so that we can re-architect our bottlenecks for the services that experience them and not the entire system.





Tuesday, February 5, 2013

My dear Java what have you done?!?!

What is this diagram? Well, it's a generic lightweight storage component. In previous posts on this blog I've talked about how we have refactored our main components into smaller specialized components and how fruitful that has been for us. It really has been, and it's been one of the best architectural steps we have taken; the components are super clean and very well written. They are very well written based on our requirements and on how we as a community expect Java applications to be built. But they have really made me think: are we actually putting the right requirements on them?

Let's take an example: if our system was a fitness application then we could have four of these components, one for user profiles, one for assets (bikes, shoes, skis, etc.), one for track routes (run, bike, ski routes) and one for training schedules. Small simple components that store and maintain well defined data sets. Then we have a service component that aggregates these data sets into a consumer service.

So say that our route track component has the responsibility of storing a set of GPS track points and classifying them as run, bike or ski. How hard can it be? Well look at the diagram.

First of all, what is the value of the component? It's providing a set of track points. There could be some rules on what data is required and how missing or irregular data is handled, since GPS can be out of reach. But it's still very simple: receive, validate, store and retrieve. The value is the data. The request explicitly asks for data related to a user and an activity type. So why does the request go through layer upon layer of frameworks and other crap in order to even get to the data? Shouldn't we strive to keep the data as close as possible? Shouldn't we always strive to keep the value as close to the implementation as possible?

I've always been a huge fan of persistence frameworks. I worked back in the good old dot com era, when everyone and his granny wanted to get into IT and become programmers. I've seen these developers trying to work directly with JDBC and SQL. The mess they created, and the mess much better developers created even after the dot com era, has made me believe that abstracting away the database through persistence frameworks is a must. The gain on the 90% of vanilla code written by the average Joe heavily outweighs the corner cases where you need a seasoned developer to write the native queries and mappings.

Though I'm starting to believe that JPA and Hibernate have to be the worst thing that has happened to Java since EJB2. Throwing a blanket over a problem doesn't make the problem go away. The false notion of not having to care about the database can have catastrophic consequences. Good developers understand this and have learned to understand the mapping between the JPA model and the DB model. They have learned how the framework works and they have learned how HQL maps to SQL. By trying to mitigate the bad code written by the average Joes we have created even more complexity, and developers now need to master not just databases but also a new framework on top of them.

So, for us to get our data from our database to our REST interface, we create objects that map to the model of another system. Then we write queries in a query language that is super hard to test and experiment with and that still doesn't give us feedback at compile time. These queries and mapped objects get translated into the query language of a remote system. Note that we do the transformation in the Java application into the language of the other system, which means that our system needs a full notion and understanding of the other system. Then we take this translated query and feed it through a straw to that other system. Down in that system the query is executed in the query engine, which we also have to understand in our system since what we generate maps directly onto it. Then this query engine executes on the logical data model, which happens to be the core value of our application. The only thing that does make sense is the mapping between logical and physical storage of the data, since we don't have to care about this and it has actually been abstracted away pretty well.

The database engine then feeds the data back to us through our straw, where Hibernate maps it back to our Java objects. Then of course we have been good students and listened to the preaching about patterns, so we transform our entity objects into DTOs that we then feed through multiple layers of frameworks so that they can be written to the HTTP response.

To quote a colleague: "I'm so not impressed by the data industry". I totally agree. We have really made a huge mess of things. Not in our delivery; for being what it is, a delivery on the Java platform with an RDBMS, it's a very well written and solid application.

Am I saying that we should just throw out all the frameworks and go back to using JDBC and Servlets straight off? Well no, obviously (even though I say it when I'm frustrated). The problem needs to be solved at the root cause. JDBC and SQL were equally bad, because they were still pushing data through a straw. Queries written in one system and pushed through a straw into another system where they are executed is never ever going to be a good model. Then the fact that the data structure of the database system doesn't match the object structure of the requesting application is another huge issue.

I really think that the query engine and the logical data storage model need to become part of the application, not just mapping frameworks. Some of the NoSQL databases do this, but most of them still work too much like JDBC, where you create your connection to the database engine and then send it a query. Instead I'd like to just query my data. I want data, not a connection. The connection has to be there, but it should be between the query engine and the persistent store, not between me and the query engine.


Query q = Query.createQuery(TrackPoint.class);
List<TrackPoint> trackpoints = q.where("userId").equals("triha")
                                .and().where("date")
                                .between("2012-01-01", "2013-01-30");



This should be it. My objects should be queried and logically stored as close to me as possible. This way we would mitigate the problems of JDBC and SQL by removing them, not covering them up with yet another framework. This would give us a development platform easy enough for average Joes and yet not dumbed down to a level that restricts our bright minds.

We could use more of our time actually writing code that adds value rather than managing frameworks that are just a necessity and not a value. Our simple components would look something like the diagram to the right. Actually pretty simple.

How hard can it be? Obviously quite hard. The most amazing thing is that we get all this complexity to work. No wonder it's super expensive to build systems.