Saturday, December 8, 2012

The impact of Continuous Delivery on the role of the Tester

Continuous Delivery really does change the way we work. It's not just tools and processes. We know that because every talk and every blog says it does, but when you actually see it happen it's quite interesting.

The change to the level of responsibility required of each developer has been one of the hardest things to manage. Don't check shit in or you will break the pipe! Don't just commit and run, it's your responsibility that the pipe is green! All that is frustrating and has been the single most time-consuming part of our journey, but it's worth a post of its own.

What I'd like to focus on is the changes to the role of the tester. Previously we have worked mostly with manual GUI-driven testing. Our testers have tons of domain knowledge and know our systems inside out. We have worked with test automation in some projects and I have some test automation experience from the past. I've been trying to champion test automation for years, but we have had a bit of a hard time getting it off the ground. When we have, it hasn't been test automation but rather automated tests that require some kind of tester supervision.

In our current project, as I've written about in previous posts, we decided to go all in. When we started we were a team of just architects. I was leading the work on the test automation. We got really good results working a lot with test architecture and testability architecture in our application. But after some time we started to suffer from lack of tester input in our testing. We obviously ended up with a suite heavily oriented towards happy-case scenarios.

So we started to bring testers onto our project, but what profiles should we look for? At first we just looked for our notion of what a tester was and had always been in our projects. We happily took on testers with experience of testing and test-managing portals, order systems, corporate websites, etc. But our delivery at this time didn't have a GUI; all testing was done using Fitnesse. We didn't really get the interaction between dev and test we were hoping for.

Our testers were useful for partner integration testing, which was manual (using a REST client). But it wasn't really working well, because they didn't know the interfaces and the application, as they hadn't worked with the test automation.

We have been having this same experience for over a year. Our testers don't seem to be able to get involved in our test automation. We do have a few who are, and we are super lucky to have found them, because we haven't really had much of a clue when recruiting.

Our developers, who are very modern and in a lot of cases super interested in test automation, have constantly been bashing us about our choices of test tools. As I talked about in my previous post, they totally refused SoapUI and aren't all that fond of Fitnesse either, even though they prefer it a lot over SoapUI. But my take has always been "we need testers to feel comfortable with the tool, it's their domain, let them pick".

After we decided to move to REST-assured I started to realize the problem I'd failed to grasp for the six years or so I've been working with test automation. There are two kinds of testers: GUI testers and technical testers. A GUI tester will always struggle with test automation; he/she has little to no experience or education in coding. The technical tester has a background as a developer or started developing out of an interest in automation.

Still, even the technical testers we have who have test automation experience have had a transition period coming into the continuous world. Testers do tend to accept manual steps that aren't acceptable in a continuous world. It's not OK to just quickly verify a bug manually because it's so simple and changing the test case that had the gap is a lot of work. It's not OK to just add that user into the DB manually. It's not OK to verify against the DB.

In the past our demand for GUI-clicking testers has been high and our demand for technical testers has been low. At least in our old non-CD world the TestPyramid, http://martinfowler.com/bliki/TestPyramid.html, was upside down.

The Continuous Delivery expansion will drive a shift in what we look for in testers. Our tester demographics will move towards matching the pyramid. We will still need the GUI testers, but their work will move more towards requirements gathering and acceptance testing, while I think we will see a new group of testers, with a much stronger developer background, come in and work with the automation.

This new group of technical testers, or developers with a really strong understanding of testing, is still hard to find, but I hope we will see more of them.

Picking the tool for Component Testing

As I discussed in the previous post, we realized we need to test our components, not just unit test code and do functional testing. The biggest reason was that we were cluttering our functional tests with validations that didn't belong at that level. Cluttering tests with logic that doesn't belong at that level was increasing our test maintenance costs, something we knew from the past we had to avoid.

Since our application is based on REST services exposed by components with very well defined responsibilities, it's very well suited for component testing.

The responsibility of our component tests was clearly defined: a component test is a black box test with the responsibility of validating the functionality and interfaces of the component. This definition was somewhat of a battle, as our testers (in our project and others in our company) somehow seem to insist that they want to validate stuff in the database. I don't like being unreasonable, but this was one thing I felt I had to be unreasonable about: black box means get the f*** out of our database. I still feel a bit bad about overriding our test managers on this, but I still feel strongly that it had to be done. So all tests go through public or administrative interfaces on our components. The reason is obviously that I wanted our tests to support refactoring and not break when we change code.

The upside of forbidding database access was that we had to build a few administrative interfaces that we can eventually reuse for building admin tools.

So now, what tools should we use? We were using Fitnesse for our functional testing, but Fitnesse's biggest weakness is that it doesn't separate test flow from test data. With functional testing this isn't really a big issue, as each test is basically a flow of its own. But with component testing, which by nature is more detailed, we saw that we would get many more test cases per flow. Another weakness is that Fitnesse doesn't go all that well together with large XML/JSON document validation.

We do our GUI testing with a Selenium/Fitnesse combo, and that we would continue doing. But for our REST service testing our first choice was SoapUI. We prototyped a bit and decided we could accomplish what we wanted with it, so we started building our tests. This was the single biggest mistake we made with our continuous delivery process.

Back in the days when we just did functional tests for our deliveries in Fitnesse we had a nice process of test driven development. Our developers actively worked on developing test cases and fixtures, and the tests went green as the code hit done. I really like it when it's functional-test-driven development and think this is the best form of TDD. This went totally down the drain when we started using SoapUI for our component tests.

Our developers refused to touch SoapUI and started handing functionality over to testers for testing after they had coded it. This resulted in total chaos, as we got a lot of untested check-ins. Backloaded testing works extremely badly with a continuous delivery process, especially since we didn't use feature flags.

This put us in an interesting dilemma. Do we choose a test tool that testers feel comfortable working with, or do we choose a tool that developers like? I personally am very unreligious when it comes to tools; if it does the job then I don't care so much. But I've always had the opinion that test tools are for testers and it's up to them, devs need to suck it up and contribute. Testers always seem to like clicky tools, so I wasn't surprised that they wanted to use SoapUI.

We were sitting with a broken test process and our devs and testers no longer working together. Fortunately our testers came to the same conclusion and realized this tool was a dead end for us. The biggest killer was how badly the tool was suited for teamwork and versioning, even in the Enterprise edition. Each and every check-in caused problems, and you basically need to check in all your suites for each reference you change. Horrible.

So after wasting nearly 3-4 months, growing our test department and killing our dev process on SoapUI, we decided to switch to REST-assured. For some of our components this is a definite improvement, and it's definitely an improvement process-wise, as our developers are happy to get involved. But I do still see some possible issues on our horizon with this choice. The biggest change, though, is for our testers and how we as an organization view the tester role, and that will be the topic of my next post.
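
To give a feel for the style, here is a minimal sketch of what such a component test can look like in REST-assured. Everything here is illustrative: the endpoint names, the JSON body and the /admin resource are made up, and the exact API differs a bit between REST-assured versions. The point is that the state check goes through the administrative interface, never the database:

import static com.jayway.restassured.RestAssured.expect;
import static com.jayway.restassured.RestAssured.given;
import static org.hamcrest.Matchers.equalTo;

import org.junit.Test;

public class UserComponentIT {

    @Test
    public void registeredUserIsVisibleThroughAdminInterface() {
        // Exercise the public interface of the component (black box).
        // REST-assured defaults to http://localhost:8080; we assume the
        // component under test is deployed there.
        given().
            contentType("application/json").
            body("{\"firstName\":\"Joe\",\"lastName\":\"Doe\",\"email\":\"jd@test.mail.xx\"}").
        expect().
            statusCode(201).
        when().
            post("/users");

        // Verify the resulting state through the administrative interface,
        // not by poking around in the database.
        expect().
            statusCode(200).
            body("email", equalTo("jd@test.mail.xx")).
        when().
            get("/admin/users/jd@test.mail.xx");
    }
}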

One very nice thing though: our continuous delivery process is Maven based, so the change of tooling didn't affect it. Each test suite is triggered with mvn verify, and as long as the tool has a Maven plugin the pipe doesn't really care what tool each team uses for their components.
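
For reference, this is roughly what that wiring looks like in a component's pom. A trimmed-down sketch using the standard maven-failsafe-plugin; our real configuration had more to it:

<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-failsafe-plugin</artifactId>
  <executions>
    <execution>
      <goals>
        <!-- runs the integration/component tests -->
        <goal>integration-test</goal>
        <!-- fails the build if any of them failed -->
        <goal>verify</goal>
      </goals>
    </execution>
  </executions>
</plugin>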

Tuesday, December 4, 2012

Automated System Testing

As I described earlier, we decided that we wanted to prioritize automation of our functional system testing. The trickiest thing for us was finding the right abstraction level for these tests. In previous projects where we have been working with Fitnesse as a test tool we have failed to find a good abstraction level.

We have always ended up with tests that are super technical in nature and cluttered with details that are very implementation specific. The problem this has given us is that very few people understand the test cases. Testers especially tend to have problems understanding them, as they are so technical. If our testers can't work with our tests then we do have some sort of problem (quite a few, and I will get to them later).

This time around we set out to increase the level of abstraction in the tests, to make them more aimed towards testing requirements and less towards testing technology. We were hoping to achieve two things: less maintenance and more readability. The maintenance budget for our automated tests had been high in the past. The technical tests exposed too much implementation, and hence most refactoring resulted in test rewrites. This was shit, because our tests didn't help secure our refactoring. This was something we had to address as well.

The level we went for was something like this (in pseudo-Fitnesse):

|Register user with first name|Joe|last name|Doe|and email|jd@test.mail.xx|
|verify registration email|


We abstracted out the interfaces from the test cases and just loosely verified that things went right.

This registration test could be testing registration through the web portal or directly on the REST service. This did lower the maintenance drastically. It gave us room to refactor our application without affecting the test cases. Though we were still having trouble getting our testers to write test cases that worked and made sense. The example above is obviously simplified and out of context; our test cases were quite long and complex. The biggest problem was knowing which fixture method to use and how to design new ones with the right abstraction level.
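
To make the channel abstraction concrete, here is a sketch (not our real code) of the kind of fixture that can sit behind the pseudo test above. RegistrationChannel, PortalChannel, RestChannel and TestMailServer are made-up names; the point is that the test case never knows which channel performs the registration:

public class RegistrationFixture {

    /** Abstraction over how a user gets registered. */
    public interface RegistrationChannel {
        void register(String firstName, String lastName, String email);
    }

    private final RegistrationChannel channel;
    private String lastEmail;

    public RegistrationFixture() {
        // The channel is picked by configuration, not by the test case, so
        // the same case can run through the web portal or the REST service.
        channel = "portal".equals(System.getProperty("test.channel"))
                ? new PortalChannel()  // hypothetical Selenium-backed driver
                : new RestChannel();   // hypothetical REST client driver
    }

    // Maps to: |Register user with first name|Joe|last name|Doe|and email|...|
    public void registerUserWithFirstNameLastNameAndEmail(
            String firstName, String lastName, String email) {
        lastEmail = email;
        channel.register(firstName, lastName, email);
    }

    // Maps to: |verify registration email|
    public boolean verifyRegistrationEmail() {
        // Loose verification: a registration mail exists for the user,
        // regardless of which channel triggered it.
        return TestMailServer.inboxFor(lastEmail).hasRegistrationMail();
    }
}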

This exposed the need for something we really hadn't thought much about before: test architecture. We were really (and still are) lacking a test architect. Defining abstraction layers, defining test components, reusability of test components, handling of test data, etc. All of these go dramatically wrong when done by testers alone, and aren't good enough when done by developers alone either.

Another problem we ran into with this abstraction level was that our testers didn't know our interfaces. The test doesn't really care about the interface, so the tester doesn't need to take responsibility for the REST interface. Imho the responsibility for securing the quality of a public REST interface should lie with the testers. But here we gave them tools that only secure the registration functionality. Yes, they are verifying the requirement to be able to register, but the verification of the REST interface is implicit and left to the fixture implementation. This is not good.

It also generated a huge problem for us. We had integration tests with our partners, who consume our REST interfaces, and our testers who were in those sessions didn't know our interfaces. When should they have learned them? This was the first time they were exposed to them. They were also required to test using tooling that they didn't otherwise use: REST clients of whatever flavor.

We had solved one thing but created another problem.

Had we abstracted too much? Was this the wrong level to test on?

Before we analyzed this, we started to compensate for these issues by just jamming in more validations:

!define METHOD {POST}
|Register user by sending a |REST_REQUEST|${METHOD}| with first name|Joe|last name|Doe|and email|jd@test.mail.xx|
|RESPONSE_CODE|200|
|verify registration email|

Still a fictive example, but I'm trying to illustrate how it went downhill. We basically broke our intended abstraction layer and made our tests into shit. But it got worse. Our original intent was that registration is registration regardless of channel, web portal or REST. But then we had to have different templates for different users. Say it was based on gender. So we took our web portal test case and made that register a female user and verify the template by string matching, and then we used the REST interface for the guys.

Awesome, now we made sure that we got an HTTP 200 on our REST response, and we made sure that we used pink templates for the guys and green ones for the gals. Awesome, and we made sure to test our web interface and our REST interface. Sweeet!!! We had covered everything!

Well maybe not.

This is when we started to think again, and our conclusion was that the abstraction layer of the initial tests was actually quite OK:

|Register user with first name|Joe|last name|Doe|and email|jd@test.mail.xx|
|verify registration email|

This tests a requirement. It's a functional test. It does have value. Leave it at that! But it can't be our only test case handling registration.

Coming back to this picture:

Our most prioritized original need was to have regression tests of our functional system, end to end of our delivery. We had that. We also had unit tests on the important mechanisms of the system; not on everything, and yes, too little, but on the key parts. Still, we were lacking something, and that something was component/subsystem end to end tests. This was something we deep down always knew we would have to build, but we had ignored it for quite some time due to "other priorities". I still think we made the right call back then and there when we prioritized, but this has been the most costly call of our entire journey.

So we started two tasks: one retrofitting component tests, the other cleaning up our system tests. More on how that went, and how we decided to abstract these, in the next post.


Friday, November 23, 2012

What is Continuous Regression Testing?

In my previous entry I said that our business case was Continuous Regression Testing. I will make a few entries on the subject.

Our definition was quite clear on what level of testing we wanted to achieve.

We define it at the System Test level, meaning regression testing of our end to end functionality as specified by our Use Cases. The important parts here are end to end and driven by Use Cases.

As an example, say that our system has a "Register User" Use Case which states that the user should be able to register through a portal and receive a confirmation email. In this case our end to end delivery is from our web front to our mail server. We would create an automated Test Case which registers the user through the portal and then evaluates that a mail was sent to the email address of the registered user.
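
A sketch of what such a test can look like, just to make it concrete. Everything here is illustrative: the URL and form field ids are made up, and it assumes the application under test is configured to send its mail to a local GreenMail test mail server:

import static org.junit.Assert.assertTrue;

import com.icegreen.greenmail.util.GreenMail;
import com.icegreen.greenmail.util.ServerSetupTest;
import org.junit.After;
import org.junit.Before;
import org.junit.Test;
import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.firefox.FirefoxDriver;

public class RegisterUserEndToEndTest {

    private GreenMail mailServer;
    private WebDriver browser;

    @Before
    public void setUp() {
        // Assumption: the application sends mail via this local SMTP server
        // (GreenMail's test setup listens on localhost:3025).
        mailServer = new GreenMail(ServerSetupTest.SMTP);
        mailServer.start();
        browser = new FirefoxDriver();
    }

    @After
    public void tearDown() {
        browser.quit();
        mailServer.stop();
    }

    @Test
    public void registrationThroughPortalSendsConfirmationMail() throws Exception {
        // Register through the web front, end to end. URL and ids are made up.
        browser.get("http://testserver/portal/register");
        browser.findElement(By.id("firstName")).sendKeys("Joe");
        browser.findElement(By.id("lastName")).sendKeys("Doe");
        browser.findElement(By.id("email")).sendKeys("jd@test.mail.xx");
        browser.findElement(By.id("register")).click();

        // Evaluate the other end of the delivery: a confirmation mail arrived
        // for the registered user.
        assertTrue("no mail arrived", mailServer.waitForIncomingEmail(5000, 1));
        String recipient = mailServer.getReceivedMessages()[0]
                .getAllRecipients()[0].toString();
        assertTrue(recipient.contains("jd@test.mail.xx"));
    }
}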

Me and others in our company (though not on the team at the time) had quite a bit of prior experience working with automated System Tests. Yet we ran into quite a few mines (AGAIN!!) when building our test suites. I will cover these later.

So we had a clear understanding of how we wanted to test, and we knew we wanted to test often. For us, testing on each code commit and relying so heavily on test automation was way out of our comfort zone. But our business case was strong and we knew why we wanted to go down that road.

Early on we made a few key decisions. One was to test everything on every commit. The second was to only build each artifact once. We discussed this and came to the conclusion that we would need to retest after any rebuild, so building once made our process simpler and more correct.

Our architecture was based on a few well defined components (publishing REST services) that together formed our deliverable.

This is a simplistic view of something similar. Some components were internal and some integrated externally.

We were aware that we most likely would need to test the components in a continuous way too, but we ignored it on purpose. Our focus had to be on our delivery, and we all know too well that we need to prioritize hard when working against deadlines. It's easy to get carried away on theoretical models and a desire to solve everything, but that's not the way we work. We always want something deliverable, and then we iterate on it.

So our initial pipe ended up looking something like this. Each of our components was built, unit tested and released on commit. Then our deployment assembly was updated, one component at a time. Each time the assembly was updated we deployed it and tested it on our test server.

It was a very simplistic start to our Continuous journey, but it was very adequate for us, and most importantly it solved our primary concern: Continuous Regression testing of the customer application.

It's important to remember that the application was new and our team was small at the time.

Thursday, November 22, 2012

Where to start with Continuous Delivery

A lot of people have already started the journey without really knowing it. Few companies today don't utilize some key parts of a Continuous process. When reading books and listening to speakers at conferences it's easy to feel totally stone age. But the key is to remember that the journey is baby steps towards a vision, and Continuous Delivery is just a progression of Continuous Integration; a lot of the basics are built upon in that progression.

By realizing we already have something, we can start to think about what we really need out of a Continuous process. In some sense we all want to release all the time, and we want our artifacts to be server images deployed on cloud nodes, monitored by super cool graphs of all the metrics we can possibly think of. That is all nice, but we do need to provide, hate the word, business value. So where is the business value of a continuous process for our organization? That is the question we should ponder before we start.

In fact we didn't have to ask ourselves the question in that way; instead we were handed a challenge. For us it was key to solve system regression testing. Test automation and lowering the cost of regression testing is obviously one of the key reasons to have Continuous Delivery. For us the key business case was to solve Continuous Regression Testing. Continuous whatever-else would come as a bonus.

Knowing that Continuous Regression Testing, and not Continuous Deployment, was our main priority made it easy to prioritize our efforts. They went largely into Testability Architecture in our application, Test Architecture in our test automation frameworks, and automating a pipe that would deploy and test each code commit.

So really, as with any agile development: find your business value, focus on delivering it with a minimal-effort "good enough" approach, and iterate from there.

Over the coming posts I will go through what we focused on in order to achieve Continuous System Regression Testing.

What drives my interest in Continuous Delivery?

I want to mitigate the risk of my presence in a project.

My weaknesses as an individual, both privately and professionally, are that I'm horrible at following instructions; I can't do it once, and I'm even worse at doing it multiple times. I feel physically sick if I need to do repetitive tasks; I get demotivated and hence do an even worse job. I also tend to forget to do things because I've got my mind full of other stuff. I'm also very bad at testing. I've worked with test automation for nearly six years now and I still suck at writing good test cases.

What I am good at is creative problem solving, finding solutions to problems and thinking outside the box. I'm also quite good at inspiring people.

This is a horrible combination for any project. This guy who can't follow instructions constantly comes up with new "bright ideas", can't even test them, and even worse, he gets people engaged and enthusiastic about his crazy ideas. Do we really want him on our team???

Yes I am a high risk and I need to be mitigated.

In all seriousness, I do love change; I thrive in an ever-changing environment. I love the positive energy of solving problems rather than dwelling on them. I love high pace and new challenges. I also know that in order to do what I love to do, the "maintenance" of our work needs to be minimized, as it's a huge time and cost sink for each project.

The "maintenance" part isnt just our production applications. Its our code, our tests, our mechanisms and of course our production deliveries. Every hour not spent maintaining these is spent on bringing in new business value into the organization and solving fun new challenges.

If I can contribute towards lowering the one-time and runtime costs of a delivery, then I'm satisfied, because I know my work has really mattered. I'm also satisfied on a personal level, because I know that I'm spending time solving new problems and not maintaining old ones.

I want to come to work and feel satisfied...