
Tuesday, April 22, 2014

New Talk: Continuous Testing

On April 29th I will be visiting HiQ here in Gothenburg. I will have the opportunity to talk about the super important subject of Continuous Delivery and Testing. This is an intro level talk that describes Continuous Delivery and how we need to change the way we work with Testing.

If anyone else is interested in this talk then please don't hesitate to contact me. The talk can also be focused on a practitioner audience, digging a bit deeper into the practices.

Here is a brief summary of the talk.

Why do we want to do Continuous Delivery
Introduction to Continuous Delivery and why we want to do it. What we need to do in order to do it, principles and practices and a look at the pipe. (Part of the intro level talk)

Test Automation for Continuous Delivery
Starting with a look at how we have done testing in the past and our efforts to automate it. Moving on to how we need to work with Application Architecture in order to Build Quality In, so that we can do fast, scalable and robust test automation.

Test Driven Development and Continuous Delivery
How we need to work in a Test Driven way in order to have our features verified as they are completed. A look at how Test Architecture helps us define who does what.

Exploratory Testing and Continuous Delivery
In our drive to automate everything it's easy to forget that we still NEED to do Exploratory Testing. We need to understand how to do Exploratory Testing without doing Manual Release Testing. The two are vastly different.

Some words on Tooling

A lot of the time, discussions start with tools. This is probably the best way to fail your test automation efforts.

Areas Not Covered
Before we are done we need to take a quick look at Test Environments, Testing Non Functional Requirements, A/B Testing, Power of Metrics. (This section expands in the practitioner level talk)

Sunday, February 24, 2013

So it took a year.

When we first started building our continuous delivery pipe I had no idea that the biggest challenges would be non technical. Well I did expect that we would run into a lot of dev vs ops related issues and that the rest would be just technical issues. I was so naive.

We seriously underestimated how continuous delivery changes the everyday work of each individual involved in the delivery of a software service. It affects everyone: Developers, Testers, PMs, CMs, DBAs and Operations professionals. Really, it shouldn't be a big shocker, since it changes the process of how we deliver software. So yes, everyone gets affected.

The transition for our developers took about a year. Just over a year ago we scaled up our development and added, give or take, 15-20 developers. All of these developers have been highly skilled and very responsible individuals. Though none of them had worked in a continuous delivery process before, and all were more or less new to our business domain.

When introducing them, everyone got the rundown of the continuous delivery process: how it works, why we have it and that they need to make sure to check in quality code. So off you go: write code, check in tested stuff, and if something still breaks you fix it. How hard can it be?

Much, much harder than we thought. As I said, all our developers are very responsible individuals. Still, it was a change for them. What was once considered responsible, like "if it compiles and passes the unit tests, check it in so that it doesn't get lost", now leads to broken builds. Doing this before leaving early on Friday becomes a huge issue because others have to fix the build pipe. The same goes for a lot of things, like having to ensure that database scripts work all the time, that everything related to the database is versioned, that rollbacks work, etc., etc. So everyone has had to step up their game a notch or two.

Continuous delivery really forces the developer to test much more before he/she checks in the code. Even for the developers that like to work test driven with their JUnit tests this is a step up. For many it's a change of behavior. Changing a behavior that has become second nature doesn't happen overnight.

We had a few highly responsible developers that took on this change seamlessly. These individuals had to carry a huge load during this first year. When responsibility was dropped by one individual, it was these who always ensured that the pipe was green. This has been the biggest source of frustration. I get angry, frustrated and mad when the lack of responsibility of one individual affects another. They get angry and frustrated as well, because they don't want to leave the pipe in a bad state, and their sense of responsibility prevents them from going home to their families. I'm so happy that we didn't lose any of these individuals during this period.

Now, after about a year, things have actually changed: everyone takes much more responsibility and fixing the build pipe is much more of a shared effort. Which is so nice. But why did it take such a long time? I'd really like to figure out if this transition could have been made smoother and faster.

Key reasons why it took so much time.

A change to behavior.
Developers need to test much more, not just now and then but all the time. No matter how much you talk about "test before check-in", "test", "test", "test", the day the feature pressure increases a developer will fall back on second nature behavior and check in what he/she believes is done. We can talk lean, kanban, queues, push and pull all we want, but the fact remains that there will always be situations of stress. It's not until a behavior change has become second nature that we do it under pressure.

Immature process.
Visibility, portability and scalability issues have made it hard to take responsibility. Knowing when, where and how to take responsibility is super important. Realizing that lack of responsibility is tied to these took us quite some time to figure out. If it's hard to debug a test case, it's going to take a lot of time to figure out why things are failing, and it's going to require more senior developers to figure it out. It's also hard to be proactive with testing if the portability between the development environment and the test environment is bad.

A lot of new things at once
When you tell a developer about a new system, a new domain and a new process, I'm quite sure the developer will always listen more to the system and domain specific talks.
The developer's head is full of "this system communicates with that system and it's that type of interface". Then I start going on about "Jira, bla bla bla, test bla, check-in bla bla, Jenkins bla, deploy, bla, Fitnesse, test bla, bla" and the developer goes "Yeah yeah yeah, I'll check in and it gets tested, I hear you, sweet!".

I definitely think it's much easier for a developer to make the transition if the process is more mature, has optimized feedback loops, scales and is portable. Honestly, I think that could easily take 3-6 months off the learning curve. But it's still going to take a lot of time, in the range of months, if we don't become better at understanding behavioral changes.

Today we go straight from intro session (slides or whiteboard) to live scenario in one step. Here is the info, now go and use it. At least we are now becoming better at mentoring, so there is help to get: you can be talked through the process, and a new developer is usually not working alone, which they were a year ago. Still, I don't think it's enough.

Continuous Delivery Training Dojos

I think we really need to start thinking about having training dojos where we learn the process from start to finish. I also think this is extremely important when transitioning to acceptance test driven development, but even just for getting a feeling for the process: what is tested where and how, what happens when I change this and that, how I should test things before committing and what should be done in which order.

I think if we practiced this, and worked on how to break and unbreak the process in a non-live scenario, the transition would go much faster. In fact I don't think these dojos should be just for training new team members; they would also be an extremely effective way of sharing information and the consequences of process change over time.


Thursday, February 7, 2013

The world upside down.

Sometimes the world just goes upside down. I talked in a previous post about how continuous delivery and test driven development change the role of the tester. I retract that post. Well, maybe not fully, but let me elaborate.

Our testers are asking HOW should we automate and our developers are asking WHAT are you trying to do.

I've thought that our problem has been that we haven't been able to find testers that know HOW to automate. That's not our problem. Our problem is that we are asking our testers to automate instead of asking them WHAT to test. If a tester figures out what to test, then any developer can solve how to automate it with ease.

I still believe that the role of the tester will change over time and that anyone who can answer the what and how question will be the most desired team member in the future. Until then we need to have testers thinking about WHAT and developers thinking about HOW.

Frustrating how such a simple fact can stare us in the eyes for such a long time without us noticing it.

Next time a tester asks how and a developer has to ask what, then it's time to stop the line and get everyone into the same room asap!

Sunday, February 3, 2013

Working the trunk

When my colleague Tomas brought up the idea of continuous delivery, the first thing that really caught my attention was "we do all work on the trunk". I've always hated branches. I've worked with many different branching strategies and honestly they have all felt wrong.

My main issue has always been that regardless of branch strategy (Release Branches or Feature Branches) there is a lot of double testing, and debugging after a merge is always horrible. It's also hard to have a clear view of a "working system": what is the system, which branch do you refer to? Always having a clean and tested version of the trunk felt very compelling. No double work and a clear notion of "the system"! I'm game!

So we test everything all the time. How hard can it be?

Well, it has proven to be a lot harder than we thought; not the continuous testing, but managing everyone's desire to branch. Somehow people just love branches. Developers want their feature branches where they can work in their sandbox. Managers want their branches so that they get nothing other than that explicit bug fix for their delivery and don't risk impact from anything else.

These are two different core problems: one is about taking responsibility and one is about trust.

Managers don't trust "Jenkins".

Managers don't trust developers, but somehow they do trust testers. It's interesting how much more credit a QA manager gets when he/she says "I've tested everything" than a blue light on Jenkins. In fact managers have MORE confidence in a manual regression test that has executed "most of the test cases on the current build" than in an automated process which executes "all the test cases on every build". I think the reasons are twofold: one is that the process is "something that the devs cooked up", and the other is that Jenkins can't look a manager in the eyes. It would be much easier if Jenkins was actually a person who had a formal responsibility in the organisation and could be blamed, shouted at and fired if things went wrong.

It takes a lot of hard work to sell "everything we have promised works as we have promised it". For each new build that we push into user acceptance testing we need to fight the desire to branch the previous release. Each time we have to go through the same discussion.

"I just want my bug fix"
"you get the newest version"
"I don't want the other changes"
"Everything we have promised works as we have promised it"
"How can you guarantee that"
"We run the tests on each check in"
"Doesn't matter I don't want the other changes they can break something, I want you to branch"

I don't know how many times we have had this argument. Interestingly, we have yet to break something in a production deploy as a result of releasing a bug fix from the trunk (and hence including half-done features). We have, however, had a failed deploy as a result of giving in to the urge to branch. We made a bad call and gave in to the branch pressure. We branched, but we didn't build a full pipe for the branch, which resulted in us not picking up an incompatible configuration change.

Developers love sandboxes

It's interesting: developers push for more releases and smaller work packages, yet they love their feature branches. I despise feature branches even more than release branches. The reason is that they make it very hard to refactor an application, and the merging process is very error prone. The design and implementation of a feature is based on a state of the "working system" which can be totally different from the system it's merged onto. It also breaks all the intentions to do smaller work packages and test them often; a merge is always bigger than a small commit.

The desire to feature branch comes from "all the repo updates we have to do all the time slow us down so much" and "we can't just check stuff in that doesn't work without breaking stuff". The latter isn't just developers wanting to be irresponsible; it's also a consequence of us running SVN and not GIT. Developers do want to share code in a simple way. Small packages that two team mates want to share without touching the trunk is a valid concern. So the ability to micro branch would be nice. So yes, I do recommend GIT if you can, but it's not a viable option for us. Though I'm quite sure that if we were using GIT we would end up having problems related to micro branches turning into stealth feature branches.

I think the complaint "all the repo updates we have to do all the time slow us down so much" is a much more interesting one. In general I think that developers need to adopt more continuous integration patterns in their daily work, but this is actually a scalability issue. If you have too many developers working in the same part of the repo you are gonna get problems. When developers do adopt good continuous integration patterns in their daily work and their productivity drops, then there is an issue. This is one of the reasons why we have seen feature branches in the past.

Distribute development across the repository

When we started building our delivery platform we based it on an industry standard pattern that clearly defines component responsibility. Early on in the life cycle we saw no issues with developers contesting the same repository space, as we had just one or two working on each component. But as we scaled we started to see more and more of this. We also saw that some of the components were too widely defined in their responsibility, which made them bloated and hard to understand. So we decided to refactor the main culprit component into several smaller, more well defined components.

The result of this was very good. The less bloated a component is, the easier it is to understand and to test, which leads to increased stability. By creating sub components we also spread out the developers across the repository. So we actually created stable contextual sandboxes that are easy to understand and manage.

Obviously we shouldn't just create components to spread out our developers, but I think that if developers start stepping on each other then it's a symptom of either bad architecture or overstaffing. If a component needs so many developers that they are in the way of each other, then the chance is quite good that the component does way too much or that management is trying to meet feature demand by just pouring in more developers.

Backwards compatibility

Another key to working on the trunk has been our interface versioning strategy. Since we mostly provide REST services, we were actually forced into branching once or twice where we had no other option, and that was due to not being backwards compatible on our interfaces. We couldn't take the trunk into production because the changes were not backwards compatible and our tests had been changed to map the new reality. This is what led to our new interface strategy where we, among other things, never ever change interfaces or payloads, just add new ones and deprecate old ones.

Everything that interfaces the outside world needs to be kept backwards compatible or program management and timing issues will force inevitable branching.
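
To make that concrete, here is a minimal sketch of what additive versioning could look like in a JAX-RS style resource. The paths, class name and hard-coded payloads are made up for the illustration; the point is that the old endpoint and payload are frozen, a new one is added next to them, and the old one is only deprecated.

import javax.ws.rs.GET;
import javax.ws.rs.Path;
import javax.ws.rs.PathParam;
import javax.ws.rs.Produces;

// Hypothetical resource: v1 is never changed, new fields only exist in v2,
// and v1 is deprecated instead of modified.
@Path("/users")
public class UserResource {

    @GET
    @Path("/v1/{id}")
    @Produces("application/json")
    @Deprecated // still served unchanged, just documented as deprecated
    public String getUserV1(@PathParam("id") String id) {
        return "{\"id\":\"" + id + "\",\"email\":\"joe@some.tst\"}";
    }

    @GET
    @Path("/v2/{id}")
    @Produces("application/json")
    public String getUserV2(@PathParam("id") String id) {
        return "{\"id\":\"" + id + "\",\"email\":\"joe@some.tst\",\"legacyUserId\":\"usr123\"}";
    }
}

The same idea goes for payloads: new fields only appear in new representations, never squeezed into the old ones.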

Not what I expected

When we first decided to work solely on the trunk I thought it was gonna be all about testing. Testing is important, but I think people management has been a bigger investment (at least measured in mental energy drain), and the importance of good architecture was underrated.

Wednesday, January 9, 2013

Test for runtime

Traditionally our testers have been responsible for functional testing, load testing and in some cases for some failover testing. This covers our functional requirements and some of our supplemental requirements as well. Though it doesn't cover the full set of supplemental requirements, and we haven't really taken many stabs at automating these in the past.

The fact that we haven't really tested all the supplemental requirements also leaves a big question: whose responsibility is the verification of supplemental requirements? Let's park that question for a little bit. To be truthful, we don't really design for runtime either. Our supplemental requirements almost always come as an afterthought, after the system is in production. They always tend to get lost in the race to get features ready.

In our current project we try to improve on this but we are still not doing it well enough. We added some of the logging related requirements early but we have no requirement spec and no verification of the requirements.

The logging we added was checkpoint logging and performance logging. Both of these are requirements from our operations department. The checkpoint log is a functional log which just contains key events in an integration. It's used by our first line support to do initial investigation. The performance log is for monitoring the performance of defined parts of the system. It's used by operations for monitoring the application.

Let's use user registration as an example (it's a fictive example).

1. User enters name, username, password and email into a web form.
2. System verifies the form.
3. System calls a legacy system to see if the email is registered in that system as well.
3a. If the user is registered in the legacy system with matching username and password, the user id is returned from that system.
4. System persists user.
5. Email is sent to user.
6. Confirmation view displayed.

From this we can derive some good checkpoints.

2013-01-07 21:30:07:974 | null | Verified user form name=Ted, username=JohnDoe, email=joe@some.tst
2013-01-07 21:30:08:234 | usr123 | User found in legacy system
2013-01-07 21:30:08:567 | usr123 | User persisted
2013-01-07 21:30:08:961 | usr123 | User notified at joe@some.tst

The performance log could look something like this.

2013-01-07 21:30:07:974 | usr123 | Legacy lookup completed | 250 | ms
2013-01-07 21:30:08:566 | usr123 | User persisted | 92 | ms
2013-01-07 21:30:08:961 | usr123 | User registration completed | 976 | ms
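
A minimal sketch of how lines like these could be produced, assuming a hypothetical CheckpointLog helper; a real implementation would of course sit on top of whatever logging framework is in use.

import java.text.SimpleDateFormat;
import java.util.Date;

// Hypothetical helper producing the pipe-delimited entries above:
// timestamp | correlation id | message [| duration | ms]
public final class CheckpointLog {

    private static String timestamp() {
        return new SimpleDateFormat("yyyy-MM-dd HH:mm:ss:SSS").format(new Date());
    }

    public static void checkpoint(String correlationId, String message) {
        write(timestamp() + " | " + correlationId + " | " + message);
    }

    public static void performance(String correlationId, String step, long millis) {
        write(timestamp() + " | " + correlationId + " | " + step + " | " + millis + " | ms");
    }

    private static void write(String line) {
        // In a real system this goes to dedicated appenders/files that
        // first line support and operations can read and import.
        System.out.println(line);
    }
}

A call like CheckpointLog.checkpoint("usr123", "User persisted") then produces exactly the kind of line first line support reads.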

This is all nice but who decides what checkpoints should be logged? Who verifies it?

Personally I would like to make the verification the responsibility of the testers. Though I've never been in a project where testers have owned the verification of any kind of logging. This logging is in fact not "just" logging but system output, hence should definitely be verified by the testers. By making this the responsibility of the tester it also trains the tester in how the system is monitored in production.

So how can this be tested?

Let's make a pseudo Fitnesse table to describe the test case.

| our functional fixture |
| go to | user registration form |
| enter | name | Ted | username | JohnDoe | email | joe@some.tst |
| verify | status | Registration completed |
| verify | email account | registration mail received |

This is how most functional tests would end. But let's expand the responsibility of the tester to also include the supplemental requirements.

| checkpoint fixture |
| verify | Verified user form name=Ted, username=JohnDoe, email=joe@some.tst |
| verify | User found in legacy system |
| verify | User persisted |
| verify | User notified at joe@some.tst |

So now we are verifying that our first line support can see a registration main flow in the tool that imports the checkpoint log. We have also taken responsibility for officially defining how a main flow is logged, and we are regression testing it as part of our continuous delivery process.
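
For illustration, a hedged sketch of the kind of Java fixture that could back the checkpoint table above, assuming the fixture somehow gets hold of the checkpoint log lines for the scenario; the class name and the way the lines are fetched are invented.

import java.util.List;

// Hypothetical fixture backing | verify | <message> | rows: returns true
// if a checkpoint with that message was logged for the scenario.
public class CheckpointFixture {

    private final List<String> logLines;

    public CheckpointFixture(List<String> checkpointLinesForScenario) {
        this.logLines = checkpointLinesForScenario;
    }

    public boolean verify(String expectedMessage) {
        for (String line : logLines) {
            // format: timestamp | correlation id | message
            String[] fields = line.split("\\|");
            if (fields.length >= 3 && fields[2].trim().equals(expectedMessage)) {
                return true;
            }
        }
        return false;
    }
}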

That leaves us with the performance log. How should we verify that? How long should it take to register a user? Well we should have an SLA on each use case. The SLA should define the performance under load and we should definitely not do load testing as part of our functional tests. But we could ensure that the function can be executed within the SLA. More importantly we ensure that we CAN monitor the SLA in production.

| performance fixture |
| verify | Legacy lookup completed | sub | 550 | ms |
| verify | User persisted | sub | 100 | ms |
| verify | User registration completed | sub | 1000 | ms |

Now we take responsibility for the system being monitorable in production. We also take responsibility for officially defining which measuring points we support, and since we do continuous regression testing we make sure we don't break the monitorability.
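
The performance fixture could be backed in a similar, equally hypothetical way, parsing the duration column and comparing it against the budget from the table.

import java.util.List;

// Hypothetical fixture backing | verify | <step> | sub | <max> | ms | rows.
public class PerformanceFixture {

    private final List<String> logLines;

    public PerformanceFixture(List<String> performanceLinesForScenario) {
        this.logLines = performanceLinesForScenario;
    }

    public boolean verifySub(String step, long maxMillis) {
        for (String line : logLines) {
            // format: timestamp | correlation id | step | duration | ms
            String[] fields = line.split("\\|");
            if (fields.length >= 5 && fields[2].trim().equals(step)) {
                return Long.parseLong(fields[3].trim()) <= maxMillis;
            }
        }
        return false; // a step that was never logged also fails the test
    }
}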

If all our functional test cases look like this then we Test for runtime.

| our functional fixture |
| go to | user registration form |
| enter | name | Ted | username | JohnDoe | email | joe@some.tst |
| verify | status | Registration completed |
| verify | email account | registration mail received |

| checkpoint fixture |
| verify | Verified user form name=Ted, username=JohnDoe, email=joe@some.tst |
| verify | User found in legacy system |
| verify | User persisted |
| verify | User notified at joe@some.tst |

| performance fixture |
| verify | Legacy lookup completed | sub | 550 | ms |
| verify | User persisted | sub | 100 | ms |
| verify | User registration completed | sub | 1000 | ms |

Monday, December 17, 2012

Test stability

The key to a good Continuous Delivery process is a stable regression suite. There are a few different types of instability one can encounter.

• Lack of robustness
• Random test result
• Fluctuation in execution time

Lack of robustness often comes from incorrectly defined touch points. If a test needs to include a lot of functionality in order to verify a small detail, then it's bound to lack robustness. Even worse is if a test directly touches the implementation.

Adding extra interfaces to create additional Touch Points.
I'm sure everyone has broken unit tests when refactoring without actually breaking any functionality. This almost always happens when unit tests evaluate implementation and not functionality, for example when you just split methods to clean up responsibility.
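
A small made-up example of the difference (the PriceCalculator class is hypothetical, not from our code base): the first test pins the implementation and breaks the moment discountFactor is split, renamed or inlined, while the second only pins the behavior the requirement talks about.

import static org.junit.Assert.assertEquals;
import org.junit.Test;

public class PriceCalculatorTest {

    // Hypothetical class under test.
    static class PriceCalculator {
        // internal helper that a refactoring might rename, split or inline
        double discountFactor(String customerType) {
            return "GOLD".equals(customerType) ? 0.9 : 1.0;
        }

        double priceFor(String customerType, double listPrice) {
            return listPrice * discountFactor(customerType);
        }
    }

    // Brittle: pins the internal helper, so a pure refactoring breaks it.
    @Test
    public void discountFactorIsComputed() {
        assertEquals(0.9, new PriceCalculator().discountFactor("GOLD"), 0.001);
    }

    // Robust: pins the observable behavior and survives any refactoring
    // that keeps the prices correct.
    @Test
    public void goldCustomersGetTenPercentOff() {
        assertEquals(90.0, new PriceCalculator().priceFor("GOLD", 100.0), 0.001);
    }
}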

Though this doesn't only happen to unit tests; it happens just as easily, if not more easily, to functional acceptance tests. If they for instance touch the database for validation, instrumentation or data setup, you will break your tests every time you refactor your database.

Testers seem to be very keen on verifying against the database. This is understandable, since in an old monolithic system with manual verification you only have two touch points: GUI and DB. It's important to work with test architecture and to educate both developers and testers to prevent validation of implementation.

Another culprit is dependencies between implementation and test code, for example Fitnesse fixtures that directly access model objects, DAOs or business logic implementation.

Tests must be robust enough to survive a refactoring and addition of functionality. Change of functionality is obviously another matter.

We actually managed this quite well with our touch points.

Random test failures are a horrible thing because they will result in either unnecessary stoppage of the line or people losing respect for the color red.

This is where we have had most of our issues. At one point our line was failing 8 out of 10 times it was executed, and at least 7 of those were false negatives. "Just run 'em again" was the most commonly heard phrase. Obviously the devs totally lost respect for the color red. The result was that when, now and again, a true bug caused the failures, no one cared to fix it and they started stacking up.

The result in our Jenkins job history for the functional tests could look like this.
So why did we have these problems? First of all, our application is heavily asynchronous, so timing is a huge issue in our tests. Many of our touch points are asynchronous triggers that fire off some stuff in the application, and then we wait for another trigger from the application to the test before we validate. We don't really have much architectural room here, as it's an industry standard pattern. This in itself isn't a big deal, except that each asynchronous task schedules other asynchronous tasks. So a lot of things happen at the same time.
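
The validation side of this boils down to waiting for the trigger with a timeout instead of sleeping a fixed amount of time. A minimal sketch of one way to do that polling; the names are made up, and the callbackReceiver in the usage comment is hypothetical.

import java.util.function.BooleanSupplier;

public final class Wait {

    // Poll a condition until it holds or the timeout expires, instead of
    // sleeping a fixed amount and hoping the asynchronous work is done.
    public static void until(BooleanSupplier condition, long timeoutMillis)
            throws InterruptedException {
        long deadline = System.currentTimeMillis() + timeoutMillis;
        while (System.currentTimeMillis() < deadline) {
            if (condition.getAsBoolean()) {
                return;
            }
            Thread.sleep(100); // short poll interval keeps feedback fast
        }
        throw new AssertionError("Condition not met within " + timeoutMillis + " ms");
    }
}

// In a test, before validating (callbackReceiver is hypothetical):
// Wait.until(() -> callbackReceiver.hasReceived("usr123"), 5000);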

Since it's desirable to keep execution times short, we reconfigure the system at test time to use much shorter timeouts and delays. This further increases the amount of simultaneous stuff that happens for each request.

When all these simultaneous threads hit the same record in one table you get transactional issues. We had solved this through the use of optimistic locking. So we had a lot of rollbacks and retries. But it "worked". Except that our execution times were very unpredictable, and since our tests were sensitive to timing they failed randomly.

But really, did we congest our test scenarios so much that this became a problem? I mean, how on earth could we hit the optimistic lock so often that it resulted in 7 out of 10 regression runs failing because of it?

Wasn't it actually the case that the tests were trying to tell us something? Eventually we found that our GET requests were actually creating transactions, due to an incorrectly instantiated empty list marking an object as new. We also changed our pool sizing, as we were exhausting it and causing a lot of waiting.

So we had a lot of bugs that we blamed on the nature of our application.

Listen to your tests!! They speak wise things to you!

Eventually we refactored that table everyone was hitting so that we don't do any updates on it, but rather track changes in a detail table. Now we don't need any locking at all. Sweet free feeling. Almost like dropping in alone on a powder day. ;)

Fluctuation in execution time. I find it just as important that a test executes equally fast every time as it is that it always gives the same result.

First of all, a failed test should be faster than a success; give me feedback ASAP! Second, if it takes 15 sec to run it should always take 15 sec, not 5 or 25 at times. This is important when building your suites. You want your very fast smoke test suites, your bang-per-minute short suites and your longer running suites. Being able to easily group tests by time is important.
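
One way to make that grouping concrete is something like JUnit categories, sketched here with made-up marker interfaces; the build can then include or exclude categories per suite (for example via the Surefire/Failsafe groups settings).

import org.junit.Test;
import org.junit.experimental.categories.Category;

// Made-up marker interfaces for execution-time groups
// (each would normally live in its own file).
interface SmokeTest {}
interface LongRunningTest {}

public class RegistrationTests {

    @Category(SmokeTest.class)
    @Test
    public void userCanRegister() {
        // fast, high-value check, part of every smoke run
    }

    @Category(LongRunningTest.class)
    @Test
    public void registrationSurvivesLegacySystemTimeout() {
        // slower scenario, only included in the longer running suites
    }
}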

It's also important to be able to do a trend analysis on the suite execution time. It's a simplistic but effective way to monitor decrease in performance.

We have actually nailed quite a few performance bugs this way.

The asynchronous nature makes everything harder to test, but don't make it an excuse for yourself. Random behavior is tricky and frustrating in a pipe, but remember: TRUST YOUR TESTS!

Saturday, December 8, 2012

The impact of Continuous Delivery on the role of the Tester

Continuous Delivery really does change the way we work. It's not just tools and processes. We know that because every talk and every blog says it does, but when you really see it, it's quite interesting.

The change in the level of responsibility required of each developer has been one of the hardest things to manage. Don't check shit in or you will break the pipe! Don't just commit and run, it's your responsibility that the pipe is green! All that is frustrating and has been the single most time consuming part of our journey, but it's worth a post of its own.

What I'd like to focus on is the change to the role of the tester. Previously we have worked mostly with manual GUI driven testing. Our testers have tons of domain knowledge and know our systems inside out. We have worked with test automation in some projects, and I have some test automation experience from the past. I've been trying to champion test automation for years, but we have had a bit of a hard time getting it off the ground. When we have, it's not been test automation but rather automated tests that require some kind of tester supervision.

In our current project, as I've written about in previous posts, we decided to go all in. When we started we were a team of just architects. I was leading the work on the test automation. We got really good results working a lot with test architecture and testability architecture in our application. But after some time we started to suffer from lack of tester input in our testing. We obviously ended up with a very happy-path oriented suite.

So we started to bring testers onto our project, but what profiles should we look for? First we started to look for just our notion of what a tester was and had always been in our projects. We happily took on testers with experience of testing and test managing portals, order systems, corporate websites, etc. But our delivery at this time didn't have a GUI; all testing was done using Fitnesse. We didn't really get the interaction between dev and test we were hoping for.

We did get use of our testers for partner integration testing, which was manual (using a REST client). But it wasn't really working well, because they didn't know the interfaces and the application, as they hadn't worked with the test automation.

We have been having this same experience for over a year. Our testers don't seem to be able to get involved in our test automation. But we do have a few who are, and we are super lucky to have found them, because we have not really had much of a clue when recruiting.

Our developers, who are very modern and in a lot of cases super interested in test automation, have constantly been bashing us about our choices of test tools. As I talked about in my previous post, they totally refused SoapUI and aren't all that fond of Fitnesse either, even though they prefer it a lot over SoapUI. But my take has always been "we need testers to feel comfortable with the tool, it's their domain, let them pick".

After we decided to move to REST-assured I started to realize the problem I've failed to grasp for the six years or so I've been working with test automation. There are two sets of testers: GUI testers and technical testers. A GUI tester will always struggle with test automation. He/she has little to no experience or education in coding. The technical tester has a background as a developer, or started developing out of an interest in automation.

Still, even the technical testers we have who have experience from test automation have had a transition period coming into the continuous world. Testers do tend to accept manual steps that aren't acceptable in a continuous world. It's not ok to just quickly verify this bug manually because it's so simple and changing the test case that had a gap is a lot of work. It's not ok to just add that user into the DB manually. It's not ok to verify against the DB.

In the past our demand for GUI clicking testers has been high and our demand for technical testers has been low. At least in our old non-CD world the TestPyramid, http://martinfowler.com/bliki/TestPyramid.html, was upside down.

The Continuous Delivery expansion will drive a shift in what we look for in testers. Our tester demographics will move towards matching the pyramid. We will still need the GUI testers, but their work will move more towards requirements gathering and acceptance testing. At the same time I think we will see a new group of testers, with a much stronger developer background, come in and work with the automation.

This new group of technical testers, or developers with a super high understanding of testing, is still hard to find, but I hope we will see more of them.

Picking the tool for Component Testing

As I discussed in the previous post, we realized we need to test our components, not just unit test code and do functional testing. The biggest reason was that we were cluttering our functional tests with validations that didn't belong at that level. Cluttering tests with logic that doesn't belong at that level was increasing our test maintenance costs, something we knew from the past we had to avoid.

Since our application is based on REST services exposed by components with very well defined responsibilities, it's very well suited for component testing.

The responsibility of our components was clearly defined. A component test is a black box test with the responsibility of validating the functionality and interfaces of the component. This definition was somewhat of a battle, as somehow our testers (in our project and others in our company) seem to insist that they want to validate stuff in the database. I don't like being unreasonable, but this was one thing I felt I had to be unreasonable about: black box means get the f*** out of our database. I still feel a bit bad about overriding our test managers on this, but I still feel strongly that it had to be done. So all tests go through public or administrative interfaces on our components. The reason for this is obviously that I wanted our tests to support refactoring and not break when we change code.

The upside of forbidding database access was that we had to create a few administrative interfaces that we could eventually use for building admin tools.

So now, what tools should we use? We were using Fitnesse for our functional testing, but Fitnesse's biggest weakness is that it doesn't separate test flow from test data. With functional testing this isn't really a big issue, as each test is basically a flow of its own. But with component testing, which by nature is more detailed, we saw that we would get many more test cases per flow. Another weakness is that Fitnesse doesn't go all that well together with large XML/JSON document validation.

We do our GUI testing with a Selenium/Fitnesse combo, and that we would continue doing. But for our REST service testing our first choice was SoapUI. We prototyped a bit and decided we could accomplish what we wanted with it, so we started building our tests. This was the single biggest mistake we made with our continuous delivery process.

Back in the days when we just did functional tests for our deliveries in Fitnesse, we had a nice process of test driven development. Our developers actively worked on developing test cases and fixtures, and the tests went green as the code hit done. I really like it when it's functional-test driven development and think this is the best form of TDD. This went totally down the drain when we started using SoapUI for our component tests.

Our developers refused to touch SoapUI and started handing functionality over to testers for testing after they had coded it. This resulted in total chaos, as we got a lot of untested check-ins. Backloaded testing works extremely badly with a continuous delivery process, especially since we didn't use feature flags.

This put us in an interesting dilemma. Do we choose a test tool that testers feel comfortable working with, or do we choose a tool that developers like? I personally am very unreligious when it comes to tools. If it does the job then I don't care so much. But I've always had the opinion that test tools are for testers and it's up to them; devs need to suck it up and contribute. Testers always seem to like clicky tools, so I wasn't surprised that they wanted to use SoapUI.

We were sitting with a broken test process and our devs and testers no longer working together. Fortunately our testers came to the same conclusion and realized this tool was a dead end for us. The biggest killer was how badly the tool was suited for teamwork and versioning, even the enterprise edition. Each and every check-in caused problems, and you basically need to check in all your suites for each reference you change. Horrible.

So after wasting nearly 3-4 months, growing our test dept and killing our dev process on SoapUI, we decided to switch to REST-assured. For some of our components this is a definite improvement, and it's definitely an improvement process wise, as our developers are happy to get involved. But I do still see some possible issues on our horizon with this choice. Though the biggest change is for our testers and how we as an organization view the tester role, and that will be the topic of my next post.
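
To give a feel for why the developers are happier, here is a minimal REST-assured style sketch of a component test; the endpoint, field and expected values are made up. The test is plain Java, lives next to the production code and is versioned like any other code.

import static io.restassured.RestAssured.given;
import static org.hamcrest.Matchers.equalTo;

import org.junit.Test;

public class UserComponentTest {

    @Test
    public void registeredUserCanBeFetched() {
        given()
            .baseUri("http://localhost:8080") // hypothetical test deployment
            .accept("application/json")
        .when()
            .get("/users/usr123")             // hypothetical public interface
        .then()
            .statusCode(200)
            .body("email", equalTo("joe@some.tst"));
    }
}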

One very nice thing though: our continuous delivery process is Maven based, so the change of tooling didn't affect it. Each test suite is triggered with mvn verify, and as long as the tool has a Maven plugin the pipe doesn't really care what tool the teams use for their components.

Tuesday, December 4, 2012

Automated System Testing

As I described earlier, we decided that we wanted to prioritize automation of our functional system testing. The trickiest thing for us was finding the right abstraction level for these tests. In previous projects where we have been working with Fitnesse as a test tool, we have failed to find a good abstraction level.

We have always ended up with tests that are super technical in nature and cluttered with details that are very implementation specific. The problem this has given us is that very few people understand the test cases. Especially testers tend to have problems understanding them, as they are so technical. If our testers can't work with our tests then we do have some sort of problem (quite a few actually, and I will get to them later).

This time around we set out to increase the level of abstraction in the tests, to make them more aimed towards testing requirements and less towards testing technology. We were hoping to achieve two things: less maintenance and more readability. The maintenance budget for our automated tests had been high in the past. The technical tests exposed too much implementation, and hence most refactoring resulted in test rewrites. This was shit, because our tests didn't help to secure our refactoring. This was something we had to address as well.

The level we went for was something like this (in pseudo Fitnesse).

|Register user with first name|Joe|last name|Doe|and email|jd@test.mail.xx|
|verify registration email|


We abstracted out the interfaces from the test cases and just loosely verified that things went right.

This registration test could be testing registration through the web portal or directly on the REST service. This did lower the maintenance drastically. It gave us room to refactor our application without affecting the test cases. Though we were still having trouble getting our testers to write test cases that worked and made sense. The example above is obviously simplified and out of context; our test cases were quite long and complex. The biggest problem was knowing which fixture method to use and how to design new ones with the right abstraction level.
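
To illustrate the abstraction, a hedged sketch of a fixture that could sit behind such a test case; the class and method names are invented, but the idea is that the test table only talks about registration, while the channel (web portal via Selenium or the REST service) is a detail of the fixture setup.

// Hypothetical fixture: the test table only says "register user";
// the channel (web portal or REST) is a configuration of the fixture.
public class RegistrationFixture {

    private final RegistrationChannel channel;

    public RegistrationFixture(RegistrationChannel channel) {
        this.channel = channel; // e.g. a Selenium-driven or REST-driven channel
    }

    public boolean registerUserWithFirstNameLastNameAndEmail(
            String firstName, String lastName, String email) {
        return channel.register(firstName, lastName, email);
    }

    public boolean verifyRegistrationEmail(String email) {
        return channel.registrationMailReceivedFor(email);
    }
}

// The channel interface hides whether we drive the GUI or the REST service.
interface RegistrationChannel {
    boolean register(String firstName, String lastName, String email);
    boolean registrationMailReceivedFor(String email);
}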

This exposed the need for something we really hadn't thought much about before: test architecture. We were really (and still are) lacking a test architect. Defining abstraction layers, defining test components, reusability of test components, handling of test data, etc. All of these go dramatically wrong when done by testers alone, and are not good enough when done by developers either.

Another problem we ran into with this abstraction level was that our testers didn't know our interfaces. The test doesn't really care about the interface, so the tester doesn't need to take responsibility for the REST interface. Imho the responsibility for securing the quality of a public REST interface should lie with the testers. But here we give them tools to only secure registration functionality. Yes, they are verifying the requirement to be able to register, but the verification of the REST interface is implicit and left to the fixture implementation. This is not good.

It also generated a huge problem for us. We had integration tests with the partners that consume our REST interfaces, and our testers who were in those sessions didn't know our interfaces. When should they have learned them? This was the first time they were exposed to them. They were also required to use tooling that they didn't otherwise use to test: REST clients of whatever flavor.

We had solved one thing but created another problem.

Had we abstracted too much? Was this the wrong level to test on?

Before we analyzed this we started to compensate for these issues by just jamming in more validations.

!define METHOD {POST}
|Register user by sending a |REST_REQUEST|${METHOD}| with first name|Joe|last name|Doe|and email|jd@test.mail.xx|
|RESPONSE_CODE|200|
|verify registration email|

Still a fictive example, but I'm trying to illustrate how it went downhill. We basically broke our intended abstraction layer and made our tests into shit. But it got worse. Our original intent was that registration is registration regardless of channel, web portal or REST. But then we had to have different templates for different users. Say that it was based on gender. So we took our web portal test case and made that register a female user and verify the template by string matching, and then we used the REST interface for the guys.

Awesome, now we made sure that we got an HTTP 200 on our REST response, we made sure that we used pink templates for the guys and green ones for the gals. Awesome, and we made sure to test our web interface and our REST interface. Sweet!!! We had covered everything!

Well maybe not.

This is when we started to think again and our conclusion was that the abstraction layer of the initial tests was actually quite ok.

|Register user with first name|Joe|last name|Doe|and email|jd@test.mail.xx|
|verify registration email|

This tests a requirement. It's a functional test. It does have value. Leave it at that! But it can't be our only test case to handle registration.

Coming back to this picture 

Our most prioritized original need was to have regression tests on the functional system, end to end of our delivery. We had that. We also had unit tests on the important mechanisms of the system; not on everything, and yes, too few, but on the key parts. Still, we were lacking something, and that something was component/subsystem end to end tests. This was something we deep down always knew that we would have to build, but we had ignored it for quite some time due to "other priorities". I still think we made the right call back then and there when we prioritized, but this has been the most costly call of our entire journey.

So we started two tasks: one retrofitting Component Tests and one cleaning up our System Tests. More on how that went, and how we decided to abstract these, in the next post.


Thursday, November 22, 2012

Where to start with Continuous Delivery

A lot of people have already started the journey without really knowing it. There are few companies today that don't utilize some key parts of a Continuous process. When reading books and listening to speakers at conferences it's easy to feel totally stone age. But the key is to remember that the journey is baby steps towards a vision, and Continuous Delivery is just a progression of Continuous Integration; a lot of the basics are built upon in that progression.

By realizing we already have something, we can start to think about what we really need out of a Continuous Process. In some sense we all want to release all the time, and we want our artifacts to be server images deployed on cloud nodes, monitored by super cool graphs of all the metrics we can possibly think of. That is all nice, but we all do need to provide, hate the word, business value. So where is the business value in a continuous process for our organisation? That is the question we should ponder before we start.

In fact we didn't have to ask ourselves the question in that way. Instead we were presented with a challenge. For us it was key to solve system regression testing. Test automation and lowering the cost of regression testing is obviously one of the key reasons to have Continuous Delivery. For us the key business case was to solve Continuous Regression Testing. Continuous whatever-else would come as a bonus.

Knowing that Continuous Regression Testing, and not Continuous Deploy, was our main priority made it easy for us to prioritize our efforts. Our efforts went largely into Testability Architecture in our application, Test Architecture in our test automation frameworks, and automating a pipe that would deploy and test each code commit.

So really as with any agile development find your business value, focus on delivering it with a minimal effort "good enough" approach and iterate from there.

Over the coming posts I will go through what we focused on in order to achieve Continuous System Regression Testing.