

Tuesday, April 22, 2014

New Talk: Continuous Testing

On April 29th I will be visiting HiQ here in Gothenburg, where I will have the opportunity to talk about the super important subject of Continuous Delivery and Testing. This is an intro-level talk that describes Continuous Delivery and how we need to change the way we work with Testing.

If anyone else is interested in this talk, please don't hesitate to contact me. The talk can also be focused on a practitioner audience, digging a bit deeper into the practices.

Here is a brief summary of the talk.

Why do we want to do Continuous Delivery
Introduction to Continuous Delivery and why we want to do it: what we need to do in order to get there, principles and practices, and a look at the pipe. (Part of the intro level talk)

Test Automation for Continuous Delivery
Starting with a look at how we have done testing in the past and our efforts to automate it. Moving on to how we need to work with Application Architecture in order to Build Quality In, so that we can do fast, scalable and robust test automation.

Test Driven Development and Continuous Delivery
How we need to work in a Test Driven way in order to have our features verified as they are completed. A look at how Test Architecture helps us define who does what.

Exploratory Testing and Continuous Delivery
In our drive to automate everything it's easy to forget that we still NEED to do Exploratory Testing. We need to understand how to do Exploratory Testing without doing Manual Release Testing. The two are vastly different.

Some words on Tooling

A lot of the time, discussions start with tools. This is probably the surest way to fail your test automation efforts.

Areas Not Covered
Before we are done we need to take a quick look at Test Environments, Testing Non-Functional Requirements, A/B Testing and the Power of Metrics. (This section expands in the practitioner level talk)

Wednesday, January 9, 2013

Test for runtime

Traditionally our testers have been responsible for functional testing, load testing and, in some cases, some failover testing. This covers our functional requirements and some of our supplemental requirements as well. However, it doesn't cover the full set of supplemental requirements, and we haven't really taken many stabs at automating these in the past.

The fact that we haven't really tested all the supplemental requirements also leaves a big question: whose responsibility is the verification of supplemental requirements? Let's park that question for a little bit. To be truthful, we don't really design for runtime either. Our supplemental requirements almost always come as an afterthought, after the system is in production. They tend to get lost in the race to get features ready.

In our current project we are trying to improve on this, but we are still not doing it well enough. We added some of the logging-related requirements early, but we have no requirement spec and no verification of the requirements.

The logging we added was checkpoint logging and performance logging. Both of these are requirements from our operations department. The checkpoint log is a functional log which just contains key events in an integration. It's used by our first line support to do initial investigation. The performance log is for monitoring the performance of defined parts of the system. It's used by operations for monitoring the application.

Let's use user registration as an example (it's a fictitious example).

1. User enters name, username, password and email into a web form.
2. System verifies the form.
3. System calls a legacy system to see if the email is registered in that system as well.
3a. If the user is registered in the legacy system with matching username and password, the user id is returned from that system.
4. System persists user.
5. Email is sent to user.
6. Confirmation view displayed.

From this we can derive some good checkpoints.

2013-01-07 21:30:07:974 | null | Verified user form name=Ted, username=JohnDoe, email=joe@some.tst
2013-01-07 21:30:08:234 | usr123 | User found in legacy system
2013-01-07 21:30:08:567 | usr123 | User persisted
2013-01-07 21:30:08:961 | usr123 | User notified at joe@some.tst

The performance log could look something like this.

2013-01-07 21:30:07:974 | usr123 | Legacy lookup completed | 250 | ms
2013-01-07 21:30:08:566 | usr123 | User persisted | 92 | ms
2013-01-07 21:30:08:961 | usr123 | User registration completed | 976 | ms
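
Neither log has a formal spec yet, but both share the same pipe-delimited layout, so parsing them is trivial. Just as a minimal sketch (the class name and the field layout are my own assumptions, not an agreed format), a parser in Java could look something like this:

import java.time.LocalDateTime;
import java.time.format.DateTimeFormatter;

// Minimal sketch of a parser for the pipe-delimited checkpoint/performance
// log lines above. Field layout assumed: timestamp | id | message | extras...
public class LogEntry {

    private static final DateTimeFormatter TIMESTAMP =
            DateTimeFormatter.ofPattern("yyyy-MM-dd HH:mm:ss:SSS");

    public final LocalDateTime timestamp;
    public final String correlationId; // "usr123", or "null" before the user exists
    public final String message;       // e.g. "User persisted"
    public final String[] extraFields; // e.g. duration and unit in the performance log

    public LogEntry(LocalDateTime timestamp, String correlationId,
                    String message, String[] extraFields) {
        this.timestamp = timestamp;
        this.correlationId = correlationId;
        this.message = message;
        this.extraFields = extraFields;
    }

    public static LogEntry parse(String line) {
        // Split on the pipe separator and trim the surrounding whitespace.
        String[] parts = line.split("\\|");
        for (int i = 0; i < parts.length; i++) {
            parts[i] = parts[i].trim();
        }
        String[] extras = new String[Math.max(0, parts.length - 3)];
        System.arraycopy(parts, 3, extras, 0, extras.length);
        return new LogEntry(LocalDateTime.parse(parts[0], TIMESTAMP),
                parts[1], parts[2], extras);
    }
}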

This is all nice but who decides what checkpoints should be logged? Who verifies it?

Personally I would like to make the verification the responsibility of the testers, though I've never been in a project where testers have owned the verification of any kind of logging. This logging is in fact not "just" logging but system output, and hence it should definitely be verified by the testers. Making this the responsibility of the tester also trains the tester in how the system is monitored in production.

So how can this be tested?

Let's make a pseudo-Fitnesse table to describe the test case.

| our functional fixture |
| go to | user registration form |
| enter | name | Ted | username | JohnDoe | email | joe@some.tst |
| verify | status | Registration completed |
| verify | email account | registration mail received |

This is how most functional tests would end. But let's expand the responsibility of the tester to also include the supplemental requirements.

| checkpoint fixture |
| verify | Verified user form name=Ted, username=JohnDoe, email=joe@some.tst |
| verify | User found in legacy system |
| verify | User persisted |
| verify | User notified at joe@some.tst |

So now we are verifying that our first line support can see a registration main flow in their tool that imports the checkpoint log. We have also taken responsibility for officially defining how a main flow is logged, and we are regression testing it as part of our continuous delivery process.
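
What would such a checkpoint fixture do under the hood? As a rough sketch only, assuming the checkpoint log is a plain file on the test server and that the fixture exposes a one-argument verify method (both are assumptions, not our actual setup), it could be as simple as this:

import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.util.List;

// Sketch of a checkpoint fixture: verifies that a given checkpoint message
// was written to the checkpoint log during the test run. The log path and
// the verify(...) method name are made up for illustration.
public class CheckpointFixture {

    private final Path checkpointLog;

    public CheckpointFixture() {
        // Hypothetical location of the checkpoint log on the test server.
        this(Paths.get("/var/log/app/checkpoint.log"));
    }

    public CheckpointFixture(Path checkpointLog) {
        this.checkpointLog = checkpointLog;
    }

    // Returns true if any checkpoint line contains the expected message,
    // which the test framework can render as pass/fail for each | verify | row.
    public boolean verify(String expectedMessage) throws IOException {
        List<String> lines = Files.readAllLines(checkpointLog);
        return lines.stream().anyMatch(line -> line.contains(expectedMessage));
    }
}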

That leaves us with the performance log. How should we verify that? How long should it take to register a user? Well, we should have an SLA on each use case. The SLA should define the performance under load, and we should definitely not do load testing as part of our functional tests. But we can ensure that the function can be executed within the SLA. More importantly, we ensure that we CAN monitor the SLA in production.

| performance fixture |
| verify | Legacy lookup completed | sub | 550 | ms |
| verify | User persisted | sub | 100 | ms |
| verify | User registration completed | sub | 1000 | ms |

Now we take responsibility for making the system monitorable in production. We also take responsibility for officially defining which measuring points we support, and since we do continuous regression testing we make sure we don't break that monitorability.
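
Again purely as an illustration, and with the same caveats (assumed log location, assumed field layout), the fixture behind the | verify | ... | sub | ... | ms | rows could be sketched like this:

import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;

// Sketch of a performance fixture: checks that a performance log entry for a
// given measuring point completed under a threshold. Log path, field layout
// and method name are assumptions for illustration only.
public class PerformanceFixture {

    private final Path performanceLog;

    public PerformanceFixture() {
        // Hypothetical location of the performance log on the test server.
        this(Paths.get("/var/log/app/performance.log"));
    }

    public PerformanceFixture(Path performanceLog) {
        this.performanceLog = performanceLog;
    }

    // Backs a | verify | <measuring point> | sub | <limit> | ms | row.
    public boolean verifySubMs(String measuringPoint, long limitMillis) throws IOException {
        // Assumed line layout: timestamp | id | measuring point | duration | unit
        return Files.readAllLines(performanceLog).stream()
                .map(line -> line.split("\\|"))
                .filter(parts -> parts.length >= 5
                        && parts[2].trim().equals(measuringPoint))
                .anyMatch(parts -> Long.parseLong(parts[3].trim()) < limitMillis);
    }
}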

If all our functional test cases look like this then we Test for runtime.

| our functional fixture |
| go to | user registration form |
| enter | name | Ted | username | JohnDoe | email | joe@some.tst |
| verify | status | Registration completed |
| verify | email account | registration mail received |

| checkpoint fixture |
| verify | Verified user form name=Ted, username=JohnDoe, email=joe@some.tst |
| verify | User found in legacy system |
| verify | User persisted |
| verify | User notified at joe@some.tst |

| performance fixture |
| verify | Legacy lookup completed | sub | 550 | ms |
| verify | User persisted | sub | 100 | ms |
| verify | User registration completed | sub | 1000 | ms |

Tuesday, December 4, 2012

Automated System Testing

As I described earlier, we decided that we wanted to prioritize automation of our functional system testing. The trickiest thing for us was finding the right abstraction level for these tests. In previous projects where we have worked with Fitnesse as a test tool, we have failed to find a good abstraction level.

We have always ended up with tests that are super technical in nature and cluttered with details that are very implementation specific. The problem this has given us is that very few people understand the test cases. Testers in particular tend to have problems understanding them as they are so technical. If our testers can't work with our tests, then we have some sort of problem (quite a few, actually, and I will get to them later).

This time around we set out to increase the level of abstraction in the tests, to make them more aimed towards testing requirements and less towards testing technology. We were hoping to achieve two things: less maintenance and more readability. The maintenance budget for our automated tests had been high in the past. The technical tests exposed too much implementation, and hence most refactoring resulted in test rewrites. This was shit because our tests didn't help to secure our refactoring. This was something we had to address as well.

The level we went for was something like this. (In pseudo fitnesse)

|Register user with first name|Joe|last name|Doe|and email|jd@test.mail.xx|
|verify registration email|


We abstracted out the interfaces from the test cases and just loosely verified that things went right.

This registration test could be testing registration through the web portal or directly on the REST Service. This lowered the maintenance drastically. It gave us room to refactor our application without affecting the test cases. Though we were still having trouble getting our testers to write test cases that worked and made sense. The example above is obviously simplified and out of context; our test cases were quite long and complex. The biggest problem was which fixture method to use and how to design new ones with the right abstraction level.
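
To make the intended abstraction a bit more concrete: the fixture behind the registration rows can hide the channel behind a small interface, so the same test table can be wired to either the web portal or the REST service. This is only a sketch with made-up names (RegistrationFixture, RegistrationChannel), not our actual fixture code:

// Sketch of the abstraction we were aiming for: the fixture talks to a
// channel interface, and the concrete channel decides whether registration
// goes through the web portal or the REST service.
public class RegistrationFixture {

    // Hides whether we drive the web portal or call the REST service directly.
    public interface RegistrationChannel {
        void register(String firstName, String lastName, String email);
        boolean registrationEmailReceived(String email);
    }

    private final RegistrationChannel channel;
    private String lastRegisteredEmail;

    public RegistrationFixture(RegistrationChannel channel) {
        this.channel = channel;
    }

    // Backs the |Register user with first name|...|last name|...|and email|...| row.
    public void registerUserWithFirstNameLastNameAndEmail(
            String firstName, String lastName, String email) {
        lastRegisteredEmail = email;
        channel.register(firstName, lastName, email);
    }

    // Backs the |verify registration email| row.
    public boolean verifyRegistrationEmail() {
        return channel.registrationEmailReceived(lastRegisteredEmail);
    }
}

The point being that the test case itself never changes when we swap the channel implementation; only the wiring does.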

This exposed the need for something we really hadn't thought much about before: test architecture. We were really (and still are) lacking a test architect. Defining abstraction layers, defining test components, reusability of test components, handling of test data, etc. All of these go dramatically wrong when done by testers alone, and are not good enough when done by developers either.

Another problem we ran into with this abstraction level was that our testers didn't know our interfaces. The test doesn't really care about the interface, so the tester doesn't need to take responsibility for the REST interface. Imho the responsibility of securing the quality of a public REST interface should lie with the testers. But here we give them tools to only secure registration functionality. Yes, they are verifying the requirement to be able to register, but the verification of the REST interface is implicit and left to the fixture implementation. This is not good.

It also generated a huge problem for us. We had integration tests with the partners that consume our REST interfaces, and our testers who were in those sessions didn't know our interfaces. When should they have learned them? This was the first time they were exposed to them. They were also required to use tooling that they didn't otherwise use to test: REST clients of whatever flavor.

We had solved one thing but created another problem.

Had we abstracted too much? Was this the wrong level to test on?

Before we analyzed this we started to compensate for these issues by just jamming in more validations.

!define METHOD {POST}
|Register user by sending a |REST_REQUEST|${METHOD}| with first name|Joe|last name|Doe|and email|jd@test.mail.xx|
|RESPONSE_CODE|200|
|verify registration email|

Still a fictitious example, but I'm just trying to illustrate how it went downhill. So we basically broke our intended abstraction layer and made our tests into shit. But it got worse. Our original intent was that registration is registration regardless of channel, web portal or REST. But then we had to have different templates for different users. Say that it was based on gender. So we took our web portal test case and made it register a female user and verify the template by string matching, and then we used the REST interface for the guys.

Awesome, now we made sure that we got an HTTP 200 on our REST response, we made sure that we used pink templates for the guys and green ones for the gals, and we made sure to test our web interface and our REST interface. Sweeet!!! We had covered everything!

Well maybe not.

This is when we started to think again and our conclusion was that the abstraction layer of the initial tests was actually quite ok.

|Register user with first name|Joe|last name|Doe|and email|jd@test.mail.xx|
|verify registration email|

This tests a requirement. It's a functional test. It does have value. Leave it at that! But it can't be our only test case to handle registration.

Coming back to this picture 

Our most prioritized original need was to have regression tests on the functional system, end to end of our delivery. We had that. We also had unit tests on the important mechanisms of the system; not on everything, and yes, too little, but on the key parts. Still we were lacking something, and that something was component/subsystem end to end tests. This was something we deep down always knew we would have to build, but we had ignored it for quite some time due to "other priorities". I still think we made the right call back then and there when we prioritized, but this has been the most costly call of our entire journey.

So we started two tasks: one retrofitting Component Tests, the other cleaning up our System Tests. More on how that went, and how we decided to abstract these, in the next post.


Friday, November 23, 2012

What is Continuous Regression Testing?

In my previous entry I said that our business case was Continuous Regression Testing. I will make a few entries on the subject.

Our definition was quite clear on what level of testing we wanted to achieve.

We define it at the System Test level, meaning regression testing of our end to end functionality as specified by our Use Cases. The important parts here are end to end and driven by Use Cases.

As an example, say that our system has a "Register User" Use Case which states that the user should be able to register through a portal and receive a confirmation email. In this case our end to end delivery is from our web front to our mail server. We would create an automated Test Case which registers the user through the portal and then verifies that a mail has been sent to the email address of the registered user.
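
Just to illustrate what such an automated Test Case could look like in code, here is a sketch written as a plain JUnit test. PortalDriver and MailboxClient are hypothetical stand-ins for whatever drives the web front and reads the mail server; this is not our actual implementation:

import static org.junit.Assert.assertTrue;

import org.junit.Test;

// Sketch of the end to end "Register User" test: register through the portal,
// assert at the mail server. The helper interfaces are made up for illustration.
public class RegisterUserEndToEndTest {

    // Hypothetical helper that drives the registration form in the web portal.
    interface PortalDriver {
        void registerUser(String firstName, String lastName, String email);
    }

    // Hypothetical helper that polls the mail server for incoming mail.
    interface MailboxClient {
        boolean waitForMailTo(String email, long timeoutMillis);
    }

    private final PortalDriver portal = createPortalDriver();
    private final MailboxClient mailbox = createMailboxClient();

    @Test
    public void registeredUserReceivesConfirmationEmail() {
        String email = "jd@test.mail.xx";

        // Drive the registration through the web front, end to end.
        portal.registerUser("Joe", "Doe", email);

        // The delivery ends at the mail server, so that is where we assert.
        assertTrue("Expected a confirmation mail for " + email,
                mailbox.waitForMailTo(email, 30000));
    }

    // Wiring deliberately left out; in a real suite these would be backed by a
    // browser-driving tool and a test mail account respectively.
    private PortalDriver createPortalDriver() {
        throw new UnsupportedOperationException("wire up the portal driver here");
    }

    private MailboxClient createMailboxClient() {
        throw new UnsupportedOperationException("wire up the mail client here");
    }
}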

Others in our company (not on the team at the time) and I have had quite a bit of experience working with automated System Tests. Yet we ran into quite a few mines (AGAIN!!) when building our test suites. I will cover these later.

So we had a clear understanding of how we wanted to test, and we knew we wanted to test it often. For us, testing on each code commit and relying so heavily on test automation was way out of our comfort zone. But our business case was strong and we knew why we wanted to go down that road.

Early on we made a few key decisions. One was to test everything on every commit. The second was to build each artifact only once. We discussed this and came to the conclusion that we would need to retest after any rebuild, so building each artifact once made our process simpler and more correct.

Our architecture was based on a few well-defined components (publishing REST services) that together formed our deliverable.

This is a simplistic view of something similar. Some components were internal and some integrated externally.

We were aware that we would most likely need to test the components in a continuous way, but we ignored it on purpose. Our focus had to be on our delivery as well, and we all know too well that we need to prioritize well when working towards deadlines. It's easy to get carried away with theoretical models and a desire to solve everything, but that's not the way we work. We always want something deliverable, and then we iterate on it.

So our initial pipe ended up looking something like this. Each of our components was built, unit tested and released on commit. Then our deployment assembly was updated one component at a time. Each time the assembly was updated we deployed it and tested it on our test server.

It was a very simplistic start to our Continuous journey, but it was very adequate for us and, most importantly, it solved our primary concern: Continuous Regression Testing of the customer application.

It's important to remember that the application was new and our team was small at the time.

Thursday, November 22, 2012

Where to start with Continuous Delivery

A lot of people have already started the journey without really knowing it. Few companies today don't utilize some key parts of a Continuous process. When reading books and listening to speakers at conferences it's easy to feel totally stone age, but the key is to remember that the journey is baby steps towards a vision. Continuous Delivery is just a progression of Continuous Integration, and a lot of the basics are built upon in that progression.

By realizing we already have something, we can start to think about what we really need out of a Continuous process. In some sense we all want to release all the time, and we want our artifacts to be server images deployed on cloud nodes, monitored by super cool graphs of all the metrics we can possibly think of. That is all nice, but we all do need to provide, hate the word, business value. So where is the business value in a continuous process for our organisation? That is the question we should ponder before we start.

In fact, we didn't have to ask ourselves the question in that way. Instead we were handed a challenge. For us it was key to solve system regression testing. Test automation and lowering the cost of regression testing is obviously one of the key reasons to do Continuous Delivery. For us the key business case was to solve Continuous Regression Testing. Continuous whatever-else would come as a bonus.

Knowing that Continuous Regression Testing, and not Continuous Deployment, was our main priority made it easy for us to prioritize our efforts. Our efforts went largely into Testability Architecture in our application, Test Architecture in our test automation frameworks, and automating a pipe that would deploy and test each code commit.

So really, as with any agile development: find your business value, focus on delivering it with a minimal-effort "good enough" approach, and iterate from there.

Over the coming posts I will go through what we focused on in order to achieve Continuous System Regression Testing.