Monday, August 16, 2010

Defining the Test Architect and the Software Architect

I heard the term “Test Architect” for the first time about two years ago.

At the time I felt that it was not well defined – even a bit artificial.

But towards the end of 2009 I discovered this article by John Morrison:

The Test Architect, by John Morrison


This article clearly positions the test architect in terms of roles and responsibilities, as well as the relationship to the software architect.

“Software Architect”, on the other hand, is a term that is widely used and accepted in the software development community.

In this regard I am aligned with Martin Fowler’s thinking and ideas as defined in this article:

Who Needs an Architect? by Martin Fowler

Friday, February 6, 2009

ScribeFire from Mauritius and cyclone GAEL class 2

Mauritius is fun - in and out of the office.

So far out of the office = hotel La Plantation

But I intend to do some traveling over the weekend.

The cyclone was a bit of a surprise, but we are now right in the middle of the cyclone season!
Check for current cyclones!

Tonight we are doing a production implementation on Bankmaster. Nobody is going home early, but if all goes well we can take the weekend off!

Tuesday, January 13, 2009

FW: CI + QC + QTP

https://hudson.dev.java.net/

From: Webber, Steven SW
Sent: 13 January 2009 10:03 AM

Subject: RE: CI + QC + QTP

Hi.

Our current thinking is:
Unit tests are executed with each build.
Integration tests, written by developers, are executed with each deployment to INT. (Nightly/after a successful build)

QTP scripts are executed only when the build is promoted to UAT. (Bi-weekly? End of Sprint..?)

Not sure how much of this can be fed back to CI?
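One way results like these could be fed back to the CI tool is to have the job write each QTP run's outcome as a JUnit-style XML report, which Hudson can already publish as a test result trend. A rough sketch in Python - the run names, statuses and output file below are invented for illustration:

# Sketch: convert QTP run outcomes into a JUnit-style XML report that a
# Hudson job can pick up with "Publish JUnit test result report".
# The run results here are hypothetical placeholders.
import xml.etree.ElementTree as ET

qtp_runs = [
    {"name": "TPP_Payment_Smoke", "status": "Passed", "seconds": 312.0},
    {"name": "Create_Customer_Partner", "status": "Failed",
     "message": "Could not create customer partner (MQ configuration)"},
]

suite = ET.Element("testsuite", name="QTP_UAT_Smoke", tests=str(len(qtp_runs)))
for run in qtp_runs:
    case = ET.SubElement(suite, "testcase", classname="qtp.uat",
                         name=run["name"], time=str(run.get("seconds", 0)))
    if run["status"] != "Passed":
        ET.SubElement(case, "failure",
                      message=run.get("message", "QTP run failed"))

ET.ElementTree(suite).write("qtp-results.xml",
                            xml_declaration=True, encoding="utf-8")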

Thanks
Steven

-----Original Message-----
From: Van Der Merwe, Ben B
Sent: 13 January 2009 09:49 AM
To: de Vleeschauwer, Camiel C; Venter, Carl c; Webber, Steven SW; craig;
Phale, Mafatshe M; Mase, Luvuyo L
Cc: charles; Frazer, Shirley
Subject: FW: CI + QC + QTP

Hi guys,

If some of the acceptance / 'smoke' tests have been automated/mechanised
(with QTP scripts) it makes sense to execute them with each (nightly)
build that is deployed to UAT and feed the results back to the CI tool.
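For the QTP scripts themselves, the CI job could drive QTP through its COM automation interface (the QuickTest Automation Object Model) as a build step. This is a sketch only - the test path is a made-up example, and it assumes QTP and pywin32 are installed on the machine running the job:

# Sketch: launch a QTP test from a CI build step via the QuickTest
# Automation Object Model. The test location below is hypothetical.
import sys
import win32com.client

qtp = win32com.client.Dispatch("QuickTest.Application")
qtp.Launch()
qtp.Visible = False

qtp.Open(r"C:\QTP\Tests\TPP_Payment_Smoke")   # hypothetical test location
qtp.Test.Run()                                # runs synchronously

status = qtp.Test.LastRunResults.Status       # "Passed" / "Failed"
qtp.Test.Close()
qtp.Quit()

print("QTP run status:", status)
sys.exit(0 if status == "Passed" else 1)      # non-zero fails the CI build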

In terms of the incremental / hourly builds it makes sense to execute
unit tests and unit integration tests as part of the incremental builds
and to feed the results back to the CI tool.

Maybe we can work towards some goals in both of these areas? Who else needs to be involved?

Thanks,
Ben

-----Original Message-----
From: Craig McKenzie
Sent: 13 January 2009 08:00 AM
To: Van Der Merwe, Ben B; Phale, Mafatshe M
Cc: Charles
Subject: CI + QC + QTP

Hi,

I'd like to know if there is the possibility that the QTP results could
be wanted in the CI tool (or if the CI tool should possibly execute the
QTP scripts).

Thanks

--
Craig McKenzie
Technical Testing Consultant
Micro to Mainframe
http://www.mtom.co.za

Sunday, January 11, 2009

FW: Continuous integration tool at Tanzanite

Extensible continuous integration engine

Meet Hudson - find out what Hudson is and get started.
Use Hudson - see how to get more out of your Hudson installation.
Extend Hudson - learn how to build Hudson or extend it by writing plugins.

Building a Software Project in Hudson

Firefox Add-on Build Monitor

From: Van Der Merwe, Ben B
Sent: 09 January 2009 10:24 AM
To: 'craig'; 'charles'
Subject: RE: Continuous integration tool at Tanzanite


As far as I know the firewall port that has to be opened is for the
purpose of automatic deployment from the CI server to the destination
server (in UAT behind the firewall)...

This piece will be missing for now.

There is also some tagging required as part of the WAS 6.1 code branch (based on our current production release). This may be a once-off activity. A new Hudson build will have to be defined for this (the current Hudson build was defined before the WAS 6 code branch).
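Defining the new Hudson build for the branch need not be a manual exercise: Hudson's remote API lets you fetch an existing job's config.xml and POST it to createItem under a new name, pointing the copy at the new branch. A sketch only - the server address, job names and branch paths are made up, and authentication is omitted:

# Sketch: clone an existing Hudson job definition for the new code branch
# using Hudson's remote API. All names and URLs here are hypothetical.
import urllib.request

HUDSON = "http://hudson.example.local:8080"
SOURCE_JOB = "tanzanite-trunk"
NEW_JOB = "tanzanite-was61"

# Fetch the existing job definition.
with urllib.request.urlopen(f"{HUDSON}/job/{SOURCE_JOB}/config.xml") as resp:
    config = resp.read()

# Point the copy at the WAS 6.1 branch before creating the new job.
config = config.replace(b"branches/trunk", b"branches/was61")

req = urllib.request.Request(
    f"{HUDSON}/createItem?name={NEW_JOB}",
    data=config,
    headers={"Content-Type": "application/xml"},
)
urllib.request.urlopen(req)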

Ben

-----Original Message-----
From: Van Der Merwe, Ben B
Sent: 09 January 2009 09:24 AM
To: 'craig'; charles
Subject: RE: Continuous integration tool at Tanzanite

Craig,

I have confirmed with Naveen and Goosie and Carl Wanting.

They will be using Hudson for incremental builds (Payments, Balances and Statements, Channel), but only starting with the WAS 6.1 upgrade (we receive this code drop from C2P today).

There are / will be hourly builds - but there is still a firewall port
that must be opened (this can only be done next week).

Ben

-----Original Message-----
From: Craig McKenzie
Sent: 09 January 2009 07:23 AM
To: charles; Van Der Merwe, Ben B
Subject: Continuous integration tool at Tanzanite

Hi,

I have a vague recollection that the Maven continuous integration tool
was going to be used by the Tanzanite developers for their CI needs.
Would it be possible for you to confirm this for me?

Thank you

--
Craig McKenzie
Technical Testing Consultant
Micro to Mainframe
http://www.mtom.co.za

Monday, December 29, 2008

FW: CIG defect test cycle update



From: Derrick Beling [mailto:dhb@mtom.co.za]
Sent: 03 October 2008 17:26 PM
To: Van Der Merwe, Ben B; Phale, Mafatshe M
Cc: Mckenzie, Craig C; du Plessis, Johann J; Johnson, Judy J; Perold, Louise L; salome
Subject: RE: CIG defect test cycle update

There is no problem with your responses - keep in mind this is a journey of thinking and no thoughts are judged good or bad.

Also this is a dialogue (hence the need to Blog it)

I have not digested your comments, but would like you to give thought to the problem at hand - there is a delay in the CIG testing cycle due to environmental issues. This is not a once-off situation, so how do you solve the problem? And then, how do you solve it with mechanisation?

Regards

Derrick Beling



From: Van Der Merwe, Ben B [mailto:Ben.VanDerMerwe2@standardbank.co.za]
Sent: 03 October 2008 07:27 AM
To: Derrick Beling; Phale, Mafatshe M
Cc: Mckenzie, Craig C; du Plessis, Johann J; Johnson, Judy J; Perold, Louise L; salome
Subject: RE: CIG defect test cycle update

Derrick, I agree wholeheartedly - and I am sorry if my response seemed a bit negative - it was intended to be on the realistic side (i.e. how do you do the required testing and mechanise at the same time under the current project constraints).

The other aspect is whether it makes sense to automate / mechanise while you are still in the exploratory / ad hoc testing phase. During this phase the application (i.e. CIG) still changes frequently and it is often not possible to complete a test case - either because there are serious bugs or because there is a deployment / configuration issue. A formal process does not work well under these circumstances (hence the 'ad hoc' approach). It is normally best to go to the developers so that the basic issues / bugs / configuration can be sorted out promptly.

One other aspect that worries me is the automation / mechanisation of the business process versus the automation / mechanisation of test execution. The two must not be confused.

The automater / mechanic (:) is in a good position to automate the business process (i.e. TPP payments) and do some default checks (for example: have we been able to navigate successfully to the next screen, check the payment status, etc.), but the automation of the test execution requires the involvement of the testing SME - in this case the test analyst.

It is even possible to regard the mechanisation of the business process and the mechanisation of test execution as two separate processes. Various combinations are possible. It is possible to automate the business processes and then do the actual testing of the results manually (using any tools that may speed up this process, e.g. comparison tools). It is also possible to do the business processes manually and automate just the test execution - I think this is where the challenge lies.

It may even be possible to automate both - but the automation of the test processes is the more challenging one.
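To make the separation concrete, here is a minimal sketch - every helper and field name below is a hypothetical placeholder, not our actual framework. One function only drives the business process (submit a TPP payment and do the default checks), while the verification is a separate step that the test analyst owns and can keep manual or automate later:

# Sketch: keep "drive the business process" and "verify the outcome" as two
# separately owned steps. All names below are hypothetical placeholders.
from dataclasses import dataclass

@dataclass
class Submission:
    reached_confirmation_screen: bool
    payment_status: str

def submit_payment(payment_file: str) -> Submission:
    # Placeholder for the mechanised driver (e.g. a QTP or HTTP front end).
    return Submission(reached_confirmation_screen=True,
                      payment_status="DeliveredForProcessing")

def run_business_process(payment_file: str) -> str:
    """Mechanised business process: submit the payment, do default checks
    only (did we reach the next screen, is a status available), return it."""
    result = submit_payment(payment_file)
    assert result.reached_confirmation_screen
    return result.payment_status

def verify_result(actual_status: str, expected_status: str) -> bool:
    """Test execution: the check the test analyst owns; it can stay manual
    (with comparison tools) or be automated separately."""
    return actual_status == expected_status

if __name__ == "__main__":
    status = run_business_process("customer_payment_001.txt")
    print("verified:", verify_result(status, "DeliveredForProcessing"))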

Ben


From: Derrick Beling [mailto:dhb@mtom.co.za]
Sent: 02 October 2008 17:01 PM
To: Van Der Merwe, Ben B; Phale, Mafatshe M
Cc: Mckenzie, Craig C; du Plessis, Johann J; Johnson, Judy J; Perold, Louise L; salome
Subject: RE: CIG defect test cycle update

We actually need to Blog this

Your point on mindshift is important. However, the mindshift to mechanisation / automation must happen at all levels - hence the email. We are starting with the level at which we have control, that is with us: Testers, Test Analysts, Senior Test Analysts and Test Leads need to be thinking in terms of how mechanisation can solve problems. That will lead us to which problems we want to solve with mechanisation.

Put another way - if you are able to say "do 1, 2, 3 and you solve Problem X at a cost of Y with a benefit of Z", then you have a chance of getting the time and money. If you just put mechanisation in the project plan, it is not going to happen. More specifically, if we say "automate 10 scripts, run this checklist and compare to these logs, and you will eliminate the problem that Judy has raised", then it will happen.

The most important part is that this is a thinking exercise, not a doing exercise. What does that mean? Your email outlines challenges and has a problem statement - "between a rock and a hard place" - so the need for a solution cannot be disputed.

So, having expanded on the problem, we must :-

1. determine whether the problem is clearly stated and is complete

2. find a solution to the problem (here we can apply the principles of testing for bugs, for example - how else can we recreate the problem)

3. determine what of the solution can be mechanised / automated (in the true definition of mechanisation, the mechanisation itself could determine the solution - Load Testing is an example, you cannot load test if you are not mechanised).

4. Be future gazing - what we could do if...; not past gazing - what we can't do because...

Regards

Derrick Beling



From: Van Der Merwe, Ben B [mailto:Ben.VanDerMerwe2@standardbank.co.za]
Sent: 02 October 2008 01:33 PM
To: Derrick Beling; Phale, Mafatshe M
Cc: Mckenzie, Craig C; Johnson, Judy J; Perold, Louise L; salome; du Plessis, Johann J
Subject: RE: CIG defect test cycle update

Hi Derrick,

CIG phase 2 is new code and has resulted in a multitude of defects being logged. We are almost ready to move on from the ad hoc / exploratory testing phase, which opens up the opportunity for mechanisation / automation.

From an end to end testing perspective we have a minimal amount of time left - barely enough to finish one test cycle. An automated solution will require the equivalent of at least two test cycles (in duration) to accomplish. We are between a rock and a hard place. The new BVA screens are Java-based and can be automated (using the current workarounds that are in place). The defect test cycle mainly applies to defects on these screens, but the objective is to progress to the end to end test phase, which will reveal defects in the core CIG application.

The main outstanding issue is that we have not been able to execute a single end to end test case - for various reasons, including an emergency freeze (firewall rules are not in place), late deployments / deployment issues, etc. An end to end test consists of a customer payment file, submitted from C:D Windows to CIG PPD, which is routed to the Business Online test environment. The expected result is that this payment file is delivered successfully to BOL (according to the CIG scenario that has been executed and a wildcard-based file naming convention) and that various response messages are routed back successfully to the customer partner (C:D Windows).
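Once we can run it, the pass/fail decision for such a scenario can itself be scripted: wait for a file matching the scenario's wildcard naming convention to appear on the BOL side, and wait for the response messages to arrive back at the customer partner. A sketch only - the paths, patterns, counts and timeouts below are invented for illustration:

# Sketch: automated end-to-end check for one CIG scenario - wait for the
# payment file to arrive at BOL (wildcard naming convention) and for the
# response messages to reach the customer partner. All paths are hypothetical.
import glob
import time

def wait_for_files(pattern, expected_count, timeout_seconds=600, poll=15):
    """Poll until at least expected_count files match the pattern."""
    deadline = time.time() + timeout_seconds
    while time.time() < deadline:
        matches = glob.glob(pattern)
        if len(matches) >= expected_count:
            return matches
        time.sleep(poll)
    raise AssertionError(f"timed out waiting for {pattern}")

# Scenario: payment file routed from C:D Windows via CIG PPD to BOL.
delivered = wait_for_files(r"\\bol-test\inbound\PAY*_SCEN01_*.dat", 1)
responses = wait_for_files(r"\\cd-windows\responses\RESP*_SCEN01_*.xml", 2)

print("Delivered to BOL:", delivered)
print("Responses at customer partner:", responses)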

The same set of tests must be repeated for a C:D Unix 'customer' installation - the current state is that we have acquired the Unix server!

The main issue for me is that automation planning is not reflected in the project plans. It is almost done as an afterthought. We have to invest at least a month in the proper automation of this project, but we do not have enough time left to complete the actual testing that is required.

I think a mind shift is required in the project planning approach so that automation / mechanisation is an objective and not a sideshow.

Ben.


From: Derrick Beling [mailto:dhb@mtom.co.za]
Sent: 02 October 2008 12:46 PM
To: Van Der Merwe, Ben B; Phale, Mafatshe M
Cc: Mckenzie, Craig C; Johnson, Judy J; Perold, Louise L; salome
Subject: FW: CIG defect test cycle update

Some more thinking to be done. Look at Judy's problem as set out here.

My question is -

How can automation / mechanisation solve, or assist in solving, the problem ?

(Include anyone else who has an interest in automation on this - I'm thinking of Simon etc)

Regards

Derrick Beling


From: "Johnson, Judy J" <Judy.Johnson@standardbank.co.za>
Sent: Fri, 26/9/2008 10:18
To: Derrick Beling <dhb@mtom.co.za>
Subject: FW: CIG defect test cycle update

Hello Derrick - as you can see from this mail we have more delays to our CIG testing cycle - this time it's environmental issues, but it's pretty much more of the same kind of impact to testing. ..... Judy

______________________________________________
From: Johnson, Judy J
Sent: 26 September 2008 09:55 AM
To: Fuchs, Karl K; Frazer, Shirley; Van Stade, Carike; Loubser, Juan J (CIB); de Vleeschauwer, Camiel C; Damjanovic, Jovanka J; Bright, Shannon S

Cc: Abdullah, Zainab Z; du Plessis, Johann J; Frazer, Shirley; Kasiram, Roshni R; Kgaphola, Doctor; Kruger, Hanna; Kwesaba, Lusanda L; Mase, Luvuyo L; Padayachee, Kiroshnee K; Phale, Mafatshe M; Phukubje, Ivy I; Seraseng, Simon S; Tshitimbi, Ntungufhadzeni N; Tshitimbi, Ntungufhadzeni N; Vadachia, Mahomed M; Van Der Merwe, Ben B; Zwane, Siphesihle S

Subject: CIG defect test cycle update
Importance: High

Greetings everyone

We were supposed to get started with our CIG defect cycle testing yesterday however, this did not happen as it was discovered that both the INT and UAT1 environments did not have the upgraded version of GIS. This resulted in Shannon having to spend at least six hours yesterday on sorting this out for both INT and UAT1 environments.

This morning, after we were given the go-ahead to test, we were still unable to continue due to the following issues:-

1. Bebstation link to thin BVA is pointing to UAT2 instead of UAT1 - this has been resolved
2. UAT1 Websphere configuration for MQ is incorrect - resulting in inability to Create a Customer Partner - we are waiting on Evert to resolve this problem

The plan to have all the critical and significant defects retested and closed by EOD today is looking less and less likely, as we still do not have access to the application and we also have a clean database, so we will have to capture all relevant data before we are able to start retesting any defects.

Regards, Judy

Judy Johnson
Micro to Mainframe (Pty) Ltd
for
Standard Bank Group Ltd.
Infrastructure and Testing
Tel: +27 11 631 1465

Fax: +27 11 636 4633
Cell: +27 (0)83 5156037
Email: judy.johnson@standardbank.co.za


FW: SCRUM



From: Van Der Merwe, Ben B
Sent: 09 October 2008 07:47 AM
To: 'Derrick Beling'; Perold, Louise L
Subject: RE: SCRUM

Derrick, Louise
I have spoken to Shirley and got some additional feedback.
At the moment Scrum seems to be implemented in a non-Agile way - the BAs, developers, etc. have full participation in the process, but the testers are in general not part of all the crucial meetings, like the burn-down chart analysis (with Roshni being an exception).
One item affecting the test team (this is input from Arrie) is that (functional) requirements and test cases must happen in quick succession - the implication being that we should already be working on the Feb bucket test requirements and test cases.
This will not address the dependency that test execution has on development -> deployment -> configuration.
Shirley has confirmed that the length of our iteration is planned to be 4 weeks, but from recent experience (Rel 1.2) we have only been able to complete one full test cycle (a week) out of 7 weeks - due to various reasons, but the dependency highlighted above being the major one.
Other aspects include the quality of new code from development (unit testing). I feel very strongly about this - it is discussed in quite some detail under test driven development (in the SCRUMMING FROM THE TRENCHES pdf document). Maybe it makes testers feel good if they uncover bugs that should have been uncovered in unit testing, but quality cannot be added after the fact.
Quality starts with requirements, then design, then development. By the time delivery (to the test team) takes place, the quality can only be measured - if it is bad, especially due to bad requirements or bad design/development, it is already costly to fix. The only way to measure and address quality earlier is to involve test analysts meaningfully at an early stage (requirements) - and also in all the Scrum processes - and for developers to do meaningful unit testing. As a technical test analyst I would definitely prefer to be involved in this process already in some way.
I am sure I can assist with simple and meaningful test cases for unit testing. Currently the design/development phase is a 'black box'. We cannot provide any input to it, and we do not get any output from it - until it is too late.
We can still make a good proposal to Karl...
Ben


From: Derrick Beling [mailto:dhb@mtom.co.za]
Sent: 08 October 2008 12:29 PM
To: Van Der Merwe, Ben B; Phale, Mafatshe M
Cc: Perold, Louise L; du Plessis, Johann J
Subject: RE: SCRUM

Good thinking here.
The challenge we have as Testers is that poor planning and performance on the part of someone else constitutes an emergency for us, which means it is actually our problem and we benefit most from the solution.
So let's evolve this into a proposal for Karl (we will use SCRUM as the excuse). Bring it up for discussion on Friday in the training.



Regards
Derrick Beling
Managing Director





From: "Van Der Merwe, Ben B" <Ben.VanDerMerwe2@standardbank.co.za>
Sent: Wed, 8/10/2008 07:07
To: Derrick Beling <dhb@mtom.co.za> ; "Phale, Mafatshe M" <Mafatshe.Phale@standardbank.co.za>
Cc: "Perold, Louise L" <Louise.Perold@standardbank.co.za> ; "du Plessis, Johann J" <Johann.duPlessis2@standardbank.co.za>
Subject: RE: SCRUM

The mechanisation / automation of the build and deployment process must be the first objective. This does not really fall within the testing space, but is a prerequisite for achieving on-time (tested) delivery.
We can first look at the build process and the deployment process in isolation. I suspect that the build process has been automated to a huge degree, with some initial configuration (based on the code branch and target environment e.g. INT1, INT2, INT3, INT4 or UAT1, UAT2, UAT3, UAT4) being required before the build is kicked off. (?)
I also suspect that the deployment and configuration process (CVA, BVA, BPH, Websphere / MQ, BIG servers, SFI, Bankmaster, Equinox, interfaces between these) is the bottleneck and is a very manual process. The complexity also increases exponentially with the number of code branches and the number of environments that must be handled in parallel. (James, Khumo, Bradley, Carl, and a few others are required resources - if they are not present, the deployment and configuration often stand still and cannot continue!) Priority is also given to production environments, even though there is an effort to hand these activities over to the production team.
In the short term we can try to alleviate this in various ways, but it will really take a huge effort from the project team to improve on this. Karl Fuchs will have to drive this.
I have suggested the monitoring of the queues in the UAT environment to help with the early identification of Websphere MQ issues in test environments - this monitoring is already happening in the production environments. This can be achieved in the short term - I have requested this from Evert a few times and must follow up again.
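The monitoring itself could be a small scheduled script that reads the current depth of the relevant queues and flags anything that is backing up. A sketch using the pymqi library - the queue manager, channel, host and queue names are made up:

# Sketch: flag WebSphere MQ queues in the UAT environment whose depth is
# growing (an early sign of routing/configuration issues). Connection
# details and queue names are hypothetical; requires the pymqi library.
import pymqi

QMGR = "UAT1.QMGR"
CHANNEL = "SYSTEM.DEF.SVRCONN"
CONN_INFO = "uat1-mqhost(1414)"
QUEUES = ["CIG.PPD.INBOUND", "CIG.BOL.OUTBOUND", "CVA.PAYMENTS.REQUEST"]
DEPTH_THRESHOLD = 50

qmgr = pymqi.connect(QMGR, CHANNEL, CONN_INFO)
try:
    for name in QUEUES:
        queue = pymqi.Queue(qmgr, name, pymqi.CMQC.MQOO_INQUIRE)
        depth = queue.inquire(pymqi.CMQC.MQIA_CURRENT_Q_DEPTH)
        queue.close()
        flag = "  <-- investigate" if depth > DEPTH_THRESHOLD else ""
        print(f"{name}: depth={depth}{flag}")
finally:
    qmgr.disconnect()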
The 'basic' or smoke testing referred to can already be mechanised / automated in many instances (Mafatshe - an example of this is a base currency payment that goes into the "DeliveredForProcessing" status. This indicates that the interface elements between CVA and the back end are in place and have been configured for payments...)
The length of our iterations can be confirmed, but the manual deployment and configuration issues outlined above normally make the duration of the actual test cycles a lot shorter!
Ben


From: Derrick Beling [mailto:dhb@mtom.co.za]
Sent: 07 October 2008 17:57 PM
To: Van Der Merwe, Ben B; Phale, Mafatshe M
Cc: Perold, Louise L
Subject: FW: SCRUM

Note the comments on a build machine. Here is an example of mechanisation to start addressing the problems associated with environments, which refers back to my previous email / blog. The comments from Karl seem to support that one of the critical areas of non-delivery will be the environment, and that Ben and Simon are taking up more of the responsibility of managing the environment.

Regards

Derrick Beling


From: Perold, Louise L [mailto:Louise.Perold@standardbank.co.za]
Sent: 07 October 2008 10:50 AM
To: Derrick Beling
Subject: FW: SCRUM

From: Chris Blain [mailto:cblain@tranxition.com]
Sent: 07 October, 2008 01:42
To: Perold, Louise L
Subject: RE: SCRUM

Hi, I've tested in "SCRUM" environments in my last two jobs.

I put it in quotes because neither time was it implemented 100%. That colors my comments to a degree.

Why did the client choose SCRUM? Are they getting SCRUM training, or are they reading a book and doing it on their own? SCRUM works more as a high level project management method. It says nothing about how you do the development or testing. They need to decide this in addition. For example, are they using XP for their development process? Pair programming, the amount of unit testing, and whether they do TDD development will all affect your testing, as it gives you different places to interject testing influence and dictates when you will know what is being delivered in an iteration.

An important consideration is the length of the iteration. There is a big difference between two week and four week iterations. This will impact the amount of test design and execution you can do. It puts different pressures on the dev team as well.

Do you know if your client has a good build and automated smoke test procedure? I consider these essential for any project style, but especially an agile process. They should be able to push one button and have the build machine go from a clean state, pull down fresh sources, compile the application, create the installer and publish (zip file, CD ISO image, whatever the distribution media is). It should then take the build, put it on a test machine and run some basic tests to validate the build for further testing. I'm not a fan of continuous integration, but the previous infrastructure is essential to agility.

Those are some initial thoughts. Let me know what you think and we can keep the discussion going. --Chris
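Chris's "push one button" build machine is essentially a scripted pipeline. A rough sketch of the steps he describes - every command, URL and path below is a placeholder, not an actual build we have:

# Sketch of the "one button" build machine: clean workspace, fresh checkout,
# compile, package, publish, deploy to a test machine and run basic
# validation tests. All commands and paths here are placeholders.
import shutil
import subprocess

WORKSPACE = "/build/workspace"

def run(cmd):
    print("+", " ".join(cmd))
    subprocess.run(cmd, check=True)        # stop the pipeline on any failure

shutil.rmtree(WORKSPACE, ignore_errors=True)                      # clean state
run(["svn", "checkout", "http://svn.example/trunk", WORKSPACE])   # fresh sources
run(["mvn", "-f", f"{WORKSPACE}/pom.xml", "clean", "package"])    # compile + package
run(["scp", f"{WORKSPACE}/target/app.ear", "testbox:/deploy/"])   # publish/deploy
run(["ssh", "testbox", "/deploy/run_smoke_tests.sh"])             # validate the build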

From: Perold, Louise L [mailto:Louise.Perold@standardbank.co.za]
Sent: Monday, October 06, 2008 2:37 AM
To: Chris Blain; ben.kelly@hitwise.com; tim@openplans.org; Jeff Fry; Andersson, Henrik; carsten.feilberg@s-d.dk
Subject: SCRUM

Hi guys,

Have any of you tested in a SCRUM environment? Any helpful hints, tips, links you can share?

One of our clients is changing from a waterfall development lifecycle to SCRUM. My idea is for us to implement session-based testing to complement this.

thanks
Louise



Thursday, October 16, 2008

The approaching season of summer


Spring is in the air - hay fever, August winds (a bit belated) and all. Keeping the pool clean is a mission - too many trees around the pool and not enough paving / concrete...

The sea holiday is something to look forward to.