Monday, December 29, 2008


From: Van Der Merwe, Ben B
Sent: 09 October 2008 07:47 AM
To: 'Derrick Beling'; Perold, Louise L
Subject: RE: SCRUM

Derrick, Louise
I have spoken to Shirley and got some additional feedback.
At the moment Scrum seems to be implemented in a non-Agile way - the BAs, developers, etc. have full participation in the process, but the testers are in general not part of all the crucial meetings, like the burn-down chart analysis (with Roshni being an exception).
One item affecting the test team (this is input from Arrie) is that (functional) requirements and test cases must happen in quick succession - the implication being that we should already be working on the Feb bucket test requirements and test cases.
This will not address the dependency that test execution has on development -> deployment -> configuration.
Shirley has confirmed that the length of our iteration is planned to be 4 weeks, but from recent experience (Rel 1.2) we have only been able to complete one full test cycle (a week) out of 7 weeks - due to various reasons, but the dependency highlighted above being the major one.
Other aspects include the quality of new code from development (unit testing). I feel very strongly about this - it is discussed in quite some detail under test-driven development (in the SCRUMMING FROM THE TRENCHES PDF document). Maybe it makes testers feel good when they uncover bugs that should have been caught in unit testing, but quality cannot be added after the fact.
Quality starts with requirements, then design, then development. By the time delivery (to the test team) takes place, the quality can only be measured - if it is bad, especially due to bad requirements or bad design/development, it is already costly to fix. The only way to measure and address quality earlier is to involve test analysts meaningfully at an early stage (requirements) - and also in all the SCRUM processes - and for developers to do meaningful unit testing. As a technical test analyst I would definitely prefer to already be involved in this process in some way.
I am sure I can assist with simple and meaningful test cases for unit testing. Currently the design/development phase is a 'Black Box': we cannot provide any input to it, and we do not get any output from it - until it is too late.
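A simple, meaningful unit-test case of the kind offered above might look like the following sketch. Python is assumed purely for illustration (the project's language is not stated), and `convert_to_base_currency` is a hypothetical helper, not an actual project function.

```python
import unittest

def convert_to_base_currency(amount, rate):
    """Convert a payment amount using the given exchange rate.

    Hypothetical helper, used only to illustrate a focused unit test.
    """
    if rate <= 0:
        raise ValueError("exchange rate must be positive")
    return round(amount * rate, 2)

class TestBaseCurrencyConversion(unittest.TestCase):
    def test_typical_conversion(self):
        # A representative value, checked against a hand-computed result.
        self.assertEqual(convert_to_base_currency(100.0, 7.85), 785.0)

    def test_zero_amount(self):
        self.assertEqual(convert_to_base_currency(0.0, 7.85), 0.0)

    def test_invalid_rate_rejected(self):
        # Defects like this are cheap to catch in unit testing,
        # and costly once they reach the test team.
        with self.assertRaises(ValueError):
            convert_to_base_currency(100.0, 0)
```

Run with `python -m unittest` - the point is that each case documents one expectation the developer can verify before handing code over.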
We can still make a good proposal to Karl...

From: Derrick Beling []
Sent: 08 October 2008 12:29 PM
To: Van Der Merwe, Ben B; Phale, Mafatshe M
Cc: Perold, Louise L; du Plessis, Johann J
Subject: RE: SCRUM

Good thinking here.
The challenge we have as testers is that poor planning and performance on the part of someone else constitutes an emergency for us - which means it is actually our problem, and we benefit most from the solution.
So let's evolve this into a proposal for Karl. (We will use SCRUM as the excuse.) Bring it up for discussion on Friday in the training.

Derrick Beling
Managing Director

From: "Van Der Merwe, Ben B" <>
Sent: Wed, 8/10/2008 07:07
To: Derrick Beling <> ; "Phale, Mafatshe M" <>
Cc: "Perold, Louise L" <> ; "du Plessis, Johann J" <>
Subject: RE: SCRUM

The mechanisation / automation of the build and deployment process must be the first objective. This does not really fall within the testing space, but is a prerequisite for achieving on-time (tested) delivery.
We can first look at the build process and the deployment process in isolation. I suspect that the build process has been automated to a huge degree, with some initial configuration (based on the code branch and target environment e.g. INT1, INT2, INT3, INT4 or UAT1, UAT2, UAT3, UAT4) being required before the build is kicked off. (?)
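The initial configuration step described above - branch plus target environment selecting the build parameters - can be sketched as follows. Only the environment names come from this message; everything else (the function, the profile convention) is an assumption for illustration.

```python
# Sketch of the configuration step before an automated build is kicked off.
# The environment names are taken from the message above; the rest is assumed.
ENVIRONMENTS = {"INT1", "INT2", "INT3", "INT4", "UAT1", "UAT2", "UAT3", "UAT4"}

def build_config(branch, environment):
    """Return the parameters a build would be kicked off with (hypothetical)."""
    if environment not in ENVIRONMENTS:
        raise ValueError("unknown target environment: " + environment)
    return {
        "branch": branch,
        "environment": environment,
        # Assumed convention: UAT targets get production-like settings.
        "profile": "uat" if environment.startswith("UAT") else "integration",
    }

print(build_config("release-1.2", "UAT1"))
```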
I also suspect that the deployment and configuration process (CVA, BVA, BPH, Websphere / MQ, BIG servers, SFI, Bankmaster, Equinox, and the interfaces between these) is the bottleneck and is a very manual process. The complexity also increases exponentially with the number of code branches and the number of environments that must be handled in parallel. (James, Khumo, Bradley, Carl, and a few others are required resources - if they are not present, the deployment and configuration often stands still and cannot continue!) Priority is also given to production environments, even though there is an effort to hand these activities over to the production team.
In the short term we can try to alleviate this in various ways, but it will really take a huge effort from the project team to improve on this. Karl Fuchs will have to drive this.
I have suggested the monitoring of the queues in the UAT environment to help with the early identification of Websphere MQ issues in test environments - this monitoring is already happening in the production environments. This can be achieved in the short term - I have requested this from Evert a few times and must follow up again.
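The queue monitoring suggested above could start as simply as a threshold check on sampled queue depths. This is a minimal sketch: how the depths are obtained (the mechanism production monitoring already uses) is not specified here, so the function just takes a mapping of queue name to depth, and the threshold is an assumed value.

```python
# Minimal sketch of queue-depth monitoring for test environments.
# How depths are sampled from Websphere MQ is left open - production
# monitoring presumably already has a mechanism for this.

DEPTH_THRESHOLD = 1000  # assumed alert threshold; tune per queue

def check_queues(depths, threshold=DEPTH_THRESHOLD):
    """Return the names of queues whose depth exceeds the threshold.

    `depths` maps queue name -> current depth, however obtained.
    """
    return [name for name, depth in depths.items() if depth > threshold]

# Example: two hypothetical UAT queues, one backed up.
sampled = {"UAT1.PAYMENTS.IN": 12, "UAT2.PAYMENTS.IN": 4500}
print(check_queues(sampled))
```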
The 'basic' or smoke testing referred to can already be mechanised / automated in many instances (Mafatshe - an example of this is a base currency payment that goes into the "DeliveredForProcessing" status. This indicates that the interface elements between CVA and the back end are in place and have been configured for payments...)
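The payment-status smoke check could be mechanised along these lines. A sketch only: the status-lookup callable is a hypothetical stand-in, since how the CVA / back-end status would actually be queried is not specified.

```python
import time

def wait_for_status(get_status, payment_id, expected="DeliveredForProcessing",
                    timeout=120, interval=5):
    """Poll until the payment reaches the expected status or time runs out.

    `get_status` is a hypothetical callable that queries the back end;
    the real lookup (CVA screen, database, interface log) is unspecified.
    """
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if get_status(payment_id) == expected:
            return True   # interface between CVA and back end is configured
        time.sleep(interval)
    return False          # environment not ready for further testing

# Example with a stubbed status source:
statuses = iter(["Captured", "Queued", "DeliveredForProcessing"])
print(wait_for_status(lambda _pid: next(statuses), "PAY-001",
                      timeout=1, interval=0))
```

A `True` result means the environment is configured for payments and further test execution can proceed; `False` flags the environment before testers waste a cycle on it.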
The length of our iterations can be confirmed, but the manual deployment and configuration issues outlined above normally make the actual test cycles a lot shorter!

From: Derrick Beling []
Sent: 07 October 2008 05:57 PM
To: Van Der Merwe, Ben B; Phale, Mafatshe M
Cc: Perold, Louise L
Subject: FW: SCRUM

Note the comments on a build machine. Here is an example of mechanisation to start addressing the problems associated with environments, which refers to my previous email / blog. The comments from Karl seem to support that one of the critical areas of non-delivery will be the environment, and that Ben and Simon are taking up more of the responsibility of managing the environment.


Derrick Beling

From: Perold, Louise L []
Sent: 07 October 2008 10:50 AM
To: Derrick Beling
Subject: FW: SCRUM

From: Chris Blain []
Sent: 07 October, 2008 01:42
To: Perold, Louise L
Subject: RE: SCRUM

Hi, I've tested in "SCRUM" environments in my last two jobs.

I put it in quotes because neither time was it implemented 100%. That colors my comments to a degree.

Why did the client choose SCRUM? Are they getting SCRUM training, or are they reading a book and doing it on their own? SCRUM works more as a high-level project management method. It says nothing about how you do the development or testing. They need to decide this in addition. For example, are they using XP for their development process? Pair programming, the amount of unit testing, and whether they do TDD will all affect your testing, as these give you different places to interject testing influence and dictate when you will know what is being delivered in an iteration.

An important consideration is the length of the iteration. There is a big difference between two-week and four-week iterations. This will impact the amount of test design and execution you can do. It puts different pressures on the dev team as well.

Do you know if your client has a good build and automated smoke test procedure? I consider these essential for any project style, but especially an agile process. They should be able to push one button and have the build machine go from a clean state, pull down fresh sources, compile the application, create the installer and publish it (zip file, CD ISO image, whatever the distribution media is). It should then take the build, put it on a test machine and run some basic tests to validate the build for further testing. I'm not a fan of continuous integration, but the previous infrastructure is essential to agility.

Those are some initial thoughts. Let me know what you think and we can keep the discussion going. --Chris
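The one-button build-and-smoke sequence described above can be sketched as an ordered pipeline that stops at the first failure. Every command here is a placeholder (the actual VCS, build tool, hosts, and distribution media are not specified), so a real project would substitute its own steps.

```python
import subprocess

# One-button build pipeline, following the steps described above.
# Every command is a placeholder for the project's real tooling.
PIPELINE = [
    ("clean workspace",     ["rm", "-rf", "build/"]),
    ("pull fresh sources",  ["git", "clone", "ssh://example/repo", "build/src"]),
    ("compile application", ["make", "-C", "build/src"]),
    ("create installer",    ["zip", "-r", "build/app.zip", "build/src/dist"]),
    ("deploy to test box",  ["scp", "build/app.zip", "testhost:/opt/app"]),
    ("run smoke tests",     ["ssh", "testhost", "/opt/app/smoke.sh"]),
]

def run_pipeline(pipeline, runner=subprocess.run):
    """Run each step in order; stop at the first failure.

    Returns the names of the steps that completed successfully.
    """
    done = []
    for name, cmd in pipeline:
        result = runner(cmd)
        if result.returncode != 0:
            break  # a failed step invalidates everything after it
        done.append(name)
    return done
```

Injecting `runner` keeps the sequencing logic testable without touching real infrastructure; the smoke-test step at the end is what validates the build for further testing.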

From: Perold, Louise L []
Sent: Monday, October 06, 2008 2:37 AM
To: Chris Blain;;; Jeff Fry; Andersson, Henrik;
Subject: SCRUM

Hi guys,

Have any of you tested in a SCRUM environment? Any helpful hints, tips, links you can share?

One of our clients is changing from a waterfall development lifecycle to SCRUM. My idea is for us to implement session-based testing to complement this.



