I Love Test Scripts

91. Stop following test scripts and think.

I’m getting this posting, “Stop following test scripts and think”, written before Michael Larsen gets to my two testing quotes, which were recently published in the “99 Things You Can Do to Become a Better Tester” booklet by the Ministry of Testing. It won’t be long: he’s currently at #72 as of this posting.

Test Scripts

There have been plenty of postings against testing with scripts from within the Context Driven School of Testing, so was my quote just an obvious, trite cliché? It’s obvious to me, yes, but in my experience it’s not obvious to others within the testing field. Why have I so often met resistance to changing the way test scripts are written?

Well, I’ve been just as resistant, and to explain that I need to make an admission. I was a long-term advocate of test scripts, and the more detail the better was my philosophy. Damn you, ISTQB. My basic premise was that I wanted test scripts written so that anyone who came into QA could immediately start testing against them, because they were so detailed and easy to follow.

I didn’t see that as a problem until I started to understand what we lost by using that tired methodology. Where’s the testing here? There isn’t any. This is checking by following the dots; this kind of testing (in the loosest sense of the word) can be done by anybody, but it’s not testing. Very little is gained by this approach. The only thing it could assist with is planning out a regression automation suite, and even then it really isn’t worth the time or effort, nor is it needed.

Do I use test scripts now? Yes. So what are you talking about then?

It’s not the use of test scripts in general I’m against; it’s the detail contained within the test scripts, and how the scripts are used by testers, that is the problem. The test scripts I currently use are very high level, with no expected outcomes: just enough detail to guide the testing activities around critical functionality prior to each release. The key word in that sentence is guide. Not instruct, not a line-by-line “do this, expect that” approach.

I am not saying this is a perfect approach, but what we do gain is a reasonable understanding of what has been tested from a regression point of view. These are tests, as they allow for the tester’s individual interpretation of the process flow from start to finish within that particular functional area. They are valuable within my current context in one organisation. The way these scripts are written allows the tester to explore and to think for themselves; they are, in essence, mini exploratory mission statements rather than test scripts.

Stop calling them test scripts then!

However, in another organisation I work for, we don’t have any test scripts; we work from Mind Maps, as that’s a more appropriate approach in that context. Why do I have test scripts in one organisation and not in another? It’s not down to a lack of authority to change the way either organisation tests; that’s my remit for both. I use exploratory test scripts in one organisation because the product we are testing is highly complex and mission critical for our customers, and I do not believe Mind Maps would work in this context. The reason is mainly the design of the Mind Maps themselves: they would become too unwieldy, and I don’t believe they would add anything to the types of exploratory test scripts we currently develop.

In the other organisation, Mind Maps are the obvious approach to testing: easy to use as a guide, easy to see where you are with your testing, and small rather than unwieldy. It’s a web site, after all, not multiple standalone products as with the other organisation.

Exploratory testing isn’t about going straight into testing without any idea of what the aim of the exploratory session is, then randomly and aimlessly pressing buttons to see what happens. Exploratory testing has a remit of what you are trying to achieve. The test scripts we use are just the same as an exploratory remit: they allow the tester to explore and think. The test script sets up a scenario and what is to be achieved; how the tester gets from the start to the end is down to their own imagination.
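
To make that concrete, here is a minimal sketch of what one of these guiding scripts, or mini exploratory mission statements, might look like. Every field name and value below is a hypothetical illustration of the idea, not the actual format used in either organisation:

```python
# A hypothetical "mini exploratory mission statement": a scenario, a starting
# point and a goal, but no numbered steps and no expected outcomes.
charter = {
    "area": "Repository checkout",  # a critical function to cover pre-release
    "setup": "Fresh install, default customer-style configuration",
    "mission": (
        "Explore checking out a large repository; vary credentials, "
        "cancellation mid-checkout and interrupted connections"
    ),
    "goal": "A working copy the customer could actually use",
    # Deliberately absent: step-by-step instructions and expected results.
    # How the tester gets from setup to goal is down to their imagination.
}
```

The point of keeping it this sparse is that two testers running the same charter will legitimately take different routes, which is exactly the variation a detailed script suppresses.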

Yes, the title of this blog is somewhat of a lie to pull you in to read it, but just because someone says they use test scripts, I think you should ask what kind of test scripts they use and what they mean when they say test scripts, before you make a judgement.

17 Responses to I Love Test Scripts

  1. Stephen,

    Nice post. I’ve been thinking a lot lately about the continuum of discretion that testers can/should exert while testing. The Bach Brothers captured the continuum of possibilities in what they called the “Tester Freedom Scale.”

    I’ll be presenting at CAST next week and will refer to this post because it fits in with my presentation. My plan: after talking about pros and cons of a few different potential places on the continuum, I’ll propose what you’re describing here as a pretty sensible place for a context driven tester to “land.” Then, assuming the level of “tester freedom” you describe, I’ll describe how pairwise and related test design methods can usually be used (as a way of guiding “tests” or “test ideas”) to increase variation during testing.

    Justin Hunter

    • We should always be thinking about the best way to test within our context. There are so many methodologies and there isn’t one way; you have to be flexible, you have to adapt to the changing landscape.

      I wish I could have been at CAST; unfortunately, this year it wasn’t to be. I know so many people who are going that, hopefully, from the tweets it’ll be almost like being there.

      Good luck with your talk, I hope I can catch it somewhere, sometime.

    • Thanks for the link; your posting is very detailed and somewhat prescriptive in its use of combinatorial testing. My posting was very high level, but I’ll explain in more detail in a reply to another poster who has asked a specific question about combinations and tracking progress.

  2. Hi Stephen,

    I like this approach for UAT, and I’m actually in the middle of writing my own post on the subject. However, when testing is in the SYS or SIT phases, how are you ensuring that you have enough coverage of the different permutations of the system and variables?

    [Stephen’s Reply | This posting is very high level and doesn’t go into the specific details you have asked about. Firstly, the permutations we have to test are limitless. Beyond browsers, we also have to consider the O/S (Windows, Mac and many Linux variants), plus 3 different Java versions and 3 different SVN binaries. Those alone create a vast number of permutations, so we have automated smoke tests for various combinations, and then cover only commonly used variants. The products themselves also have countless permutations: firstly, we use variations of the configuration and setup that customers use. We then have test cases, still high level, that state scenarios from which to devise combinatorial testing; some of these we keep and add to the stack for automation, some are thrown away because they were testing something specific at that time. Test cases are not stagnant; they are continually evolving to provide more coverage. Hexawise is something we have recently been looking at to provide more understanding of coverage; however, we have a good understanding of how our products are used in the wild, so those are the main permutations that we test. This is in no way perfect, and again we continue to improve over time with each and every product release.]
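
    As a rough illustration of how combinatorial selection tames that explosion, here is a minimal greedy pairwise sketch in Python. The parameter values are illustrative assumptions (the reply above counts three Java versions and three SVN binaries but doesn’t name the exact browser or O/S lists), and a dedicated tool such as Hexawise does this job far more thoroughly:

    ```python
    from itertools import combinations, product
    from math import prod

    # Illustrative parameter values only: the counts come from the reply above,
    # but the exact names and full browser/O-S lists are assumptions.
    parameters = {
        "browser": ["Chrome", "Firefox", "IE"],
        "os": ["Windows", "Mac", "Linux"],
        "java": ["Java 6", "Java 7", "Java 8"],
        "svn": ["SVN 1.6", "SVN 1.7", "SVN 1.8"],
    }

    def greedy_pairwise(params):
        """Pick whole configurations until every pair of values is covered once."""
        names = list(params)
        idx_pairs = list(combinations(range(len(names)), 2))
        # Every cross-parameter pair of values that must be exercised.
        uncovered = {
            (i, va, j, vb)
            for i, j in idx_pairs
            for va in params[names[i]]
            for vb in params[names[j]]
        }
        candidates = list(product(*params.values()))
        chosen = []
        while uncovered:
            # Greedily take the candidate covering the most uncovered pairs.
            best = max(
                candidates,
                key=lambda row: sum(
                    (i, row[i], j, row[j]) in uncovered for i, j in idx_pairs
                ),
            )
            chosen.append(best)
            uncovered -= {(i, best[i], j, best[j]) for i, j in idx_pairs}
        return names, chosen

    names, rows = greedy_pairwise(parameters)
    exhaustive = prod(len(v) for v in parameters.values())
    print(f"{len(rows)} configurations cover every pair (exhaustive: {exhaustive})")
    for row in rows:
        print(dict(zip(names, row)))
    ```

    For these illustrative values the greedy pass typically needs around nine to twelve configurations against 81 exhaustive combinations, which is why pairwise-style selection plus automated smoke tests keeps the permutation space manageable.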

    What you discuss in the post sounds like you are pushing in the direction of non-session-based exploratory testing with very specific missions, rather than a more general mission of a time-boxed nature. The danger here is: how do you know when to stop, and how do you track your progress?

    [Stephen’s Reply | Firstly, we stop when we said we would stop. Our estimation is very mature with certain products and improving with others; we have a clear testing cycle, with significant amounts of automation to help catch regression bugs. Obviously, sometimes unexpected delays occur, priorities change and new deadlines are agreed, but we always work to a deadline and release then, unless something highly significant is found.]

    [Even though I have provided some detail here, this is in no way the full story.]

    Also, I’ll be in touch soon regarding the live testing session for #shefftest.

    [Stephen’s Reply | Sounds good, get in contact soon]

    Cheers

    Dan

  3. kinofrost says:

    “The test scripts we use are just the same as an exploratory remit: they allow the tester to explore and think.”

    You cannot get a human to NOT do exploratory testing – it’s the product of a huge set of innate, tacit processes. Exploratory/scripted is not a binary decision, it’s a sliding scale, and bringing that concept to the fore allows us to understand not only the degree to which we should not stick to the script, but the degree to which we do not. No decent tester spots a problem and then ignores it because it’s not on their script.

    On your scale I don’t know if the test scripts are actually test charters, which makes this a purely semantic issue.

    But I don’t even think that’s the issue with test scripts – I think the issue is repeatability. Do you know why you want to repeat those tests over and over instead of thinking about other areas to test or learning/exploring the product? Because if not then I think that’s where the issue is, not the lack of exploratory testing.

    Also, I don’t think the question of whether exploratory testing is ad hoc testing is the issue here either. It’s not about doing or not doing exploratory testing – every human tester does it already – it’s about whether it’s done well, in terms of the skill of the tester and the way in which the exploratory approach is applied.

    • Our test scripts are in no way perfect, but we are improving upon them continually. They are not prescriptive; they are not scripts that, when used, force the testers to test the same thing each and every time. They are functionally based, with a starting point and a goal described within them. This does not reduce the tester to a robot following strict instructions; it allows them to explore with an aim in mind.

      Exploratory testing is *not* ad-hoc testing. At first I thought I’d actually written that, and on review I see that I didn’t; however, you seem to have interpreted my posting that way.

      For a better description of what Exploratory Testing is and is not I suggest you read: What Exploratory Testing Is Not (Part 5): Undocumented Testing

      • kinofrost says:

        “they are not scripts that, when used, force the testers to test the same thing each and every time”
        Then are they scripts?

        From the post: “The test script sets up a scenario and what is to be achieved; how the tester gets from the start to the end is down to their own imagination.”
        Are they *really* scripts?

        To me a script is just a set of (externally) imposed instructions that tell someone or something what to do. I don’t know what your scripts look like, but again I’ll state that I don’t think the problem is the level of exploratory work done when executing a script (the unscripted parts), but the reason to have the scripts in the first place. I suppose I’m saying that while it’s great that you love scripts, you’ve yet to get me to understand why.

        Re: ad-hoc – Sorry, I misrepresented my point. I know very well that exploratory testing is not ad-hoc, and you said basically the same thing in the article (“Exploratory testing isn’t about going straight into testing without any idea of what the aim of the exploratory session is, then randomly and aimlessly pressing buttons to see what happens.”) and I agree. I was bringing up the point that ad-hoc != exploratory does not imply that test scripts are therefore good. External structure is not necessarily better than internal structure.

        I’d like to emphasise that while ET is not always undocumented, *it can be undocumented*. It even says as much in the article you linked me to (“any kind of testing can be heavily documented or completely undocumented”). Exploratory testing is an approach, not a set of techniques that includes the step “work from a guiding list of instructions”. There may be a lot of exploratory work in your scripts, there may be little. That’s the difficult part of writing test scripts – specifying what is necessary AND sufficient for the particular person who is to follow them. And that’s all fine…

        But I don’t know why you call them test scripts. To be honest I think we’re on the same page in two different languages – you say “test scripts that allow for exploratory work” and I say “guided exploratory testing using narrowly defined test charters”. It depends what your test script looks like. But again, I don’t think that’s the problem with scripts in this sense, I think the problem is: “why are you increasing costs by writing these steps down?”. If you can answer that question for your context I think I’ll be able to agree with you.

        Can you answer your own challenge: “Stop calling them test scripts then!”? What’s wrong with your own suggestion, “mini exploratory mission statements”? You could call them “MEMMs” in the office; it has a nice round sound to it.

        • I call them scripts in this posting to get people to read it, as it’s provocatively titled.

          However, as we are still in the process of moving away from test scripts in the traditional sense, with expected results, we have a mixed bag presently. I do understand your point, though, as we are aiming for mini functional testing charters, or mission statements, as you have said.

          I like the MEMMs idea; however, that acronym is too similar to MEME, and I wouldn’t want our testing charters to be referred to as MEMEs. So probably FTC or TMS, although they haven’t got the same ring to them.

  4. Kiran says:

    By looking at the subject, I thought we were talking about automated scripts.

    While everything is context-based, I doubt having thorough test scripts is going to add a lot of value. As requirements change, test scripts have to be updated, and soon we find that maintaining them is a nightmare.

    Also, I can see that not having any guidance (either scripts or charters) is not going to work in a large team, as one will soon have no clue who is testing what and how much progress has been made.

    Just having pre-written charters alone won’t work either, for the simple reason that it is not practical to come up with a list of all the charters up front. We start with some basic charters and come up with new ones as we explore, building on our findings from the previous charter. This poses additional challenges in estimation: since we don’t have all the charters beforehand, we can’t estimate against all of them.

    I think the approach will also depend on the project methodology. What would be more appropriate in an Agile project with 2-week iterations, where no dev task is more than 2 days’ effort and there is a test task for every dev task? Do you want any more written scripts at that stage than the task (possibly in JIRA) itself?

    Also, how much does it really matter within a team whether you call it a script, a charter, a MEMM or a MEME? I can see why it matters in a public forum/blog, as it can lead to different interpretations, but I wouldn’t be overly worried about what something is called within a team, as long as everyone understands what we are doing.

    • kinofrost says:

      “Also, how much does it really matter within a team whether you call it a script, a charter, a MEMM or a MEME? I can see why it matters in a public forum/blog, as it can lead to different interpretations, but I wouldn’t be overly worried about what something is called within a team, as long as everyone understands what we are doing.”

      I basically agree with you, but I think it does matter what we call them. Words evoke memories, feelings and a sense of our understanding. If I said “I legally assaulted her” instead of “I poked her gently in the arm”… they sort of mean the same thing (under the technicalities of UK law) but have very different feelings to them. In the same way, “script” to me evokes pictures of computer programs and rigid instructions.

    • kinofrost says:

      “Writing a sample of the test ideas as a Detailed Test Script allows us to validate a group of test scripts.”
      It does?

      “We can still do that and abide by the D.R.Y. rule.”
      If you’re validating a group of test scripts you’ll later run, then you will be “repeating” your test steps every time you run that script (except the first time, of course).

      “BTW – I do not understand why you blame ISTQB for test scripts when this habit has been there long before ISTQB was founded, written in books, standards and course material.
      Does that make the post sound more “cool” ?”

      Those books, standards and course material are, for the most part, wrong. They describe testing that doesn’t work, never worked, and doesn’t work today. The ISTQB continue to teach “testing” that doesn’t work, never worked, and doesn’t work today. They are to blame for the over-reliance and use of test scripts in the same way that homeopaths are responsible for the teaching and use of old “medicines” that don’t work, never worked, and don’t work today beyond the placebo effect. I still blame them for teaching it, even though homeopathy existed before they were born, because we now know better.

    • I haven’t gone into the specifics of how we test using scripts and what our future aims are, so I’ll leave that for another blog post. But one thing to note here, since you mention JIRA tasks: some we create new tests for, most we don’t. Again, here we can use exploratory testing to test the issue and around the issue; the ticket itself is the testing charter, so to speak.

  5. There is one thing I have found detailed test scripts useful for:
    Writing a sample of the test ideas as a Detailed Test Script allows us to validate a group of test scripts.
    Sometimes we are unaware of limitations or needs arising from the test ideas
    until we elaborate them a bit more.
    Then reviewers can get more into the details and indicate limitations in the suggested approach, setup, tools etc.
    We can still do that and abide by the D.R.Y. rule.

    BTW – I do not understand why you blame ISTQB for test scripts when this habit has been there long before ISTQB was founded, written in books, standards and course material.
    Does that make the post sound more “cool” ?

  6. Hi Stephen,

    The term “test script” has become a bit of a poisoned term in the CDT community these days (I have to admit to a bit of bias in that regard), so it’s interesting to see you’ve shone a light on how a script might be used for good instead of evil. That said, KinoFrost may have a point in that these may no longer be scripts as most testers would understand them, but you’ve already got that, so disregard this sentence…

    You may have already read it, but the first thing that popped into my head while reading this was Cem Kaner’s presentation on checklists – http://www.kaner.com/pdfs/ValueOfChecklists.pdf. Along with a few other experiences and wise heads, it helped modify my “test scripts = bad testing” attitude, and while I don’t use them in the mass-market, conventional sense, some of the related / semi-related precepts are among my testing approaches.

    Looking forward to any follow-up posts you might do on this one.

    • Thank you for that link. I’ve read a lot of Dr Cem Kaner’s material, but there is so much out there that you always find something new.

      I will be writing a follow-up post in the future, and that one will be more specific about what we really do, rather than the generalities I’ve included in this posting.
