Cynefin In Software Testing

The Cynefin framework has found many useful applications in the software development industry. Liz Keogh has done great work in applying the model to software development.

Some attention has been paid to Cynefin from a software testing perspective, such as by James Christie, Jesper Lottsen & Joe Larson, all of whom have helped form my ideas on Cynefin & testing.

This post is my attempt to improve my understanding of Cynefin & how it can be applied to software testing.

(Another) brief introduction to the Cynefin framework

 

[Image: the Cynefin framework as of 1st June 2014 - courtesy of Wikipedia]

Dave Snowden talks about the framework in his own words on Cognitive Edge

Disorder

  • Not knowing what domain you are in
  • We revert to our preferred problem-solving style when making decisions (which may be incorrect for the situation at hand)

Obvious (previously Simple)

  • Sense - Categorise - Respond
  • Can apply best practice
  • Repeating relationships between cause & effect that are evident to any reasonable person
  • Things in this domain don’t change, yet may have high utility whilst they remain here

Complicated

  • Sense - Analyse - Respond
  • Can apply good practices (multiple practices might solve the problem)
  • Repeating relationships between cause & effect that are not self evident
  • Good for exploitation

Complex

  • Probe - Sense - Respond
  • Practices emerge through probing (exploring through safe-to-fail experiments)
  • End state is not known
  • Cause & effect apparent in retrospect
  • Good for exploration

Chaotic

  • Act - Sense - Respond
  • Can apply novel practice
  • Can move into this domain willingly or unexpectedly
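
To keep the decision models straight, here’s a minimal lookup in Python summarising the lists above - it’s just a restatement of the bullets, nothing more:

```python
# A minimal restatement of the decision models listed above.
DECISION_MODELS = {
    "Obvious": ("Sense", "Categorise", "Respond"),
    "Complicated": ("Sense", "Analyse", "Respond"),
    "Complex": ("Probe", "Sense", "Respond"),
    "Chaotic": ("Act", "Sense", "Respond"),
    "Disorder": (),  # no model -- we don't yet know which domain we're in
}

print(" - ".join(DECISION_MODELS["Complex"]))  # Probe - Sense - Respond
```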

In this post, I propose that a lot of the problems we encounter during testing are a result of us believing we are in one domain when actually we are in Disorder. In Disorder we adopt our most comfortable problem-solving style, which may or may not be correct. The approaches to, and techniques used in, testing in each domain should be different; if we apply one when we should be applying another then trouble (AKA Chaos) could be looming just around the corner…

It is important to remember that Cynefin is fundamentally about dynamics & movement, not static states.

I argue that when we pick up a piece of work, we start in the domain of Disorder as we do not yet know which domain we are in. For example, is it an urgent issue in the Production environment that we don’t yet understand, suggesting that we might be in Chaos? Is it a new feature that no one in the team has developed before, suggesting we might be in the Complex domain? Or is the change a refactor to a well-trodden piece of code, which might put us in the Complicated or even the Obvious domain?

For the purposes of this post I’ll start us in the Complex domain by taking the example that we are developing a new feature for our website. No one in the team has developed a feature like this before. We are in a development cycle that appreciates fast feedback.

The Complex domain is valuable for exploration. As we pick up the work, we discuss what problem this feature is hoping to solve and what the expectations of the various stakeholders are. We ask questions of the desired software before any code is written. We explore “what if?” & “what about?” scenarios. When we think we understand the problem enough, we start to think about potential solutions to the problem & eventually we start to think about writing & testing some code.

At this stage we’re still exploring - we might try out small safe-to-fail experiments (aka spikes) to help us get started & identify which route to take. Someone has the idea to research how problems similar to ours have been solved by others (effectively calling in the experts) which, if the search is successful, might move this part of the feature development into the Complicated domain (David Anderson points out you can hire your competitors in order to move from the Complex to the Complicated domain).
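
To make the idea of a spike concrete, here’s a minimal sketch. The question it answers is invented purely for illustration; the point is the shape of the experiment - small, disposable & answering exactly one question:

```python
# A hypothetical throwaway spike: a safe-to-fail experiment we run once,
# learn from & then discard. The question here is invented for illustration:
# does a JSON round-trip preserve our sample payload & its key order?
import json

sample = {"id": 42, "tags": ["new", "feature"], "price": "9.99"}
round_tripped = json.loads(json.dumps(sample))

# The spike's only job is to answer the question quickly.
print("Round-trips cleanly:", round_tripped == sample)
print("Key order preserved:", list(round_tripped) == list(sample))
```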

Here, we start exploiting the knowledge of others who have solved this problem before & hopefully we find a few different solutions which might help us progress.

What about testing?

“You said this post was about Cynefin & testing. I’ve not seen any mention of testing yet…”

We’ve been testing ever since we picked up the piece of work - we’ve been exploring to gather information to help us develop a solution to a problem, which I’d say is one of the fundamentals of testing!

  • The questions we’ve been asking about the problem are testing
  • The spikes are testing
  • Researching the problem is testing
  • Gaining a perspective on the information we find from the research is testing
  • Implicit in this post is the fact that the Programmers are applying Test Driven Development (TDD) principles to help them explore their potential solutions.

This gets me to the point that when we start a new piece of work we are actually in the Complex domain, primarily because we don’t know what the ideal end state looks like (although we try to pretend we do).

This is why Exploratory Testing is so important, especially at the start of development (of a feature). We establish a set of desires of how we believe the software should behave and we set out to explore if the software fulfils those desires.

Once our explorations have provided us & the team with enough information to make informed decisions then we can be seen as crossing the boundary from the Complex into the Complicated domain.

In the Complicated domain, we can start to exploit the knowledge from our explorations. We can start to codify that knowledge, for example as scripts that are executed automatically, so that we can check that our knowledge is still true & valid.
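
As a hedged sketch of what codified knowledge might look like - assuming pytest, with a hypothetical `calculate_discount` function standing in for whatever your explorations actually uncovered:

```python
# test_discount.py -- illustrative only. During exploration we (hypothetically)
# learned that orders of 100 units or more attract a 10% discount; these
# checks let us confirm that knowledge is still true & valid on every run.

def calculate_discount(quantity: int) -> float:
    """Stand-in for the production code under test."""
    return 0.10 if quantity >= 100 else 0.0

def test_bulk_orders_receive_ten_percent_discount():
    assert calculate_discount(100) == 0.10

def test_smaller_orders_receive_no_discount():
    assert calculate_discount(99) == 0.0
```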

It’s not just Testers doing exploratory testing; the Programmers are also exploring as they write the code. When they are writing their failing automated checks, they are thinking about & setting out the desires (more typically though it’s expectations they’re setting) of the software. When they start to write the code, they are thinking about how to make that check pass. Once it passes, they are thinking about how they can refactor the code.
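
A minimal sketch of that cycle, again assuming pytest & a hypothetical `slugify` helper; the comments mark the steps:

```python
# Step 1 (red): set out the desire first, as a failing check.
def test_slugify_lowercases_and_joins_with_hyphens():
    assert slugify("Cynefin In Testing") == "cynefin-in-testing"

# Step 2 (green): write just enough code to make the check pass.
# Step 3 (refactor): tidy the code while the passing check keeps us honest.
def slugify(title: str) -> str:
    return "-".join(title.lower().split())
```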

Once the Programmers re-run those automated checks, for example as change detectors, the problem moves into the more desired Complicated domain where we can exploit, scale & repeat various solutions.

If we start development by establishing an ideal future state - defining all the requirements up front, striving for fail-safe design & writing test cases which will all pass once we reach that future state - then we fall into the trap of believing that software development exists in the ordered domains of Complicated & Obvious.

If we attempt to manage a development project as if it were ordered and the project starts to fail, the people doing the development hide the failure. As this continues, the managers grow complacent, believing the project is progressing fine, until the failure is finally uncovered - by which point it is so catastrophic that it pushes the project over the cliff of complacency into Chaos. It could now take more effort to pull the project out of Chaos back into Complexity than if it had been treated as complex in the first place.

This is an example of the observation that the density of informal networks is directly proportional to the level of perceived bureaucracy: the more bureaucracy in an organisation, the more informal networks there will be to cope with that bureaucracy.

I’m starting to think that most testing problems start out in the Complex domain regardless of the business problem - even though a problem may have been solved by someone else, the system in which the problem was solved is likely to be so different as to make the testing of the solution still complex. This is a whole other post…

How do I know what domain I’m in?

When picking up a new piece of development work, try to understand what domain the problem you are trying to solve is in (relative to your ability to solve it).

Liz offers a simplified scale to help you determine the domain in her Estimating Complexity post:

  1. Just about everyone in the world has done this.
  2. Lots of people have done this, including someone on our team.
  3. Someone in our company has done this, or we have access to expertise.
  4. Someone in the world did this, but not in our organization (and probably at a competitor).
  5. Nobody in the world has ever done this before.

[Image: Liz Keogh’s Estimating Complexity scale laid over the Cynefin framework - courtesy of Liz Keogh]
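
As a rough illustration (the boundaries are my own reading of Liz’s diagram, not part of her model), the scale maps onto the domains something like this:

```python
# An illustrative mapping of Liz Keogh's 1-5 scale onto Cynefin domains.
# Where exactly the boundaries fall is a judgement call, not part of the model.
def cynefin_domain(score: int) -> str:
    return {
        1: "Obvious",      # just about everyone in the world has done this
        2: "Complicated",  # lots of people have, including someone on our team
        3: "Complicated",  # expertise exists in (or is accessible to) our company
        4: "Complex",      # solved somewhere in the world, but not by us
        5: "Complex",      # nobody in the world has ever done this before
    }[score]

assert cynefin_domain(5) == "Complex"
```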

Nip long, drawn-out meetings in the bud by accepting that different ideas may actually work - don’t try to work out all the minute detail up front; instead opt for safe-to-fail experiments and base your decisions on the results of those experiments.

Summary

Be sure you are at least aware of what domain you could be in before you dive in & start work. If in doubt, err on the side of caution & assume Complex.

If you’ve identified that the problem is actually in the Complex domain, don’t push for all the requirements up front - you won’t get them, because the people you’re asking are unlikely to know them. And definitely do not try to write a raft of test cases to “prove” the requirements you do have - spend your time thinking about how you can explore, learn about & share information on the software you are helping to develop.

And finally, recognise, appreciate and learn from failures instead of trying to hide them - they will surface & the longer they stay hidden, the worse the situation will be when they do surface.

There are other posts to follow discussing the ideas of Dave Snowden & their impact on software & testing.

Useful links

For more information about Cynefin in software development, I’d recommend reading Cynefin For Devs & Estimating Complexity from Liz.

A couple of great talks that formed the basis of this post (& subsequent others in the pipeline) include:

Dave Snowden - Cynefin at LLKD13

Dave Snowden - How Not To Manage Complexity

Liz Keogh - BDD & Cynefin

  • Anthony Green

    “We establish a set of desires of how we believe the software should behave and we set out to explore if the software fulfils those desires.” This is something of a contentious topic amongst the Cynefin community. Some advocate a pre-known quality, your ‘set of desires’, and advocate using Sensemaking techniques to check progress. Others, myself included, favour situational awareness, using catalytic probes and making continual decisions on the basis of the stream of information they report back.

    • DuncanNisbet

      Thanks for stopping by Anthony, I appreciate your comment.

      My intention here is that the word “desire” drives out conversations like “it should do this”, which leaves us open to the idea that it might not behave like that - & so what if it doesn’t? Is that a problem? It leaves us open to new information & hopefully serendipity.

      This is opposed to “expected results”, which is less exploratory & potentially closes the door on information that does not confirm the results. I might say that expectations are good for the automated scripts in the Complicated domain.

      I fully agree about situational awareness - this is exactly how I see exploratory testing applied to the Cynefin model but that doesn’t appear to have come across in the post.

      The definition of ET I follow is simultaneous learning, test execution & design to discover new information about the software to enable informed decisions to be made.

      I see missions & test charters as the catalytic probes, the start of our exploration

      As we’re performing our testing, executing our charters against the mission of the session, we’re free to create new charters for later exploration based on the information we find. We allow ourselves to be open to other ideas.

      What would catalytic probes look like in your context?

      Duncs

      • Anthony Green

        The notion “it should do this” prescribes an end state. Instead, in (an alternative vision of) the complex domain you have multiple hypotheses, including “it shouldn’t do this” and “is this even the right thing to be working on?”. Thus you have multiple safe-to-fail experiments, some contradictory, some oblique. You determine from observed patterns of usage and captured stories whether it’s worth pursuing. But this means you have true serendipity - you can end up working on things there was no way of predicting you’d be working on, including things that run contrary to your own inclination. Skunkworks is the commonest form of dealing with complexity, employing numerous interventions/experiments as your catalytic probes. Snowden examines the complex domain in detail in this video: https://vimeo.com/74724603

        • DuncanNisbet

          I would argue that few development teams work in that nirvana 🙂

          The theory is great, but when it comes to applying that theory my experience has shown me that some end state is desired, be that a solution already provided or a problem posed to the development team for them to solve.

          I’m intrigued - can you share any examples from your day-to-day working practices where you are applying Dave’s theory?

          • Anthony Green

            End-state is certainly desired in ordered domains, but applying it in complex situations collapses options prematurely. Complexity is a valuable domain, rich in exploitable opportunities for those with expertise. Some may attempt to claim the privilege of complexity, says Snowden (http://cognitive-edge.com/blog/entry/6398/perspectives-on-cynefin/), but that’s a mistake.

            Software/product development has yet to grasp the nature of complex adaptive systems in the same way as NGOs, so there are, at least to my knowledge, no public case studies. Our frustrations are compounded by the IP surrounding SenseMaker®, a tool a few people have told me would be integral to exploring complexity further.

          • Anthony Green

            There’s a new presentation by Snowden that talks directly to these points: https://vimeo.com/120942121

          • DuncanNisbet

            Nice one, thanks for this Anthony - I’ll get it watched

          • David Michel

            Hi, sorry to jump in on a conversation from a year ago. I just stumbled upon the blog post and that discussion…

            Is the difference between you two simply that Duncan sees the “what” as known and the “how” as unknown and complex, whereas Anthony sees the “what” (i.e. the end state) also as unknown and complex?

          • DuncanNisbet

            I see it differently in the software development teams I’ve been in. The end / desired state is the problem being solved. Ideally, how the problem is solved is worked out through exploration, but more frequently I’ve observed that a solution is requested outright (this is slowly changing, in my experience).

            With Exploratory Testing, as opposed to scripted testing, we have an idea of the end / desired state, but we don’t follow one route through the system to that end / desired state - we explore the system.

            Because we are exploring, we are not collapsing options prematurely. On the contrary, through exploring we open other options we were not aware of when we started testing (which might form charters for future exploration).

            So yes, I agree with you that treating software development as if it were in the ordered domains - and therefore writing scripts to follow to confirm the end states - will close down options.

            The point of this post was an attempt to show the importance & value of Exploratory Testing, as software development is in fact complex & therefore trying to codify our work into scripts too soon will highly likely cause us to fail.

          • Anthony Green

            In the complex space we may not know what the problem is we’re trying to solve: “people don’t know what they want and when they get it they want something different”. Alternatively we can try to continually manage the situated present and, as our safe-to-fail interventions gain coherence, amplify them and steer them towards Complicated - the better to exploit them. Capturing scripting opportunities whilst exploratory testing adds increasing constraints - a move toward Obvious. We recognise that value is obtainable in all of the domains, it’s just manifest differently in each. There’s also a strong temporal component: Obvious is about long-term exploitation, Complex about the immediate present.

          • DuncanNisbet

            “Capturing scripting opportunities whilst exploratory testing” Not sure I explicitly said that. Exploratory Testing & looking for opportunities to codify / script are not directly linked

          • Anthony Green

            For the benefit of future readers I should disclose that there are Cynefin advocates like Greg Brougham who support your view http://www.infoq.com/presentations/cynefin-framework-uncertainty 🙂

          • DuncanNisbet

            Excellent - thanks for the link Anthony. Always good to get another angle

          • Anthony Green

            Your question goes to the heart of the struggle over Cynefin. One can’t, in my view, rebadge Complicated as Complex. You have, in Snowden’s much-emphasised phrase, ‘to think anew to act anew’. I might also infer by that same statement that no general uptake of a CAS approach will emerge whilst people negate the probe, sense, respond approach to Complexity as ‘nice in theory’. Perhaps we need to start making wider leaps of faith if we’re to explore the depths of complexity.

          • DuncanNisbet

            Who’s rebadging Complicated as Complex?

            I agree with you on the idea of wider leaps of faith, although I might rephrase it as more of an openness to exploration with less of a defined end state.
