As you may be aware, I follow certain testing folks in the Context Driven community. Some of these testers are members of the Miagi-Do School of Software Testing.
I have read & heard about the Miagi-Do school for a while - I knew I had to complete a challenge to ‘gain entry’ & prove my worth, but I had never got round to following up on how to go about receiving one.
Now, largely due to a post from David Greenlees, I got my ass into gear & contacted the Miagi-Do school for a challenge!
It was Matt Heusser who offered the first challenge (well, Michael Larsen offered first, but due to time constraints we played out Matt’s challenge first).
The challenge was a sifting exercise - sort through all available options & provide feedback to Matt on the best deal that met his needs (requirements). Matt had tried it out on Twitter previously, so I had a rough idea what it was about. This also gave me a list of questions which had already been answered, to bring to the table & elaborate upon.
The challenge was carried out via email, with a debrief on Skype, which certainly worked well for me. The full transcript of the debrief can be found here.
Because the challenge was timeboxed, I chose a time when both Matt & I would be available to try & complete the challenge ‘real-time’. As it turns out, I had to crash out & give my response in the morning. I had already made my decision before going to sleep, but I needed some time to craft a suitable answer.
The debrief was good &, as with my other experiences of Skype coaching/conversations, it was a Socratic learning exercise.
I struggled with the idea of ‘best deal’ - I had no idea what Matt’s values were. What I might consider to be the best deal is very likely to be different from Matt’s.
What I noticed is that I attached a load of assumptions to Matt’s request. This allowed me to get to an answer quicker, but I may have sacrificed the value of my answer by not making relevant suggestions.
Matt didn’t seem to mind that I made assumptions in order to help me provide an answer to the challenge - this countered my coaching with Anne-Marie, which had left me with the impression that assumptions are bad. I need to get my head around this one. Was it OK to make assumptions in Matt’s challenge because I used several heuristics which helped me make fairly accurate assumptions? What if my assumptions had been wildly off - would I have been slammed for making them? As Matt points out, you can make assumptions if you’re right or if there is little consequence to being wrong. I like to think I err on the side of asking more questions as opposed to attempting to make the right, low-risk assumptions.
As it turns out, I asked enough questions to get near enough to the kind of best deal Matt was expecting. Matt picked up on this in the debrief - balancing the number of questions asked in the allotted time in order to give a valuable response. You cannot possibly ask every question (= you cannot possibly execute every test?!)
I decided to provide Matt with several justified options which I felt offered him the best deal, as opposed to just the one option. He seemed happy with this.
Takeaways from the session include:
- Customers don’t always give an exact requirement for what they’re after
- You don’t need to ask a boat load of questions to deliver what the customer has asked for
- It’s better to ask fewer, more relevant questions than many ‘out there’ questions - it prevents you upsetting the customer & gets you to an answer quicker
- Frequent feedback to the customer allows them to refine their requirements, which helps you get closer to delivering what they want
- Providing options rather than a specific answer is OK (in certain contexts?)
- Making assumptions may be OK if you’re right or if the consequences of being wrong are small
Ultimately, I was successful in completing the challenge, both in what I learned & in being accepted into the school. It was a very warm welcome from everyone in the school, & David Greenlees & I have already collaborated on another challenge!