Before you start, get comfortable… this is a long one.
The people who build and configure software are very rarely the people who will use that software in their day job. No matter how often you’ve spoken to your users or how deeply you understand their needs, there is no substitute for giving a ‘thing’ to users and setting them loose. I have never seen a user acceptance test (UAT) round pass with no amendments, which, in my eyes, is the proof of the pudding.
Recently, I faced users who had either never accepted software before or, if they had, felt it hadn’t gone well at all. This presented me with a challenge: how do you prepare people to accept software when they aren’t familiar with how to do it, or why they’ve been asked to do it at all, especially when their previous experience has been negative? I wanted to make sure we capitalised on the UAT time we had, so I knew I needed to address this.
I stripped back to what I consider the absolute bare minimum components of a good UAT and tried to avoid what makes a bad UAT.
Here’s what I think you need (at a minimum) for success:
- Representatives across all the roles expected to use the live system
- Scenarios that cover the most common types of work the system would be expected to support
- Clear expectations of what the system should be supporting
- A way to record what the actual outcome was vs what was expected
- Enthusiasm & buy-in
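The fourth item on that list, recording actual vs expected outcomes, can be sketched as a simple record per test. This is a hedged illustration in Python; the field names (`scenario`, `tester_role`, `expected`, `actual`, `notes`) are my own invention rather than anything from a particular UAT tool, and in practice a shared spreadsheet with the same columns works just as well.

```python
from dataclasses import dataclass

@dataclass
class UatTestRecord:
    """One row in a UAT log: a hypothetical sketch, not a standard format."""
    scenario: str        # the common piece of work being exercised
    tester_role: str     # which real-world role ran the test
    expected: str        # clear expectation of what the system should do
    actual: str = ""     # what actually happened, recorded by the tester
    notes: str = ""      # enough detail that someone else can action it

    @property
    def passed(self) -> bool:
        # A test passes only once an actual outcome has been recorded
        # and it matches the expectation.
        return bool(self.actual) and self.actual == self.expected

# Example usage with made-up content:
record = UatTestRecord(
    scenario="Log a new service request",
    tester_role="Front-line advisor",
    expected="Request saved and reference number shown",
)
record.actual = "Request saved but no reference number shown"
print(record.passed)  # prints False
```

The point of the `notes` field is the same one the LEGO exercise teaches later: an outcome is only useful if it is recorded in a way someone else can understand and act on.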
Colleagues from eight different service areas all needed to work together if UAT was going to succeed. I wanted the team to know how to do what they were being asked to do and what good looked like. I wanted them to feel supported to do UAT ‘right’, and safe in the inevitable failures they would have. I wanted them to know that I was on their side 100%.
I was offered various training courses, at high cost, to get my colleagues ready for UAT. Spending any amount of money was not really a viable option, and the training included from the supplier didn’t address what I considered the key points. I could not find any low-cost or free ideas that were applicable to our particular situation. I had a bit of a panic, so I made a cup of tea (specifically Yorkshire Tea).
The tea worked.
The team needed upskilling and invigorating to deliver these critical things, so I designed a lightweight, engaging way to do that… LEGO.
I engineered a scenario in which two teams would UAT LEGO. I tied this into the principles of a good UAT by giving each team different amounts of LEGO and bad instructions, so we had the opportunity to learn about what good looks like.
I started with really vague instructions about what I wanted them to do (bad acceptance criteria) and worked with them to identify how that could be improved, so the team understood the importance of good acceptance criteria and how to create them. Armed with only those vague instructions, the teams conducted their first ‘tests’.
As we built better instructions, we found one team couldn’t complete the test successfully because they had the wrong kind of LEGO to do so. This enabled learning about different roles, and why it’s important not only to accommodate them when thinking about what’s being tested, but also to actually complete the test in that role.
We also looked at how we could help the team who couldn’t complete their tasks, by assessing the feedback we got about what they did and how it went. We worked together to identify what would be useful to know if we were supposed to be helping them succeed, so the team built an understanding of why it’s important to record an outcome in a way that allows it to be understood and, if necessary, actioned.
Once the basics were understood, I brought in the industry-standard terms to get the teams comfortable with the terminology; we needed to communicate effectively, and standardised terms generally help with that.
The final part of the session was me facilitating the team to start identifying the roles, scenarios and acceptance criteria for the UAT they would need to run, via the medium of post-it notes and enthusiasm. I kept relating back to what we’d learned in LEGO terms, to reinforce that what they were doing really was the same as the exercise we’d just followed.
By the end of the session, they had it. They were all confident about how to prepare for UAT and how to run it. We got through 80% of our test cycle before COVID-19 changed the landscape, but I am pretty confident we were doing it right, judging by the number of issues and queries we were sending back to our supplier. Here are some figures:
- Week 1 – 64 issues raised
- Week 2 – 62 issues raised
- Week 3 – 52 issues raised
We have yet to complete our fourth and final UAT round and see just how good our UAT was by releasing the system into the wild but, to date, the team are pretty confident that they have covered all the bases.
Once the project is back from hiatus and all systems are go again, I’ll update with just how good (or bad) at UAT we turned out to be.