It was becoming a familiar pattern. My team were doing frequent research and testing, generating lots of insights into our various user groups.
They were writing up the insights – easily readable documents summarising the main findings for our stakeholders – and presenting them back to the whole team. And even better, those insights were being fed back into our designs.
However, a missing link in the process became clear to me during a conversation with one of our developers. I mentioned something we had discovered in testing, and she said “yes, but that was only with 6 users”.
I realised that, because we had only tested with 6 users in that session, she felt the sample was too small to be conclusive evidence for why something should be a particular way.
It began to dawn on me that although we were doing regular testing, there was nothing to string it all together – nothing to show the team how much knowledge we had built up about our particular users, derived from hours and hours of interviews, testing and ethnographic studies.
This problem reminded me of the clever User Research Dashboard that was created for HMRC:
Put simply, this is a sheet of paper stuck on a wall that the HMRC design team used to record how many user research participants they had seen, how many days had elapsed since the last testing session, and how many team members had observed it.
What works about this is that it shows your team exactly how many users your designs have been put in front of and how consistently you are researching, giving everyone a better sense of how much knowledge about your users you actually have.
Inspired, I created a similar version for my team (download here) and stuck it up by our sprint board, making a point of running through the figures at each stand-up. The one adjustment I made was to add an area showing what will be tested in the next session. This helps make sure the entire team knows the focus of the next round of testing and anchors everyone around it.
(And a pro-tip: if you have a good relationship with your developers, they may even code up a digital version for easier updating and even more visibility!)
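If a developer does take this on, the underlying data model is tiny. Here is a minimal TypeScript sketch of what a digital version could track and render – the field names and example values are purely illustrative assumptions on my part, not part of the original HMRC dashboard:

```typescript
// Hypothetical data model for a digital research dashboard.
// Field names are illustrative, not taken from the HMRC version.
interface ResearchDashboard {
  totalParticipants: number;    // users your designs have faced so far
  lastTestingDate: Date;        // when the most recent session ran
  observersLastSession: number; // team members who watched that session
  nextSessionFocus: string;     // what the team is testing next
}

// Days elapsed since the last testing session.
function daysSinceLastTesting(d: ResearchDashboard, today: Date = new Date()): number {
  const msPerDay = 24 * 60 * 60 * 1000;
  return Math.floor((today.getTime() - d.lastTestingDate.getTime()) / msPerDay);
}

// Plain-text summary, e.g. for a wall-mounted display or a chat bot.
function renderDashboard(d: ResearchDashboard): string {
  return [
    `Participants to date: ${d.totalParticipants}`,
    `Days since last testing: ${daysSinceLastTesting(d)}`,
    `Observers at last session: ${d.observersLastSession}`,
    `Next session focus: ${d.nextSessionFocus}`,
  ].join("\n");
}

// Example usage with made-up figures:
const dashboard: ResearchDashboard = {
  totalParticipants: 42,
  lastTestingDate: new Date("2024-05-01"),
  observersLastSession: 3,
  nextSessionFocus: "Checkout flow on mobile",
};
console.log(renderDashboard(dashboard));
```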
For me, the main takeaways from implementing this dashboard with my team were:
- The insights and findings of your research don’t stay locked up in the minds of the UX team alone. The dashboard is a constant reminder to the wider team that research is continually happening, and gives them visibility of precisely what we are testing or working towards testing at any given time. It helps to build user research into the cadence of our sprints, and research is discussed much more now that it has a constant, visible presence.
- This in turn means that the whole team can feel more involved in the research and be part of making it happen. This pays off down the road: everyone feels much more bought into the findings and insights.
- Crucially, the dashboard helps to drive home the message that we are following a user-centred design process and that our decisions are based on clear rationale and constant contact with real users.
Do you have a dashboard, or similar? What are your experiences of evangelising user research to your wider team? Share in the comments!