Conducting UX research
After writing my article on UX, I got some questions on the research side of things. As I have been on a UX brainwave for the past few months, here is how I aim to do the research. Research is not exactly my forte, so I had to do some digging, and I had some experts help me craft the methodology. My main fear was bias – how do I make sure I am not predisposing my respondents to an answer? The reality is that no matter how smart you are, you cannot avoid the bias of your own mind. You need to train your brain to avoid leading questions and to structure your questions correctly. You also need to decide how much quantitative versus how much qualitative work will be done. In this post I will walk you through the quantitative side of things.
Here are my top tips for successful quantitative UX research (or how I roll):
- Define what you want to find out. You will not be able to find out everything you have ever wanted, so there are two options:
  - Prioritise the most important finding for you, and structure your questions to answer that
  - Stage your research in phases, to allow for multiple sets of questions
- Limit the number of questions – gone are the days when people would sit and answer a Census-style questionnaire. Anything more than 5 minutes is a stretch.
- Only gather information you do not have – if you have a customer portal (you already know who they are), do not ask them again; it is just a waste of 30 seconds
- Offer an incentive – this is always an easy win. People are more likely to give away their time if you incentivise them (shock 🙂 )
- Do not try to impose your own opinion. Research is there to prove your hypothesis right or wrong – you should not be emotionally attached to your hypothesis. (Your heart might get broken 🙂 ) The reason you do research is to understand things better, so in the grand scheme of things it does not matter if you are right or wrong. If you knew it all, you wouldn’t be doing research, right?
- Do not go for crazy complex matrices that try to cram in 75 variations of an answer – people do not get them and they are likely to drop out. Been there, done that, so I understand the temptation
- Ask scale questions (1–5, happy to sad face, green to red) – ‘How likely are you to…’, ‘How important is it for you to…’. So if you want to know how much people love your website, do not just ask ‘Don’t you just love our website?’. Go for a more neutral question: ‘How satisfied are you with our website journey?’, ‘How important is it for you to have a website for brand X?’, etc.
- Decide on a sample that makes statistical sense – if you have 5,000 people, you send the survey to 50, and you get 5 answers back, that is not a proper sample. You need to decide what sample makes statistical sense given your database, and this will differ business to business. When making your sample decision, you may well need someone more analytical than me to give you the right sample size.
- Allow enough time to analyse the results, make sense of them and design a strategy.
- Have your UX person ready to start scamping ideas based on what you found out
- You need to get feedback on your scamps/wireframes/prototypes before you go into full-blown development – this can be internal stakeholders or a sample of your customers (company dependent)
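On the sample size point: if you do want a rough starting number rather than a gut feel, a standard approach is Cochran’s formula with a finite population correction. This is a minimal sketch, not my methodology from the survey above – the numbers (95% confidence, ±5% margin) are common defaults, not requirements:

```python
import math

def sample_size(population, z=1.96, margin=0.05, p=0.5):
    """Cochran's formula with finite population correction.

    population -- total number of people you could survey
    z          -- z-score for your confidence level (1.96 ~ 95%)
    margin     -- acceptable margin of error (0.05 = +/-5%)
    p          -- expected proportion; 0.5 is the most conservative
    """
    # Unadjusted sample size for an infinite population
    n0 = (z ** 2) * p * (1 - p) / (margin ** 2)
    # Correct for the fact that your database is finite
    n = n0 / (1 + (n0 - 1) / population)
    return math.ceil(n)

print(sample_size(5000))  # → 357 completed responses needed
```

Note this is the number of *completed responses*, not invites sent – divide by your expected response rate to work out how many people to contact.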
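And when the analysis time comes, scale questions pay off because the answers are just numbers. A minimal sketch of summarising 1–5 responses – the data here is made up for illustration, and “top-2 box” (the share of 4s and 5s) is one common satisfaction metric, not the only one:

```python
from collections import Counter
from statistics import mean

# Hypothetical answers to 'How satisfied are you with our website journey?' (1-5)
responses = [5, 4, 4, 3, 5, 2, 4, 5, 3, 4]

counts = Counter(responses)                              # distribution per score
avg = mean(responses)                                    # average satisfaction
top2 = sum(1 for r in responses if r >= 4) / len(responses)  # share of 4s and 5s

print(f"average {avg:.1f}, top-2 box {top2:.0%}")        # → average 3.9, top-2 box 70%
```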
Just remember: research is cold-hearted and emotionless, and that is what will get you the best results. The moment you insert emotion, you are introducing bias and the results will be inaccurate.