UX designers iterate through four main phases: assess, design, build, and evaluate. Each phase involves a handful of methods and techniques (design activities), which can be broadly categorized as drawing, sound, movement, writing, discussion, and game-based activities. They can focus on different themes, including identity, spaces and places, and society and politics. Below I break down these phases and list some of the methods and techniques used in each.
Before delving into data gathering, we should clearly define the “vision” behind the project to inform every step of the design process.
We should also define a set of questions for which we seek answers. Then we can decide which techniques are best suited to answering each question.
It is important to keep in mind that the purpose of user research is not the research itself but rather the practical application of its results.
1. Requirements activities (understanding users/user research/requirement gathering/problem space):
This could refer to business requirements and user requirements. For business requirements, we can use a variety of techniques such as reviewing project documents, holding kick-off meetings, interviewing stakeholders, and conducting requirements-planning workshops. User requirements work aims to understand as much as possible about the users: their needs, values, current practices, behavior, rationale, context of use, the challenges they face, and their wish lists and strategies for improving their current practices in their physical and social environment. The end goal of this phase is to translate this data into design requirements. There are different types of requirements, including user, environmental, organizational, social, technical, and usability requirements. This phase consists of three tasks: data collection (formative research), data analysis, and data presentation (communicating the results).
There are three basic approaches to collect data:
- Ask: interviews, focus groups (5–10 participants), surveys (and questionnaires), diary studies, contextual inquiry, object-based techniques (e.g. photo elicitation, collage), competitor benchmarking, and experience sampling.
- Observe: naturalistic observation, digital observation, ethnography, user testing (scripted tasks), and usage analytics (e.g. video analysis and social media mining)
- Inspect (an artifact or prototype): guideline-based inspection, walkthrough-based inspection, analytics (log analysis, Google Analytics), and comparative analysis
We can use a mix of these techniques to validate our findings. A wide range of activities can be used in these studies, including collage, card sorting, laddering questions, games (e.g. Top Trumps), and paper dolls.
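Some of these activities also lend themselves to simple computational analysis. As a minimal sketch (the cards and participant groupings below are hypothetical, not from any real study), a co-occurrence matrix for an open card sort can be built by counting how often each pair of cards lands in the same pile:

```python
from collections import Counter
from itertools import combinations

# Hypothetical open card sort: each participant groups cards into piles
participants = [
    [{"login", "signup"}, {"search", "filters"}],
    [{"login", "signup", "profile"}, {"search"}],
]

# Count how often each pair of cards was placed in the same pile
co_occurrence = Counter()
for piles in participants:
    for pile in piles:
        for pair in combinations(sorted(pile), 2):
            co_occurrence[pair] += 1

# High counts suggest cards users perceive as belonging together
print(co_occurrence.most_common(3))
```

Pairs with high counts (here, "login" and "signup") are candidates for grouping together in the information architecture.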
The data should be analyzed and interpreted as soon after data gathering as possible in order to avoid bias. We can then base the presentation and ideation on this analysis, using qualitative and/or quantitative analysis. In qualitative analysis, the Grounded Theory (GT) approach, which is inductive and data-driven, is usually adopted; the data is organized into themes, patterns, concepts, needs, and requirements. In quantitative analysis, the data is analyzed with statistical techniques such as linear regression and communicated through visual presentations such as graphs and charts.
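As a minimal sketch of the quantitative side (the task-completion times below are hypothetical), descriptive statistics and a simple linear trend can be computed with Python's standard library alone:

```python
from statistics import mean, median

# Hypothetical task-completion times (seconds) across successive trials
times = [42, 35, 51, 29, 44, 38, 60, 33]

print("range:", max(times) - min(times))
print("mean:", mean(times))
print("median:", median(times))

# Simple linear regression by hand: is completion time trending down?
trials = list(range(1, len(times) + 1))
mx, my = mean(trials), mean(times)
slope = sum((x - mx) * (y - my) for x, y in zip(trials, times)) / \
        sum((x - mx) ** 2 for x in trials)
intercept = my - slope * mx
print(f"trend: time = {slope:.2f} * trial + {intercept:.2f}")
```

A negative slope would suggest learnability (users get faster with practice); in a real study these numbers would feed the graphs and charts mentioned above.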
Data presentation tools (define interaction):
- Presenting findings about the users (stakeholders) such as:
- descriptive statistics (quantitative: range, mean, and median)
- user characteristics table
- Presenting findings about the task descriptions such as:
- service blueprints
- customer journey map
- use case (or usage stories)
- situational representation/ scenarios (including task ‘what’ and walkthrough ‘how’)
- essential use case (how, when and where a persona does something).
- Presenting findings about task analysis (activity mapping) such as:
- task analysis diagram e.g. flowchart
- hierarchical task analysis
- process model (experience model) e.g. swim lane diagram, user journey
- Presenting general design implications, which could be sensitizing concepts, abstraction and meta-abstraction (abstract functionality), instantiation (possible designs), and prescription (specific requirements).
Data validation: it is preferable to validate and confirm the analysis by collecting more data.
2. Design activities: (Ideation/design alternatives):
The design process should be preceded by ideation sessions with design teams and stakeholders. This phase is about generating and selecting ideas based on the results.
However, Ideo.org suggests that before jumping into ideation we should go through the following steps in order: Learnings (collecting interesting findings), Themes (categorizing findings), Insight statements (what these findings/themes mean), ‘How might we’s’ (asking how we might intervene on these insights), and finally Ideation (generating answers to the previous question).
This activity could be conducted before or after presenting the design alternatives; if users feel overwhelmed by having to come up with design ideas from scratch, we can present the alternatives first and then run the ideation activity. It is imperative that the ideation activity focuses on creativity and quantity while withholding criticism. There is a wide variety of ideation techniques, such as brainwriting, reverse brainstorming, metaphor brainstorming, free listing, object-based techniques (e.g. card sorting), envisioning cards, and future workshops. More techniques can be found here, here and here.
Combine, cluster (into 3 to 10 clusters), and rate ideas before selecting.
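The rating-and-selecting step can be as lightweight as a weighted score per idea. As a hedged sketch (the ideas, criteria, and weights below are hypothetical assumptions, not a prescribed rubric):

```python
# Hypothetical clustered ideas with team ratings (1-5) on two criteria
ideas = {
    "voice journaling":   {"impact": 5, "feasibility": 2},
    "photo diary":        {"impact": 4, "feasibility": 4},
    "daily prompt cards": {"impact": 3, "feasibility": 5},
}

# Weighted score; the weights here are illustrative, not a standard
def score(ratings, w_impact=0.6, w_feasibility=0.4):
    return w_impact * ratings["impact"] + w_feasibility * ratings["feasibility"]

ranked = sorted(ideas, key=lambda name: score(ideas[name]), reverse=True)
print(ranked)
```

The highest-ranked ideas then move forward into the design alternatives; changing the weights makes the team's priorities (impact vs. feasibility) explicit and debatable.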
You should not start this phase unless you have a deep understanding of two aspects: the user and the task. With that understanding, we can develop a set of alternative designs, taking cultural values and context into consideration. We can use a number of techniques to present these alternatives, such as wireframes, sketches, flows, the monetary method, information architecture, use cases, storyboards, perspective-based inspection, claims analysis, feature lists, competitor analysis, comparative research (comparing systems and products), and mapping and navigation design.
3. Build/Develop (prototype)
A prototype is an early model of a novel design. After deciding on how to address needs, we start designing different versions (prototypes) of the proposed solution. These versions can range in terms of similarity to the real product from Lo- to Mid- to Hi- Fi prototypes.
Lo-Fi prototypes are often paper-based, ranging from hand-drawn mockups to printouts. Examples include wireframes, storyboards (narratives), sketches (of interfaces and tasks), card-based prototypes (sequences of interactions), and films. These can also be used in the previous step (design alternatives), as they are an easy and quick way to present ideas. Tools that can facilitate this include Keynote, PowerPoint, and film-making software.
Mid-Fi prototypes are more detailed and clickable than Lo-Fi but less interactive than Hi-Fi. Examples of tools include interactive PDFs, InVision, and Balsamiq.
Hi-Fi prototypes are computer-based and intended to represent the final, fully functioning solution to be adopted. Tools include Balsamiq, Pixate, InVision, and code.
Traci Lepore explained the difference visually in the following diagram.
Other types of prototypes (not lo- or hi-) include wizard of oz, metaphor development and proof of concept video. Here is an example of an award-winning proof of concept video:
There are a couple of tools that facilitate creating prototypes such as:
- Indigo Studio
A quick comparison between these tools provided by Rapid prototyping course.
4. Evaluation activities (Critique):
Similar to the first phase, this phase aims at data collection, but regarding a specific product, using the prototype we developed in the previous phase. This might be referred to as ‘usability evaluation’. By usability, for the purposes of this blog, I refer to both paradigms of defining usability: essentialism and relationalism. For essentialists, usability is an inherent property/quality of the system itself, whereas for relationalists, usability is an emergent property of usage and interaction.
Essentialist methods are represented by analytical or inspection methods such as heuristic evaluation and (indirect) cognitive walkthroughs, whereas relationalist methods are represented by discount or empirical methods such as user testing. A third approach includes model-based methods such as GOMS.
Examples of techniques to conduct an evaluation activity include:
- Micro usability test (a.k.a. hallway test): quick feedback to check whether we are on track. It typically involves 3 to 5 tasks over 15 to 20 minutes and is not formal data collection.
- Concept interviews
- Think aloud method
- Usability benchmark
- User stories
- Heuristic evaluation
- User testing
- Cognitive walkthrough
There are different evaluation measures, such as time, number of clicks, errors, learnability, memorability, and cognitive and emotional aspects. These measures are defined based on the task under testing. We can define specific (close-ended) tasks for users to undertake in order to accurately evaluate their interactions, and open-ended tasks to explore a variety of user interactions. If we have multiple tasks, we should consider ordering effects. We should also carefully design the task wording, making sure it is neither leading nor ambiguous but as neutral as possible. Thus, it is imperative to run pilot studies first.
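Several of these measures fall out directly from session logs. As a minimal sketch (the session records and field names below are hypothetical), per-task summaries of time, clicks, errors, and success rate could look like:

```python
# Hypothetical session logs from a user test: one record per participant/task
sessions = [
    {"task": "checkout", "seconds": 95,  "clicks": 12, "errors": 1, "success": True},
    {"task": "checkout", "seconds": 140, "clicks": 18, "errors": 3, "success": False},
    {"task": "checkout", "seconds": 80,  "clicks": 10, "errors": 0, "success": True},
]

# Aggregate the common quantitative usability measures for one task
def summarize(records):
    n = len(records)
    return {
        "success_rate": sum(r["success"] for r in records) / n,
        "avg_seconds":  sum(r["seconds"] for r in records) / n,
        "avg_clicks":   sum(r["clicks"] for r in records) / n,
        "total_errors": sum(r["errors"] for r in records),
    }

summary = summarize(sessions)
print(summary)
```

Running the same summary per task (and per prototype iteration) gives the benchmarks to compare against in the next design round; qualitative measures like cognitive and emotional aspects still need observation and interviews.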
The evaluation phase is followed by further iterations of data collection, analysis, and design until the final product meets the users’ needs and objectives.