When designing an application, website, or product, three practices lead to a more usable experience: an early focus on users and tasks, empirical measurement, and iterative design.
Just as when graphical user interfaces first emerged, the same obstacles to usable design persist in the mobile era.
Focus on users and tasks early
Give the design team direct contact with potential users rather than working through an intermediary or a review of user profiles. Especially in the early stage, you need to learn what users want to accomplish, what does not matter to them, and how well the information architecture and navigation match their expectations. Top task analysis is an effective way to identify the key tasks users want to complete in the software or on the website.
Iterative design
Getting things right the first time is a worthy goal, but experience tells us it is rarely that simple. If you have the budget to test 15 users, it is better to split them into three groups: test 5 users in the first round, fix the issues whose solutions are uncontroversial and unlikely to introduce new problems, and then test again.
Empirical measurement
Many R&D teams follow the first two principles but hesitate to measure. Collecting a few key usability metrics in each round of testing gives you a simple, objective check on your design decisions. Low-fidelity prototypes and changing tasks are not excuses to skip measurement. Besides completion rate, several other metrics, described below, can track product improvement across design iterations.
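Completion rate itself is simple to compute, but with only 5-6 users per round the uncertainty is large, so it helps to report an interval alongside it. Below is a minimal sketch with made-up numbers; the adjusted-Wald interval is one common choice for small samples, not a method the text above prescribes.

```python
# Minimal sketch: completion rate with an adjusted-Wald confidence interval,
# a common choice for the small samples (5-6 users) typical of formative
# usability tests. The numbers below are illustrative only.
from statistics import NormalDist

def completion_rate_ci(successes, n, confidence=0.95):
    """Return (observed rate, lower bound, upper bound)."""
    z = NormalDist().inv_cdf(1 - (1 - confidence) / 2)
    # Adjusted Wald: add z^2/2 successes and z^2/2 failures before computing.
    n_adj = n + z ** 2
    p_adj = (successes + z ** 2 / 2) / n_adj
    margin = z * (p_adj * (1 - p_adj) / n_adj) ** 0.5
    return successes / n, max(0.0, p_adj - margin), min(1.0, p_adj + margin)

# Example: 4 of 5 participants completed the task in this round.
rate, low, high = completion_rate_ci(successes=4, n=5)
print(f"Completion rate {rate:.0%}, 95% CI [{low:.0%}, {high:.0%}]")
```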
- Task difficulty
Over three months we conducted three rounds of testing on an iPad application. In each round we asked 5-6 participants to attempt a series of tasks. After each task, we verbally asked participants to rate its difficulty on a 7-point scale. In the first round the prototype was largely usable, but we still found problems with navigation and labeling. The figure below shows the average rating for each task in each round, with 85% confidence intervals.
Figure 1-Average score on each task in each round of testing
Although the tasks varied somewhat from round to round, three tasks were identical across all three rounds, and five tasks were shared between two of the rounds.
Figure 1 shows that the perceived difficulty of those three tasks improved steadily with each round (note that the error bars on the green bars and on the blue bars mostly do not overlap). This is consistent with the results for most of the other repeated tasks, which were already rated as quite easy (around 90%). This kind of empirical measurement provides solid quantitative evidence that the user experience improved.
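The error bars in Figure 1 can be produced from the raw 7-point ratings. Here is a minimal sketch, assuming a t-based 85% confidence interval on a small sample and invented ratings; the article does not specify exactly how its intervals were computed.

```python
# Minimal sketch: mean task-difficulty rating on a 7-point scale with an 85%
# t-based confidence interval, the kind of error bar shown in Figure 1.
# The ratings below are invented for illustration.
from statistics import mean, stdev
from scipy import stats  # assumes SciPy is available for the t quantile

def rating_ci(ratings, confidence=0.85):
    """Return (mean, lower bound, upper bound) for a small sample of ratings."""
    n = len(ratings)
    m, s = mean(ratings), stdev(ratings)
    t = stats.t.ppf(1 - (1 - confidence) / 2, df=n - 1)
    margin = t * s / n ** 0.5
    return m, m - margin, m + margin

# Example: 6 participants rated the same task in one round.
round_ratings = [3, 4, 2, 4, 3, 3]
m, low, high = rating_ci(round_ratings)
print(f"Mean rating {m:.2f}, 85% CI [{low:.2f}, {high:.2f}]")
```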
- Overall usability perception
In addition to task-level measurement, you can also measure users' perception of the entire app experience at different stages. Bangor et al. describe an example of iterative design in which the System Usability Scale (SUS) was administered at each stage. The figure below shows the data they obtained across five rounds of testing, along with the baseline average of 68 points derived from earlier data.
Figure 2-System Usability Scale Score in Iterative Design
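For context, the SUS score behind Figure 2 comes from a fixed formula: ten items on a 1-5 agreement scale, odd items scored as the response minus 1, even items as 5 minus the response, with the sum scaled to 0-100. The responses in the sketch below are invented, not Bangor et al.'s data.

```python
# Minimal sketch: standard SUS scoring. Odd-numbered items are positively
# worded (score = response - 1), even-numbered items negatively worded
# (score = 5 - response); the 0-40 sum is scaled to 0-100.
def sus_score(responses):
    """responses: ten integers from 1 to 5, in questionnaire order."""
    if len(responses) != 10:
        raise ValueError("SUS requires exactly 10 item responses")
    total = sum((r - 1) if i % 2 == 1 else (5 - r)
                for i, r in enumerate(responses, start=1))
    return total * 2.5

# Invented responses from two participants; compare the mean against the
# 68-point baseline mentioned above.
participants = [
    [4, 2, 4, 1, 5, 2, 4, 2, 4, 2],
    [3, 3, 4, 2, 4, 2, 3, 2, 4, 3],
]
scores = [sus_score(p) for p in participants]
print(scores, "mean:", sum(scores) / len(scores))
```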
- Percentage of critical/serious issues found
If you don't want to track any other data, you can at least record the frequency and severity of the usability problems found in each round of testing; these also reflect how much the product has improved. We pay more attention to the ratio of critical issues to all issues than to the raw number of issues.
The reason is that we often find roughly the same total number of problems in each round: as the interface improves, the problems become more subtle, and in some cases fixing a critical problem lets users get further through a task and uncover additional problems.
For example, in the usability tests of the iPad application described above, the percentage of critical issues dropped from 27% in the first round to 17% in the second and to 8% in the final round, roughly one third of the first-round level.
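A minimal sketch of tracking this metric across rounds is below; the severity labels are hypothetical, since the article reports only the resulting percentages, not the underlying issue counts.

```python
# Minimal sketch: share of critical issues among all issues found per round.
# The severities below are hypothetical, not the counts behind the 27%/17%/8%
# figures quoted above; note the total count stays similar across rounds.
def critical_share(severities):
    """severities: list of labels such as 'critical', 'serious', 'minor'."""
    return sum(s == "critical" for s in severities) / len(severities)

rounds = {
    "round 1": ["critical", "critical", "serious", "minor", "serious", "minor", "minor"],
    "round 2": ["critical", "serious", "minor", "minor", "serious", "minor"],
    "round 3": ["serious", "minor", "minor", "minor", "minor", "minor"],
}
for name, issues in rounds.items():
    print(f"{name}: {critical_share(issues):.0%} critical of {len(issues)} issues")
```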