This project redesigns the Medisana Wake Up Light through a series of user tests focused on user experience and usability. The first user test uncovered the most severe usability and experience problems, which helped frame the design brief and guide the redesign. A second user test was performed to find remaining flaws and to evaluate whether the redesign had indeed improved the experience of use.
It was a six-month project, developed by a team of five interaction designers.
Multiple usability flaws are easily perceived, causing a negative user experience and reducing the product's perceived value. The product is also equipped with diverse functions that are probably rarely used. To address these issues, a thorough and structured assessment was performed.
Approach and process overview
This project focuses on usability and user experience assessment, so multiple methods were applied within a user-centered design approach: first-use evaluation, analysis of similar interfaces, paper prototyping, heuristic walkthrough and user testing. The latter included both qualitative and quantitative (AttrakDiff) data collection and a comparative analysis.
Based on the results of the first user test, the design goal was defined as follows:
Based on the results of the second user test, the tested concept was modified to improve the usability and user experience.
The silhouette of the original product was kept in the redesign to maintain its relation to the Medisana brand. However, alterations were made to its appearance in terms of stability, ergonomics and readability, to increase the product's appeal and perceived value. During the second user test, some participants referred to the shape as modern, stable, smooth and futuristic, so no major changes to the shape were made after the test.
The display is kept as simple as possible, with little variation in color and an opaque background. Furthermore, icons and text are enlarged to ensure users are able to read and view them properly.
After an analysis of the features, the interface was divided into three parts: a main display with embedded buttons and visual feedback, and two side knobs to configure the sound and light independently. The icons were selected through rapid prototyping. Finally, the arrangement of the display keeps each button's functionality close to its feedback: sound functions and feedback are on the left, and light on the right. The final design is presented through a visual and a task flow.
Methods and tools
If you feel like checking in more detail which design methods and tools were used in the project, here you go! Because design is not only about the result, but also about the process.
First use evaluation
To determine which parts of the Wake Up Light should be tested, the product was first analyzed by the team. After the first use, it was concluded that the multiple options of the device are appreciated, and that the most basic operations, such as setting the time and snoozing the alarm, are performed easily. The design of the body was also perceived positively.
To gain a better understanding of the product to be redesigned, the task flow of the current features and functions was detailed. It was organized by feature and considered the user's input in relation to feedback and menu flow. This already made it possible to depict some of the counter-intuitive navigation as well as inconsistent feedback.
Analysis of similar interfaces
This analysis was done to gain more insight into existing feasible and innovative design solutions. Conclusions based on the similar interfaces analysis are divided into opportunities and design solutions that should be avoided.
First user testing
The user test was structured to give the design team an understanding of the contextual usage of the product and to identify usability and experience problems that could later be used to improve the design. To avoid priming or guiding the users in certain directions, and to ensure the validity and extent of the insights, high importance was given to the set-up and structure of the test. Two pilots were performed to improve the set-up before the final test.
Instant Data Analysis was selected as the analysis method due to its efficiency in finding issues; accordingly, the team roles were divided into one moderator, three observers and one data logger. A single test session took approximately one hour, after which a thirty-minute data gathering session was held to discuss the results.
The research environment consisted of two different contexts: a neutral context for focusing on the configuration of the wake up light, and a bedroom context to find out more about how the product was operated in its intended setting.
Qualitative data was collected by the observers, who wrote down quotes, attitudes and behaviors of the users while they performed the test. Regarding quantitative data, the AttrakDiff questionnaire was filled out before and after the test to measure changes in perception. Likert scales were filled out after each task to measure its complexity, and at the end of the test to measure aspects such as feedback, display appearance, light quality and sound quality. Additionally, the data logger kept track of the time each participant spent on each task, to gain more insight into effectiveness.
Due to the focus on qualitative information, seven participants were hosted for the test, each performing six specific tasks. During these tasks, participants were asked to think aloud, prompting them to articulate what they were experiencing. Interviewing was done either by asking open questions or by using scales.
After the test, a data collection session was held with all team members, during which quotes and preliminary insights were generated and clustered. The results of this meeting were afterwards triangulated with a more detailed analysis of the quantitative data.
For the analysis of usability issues, the quotes and comments of all participants collected during the Instant Data Analysis were clustered according to their similarity. Afterwards, the identified problems were classified by severity, leading to the following overview.
Quantitative data was analyzed separately to review the overall impression of the users in terms of expectations, usability and experience. The biggest differences between the before- and after-test AttrakDiff scores were found in the pairs pleasant–unpleasant, premium–cheap and attractive–ugly, which shows how usability negatively impacted the product's perception. Regarding the Likert scales, tasks 1, 2 and 3 were rated as more complicated than tasks 4, 5 and 6; confidence and the display were also rated low.
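The before/after comparison above boils down to a per-word-pair shift in AttrakDiff ratings. As a minimal sketch of that computation (the ratings below are hypothetical, not the project's data; AttrakDiff items are bipolar word pairs rated on a 7-point scale from -3 to +3), it could look like:

```python
# Illustrative sketch: mean shift per AttrakDiff word pair (after - before),
# averaged across participants. Data is invented for demonstration only.

def attrakdiff_shift(before, after):
    """Return the mean shift per word pair (after minus before) across participants."""
    shifts = {}
    for pair in before:
        diffs = [a - b for b, a in zip(before[pair], after[pair])]
        shifts[pair] = sum(diffs) / len(diffs)
    return shifts

# Hypothetical ratings from seven participants for three of the pairs named above.
before = {
    "pleasant-unpleasant": [2, 1, 2, 3, 2, 1, 2],
    "premium-cheap":       [1, 2, 2, 1, 2, 2, 1],
    "attractive-ugly":     [2, 2, 1, 2, 3, 2, 2],
}
after = {
    "pleasant-unpleasant": [0, -1, 1, 0, 0, -1, 0],
    "premium-cheap":       [-1, 0, 0, -1, 0, 1, -1],
    "attractive-ugly":     [1, 0, 0, 1, 1, 0, 1],
}

for pair, shift in sorted(attrakdiff_shift(before, after).items(), key=lambda kv: kv[1]):
    print(f"{pair}: {shift:+.2f}")  # most negative shift printed first
```

A negative shift means the product was rated worse after use than expected beforehand, which is the pattern the test revealed for these pairs.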
Through an iterative approach, the design was improved to its final version. Iteration loops included expert evaluation workshops, cognitive walkthroughs and paper prototyping, and finally a thorough user test that included prototypes of both the product and the interface.
The redesign proposed independent access buttons for the main functions, located close to their direct feedback, new icon feedback, and side-knob controls for sound and light. Finally, the proposed interface design is presented through a task flow that considers both the user's input and the feedback of each feature.
Second user testing
The set-up of the second test was kept similar to the first one; small variations and improvements were made according to the redesigned features.
The prototype was built as a milled foam embodiment in which the electronic parts were integrated. These consisted of an Arduino controlling an LED strip for light and a speaker for sound, operated with the two knobs on the sides. In addition, a touch-sensitive tablet was used as the display. A render was shown to the participants to let them imagine how the product would look in reality.
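At its core, the control logic of such a prototype is a mapping from the two knob positions to a light level and a sound level. As a minimal sketch of that idea (the project's actual firmware is not shown here; the value ranges and function names below are assumptions, written in plain Python rather than Arduino code), the mapping could look like:

```python
# Illustrative sketch of knob-to-output mapping for a prototype like the one
# described: two knobs, one driving LED brightness and one driving volume.
# Ranges and names are assumptions, not the project's actual code.

def map_range(value, in_lo, in_hi, out_lo, out_hi):
    """Linearly map a reading (e.g. a 10-bit ADC value, 0..1023) to an output range."""
    value = max(in_lo, min(in_hi, value))  # clamp to the input range
    return out_lo + (value - in_lo) * (out_hi - out_lo) / (in_hi - in_lo)

def update_outputs(light_knob, sound_knob):
    """Translate the two side knobs into LED brightness (0..255, as for PWM)
    and speaker volume (0..100 %), configured independently of each other."""
    brightness = round(map_range(light_knob, 0, 1023, 0, 255))
    volume = round(map_range(sound_knob, 0, 1023, 0, 100))
    return brightness, volume

print(update_outputs(512, 1023))  # mid-position light knob, fully turned sound knob
```

Keeping the two mappings independent mirrors the design decision in the redesign: light and sound are configured separately, each by its own knob.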
In terms of usability, only superficial problems were found, mostly related to limitations of the prototype.
The data analysis focused on comparing the results with those of the first test. Remarkably, for most characteristics the AttrakDiff shows no difference between the scores of the before and after forms. This differs greatly from the scores of the previous test. Also, the prototype was generally evaluated more positively, with an average score of 0.9, compared to the previous test.
The perceived difficulty of the tasks was rated more positively than in the previous test. Task 3 of the second test was rated the most complicated, with an even higher score than in the first test. Possibly, the participants were preoccupied with the display and failed to notice the knobs on the sides.
Configuring the reading environment was done differently from the other tasks; therefore, the usage of the knobs was unclear without prior knowledge. The scores on feedback did not improve compared to the previous test, as a consequence of slower or missing feedback from the display, caused by prototype limitations. The feeling of confidence was higher, because participants encountered fewer difficulties.