(1) Summarize the project goals, context, and stakeholders
The German government introduced the ePA (electronic health record) in order to digitise the health system. The ePA could give patients more control over their data, improve communication between doctors and patients, and avoid repeating tests and diagnoses over and over. Overall, the hope is to make the health system more efficient and transparent.
The ePA is to be implemented by each health insurance provider individually. Since all three of us are insured with the Techniker Krankenkasse (TK), we chose to improve the usability of the TK's ePA, from now on referred to as the TK-ePA.
During the first weeks of the course we evaluated the TK-ePA by inspecting it under different conditions. We found it especially worrying that it neither handled large text well nor provided a simpler, more accessible mode/UI.
We decided to design a UI for the TK-ePA that is more accessible without compromising on functionality. It was planned as a mode that can be turned on and off, because its design will most likely be very simplistic and not meet the design expectations of other users.
The TK-ePA is an app that is potentially used by every TK-insured person in Germany. It will also be used in many different contexts, environments, devices and situations.
The stakeholders of our accessibility mode are people who struggle to use the standard UI due to vision or motor impairments. A common problem with such modes is that users are not aware of them. This might be overcome by the device checking for specific usage patterns and then suggesting the accessibility mode.
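To illustrate that idea, here is a minimal sketch of such usage-pattern detection. All event names and thresholds are our own hypothetical assumptions, not part of the TK-ePA or any platform API:

```python
# Hypothetical sketch: suggest the accessibility mode when usage
# patterns hint that the standard UI is hard to use.
# Event names and thresholds below are illustrative assumptions.
from collections import Counter

SUGGESTION_THRESHOLDS = {
    "rapid_repeated_tap": 5,      # may indicate motor-control difficulty
    "pinch_zoom_in": 3,           # may indicate vision difficulty
    "system_font_scaled_up": 1,   # user already enlarges text system-wide
}

def should_suggest_accessibility_mode(events: list[str]) -> bool:
    """Return True if any tracked pattern reaches its threshold."""
    counts = Counter(events)
    return any(counts[pattern] >= threshold
               for pattern, threshold in SUGGESTION_THRESHOLDS.items())

events = ["pinch_zoom_in", "tap", "pinch_zoom_in", "pinch_zoom_in"]
print(should_suggest_accessibility_mode(events))  # True
```

In a real app, the suggestion should of course be a dismissible prompt, not an automatic switch, so that users stay in control.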
(2) Summarize your test results.
As we iteratively designed the TK-ePA we also repeatedly evaluated the UI.
We started by developing a variety of lofi prototypes/storyboards (https://blogs.fu-berlin.de/hci2023/2023/06/06/a6-t4-developing-first-prototypes/ ). Feedback from the tutorial session helped us decide on one of the storyboards and turn it into the first interactive prototype (https://blogs.fu-berlin.de/hci2023/2023/06/13/a7-t4-low-fidelity-prototype/ ).
Two task flows were evaluated using heuristic evaluation in the tutorial session:
1. sharing a document with a doctor and 2. finding a specific vaccination.
The feedback we received was mostly concerned with the level of detail of the sharing flow. In this first prototype it was not possible to select with whom a document is shared. The most important feedback, however, was that the document hierarchy in our screen navigation did not make much sense to the evaluators.
The third and last evaluation was done by members of the HCI research group. We improved our document hierarchy and added more detail and a third task flow to the test. We also decided to use the NASA-TLX to capture the users' experience.
During the test, after the very formal introduction, we ended up switching back and forth between the formal evaluation and an informal feedback conversation. This was very helpful due to the depth and detail of the informal feedback.
Dump of Informal feedback:
- Use gender-inclusive language consistently
- List why the doctor needs the document
- Document preview when sharing
- Sharing icon suggests that the document is already shared (a bit unclear)
- Administer the NASA-TLX only once, after all 3 tasks (the tasks are very quick)
- NASA-TLX not the best questionnaire here
- Design of the search field (shadow should go inward, not outward)
- Document type as a dropdown instead of many different buttons
- Higher contrast
- More 3D-like buttons
- Emergency profile: blood group on the first level
Evaluation of NASA-TLX:
After we changed our NASA-TLX strategy so that test subjects filled out the form only once, after finishing all tasks, we drew the following feedback from the responses:
- Low mental demand
- Low physical demand
- Not a lot of time needed for completion
- Subjects were happy with their performance
- They felt confident during the task process
One has to keep in mind that, as already mentioned in the informal feedback, the tasks themselves were somewhat short.
As such, test subjects felt that the tasks might not be challenging enough to warrant the kind of scales the NASA-TLX provides.
Our main takeaway for the evaluation is therefore that the NASA-TLX was not very useful for gathering deeper insights.
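For context on how such responses are aggregated: the report does not state whether the weighted original or the unweighted "Raw TLX" variant was used, so the sketch below assumes the simpler Raw TLX, where each of the six subscales is rated 0-100 and the overall score is their mean. The example ratings are illustrative, not actual study data:

```python
# Sketch of Raw TLX (unweighted NASA-TLX) aggregation.
# Assumption: subscales rated 0-100; overall score = mean of the six.
SUBSCALES = ["mental", "physical", "temporal",
             "performance", "effort", "frustration"]

def raw_tlx(ratings: dict[str, float]) -> float:
    """Mean of the six subscale ratings (0 = low workload, 100 = high)."""
    return sum(ratings[s] for s in SUBSCALES) / len(SUBSCALES)

# Illustrative low-workload response, consistent with our findings above.
example = {"mental": 10, "physical": 5, "temporal": 15,
           "performance": 10, "effort": 10, "frustration": 10}
print(raw_tlx(example))  # 10.0
```

Note that in the original NASA-TLX, pairwise comparisons between subscales produce per-dimension weights; the unweighted mean shown here is a common simplification.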
(3) Compare your results with the defined problem (problem statement) you wanted to solve.
Unfortunately, we did not conduct an A/B test comparing the original UI with ours, using testers who are actually in need of such a simplified interface. Thus, we cannot really state that our interface is indeed easier to use for people with visual or motor impairments.
But during the tests it became clear that the tasks were (maybe a little too) easy to fulfil. We did not measure the time it took to complete the given tasks, an oversight on our part.
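Capturing time on task would have needed only minimal tooling. A hypothetical sketch of what a facilitator-operated timer could look like (the class and task names are our own, not part of the actual test setup):

```python
# Hypothetical per-task timer for moderated usability tests.
# A facilitator calls start()/stop() around each task.
import time

class TaskTimer:
    def __init__(self) -> None:
        self.durations: dict[str, float] = {}  # task name -> seconds
        self._task: str | None = None
        self._start: float | None = None

    def start(self, task: str) -> None:
        self._task, self._start = task, time.perf_counter()

    def stop(self) -> None:
        if self._task is None or self._start is None:
            raise RuntimeError("stop() called without a running task")
        self.durations[self._task] = time.perf_counter() - self._start
        self._task = self._start = None

timer = TaskTimer()
timer.start("share document with doctor")
# ... participant performs the task ...
timer.stop()
print(timer.durations)
```

Together with the completion rate, such timings would have allowed a quantitative comparison against the original TK-ePA in a future A/B test.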
The task completion rate looks quite good: all tasks were completed without help.
The informal feedback was more helpful for us: many things in our interface, like the navigation hierarchy and the choice of icons and labels, are still not clear. The test setup was also not optimal, especially because the tasks were very easy and the NASA-TLX was administered too often and took too long.