[A#11, T4] Final Analysis

(1) Summarize the project goals, context, and stakeholders

The German government introduced the ePA (electronic health record) in order to digitise the health system. This could give patients more control over their data, improve communication between doctors and patients, and avoid repeating tests and diagnoses again and again. Overall, the hope is to make the health system more efficient and transparent.

The ePA is to be implemented by each health insurer individually. Since all three of us are insured with the Techniker Krankenkasse (TK), we chose to improve the usability of the TK's ePA, from now on referred to as the TK-ePA.

During the first weeks of the course we evaluated the TK-ePA by inspecting it under different conditions. We found it especially worrying that it neither handled large text well nor provided a simpler, more accessible mode/UI.

We decided to design a UI for the TK-ePA that is more accessible without compromising on functionality. It was planned as a mode that can be turned on and off, because its design will most likely be very simplistic and not meet the design expectations of other users.

The TK-ePA is an app that is potentially used by every TK-insured person in Germany. It will also be used in many different contexts, environments and situations, and on many different devices.

The stakeholders of our accessibility mode are people who struggle to use the standard UI due to vision or motor impairments. A common problem with such modes is that users are not aware they exist. This might be overcome by having the device check for specific usage patterns and then suggest the accessibility mode.
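To make this idea a bit more concrete, here is a minimal sketch in Kotlin. It is purely hypothetical and not part of the TK-ePA; the signal types, weights and threshold are assumptions of ours and would need tuning and user testing:

```kotlin
// Hypothetical sketch, not part of the TK-ePA: count "struggle signals"
// (repeated pinch-to-zoom, mistaps, an increased system text scale) and
// suggest the accessibility mode once a threshold is crossed.

enum class StruggleSignal { PINCH_ZOOM, MISTAP, TEXT_SCALE_INCREASED }

class AccessibilitySuggester(
    private val threshold: Int = 5,          // assumed cut-off, would need tuning
    private val suggest: () -> Unit          // e.g. show an in-app dialog
) {
    private var score = 0
    private var alreadySuggested = false

    fun record(signal: StruggleSignal) {
        if (alreadySuggested) return
        // Weight signals differently: a raised system text scale is a
        // stronger hint than a single mistap.
        score += when (signal) {
            StruggleSignal.TEXT_SCALE_INCREASED -> 3
            StruggleSignal.PINCH_ZOOM -> 2
            StruggleSignal.MISTAP -> 1
        }
        if (score >= threshold) {
            alreadySuggested = true
            suggest()
        }
    }
}

fun main() {
    val suggester = AccessibilitySuggester { println("Suggest enabling the accessibility mode?") }
    suggester.record(StruggleSignal.MISTAP)       // +1
    suggester.record(StruggleSignal.PINCH_ZOOM)   // +2
    suggester.record(StruggleSignal.PINCH_ZOOM)   // +2 -> threshold reached, suggestion fires
}
```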

(2) Summarize your test results.

As we iteratively designed the TK-ePA we also repeatedly evaluated the UI.

We started by developing a variety of lo-fi prototypes/storyboards (https://blogs.fu-berlin.de/hci2023/2023/06/06/a6-t4-developing-first-prototypes/). Feedback from the tutorial session helped us decide on one of the storyboards and turn it into the first interactive prototype (https://blogs.fu-berlin.de/hci2023/2023/06/13/a7-t4-low-fidelity-prototype/).

Two task flows were evaluated using heuristic evaluation in the tutorial session:
1. sharing a document with a doctor and 2. finding a specific vaccination.

The feedback we received was mostly concerned with the level of detail of the sharing flow. In this first prototype it was not possible to select with whom a document is shared. But the most important feedback was that the hierarchy of documents in our screen navigation did not make much sense to the evaluators.

The third and last evaluation was done by members of the HCI research group. We improved our document hierarchy and added more detail and a third task flow to the test. We also decided to use the NASA-TLX to capture the users' experience.

During the test, after the very formal introduction, we ended up switching back and forth between the formal evaluation and an informal feedback conversation. This was very helpful because of the depth and detail of the informal feedback.

Dump of Informal feedback:

  • Use gender-inclusive language consistently
  • List why the doctor needs the document
  • Document preview when sharing
  • Sharing icon suggests that the document is already shared (a bit unclear)
  • NASA-TLX only once after all 3 tasks (tasks very quick)
  • NASA-TLX not the best questionnaire here
  • Design of the search field (shadow to the inside, not the outside)
  • Document type as a dropdown instead of many different buttons
  • Higher contrast
  • More 3D-like buttons
  • Emergency profile: blood type on the first level

Evaluation of NASA-TLX:

After we changed our strategy regarding the NASA-TLX, so that test subjects now filled out the form only after finishing all tasks, we were able to draw the following feedback from the form inputs:

  • Low levels of mental stress
  • Low levels of physical activity
  • Not a lot of time needed for completion
  • Subjects were happy with their performance
  • They felt confident throughout the task process

One has to keep in mind that, as already mentioned in the informal feedback, the tasks themselves were somewhat short.
Test subjects felt that the tasks might not be challenging enough to require the type of scales the NASA-TLX provides.
Our main takeaway for the evaluation is therefore that the NASA-TLX was not very useful for gathering deeper insights.
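For reference, the unweighted "Raw TLX" variant of the questionnaire simply averages the six subscale ratings. Below is a minimal sketch of that aggregation, assuming ratings on a 0–100 scale; the example values are made up and not our actual test data:

```kotlin
// Minimal sketch of the unweighted "Raw TLX" aggregation: the mean of the six
// subscale ratings, each assumed to be on a 0-100 scale (higher = more workload).

data class TlxRating(
    val mentalDemand: Int,
    val physicalDemand: Int,
    val temporalDemand: Int,
    val performance: Int,   // on the TLX scale, low = "perfect", high = "failure"
    val effort: Int,
    val frustration: Int
) {
    fun rawTlx(): Double =
        listOf(mentalDemand, physicalDemand, temporalDemand, performance, effort, frustration)
            .average()
}

fun main() {
    // Made-up example, not data from our evaluation.
    val example = TlxRating(
        mentalDemand = 15, physicalDemand = 5, temporalDemand = 10,
        performance = 10, effort = 15, frustration = 5
    )
    println("Raw TLX: ${example.rawTlx()}")   // -> 10.0, i.e. a low overall workload
}
```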

(3) Compare your results with the defined problem (problem statement) you wanted to solve.

Unfortunately, we did not conduct an A/B test comparing the original UI with ours using testers who are actually in need of such a simplified interface. Thus, we cannot really claim that our interface is indeed easier to use for people with visual or motor impairments.

But during the tests it became clear that the tasks were (maybe a little too) easy to fulfil. We also did not measure the time it took to complete the given tasks, an oversight on our part.

The task completion rate looks quite good: all tasks were completed without help.
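For a future test round, task times and completion could be captured with a small helper along these lines. This is a hypothetical sketch, nothing like it was part of our setup, and the task names are just examples:

```kotlin
// Hypothetical helper for a future round of testing: logs how long each task
// took and whether it was completed. Not used in our actual evaluation.

data class TaskResult(val task: String, val seconds: Double, val completed: Boolean)

class TaskLogger {
    private val results = mutableListOf<TaskResult>()

    fun run(task: String, block: () -> Boolean) {
        val start = System.nanoTime()
        val completed = block()                         // moderator marks success/failure
        val seconds = (System.nanoTime() - start) / 1e9
        results += TaskResult(task, seconds, completed)
    }

    fun report() {
        for (r in results) {
            println("${r.task}: ${"%.1f".format(r.seconds)} s, completed=${r.completed}")
        }
        println("Completion rate: ${results.count { it.completed }}/${results.size}")
    }
}

fun main() {
    val log = TaskLogger()
    log.run("Share a document with a doctor") { true }   // moderator would return the real outcome
    log.run("Find a specific vaccination") { true }
    log.report()
}
```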

The informal feedback was more helpful for us: many things in our interface, like the navigation hierarchy and the choice of icons and labels, are still not clear. The test setup was also not optimal, especially because the tasks were very easy and the NASA-TLX was used too often and took too long.

[A#7, T4] Low-Fidelity Prototype

(1) Summarize the feedback you received on your storyboard.

Overall, the feedback on our storyboard was positive. One of the main points of critique was the lack of environment showing where our persona would most likely be using the app and its accessibility options. Additionally, we had not drawn a face in every frame, which made the storyboard seem less consistent. In our storyboard, the persona interacts with their phone. We see this from over their shoulder, but the visible screen is rather small. We were advised to include a larger copy of said screens whenever they are visible in the storyboard, so a viewer has an easier time understanding what is happening.

(2) Develop an interactive paper prototype.

For starters, we decided to use Figma to create our prototype. It seemed like a simple but effective tool to use. We wanted the prototype to address our user scenario from the previous assignment. That meant the screens for activating the accessibility settings were very important, as well as the screens relevant to registering a visit to the doctor and sharing/creating documents.

As we wanted to showcase our idea of including a voice-controlled assistant, we had to add a button for it on every screen, including the button links to and from said assistant. We decided not to make a separate version for each combination of accessibility tools, as that seemed extremely excessive, though it is an idea to keep in mind for the next task.

We believe that our storyboard is represented rather well in our prototype, as all the actions taken by Martin should be possible to reenact here as well. In the end, the prototype could probably use some more screens to explore. This includes at least a dummy version of the menu when the accessibility options are disabled, so the difference becomes clear (that would probably be a screenshot of the actual app). We are happy with our result overall.

The link to our prototype: https://www.figma.com/proto/FGW7aa5zIfj257zIb1JG90/FirstPrototypeHCIePA?type=design&node-id=0%3A1&t=nJUjm5Zdl5hhyjvY-1