Group 4 had not linked their Google Form, i.e. their feedback, in the last blog post, nor described their feedback in the post. Unfortunately, we could not reach Group 4 via Mattermost, which is why we could not incorporate their feedback.
(2) Preparation for a summative evaluation
1. Please recall your experience and takeaways from your first testing session (#Assignment 6)
We want to stick more closely to our script than in Assignment 6, since our interviews all turned out very differently (at least the moderator's spoken part).
2. Prepare all documents (script, consent form, …) you will need during the test session:
a. For using and analyzing the collected data, we need a consent declaration. We used an online template as a basis.
b. A script for the moderator's introduction. We could reuse most of our old script, since it had worked well. However, we need to reformulate the tasks, since each task should be phrased concretely with a specific goal.
4. This time, you will also measure the satisfaction of users, using a post-task or post-test questionnaire.
We plan to use the UEQ (User Experience Questionnaire) to measure user satisfaction. Here are our considerations:
(1) We prefer a post-test questionnaire, since the full process from the create-account task to the log-out task forms one complete user journey on our web application, which cannot be measured in separate pieces. Leaving out any single task would make the user journey incomplete and the satisfaction evaluation unreliable. Using Social Play is about a smoothly enjoyable gaming evening, which consists of several tasks completed through our web application, and this usage flow would be interrupted by task-level satisfaction assessments. Since the satisfaction levels of all tasks together account for the overall usage of Social Play, we decided on a post-test satisfaction questionnaire.
(2) Among the standardized instruments we shortlisted the SUS, the UEQ, and AttrakDiff2 for their reliability and wide usage. We did not want a questionnaire with too many questions, since we were afraid such questionnaires would make participants impatient. Compared to the SUS, the UEQ and AttrakDiff2 measure satisfaction on a more well-rounded level across several dimensions. However, AttrakDiff2 has a strong focus on hedonic quality, which is not our main purpose; we therefore chose the UEQ for our satisfaction measurement.
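As a rough illustration of how such a questionnaire is scored afterwards: raw 1-7 item answers are mapped onto a -3..+3 range and averaged per scale. Note that the item-to-scale mapping below is a made-up placeholder for the sketch, not the official UEQ assignment (the real UEQ has 26 items across six scales).

```python
# Illustrative UEQ-style scoring sketch; NOT the official item-to-scale mapping.
def to_bipolar(answer):
    """Map a raw 1..7 rating onto the UEQ's -3..+3 range."""
    return answer - 4

# Hypothetical item indices per scale (the real UEQ has 26 items, six scales).
SCALES = {
    "Perspicuity": [0, 1],
    "Efficiency": [2, 3],
}

def ueq_scale_scores(answers):
    """answers: list of raw 1..7 ratings, one per questionnaire item."""
    return {
        scale: sum(to_bipolar(answers[i]) for i in idxs) / len(idxs)
        for scale, idxs in SCALES.items()
    }
```

For real analyses we would rely on the official UEQ data analysis sheet rather than a hand-rolled script; the sketch only shows the principle.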
The questionnaire can be reached by scanning the QR Code:
5. Document preparation.
We document the preparation by answering the assignment questions.
Reflective design (Sengers et al., 2005) is a design paradigm rooted in the slow technology movement, which proposes "a design agenda for technology aimed at reflection and moments of mental rest rather than efficiency in performance" [1]. Reflective design emphasizes the socio-technical elements of product design: it means designing a product not only with a pure focus on efficiency and effectiveness, but also so that it helps users reflect proactively while interacting with the product or system.
Sengers et al. (2005) define reflective design as "a practice which combines analysis of the ways in which technologies reflect and perpetuate unconscious cultural assumptions, with design, building, and evaluation of new computing devices that reflect alternative possibilities" [2].
Reflective design provides a framework for reconsidering design: it allows designers to rethink and examine their assumptions about the systems they create, their design principles regarding target users, and the social impact of the chosen technologies. It also provides a toolkit that enables users to take part in this reflective process.
Principles of Reflective Design [2]:
(1) Designers should use reflection to uncover and alter the limitations of design practice;
(2) Designers should use reflection to re-understand their own role in the technology design process;
(3) Designers should support users in reflecting on their lives;
(4) Technology should support skepticism about and reinterpretation of its own working;
(5) Reflection is not a separate activity from action but is folded into it as an integral part of experience;
(6) Dialogic engagement between designers and users through technology can enhance reflection.
[2]Sengers, P., Boehner, K., David, S., & Kaye, J. J. (2005, August). Reflective design. In Proceedings of the 4th decennial conference on Critical computing: between sense and sensibility (pp. 49-58).
Compared to traditional in-lab usability testing, remote usability testing is an alternative that avoids on-site limitations such as the lack of representative end users, the expense of testing, time-management problems, and the lack of a proper simulation of the users' real environment. Remote usability testing is usually categorized into synchronous (moderated) and asynchronous (unmoderated) testing.
Synchronous remote usability testing is conducted in real time, with the evaluator spatially detached from the participants. Also called "live" or "collaborative" remote evaluation, it bears a great deal of similarity to traditional in-lab usability evaluation. It enables actual users to take part in the process from their natural environments, using their personal computers, which keeps the test conditions natural. The benefits of synchronous remote testing are the ability to obtain data from actual users in their normal environment and the reduced inconvenience for participants, as they do not need to travel to a lab or test centre. Limitations related to the Internet and telecommunications (poor bandwidth, delays) are among the disadvantages of this method. Typical forms of synchronous remote usability testing include video calls, screen sharing, etc.
Asynchronous remote testing detaches the evaluators both temporally and spatially from the participants: it separates the user from the evaluator in terms of both location and time. Compared to synchronous testing, asynchronous testing can reach larger user sample sizes, which offers a truer representation of the users. More natural test surroundings can also offset the bias that may occur in a lab, where participants often feel pressured in a way that affects the accuracy of usability results. However, this kind of testing is narrower in scope, as it does not include observational data or recordings of spontaneous verbal data. This limited scope may reduce the validity and accuracy of the results, lessening the chances of discovering usability problems. Typical forms of asynchronous remote usability testing include auto-logging, user-reported critical incidents (UCI), unstructured problem reporting, etc.
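To make the auto-logging idea concrete, here is a minimal, tool-agnostic sketch of the kind of timestamped event log an unmoderated test client might collect and later upload. The event names and fields are purely illustrative, not taken from any specific testing tool.

```python
# Tool-agnostic sketch of auto-logging; event names and fields are illustrative.
import json
import time

class EventLog:
    """Collects timestamped UI events for later asynchronous analysis."""

    def __init__(self):
        self.events = []

    def record(self, name, **details):
        # Each entry gets a wall-clock timestamp plus free-form details.
        self.events.append({"t": time.time(), "event": name, **details})

    def dump(self):
        """Serialize the whole session, e.g. for upload to the evaluators."""
        return json.dumps(self.events)
```

A session might then contain entries such as `record("click", target="filter-button")`, which evaluators mine for task times and error patterns after the fact.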
References:
Alghamdi, Ahmed S., et al. "A comparative study of synchronous and asynchronous remote usability testing methods." International Review of Basic and Applied Sciences 1.3 (2013): 61-97.
Our prototype has been improved with some small updates so that test users get a more realistic feel during heuristic evaluation and usability testing. We aim to create an easily usable interface for our potential users, with corresponding changes to the user tasks.
(2) heuristic evaluation (Phase 1 and 2)
Phase 1 Prepare
Use case:
You have finished a long day of work and would like to have a game night with your friends. After talking it over, you decided to organize a game night together online on Social Play. You already have an account on Social Play and would like to invite your friends to join your group. You have sent a link via Social Play to your friend by email. After your friend has logged in, you start playing together in a meeting.
Tasks for evaluators:
1. Register/log in on Social Play
2. Meet with your friends in your group
3. Search for a game you want to play
4. Enjoy your gaming night with your friends, playing games together with chat
Online form for evaluation collection:
We use a Google Form to collect our feedback. The form is designed using the template given by the lecturer.
Phase 2 Evaluation
We evaluate the prototype of JuurMate under the following scenario:
Use case:
As homework you have to examine a legal case. You are currently on your way home and would like to get started right away, so you open the JuurMate application on your notebook. After opening it, a selection screen appears where you can decide whether you want to use the guided or the unguided variant of the application. With the guided variant you can now, right on the train ride home, have a single schema generated dynamically that helps you finish your homework faster once you are home.
Tasks:
1. Using the application, create an examination schema for assessing a legal question concerning murder.
2. Open an old schema that was saved earlier.
3. Search for an offense (Tatbestand).
Documentation and Reflection
The individual evaluations were done separately. Ina and Brendan filled in the online form with their observations; Xin had problems filling in the form and therefore documented her observations in a document with screenshots.
Summary of Evaluation
Based on our evaluations, we found that (3) User Control and Freedom and (8) Aesthetic and Minimalist Design are the most frequently violated guidelines. We found some criticisms in common; for example, once we had entered a case, there was no "back" button to return to the previous page. The only option was to click on "JuurMate" and start over from the main page.
We also encountered some problems here: for some issues we were not sure which guideline they belong to. Some heuristics are hard to evaluate, since we did not really find any error messages in the prototype (we also did not include any). (7) Flexibility and Efficiency of Use is also a hard-to-evaluate guideline.
Reflection
Who did what?
Ina improved the prototype so it was ready for evaluation. Ina and Brendan prepared the template in Google Forms and described the JuurMate test case. Xin described the test case of the Social Play prototype. The heuristic evaluation of JuurMate was done individually. Xin then summarized the evaluation while creating the blog post.
What have we learned?
We learned how to prepare a prototype for evaluation as well as the heuristic evaluation method. My only concern was: since our prototype is still really simple at the moment while the evaluation guidelines seem quite comprehensive, is this the right time to run a heuristic evaluation on such a simple prototype?
What went well?
The individual evaluation went well (not without problems), even though we had difficulties with categorizing the problems. We were happy that the templates were provided, so only minor changes needed to be made by us.
What do we want to improve?
The first task was somewhat confusing, since it was not clear which preparation is meant for us and which step is meant for classmates. The evaluation form is not well established: it is really hard to summarize the severity of each guideline violation and to view the screenshots indicating the problems.
(1) Continue to develop (or start a new) paper prototype based on new insights or feedback from your peers.
We skip this subtask, since the feedback from peers was planned to be gathered during the last lab session on 25 May, which did not happen (we asked in Mattermost and got this confirmed).
We are testing the web application, not you! Feel free to ask questions, but I cannot promise to answer all of them right away! Please think aloud. (adopted from Lab 07)
(2) Explain the context of the application
We want to develop a web app for online game nights with friends. It should make remote board game evenings less complicated and clearer. Besides communication features such as audio and video chat, our web application should offer a filter function for board games and have the actual board games embedded. We have built a first prototype and would like to use this test to evaluate its comprehensibility and clarity.
(3) Questionnaire
What is your name?
How old are you?
What do you do for a living?
Do you enjoy playing party and board games in your free time?
Have you ever had a remote game night?
Have you ever taken part in a usability test (as a test person)?
(4) Scenario(s)
First scenario: inviting via link
You have finished a long day of work and would like to have a game night with your friends. After talking it over, you decided to organize a game night together online on Social Play. First you would like to create an account on Social Play and invite your friends via a link. You have sent a link via Social Play to your friend by email. After your friend has logged in, you start choosing a game together in a meeting.
Second scenario: choosing and filtering a game
You meet your friends in the online meeting room of Social Play. You have started to pick a game. Based on your desired filter criteria (e.g. playing time, type of game, suitable number of players, etc.; these should be provided by the test person themselves, so we can see whether we forgot any criteria), a list of games should be shown. You and your friends then browse through these games and pick one. Afterwards you play the game together.
(5) Tasks
Familiarize yourself with the application
Describe what you see and your first impression.
What would be your first steps to get into a group with your friends?
Find the group "No board games, no gain".
Get to know the group features
How would you join a group via a link?
How would you create a group?
Create a group and add your friends to it.
Get to know the filter function
How would you find a game?
How would you pick the right game for your group?
Find a suitable game for your group and select it.
Get to know the play function
What would you do to play with your group?
How would you start a game?
Play through a game with your group
(6) Final questions
What advantages and disadvantages do you see in using the web app?
What do you miss in the social gaming app?
Did you have difficulties using the app, and where? (e.g. finding a specific icon or function, confusing icon design or layout, etc.)
(7) Closing
Do you have any further questions or comments?
Thank you very much for your participation! You have helped us a lot.
(adopted from LU 7)
2. Document who is taking what role
In the first and second usability tests Ina will be the facilitator, and in the third Brendan. Xin will be the observer in all usability tests.
3. Decide if you want to record your test session and how you take notes during the test sessions.
We don't want to record the test sessions, as our users expressed discomfort with the idea of being recorded. We will take notes on paper during the usability test, as we find it the easiest way to create "creative" and adaptable notes. We will later summarise these notes in a more structured way in a Google Doc.
4. Document who you are inviting for a test session and how long the session lasted.
(3) Document and evaluate the results of your testing.
Results Evaluation
We first wrote down notes on a paper pad and then transformed them into structured notes in Word documents (one document per test person). Each document is structured along the usability test workflow. We use an affinity diagram to evaluate the results of our usability testing, since we consider it the best fit for our "gather, then organize" evaluation steps.
Since we concentrate mostly on the improvement suggestions and criticism from our test persons, we organized only the "negative" feedback in our affinity diagram and did not include the "positive" feedback there (positive feedback is of course noted in our structured documents).
The affinity diagram was generated using the online tool Flinga.
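As a small illustration of the "gather, then organize" step: the clustering we did manually in Flinga amounts to grouping individual observations under team-chosen theme labels. The helper below and the theme names in the example are hypothetical, meant only to show the shape of the result.

```python
# Sketch of the "organize" step: observations grouped under team-chosen themes.
from collections import defaultdict

def affinity_groups(notes):
    """notes: (theme, observation) pairs; the themes come from the team's
    manual clustering discussion, they are not derived automatically."""
    groups = defaultdict(list)
    for theme, observation in notes:
        groups[theme].append(observation)
    return dict(groups)
```

The important work, of course, happens in the discussion that assigns the themes; the grouping itself is trivial once those decisions are made.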
Learning and Takeaways
We learned that all test persons liked the design idea behind our application. They also think our first design is not overloaded with unnecessary information, so that it was easy to follow the test process. However, some confusion surfaced during testing as well. Part of the confusion stems from Marvelapp and part from defects in our design. There are also some functions we had not thought of, but which the tests proved to be important. We think there are really a lot of improvements to be made in further design iterations.
Part II Reflexion
(1) Team member contribution.
We first met in Discord and discussed the organizational side of how to run the usability tests. Then we worked together on our script for the further steps (Ina on the contextual introduction and questionnaire, Brendan on the follow-up questions, Xin on the scenarios; the separation was not 100% strict, though, since we reviewed and improved each other's parts).
The three usability tests took place separately over the weekend (the responsible persons can be found in the table for task (2)). Notes were written up after each test and uploaded to our shared Google folder.
We created the affinity diagram after gathering all three sets of notes and evaluated the results.
(2) Learnings from assignment.
We learned how to formally conduct a usability test. It was an interesting experience because test persons can get confused by designs we did not notice at all. We are pleased to have received constructive feedback from our test persons. We will take their feedback seriously and think about how to improve our design.
(3) Positive/successful experience
We successfully found three different test persons to help us with the usability tests, and we really appreciate their kindness and helpfulness. All our test persons were so cooperative that our test sessions ran much more smoothly.
(4) What would you like to improve?
The assignment workload does not seem evenly distributed across the weeks; three usability tests over one weekend, for example, is somewhat much. It is also not easy to find and contact test persons and then finish the usability tests within such a short week (for example, some people may be very suitable for usability testing, but due to their tight schedules they need to be contacted at least one week ahead for appointments). It would be nice if we could see some assignments in advance and perhaps prepare for them early.
Another improvement concerns our previous assignments. In assignment #2 we did not choose "Analyze existing software" for data gathering. Based on our usability testing results, we now find it essential, since some of our design defects could easily have been avoided by looking at similar apps, or apps offering similar functions. If we had reviewed similar app designs, we at least would not have forgotten the registration function on the main page.
Additionally, we did not run a pilot test on ourselves before the formal usability testing. If we had done so, some problems could have been detected earlier.
besides the filter option, there should also be an option to browse manually through the board games, a little like the discovery page of Netflix;
there should be an option to show or hide the video chat and messenger;
there should be a way to listen to the sound of the board game while still being able to audio chat with friends;
don't show the percentage of the match (board game matches …% of the filters), but simply show the board games with the highest rank first;
describe the games in one sentence already on the results page;
good idea to include webcam games for movement games.
We found all of the feedback helpful and full of good ideas (thanks to our fellow students for the feedback). Some points we had already thought of but did not feature in our storyboard, as they were more of a detail. For our first simple prototype we will focus on the "less tricky"/low-fidelity feedback (points 4 and 5) and include the other feedback in later, more high-fidelity prototypes.
We first decided which tool we want to use for our prototype. We liked the example from the lab session, so we decided on Marvel. We first played around on the website a bit. Then we talked about what we want our prototype to contain: we discussed the general ideas we had settled on over the last couple of weeks and how we could incorporate the feedback given by the others in the last lab session. We decided which pages we want our prototype to have; looking at our storyboard was really helpful for this. We then created the pages together, one after the other. While creating the prototype we realized that, even though it is a low-fidelity prototype, we wanted to add some very simple design features: we decided to make important buttons green and to add some emojis to the texts to make them more "alive". These little design additions helped make the prototype feel more like our idea for the "end design". In the end, we clicked through the prototype and checked whether everything worked well.
(ii) the use case and/or model (task analysis from last assignment) this prototype relates to.
The prototype very much relates to our task flow from the last assignment. The task flow was the basis for our storyboard, and the storyboard was the basis for our prototype, so the task flow is very much reflected in the prototype. Our use cases are, generally speaking, also reflected in the prototype. The low-fidelity prototype does not relate 100% to the use cases in that it does not feature as many details as they do (e.g. our prototype holds only a name and a password as user information, while the use case for the user account also features the person's favourite game, profile picture, …; these details will be featured in later prototypes).
(iii) how the storyboard is reflected in the prototype
As mentioned above, the storyboard was the foundation of our prototype. It is therefore very much reflected by our prototype. We wanted the prototype to create the same atmosphere as our storyboard and constantly reflected on how “Max”, our persona, would like the prototype.
(iv) self-assessment of potential strengths and weaknesses of this first step into your design space
Generally speaking, we are happy with the prototype and think it is a step in the right direction. The page layouts are overall what we had in mind. We see a potential strength of the prototype in that it is very clean and has only the absolutely necessary functions. This makes it easy to understand, and users can focus directly on playing board games rather than on trying to understand the website.
A potential weakness is that the prototype works so well precisely because it is not designed to answer details. So, naturally, some details remain open. For example: What exactly will the explanatory videos about the board games look like? What exactly will our features be? What exactly will the chat function look like? This could be a weakness moving forward, since the devil is always in the details. We are curious to see whether this causes our future prototypes to change drastically later on.
*The prototype was created on 21 May 2021 through group collaboration using the online collaboration tool Marvel.
(3) Design rationales
To capture our design rationale analysis, we chose process-oriented gIBIS. We consider gIBIS advantageous for its clear presentation of the interrelationships and interdependencies between questions and sub-questions, as well as its clear comparison of the pros and cons of each position. It fits better with our design thinking process.
(i) Question: How to display filter?
(ii) Question: What platform should we offer?
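The gIBIS node types we use (issues, positions answering them, and pro/con arguments) can be sketched as a small data structure. The class names and the example question below are our own illustration of the notation, not part of any gIBIS tool.

```python
# Minimal gIBIS-style node structure: issues, positions, pro/con arguments.
from dataclasses import dataclass, field

@dataclass
class Argument:
    text: str
    supports: bool  # True = pro, False = con

@dataclass
class Position:
    text: str
    arguments: list = field(default_factory=list)

@dataclass
class Issue:
    question: str
    positions: list = field(default_factory=list)
    sub_issues: list = field(default_factory=list)  # nested sub-questions

def pros(issue):
    """All supporting arguments across an issue's positions."""
    return [a for p in issue.positions for a in p.arguments if a.supports]
```

For example, the issue "How to display the filter?" would hold one Position per layout option, each with its pro and con Arguments attached, mirroring the diagrams we drew.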
Part II Reflexion
(1) Team member contribution.
We did the assignment as usual in a regular team meeting on Friday, 21 May 2021. Ina first reviewed the feedback and summarized it for the other team members. We then considered and discussed how to incorporate the feedback into our prototype design.
For the second task, we first discussed how we wanted to collaborate and which tool to use. We then discussed a general idea of the prototype's representation, such as which layouts we want to design, which use cases we want to cover, and which interactions/functions need to be included in the low-fidelity prototype. Then we split the tasks among the team members: Ina was responsible for the user-account-related pages, such as log-in and account registration, as well as the homepage/welcome page; Brendan for the filter-function pages; and Xin for the in-play and after-play pages. We also reviewed each other's pages and helped each other in the process.
After reviewing and agreeing on all the page designs, Brendan linked them together. Ina then finalized the final version with interactions and refinements.
The design rationale analysis was done using gIBIS based on group consensus. Brendan and Xin were each responsible for one design question.
(2) Learnings from assignment.
By doing the assignments, we learned how to translate theoretical knowledge into actual design processes. We also learned from the feedback that it is important to escape the "groupthink trap" in design. Even though we analyzed the user context in advance, it seems we somehow still got caught up in a "developer mindset". What matters more is "what users need", not "what developers want to offer".
(3) Positive/successful experience
We appreciate our teamwork as always. Additionally, thanks for the advice on open design tools, which helped us a lot in the beginning. In our personal experience, an inadequate choice of design tools can cause problems later in the design process and waste time. But we found that all the suggested tools worked well for our tasks, in terms of both user-friendliness and accessibility.
(4) Expected improvements
We are currently satisfied with the teamwork and the team's progress. But for the further design phases, as mentioned in (2), we need to keep users in mind throughout the design process.
All diagrams are created using Google Jamboard as a collaboration tool.
2. Primary Persona
3. Create a Scenario
Color code: what & where
On Friday evening Max wants to meet his friends, just like before the pandemic, to have a relaxed evening together. He sits down at his computer in his room and logs in on the games website. He clicks on the group he shares with his friends. Aimee has no account of her own on the games website, so Max sends her a link with which she can access the group. Here he starts the video chat with them.
After chatting for a quarter of an hour or so, they want to start playing. What exactly, they don't know yet. Max would like to start with a fairly short game, another person in the group wants to play a board game, yet another a logic game. They enter their criteria into the website's suggestion system by selecting the corresponding filters. A list of game suggestions is displayed. After reading the short descriptions of the first five games, they decide on the third. They click on it and directly start the game embedded in the website. They keep communicating via audio and video. Joachim's microphone is unfortunately broken, so he communicates via chat and video. After finishing that game, they play many more games that evening following the same principle, so that Max and his friends have a fun remote game night together.
4. Two Use Cases with UML
Reflexion
Who made what contribution?
We generally did everything together in the group, with two meetings during the week. Ina prepared the documentation from her interview with a friend and shared it in the group meeting on Tuesday. The bullet points noted by Ina were then gathered, unsorted, in the diagram. All group members participated in the initial gathering of ideas and in the further iterations. The final version, with priority points and super-headlines, was refined by Ina and Brendan.
The persona was also created during a group meeting. The main part was finished in group discussion, building on the previous data analysis with affinity diagramming. Small refinements afterwards, as well as finding a photo, were done by Ina and Xin.
In our second meeting on Friday, the use cases were finished in group discussion. Ina mainly focused on the registration process, whereas Brendan and Xin focused on the social gaming process. The blog post was then written by Xin with support from Ina and Brendan.
Learning & Take-Aways
The interpretation of qualitative data can be somewhat ambiguous, since even within our group people have different perceptions of the qualitative data. Therefore, sufficient and efficient communication and exchange of opinions is really important. Otherwise, the gathered user data cannot be represented precisely, or even correctly, by us as developers.
With the persona we learned how to depict a "typical" user image for our product. Using the online template provided by Xtensio, we saw some sections that are less relevant to our project (or: whose necessity for our project we currently cannot really see/understand), such as personality traits. We also somehow have the feeling that some descriptions are intuitive and subjective, since an "absolute convergence" from the user data is always hard to find.
What went well?
Everything went well. We see our weekly progress as positive.
Improvement and Concern
We got some really useful ideas from our potential users, but some of the customer wishes are hard to implement. In such cases we are unsure whether we should consider the person a secondary potential user, or take the suggestion as an important feature we had overlooked. Some functions proposed by users may require advanced technologies, and we are not sure whether they can be implemented in our application. Some user wishes seem hard to realize given our product format (a web application).
Another concern is that we are not sure whether our data-gathering method introduces bias, since both the interviewee and the survey participants are within our own "connected circle".