Connecting the dots – Insights from the Open Science Dashboards role-play event

Author: Anastasiia Iarkaeva (ORCiD)

Recommended citation: Iarkaeva, A. (2026): "Connecting the dots – Insights from the Open Science Dashboards role-play event." Open Research Blog Berlin. https://doi.org/10.59350/fhg7x-2nf98

Open Science activities have advanced considerably across the research landscape. Yet, without consistent monitoring, those activities often remain isolated and receive little recognition from key stakeholders. How far have we really come, and how do we connect the dots?

Open Science Monitoring is one way to address these questions. Initiatives such as the Open Science Monitoring Initiative (OSMI) support the development of monitoring approaches and frameworks, including the promotion of the Principles of Open Science Monitoring. Visualising Open Science practices through well-structured dashboards, in particular, can help inform stakeholders about the state of Open Science at various levels: from the institutional and disciplinary level (e.g. the dashboard shown in Figure 1) up to the national and international level.

This raises a question that is often overlooked: whom are we addressing with dashboards? Do we, as those who develop or use them, think carefully about our audiences? When we speak the “Open Science language”, do we also reach those who are not “native speakers”?

Figure 1: Example of an Open Science monitoring dashboard – the Open Science Dashboard for the Department of Earth Sciences, Freie Universität Berlin, accessible at https://quest-open-earthsciences.charite.de/#tabOA

Role-play: Are you a librarian, funder, university administrator, or researcher using a dashboard?

These were the guiding questions during the first event of our online event series “Magnifying Open Science” on February 26th, 2026. Before starting the role-play, we asked participants about their actual professional backgrounds. The audience was diverse: mainly infrastructure experts (26%) and librarians (22%), but also researchers (19%) and university administrators (7%). Another 26% identified with other groups, such as data stewards. Asked about their prior experience with dashboards, most participants were not regular users or providers of dashboards (38%), while 35% actively used or created them; 12% did not find dashboards useful, and 15% indicated having no prior experience with the topic at all.

During the 40-minute interactive role-play session, participants stepped into the shoes of others, taking on a role different from their day-to-day professional one. They were divided into four groups – librarians, funders, university administrators, and researchers – and used a SWOT analysis (Strengths, Weaknesses, Opportunities and Threats) to explore the added value of Open Science dashboards for “their” community. As is often the case in SWOT exercises, the same aspect can be interpreted in different ways: as both a strength and a weakness, or as both an opportunity and a threat. For instance, the group of supposed librarians identified ‘reduction of manual work through automation’ as both a potential strength and a weakness.

Supposed needs covered by Open Science monitoring dashboards

At the beginning of the group discussions, participants formulated possible needs that Open Science monitoring dashboards could address. Each supposed stakeholder group focused on rather different needs:

  • The supposed researchers took on the role of meta-researchers and imagined using dashboards to conduct their own analyses, compare and replicate approaches, and integrate the data into other research outputs. 
  • The supposed university administrators emphasized the value of dashboards for research performance monitoring and institutional assessment. 
  • The supposed librarians focused, much like the administrators, on monitoring and reporting institutional research and open access activities, while also highlighting support for researchers in using open science tools and helping ensure compliance with institutional policies. 
  • Finally, the supposed funders group focused on tracking where funding flows and how policies are implemented across disciplines, in order to monitor policy compliance over time and assess whether these policies have a measurable impact.

Figure 2 illustrates how, for some “stakeholders” – funders, librarians, and university administrators – dashboards were primarily associated with institutional monitoring and reporting, whereas researchers pursued their own interests with a focus on dashboards as data sources for analysis.

Figure 2. Supposed stakeholders’ goals regarding dashboard usage

Consensus on dashboards among all supposed stakeholder groups

Across all four role-play groups, dashboards were seen as tools whose value depends largely on the underlying data (see Figure 3). Participants repeatedly stressed the dependence on data sources and data quality as a weakness, including unease about data provenance and dependencies on third-party providers. At the same time, quick access to the underlying (open) data was seen as a key strength. 

Both the university administrators and the funders recognized the value of dashboards for evidence-based decision-making, as well as their potential connection to higher-level frameworks and research evaluation systems. At a strategic level, these two stakeholder groups could have related goals, which suggests that they may use dashboards similarly (see Figure 2).

Another area of agreement concerned potential risks: misinterpretation of indicators, political use of numbers without critical reflection, oversimplification of calculations, and gaming of metrics.

Supposed librarians and university administrators shared the opinion that dashboards can become a resource drain, not only when preparing the data but also when keeping the dashboard up to date, raising long-term maintenance and sustainability concerns.

Finally, supposed researchers, university administrators and funders noted that dashboards typically focus on what is easy to monitor and on downstream outputs like articles, which can obscure other open science practices like narratives and upstream processes. Once a metric becomes “measurable” or tied to incentives, it begins to shape research culture and behavior towards reaching those quantified targets.

Stakeholder-specific assessments of Open Science Monitoring Dashboards


Figure 3. Main SWOT analysis results from the role-play exercise evaluating Open Science Monitoring Dashboards

Figure 3 displays a summary of the role-play SWOT analysis exercise. The graphic preserves the full granularity of the SWOT aspects discussed for Open Science monitoring dashboards. In the center of the figure, the main consensus points across all “stakeholders” are summarized.

How do we better understand the audiences of Open Science dashboards?

The role-play exercise using a stakeholder-specific SWOT analysis was challenging but revealed valuable insights. Participants succeeded in identifying both shared concerns and stakeholder-specific Strengths, Weaknesses, Opportunities, and Threats. The exercise showed how differently the same dashboard can be perceived within the research ecosystem. In the end, the interactive session was not only about dashboards themselves, but also about the diverse expectations that shape their interpretation. A dashboard designed for librarians in their supportive role is unlikely to match the priorities of funders seeking policy compliance or of university administrators focused on institutional performance and evaluation.

Despite these differing needs, dashboards can be powerful instruments for connecting communities, but only if they are contextualized, designed with purpose, and focused on their target groups. Open Science monitoring dashboards cannot be neutral instruments: they often highlight progress in certain Open Science practices while leaving others obscured. If misinterpreted, intentionally or not, dashboards risk reinforcing simplified metrics rather than supporting the meaningful cultural change that Open Science seeks to achieve.

Register for our next event!

In our second event in the series “Magnifying Open Science” on March 26, 2026, at 2 pm (CET), we’ll explore how openness in the Social Sciences and Humanities (SSH) is observed, studied, and practiced, looking beyond research outputs to the values and infrastructures that shape Open Science.

Find more details at: https://blogs.fu-berlin.de/open-research-berlin/2025/12/18/save-the-date-for-online-event-series-magnifying-open-science/.
