Workshop report from the Recognition and Rewards Festival in Utrecht
On April 13, 2023, the third Recognition & Rewards Festival took place in Utrecht, the Netherlands. The conference was organized by the Dutch Recognition & Rewards Programme, which aims to modernize the system of recognizing and rewarding academics as defined in the Programme's Position Paper. This year's conference theme, "Rethinking Assessment", was discussed in a plenary programme and 21 different workshops.
In the plenary programme, perspectives from multiple stakeholder communities were discussed in panel sessions and introduced in columns. Hieke Huistra (Utrecht University) stressed in her column the importance of permanent contracts for academics. Robbert Dijkgraaf (Dutch minister of Education, Culture and Science) discussed with Onur Sahin (Utrecht University) and Charisma Hehakaya (UMC Utrecht) how to include the perspectives of Early Career Researchers in the discussion on reforming research assessment, as well as how to incorporate insights from people who have left academia. Rianne Letschert (chair of the Coalition for Advancing Research Assessment) emphasized that collaboration across different disciplinary communities is necessary to enable research assessment reforms and to build consensus on multiple levels.
In addition to the plenary programme, 21 workshops were organized. Workshop themes ranged from assessing teaching quality and societal impact to enabling diversity and rewarding interdisciplinary collaborations, and many more. One of the workshops focused on recognizing team science; acknowledging teams or consortia of academics for their joint work is one of the subject areas of the R&R Programme. Workshop topics included how to define a team (collaboration, a common goal, trust) and how team leaders can encourage their employees to make the most of their careers. Questions also arose on how to assess team science: what would be suitable metrics and indicators for team assessment, and is such assessment even necessary or possible? One participant noted that PhD students, for example, don't want yet another box to tick; they want to focus on doing research.
In the workshop we organized as part of the BUA Open Science Dashboards Project, meaningful metrics and indicators were part of the discussion as well. The workshop focused on recognizing the diversity of Open Science (OS) practices across different research communities. Even though stimulating and encouraging all aspects of Open Science is part of the R&R Programme, how to recognize and reward these OS practices has yet to be defined. The workshop's goal was not to develop or use dashboards as tools for research assessment, but rather to initiate a discussion on how to ensure that when Open Science practices are assessed, disciplinary differences and multiple perspectives are taken into account.
The 16 workshop participants represented various stakeholder communities, including publishers, OS coordinators at universities, and representatives from the Universities of the Netherlands (UNL). We asked them to assess existing dashboards, such as the EC Open Science Monitor and the French Open Science Monitor, and to discuss what they liked and disliked about the dashboards, how meaningful indicators could be developed for other OS practices, and how to avoid a narrow focus on research outputs.
Several observations were made on the dashboards' format and content. Data quality matters: it is not always clear how comprehensive and complete the underlying datasets are. Dashboards should be updated regularly to remain relevant, and when something is called a dashboard, users expect interactive features rather than static graphs. Data from open infrastructures is preferred over data from commercial infrastructures. The objective of a monitor or dashboard should also be made explicit; the goal of the EC Open Science Monitor, for example, is unclear and its content is outdated. It seems like a random collection: what is the strategy behind it? What is the EU's ambition for OS? Can the dashboard support policy development? Participants discussed whether it would be useful for the EU to develop and guide OS monitoring for all European countries. How would that work, and do we actually need a European approach? Since comparing OS activities is not the objective, are tailor-made dashboards for specific institutions and/or disciplines more useful?
The discussion continued on the objectives of monitoring OS activities. Dashboards should be tools that support OS practices, not an end in themselves. If the objective is to gain recognition for OS practices, how can you assess the wide range of activities while avoiding comparison? Part of the Recognition & Rewards Programme is to recognize teaching tracks; how can you develop indicators for that? Research assessment requires context: could we create context dashboards? Ways to incorporate a more qualitative approach should be considered when developing indicators. Would it be possible to take a broader perspective and develop indicators for career paths and societal engagement, as suggested in the R&R Programme? Participants also noted that it would be valuable to assess whether the entire process of a research project is open. CWTS is working on persistent identifiers for projects, which could help monitor the openness of the different stages of a research project. Additionally, this could open up ways to recognize and reward all project contributors.
As the plenary programme and the workshops made clear, we are just at the beginning of reforming research assessment, and the discussions showed that 'one-size-fits-all' OS monitoring and research assessment do not work. Context is key. Only then can we ensure that Open Science is promoted as a positive research culture in all research disciplines and that every scientist is rewarded for openness in their work.