Scientific research activities have been subject to evaluation for as long as other social and economic activities. Accountability, policy decision-making and the knowledge society are among the main issues driving the rise of research evaluation.
To debate these issues and the key role of research evaluation in serving the objectives of the Moroccan 2015-2030 strategy on education and research, the National Authority of Evaluation (NAE), affiliated with the Higher Council of Education, Training and Scientific Research, is organizing an international symposium: Research Evaluation: Issues, Methods and Tools. The symposium will be held on 6 and 7 December 2017 in Rabat.
Like other developing countries, Morocco needs to build research policies that increase science and technology productivity and put it to efficient use in addressing social and economic challenges. These policies should also focus on reducing the scientific and technological gaps between developing and developed countries. To these ends, evaluation contributes to increasing the performance, outputs and impact of research and to pushing forward the frontiers of science and technology, including those of the social sciences.
Several methods, standards and tools have been developed and adopted by different countries to measure and evaluate science and technology outputs and impact. In some of these countries, councils or agencies have even been established as independent, autonomous institutions to carry out the mission of evaluation, using a peer-review approach, an informed-review approach, or both.
The symposium will be a forum to discuss international best practices in up-to-date methods and tools for research evaluation. The NAE also seeks to build, in the light of international evaluation practices, relevant and appropriate methods and tools for its own evaluation exercises.
The five topics of the symposium are:
Benchmarking and measurement in research evaluation
Since science and technology are open, and research systems are increasingly subject to international benchmarking of their governance, policy, funding and outputs, what are the major lessons learnt from benchmarking exercises as an evaluation tool? Is benchmarking relevant and useful for improving research systems in emerging and developing countries, in terms of governance, funding, support for the scientific community, international openness and addressing social and economic needs? These are the major questions to be discussed under this topic.
This topic will also discuss the relevance of university rankings (world-class and national-class), their objectives and their respective methodologies as a method of benchmarking. Are these rankings appropriate for developing countries? As an evaluation tool, what added value do rankings bring to national systems that would entice these countries to adopt or adapt them and to engage in the process of world ranking?
Methods of research evaluation in national and international exercises
Several national research evaluation exercises have been completed in Morocco. The first, in 2003, covered research in the fields of science, engineering and technology. The second, in 2009, focused on the social sciences and humanities. It was immediately followed by regular evaluations within the national Programme d'Urgence between 2010 and 2011. The Moroccan national innovation policy was placed under scrutiny in 2011. Furthermore, the Hassan II Academy of Science and Technology portrayed research in Morocco in 2009 and 2012 through science and technology indicators and made recommendations to improve its performance. In 2012-2013, the Moroccan research system was evaluated under an EU-Morocco twinning programme. Besides these insightful exercises, research units in all universities have also been evaluated.
The objective of this topic is to (i) enquire into national evaluation experiences around the world, (ii) capitalize on the major benefits and challenges of the aforementioned national experiences, and (iii) identify, in the light of national evaluation practices worldwide, ways and tools to carry out successful national evaluation exercises on a regular basis.
Social impact in research evaluation
This topic will address the methods, approaches and indicators used worldwide to evaluate research outputs and their impact on society.
From bibliometrics to scientometrics, the aim is to evaluate the performance of research activities at different levels: researcher, institution, country, etc. This performance is assessed using quantitative metrics. To what extent are these metrics useful for impact-based research evaluation, particularly in the social sciences and humanities? How should real social impact be taken into account when producing metrics for evaluation purposes?
Scientific collaboration and its evaluation
Given that scientific collaboration is highly correlated with research productivity and impact, what methods and tools are available to evaluate it? This topic will discuss methods and tools for quantifying inter- and intra-institutional and international scientific collaboration, the patterns that foster such collaboration and the obstacles that limit it.
Within this special focus, the debate is expected to address whether science policies treat scientific collaboration and researcher mobility as their pillars.
Evaluation methods for doctoral programmes
Doctoral schools, or doctoral programmes and studies, are the first stage of a young researcher's career and of science and technology production. There is thus a steadily growing awareness, among international organizations such as UNESCO, the OECD and the EU as well as among individual countries, of the importance of doctoral programmes in boosting science and technology productivity and impact.
Almost 15 years after the reform of doctoral studies in Morocco in line with the Bologna Process, an evaluation of this policy has been completed by the NAE. It covers the reform itself, its legal framework, governance within the universities and a benchmarking of doctoral programme efficiency.
This topic will discuss the methods and results of this evaluation. It will also examine successful international models, practices and methods for evaluating doctoral programmes.