Performance Agreement School

The Gini index used for this analysis showed that, over time, research publications became more evenly distributed across disciplines. This was especially the case for the four universities of technology. The Review Committee interpreted this as a decrease in the number of research priorities within universities and as a decline in the diversity of the Dutch university research system. For each university, the Committee recorded an increase in the number of sub-disciplines covered by research publications. Differences between categories of universities (e.g. general universities versus universities of technology) appeared to be shrinking, which indicates a convergence of academic research profiles. However, the Review Committee found it difficult to attribute these trends solely to the performance agreements, as the trends began well before the agreements were introduced.

The three committees agreed on many issues. They concluded that performance agreements (PAs) contributed to the following results. At the beginning of the performance agreement process, higher education institutions agreed to use seven mandatory indicators to measure their ambitions in terms of student success and quality of education. Two of these indicators, success rates and dropout rates, received the greatest attention during the annual monitoring and, ultimately, in the final assessment of the performance agreements.
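To make the diversity measure concrete, here is a minimal sketch of how a Gini index over publication counts per discipline could be computed. The discipline counts below are hypothetical and the gini function is our own illustration; the Review Committee's actual methodology may differ.

```python
def gini(counts):
    """Gini index of a distribution of publication counts.

    0 means publications are spread perfectly evenly across
    disciplines; values close to 1 mean they are concentrated
    in only a few disciplines.
    """
    xs = sorted(counts)
    n = len(xs)
    total = sum(xs)
    if n == 0 or total == 0:
        return 0.0
    # Standard formula over the rank-ordered values:
    # G = (2 * sum_i i * x_i) / (n * sum_i x_i) - (n + 1) / n
    weighted = sum(i * x for i, x in enumerate(xs, start=1))
    return (2 * weighted) / (n * total) - (n + 1) / n


# Hypothetical publication counts per discipline for one university,
# before and after the performance-agreement period.
before = [120, 15, 10, 5, 2]   # concentrated in one discipline
after = [70, 40, 35, 30, 25]   # more evenly distributed

print(f"Gini before: {gini(before):.3f}")  # ~0.65, more concentrated
print(f"Gini after:  {gini(after):.3f}")   # ~0.20, more even spread
```

In this sketch, the lower Gini value for the later period corresponds to the more even spread of publications across disciplines described above.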

Ambitions for differentiation and institutional profiling were formulated in a more qualitative way, addressing themes such as launching new programmes and phasing out old ones, introducing student mentoring programmes, creating research centres, forming partnerships with local companies, and so on.

In the next section, we present some features of performance-based funding (PBF) systems in several OECD countries. We then turn (in Sect. 3) to the Netherlands, where a recent experiment with performance agreements has been concluded. The results of the Dutch performance agreements, in terms of their impact on performance and diversity, are discussed in Sects. 4 and 5. Section 6 sets out some lessons that can be drawn from the Dutch experience and draws some general conclusions about performance agreements.

The evidence also points to the following lessons for the effective design of such agreements. Whether performance agreements matter for the performance of higher education is a question that cannot be answered on the basis of the Dutch experience alone. Causality is difficult to prove: first, because the experiment was embedded in a broader policy framework and linked to other policies and policy instruments.

Second, one must be aware that national policies and their associated incentives have to trickle down from the ministry (i.e. the system level) to the institution, and then to the level of individuals (e.g. teachers, researchers, students) in order to produce an effect. The specific characteristics of each higher education institution (HEI) constitute an important intermediate layer in which many factors, either hindering or facilitating, are at play (Jongbloed and Vossensteyn 2016). And third, a much better understanding of the concept of performance is needed (de Boer et al. 2015); in this area it is a multidimensional and highly subjective concept.

The agreements ended in 2016. The Review Committee assessed the performance of each institution in light of its performance agreement, on the basis of the information provided in the institutions' annual reports and in meetings with the institutional boards. …
