Improving the Peer-review Process for ARC Grant Applications: Reliability, Validity, Bias, and Generalisability
Peer review is a gatekeeper, the final arbiter of what is valued in academia, but it has been criticized in relation to traditional research criteria of reliability, validity, generalizability, and potential biases. Despite a considerable literature, there is surprisingly little sound peer-review research examining these criteria or strategies for improving the process. This presentation summarizes what is arguably the largest, most rigorous, and most comprehensive research programme into peer reviews of grant proposals, with broad implications for the study of peer review. The research was based on data from the Australian Research Council (10,023 reviews by 6,233 external assessors of 2,331 proposals from all disciplines). It is comprehensive not only because of its size, but also because it includes peer reviews from all science, social science, and humanities disciplines, from assessors all over the world, and from assessors chosen by the applicants themselves as well as panel-nominated assessors. Using multilevel cross-classified models, we critically evaluated peer reviews of grant applications and potential biases associated with applicants, assessors, and their interaction (e.g., age, gender, university, academic rank, research team composition, nationality, experience). Peer reviews lacked reliability, but the only major systematic bias found involved the inflated, unreliable, and invalid ratings of assessors nominated by the applicants themselves. We propose a new approach, the reader system, which was evaluated with psychology and education ARC grant proposals and found to be substantially more reliable and strategically advantageous than traditional peer reviews of grant applications.
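To illustrate the kind of cross-classified multilevel model referred to above (this is a minimal sketch, not the authors' actual analysis code), the example below fits reviewer ratings with crossed random effects for proposals and assessors using Python's statsmodels; the data file and column names (rating, proposal, assessor) are hypothetical placeholders.

```python
# Illustrative sketch only: a cross-classified multilevel model in which each
# rating is associated with both a proposal and an assessor (crossed random
# effects). The data file and column names are hypothetical placeholders.
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical long-format data: one row per review, with columns
# rating (assessor's score), proposal (proposal ID), assessor (assessor ID).
df = pd.read_csv("arc_reviews.csv")

# statsmodels fits crossed random effects as variance components within a
# single all-encompassing group, so add a constant grouping column.
df["all"] = 1

model = smf.mixedlm(
    "rating ~ 1",                       # fixed intercept: overall mean rating
    data=df,
    groups="all",                       # one group containing every observation
    vc_formula={                        # crossed random intercepts:
        "proposal": "0 + C(proposal)",  # variance attributable to proposals
        "assessor": "0 + C(assessor)",  # variance attributable to assessors
    },
)
result = model.fit()
print(result.summary())

# Comparing the proposal and assessor variance components with the residual
# variance indicates how much disagreement in ratings reflects true differences
# between proposals versus idiosyncrasies of individual assessors; a small
# proposal component relative to the residual implies low single-rater reliability.
```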
Professor Herb Marsh (BA-Hons, Indiana; MA; PhD, UCLA; DSc, UWS; Aust Acad of Soc Sci; Brit Acad of Soc Sci) has recently returned to the University of Western Sydney as part of the Centre for Positive Psychology and Education (CPPE) after spending six years in the UK as Professor of Education at Oxford University. He is the author of internationally recognised psychological tests that measure self-concept, motivation, and university students' evaluations of teaching effectiveness. He is widely published (380 articles in 70 journals, 65 chapters, 14 monographs, 370 conference papers), co-edits the International Advances in Self Research monograph series, and is an “ISI highly cited researcher”. He founded the SELF Research Centre, which has 450 members and satellite centres at leading universities around the world. He has been recognised as the most productive educational psychologist in the world and the 11th most productive researcher across all disciplines of psychology. In addition to his methodological focus on structural equation models, factor analysis, multilevel modelling, and new statistical approaches to meta-analysis, his major substantive interests include self-concept and motivational constructs; evaluations of teaching/educational effectiveness; developmental psychology; sports psychology; the peer review process; gender differences; and peer support and anti-bullying interventions. In 1998-2003 he was awarded the Australian Research Council “Special Investigator Award” to fund his entire research programme. Similarly, in 2008-11 he was awarded one of the highly competitive ESRC Professorial Fellowships (awarded to only 3-5 social science researchers across all of the UK).

