
Blind Peer Review
Thank you for your time and expertise in reviewing our conference abstracts. Please review the assigned abstracts by 09 December, and remember to return the completed form via e-mail.

Revolving Doors: Trends and Turns in the Assessment Landscape – Part II
Presentation
During the 2024 ACSG Conference, we presented the first part of a two-part series, Revolving Doors: Trends and Turns in the Assessment Landscape. In Part I, we focused primarily on the assessment validity debate unleashed by the Sackett et al. (2021) research presented at the 2022 SIOP Conference in Boston, positioning that debate as a call to take stock of salient issues as we venture into a re-ordered assessment landscape.
In this presentation (Part II), we focus on Assessment Centre (AC) research and practice, juxtaposing AC technologies with other selection methods in a changing assessment landscape. Lievens et al.’s 2009 article, Assessment Centers at the Crossroads: Toward a Reconceptualization of Assessment Center Exercises, has received ample attention since its publication over a decade and a half ago, yet its key questions remain largely unanswered. This suggests that, although AC research and practice stand on solid foundations, we develop and deploy AC technologies in a fluid, dynamic environment and must constantly critique our thinking and practice.
In their 2019 article, Toward a Better Understanding of Assessment Centers: A Conceptual Review, Martin Kleinmann and Pia Ingold provide a framework for analysis, highlighting key considerations regarding AC participants, assessors, and the assessment situation (context). Building on these landmark contributions, we share practice-based observations on areas of concern and strategic opportunities in the assessment landscape. Key amongst these is the emergence of AI-driven assessment technologies, specifically within the realm of behaviour-focused simulation exercises.
We conclude our discussion with in-house research findings and practice-based insights concerning the discourse prompted by the Sackett et al. (2022) research, focusing on how integrated assessment solutions can moderate generalised validity concerns and address client-specific assessment needs. Finally, we offer a summary of the burning issues regarding assessment validity and utility in a re-ordered assessment landscape.
We hope to leave delegates with a deeper understanding of the key factors and trends that inform current assessment best practices and will have a defining influence on future endeavours. We trust that this will be useful for AC practitioners (decision-makers) faced with mounting complexity in an ever-changing assessment landscape.
How will the delegates be able to apply your session content back on their job?
This discussion will raise awareness of the key questions facing assessment practitioners in the wake of recently published research findings on the validity and utility of assessment methods. It will provide insights into a re-ordered assessment landscape and awareness of how the emergence of AI-driven assessment technologies may provide answers to some of these questions. Ultimately, this may enable AC stakeholders to make informed decisions when addressing specific assessment needs and stimulate thinking towards a revised best practice framework for AC practitioners.
What type of tip/tool (e.g., a template, framework, etc.) will you leave the delegates with?
A conceptual framework for integrating theoretical perspectives and practice-based observations on the validity and utility of assessment methods in a changing assessment landscape.
What do you want your audience to know at the end of your presentation and what will the three main points be?
o Re-affirmation of the key elements that drive the utility and value of the AC method.
o Awareness of how AI-driven solutions may shape AC practices in future.
o Sensitivity towards the challenges and opportunities facing AC practitioners in a re-ordered assessment landscape.
