Evolving Assessment Centre Practices: Redesigning Competency Frameworks and Measurement for Emerging AI-Related Skills
Assessment Centres (ACs) have long been recognized as one of the most comprehensive and valid approaches for evaluating managerial and leadership potential. Research has found that competency evaluations generated through ACs remain reliable predictors of job-related performance (Rudrarat, 2025; Sackett et al., 2022). Traditionally, ACs are designed to assess relatively stable behavioural competencies linked to future job success, particularly for leadership and managerial roles. As a result, the competencies most frequently measured include leadership, communication, decision-making, teamwork, and problem-solving (Afsouran, Thornton III, & Charkhabi, 2022; Herd, Alagaraja, & Cumberland, 2016). Our internal data from 2020–2025 further indicate that most competencies assessed fall within the leadership, business, and collaboration domains, reflecting the organizational expectations and demands of previous years.
However, the accelerating pace of technological change, the rise of AI-driven workflows, and the increasing complexity of organizational ecosystems have shifted the competency landscape. New skill sets are emerging as essential for employees to remain competitive and effective in this ever-changing business climate. Among these, AI literacy has been regarded as a future-critical competency. AI literacy encompasses foundational understanding of AI systems, prompt engineering skills, and the ethical, legal, and societal considerations tied to AI adoption (Bankins, Hu, & Yuan, 2024; Peter, Riemer, & Norman, 2024; World Economic Forum, 2025). Evidence suggests that AI literacy enhances adaptability, innovation, and overall performance in technology-enabled workplaces (Imjai et al., 2025; Niam et al., 2025).
Despite its growing importance, AI literacy remains largely unmeasured within current AC practices. Traditional AC exercises (e.g., in-baskets, interaction role-plays, and business simulations) require substantial redesign to elicit behaviours that reflect AI-related competencies. This gap raises concerns about the predictive validity of ACs in identifying talent suited to future organizational demands, and about the preparedness of leadership pipelines. Furthermore, continued reliance on assessing only traditional competencies may limit the ability of ACs to support organizations in addressing new challenges and solving complex, technology-driven problems.
Addressing these limitations requires several approaches. First, competency frameworks must be expanded to define and operationalize AI literacy, outlining its core components: basic AI knowledge, understanding of the capabilities and limitations of generative AI, contextual decision-making when implementing AI, and awareness of ethical and legal considerations (Almatrafi, Johri, & Lee, 2024; Annapureddy, Fornaroli, & Gatica-Perez, 2025; Faruqe, Watkins, & Medsker, 2021). Second, digital collaboration scenarios should be embedded within AC exercises to simulate technology-mediated decision-making and mirror the digital environments of employees' workplaces. This can be strengthened by integrating AI-supported behavioural analytics to enhance observation precision through data-informed insights (Tenison & Sparks, 2023). Lastly, performance-based simulations offer a promising complement: Bartolomé, Garaizar, and Larrucea (2022) demonstrate the feasibility of performance-based digital competency assessments. Incorporating such simulations alongside ACs can enrich assessment insights by leveraging multiple methods to capture future-relevant behaviours.
By modernizing AC structures and aligning them with evolving future-of-work skill requirements, organizations can strengthen the relevance and predictive accuracy of their assessments. Updating competency frameworks and leveraging digital, technology-enhanced simulations will enable ACs to capture future-critical capabilities and support the development of talent prepared to navigate ever-evolving challenges.

Presenter
Hani Pahlevi
Hani is an assessment and organizational psychology practitioner specializing in assessment design, psychometric tool development, AI-enhanced Assessment Centres, and behavioural simulations. Since 2019, she has worked at Daya Dimensi Indonesia, where she develops AC frameworks, exercises, scoring systems, and digital assessment solutions while conducting research on competency models, validity evidence, and future-of-work skills. Her experience includes creating role-plays, in-baskets, group simulations, and AI-supported assessment tools used across industries. She collaborates with organizations to align assessment strategies with evolving talent needs and is committed to advancing evidence-based, future-ready assessment practices in Indonesia.
