Comparability of High-Stakes Exams: Education Session Q&A

Comparability of High-Stakes Exams Proctored in Test Centers and via Live Remote Proctoring: A Multi-Method Psychometric Investigation across Multiple Testing Programs

This Education Session was held on Thursday, July 21, 2022, at 11:00 a.m. Central

Featured Speakers

  • Li-Ann Kuan, PhD, Senior Vice President, Test Development Services, Prometric
  • Michelle Chen, Validation Studies and Test Research Lead, Prometric
  • Gemma Cherry, Postdoctoral Researcher at the Centre for Assessment Research, Policy and Practice in Education (CARPE), Dublin City University

Presentation Overview

Remote proctoring provides an alternative way for candidates to complete exams outside of test centers and allows test providers to reach a broader candidate pool. Because many testing programs use both live remote proctoring (LRP) and test center proctoring (TCP) to deliver exams, a fundamental consideration is the equivalence, or comparability, of test performance and score meaning across the two delivery modalities. Stakeholders rely on such validity and comparability evidence to make informed decisions about using remote proctoring.

This session presents two studies using data from a range of professional licensing and certification exams to investigate the comparability of the exams delivered in test centers and those delivered via LRP. Together, the presentations demonstrate various methods for investigating comparability from multiple data sources and show strong evidence of comparability for high-stakes exams delivered via LRP and TCP.
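To make "comparability" concrete, the minimal sketch below illustrates one common type of check: comparing score distributions and pass rates between the two delivery modalities using a mean-difference test and a standardized effect size. This is an illustrative example only, not the specific analyses presented in the session; the data, variable names, and cut score are all hypothetical.

```python
# Illustrative comparability check between two delivery modalities.
# All data, names, and thresholds here are hypothetical examples,
# not drawn from the studies presented in this session.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
# Hypothetical scaled scores for candidates in each modality.
scores_tcp = rng.normal(loc=500, scale=100, size=2000)  # test center proctoring
scores_lrp = rng.normal(loc=502, scale=100, size=2000)  # live remote proctoring

# Welch's t-test: is the mean score difference statistically detectable?
t_stat, p_value = stats.ttest_ind(scores_lrp, scores_tcp, equal_var=False)

# Cohen's d: is any observed difference practically meaningful?
pooled_sd = np.sqrt((scores_lrp.var(ddof=1) + scores_tcp.var(ddof=1)) / 2)
cohens_d = (scores_lrp.mean() - scores_tcp.mean()) / pooled_sd

# Pass-rate comparison at a hypothetical cut score of 450.
pass_lrp = (scores_lrp >= 450).mean()
pass_tcp = (scores_tcp >= 450).mean()

print(f"t = {t_stat:.2f}, p = {p_value:.3f}, Cohen's d = {cohens_d:.3f}")
print(f"pass rate: LRP = {pass_lrp:.1%}, TCP = {pass_tcp:.1%}")
```

In practice, a study like those described here would also need to account for self-selection into modalities and examine comparability at the item level, not just at the level of total scores.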

Q&A with Li-Ann Kuan

Why is this topic important to the IT certification industry?

IT credentialing organizations were early adopters of online proctoring, even before the pandemic. With the increasing popularity of online proctoring over the past two years, more research has emerged on this topic. Because many testing programs use both live remote proctoring (LRP) and test center proctoring (TCP) to deliver exams, a fundamental consideration is the equivalence, or comparability, of test performance and score meaning across the two delivery modalities. However, such studies remain sparse. In this session, we will share findings based on data from 15 large-scale credentialing exams to address this critical question.

What key takeaway do you hope attendees learn or implement based on your presentation?

We will share multiple sources of evidence supporting the comparability of exams delivered through live remote proctoring (LRP) and via test centers. The studies use large datasets from a variety of high-stakes exams, which supports good generalizability of the results. Attendees can use this knowledge to inform their decisions regarding the use of dual delivery modes.

What’s the biggest change for the IT certification industry that this topic is driving, or that the industry should be aware of? Any trends?

Offering dual modalities empowers candidates to test via the option that suits them best, which can encourage more candidates to earn more certifications and, in turn, potentially help them advance their careers. Additionally, employing multiple delivery modalities helps programs scale. To do this successfully, it is critically important to evaluate and ensure the comparability and validity of exams delivered in different ways.

About the Speakers

Li-Ann Kuan

Dr. Li-Ann Kuan is an educational psychologist with over 20 years of experience in the testing industry. Dr. Kuan received her Bachelor of Science in Psychology from Brown University, and a Master of Arts and a Doctor of Philosophy in Psychological Studies in Education from the University of California, Los Angeles. At Prometric, she serves as Senior Vice President for Test Development Solutions and leads a team of exam measurement experts who are responsible for creating reliable measures that provide valid performance interpretations. Dr. Kuan provides ongoing leadership in the development, improvement, and evaluation of all existing and future assessment products.

Gemma Cherry

Dr. Gemma Cherry is the Prometric Postdoctoral Researcher at the Centre for Assessment Research, Policy and Practice in Education (CARPE), Dublin City University (DCU). She holds a Ph.D. in Education from Queen’s University Belfast (QUB). She is also an Associate Fellow of the Higher Education Academy. Gemma contributes to the full programme of research at CARPE, addressing many of the challenges posed by existing and new conceptions of assessment.

Michelle Chen

Dr. Michelle Chen is a psychometrician with eight years of experience in the testing and assessment industry. She holds a Ph.D. in Measurement, Evaluation, and Research Methodology from the University of British Columbia. Dr. Chen is currently the Validation Studies and Test Research Lead at Prometric. Her work supports test development, evaluates score validity, and promotes a better understanding of measurement and assessment.
