Automatic Item Generation: Lessons Learned, Next Steps, and Outstanding Questions
This session was held on Thursday, April 15, 2021
The ITCC April Education Session included a presentation by Anjali Weber, Audry Hite, and Nadine McBride from Amazon Web Services (AWS).
Advances in computer-based assessment have dramatically increased candidates' access to exams. With this increased volume comes the risk of item overexposure and theft, both of which threaten exam validity and the value of certifications in the field. Traditional item development methods are thorough and produce high-quality items, but they are time-consuming and costly. High-volume exam programs will need practices that can quickly grow the available pool of items to supplement traditional item writing. Automatic Item Generation (AIG) is an assessment design strategy that can generate multiple items from a single template. In this session, we will present lessons learned and initial implementation considerations to start your journey.
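The core idea of generating multiple items from a single template can be sketched as follows. This is an illustrative example only, not AWS's implementation; the stem, slot names, and values are all hypothetical.

```python
from itertools import product

# Hypothetical item model: a stem with variable slots plus a rule for
# computing the correct answer (the "key") from the slot values.
STEM = ("A {service} server handles {rate} requests per second. "
        "How many servers are needed for {total} requests per second?")

SLOT_VALUES = {
    "service": ["web", "API"],
    "rate": [50, 100],
    "total": [500, 1000],
}

def generate_items():
    """Expand the template over the Cartesian product of slot values."""
    items = []
    for service, rate, total in product(*SLOT_VALUES.values()):
        stem = STEM.format(service=service, rate=rate, total=total)
        key = total // rate  # answer derived from the variables, not hand-written
        items.append({"stem": stem, "key": key})
    return items

items = generate_items()
print(len(items))  # 2 * 2 * 2 = 8 items from one template
```

Even this toy version shows the economics: one reviewed template yields many surface-distinct items, which is what makes AIG attractive for high-volume programs facing item theft.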
Q&A with Anjali Weber
Why is this topic important to the IT certification industry?
Employers rely on IT certifications to validate proficiency at the skill levels needed to support emerging markets and navigate an increasingly competitive pool of applicants.
What key takeaway do you hope attendees learn or implement based on your presentation?
- The importance and nuance of construct validity in item development: AIG, and Assessment Engineering (AE) more broadly, focus on systematically evaluating competency along a proficiency continuum and highlight the importance of intentionally exploring the scope of your construct.
- Trust the process. The more theoretical foundation your exam program has (domain analysis, content validation steps, etc.), the smoother your transition to AIG methodologies will be.
- Our ability to provide evidence of proficiency along the continuum of a construct depends on our validation of that construct. Under AE, a task model is a carefully constructed specification for a task that poses a particular challenge to the candidate. Changes in difficulty should not be random outcomes of SME interpretation or individual choice, which can unintentionally redesign the construct being measured.
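The task-model point above can be made concrete with a small sketch. In the AIG literature, variables that drive difficulty are often called "radicals" and purely cosmetic variables "incidentals"; a task model fixes the radicals so that every generated variant sits at the same intended difficulty. The fields and values below are hypothetical, not drawn from the AWS program.

```python
from dataclasses import dataclass
from itertools import product

@dataclass
class TaskModel:
    """A task model pins down the construct and the difficulty-driving
    settings; only incidentals (surface features) vary between items."""
    construct: str
    radicals: dict      # difficulty-driving settings, fixed for this model
    incidentals: dict   # cosmetic slot values, free to vary

    def variants(self):
        """Every incidental combination yields an item at the same difficulty."""
        keys = list(self.incidentals)
        return [
            {"construct": self.construct, **self.radicals, **dict(zip(keys, combo))}
            for combo in product(*self.incidentals.values())
        ]

model = TaskModel(
    construct="capacity planning",
    radicals={"reasoning_steps": 2, "distractor_type": "off-by-one"},
    incidentals={"service_name": ["storage", "queue"],
                 "region": ["us-east", "eu-west"]},
)
print(len(model.variants()))  # 2 * 2 = 4 same-difficulty variants
```

Letting SMEs change a radical ad hoc (say, the number of reasoning steps) would silently shift difficulty, which is exactly the unintended construct redesign the presenters warn against.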
What’s the biggest change for the IT certification industry that this topic is driving, or that the industry should be aware of? Any trends?
- Rapid pace of change and obsolescence of content
- Need for efficiency in scaling efforts to growing candidate volumes
- Mitigation of impact of content theft and proxy testing
About the Speakers
Anjali Weber is the Manager of Certification Strategic Initiatives at AWS.
Audry Hite is the AIG Program Manager at AWS.
Nadine McBride is a Psychometrician at AWS.