Call for Participation
We are announcing the first BEA 2024 shared task on the automated prediction of Difficulty And Response Time for Multiple Choice Questions (DART-MCQ).
Motivation
For standardized exams to be fair and valid, test questions, otherwise known as items, must meet certain criteria. One important criterion is that the items should cover a wide range of difficulty levels to gather information about the abilities of test takers effectively. Additionally, it is essential to allocate an appropriate amount of time for each item: too little time can make the exam speeded, while too much time can make it inefficient.
There is growing interest in predicting item characteristics such as difficulty and response time from the item text. However, because exam data are difficult to share, efforts to advance the state of the art in item parameter prediction have been fragmented and confined to individual institutions, with no transparent evaluation on a publicly available dataset. In this shared task, we bridge this gap by sharing practice item content and characteristics from a high-stakes medical exam, the United States Medical Licensing Examination® (USMLE®), for the exploration of two topics: predicting item difficulty (Track 1) and item response time (Track 2) based on item text.
Participation
The shared task has two separate tracks (an illustrative baseline sketch appears after the list):
• Track 1: Given the item text and metadata, predict the item difficulty variable.
• Track 2: Given the item text and metadata, predict the time intensity variable.
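Both tracks can be approached as text regression problems. As a minimal sketch only (not an official starter kit or baseline), the following Python example trains a TF-IDF plus ridge regression model; the file name "train_items.csv" and the column names "item_text" and "difficulty" are hypothetical and may not match the actual data release.

    # Illustrative baseline sketch for Track 1 (item difficulty prediction).
    # Assumes a hypothetical CSV with columns "item_text" and "difficulty";
    # the actual shared-task data format may differ.
    import pandas as pd
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import Ridge
    from sklearn.metrics import mean_squared_error
    from sklearn.model_selection import train_test_split
    from sklearn.pipeline import make_pipeline

    data = pd.read_csv("train_items.csv")  # hypothetical file name
    X_train, X_val, y_train, y_val = train_test_split(
        data["item_text"], data["difficulty"], test_size=0.2, random_state=0
    )

    # TF-IDF features over word unigrams and bigrams feeding a ridge regressor.
    model = make_pipeline(
        TfidfVectorizer(ngram_range=(1, 2), min_df=2),
        Ridge(alpha=1.0),
    )
    model.fit(X_train, y_train)

    # Report root mean squared error on the held-out split; the official
    # evaluation metric is specified on the shared task page.
    preds = model.predict(X_val)
    rmse = mean_squared_error(y_val, preds) ** 0.5
    print(f"Validation RMSE: {rmse:.3f}")

The same pipeline could be pointed at the time intensity variable for Track 2 by swapping the target column.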
Important Dates
Training data release: January 15
Test data release: February 10
Results due: February 16
Announcement of winners: February 21
Paper submissions due: March 10
Camera-ready papers due: April 22
Links
For more information about the shared task, see: https://sig-edu.org/sharedtask/2024
Organizers
Victoria Yaneva, National Board of Medical Examiners
Peter Baldwin, National Board of Medical Examiners
Kai North, George Mason University
Brian Clauser, National Board of Medical Examiners
Saed Rezayi, National Board of Medical Examiners
Yiyun Zhou, National Board of Medical Examiners
Le An Ha, University of Wolverhampton
Polina Harik, National Board of Medical Examiners