[Apologies for cross-posting]
*1st Workshop on Automated Evaluation of Learning and Assessment Content*
AIED 2024 workshop | Recife (Brazil) & Hybrid | 8 July 2024
https://sites.google.com/view/eval-lac-2024/
The submission deadline for the Workshop on Automated Evaluation of Learning and Assessment Content, which will be held in Recife (Brazil) & online during the AIED 2024 conference, has been extended to May 22, 2024!
About the workshop
The evaluation of learning and assessment content has always been a crucial task in the educational domain, but traditional approaches based on human feedback are not always usable in modern educational settings. Indeed, the advent of machine learning models, in particular Large Language Models (LLMs), has made it possible to quickly and automatically generate large quantities of text, making human evaluation infeasible. Still, these texts are used in the educational domain -- e.g., as questions, hints, or even to score and assess students -- and thus the need for accurate and automated evaluation techniques becomes pressing. This hybrid workshop aims to attract professionals from both academia and industry, and to offer an opportunity to discuss the common challenges in evaluating learning and assessment content in education.
Topics of interest include but are not limited to:
- Question evaluation (e.g., in terms of alignment to learning objectives, factual accuracy, language level, cognitive validity, etc.).
- Estimation of question statistics (e.g., difficulty, discrimination, response time, etc.).
- Evaluation of distractors in Multiple Choice Questions.
- Evaluation of reading passages in reading comprehension questions.
- Evaluation of lectures and course material.
- Evaluation of learning paths (e.g., in terms of prerequisites and topics taught before a specific exam).
- Evaluation of educational recommendation systems (e.g., personalised curricula).
- Evaluation of hints and scaffolding questions, as well as their adaptation to different students.
- Evaluation of automatically generated feedback provided to students.
- Evaluation of techniques for automated scoring.
- Evaluation of bias in educational content and LLM outputs.
Human-in-the-loop approaches are welcome, provided that there is also an automated component in the evaluation and there is a focus on the scalability of the proposed approach. Papers on generation are also very welcome, as long as there is an extensive focus on the evaluation step.
The workshop will feature two keynote speakers:
- Zachary A. Pardos, Associate Professor of Education at UC Berkeley
- Victoria Yaneva, Manager of NLP Research at NBME
Important dates
[Extended] Submission deadline: *May 22, 2024*
Notification of acceptance: June 4, 2024
Camera ready: June 11, 2024
Workshop: 8 July 2024
Submission guidelines
Submission URL: https://easychair.org/conferences/?conf=evallac2024
Authors are invited to submit short papers (5 pages, excluding references) or long papers (10 pages, excluding references), formatted according to the workshop style available on the website.
Submissions should contain mostly novel work, but some overlap with work submitted elsewhere is acceptable (e.g., summaries, or a focus on the evaluation phase of a broader work). Each submission will be reviewed by members of the Program Committee, and the proceedings volume will be submitted for publication to CEUR Workshop Proceedings.
Organisers
Luca Benedetto (1), Andrew Caines (1), George Dueñas (2), Diana Galvan-Sosa (1), Anastassia Loukina (3), Shiva Taslimipoor (1), Torsten Zesch (4)
(1) ALTA Institute, Dept. of Computer Science and Technology, University of Cambridge
(2) National Pedagogical University, Colombia
(3) Grammarly, Inc.
(4) FernUniversität in Hagen
For more information about the workshop, please visit the website: https://sites.google.com/view/eval-lac-2024/