AI in Assessment: Post-Event Survey Results

On December 9, 2025, RMJ Assessment hosted a virtual, pan-Canadian large-scale assessment forum that brought together more than 100 representatives from provincial and territorial ministries of education across the country. The event focused on the use of AI in assessment and included three sessions: AI in scoring/marking, presented by Vretta; automatic item generation, presented by MGHL Partners; and AI for item development, presented by the Education Quality and Accountability Office (EQAO).

A post-event survey indicates that the forum was well received. The event earned a strong overall rating, and virtually all respondents felt the sessions were well organized, relevant, and about the right length. When asked what they liked most about the forum, participants most often mentioned the quality of the presentations and presenters, as well as the opportunity to gain new knowledge about AI and how it is currently used in assessment. Few respondents offered suggestions for improvement; among those who did, the most common request was more time for questions and discussion. Virtually all participants supported holding another forum on a topic to be determined. Suggested topics for future events included:

  • More sessions related to AI
  • Scoring topics, such as how to improve human scoring and hybrid (human and AI) scoring
  • Technology-enhanced items (constructs and best use)
  • Guarding against bias in assessment
  • How data and reporting are used by ministries and school authorities (approaches for communicating student performance and providing feedback to educators and students)
  • How to improve educators' perception and understanding of the contribution of standardized assessment

A recording of the forum is available here.

Passcode: zDYh1&b8
