KEYNOTE SPEAKER - DAISY CHRISTODOULOU
Daisy Christodoulou is a Partner and Director of Education at No More Marking, a provider of online comparative judgement. She works closely with schools on developing new approaches to assessment.
Before that, she was Head of Assessment at Ark Schools, a network of academy schools. She has taught English in two London comprehensives and has been part of government commissions on the future of teacher training and assessment.
Daisy is the author of Seven Myths about Education and Making Good Progress? The Future of Assessment for Learning, as well as the influential blog daisychristodoulou.com.
In March 2020 her third book, Teachers vs Tech, was published by Oxford University Press.
You can also find her on Twitter @daisychristo
Attainment in English Schools during COVID-19:
What can Comparative Judgement tell us?
Assessments don’t just tell us about attainment at one moment in time. They tell us about student progress and standards over time. This has always been important, but the shocks caused by COVID-19 have made this feature of assessment more important than ever. What impact have school closures had on student attainment, and how persistent will this impact be?
Comparative Judgement is an assessment technique that is well-suited to measuring standards over time. It’s also ideal for the assessment of complex open tasks that are typically hard to assess reliably. Since 2017, the team at No More Marking have been running large-scale Comparative Judgement writing assessments in England.
Every country has been affected by the disruption to children’s education caused by the pandemic - some more than others. This presentation will reveal what the results of these writing assessments have told us about children’s attainment before and during the COVID-19 pandemic in England. I will explain how the Comparative Judgement technique works at both school and national level, and I will seek to draw out some key learning points which I hope will be relevant to schools in Singapore.
Comparative Judgement Writing Assessments:
How do they work and what do the results mean?
Assessing students’ writing skills poses a number of difficult challenges. You can prioritise the reliability of the assessment, in which case you’ll choose lots of closed questions with right and wrong answers. The problem with this is that these tasks don’t fully represent the skill you are trying to assess, and can lead to damaging distortions of the curriculum.
Or you can prioritise the authenticity of the assessment, and set a more open task that requires real-world writing skills. The problem with this is that these tasks are extremely difficult to assess reliably, making it hard to know if students really have acquired the desired skill.
Comparative Judgement (CJ) is a solution to this dilemma. It allows you to set creative and open-ended tasks, but it also provides high levels of reliability.
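For readers curious about the mechanics behind this reliability claim: in a CJ exercise, judges repeatedly compare two pieces of writing and pick the better one, and the accumulated pairwise wins are then fitted to a statistical model to place every script on a common scale. A model commonly used for this kind of pairwise data is the Bradley-Terry model. The sketch below is purely illustrative - it is not No More Marking's implementation - and uses the classic iterative (Zermelo/MM) estimation procedure.

```python
def fit_bradley_terry(judgements, n_items, iters=200):
    """Estimate a quality score per script from pairwise judgements.

    judgements: list of (winner, loser) index pairs, one per comparison.
    Bradley-Terry assumes p(i beats j) = s_i / (s_i + s_j); the loop
    below is the standard iterative maximum-likelihood update.
    """
    wins = [0] * n_items
    for winner, _ in judgements:
        wins[winner] += 1

    scores = [1.0] * n_items
    for _ in range(iters):
        updated = []
        for i in range(n_items):
            # Sum 1/(s_i + s_j) over every comparison involving item i.
            denom = 0.0
            for winner, loser in judgements:
                if i in (winner, loser):
                    opponent = loser if i == winner else winner
                    denom += 1.0 / (scores[i] + scores[opponent])
            updated.append(wins[i] / denom if denom else scores[i])
        # Normalise so the scale stays stable across iterations.
        total = sum(updated)
        scores = [s * n_items / total for s in updated]
    return scores
```

With scores on a single scale, two scripts never directly compared by any judge can still be ranked against each other, which is what allows open-ended tasks to be assessed with high reliability.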
In this workshop, I will describe in detail how CJ works at both school and national level. Participants will take part in some live judging of real students' writing, see the results, and explore what these results tell us about student performance in England over the last two years. We will also look at the impact this approach has on teaching and learning compared with the traditional approach of marking with a rubric. There will be time at the end for questions and discussion about how CJ could become a valuable technique for schools in Singapore.