Oracy can be assessed

Amanda Moorghen, Head of Impact and Research at Voice 21, considers the challenges of assessing oracy.


We think we know what exams look like – a hundred or more students sat at individual desks in the cleared-out lunch hall, working in silence. Assessing oracy doesn’t fit into this picture – we talk about ‘sitting an exam’, not speaking it! We worry about the logistics (“we need to film them all! It’ll take too long!”); we worry about the reliability of marking (“it’s too subjective!”); and we might worry that the sorts of talk we care most about are the hardest to assess fairly (“I can sort of imagine assessing a single speech… but what about a group discussion?”). 

But we need to tackle these challenges head on: 

Schools need to know what works, so they need an oracy assessment 

Teachers and school leaders have neither the time nor the money to do every single thing that could be valued. The best teachers and school leaders use the available evidence, and a rich knowledge of their context, to prioritise ruthlessly. A reliable oracy assessment that is practical in an everyday school context would unlock the ability to prioritise the most effective teaching and learning approaches. It would also enable us to better understand the vital role oracy plays in supporting other outcomes, from academic achievement to student wellbeing. 

We need a national picture of our strengths and weaknesses

We know that not every child currently receives the high-quality oracy education to which they are entitled. Fewer than a quarter of secondary teachers and fewer than half of primary teachers report being confident in their understanding of the ‘spoken language’ requirements outlined in the National Curriculum. An oracy assessment could enable government and national actors to deliver targeted funding and other support where it’s needed most, and ensure oracy is not ‘invisible’ at a policy level, in comparison to other important outcomes (e.g. literacy and numeracy) for which detailed data is available.

The challenges of assessing oracy

This isn’t a new challenge, although it has gained renewed prominence following the removal of the Speaking and Listening component from GCSE English. The main issues are:

Logistics

It’s harder to ‘store’ oracy: you need video/audio files rather than written documents. It can also be harder to gather – it’s not as simple as sitting the class down in one room to complete a written paper. As a result, it’s often impractical for “oracy exams” to be as long as their written counterparts – which makes it harder to provide a reliable assessment that covers everything we want to know.

 

What type of talk do we assess?

There are lots of types of talk – from exploratory talk (the sort we use collaboratively to discuss or solve problems) to presentational talk (more ‘polished’ talk, such as giving a speech). Moreover, these types of talk may vary in appearance across contexts, and some genres of talk may involve additional specific skills or competencies. Any assessment needs to navigate the pitfalls this creates: the assessment might offer too narrow a conception of oracy (which, depending on its use, might have knock-on effects for the oracy students are taught and have the opportunity to practise); or it might be too broad, so that some aspects feel irrelevant.

Reliability 

To be useful, assessments have to give us reliable answers. The world of assessment recognises many types of reliability, but for our purposes the main concern is whether we can design an assessment that “isn’t too subjective”, i.e. one where the same piece of work is likely to receive the same grade consistently, even when different people do the marking.
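To give a concrete (and purely illustrative) sense of what that means, the short sketch below – which is not part of any Voice 21 assessment, and uses invented grades – compares two markers’ grades for the same eight performances, reporting both their raw agreement and Cohen’s kappa, a standard statistic that corrects for the agreement we would expect by chance alone.

# Illustrative sketch only: invented grades from two markers for the same
# eight performances, used to show one common way of quantifying reliability.
grades_marker_1 = ["B", "A", "C", "B", "A", "C", "B", "A"]
grades_marker_2 = ["B", "A", "B", "B", "A", "C", "C", "A"]

n = len(grades_marker_1)

# Raw agreement: the proportion of performances given the same grade by both markers.
observed_agreement = sum(a == b for a, b in zip(grades_marker_1, grades_marker_2)) / n

# Cohen's kappa corrects for the agreement we'd expect purely by chance.
categories = set(grades_marker_1) | set(grades_marker_2)
expected_agreement = sum(
    (grades_marker_1.count(c) / n) * (grades_marker_2.count(c) / n)
    for c in categories
)
kappa = (observed_agreement - expected_agreement) / (1 - expected_agreement)

print(f"Raw agreement: {observed_agreement:.2f}")  # 0.75 for these invented grades
print(f"Cohen's kappa: {kappa:.2f}")               # about 0.62; closer to 1 = more reliable

The higher the kappa, the less a student’s grade depends on who happens to be doing the marking.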

 

Each of these challenges can be met, but it’s hard to meet them all at the same time. This isn’t unique to oracy. Consider the range of assessment methods we use for students’ written work – we wouldn’t want to use a formative, peer-assessment method to determine which GCSE grades to give, but nor would it be appropriate to replace every weekly spelling test with a 45-minute paper based on a centrally defined exam specification!

Assessing oracy using comparative judgement 

At Voice 21 we’re working to develop an oracy assessment that can be used by schools once or twice a year to monitor the progress of their students against the Oracy Framework. Voice 21 Oracy Schools already engage in a wide range of impact assessment activities. These include assessing their school’s oracy provision against the Oracy Benchmarks using Voice 21’s Oracy Surveys; conducting classroom-based research, perhaps using tools like T-SEDA (which measures changes in the quality of students’ discussion); and monitoring outcomes that their oracy provision is designed to impact, such as reading scores and behavioural incidents.

In a formative context, schools monitor individual students’ progress by creating portfolios of student work to show change over time; by assessing students against their school’s oracy progression framework; and by using peer-assessment methods such as ‘Talk Detectives’.

It has recently become possible to trial a comparative judgement approach to assessing oracy. Traditional ‘absolute’ judgement relies on teachers using a rubric or mark sheet to assess students – comparing each performance to a set of descriptions in order to allocate a mark or grade. This can be really difficult to do – the assessor may have to look for lots of different features of talk, and the descriptions can be hard to match to real life (is this student’s speech ‘somewhat’ or ‘very’ well-reasoned?). Previous assessments designed in this way have suffered from poor reliability.

By contrast, a comparative judgement approach asks the assessor to compare two performances and decide which is better. Then two more are presented for comparison. Over time, the comparative judgement system uses these comparisons (which may come from multiple assessors) to rank all the performances. Grades or scores can then be mapped onto this ranking to communicate the results in a meaningful way. This method tends to produce more reliable results, particularly when assessing ‘performance’, where an expert may be able to recognise quality consistently without being able to describe it easily.
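To make the mechanics concrete, here is a minimal sketch (in Python) of how a batch of pairwise ‘which is better?’ judgements can be combined into a ranking. It fits a simple Bradley-Terry model with an iterative update; the performances and judgements are invented for illustration, and RM Compare’s adaptive approach is more sophisticated than this.

# Illustrative sketch: turning pairwise judgements into a ranking with a
# simple Bradley-Terry model. All data below is invented.
from collections import defaultdict

# Each tuple records one judgement: (winner, loser).
judgements = [
    ("A", "B"), ("A", "C"), ("B", "C"),
    ("C", "D"), ("D", "B"), ("A", "D"), ("B", "A"),
]

items = sorted({performance for pair in judgements for performance in pair})
wins = defaultdict(int)          # how many comparisons each performance won
pair_counts = defaultdict(int)   # how many times each pair was compared

for winner, loser in judgements:
    wins[winner] += 1
    pair_counts[frozenset((winner, loser))] += 1

# Start every performance at equal 'strength', then refine iteratively
# (a standard minorisation-maximisation update for the Bradley-Terry model).
strength = {item: 1.0 for item in items}
for _ in range(100):
    updated = {}
    for i in items:
        denom = sum(
            pair_counts[frozenset((i, j))] / (strength[i] + strength[j])
            for j in items
            if j != i and frozenset((i, j)) in pair_counts
        )
        updated[i] = wins[i] / denom if denom > 0 else strength[i]
    total = sum(updated.values())
    strength = {item: s / total for item, s in updated.items()}

# Higher strength means the performance was judged better more often;
# grades or scores can then be mapped onto this ranking.
for item, s in sorted(strength.items(), key=lambda kv: -kv[1]):
    print(f"{item}: {s:.3f}")

The important point is that each assessor only ever makes holistic better/worse decisions; the model does the work of combining many such decisions, potentially from many different assessors, into a single scale.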

In Voice 21’s project, ‘Comparing Talk’, we are using RM Compare, an adaptive comparative judgement platform, to assess students’ oracy. Our initial proof-of-concept trials were promising: we found that we could generate reliable rankings of examples of student talk. Additionally, participating teachers enjoyed assessing student work on the online platform, as it gave them the opportunity to see the work of students from a range of different schools around the UK. 

There’s still work to be done – we are working to expand our suite of robustly designed assessment tasks to cover a wider range of types of talk and to be appropriate for a wider range of age groups. We are also working with RM to make sure that teachers have the best possible experience when they use the assessment: minimising the time needed to assess each group of students, and maximising the usefulness of the insight generated.

Conclusion

Comparative judgement has the potential to change the game for oracy assessment. We’re able to bring new technology to bear on an old problem, with the hope of creating something of great value to teachers in Voice 21 Oracy Schools and beyond. Initial proof-of-concept investigations leave us quietly confident in this approach, and with lots to think about as we turn our thinking and theorising into an assessment that offers teachers game-changing insight via a simple, easy-to-use platform.

This article is from The Talking Point, the Voice 21 annual journal. 
