That’s a tricky question to ask, isn’t it? Now that the new curriculum is becoming more embedded and we know what the end of Key Stage assessments look like (let’s assume Justine Greening doesn’t change her mind and everything stays the same for the next couple of years), it’s a good time to reflect on the changes happening in schools in light of the scrapping of levels.
There are plenty of people who’ll tell you that if you label your scores anything at all (Beginning, Developing, Secure, Embedding, Pioneering, etc.) then you have recreated levels. But it isn’t really that simple.
Levels was a culture, and it’s the culture that’s the hardest part to get rid of. According to the DfE, there was little consistency across the country with levels – one teacher’s 2B was another’s 2A, or even someone’s 3C – and the race for two sub-levels of progress per year forced some children to be held back and others to be pushed forward before they were really ready.
There were good parts of that culture, though – teachers had a common language which was understood within schools, between schools and by parents, at least as a national comparison of some sort. Removing levels was like pulling the rug from under teachers’ feet and leaving no replacement.
So how do we keep some of what we know and what was worthwhile without keeping the more negative aspects?
For me, the place to start is in how we apply the scores that we use. If you use them to compare cohorts – finding out that one intervention group is progressing faster than another, or that one class in the year group has higher attainment than another, for example – then that is clearly still really useful data. Even if your scores don’t translate into a nationally consistent model, it doesn’t matter for these purposes. If you, as a school, can use the data to spot trends and do something about them, then it shouldn’t matter that you use a label for the scores.
However, if you’re handing the scores out to individual pupils and parents, then there’s a danger that the levels culture is alive and well. Likewise, if you’re using a score to recreate a linear progress line similar to the old two-sub-levels-per-year model, then levels may still be embedded.
Do parents really need a score to tell them about their child, especially when there is no national comparison? If all it does is allow them to compare their child to another child in the same class, then it probably won’t provide them with anything they can act upon. Wouldn’t an Assessment Summary output be more useful? Something which tells them how their child has done with the objectives they have been taught, rather than an arbitrary score? It at least shows parents a way to become involved, because they can clearly see what the targets for their child are.
Similarly, does the child need to know their score? Will it help them to know they did better/worse than someone else in the class? Or again would a simple assessment summary target sheet be more useful?
As for the linear progress model, an unfortunate side effect is that teachers faced with the expectation of a linear graph will focus on the score more than the learning behind it when “data drop” time comes around. I’m not saying they will change what actually happens with teaching and learning in the classroom, but they may well change, even subconsciously, what is entered onto Classroom Monitor if they feel the score at the top of the markbook is more important than the ongoing formative assessment within it.
It’s likely that many conversations you’ve had over the last two years have covered similar ground. I know many schools have felt so strongly that they’d recreated levels that they’ve scrapped the scores altogether in favour of our percentage-only model, which allows them to track coverage rather than scores. This is perfectly valid – and if it works for you, get in touch to find out more about how you can scrap scores altogether.
However, if you still like having the data to hand in the Attainment and Progress area for inspections, governors, LAs/MATs etc. then don’t feel too bad – it’s a very common choice! Just think carefully about how you use it. In general, if you discuss a cohort (“The boys are averaging Beginning in maths, but the girls average a Developing”, for example), that is a perfectly good way to spot trends. If your discussions are about individuals (“Jane is a Beginning, but Sarah is a Developing”), that is when you might need to consider whether the scores are helpful in that context.