As schools in England adjust to a post-levels world, there have been many in-depth discussions about what data and tracking should look like. We’re told that no one expects assessment data to look a particular way, but it is important that it helps schools to notice pupils who need extra support and/or challenge. This need for assessment to feed directly into teaching and learning is vital, and it applies to everyone from a teacher planning lessons right through to the head teacher planning funding for interventions.
Most of our schools are now using the markbooks regularly for lesson planning and gap analysis to ensure they pick up on support needs as early as possible. This has helped to shift our mindset away from the idea of labelling a child, e.g. as a level 2B, and towards looking at pupils’ achievements and weaknesses so that they can be developed. Overall, we’ve realised that looking at data in “levels/stages” terms is only useful when comparing cohorts and groups within them. Even then, looking at this data more than a couple of times a year can be unproductive. The more often we look at the data in numbers, the less reliable those numbers become, as the very nature of learning is not linear in such small chunks.
One thing that has really struck me recently, however, is that we now need to start changing the way we look at those cohort comparisons too. All too often we use them as a deficit model rather than looking for, and finding, the positives.
The Vulnerable Groups Analysis in the Classroom Monitor homepage favourites is used by many to view attainment and progress for groups such as children with English as an Additional Language (EAL) or free school meals (FSM), and to compare performance between different groups such as boys and girls. In data terms, this CAN be interesting and can form the narrative around their learning journeys, but can it actually help to change anything or create impact on learning? Also, do we want to keep sharing this information if every term it shows that vulnerable groups ARE behind the rest of the cohort?
Example: who really needs the extra support?
Overall data may show that FSM children, children with SEND and girls are all falling behind their peers in maths. On the surface, this may suggest that we need to do some intervention work with FSM children, SEND children and girls in maths, but we don’t see the full picture until we look at the individual pupils:
- Child A: a girl with SEND who has free school meals is achieving in line with her peers in maths
- Child B: a girl who also falls into all three categories but is achieving below her peers in maths
- Child C: is in none of the groups identified as falling behind in maths but is achieving below his peers
This tells us where our intervention should be best placed: providing some support for Child B and Child C in maths. It doesn’t actually matter which of the “vulnerable groups” these children do or do not fall into. Child A is achieving well without support so there is no need to intervene specifically with her maths just because she falls into one of those groups.
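The mismatch between the group-level view and the pupil-level view can be made concrete with a small worked example. This is a minimal sketch with invented names, scores and a made-up support threshold (nothing here comes from a real markbook export): the group averages flag girls, SEND and FSM as behind the cohort, yet checking pupil by pupil shows that Child A needs no intervention while Child C, who sits in none of those groups, does.

```python
from statistics import mean

# Hypothetical pupil records: group flags and a maths score
# (all data invented for illustration; a real markbook export would differ).
pupils = [
    {"name": "Child A", "girl": True,  "send": True,  "fsm": True,  "maths": 72},
    {"name": "Child B", "girl": True,  "send": True,  "fsm": True,  "maths": 40},
    {"name": "Child C", "girl": False, "send": False, "fsm": False, "maths": 48},
    {"name": "Child D", "girl": False, "send": False, "fsm": False, "maths": 85},
    {"name": "Child E", "girl": False, "send": False, "fsm": False, "maths": 80},
]

cohort_avg = mean(p["maths"] for p in pupils)

# Group-level view: each "vulnerable group" average sits below the cohort.
for flag in ("girl", "send", "fsm"):
    group = [p["maths"] for p in pupils if p[flag]]
    print(f"{flag}: group avg {mean(group):.1f} vs cohort {cohort_avg:.1f}")

# Pupil-level view: who actually needs support, regardless of group labels?
threshold = cohort_avg - 10  # an arbitrary cut-off, purely for illustration
needs_support = [p["name"] for p in pupils if p["maths"] < threshold]
print("Needs support:", needs_support)
```

Run as written, the group averages all come out below the cohort average, but the pupil-level check picks out only Child B and Child C — which is exactly where the intervention belongs.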
So now our main focus should be on measuring the impact of the interventions themselves. If you have Child B and Child C plus a number of other children who are behind in maths, they could be set up as a group within Classroom Monitor. Their individual and collective needs can then be identified through gap analysis of the markbook, targeted interventions can be put in place to plug these gaps, and the attainment and progress of the group can be monitored.
Then, the next time you look at data on that class, you can compare the whole-class data with the intervention-group data. Instead of seeing that children are on track, behind or ahead based on their category heading, you can assess whether the intervention has worked. If it has, great; if not, it’s back to the drawing board to rethink the intervention and how it impacts on progress.
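That comparison of intervention-group gains against whole-class gains can be sketched in a few lines. All the before/after scores below are invented, and the idea of indexing the intervention group by position in the class list is just a convenience for this sketch, not how any tracking tool stores it.

```python
from statistics import mean

# Hypothetical maths scores before and after a six-week intervention
# (invented for illustration).
class_before = [72, 40, 48, 85, 80, 66, 74]
class_after  = [75, 58, 62, 86, 82, 70, 76]

# Positions of the intervention group within the class lists
# (Child B and Child C in this sketch).
intervention = [1, 2]

def avg_gain(before, after, idx=None):
    """Average score gain, either for the whole class or for selected pupils."""
    if idx is None:
        idx = range(len(before))
    return mean(after[i] - before[i] for i in idx)

class_gain = avg_gain(class_before, class_after)
group_gain = avg_gain(class_before, class_after, intervention)
print(f"Whole-class gain: {class_gain:.1f}")
print(f"Intervention-group gain: {group_gain:.1f}")
```

If the intervention group’s gain clearly outstrips the whole-class gain, that is the evidence behind the “we implemented an intervention and it worked” conversation; if it doesn’t, that is the signal to redesign the intervention.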
The achievement model: a more positive outlook!
As you continue to monitor these intervention groups, your graphs and tables suddenly show not a deficit model but an achievement model. And you can share with outside agencies what you actually did to help. Instead of just saying, “Well yes, the girls are struggling”, your conversation becomes “This group of six children struggled with fractions; we implemented an intervention for six weeks and now they are able to answer those questions on the tests”. Isn’t that a more positive outlook? You have used data to make meaningful rather than arbitrary choices about how to tackle achievement. Your value-added story centres on your successes, not on information about a child that you have no influence over.
I don’t know about you but I much prefer the idea of an achievement model to a deficit model when comparing cohorts!