My position as a literacy specialist aligns with that of UKLA, and my role on the Commission is closely tied to representing their perspective, which mirrors my own. I have key concerns about the testing regime as it stands, and these are borne out by the practice I see in schools through my work in pre-service and in-service teacher education. In their response to the 2016 consultation, UKLA noted their concerns as:
- A narrowing of the curriculum, as schools concentrate on teaching to the requirements of the assessment system, at the expense of a broader education.
- The introduction of an unrealistic set of standards, which categorise nearly half the school population as failing, and undermine children’s sense of themselves as learners.
- The exclusion, in practical terms, of large groups of students (particularly SEND pupils), from participation in mainstream learning activities.
- A framework for the measurement of school performance which makes use of the partial, unreliable, and inaccurate data generated by the primary assessment system.
- A failure to assist pupils’ transition from primary to secondary education.
- A danger, unrecognised by the DfE, of damage to pupils’ health and well-being.
I notice particularly that grammar has become a ‘naming of the parts’ exercise: ‘out of context’ grammar drills have become the norm as children are taught how to succeed in the GPS test, rather than embedded, contextualised learning about language, which can be motivating, relevant and impactful (see, for example, Jones, Myhill & Bailey, 2013). Additionally, I am concerned by references to the phonics screening check as though it were a test of reading attainment. As UKLA state: ‘an assessment of word decoding skills may be perfectly valid as an assessment of decoding but not valid if it is used to make a judgement about reading more generally’ (UKLA, 2020). Somewhere along the line, ‘reading’ has been taken to mean ‘decoding’, and this is evident throughout The Reading Framework (DfE, 2021). This is troublingly misleading.
I find the work by Torrance and Pryor (2001) on the differences between divergent and convergent assessment particularly salient: we have increasingly devalued divergent assessment in the pursuit of the convergent. Harlen (2014) notes the significant limitations in the use of SATs data and makes useful comparisons with other systems of assessment – it is an enlightening study. In 2015–2016 I was part of a group (UKLA and CLPE) that worked on creating progression scales for reading and writing. Importantly, these scales place at their centre what it means to develop as a reader and as a writer, and they are grounded in respected theory and research. With an emphasis on a rounded view of literacy, and clear indications of ‘what next’, these scales are divergent assessment in practice, offering teachers a tool based on sound subject knowledge.
Jones, S., Myhill, D. & Bailey, T. (2013). Grammar for writing? An investigation of the effects of contextualised grammar teaching on students’ writing. Reading and Writing.
Torrance, H. & Pryor, J. (2001). Developing Formative Assessment in the Classroom: using action research to explore and modify theory. British Educational Research Journal, 27(5), 615–631. https://doi.org/10.1080/01411920120095780