In recent years, many have beaten the drum about findings that rigorous curricula can improve student outcomes – with gains that sometimes outweigh those of other popular education reforms.
But to the frustration of education researchers, philanthropists, and others in the rigorous-curriculum movement, school districts often don’t buy – and teachers often don’t use – the curricula that the evidence suggests would best serve students.
More recently, this frustration has surfaced in observations – including my own – about how many teachers had to create materials on the fly to serve their students in the aftermath of COVID, and how comparatively few were able to access the quality online resources built over the years, according to a national survey by the Clayton Christensen Institute.
Frustration has mounted over what it will take for the majority of districts and teachers to adopt and use ‘evidence-based’ materials.
What’s missing from all the criticism, however, is that districts don’t ignore the evidence or the quality. It’s just that their definition of quality – and therefore of what counts as appropriate evidence that something meets that standard – sometimes differs from that of the ‘rigorous curriculum bandwagon’ (and yes, I admit it – I’m a card-carrying member).
Quality, in other words, is not absolute. A report by Thomas Arnett and Bob Moesta of the Christensen Institute titled “Solving the Curriculum Conundrum” (and an accompanying report applying the findings to the situation during COVID) shows how the definition of quality can differ depending on the circumstances a district finds itself in and the progress it is trying to make.
Before delving into the report’s findings, it’s worth dwelling on the claim that quality isn’t absolute. Too often in education we act as if the opposite were true. We ask whether a program is high quality – without asking for whom, under what circumstances, and by what measures.
To illustrate how ridiculous this is, consider the following question:
Which cup – a cup made from a mix of new and recycled paper designed for hot liquids, or a sturdy drinking glass – is better?
This is an absurd question. The paper cup is perfect for carrying hot liquids on the go and tossing afterward. The drinking glass is meant to be a staple in your kitchen for cold drinks. Your circumstances as an individual determine which works best.
The same goes for districts.
According to Arnett and Moesta, there are at least four different “jobs to be done” – the progress someone seeks in a given circumstance – that push districts to adopt a curriculum:
1) Transform: Help us transform teaching to tackle low achievement;
2) Build consensus: Help us manage a selection and reach consensus;
3) Update: Help us update our materials to better support teachers;
4) Influence: Help us shape the field.
For districts looking to transform teaching to tackle low achievement – where there is deep discontent from stakeholders like school board members – the definition of curriculum quality closely resembles the one that researchers and others in the rigorous-curriculum movement use, as these districts seek evidence that whatever curriculum they adopt will move the needle on achievement.
Research trials and evidence of alignment with standards are all useful in selecting a new curriculum – a selection that doesn’t necessarily happen “on cycle” with the traditional adoption calendar. But in these circumstances, districts may also prioritize other investments they believe will pay a greater dividend on achievement – such as coaching, redesigning learning models, rethinking how they use time, and so on – and leave their current curriculum in place, even if it looks poor from a rigorous-curriculum perspective.
In the consensus-building job, districts select a new curriculum on cycle, and their goal is to gain buy-in from key stakeholders – namely, the teachers who serve on the curriculum-selection committee. The curriculum director wants to survive the process unscathed.
In this case, quality means something that will engage students and be friendly and straightforward for teachers. The evidence teachers look for isn’t randomized controlled trials or EdReports evaluations, but signals from teachers in other districts who have used the curriculum, signs that what they’re considering isn’t too different from what they’re currently using, glimpses of flash and sizzle that might catch students’ interest, and so on. It’s not that driving achievement isn’t important; it’s just that, with everything else going on, it’s not the most urgent priority.
For districts looking to update their materials to better support teachers, there is more of a pull than for the consensus-building districts: teachers are unhappy with their current materials and want something different.
In many cases, that means their definition of quality is similar to that of districts in the consensus-building job, but as Arnett and Moesta observe, there are nuances here. If teachers’ dissatisfaction stems from current materials not being aligned with standards, then EdReports can be a valuable tool for gathering evidence during selection, for example. If, however, teachers are hungry for more project-based approaches to learning, then PBLWorks may be a better resource, with better evidence for what they’re trying to accomplish. The segments within this job matter, in other words.
Finally, districts looking to shape the field already have a solid reputation on which to build. Like the consensus-building districts, they adopt on cycle, but because they’re doing relatively well, they look to embrace an agenda that will influence the field more broadly – by incubating a new publisher, for example, or by adopting something that earns them acclaim for its “innovation.”
So what is quality for these districts? Whatever will win applause and generate positive publicity. The evidence lies in how external stakeholders seem to react to their potential choices, not in the raw research on the curriculum itself – an important distinction given education’s susceptibility to fads.
There are many conclusions to be drawn from this research – and much work ahead for those in the rigorous-curriculum movement to design curricula that match the progress districts seek to make.
But one conclusion I’ll draw at a minimum is that we should be wary of asking why districts don’t use evidence-based practices or care about quality. They do. It’s just that their circumstances and their definition of progress – and therefore of quality – differ from those of people who evaluate curriculum in an idealized vacuum, as if on a blank slate.