Does homework work? Does doing more of something usually make you better at it? Does retrieving knowledge from long-term memory more frequently make it easier to retrieve and build on in the future? Does water make things wet? Does fire burn flammable materials? Does leaving dirty dishes in the lounge room and not making your bed cause the parents of teenage daughters to become annoyed?
It’s a mystery. Or maybe it isn’t.
In essence, this is a question with a thunderingly obvious answer: Homework works. Nevertheless, there is still plenty of room for debate. For example, what types of homework are the most effective? We can discuss what might be an appropriate amount of homework in, say, lower primary school and how that would compare with what is appropriate for students at the pointy end of high school. We might also legitimately argue about the impact homework has on free play or hobbies or physical activity or family life or stress levels. Grown-up adult humans can handle the concept that everything comes with trade-offs, but perhaps that’s too complicated a message for some.
Which is why we see the repeated refrain across social media that somehow, despite the sheer obviousness of the case for it, homework does not work. What do progressive educators who say this mean?
Well, it’s hard to collect the evidence. Schools will be reluctant to randomly assign some students homework and others not as part of some researcher-led study—although there are a few examples of this in the literature. Yes, we can go out into the wild and look for correlations between those who do homework and those who do not, but these students are likely to be different to each other in more than just homework completion, so we cannot tease out this one effect.
OK. So why don’t we look at schools that set homework and compare their outcomes with schools that don’t? Well, again, they are likely to vary in other ways. What if we try to match the schools on these other characteristics and then run the analysis? Well, how do you capture all the ways schools vary? After all, there must be a reason why School A is assigning homework and School B is not, even if it is just about what that school values.
If we are unconvinced about the value of homework, what can we do? There are two approaches that at least superficially make sense. The first is to try to synthesise the flawed or confounded studies in an attempt to find an overall effect, usually computed as an ‘effect size’. I am not a fan of this approach because I don’t think this solves the problem of poorly designed research. However, it is very popular and it is the approach that Professor John Hattie, an influential education researcher, has taken to synthesising research. This is significant because Hattie is perhaps the source of the idea that homework does not work, at least for primary school students.
Hattie used to be a little more ambivalent, but in recent times, he has come out decisively against primary school homework, claiming that for these students, homework has pretty much ‘zero’ effect. We can trace this back to his 2008 book, Visible Learning. Here, Hattie finds an overall effect size in favour of homework, although it is below the threshold* he sets for being an effect worth pursuing. However, for primary school, he states the following:
“Shorter is better but for elementary students, Cooper, Lindsay, Nye and Greathouse (1998) estimated a correlation of near zero (d = -0.04) between time spent on homework and achievement.”
This statement is a little confusing. I can only find an ‘r-value’ of -.04 in the paper, i.e. a correlation, and so this supports ‘a correlation of near zero’. However, a ‘d-value’ is meant to be an effect size, and a d-value and an r-value are not the same thing. Nevertheless, the point stands.
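For what it’s worth, there is a standard textbook conversion between the two statistics for two-group designs, d = 2r/√(1−r²), and applying it here shows the mix-up makes no practical difference — the hypothetical d is also near zero. A quick sketch (the function name is mine, not anything from the papers discussed):

```python
import math

def r_to_d(r):
    """Convert a correlation r to a Cohen's d effect size
    using the standard two-group conversion d = 2r / sqrt(1 - r^2)."""
    return 2 * r / math.sqrt(1 - r ** 2)

# Cooper et al. (1998) report r = -.04 for elementary students.
# Whether read as a correlation or converted to d, it is near zero.
print(round(r_to_d(-0.04), 3))
```

So whichever statistic Hattie meant, ‘near zero’ is a fair description of that particular number.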
Is this where the ‘zero’ effect idea comes from? Cooper et al. (1998) surveyed teachers and students on homework set and homework completed, and the outcome measures were standardised test scores and grades assigned by the teacher. It is complicated to tease out these correlations and suggest causes. For example, teachers may consistently assign more homework to a higher or lower achieving class. Both seem potentially plausible. Or, at primary school, they may assign relatively simple homework that is completed quickly by more advanced students and takes less advanced students longer to do. In this latter case, teachers would report assigning the same amount of homework but less advanced students would report spending more time on it.
Cooper et al. (1998) therefore look at the data a number of different ways. Far from declaring primary school homework pointless, they conclude:
“Our results suggest that the benefits of homework for young children may not be immediately evident but exist nonetheless. First, by examining complex models and distinguishing between homework assigned and homework completed, we were able to show that, as early as the second and fourth grades, the frequency of completed homework assignments predicts grades, even when controlling for standardized test differences, amount assigned, and teacher, student, and parent attitudes.”
In Hattie’s updated Visible Learning: The Sequel, he refers to the same ‘correlation of near zero’ effect size figure, but this time references a different paper on which Cooper was the lead—a paper from 2006 rather than 1998. For some reason, I cannot find this study in the reference list but it may be this paper. This is a meta-analysis rather than a single study. The authors again suggest that the effect is stronger in secondary and close to zero in primary, but it’s complicated:
“A significant, though small, negative relationship was found for elementary school students, using fixed-error assumptions, but a nonsignificant positive relationship was found using random-error assumptions.”
Again, this is based on correlational evidence, so we have to be cautious. At one point, the authors note the possibility that the difference between K-6 and 7-12 is due to a ‘range restriction’, before stating there is no evidence for this. This possibility makes a lot of sense. In other words, if the amount of homework set in primary school varies less than in secondary school, that necessarily restricts the degree to which variations in the amount assigned or completed can correlate with achievement. This again illustrates the pitfalls of trying to process all such data into a single ‘effect size’ for homework. In fact, Cooper et al. (2006) caution us about the difference between correlations and effect sizes in the ‘Moderator Analyses’ section.
Setting aside the discussion around primary school homework, Hattie makes an important point:
“The nature of the homework also makes a difference: homework that entails the deliberate practice of something already taught has higher effects than homework requiring studying new or higher-order ideas.”
This suggests that we may see greater academic effects of primary school homework if it were focused on retrieval rather than, say, instilling good work habits or researching a topic. No more, ‘Ask your grandparents about an object in their home.’
The Education Endowment Foundation (EEF) use a similar approach to Hattie. Again, they note the effect of homework is stronger in secondary school than primary school but, unlike Hattie, they conclude there is still a meaningful effect in primary school. They also correctly highlight the limited nature of the available evidence.
An alternative to the Hattie/EEF approach to evaluating homework is to be more selective of the studies included in our analysis. What does it look like if we only consider better quality studies?
When Guo and colleagues conducted a 2024 review using much stricter selection criteria than either Hattie or the EEF, they found only eleven studies that met these criteria.
This is what they found (with the technical parts deleted):
“Overall, the meta‐analysis revealed that the students who did homework had better academic performance than those who did not, but not in arithmetic concepts. Two experiments explored the effectiveness of homework moderated by homework time. In Koch (1965), the effects of long daily homework (20–30 min) and short daily homework (10–15 min) were compared. The authors found that achievement in arithmetic concepts was higher with long homework assignments every day.”
Fancy that.
So, yes, according to the best evidence we have available, homework clearly works — maybe not as much at primary as at secondary, but that may be related to the quality of the evidence. It also makes sense that homework should work. The result triangulates with other heavily researched areas, such as time on task or academic learning time, and retrieval practice.
Homework won’t work if students don’t complete it. So, have a plan for that. And no doubt, our understanding of retrieval practice can perhaps help us design more effective homework than ‘finish off the write-up of your experiment’. In my view, homework should always be something students have already been explicitly taught how to do so that it is both about retrieval and something students can complete without parental input, thereby levelling the playing field between those students with motivated, academically-inclined parents who will help them and those without.
Importantly, even if we accept homework is effective, this does nothing to adjudicate the competing priorities of homework and home life. These are all valid issues to discuss, about which reasonable people will disagree and which schools will need to constantly negotiate with their community.
Whether homework is effective is not really one of them.
*Although I suspect he would not agree with this characterisation, this is essentially the threshold Hattie sets to deal with the fact that many studies are flawed and therefore tend to generate a positive effect size. For more of a discussion on this, see this post.
I think when evaluating whether homework is effective, we have to compare apples to apples. First, the homework has to build on the in-class lessons, foster opportunities for recall, be at the appropriate level of challenge, and mitigate extraneous load. The activity needs to be relevant and not busy work. So before we can determine whether homework conceptually is effective, we have to ensure the homework itself is designed well. And I am not sure that is always happening.
Another way to look at it is via a similar question. Say the homework is to complete a set of practice exercises started in class. Some students complete them all during class time.
Do people think those students who didn’t finish won’t benefit from completing them as homework? If so, was there any benefit for those who did finish them in class?
The no-homework crowd must be arguing that there was no benefit to completing the exercises and that the whole class should only do as much as can be done by all students.
It’s possible that the concern is not any particular item of homework but badly coordinated assignment of it, where the least capable students end up with hours of work to finish from several independent classes.
But that points to the real issue: there are good levels of homework and bad, and these will differ between individuals.