The knowledge-building myth?
In a recent article, Timothy Shanahan claims that there are no studies showing that increasing students’ knowledge of the world reaps general reading comprehension benefits.
Timothy Shanahan was a member of the National Reading Panel that released its influential report summarising the evidence on the effectiveness of different teaching methods back in 2000. As such, he is something of a wise oracle of reading instruction, and he now writes a blog in which he takes questions from teachers and attempts to answer them based upon research evidence.
In his most recent post, a teacher expresses concern about the idea of improving reading comprehension by teaching children knowledge of the world and refers to a study that attempted to do this. The reference is to the peer-reviewed version of the findings of a UK Education Endowment Foundation (EEF) report into an intervention known as ‘Word and World Reading’, which did, indeed, find no discernible effect.
I have written about this study before. It does not exactly represent a gold standard of educational trials and is illustrative of the failures of the EEF more generally. Randomisation was at the school level and there were just 17 schools in the study (N=17), nine of which took part in the intervention with the rest serving as controls. In addition, many of the teachers who delivered the intervention appeared to lack relevant knowledge themselves which, given this was a primary school curriculum, seems both worrying and potentially easy to fix. Moreover, the intervention was implemented in strikingly different ways in different schools.
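To see why 17 clusters is such a weak basis for inference, consider the standard ‘design effect’ adjustment for cluster-randomised trials. The cluster count (17) is from the study; the pupils-per-school figure and the intraclass correlation (ICC) below are assumed values purely for illustration:

```python
# Illustrative sketch: why randomising at the school level with few schools
# gives far less statistical power than the raw pupil count suggests.
# Only the number of schools (17) comes from the EEF trial; pupils-per-school
# and the ICC are assumptions chosen for illustration.

def effective_sample_size(n_clusters, pupils_per_cluster, icc):
    """Effective N after adjusting for clustering.

    design effect = 1 + (m - 1) * ICC, where m is cluster size;
    effective N = total pupils / design effect.
    """
    total = n_clusters * pupils_per_cluster
    design_effect = 1 + (pupils_per_cluster - 1) * icc
    return total / design_effect

n_schools = 17   # from the trial
pupils = 50      # assumed average pupils per school
icc = 0.2        # assumed ICC, in the range often reported for attainment

print(round(effective_sample_size(n_schools, pupils, icc), 1))
```

Under these (assumed) figures, 850 pupils shrink to an effective sample of roughly 79 — which is why trials randomised over a handful of schools so often report no discernible effect regardless of whether one exists.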
However, the most interesting point is that the researchers refused to assess students on a bespoke assessment designed by the authors of the programme. It is considered good practice to prioritise standardised tests over bespoke assessments, but why not run both, just out of curiosity? The researchers ruled out the bespoke test because:
“…the test was deemed invalid by the evaluators as the content of the test covered materials that were taught explicitly to intervention children.”
Anyone familiar with the argument made by E. D. Hirsch Jr that knowledge improves reading comprehension will know that the idea is that knowledge of topic X improves comprehension of texts set in the context of topic X. Nobody has ever claimed, as far as I am aware, that knowledge of X improves reading comprehension of Y.
So, unless the standardised reading test was set in the context of topics similar to those taught in the Word and World Reading programme, the intervention could not plausibly have registered an effect on it, and nobody would claim otherwise. This is another great failing of the EEF - a deliberate lack of attention to the mechanisms by which their various packages of interventions are meant to work.
In such a short study - one year - it would therefore seem unlikely that we could ever generate evidence of general knowledge leading to a general improvement in reading comprehension on standardised tests. Does that prove the negative, i.e. that such a cause-and-effect relationship does not exist?
In his response, Shanahan makes a number of points I agree with, so we will start with those. I agree that there are no large-scale experimental educational studies that prove the knowledge-improves-reading-comprehension hypothesis. I also agree that there is substantial evidence that teaching reading comprehension strategies does improve reading comprehension. Dan Willingham, a well-known advocate for the knowledge hypothesis, has made this second point a number of times (e.g. here) and so have education luminaries such as Barak Rosenshine.
I also agree with Shanahan that teaching children knowledge of the world is a good thing and a worthy aim of schools regardless of whether it specifically impacts reading comprehension:
“Towards those ends we should provide students with abundant opportunities to learn science, social studies, literature, and other subjects in school, and we need to be energetic in our efforts to make sure students gain that knowledge in those classes.”
The problem is that this is not always the practice. I do not want to comment on the relative effectiveness of the No Child Left Behind programme in the US, but one frequent observation from teachers, from Hirsch and from Natalie Wexler, when she discussed her visits to schools on a recent Filling the Pail podcast, is that the imposition of high-stakes reading comprehension tests has led to a massive expansion in the teaching of reading comprehension strategies. Up to half of a school day may be occupied with strategy-focused guided reading, and it is the teaching of knowledge that gets squeezed.
I don’t know of any empirical evidence to establish exactly how much time schools spend on reading comprehension strategies, but I am familiar with a number of contexts and with the whole guided reading push. I am also familiar with the tendency to downplay the need for knowledge acquisition in the age of Google.
Shanahan makes his case with reference to an interesting analogy. If he goes to the doctor for general advice, he tends to be given advice based upon correlational evidence:
“Don’t drink too much… more than 1-2 drinks a day increases your risks. Being sedate, adds to that risk. Letting your weight climb above some point is associated with problems, too. And so on.”
However, if he has a specific problem, he will receive specific treatment based upon experimental evidence.
Again, in a sense, I agree. If a child has a specific reading comprehension issue, we should address it directly with code-based or strategy-based instruction. But most students are not in this position most of the time, and to this extent I think Shanahan downplays the value of correlational evidence.
For instance, I am aware of no experimental study where humans were randomised into two groups, one that smoked 20 cigarettes a day and one that did not smoke, in order to establish the effect of smoking. And yet we are pretty confident that smoking causes lung cancer and a range of other undesirable health effects.
Yesterday, I read a fascinating article by Christopher Snowdon about those who, in the current COVID crisis, position themselves as lockdown sceptics. Snowdon demonstrates that, in order to maintain such a sceptical position, it is necessary to ignore an overwhelming volume of correlational evidence. For example, the excess number of deaths in the UK compared with previous years is interpreted as deaths caused by the lockdown itself rather than by COVID - but how does that square with the few excess deaths in Ireland, which has been subjected to a harsher lockdown? And so on.
The fact is that some things are just not amenable to experimental tests. The effect of smoking on people is one. The effect of public health responses to a pandemic is another. In fact, entire fields such as astronomy are pretty much built on correlational evidence alone.
When we have strong correlational evidence, such as the epidemiological evidence presented by Hirsch; when that is supported by evidence from adjacent areas, such as the arguments from cognitive science ventured by Dan Willingham; and when even reading researchers accept the basic argument in their reasoning for ruling out a bespoke reading assessment in the Word and World Reading study, it is perhaps a little stubborn to insist on evidence from experiments that would be quite difficult to conduct.