The Times Educational Supplement (TES) has run a breathless report on a new meta-analysis which apparently shows that, for children aged 0-8, guided play is as effective as, or more effective than, direct instruction in achieving key outcomes, especially in mathematics.
The report quotes two of the authors of this paper, which is to be published in the journal Child Development. One of these authors, Paul Ramchandani, professor of play in education, development and learning at the University of Cambridge, apparently told the TES that, ‘the argument is often made that play adds little to education.’ Which struck me as odd. Who is saying this?
Anyway, the paper takes the form of a meta-analysis. In other words, rather than conducting a new study, the researchers reanalysed and then mushed together the results of lots of previous studies.
I am not a huge fan of meta-analysis. Although it is important to review past literature, authors of meta-analyses often try to fit the conditions in old studies into categories that the original authors never intended and then compute an average effect size. A lot of this is hocus pocus because the studies in question rarely measure similar things in a similar way. I could rant at length about this, but that’s for a different post.
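To make the pooling step concrete, here is a minimal sketch of the inverse-variance averaging at the heart of most meta-analyses. The studies and numbers are invented for illustration; the point is that once each study has been reduced to a standardised effect size and a standard error, the arithmetic will cheerfully average them whether or not they measured the same thing.

```python
import numpy as np

# Invented effect sizes (Cohen's d) and standard errors from three
# hypothetical studies that may have measured quite different things.
d = np.array([0.40, 0.15, 0.80])   # standardised mean differences
se = np.array([0.20, 0.10, 0.35])  # their standard errors

# Fixed-effect (inverse-variance) pooling: each study is weighted by
# the precision of its estimate, regardless of what it actually measured.
w = 1.0 / se**2
pooled = np.sum(w * d) / np.sum(w)
pooled_se = np.sqrt(1.0 / np.sum(w))

print(f"Pooled effect: {pooled:.2f} (95% CI ±{1.96 * pooled_se:.2f})")
```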
Now, having stated my dislike for meta-analyses, I must commend the authors for doing a pretty good job. They have not tried to work out a single overall effect size but have instead calculated effect sizes only for similar, relatively well-defined outcomes such as ‘expressive vocabulary’. They also go to great efforts to evaluate the risk of bias in the underlying studies.
However, they have not avoided all the pitfalls of meta-analysis. One of these is that the condition we are interested in - in this case, ‘guided play’ - tends to be compared with different things in different studies. The authors have therefore devised two broad categories to group the comparisons into: ‘free play’ and ‘direct instruction’. Of these, direct instruction is the comparison condition in the majority of studies. Here’s how the authors define it (a definition that sits, oddly, in an appendix):
“Direct instruction – comparison group involves the adult/teacher using instructive methods to direct learning/play. This is primarily traditional teaching methods of the adult talking and the child listening, or the adult telling the child exactly what to do during a task. Business as usual control groups also fall into this category.”
So this is not the direct instruction associated with the teacher effectiveness research of the 1950s-1970s, the approach summarised in Rosenshine’s Principles that many teachers are working to deploy in classrooms. For a start, it’s not interactive. Instead, it matches the first of Rosenshine’s five meanings of direct instruction, i.e. any form of instruction led by the teacher. Except that it may not even match that, because it also includes ‘business as usual’, which in many early years settings is likely to be student-led.
If you look at some of the underlying studies, such as this one on the use of blocks, it’s clear the original researchers were not really interested in the question of guided play versus direct instruction.
In terms of outcomes, it is apparent why the TES had to qualify the findings in quite a clunky way. The researchers tested guided play against ‘direct instruction’ for 11 different cognitive and non-cognitive outcomes. They found a significant result favouring guided play for just three of these: early maths skills, shape knowledge and task switching.
This seems strange, given that in most of the studies the researchers probably did not invest much effort in designing the control, or ‘direct instruction’, condition. Add to this the fact that the risk of bias in the source studies was high:
“Almost all studies… were deemed as having a “high” level of risk, with only one rated as having an “unclear” level… A funnel plot indicated there is some evidence of asymmetry, indicating there is a risk of publication bias.”
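For readers unfamiliar with funnel plots, the sketch below simulates the idea; the data are made up and have nothing to do with the paper. Each study is plotted as its effect size against its standard error. In the absence of bias, small, noisy studies should scatter symmetrically around the pooled effect; if small studies with unimpressive results tend to go unpublished, one side of the funnel empties out and the plot becomes asymmetric.

```python
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(0)

# Simulate 40 studies around a true effect of 0.2, then drop small
# studies with unimpressive results to mimic publication bias.
true_effect = 0.2
se = rng.uniform(0.05, 0.40, size=40)     # study precision varies
d = rng.normal(true_effect, se)           # observed effect sizes
published = (d / se > 1.0) | (se < 0.15)  # crude selection rule

plt.scatter(d[published], se[published])
plt.axvline(true_effect, linestyle="--")
plt.gca().invert_yaxis()                  # precise studies at the top
plt.xlabel("Effect size (d)")
plt.ylabel("Standard error")
plt.title("Simulated funnel plot with publication bias")
plt.show()
```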
In this context, I am surprised they found so few results favouring guided play. I’m guessing this is because guided play has some negative effects that are dragging the results down and preventing a more positive picture. But that’s a guess.
So it really is a mystery why such a relatively well-conducted, appropriately caveated meta-analysis would be presented as evidence in support of guided play. It’s not evidence in support of guided play, so why suggest it is?
Another, completely unrelated, mystery is why the TES did not mention that the research was sponsored by the LEGO Foundation.