Dueling Papers

When considering the evidence becomes a little boring

After I entered the discussion about an excellent article on maths teaching by Barry Garelick, a group of mainly North American maths education experts have been tweeting about me without tagging me in - a practice known as ‘subtweeting’. You can catch the first instalment here. Since then, the most deeply weird contribution has been from Mike Steele who, in something approaching a cross between a Twitter spat and an Escher painting, went trawling through my website bio looking for something to use to personally attack me and found this:

I seem to have them really rattled.

This morning (Australian time), Dan Meyer entered the subtweeting fray. Fortunately, Meyer managed to make something approaching an interesting point.

Is Dan Meyer’s point a good one? Just as Kellyanne Conway once countered a news anchor’s facts with ‘alternative facts’, is it the case that different education papers demonstrate opposite things, that none is definitive and that we can simply choose the ones that suit us? Is debating them futile and, well, boring?

I’m not buying it, no.

For a start, I don’t think all parties already know about all of the relevant papers, as Meyer implies. I could be wrong, but in the exchange that Meyer posted as a screenshot, I am pretty sure the person I was talking to was not familiar with the paper I shared. That happens a lot.

Secondly, if all opinions are equally valid and can be supported by their own set of research papers, on exactly what basis are maths education experts deciding that trainee teachers, qualified teachers, textbook publishers and so on focus on inquiry-based learning and all that kind of stuff?

For his part, Meyer is probably most famous for his 2010 TED talk, “Math class needs a makeover”, in which he argued for a form of problem-based learning where structure and guidance are removed from the questions students are set. Why do this? On exactly what basis should we follow this advice? If all research is so unsure and every card played may be countered, why does Meyer think his recommendations are worth anything?

Instead, Meyer’s argument has the convenient quality of allowing us to dismiss evidence that threatens positions we have already committed to.

In his subtweet, Meyer mentions two papers, “Sweller et al.” and “SCHWARTZ”.

It’s worth examining Meyer’s point with these papers in mind. So let’s play all those cards and see what it looks like.

“Sweller et al.” probably refers to the 2006 paper, Why Minimal Guidance During Instruction Does Not Work: An Analysis of the Failure of Constructivist, Discovery, Problem-Based, Experiential, and Inquiry-Based Teaching by Paul Kirschner, John Sweller and Richard Clark (so Meyer should really have called it “Kirschner et al.”).

When this paper was published, it caused a stir. Soon, three responses were printed in the same journal. Kuhn (2007) argued for a shift in focus from teaching knowledge to cultivating generic inquiry skills, suggesting that explicit teaching has only been shown to be effective for the former. In contrast, both Hmelo-Silver, Duncan and Chinn (2007) and Schmidt, Loyens, van Gog and Paas (2007) accepted the basic premise of Kirschner et al. but argued that their favoured approaches to problem-based learning contain a great deal of guidance (unlike, presumably, the maths problems in Meyer’s TED talk).

Sweller, Kirschner and Clark (2007) then replied to these critiques. In response to Kuhn, they noted the lack of evidence for the existence of generic inquiry skills. In response to the other two papers, they asked: if the provision of guidance improves problem-based learning, why not use a fully guided approach?

Subsequent to this series of papers, a debate was organised at the 2007 AERA conference. As a result of this debate, both sides agreed to write chapters outlining their positions in a 2009 book, Constructivist Instruction: Success or Failure? This is a particularly interesting book because every chapter concludes with a series of questions from those on the opposite side of the debate, along with replies from the chapter authors.

When Meyer mentions “SCHWARTZ” in his tweet, I think he is referring to a paper by Schwartz and Martin from 2004 that purports to show that asking students to invent solutions to problems prepares them better for future learning than ‘tell-and-practice’. Interestingly, this arises as a discussion point in Constructivist Instruction: Success or Failure? In a question, Sweller points out that Schwartz and Martin’s experiment varied more than one factor at a time, and asks, “Does this procedure not break the ‘vary only one thing at a time’ rule essential to all randomized, controlled experiments, resulting in it being impossible to determine exactly what caused the effects?”

Not all experiments - not all forms of research - are created equal. Sometimes, studies are flawed. One seriously flawed study does not negate a study conducted on a more rigorous basis.

In the final pages of Constructivist Instruction: Success or Failure?, Sigmund Tobias, one of the neutral editors brought in to referee the debate, sums up his final thoughts.

Despite witnessing all the cards being played, Tobias is capable of spotting the patterns and themes and of reaching a conclusion: constructivist approaches (such as inquiry learning, although we can debate these labels indefinitely) lack evidence.

I would argue that, for the field of educational psychology, this book was basically the end of the story. More than ten years on, nobody is seriously advocating for inquiry learning in the way they did in the 2000s. Instead, the debate centres on the sorts of issues I have dealt with in my PhD research, such as whether a brief period of open-ended problem solving followed by explicit teaching is beneficial (my study suggests it is not, but research is ongoing).

It is only in the field of mathematics education, as distinct from educational psychology, that anyone still seriously believes in the value of inquiry learning. Perhaps that’s because those who are active in this field tire of having to properly consider the evidence.