Building Thinking Classrooms is an approach to teaching mathematics that I understand has become very popular in North America. It was invented by Peter Liljedahl, professor of mathematics education at Simon Fraser University in Vancouver. It has a gimmick — students work in groups to solve problems on vertical surfaces, i.e., whiteboards or walls painted with whiteboard paint. And it has its own bending of reality. Anything that may take place in a normal mathematics classroom is defined by Liljedahl as not ‘thinking.’ Ergo, if you want students thinking, you must follow Liljedahl’s methods.
One of the activities Liljedahl defines as not ‘thinking’ is ‘mimicking’. Essentially, any form of instruction where a teacher demonstrates a procedure or explains a concept before students use those procedures and concepts is a form of mimicking and not thinking. This rules out explicit teaching, or at least the definition I use of explicit teaching. Given that explicit teaching is backed by lots of evidence, we should expect Liljedahl to have even more evidence to support his approach. Instead, the evidence is pretty weak, so that should probably sink it. Case closed.
Or maybe not.
Jason To, a high school mathematics teacher, has written a thoughtful blog post arguing that Building Thinking Classrooms is consistent with evidence on explicit teaching, as summarised in Rosenshine’s Principles of Instruction, and cognitive load theory, the combination of which he refers to as ‘cognitive science’. In short, it is not. However, To’s argument does provide an interesting case study.
Firstly, deciding you like Building Thinking Classrooms and then attempting to find evidence to support it is a flawed approach. It is a recipe for confirmation bias and the opposite of the scientific method, a key feature of which is to generate falsifiable hypotheses. In other words, a hypothesis is not scientific unless some conceivable evidence could prove it wrong. Scientists then go and actively seek such evidence, and if the hypothesis withstands it, we have a theory.
So there is a structural problem with To’s argument. There are also specific ones.
Thin slicing
To argues that an approach labelled by Liljedahl as ‘thin slicing’ is consistent with Rosenshine’s suggestion to present new material in small steps and with the limited capacity of working memory, a key feature of cognitive load theory. He presents a set of problems he and a colleague have devised.
In groups, students first solve three problems of the form:
5 MOOSE + 4 SHEEP + 3 MOOSE – 2 SHEEP
Then, they move on to:
8M + 5S – 2M + 3S
Eventually, they tackle problems like:
6x²y + 2xy² – 3x²y + 6xy²
This graduated approach is likely to be superior to pure discovery learning, although I have no idea how the concept of collecting like terms could be taught through ‘pure’ discovery learning. This is one of the reasons ‘pure’ discovery learning is a bit of a red herring used to excuse a lot of implicit teaching methods that still have some, often limited, structure.
Although the sequence makes sense to mathematics teachers who already know about collecting like terms, there will be many students who are confused by it. The central problem of algebra is its abstraction and this does not help. The students it will most disadvantage are likely to be the already disadvantaged. Students will wonder why we have gone from ‘SHEEP’ to ‘x²y’ and what the connection is*. Sure, the fact they are in groups may mean one of their friends can explain what’s going on, but there is one person in the room whose job it is to ensure all students understand. What is the value in the teacher not explaining it?
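For illustration only — this is my sketch of the kind of explicit explanation a teacher might give, not an example drawn from To or Liljedahl — the connection between the problems is that like terms (terms with identical letter parts) can be combined by adding or subtracting their coefficients, whatever the letters happen to stand for:

```latex
\begin{align*}
5\,\text{MOOSE} + 4\,\text{SHEEP} + 3\,\text{MOOSE} - 2\,\text{SHEEP}
  &= (5+3)\,\text{MOOSE} + (4-2)\,\text{SHEEP} = 8\,\text{MOOSE} + 2\,\text{SHEEP} \\
8M + 5S - 2M + 3S &= (8-2)M + (5+3)S = 6M + 8S \\
6x^2y + 2xy^2 - 3x^2y + 6xy^2 &= (6-3)x^2y + (2+6)xy^2 = 3x^2y + 8xy^2
\end{align*}
```

Making the common structure explicit in this way is precisely the step that students are otherwise left to infer for themselves.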
To’s answer seems to be that Richard Mayer, in his 2004 paper on ‘pure discovery’, points to the benefits of ‘guided discovery.’ Mayer is mainly referring to the benefits of guided discovery over pure discovery. He does mention ‘expository methods’ and an experiment where guided discovery was superior to this approach, but it was an unusual task quite unlike learning algebra and low in ‘element interactivity’, i.e., lacking in intrinsic complexity.
Which brings me to my next point. To cites the paper I published as part of my PhD:
“There is additional evidence in the cognitive science literature suggesting that problem solving followed by explicit instruction can be effective for tasks with low complexity or low element interactivity.”
It is worth pointing out that my paper reports on an experiment with high element interactivity tasks, similar to learning algebra, in which explicit teaching followed by problem solving was superior. The review of evidence about low element interactivity tasks was something of an aside. So, why mention the aside and not the main finding of the paper?
Collaborative learning
There is not much in Rosenshine or cognitive load theory that involves collaborative learning, so To homes in on the one corner that does address this — a series of studies by Femke Kirschner.
I have sat with Paul Kirschner, who was involved with these studies, while he explained to me just how difficult it was to design suitable learning tasks for this research. The idea is that individual working memory is limited, but if we can pool it between a group of individuals, we can solve problems that we would not be able to solve alone. So, the working memory load of the task has to be finely tuned. There also needs to be a mechanism for dividing the working memory resources between the group members so that different members have different elements to hold in working memory. Femke Kirschner alighted on a set of genetics problems that could be approached in this way. The method is explained in Kirschner et al. (2011):
“It should be noted that the learning conditions in this study were designed to optimise collaboration. This creates tension between ecological validity and experimental validity. The learning environment differed from settings encountered in ‘real’ education in that all collaborating participants received only part of the unique information elements and, consequently, were required to exchange information to solve the problems or study the worked examples. Also, participants were not allowed to offload their WMs by using pencil and/or pen and paper while learning, which also might have stimulated them to collaborate. Finally, the learning setting was highly structured and scripted causing a minimal cognitive investment with respect to transactional activities. In this sense, it is not clear to what extent the results obtained in this study can be generalised to real classroom settings.”
As far as I am aware, Building Thinking Classrooms does not divide elements between group members and the collaborative learning sessions are not scripted. Given that group members are supposed to write on the walls, the injunction against writing while learning is clearly not in place. It is therefore not possible to justify the use of Kirschner’s evidence to support the Building Thinking Classrooms approach.
Worked examples
The final strand of To’s argument is perhaps the weakest of all. He draws on the worked example effect from cognitive load theory to support ‘active note-making’:
“Liljedahl argues that traditional note-taking, where students simply copy down notes written by the teacher, is a cognitively passive activity that hinders learning. Students can usually either listen to what the teacher is saying or they can focus on copying down what the teacher is writing; however, doing both is extremely difficult. Liljedahl advocates for a shift from passive note-taking to active note-making.”
The worked example effect comes from students studying a worked example that an instructor has given them. Yes, this is often enhanced by steps to ensure students process the worked example, such as completing a similar problem or filling in a missing step. However, it does not derive from students generating their own worked examples.
If we were to try hard to retain an open mind, we could perhaps wonder if there is evidence to support Liljedahl’s note-making. However, whatever that evidence is, it is not the worked example effect which seems to support quite the opposite of what Liljedahl is advocating.
A failure of epistemology
In each of his arguments, we can clearly see where To’s approach to truth-seeking or, if we want to be academic about it, his ‘epistemology’ lets him down.
To has a model, Building Thinking Classrooms, he wants to support. He then goes hunting through the literature on Rosenshine and cognitive load theory to find nuggets that he can bash into shape to try to shore it up. This is why he comments on an aside in my PhD paper and not the main finding. This is why he scours cognitive load theory until he finds something that looks like collaborative learning. This is why he ends up flipping the worked example effect on its head.
However, the fundamentals of Rosenshine and the fundamentals of cognitive load theory both support explicit teaching — fully explaining concepts and modelling procedures before asking students to use those concepts or solve problems involving those procedures.
This is what Liljedahl disdains as ‘mimicking’ and this is why Building Thinking Classrooms is still nonsense.
*As an aside, To’s method generates another potential problem. Pronumerals are usually defined as letters that represent numbers but in To’s initial example, the letter stands for ‘moose’ or ‘sheep’. You and I know this can be resolved because ‘sheep’ means ‘number of sheep,’ but if none of this is explicit, students need to make these conceptual leaps themselves.
Anyone proposing discovery learning in groups has to explain how they prevent a situation where only one member of the group does the discovering and the rest simply learn from that person’s exposition.
They might claim everyone spent some time attempting to discover the thing to be learned, but they have no way to establish this; it is just what they hope happened.
Hey Greg,
The second I heard the phrase "Vertical Non-Permanent Surfaces," my inner skeptic was triggered. I read the book and visited some classrooms, and I completely agree with you. Daniel Buck wrote an article for the Fordham Institute that summarizes it nicely and reminds readers of the embarrassing fact in math education, which is that we've known what works for decades. We just don't do it:
"The What Works Clearing House, a research consortium housed within the federal Institute of Education Sciences, published a 2021 practice guide on the most effective, research-backed methods for math instruction. It includes systematic instruction, representation and models, and timed activities, all of which Liljedhal denigrates for no other reason than they don’t constitute “thinking”—which by the end of the book, in a textbook example of circular logic, he seems to only define as activities that he already approves of."
Fads and myths have been plaguing education for too long. Keep up the good fight.