Just like Singapore, right?

Should we base our mathematics curriculum on problem solving because of Singapore?


Responding to criticism of the new draft Australian mathematics curriculum, David de Carvalho of ACARA, the organisation in charge, suggested in comments to Rebecca Urban of The Australian that the focus on ‘problem solving’ had been inspired by Singapore:

“…Mr de Carvalho said problem solving was at the core of the curriculum in Singapore, whose students consistently topped the global education rankings, including PISA. Australia has fallen in the PISA rankings — in maths, science and reading.

‘What we’re talking about is taking mathematics concepts, teaching students to really understand them and grasp what is underlying those concepts, and them being able to recognise when (they can be) applied in solving real world problems,’ Mr de Carvalho said.”

Marty over at the wonderful Bad Mathematics blog has already pointed out one flaw in such a comparison. For instance, while Australian students are learning to:

“…count to and from 1000 and recognise increasing and decreasing number sequences [and] perform simple addition and subtraction calculations using a range of strategies and represent multiplication and division by grouping into sets.”

Singaporean students can:

“…work with numbers to 10 000, including increasing and decreasing number sequences. They add and subtract four-digit numbers and know their multiplication and division facts for 6, 7, 8 and 9.”

It’s worth reading the whole of Marty’s analysis because there is more where that came from.

So, we can reasonably ask the question: How can we be sure that Singapore’s higher performance than Australia on PISA mathematics is caused by a focus on problem-solving rather than, say, Singaporean students knowing considerably more maths?

And in fact, this is just one of the many problems you encounter if you take the simplistic approach of looking at education systems that perform better than Australia and then seeking to copy some elements of what they do. Let’s list some of the others.

Firstly, many things vary between a federation such as Australia and a city-state like Singapore. One is size. Another is culture, and particularly the value placed on maths education. A third is relative wealth - a quick Google shows that GDP per capita in Singapore was more than $10,000 higher than Australia's in 2019, which works out to be a difference of nearly 20%. All of these factors could plausibly affect education outcomes. So, the idea that we can draw a straight arrow of cause-and-effect from Singapore’s supposed focus on mathematical problem solving to its position above us in a league table is deeply flawed.
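For what it’s worth, the arithmetic behind that ‘nearly 20%’ is simple enough to check on the back of an envelope. The figures below are my own rough approximations of the 2019 numbers from a quick search, not authoritative statistics:

```python
# Back-of-envelope check of the 'nearly 20%' claim. The figures below are
# rough 2019 GDP per capita values in US dollars (my own approximations from
# a quick search, not authoritative statistics).
singapore = 65_600   # assumed approximate figure
australia = 54_900   # assumed approximate figure

gap = singapore - australia          # a bit over $10,000
relative_gap = gap / australia       # about 0.19, i.e. nearly 20%

print(f"Gap: ${gap:,}   Relative to Australia: {relative_gap:.0%}")
```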

The second issue is a more subtle fallacy common to business and leadership books. Suppose we selected five highly successful businesses and found that four of them had a whole-staff meeting every Monday. We might suggest other businesses copy this model. However, it may be the case that less successful businesses also have a whole-staff meeting every Monday. Unless we sample across the entire range of business success, we cannot figure out which factors are actually shared by only the more successful ones. Technically, looking at a sample that consists of only those who meet the criteria you are interested in is known as ‘sampling on the dependent variable’ and it’s considered a bit of a faux pas by serious statisticians.
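To see why this matters, here is a minimal simulation of the fallacy, using made-up numbers rather than real businesses: every firm gets an 80% chance of holding a Monday meeting, entirely independent of its success, and we then compare what a full sample shows with what a ‘successful firms only’ sample would suggest.

```python
# A minimal simulation of sampling on the dependent variable (made-up numbers):
# the Monday-meeting habit is assigned independently of success, so it cannot
# explain anything - yet a winners-only sample makes it look like a 'secret'.
import random

random.seed(0)

firms = [
    {"monday_meeting": random.random() < 0.8, "successful": random.random() < 0.1}
    for _ in range(10_000)
]

def meeting_rate(group):
    return sum(f["monday_meeting"] for f in group) / len(group)

winners = [f for f in firms if f["successful"]]
others = [f for f in firms if not f["successful"]]

# Both rates land near 80%, so the habit predicts nothing - but a study that
# sampled only the winners would 'discover' that most of them hold the meeting.
print(f"Successful firms with Monday meetings:   {meeting_rate(winners):.0%}")
print(f"Unsuccessful firms with Monday meetings: {meeting_rate(others):.0%}")
```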

If we therefore seek to extend this analysis to education systems less successful than Singapore, it is not hard to find examples that have also introduced more of a focus on mathematical problem solving, such as New Zealand and Scotland. In fact, New Zealand, Scotland and Australia have followed similar pathways of decline in PISA mathematics.

Some people don’t care much for PISA data at all, either because they dislike the assessments - which, in maths, are quite heavy on literacy and supposedly real-world problems - or because they rightly conclude that correlational evidence of this kind can only ever give an indication of what might be happening. However, I think there are two ways of using PISA data that are more valid than just looking at positions on league tables.

Firstly, comparing a single state at different points in time is more of a like-for-like comparison. Yes, states do change in nature over time, but the difference between, say, New Zealand in 2009 and New Zealand in 2018 is likely to be far less than the difference between Australia and Singapore now. Given New Zealand’s precipitous drop in maths between 2009 and 2018, it may be possible to identify changes to the approach to teaching maths that correlate with this slide. Such a change could conceivably involve the approach to problem solving. I don’t actually know.

A step-up from this kind of analysis is to look at trends within education systems. Students are surveyed by the PISA team as to the kinds of teaching practices they experience. You can then compare students in a particular state who get more of a particular teaching practice with students given less of it. A number of such studies have been done.
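To give a sense of the general shape of such an analysis - not the actual methodology of any particular study - here is a simplified sketch. It assumes a tidy student-level table with hypothetical column names (‘country’, ‘practice_index’, ‘maths_score’); a serious analysis of the real PISA dataset would also have to handle sampling weights and plausible values, which I ignore here.

```python
# Simplified sketch of a within-system comparison: correlate a reported
# teaching-practice index with maths scores separately inside each system.
# Column names are hypothetical; real PISA analyses would use weights and
# plausible values, which this sketch deliberately ignores.
import pandas as pd

def within_country_correlations(students: pd.DataFrame) -> pd.Series:
    """Return one correlation per education system between the practice
    index students report and their maths scores."""
    return pd.Series({
        country: group["practice_index"].corr(group["maths_score"])
        for country, group in students.groupby("country")
    })

# Usage, once a real or simulated dataset is loaded into `students`:
# print(within_country_correlations(students).sort_values())
# A consistently negative value across systems is the kind of pattern
# reported for 'student-oriented instruction'.
```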

Although they don’t exactly advertise this fact in the abstract, by drawing on the PISA dataset, Caro et al. found that ‘student-oriented instruction’ correlated negatively with PISA maths scores in each of the 62 states they studied, i.e. the students within, say, Romania who reported more student-oriented instruction did worse than the students in Romania who did not, and this pattern was repeated across all 62 states. Student-oriented instruction involved:

  • The teacher gives different work to classmates who have difficulties learning and/or to those who can advance faster.

  • The teacher assigns projects that require at least one week to complete.

  • The teacher has us work in small groups to come up with joint solutions to a problem or task.

  • The teacher asks us to help plan classroom activities or topics.

This is not a perfect overlap with a focus on problem-solving, but it’s enough to give us pause for thought.

In 2015, PISA did its own analysis of the relationship between inquiry-based learning in science and science PISA scores. The more inquiry learning students reported, the worse their scores. This fairly crude result has now been replicated by researchers using a more rigorous form of analysis (here and here).

Even so, it is important to realise that such a finding is not unique and triangulates with what we already know from a much larger body of research. Inquiry learning, problem-based learning and all the other variants of guide-on-the-side teaching methods have been with us for over 100 years and there is no significant body of evidence pointing to their effectiveness, as this classic paper explains. We’ve known since the 1980s that novices learn more from studying worked examples than from solving equivalent problems. Yes, we gain from practice at solving problems once we have been taught the necessary methods and have internalised them. However, this is due to the effect of practice and not because experts gain some nebulous problem solving skill or other. If you focus on problem-solving, you treat novices like experts and are likely to increase the gaps between the haves and the have-nots.

In turn, experimental studies such as those that established the value of worked examples triangulate with a large body of observational evidence from the 1950s-1970s that compared the practices of more effective and less effective teachers (No, they didn’t just identify the more effective ones and ask them what they did). Brophy and Good, for example, summarise one of the key findings:

“Students achieve more in classes where they spend most of their time being taught or supervised by their teachers rather than working on their own (or not working at all). These classes include frequent lessons (whole class or small group, depending on grade level and subject matter) in which the teacher presents information and develops concepts through lecture and demonstration, elaborates this information in the feedback given following responses to recitation or discussion questions, prepares the students for follow up seatwork activities by giving instructions and going through practice examples, monitors progress on assignments after releasing the students to work independently, and follows up with appropriate feedback and reteaching when necessary.”

I did not learn this when I trained to be a teacher and, by all accounts, this evidence is still largely left out of teacher education courses. Findings such as these, though triangulated and replicated across different forms of research, are largely ignored by bureaucrats and educationalists as they devote their time to thinking up new names for practices that have been discredited under their old names (‘mathematising’, anyone?). Designers of the draft Australian maths curriculum have linked one part of the way the curriculum is described in Singapore, a part that perhaps aligns with their preconceived assumptions about maths teaching, to Singapore’s position in the PISA league table while ignoring all other factors, both within the curriculum and elsewhere, that may actually be more important. And that’s all the evidence we apparently need.