Have more rigorous exams caused a decline in life satisfaction in the United Kingdom?
Unpacking George Monbiot’s claims
In England, students sit national standardised examinations in a range of subjects at the ages of 16 and 18. This is unusual in the anglosphere. In Australia, exams at 16 were phased out years ago and in the U.S., even the exams available at 18, such as the SAT or those for Advanced Placement courses, are optional for high school graduation.
Moreover, in a reverse of the usual process, in 2015, exams in England were made more important and a little more rigorous. Previously, many of the examined courses had a substantial coursework element — work assessed by the teacher that contributed toward the final grade. This has been cut back in favour of the final mark resting on the exam. The English exam system has also put in place a set of safeguards against grade inflation.
George Monbiot, a columnist at The Guardian, occasionally turns his attention to education. Wittingly or otherwise, his educational views conform to orthodox educational progressivism and it is through this lens that he has recently tackled the subject of England’s exam system.
First, let’s examine a causal relationship that Monbiot suggests in the article:
“[The exam] reforms, imposed on schools by Michael Gove against expert advice, may have contributed to the OECD’s shocking finding in 2019 that, of the 72 nations in which the life satisfaction of 15-year-olds was assessed, the UK came 69th. Our children’s joy of living suffered the greatest decline of any country since 2015, the year in which the GCSE reforms became effective.”
I need to highlight that England and the United Kingdom — which also contains Wales, Northern Ireland and Scotland — are not the same thing. Moreover, each of these countries has its own education system and only England’s underwent the 2015 exam reforms. However, the population of England is large compared to these other nations and so it is valid for Monbiot to look to scores for the United Kingdom as a proxy for England when making his argument.
The data Monbiot is referring to can be found here. During the 2018 round of PISA — the international assessment of 15-year-olds run by the OECD — students were asked a range of survey questions. One of these was about ‘life satisfaction’. Oddly, I cannot quite replicate Monbiot’s ranking of 69 out of 72.
PISA asked students to rate their ‘life satisfaction’ on a scale of one to ten. PISA then calculated the percentage of students who rated themselves from seven to ten on this scale — it considers these students to be satisfied with their lives, even though it labels scores of five and six as ‘somewhat satisfied’. Apparently, somewhat = not quite, which is confusing.
When we compare percentages across the countries that participated, the United Kingdom comes 64th out of 69, which is the closest I can get to Monbiot’s figure and which we can all agree is pretty bad.
However, it is also worth looking at the averages on the one-to-ten scale. The mean for the United Kingdom is 6.16. When I average these means across OECD countries, I obtain 7.03. So, that’s still pretty bad in comparison, but it gives some perspective on the size of the difference. The OECD average just creeps into their definition of ‘moderately satisfied’ whereas the average for the United Kingdom is in the ‘somewhat satisfied’ category which, as we have already seen, is interpreted as not quite satisfied.
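For readers who want to see why the percentage-based ranking and the mean-based comparison can tell different stories, here is a minimal sketch. The cohorts below are invented for illustration — they are not actual PISA responses — but the two summary statistics are computed the way PISA’s are: a mean rating, and the percentage of students rating themselves seven or above.

```python
# Sketch of PISA's two summary statistics, computed from the same
# one-to-ten life-satisfaction ratings. Data below is hypothetical.

def summarise(ratings):
    """Return (mean rating, % of students rating 7-10, i.e. 'satisfied')."""
    mean = sum(ratings) / len(ratings)
    pct_satisfied = 100 * sum(1 for r in ratings if r >= 7) / len(ratings)
    return round(mean, 2), round(pct_satisfied, 1)

# Two invented cohorts with identical means can still differ sharply on
# the percentage measure, because that measure only counts how many
# students clear the threshold of seven.
cohort_a = [5, 6, 6, 7, 7, 8, 9]   # clustered just around the threshold
cohort_b = [2, 3, 7, 8, 9, 9, 10]  # more spread out

print(summarise(cohort_a))  # same mean as cohort_b, lower % satisfied
print(summarise(cohort_b))
```

Both invented cohorts average 6.86, yet one has 57% of students counted as ‘satisfied’ and the other 71% — which is one reason a country’s rank on the percentage table need not match its position on the mean.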
Are you still with me?
It is also worth noting a big difference in outcomes between girls and boys. According to the OECD:
“…in Korea, Poland, Slovenia, Sweden and the United Kingdom, girls were at least 15 percentage points less likely than boys to report that they are satisfied with their lives.”
In addition, the decline in ‘life satisfaction’ was not limited to the United Kingdom:
“On average across OECD countries, students’ average life satisfaction declined by 0.30 of a point between 2015 and 2018. The decline over this period was larger than 0.50 of a point on the life-satisfaction scale in several schools systems, including Brazil, Ireland, Japan, Macao (China), Qatar, the United Kingdom and the United States.”
How plausible is Monbiot’s claim that changes to exams in England have caused a decline in life satisfaction in the United Kingdom? To support his point, he quotes survey evidence to suggest that, in England, two-thirds of students ranked homework and exams as their greatest source of stress. However, we do not have this data for other countries, and it is hard to think of other common sources of stress for 15-year-olds beyond that generated by their peers.
And given that Brazil, Ireland, Japan, Macao, Qatar and the United States all saw similar declines but did not, to my knowledge, change their exam system in 2015, Monbiot’s claim seems implausible. Instead, some other factor — such as increased use of social media — could perhaps explain a pattern emerging across countries.
Should we look at this data and apply the precautionary principle? We may argue that although we cannot definitively pin the decline in PISA’s measure of life satisfaction on exams, the potential harm is so great that this is enough for us to abandon such a system.
The problem with this argument is that examinations play a valuable role in an equitable education system.
Monbiot doesn’t see this. He dislikes exams for the usual progressivist reasons:
“What exams measure is aptitude in exams. While they might rank certain skills, such as the retention of facts and the performance of linear tasks under pressure, these represent just a small part of the equipment a person needs to navigate the world… Exams distort every aspect of education. It’s not just a matter of “teaching to the test” and drilling pupils in rote learning rather than encouraging deep understanding, independence and creative thought.”
We can largely dismiss this as the rhetoric of someone who draws false distinctions between knowledge and understanding. However, Monbiot also argues against exams on the grounds of equity:
“So what are exams for? Preserving privilege. Privilege loves competition, because it can always be rigged. Private schools and parents who pay for tuition can afford to drum the necessary requirements into a child, even one whose mind seeks to travel in other directions.”
Monbiot has a point, but this point dissolves when we look at the alternatives. Yes, wealthy parents can perhaps pay for better teaching — although we are often told that students in independent schools make the same progress or even less progress than those in government schools, when contextual factors are accounted for. And undoubtedly, wealthy parents have deeper pockets to pay for tutors when things go wrong.
However, the fact remains that in an exam-based system, students still have to go into an exam hall and perform on their own, with none of those supports to draw on in that moment.
Imagine a system of portfolio assessment instead. Wealthy students could employ tutors to help them with every aspect of the construction of the portfolio. The same goes for projects — a student whose uncle is a chemistry professor will have an advantage in writing up their chemistry project. Perhaps unsurprisingly, in the U.S., where college entry is not determined on the basis of exams alone but where prewritten essays are taken into account, the essay scores correlate more closely with household incomes than scores on the SAT exam. The U.S. also factors in noncognitive qualities and experiences, such as community service. Who has better access to community service, the wealthy kid with connections or the poor kid working evenings and weekends to try to make ends meet?
Even if we simply swung back to a system of teacher assessment over exams, we would be likely to reduce equity. This is because teacher assessment is subject to subconscious bias against students with special educational needs, those with challenging behaviour, poorer students, those who do not speak English at home, those whose personality traits differ from the teacher’s, and a range of other groups. So a move to teacher assessment not only increases teacher workload, it introduces multiple sources of bias.
Those who seek to abolish exams should avoid well-worn progressivist clichés about ‘drilling’ and ‘rote learning’. They should avoid probably spurious correlations with life satisfaction. Instead, they need to detail their alternative system so that we can analyse whether it would make the situation better or, as I suspect, far worse.