I spent Saturday morning on the phone to Maureen Cobbett. She was angry, upset and baffled. Maureen is headteacher at The Latymer School, a state grammar school in north London, where one of my sons has just finished lower sixth.

She was angry and upset because the A Level grades her students had just been awarded were well below the average of grades achieved at Latymer over the previous three years. In some subjects, there had been a collapse of more than ten percentage points in the proportion of entries awarded As and A*s. She was baffled because she had no idea why. Her anger, upset and bafflement have clearly been shared by many teachers across the country.

I have spent a large part of the past 72 hours trying to understand what has happened, wading through the dense 300-plus-page technical document produced by Ofqual supposedly explaining what it did. I have been left equally baffled.

Ofqual, to be fair, was given an impossible task. You cannot fairly assign grades to a cohort of students who have not done the exams. Many were bound to be left angry and upset. They did not, however, have to be left baffled. Maureen was provided with no explanation for her school’s results — she was merely handed them in the same way that she would have been had the kids actually sat the A Levels.

This lack of transparency is unforgivable. The closed way in which the methodology was set is also a cause for concern. The Royal Statistical Society, worried that too many members of the technical advisory group used by Ofqual were present or former employees of government or the regulator, offered to nominate two of its distinguished fellows to join the group. Ofqual would agree to that only on the basis of a restrictive non-disclosure agreement, which the society felt it could not accept.

So what can we work out about what happened?

The method used to assign grades makes some sense. Schools were asked to rank their students in each subject. Then information on earlier grades within the schools, and on earlier attainment at GCSE, was used to assign grades to each student this year. The resulting distribution of grades looks comparable to the distribution in previous years. Indeed, there are rather more top grades than in the past.
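
To make the mechanics concrete, here is a minimal sketch of the general idea in Python. It is not Ofqual’s actual model: the names, the numbers and the direct use of a single historical distribution are purely illustrative.

```python
# Illustrative sketch only, not Ofqual's actual model: a school's historical
# grade distribution is applied directly down the teacher-supplied ranking.

def allocate_grades(ranked_students, historical_shares):
    """ranked_students: names in the school's rank order, best first.
    historical_shares: (grade, share of past entries) pairs, from the top
    grade down; shares should sum to 1."""
    n = len(ranked_students)
    awarded = {}
    position = 0
    cumulative = 0.0
    for grade, share in historical_shares:
        cumulative += share
        cutoff = round(cumulative * n)   # students at or above this grade so far
        for student in ranked_students[position:cutoff]:
            awarded[student] = grade
        position = cutoff
    return awarded


ranking = ["Student %d" % i for i in range(1, 11)]
past = [("A*", 0.2), ("A", 0.3), ("B", 0.3), ("C", 0.2)]
print(allocate_grades(ranking, past))
# 2 A*s, 3 As, 3 Bs and 2 Cs: the shape of past years, whatever this year's
# students would actually have achieved in the exam hall.
```

The point of the sketch is that, by construction, the output mirrors the shape of a school’s past results, which is one reason the national distribution looks so stable whatever this year’s cohort was actually like.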

There are two obvious problems with what Ofqual did. I suspect that there are more, but it will require many more hours of study to discover them.

First, and most obvious, the process adopted favours schools with small numbers of students sitting any individual A Level. That is, it favours private schools. If you have up to five students doing an A Level, you simply get the grades predicted by the teacher. If between five and 15, teacher-assigned grades get some weight. More than 15 and they get no weight. Teacher predictions are always optimistic. Result: there was a near-five percentage point increase in the fraction of entries from private schools graded at A or A*. In contrast, sixth-form and further education colleges saw their A and A* grades barely rise, up only 0.3 percentage points since 2019 and down since 2018. This is a manifest injustice. No sixth-form or FE college has the funding to support classes of 15, let alone five. As Chris Cook, a journalist and education expert, has written: “Two university officials have told me they have the poshest cohorts ever this year because privately educated kids got their grades, the universities filled and there’s no adjustment/clearing places left.”
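
Again purely as an illustration, and assuming a simple linear taper for the in-between band (the published model is more involved), the small-cohort rule looks something like this:

```python
# Illustrative sketch of the small-cohort rule described above. The thresholds
# (5 and 15) are those reported; the linear taper for the in-between band is an
# assumption for illustration, not the regulator's published weighting.

def teacher_grade_weight(cohort_size):
    """Weight given to teacher-assessed grades for a class of a given size."""
    if cohort_size <= 5:
        return 1.0            # teacher grades used in full
    if cohort_size >= 15:
        return 0.0            # teacher grades ignored; statistical model only
    return (15 - cohort_size) / 10.0   # assumed taper between the two thresholds


for size in (4, 8, 12, 20):
    print(size, teacher_grade_weight(size))
# A private-school class of 4 keeps its (optimistic) teacher grades untouched;
# an FE college class of 20 gets none of that benefit.
```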

Second, the algorithm used makes it almost impossible for students at historically poor-performing sixth forms to get top grades, even if the candidates themselves had an outstanding record at GCSE. For reasons that are entirely beyond me, the regulator did not use the full information on GCSE performance. Rather than use data that could help to identify truly outstanding candidates, the model simply records which tenth of the distribution a candidate’s GCSE scores fell in. There is a huge difference between the 91st and 99th percentiles, yet they are treated the same. There is little difference between the 89th and 91st, yet they are treated differently.
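
A toy example makes the loss of information plain; the bucketing below is an illustration of the principle, not the regulator’s exact coding:

```python
# Illustrative sketch of the point above: bucketing prior attainment into tenths
# of the distribution discards exactly the information that distinguishes an
# outstanding candidate from a merely good one.

def decile(percentile):
    """Map a GCSE percentile (0-100) to the tenth of the distribution it falls in."""
    return min(int(percentile // 10), 9)


for p in (89, 91, 99):
    print("percentile %d -> decile %d" % (p, decile(p)))
# percentile 89 -> decile 8
# percentile 91 -> decile 9
# percentile 99 -> decile 9
# The 91st and 99th percentiles are treated identically; the 89th and 91st are not.
```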

As Dave Thomson, chief statistician at FFT Education Datalab, has noted, adjustments for changed prior attainment do not appear to take account of the historic value added of the school. That means schools that historically have been good at translating GCSE performance into good A Level results seem to be penalised.

All this may help to explain some of Ms Cobbett’s bafflement. But we don’t know. Ofqual has not provided her with that data.

Then there appears to be a more general lack of common sense applied to the results of the model. If it predicts a U grade (a fail) for a subject in a school, then some poor sucker is going to fail, deserved or not. That’s why some seem to have been awarded Us despite predicted grades of C.
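
Again as a purely illustrative sketch: because grades are handed out down the ranking to match the model’s distribution, a distribution that contains a U guarantees that the lowest-ranked student receives one, whatever the teacher predicted.

```python
# Illustrative only: names, predictions and shares are invented. If the model's
# distribution for a subject includes a U, the bottom of the ranking gets it.

ranked = ["Student %d" % i for i in range(1, 6)]        # school's rank order, best first
teacher_predicted = {s: "C" for s in ranked}            # teacher predicted a C for everyone
model_shares = [("B", 0.4), ("C", 0.4), ("U", 0.2)]     # model's expected grade shares

awarded, position, cumulative = {}, 0, 0.0
for grade, share in model_shares:
    cumulative += share
    cutoff = round(cumulative * len(ranked))
    for student in ranked[position:cutoff]:
        awarded[student] = grade
    position = cutoff

print(awarded["Student 5"], "awarded, despite a predicted", teacher_predicted["Student 5"])
# Prints: U awarded, despite a predicted C
```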

To repeat, the truth is that the regulator was handed an impossible task. But it should have done a better job. More information on students’ prior performance should have been used. For individual schools, results should have been constrained more tightly, so that they could not fall below the average of the past three years unless there had been a big drop-off in prior attainment. Ways could have been found around the small numbers problem. Some of this might have led to a little grade inflation across the board this year. What we got was grade inflation for the already privileged and little or nothing for the rest.

The whole thing should have been more transparent and quicker, with a better worked-out appeals process. This was all bound to end up in a mess, but it didn’t need to be this much of a mess.

This article originally appeared in The Times and is used here with kind permission.