To create a league table for 54 subjects, we use nine measures of performance that cover all stages of a student's life cycle. We treat every provider of a subject as a department and ask it to tell us which students count within that department. Our goal is to indicate how likely each department is to offer a positive all-round experience to future students, by reference to how past students in the department have fared. To assess this, we look at the resources and staff contact committed to past students, examine entrance standards, and determine how likely students are to receive the support they need to continue their studies. We then evaluate how likely students are to be satisfied, to exceed expectations of success, and to have positive outcomes after finishing the course. Aggregating these measures gives an overall score for each department, against which the departments are ranked.
To ensure comparability, we focus on full-time first-degree students in the data we use. For prospective undergraduates who are unsure which subject they want to study, we have averaged the Guardian scores for each institution across all subjects to produce an institution-level table.
What’s new for 2022?
The structure and methodology of the rankings have remained largely unchanged since 2008, but there have been some significant shifts in the data underlying this year's guide. As a result, we have made some adjustments to the methodology used to compile the tables.
Although analysis by the Office for Students (OfS) showed no tangible impact of the pandemic on the 2020 survey results, the 2021 results were clearly heavily affected, with nearly all providers and almost all questions showing a decrease in satisfaction levels. Despite the sector-wide decline in satisfaction, we felt that institutions that performed well were displaying a resilience that could benefit future cohorts. We therefore opted to use the 2021 results, combined with those from 2020.
Our aggregation rules required that 2021 results were available and that the total number of relevant respondents across the two years was 23 or more. We gave extra scrutiny to departments with few respondents in 2021, or with results for 2021 but none for 2020, and disregarded the results where there was any indication that they might be unrepresentative.
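As a rough illustration, the aggregation rule can be expressed in a few lines of Python (a minimal sketch; the function name and the use of a zero respondent count to represent missing 2021 results are our own assumptions):

```python
def nss_results_usable(respondents_2021: int, respondents_2020: int) -> bool:
    """Check whether 2020/2021 NSS results can be aggregated for a department.

    Mirrors the rule described above: 2021 results must be available,
    and the two years together must have 23 or more relevant respondents.
    """
    if respondents_2021 == 0:  # no 2021 results available
        return False
    return respondents_2021 + respondents_2020 >= 23
```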
Before the results were even available, there was a significant policy shift behind the NSS, leading to a review of the survey and the suspension of the requirement for providers to promote it to their final-year students. Together with the major disruption to the results, this led us to decrease the weighting of the NSS metrics from 25% to 20%.
Career Prospects
The 2021 edition of the University Guide was the first to display the career prospects of the 2017/18 graduating cohort, based on the new Graduate Outcomes survey. We had anticipated using the results for the 2018/19 cohort in the 2022 edition, but two factors prevented this.
Firstly, the delayed availability of the survey results was incompatible with the publication's timescales. Secondly, most of the graduates surveyed reported on their occupation in September 2020, and with the pandemic's profound effect on employment we felt it would be unreliable to treat this data as representative of how well a department prepares its students for the world of work.
Continuation
Continuation rates have been included since the 2019 edition of the University Guide, with a lower weighting than other metrics. Since its introduction, the metric has proved a reliable indicator of how well providers manage the risk of students dropping out during their first year, so when the weighting assigned to NSS results was reduced, the continuation rate was the obvious metric to take up the difference. For all non-medical subjects, its weighting has increased from 10% to 15%.
Previously, for the medical subjects of Medicine, Dentistry & Veterinary Science, the continuation metric was displayed but not weighted. The value-added score had a 5% weighting, but that metric is not ideal for these subjects, as they tend not to use classified degree awards. This 5% weighting has therefore been reassigned to the continuation metric.
Standardisation
The continuation metric is not perfect for the medical subjects either, as the vast majority of students starting these courses complete their first year. This leaves a very tight distribution of scores near the 100% mark, so small variations caused by the departure of just one or two students can produce a highly negative standardised score.
To address inconsistencies in the allocation of UCAS tariff points to qualifications across the different UK nations, adjustments have been made to the standardisation process. The average tariff for students holding Scottish Highers/Advanced Highers is 52 points higher than the overall average tariff: the average student who entered higher education with these qualifications carries a tariff around 40% higher than students with other qualifications. This discrepancy has been growing in recent years, so adjustments have been made to counteract the benefit and restrict any further advantage. The new methodology takes into account the proportion of students with Scottish Highers/Advanced Highers in each department and adjusts the department's average tariff by 52 points, multiplied by a discount factor. The tariff metric contributes 15% of the total department score and applies only to students who entered in 2019/20.
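A minimal sketch of how such an adjustment might be computed follows; the function name and the exact way the proportion and discount factor combine are our reading of the description above, and the value of the discount factor is not published here:

```python
SCOTTISH_TARIFF_GAP = 52  # average tariff gap in UCAS points, as described above

def adjusted_department_tariff(avg_tariff: float,
                               prop_scottish: float,
                               discount_factor: float) -> float:
    """Adjust a department's average tariff for Scottish Highers/Advanced Highers.

    The 52-point gap is scaled by the proportion of the department's
    entrants holding these qualifications and by a discount factor,
    then removed from the department's average tariff.
    """
    return avg_tariff - SCOTTISH_TARIFF_GAP * prop_scottish * discount_factor
```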
The ranking system considers several further metrics: entry standards, student-staff ratios, expenditure per student, and continuation rates. Entry standards measure the grades of students entering the department and contribute 15% of the total score. Student-staff ratios estimate the amount of staff contact students can expect to receive and contribute 15% of the score. Expenditure per student is calculated by dividing total expenditure by the number of students in the subject area and contributes 5% of the score. Continuation rates measure how successfully the department supports students through their studies and, as described above, contribute 15% of the score for non-medical subjects and 5% for medical subjects.
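Collecting the weightings stated throughout this guide in one place gives the following picture for non-medical subjects (the 4% for overall NSS satisfaction is inferred rather than stated: the NSS carries 20% in total, of which teaching and assessment/feedback account for 8% each):

```python
# Metric weightings for non-medical subjects, as stated (or inferred) above.
WEIGHTS = {
    "nss_teaching": 0.08,
    "nss_assessment_feedback": 0.08,
    "nss_overall": 0.04,        # inferred: 20% NSS total minus 8% + 8%
    "entry_standards": 0.15,
    "student_staff_ratio": 0.15,
    "expenditure_per_student": 0.05,
    "continuation": 0.15,
    "value_added": 0.15,
    "career_prospects": 0.15,
}
assert abs(sum(WEIGHTS.values()) - 1.0) < 1e-9  # weightings sum to 100%
```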
The system has transitioned from JACS codes to HECoS codes, but the 54 subjects ranked remain unchanged. Note that certain types of qualification are excluded from these metrics, and only students entering year 1 of a course are included. Mature entrants are excluded from the entry-standards metric, as it is not considered appropriate for them. Additionally, smaller departments have their metrics calculated using a two-year average.
To factor in the impact of entry qualifications, we formulate an index score for every student with a favourable outcome, based on their anticipated continuation rate, which is capped at a maximum of 97%. To derive this score we require a minimum of 35 entrants in the latest cohort and 65 across the last two or three years.
This index score, covering the previous two or three years, forms 15% of the overall score for non-medical subjects. However, we display the raw percentage of students who continue, also averaged over two or three years.
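Our reading of this construction, sketched below, mirrors the value-added scoring described later: a student who continues earns the reciprocal of their expected continuation rate, so unexpected continuations count for more, and the 97% cap bounds how little a near-certain continuation can be worth. This is an assumption, not a published formula:

```python
def continuation_index_points(expected_rate: float, continued: bool) -> float:
    """Index points for one student, under our reading of the rule above.

    expected_rate is the student's anticipated continuation rate,
    capped at 97%; a student who continues scores its reciprocal,
    anyone else scores zero.
    """
    capped = min(expected_rate, 0.97)
    return 1.0 / capped if continued else 0.0
```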
Student satisfaction is assessed through the National Student Survey (NSS), which asks final-year students about the academic experience on their course and the support they received. Responses use a five-point Likert scale (from 1, definitely disagree, to 5, definitely agree). We aggregate the responses of full-time first-degree students on the course at the provider to calculate both a satisfaction rate and an average response. The satisfaction rate is the proportion of responses that "definitely agree" or "mostly agree", while the average response is the mean score, between 1 and 5, across the responses to those questions.
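Both statistics can be computed directly from the raw responses, as in this minimal sketch (the function name is ours, and we assume "mostly agree" corresponds to a response of 4):

```python
def satisfaction_stats(responses: list[int]) -> tuple[float, float]:
    """Satisfaction rate and average response for a set of NSS answers.

    Each response is an integer from 1 (definitely disagree) to
    5 (definitely agree). The satisfaction rate is the share of 4s
    and 5s; the average response is the plain mean.
    """
    rate = sum(r >= 4 for r in responses) / len(responses)
    average = sum(responses) / len(responses)
    return rate, average

# Example: eight respondents to one question
rate, avg = satisfaction_stats([5, 4, 4, 3, 5, 2, 4, 5])
print(f"satisfaction rate {rate:.0%}, average response {avg:.2f}")
```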
To evaluate the quality of education students can expect, we use responses to four questions from the 2020 and 2021 NSS surveys, covering: staff's ability to explain concepts, whether staff make the subject interesting, whether the course is intellectually challenging, and whether it helps students achieve. The overall satisfaction rate is displayed, and the average response is given an 8% weighting.
To evaluate how satisfied students are likely to be with assessment and feedback, we aggregate responses to four questions from the 2020 and 2021 NSS surveys, covering: clear marking criteria, fair marking and assessment, timely feedback, and helpful feedback. The overall satisfaction rate of each provider is displayed, and the average response is given an 8% weighting.
To assess overall satisfaction with courses, we aggregate responses to the question "overall, I am satisfied with the quality of the course" from the 2020 and 2021 NSS surveys. This carries the remaining 4% of the 20% NSS weighting.
All data was released at the CAH (Common Aggregation Hierarchy) level of aggregation and then mapped to HECoS (Higher Education Classification of Subjects) codes to weight and aggregate results for each of the 54 subjects, prioritising outcomes at the most detailed level available.
To assess the extent of each department's support for students in attaining good grades, we calculate value-added scores that track students from enrolment to graduation. The scores take each student's starting qualifications into account and measure how far they exceed expectations in achieving a good degree classification (a 1st or a 2:1). The probability of achieving a 1st or 2:1 is determined from the student's entry qualifications; where those qualifications are unclear, we use the overall percentage of good degrees expected for students in that department. A student who earns a good degree scores points according to how difficult that was (the reciprocal of their probability of attaining a 1st or 2:1); otherwise they score zero. Students taking integrated masters degrees are always treated as having a positive outcome.
For a meaningful value-added score to be calculated, a subject must have at least 30 students in the most recent year's data. Where there are more than 15 students in that year and a cumulative total of 30 across two years, a two-year average is computed instead.
This metric is expressed as points out of 10 and carries a 15% weighting in the department's overall score.
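The per-student scoring rule can be sketched as follows (the function name is ours, and treating integrated masters students identically to good-degree achievers is our reading of "always treated as having a positive outcome"):

```python
def value_added_points(p_good_degree: float, got_good_degree: bool,
                       integrated_masters: bool = False) -> float:
    """Value-added points for one student, per the rule described above.

    A good degree (1st or 2:1) earns the reciprocal of the student's
    entry-based probability of achieving one, so unlikely successes
    count for more; anything else scores zero.
    """
    if got_good_degree or integrated_masters:
        return 1.0 / p_good_degree
    return 0.0

# A student with a 40% predicted chance of a good degree who achieves
# one contributes 2.5 points; a peer who misses contributes nothing.
print(value_added_points(0.4, True))   # 2.5
print(value_added_points(0.4, False))  # 0.0
```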
To determine the career prospects of graduating students, we use results from the Graduate Outcomes survey of the 2017/18 cohort, in the expectation that future cohorts will follow similar patterns. Students who enter graduate-level occupations (approximated by SOC groups 1-3: professional, managerial and technical occupations) or who go on to further study at a professional or HE level are regarded as having a positive outcome.
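A minimal sketch of the classification (function and variable names are ours):

```python
GRADUATE_SOC_GROUPS = {1, 2, 3}  # professional, managerial & technical

def positive_outcome(soc_group: int | None, further_study: bool) -> bool:
    """Classify one Graduate Outcomes response, per the rule above.

    soc_group is the major SOC group of the graduate's occupation
    (None if not employed); further_study covers professional or
    HE-level study.
    """
    return further_study or soc_group in GRADUATE_SOC_GROUPS
```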
To ensure accuracy and preserve privacy, we only use results from full-time first-degree courses where more than 20 students in a department responded. Where between 20 and 22.5 students responded, we use the result but round or obscure the exact figure for confidentiality reasons.
To avoid results being skewed by economic fluctuations in particular years, we do not average results across years for this metric. Furthermore, the significant disparity between DLHE and Graduate Outcomes survey results means that we never mix results from the two surveys.
This metric contributes 15% towards the total score in non-medical subjects.
To determine a department's ranking, we first check that it has enough data to be included. We allow metrics to be missing, so long as the missing metrics do not exceed a combined weighting of 40%. Additionally, the relevant department must have at least 35 full-time first-degree students, with 25 in the relevant cost centre.
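The inclusion check amounts to three thresholds (a minimal sketch; names are ours):

```python
def eligible_for_ranking(missing_weight: float,
                         n_students: int,
                         n_cost_centre: int) -> bool:
    """Inclusion check for a department, per the thresholds above.

    missing_weight is the combined weighting of absent metrics,
    e.g. 0.25 for 25%.
    """
    return (missing_weight <= 0.40
            and n_students >= 35
            and n_cost_centre >= 25)
```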
If an institution qualifies for inclusion, we compare each of its scores with the average across other qualifying institutions, using standard deviations to obtain standardised scores (S-scores) on a normal distribution. However, we cap certain S-scores, such as extremely high NSS, expenditure and SSR figures, at three standard deviations, to prevent a single score from having an overwhelming influence.
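A capped standardised score can be sketched like this (clamping on both sides is our assumption; the text mentions only capping extreme highs):

```python
import statistics

def s_score(value: float, peer_values: list[float], cap: float = 3.0) -> float:
    """Standardised score (S-score) for one metric, per the description above.

    The value is expressed in standard deviations from the mean of all
    qualifying departments, clamped so that no single metric dominates.
    """
    mean = statistics.mean(peer_values)
    sd = statistics.stdev(peer_values)
    z = (value - mean) / sd
    return max(-cap, min(cap, z))
```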
When there are few data points for a metric, we refer to the distribution of scores observed at a higher aggregation of subjects (CAH1). We also set a minimum standard deviation for each metric, and adjust the mean tariff reference for departments with students holding Scottish Highers or Advanced Highers.
If any indicator is missing, we substitute the corresponding standardised score from the previous year. If that is not available, we assume the department would have performed as well in the missing metric as it did in all other metrics.
Using the weighting attached to each metric, we total the standardised scores to give an overall institutional score (rescaled to 100) against which the departments are ranked.
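Putting this together with the WEIGHTS mapping sketched earlier (the min-max rescaling shown is our assumption; the text says only that totals are "rescaled to 100"):

```python
def overall_scores(departments: dict[str, dict[str, float]],
                   weights: dict[str, float]) -> dict[str, float]:
    """Weighted totals of S-scores, rescaled so the top department scores 100.

    departments maps a department name to its per-metric S-scores;
    missing metrics are assumed to have been filled in beforehand.
    """
    raw = {dept: sum(weights[m] * s for m, s in scores.items())
           for dept, scores in departments.items()}
    lo, hi = min(raw.values()), max(raw.values())
    return {dept: 100 * (r - lo) / (hi - lo) for dept, r in raw.items()}
```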
The institutional table ranks institutions on their performance in the subject tables, but takes two further factors into account when calculating overall performance: the number of students in each department and the number of institutions included in each subject table.
For each subject, we count the number of institutions included in the table and take the natural logarithm of that count. Each subject's total S-score is multiplied by both the department's student count and this logarithm, and the results are summed across subjects to give an overall S-score for the institution.
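Expressed as code, the institution-level aggregation is a weighted sum across subjects (a minimal sketch; names are ours):

```python
import math

def institution_s_score(subject_rows: list[tuple[float, int, int]]) -> float:
    """Overall institutional S-score, per the rule above.

    Each row is (subject S-score, students in the department,
    institutions in that subject's table); larger departments and
    more crowded subject tables count for more.
    """
    return sum(s * n_students * math.log(n_institutions)
               for s, n_students, n_institutions in subject_rows)
```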
Institutions that appear in fewer than eight subject tables are not included in the main ranking of universities.
Our approach attaches each full-time academic programme to designated subject groups, determined by analysing the programme's subject information. We allow institutions to review these groupings and make modifications where appropriate. Programmes that do not meet the criteria for a degree-level course are still included in our analysis, but are excluded from the ranking and scoring calculations. The guide is due a refresh in September, so stay tuned for further updates.