In July, Jo Johnson, Minister of State for Universities, Science, Research and Innovation, gave a much-trailed speech at the think tank Reform. Central to the speech was discussion of the Teaching Excellence Framework (TEF). TEF year two, which saw 295 institutions receive awards, has already proven controversial: some in the sector have praised the new attention devoted to teaching, while others argue that TEF measures have little to do with teaching practice.

GuildHE supports the TEF as a means of highlighting excellence across the sector; however, it is clear that the next iteration of TEF (set to include subject-level pilots) needs refinement.

LEO data

During the speech, the Minister praised a powerful “new analysis of graduate outcomes” – the Longitudinal Education Outcomes (LEO) data set, which combines UCAS data with information from HMRC to break down earnings at a granular institution and subject level.

Let’s set aside the questionable idea that employment is an appropriate proxy for teaching quality, when issues of social capital and opportunity impact so significantly on graduate earnings outlooks. GuildHE has misgivings about LEO data in its current form, which excludes self-employment. This reflects poorly on subjects with high rates of graduate self-employment, such as the creative arts. It would also be necessary to benchmark for regional salary differences, so any move to include LEO data in the TEF should be properly tested and its impacts modelled. GuildHE will continue to raise these concerns with HEFCE/OfS and DfE, and is pressing that incomplete LEO data, which excludes self-employment information, should not be used until it is rigorous and robust.

Subject-level TEF

The sector knew it was coming, but subject-level TEF has caused some concern. It aims to highlight subject areas which differ from institutional norms, helping students to make more informed choices about where to study and supporting institutions to improve areas of weakness. Certainly there are HEIs with exceptional performance in a few specific subjects, and others where some subjects lag behind wider excellence. It is proposed that subject-level TEF will identify and highlight these ‘exceptions’.

With the sector still getting its head around the approach to the overall TEF assessment, moving to a more granular level – with associated additional reporting and assessment burdens – may seem premature to some. The promised independent review of TEF is not expected until 2018/19, and some TEF Panel decisions seem opaque. It might, therefore, be wise to straighten out these kinks before adding a subject-level layer of assessment.

HEFCE have launched a call for providers to participate in pilots of subject-level TEF.

GuildHE would urge members to participate – it is important that HEFCE and DfE are able to consider how the more granular approach of subject TEF will impact on small and specialist providers.

Teaching intensity

Perhaps the most unexpected announcement in the Minister’s speech was that:

“…we will be piloting a new TEF metric that relates directly to one important aspect of value for money: the teaching intensity a student experiences. This will look at the contact hours students receive, including the class sizes in which they are taught.”

There is a clear logic here: more direct contact, particularly in small classes, can have a very positive impact on learning (see Gibbs, 2010). DfE will therefore pilot two measures of teaching intensity which account for contact hours, class size, staff-student ratios, placements and fieldwork:

  • “A provider declaration of the contact hours they are providing, weighted by staff-student ratios, to get a measure of teaching intensity…
  • “A student survey on number of contact hours, self-directed study and whether they consider the contact hours are sufficient to fulfil their learning needs.”

Together, these aim to “build up a more rounded picture of the nature, as well as the amount, of the teaching received”, a laudable ambition.

However, these proposals are not without challenges. You might reasonably ask whether there is a substantial difference between the learning experiences of a student in a lecture with 49 others or with 149 others – yet the pilot specification would rate the first of these options as three times as good as the second. Indeed, the latter lecture might be larger for the very reason that students feel they gain more from it (perhaps due to a better lecturer). Similarly, a small class led by a less experienced lecturer could be given a better rating than a slightly larger class with an experienced tutor. It is hard to see how such measures can accurately assess the quality of teaching.
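To make the arithmetic concrete, here is a minimal sketch in Python of how such a weighting behaves, assuming – purely for illustration, since the speech does not spell out the formula – that each contact hour is simply weighted by the inverse of class size:

  # Illustrative only: assumes each contact hour is weighted by 1 / class size.
  # This is our assumption, not the published pilot specification.
  def intensity_score(contact_hours: float, class_size: int) -> float:
      """One contact hour in a class of n students counts as 1/n."""
      return contact_hours / class_size

  # Ten lecture hours, in a class of 50 versus a class of 150:
  small = intensity_score(10, 50)     # 0.200
  large = intensity_score(10, 150)    # 0.067
  print(small / large)                # 3.0

Under this weighting, the class of 50 scores three times as highly as the class of 150, irrespective of who is teaching or how well – which is precisely the concern raised above.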

Likewise, self-directed study can have very different value across institutions and subjects. Library-based research is core to many humanities degrees, and studio time to creative subjects, yet some institutions will be far better than others at supporting individual study.

The student survey may help address these challenges, yet its inclusion adds new data-collection burdens for both students and institutions. More problematic, in relying on student reporting, is that most students have experience of only one institution – and different students will have different perceptions of issues including self-directed study and, crucially, their own learning needs. It is therefore hard to see how such results can be meaningfully benchmarked.

All in all, there is much in the next iteration of TEF that raises questions – with answers still awaited from the previous exercise. It is welcome that the Department announced at the beginning of the year that the subject-level TEF pilots would be extended to two years, to take account of the lessons-learned exercise from the institutional-level assessment; but without further clarification or responses to sector concerns, it is hard to get excited by this new iteration of TEF.