Which assessment evidence has been used in the process, and how?
In each subject area, we started by pulling together data from a range of assessments into an ‘evidence pack’ for each student. This contained a minimum of three pieces of evidence per student, including:
- Assessments completed during the recent assessment windows
- Unit assessments sat throughout the course
- Year 11 mock exams (for GCSE subjects)
- Other robust assessments that were ‘uniform’ across the cohort (including NEA/coursework where this was part of the course). For these other assessments, consideration was also given to factors such as the level of control under which they were completed and the coverage of specific assessment objectives.
Although some of these assessments had previously had specific grades attached to them, we decided to remove those grades from the process and work, initially, with percentage scores from these assessments. (Assigning grades to smaller, specific pieces of work often happens in normal times to give students, parents and staff a useful indicator of the quality of that work, but it is a rather crude indicator unless the work is part of a full exam suite.)
Subject teams then went through a rigorous process of deciding how to weight each of these pieces of assessment evidence, taking into account considerations such as when the assessment was sat (i.e. recently or earlier in the course) and the coverage of the exam specification assessment objectives across assessments.
The ‘weighted’ percentage scores from each of these assessments were then used to produce a final score for each student.
| Student Name | Evidence 1 (Yr 11 Mock) | Evidence 2 (In-class assessments) | Evidence 3 (Assessment window) | Overall Percentage Score |
| --- | --- | --- | --- | --- |
| | Weighted at 40% | Weighted at 10% | Weighted at 50% | |
In this example, the score from the most recent assessment window has the highest weighting (50% of the overall grade), followed by the evidence from the Year 11 mock (40% of the overall grade). This reflects the guidance that evidence should cover the breadth of the studied/taught course and that judgements should lean more heavily on more recent work. The in-class assessments covered a relatively narrow proportion of the overall assessment objectives and so have been given a relatively small weighting (10%). This generates an overall percentage score (in this example, 57%).
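As a sketch of the arithmetic behind this worked example, the weighted combination might be computed as follows. The individual evidence scores here are invented purely so the total reproduces the 57% figure; only the weightings (40%/10%/50%) come from the example above.

```python
# Illustrative sketch only: the evidence scores below are invented so that the
# weighted total matches the 57% overall figure in the worked example; the
# weightings (40% / 10% / 50%) are taken from that example.

def overall_score(evidence):
    """Combine percentage scores using the subject team's agreed weightings."""
    assert abs(sum(w for _, w in evidence) - 1.0) < 1e-9, "weightings must sum to 100%"
    return sum(score * weight for score, weight in evidence)

evidence = [
    (55.0, 0.40),  # Evidence 1: Year 11 mock, weighted at 40% (invented score)
    (60.0, 0.10),  # Evidence 2: in-class assessments, weighted at 10% (invented score)
    (58.0, 0.50),  # Evidence 3: recent assessment window, weighted at 50% (invented score)
]
print(round(overall_score(evidence)))  # 57
```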
How did you then arrive at actual grades for year 11 and year 13 students?
Having arrived at amalgamated final scores, subject teams then set about assigning grade boundaries. This process was done anonymously with student names hidden in order to reduce the chance of unconscious bias shaping these decisions.
The evidence available for anonymised individuals was sampled in order to award a final grade to that student’s body of work in its entirety, rather than trying to grade individual pieces of work. This involved subject teachers reviewing physical papers from the recent assessment windows (and the Year 11 mock papers where relevant), alongside the grade descriptors provided to teachers by JCQ, as well as exemplar grading material provided by the exam boards. Rigorous discussions within subject teams took place to ensure that the grades awarded to each student’s body of evidence were fair and reflective of the evidence available.
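A minimal sketch of the boundary-applying arithmetic, under invented boundary values: in reality the boundaries were set by subject teams reviewing anonymised work against the JCQ descriptors and exam-board exemplars, so the numbers below are illustrative assumptions only.

```python
# Hypothetical sketch: the boundary values below are invented for illustration.

def grade_for(score, boundaries):
    """Return the highest grade whose minimum score the amalgamated score meets."""
    grade = "U"  # fallback where no boundary is met (illustrative)
    for minimum, g in sorted(boundaries):
        if score >= minimum:
            grade = g
    return grade

# Invented GCSE-style boundaries: (minimum overall percentage, grade awarded)
boundaries = [(30, "3"), (42, "4"), (54, "5"), (66, "6"), (78, "7")]
print(grade_for(57, boundaries))  # prints "5"
```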
Have you taken into account special circumstances owing to the disruption faced by individual students (eg absences during assessments, prolonged periods of absence for individuals, family bereavement etc)?
Yes. Where a student was absent for a particular assessment or faced a substantial barrier to accessing lessons in the lead-up to a particular assessment, adjustments were made. Likewise, in the rare circumstance that a subject team was using evidence from assessments in which students did not receive their usual access arrangements, adjustments were made where it was deemed necessary.
Examples of the sorts of adjustments made in these situations include changes to the weighting given to different assessments, the use of alternative evidence for individuals, or the addition of extra marks to specific assessments (as happens in a normal exam year for students judged to be deserving of ‘special consideration’).
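One of these adjustments, changing the weighting when a student missed a piece of evidence, could be sketched as below. The evidence names and the choice to rescale the remaining weightings proportionally are assumptions for illustration, not a statement of the exact method used.

```python
# Illustrative assumption: when one piece of evidence is missing, the remaining
# weightings are scaled up proportionally so they still sum to 100%.

def renormalise(weights):
    """Rescale the remaining evidence weightings so they sum to 1."""
    total = sum(weights.values())
    return {name: w / total for name, w in weights.items()}

# Hypothetical case: the student missed the in-class assessment (originally 10%),
# leaving the mock (40%) and the assessment window (50%).
remaining = {"yr11_mock": 0.40, "assessment_window": 0.50}
adjusted = renormalise(remaining)
print({name: round(w, 3) for name, w in adjusted.items()})
# {'yr11_mock': 0.444, 'assessment_window': 0.556}
```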
How have you quality assured this process?
Over the course of this process, there has been a series of compulsory staff training sessions to ensure:
- A shared understanding of how the process was to be conducted
- Awareness of potential sources of bias and unconscious effects on objectivity, using Ofqual’s guidance on making objective judgements
Preparing for the assessment windows:
- Robust discussion in teams about what should be assessed during the assessment windows in terms of ensuring sufficient coverage of assessment objectives, disruption to learning etc.
- Consistent approach to preparation across classes in terms of revision materials and support provided to students
- Robust discussions between SLT and middle leaders
During the marking of work completed in the assessment windows:
- A range of approaches were taken across subject teams to standardise marking, including (amongst other things) extensive moderation activities and collaborative approaches to marking. In subjects which are taught by a single teacher, this included working with colleagues in other schools to assist with standardisation.
During the grading process:
- A central data-sheet proforma was used to ensure transparency and accurate recording.
- The process of assigning grades and setting grade boundaries (as detailed above) was done anonymously with names hidden, where this was possible (where this process involved reviewing recordings of student performance, for example, this was not possible).
- Subject teams used the grade descriptors provided by JCQ and exemplar grading materials provided by the exam boards.
- Robust discussions between SLT and middle leaders
After the grading:
- Use of historic data and aspirational target setting data to support benchmarking
- Review by SLT of data, including detailed analysis using specialist analysis software
Where can I read more about your policy?
Our Centre Policy, which was submitted for external quality assurance, is available below.