2017 Examinations standardisation: Umalusi briefing

Basic Education

07 February 2018
Chairperson: Ms N Gina (ANC)

Meeting Summary

UMALUSI said it was satisfied that it had diligently executed its quality assurance mandate for the National Senior Certificate (NSC) examinations. UMALUSI noted that standardisation was an international practice to mitigate the effect of factors other than the learners’ knowledge and aptitude on their performance. UMALUSI then spoke to the principles applied in the standardisation of examination marks, after which it addressed some common myths about standardisation: that standardisation was used to manipulate figures to increase the learners’ pass rate; that a mainly upward or downward adjustment of marks increased or decreased all learners’ marks from 0-300 by the same amount; that the decision to adjust was determined only by the mean of the current cohort and that a maximum adjustment was measured from the mean; and that candidates’ final marks were finalised during standardisation.

UMALUSI then spoke to the 2017 standardisation figures. It noted that it standardised the results not only of the NSC but also of the National Certificate Vocational [NC(V)], NATED Courses (N2-N3) and the General Education and Training Certificate (GETC), and that equal attention had to be given to these other qualifications as well. The standardisation data was considered confidential because the standardisation marks were not final: they included only the 75% written component and not the 25% School-Based Assessment (SBA) marks, and did not include rules of combination, condonation and language compensation marks. It was therefore dangerous to speculate on the outcome of the results based on standardisation outcomes. Any predictions would be misleading, and the overall pass rate was not known at this stage.

The Committee said UMALUSI was doing a good job but believed there were systemic problems in the educational system, that there was a suspicion that the country’s average standard of education was low, and that South African education was like a ‘petri dish’. Members felt that this was reflected in a comment made by Prof Moodley during the presentation that ‘one can’t adjust for the ills of society on graph paper’. Members asked if the 30% pass rate was not setting students up for failure. Why was there standardisation in matric but not at university? What was the timeline allowed for the turnaround of students’ performance and was it prejudiced against students?

What was the formula for standardisation? What comparison could be drawn from the standardisation of Independent Examinations Board (IEB) students, for example? Were the students the same? Why was the issuing of college certificates taking so long? Why was the standardisation norm taken over five years and not ten? Members said that the Committee had to be part of the standardisation process because it was part of the Committee’s oversight work and the process should be transparent. Members cautioned against lumping all learners together to get the norm because some learners might only be doing three subjects in a year. How would a five-year norm be handled if there was no technical data for a new subject? Members asked why learners should be punished by having their marks adjusted downwards because of the five-year norm. When were papers detected as being too difficult or easy? Members said schools had a mixture of levels and asked whether UMALUSI considered special needs when doing standardisation.

Members asked how adjustments of zero benefited the learner. Who decided on the 30% pass rate and what did UMALUSI recommend? Members said the cognitive level of learners differed from year to year and asked what the reason was for downgrading learners’ marks, given that the aim of education was for learners to succeed (to pass). To what extent had UMALUSI built capacity regarding the standardisation process? Members asked if a child who received 100% at a public school would have his marks downgraded because of the historical results of his school. Members felt that the exam results would have given an indication of where improvements needed to be made. Members said it would be interesting to compare the raw marks of black and white schools. Members asked for the detailed formula that was entered into the computer to do the statistical standardisation, or the theory behind the standardisation. What was the difference between the standardisation of the NSC and the IEB? Was it the same? Members asked how long UMALUSI had been using historical data for the norm. Were UMALUSI’s results audited to improve their credibility? What recommendations had UMALUSI given the Department of Basic Education in terms of improving the assessment of matric results?

Meeting report

Briefing by UMALUSI

Prof John Volmink, Chairperson, UMALUSI Council, said the Council was satisfied that UMALUSI had diligently executed its quality assurance mandate through the oversight role of the Assessment Standards Committee.

Dr Mafu Rakometsi, CEO, UMALUSI, said that standardisation was practised internationally to mitigate the effect of factors other than the learners’ knowledge and aptitude on their performance. These factors could be the difficulty of question papers, undetected errors, and learner interpretation of questions. The process assumed that, for large populations, the distribution of aptitude and intelligence did not change appreciably. He said standardisation was a process which included:

* the moderation of question papers;

* the review of learner performance against the historical performance of candidates in each subject, with the historical average (norm) constructed using the last three to five years’ data;

* the statistical moderation of internal assessment;

* qualitative input meetings;

* the Moderator, Chief Marker and Internal Moderator’s reports;

* UMALUSI’s own research; and

* the input of the Assessment Standards Committee (ASC), which comprised independent experts drawn from different higher education institutions and research institutes: Education, Mathematics and Statistics experts.

He said the principles applied in the standardisation of examination marks were that:

* No adjustment should exceed 10% of the historical average.

* In the case of the individual candidate, the adjustment effected should not exceed half of the raw mark obtained by the candidate.

* If the distribution of the raw marks was below the historical average, the marks could be adjusted upwards, subject to limitations.

He said it was a myth that standardisation was used to manipulate figures to increase learners’ pass rate. Candidates’ marks were adjusted only if there was compelling evidence from both qualitative and quantitative reports that the standard of the examination did not compare fairly with previous examinations. The marks were then adjusted to ensure the candidates were neither advantaged nor disadvantaged by factors beyond their control.

He said it was a myth that a mainly upward or downward adjustment of marks involved increasing or decreasing all learners’ marks from 0-300 by the same amount. Standardisation was guided by principles such as: no adjustment should exceed half of the raw mark; adjustments should not exceed 10 percentage points above or below the raw mark; and an adjustment did not necessarily mean that all marks from 0-300 increased or decreased by the same amount.

A mainly upward adjustment could still have some marks adjusted downwards, and vice versa, depending on the candidates’ performance in relation to the norm.
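
The following is a minimal, hypothetical Python sketch of how the two stated caps (at most 10 percentage points either way, and at most half of the candidate’s raw mark) would interact; the function name and marks are invented, and the real process rests on expert judgement rather than a fixed formula.

```python
# Hypothetical sketch only: the actual Umalusi procedure also weighs
# qualitative reports and is not a fixed formula.

def cap_adjustment(raw_mark: float, proposed_shift: float) -> float:
    """Limit a proposed shift of an exam mark (out of 100) by the two
    stated caps: at most 10 percentage points either way, and at most
    half of the candidate's raw mark."""
    limit = min(10.0, raw_mark / 2)                  # the binding cap
    shift = max(-limit, min(limit, proposed_shift))  # clamp the shift
    return min(100.0, max(0.0, raw_mark + shift))

# A weak raw mark of 14 can move by at most 7 (half the raw mark),
# while a raw mark of 60 can move by at most 10 percentage points,
# so the same proposed adjustment lands differently on different marks.
print(cap_adjustment(14, 10))   # 21.0
print(cap_adjustment(60, 12))   # 70.0
```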

He said it was a myth that the decision to adjust was determined only by the mean of the current cohort and that a maximum adjustment was measured from the mean. A maximum adjustment was not measured by the mean alone but by the spread of adjustments; and the decision to adjust was not determined by the mean only, as other factors taken into consideration were the qualitative input reports, the previous five years’ results, the pass rate, the distinction rate, the failure rate, as well as the median.
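
As an illustration of the kind of summary statistics weighed alongside the mean, here is a hypothetical sketch; the marks are invented, and the 30% pass and 80% distinction thresholds are assumptions for the example, not figures from the briefing.

```python
from statistics import mean, median

def cohort_indicators(marks: list[float]) -> dict[str, float]:
    """Summary statistics of a subject's raw marks (out of 100),
    assuming a 30% pass threshold and an 80% distinction threshold."""
    n = len(marks)
    return {
        "mean": mean(marks),
        "median": median(marks),
        "pass_rate": sum(m >= 30 for m in marks) / n,
        "distinction_rate": sum(m >= 80 for m in marks) / n,
        "failure_rate": sum(m < 30 for m in marks) / n,
    }

marks = [22, 35, 41, 48, 55, 63, 71, 84]   # invented raw marks
print(cohort_indicators(marks))
# {'mean': 52.375, 'median': 51.5, 'pass_rate': 0.875,
#  'distinction_rate': 0.125, 'failure_rate': 0.125}
```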

He said it was a myth that learners’ final marks were finalised during standardisation. The NSC results consisted of two components: the external component (the examination) and the internal component (the SBA). The final mark of the candidate consisted of 75% exam and 25% SBA.

During the standardisation meeting, adjustments were made only to the external examination raw mark.

The SBA marks were subjected to a process of statistical moderation before being added to the adjusted exam mark in the above ratio, with the underlying assumption that there was a positive relationship between learners’ performance in the SBA and in the examinations.
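
As a simple illustration of the 75:25 weighting described above, here is a hypothetical sketch; the function name and marks are invented.

```python
def final_mark(adjusted_exam: float, moderated_sba: float) -> float:
    """NSC final mark: 75% examination plus 25% SBA (both out of 100)."""
    return 0.75 * adjusted_exam + 0.25 * moderated_sba

# An adjusted exam mark of 56 and a moderated SBA mark of 64
# combine to a final mark of 58.
print(final_mark(adjusted_exam=56, moderated_sba=64))   # 58.0
```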

Prof M Moodley, UMALUSI Assessment Standards Committee member, then spoke to the 2017 standardisation figures referencing graphs and tables found in Book 1 and Book 2.

Dr Rakometsi said that UMALUSI standardised the results not only of the NSC but also of the NC(V), NATED Courses (N2-N3) and the GETC, and that equal attention had to be given to these other qualifications, not just the NSC.

He said the standardisation data was considered confidential because the standardisation marks were not final: they included only the 75% written component and not the 25% SBA marks, and did not include rules of combination, condonation and language compensation marks. Hence it was dangerous to speculate on the outcome of the results based on standardisation outcomes. Any predictions would be misleading, and the overall pass rate was not known at this stage.

He said UMALUSI would be hosting a conference on educational assessment in May 2018 in Pretoria with the theme: “Local Context in a Global Context: encouraging diversity in assessment”.

Discussion

Mr I Ollis (DA) said UMALUSI was doing a good job but believed there were systemic problems in the educational system, that there was a suspicion that the country’s average standard of education was low, and that South African education was like a ‘petri dish’. He felt this was reflected in a comment made by Prof Moodley during the presentation that ‘one can’t adjust for the ills of society on graph paper’.

Mr T Mgcini (EFF) asked if the 30% pass rate was not setting students up for failure. Why was there standardisation in matric but not at university? What was the timeline allowed for the turnaround of students’ performance and was it prejudiced against students? What was the formula for standardisation? What comparison could be drawn from the standardisation of IEB students for example? Were the students the same? Why was the issuing of college certificates taking so long? Why was the standardisation norm taken over five years, why not ten years?

Ms N Tarabella-Marchesi (DA) said that the Committee had to be part of the standardisation process because it was part of the Committee’s oversight work and the process should be transparent. She said there had been a decrease in adjustments compared to the previous year, when there had been big upward adjustments. She cautioned against lumping all learners together to get the norm because some learners might only be doing three subjects in a year. How would a five-year norm be handled if there was no technical data for a new subject?

Ms J Basson (ANC) asked why learners should be punished by having their marks adjusted downwards because of the five-year norm. When were papers detected as being too difficult or easy? She said schools had a mixture of levels and asked whether UMALUSI considered special needs when doing standardisation.

Mr H Khosa (ANC) asked how adjustments of zero benefited the learner. Who decided on the 30% pass rate and what did UMALUSI recommend?

Mr D Mnguni (ANC) said the cognitive level of learners differed from year to year and asked what the reason was for the downgrading of learners’ marks, given that the aim of education was for learners to succeed (to pass). To what extent had UMALUSI built capacity regarding the standardisation process?

The Chairperson wanted further explanation on the level of passes.

Dr Rakometsi said the pass mark debate was an old debate that had been profiled and had skewed current discussions. In the past a higher-grade pass was 40%, a standard grade pass was 33.3% and a lower grade pass was 25%. The NSC was more difficult than the Senior Certificate because the latter only required 720 marks regardless of the subjects failed, whereas the former required three subjects to be passed at 40% and three subjects at 30%. He said there were students who wanted to enter the workplace and wanted to show something for their 12 years of schooling. Even if the pass mark was adjusted to 50%, it was important to know what the cognitive load of the paper was, because a question paper could be loaded with easier questions.

Responding to Ms Tarabella-Marchesi’s question, he said that the UMALUSI Council had decided not to invite politicians: while it appreciated the advice and engagements which had started in 2011 so that politicians could understand the process, it was abnormal practice. He used the example of the adjustment of electricity prices, where politicians were also not part of the process. He added that parliamentary oversight could not be of standardisation alone, or of standardisation for the Department of Basic Education (DBE) only. South Africa was a democratic country with separation of powers, and bodies did their work without interference. He said that in 2016 confidential information had been released which had created confusion in the public’s mind.

Prof Volmink added that he wanted to emphasize the difference between confidential information which was what UMALUSI dealt with and secrecy.

On Mr Ollis’ question, Dr Rakometsi said the NSC was recognised internationally and a number of benchmarking exercises had been done. The NSC had emerged well when compared to other equivalents like the International Baccalaureate. When CAPS was considered for implementation, the benchmarking exercise had been taken into account. South Africa had its own challenges, such as poverty, its history and the socio-economic challenges of learners, yet students would get what they deserved.

Prof Sarah Howie, Director: Africa Centre for Scholarship - Stellenbosch University, said the issue of standardisation was important for all the reasons raised by Members. Benchmarking was important to find out where South Africa stood compared to other countries. The benchmark exercises had shown that by the time they reached grade four, South African students were already five years behind children in other countries. The learners who wrote the grade 12 matric exam were a proportion of those learners who had been behind in the early years of their schooling. The benchmarking also served to warn authorities to make interventions earlier. Improvement was being seen in the marks of the weakest learners in the early years; however, the average and above-average marks were not improving, and the top group’s marks were starting to drop.

Prof Volmink said that as a country South African standardisation was at a different place compared to other countries because it had faced post-apartheid challenges and a lack of participation in the global arena.

Regarding Mr Mgcini’s question, Dr Rakometsi said that UMALUSI was not the only body that did standardisation, inspectors at schools did standardisation when they inspected school results, but this was done without scientific background.

Prof Volmink said that the raw score was the ideal. He said the difference between school and university was that university exams were not national exams. If there was a deviation, then the senate investigated.

Prof Moodley said that universities selected moderators to moderate papers and scripts and that the private sector was the ultimate standardiser because it would not choose graduates from an institution if they were not up to standard.

On Ms Tarabella-Marchesi’s question, he said that the introduction of a new subject was a big problem. UMALUSI needed to interrogate the paper and the moderator, and compare these to another similar subject. Adjustments were made to avoid advantaging or disadvantaging a student. Once there were three years’ worth of results, a norm could be established. He said the five-year norm was a moving average.
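
A hypothetical sketch of a five-year moving-average norm for a single subject follows; the yearly figures are invented, and the real norm is built per subject from full mark distributions rather than yearly means alone.

```python
def moving_norm(yearly_means: list[float], window: int = 5) -> float:
    """Average of up to the last `window` years of subject means; a new
    subject with only three years of results simply uses those three."""
    recent = yearly_means[-window:]
    return sum(recent) / len(recent)

history = [48.2, 50.1, 49.5, 51.0, 50.4, 52.3]   # invented yearly means
print(round(moving_norm(history), 2))             # 50.66 (last five years)
print(round(moving_norm(history[:3]), 2))         # 49.27 (a newer subject)
```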

He said that learners doing only three subjects were not taken out because that would affect the cohort.

On Mr Khosa’s question, he clarified that adjustments of zero were indeed made, but this was not to say that interrogation of the marks did not occur.

On the standard of exam papers, he said that UMALUSI paid special attention to the cognitive demand of the papers. The papers had to comprise 30% low cognitive demand, 40% middle cognitive demand and 30% high cognitive demand.
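
As an illustration of that weighting, the following hypothetical sketch (the mark allocations are invented) computes the share of a paper’s marks at each cognitive level, which could then be compared against the 30/40/30 target.

```python
def demand_profile(marks_by_level: dict[str, int]) -> dict[str, float]:
    """Fraction of a paper's total marks at each cognitive level."""
    total = sum(marks_by_level.values())
    return {level: marks / total for level, marks in marks_by_level.items()}

paper = {"low": 45, "middle": 60, "high": 45}    # a 150-mark paper
print(demand_profile(paper))   # {'low': 0.3, 'middle': 0.4, 'high': 0.3}
```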

Prof Loyiso Jita, Dean: Faculty of Education - University of the Free State, said the function of standardisation was to ensure that matric results across years could be compared to one another.

On standardisation at universities, he said that the marks were looked at and moderated but there was no statistical standardisation as in the matric exam because this would only work with a large number of candidates.

On Mr Mgcini and Mr Mnguni’s questions, Dr Rakometsi said that a particular class or year might have a bright group; however, that was a small group compared to the overall body of candidates writing the matric exam, and it was not easy to change a large system, such as the education system, quickly.

On the IEB standardisation, he said that the IEB would continue to get better results because the schools were private, the number of students per class was smaller, and the schools were better resourced.

The Chairperson asked if their assessment standards were the same.

Dr Rakometsi replied that they all did the NSC and subscribed to CAPS. The demographics of the IEB school population, however, were different from those of the public schools.

He said that the standardisation process was far from ‘tampering’ with the results and people should refrain from using that word and that a better term to use was ‘adjust’.

Ms Basson asked if a child that received 100% at a public school would have his marks downgraded because of the historical results of his school.

Mr Mgcini felt that the exam results would have given an indication of where improvements needed to be made. He said it would be interesting to compare the raw marks of black and white schools. He asked for the detailed formula that was entered into the computer to do the statistical standardisation, or the theory behind the standardisation. What was the difference between the standardisation of the NSC and the IEB? Was it the same?

Ms N Mokoto (ANC) asked how long UMALUSI had been using historical data for the norm. Were UMALUSI’s results audited to improve their credibility? What recommendations had UMALUSI given the Department of Basic Education in terms of improving the assessment of matric results?

Prof Volmink replied that there was no formula; human judgement was used, taking account of the circumstances and the examiners’ and moderators’ reports, to sort out qualitative differences between years.

On whether UMALUSI was audited, he said that UMALUSI was, in fact, the auditor of the results.

Dr Rakometsi said what was fed into the computer was the standardisation principles, and the results of this were regarded as tentative, because the examiners’ and moderators’ reports were then considered and a judgement was made thereafter.

On a student whose parents had invested in his education at an underperforming school, Dr Rakometsi said that the student would get what he deserved.

He said that reference could be made to the tables found in Book 1 and Book 2 to determine how the IEB and the DBE fared. He said the factors prevalent at the IEB and the DBE were different and so the IEB and the DBE were looked at differently.

He said that once standardisation was done, UMALUSI was finished with the process. UMALUSI could still do research emanating from the recommendations it sent to the Department.

Prof Volmink added that the rank order of marks remained the same.

On the issue of comparing black and white schools, Prof Jita said that standardisation was done on each subject separately, so it would be impossible to determine whether students were from poor or rich areas, rural or urban, or black or white. This was not the kind of data that UMALUSI looked at; the type of research suggested by Mr Mgcini might be research a university could undertake.

Dr Rakometsi said that it was important for the public to understand standardisation because it was intended to benefit students.

On the human resource capacity of UMALUSI, he said that UMALUSI had gone on a recruitment drive and three statisticians had joined, but the veteran staff also needed to be retained.

The meeting was adjourned.