16 February 2018
Supreme Court

U.P. PUBLIC SERVICE COMMISSION Vs MANOJ KUMAR YADAV

Bench: HON'BLE MR. JUSTICE S.A. BOBDE, HON'BLE MR. JUSTICE L. NAGESWARA RAO
Judgment by: HON'BLE MR. JUSTICE S.A. BOBDE
Case number: C.A. No.-002326-002326 / 2011
Diary number: 22517 / 2007
Advocates: RANBIR SINGH YADAV Vs K. V. BHARATHI UPADHYAYA



NON-REPORTABLE  

IN THE SUPREME COURT OF INDIA
CIVIL APPELLATE JURISDICTION

CIVIL APPEAL No.2326 of 2011

U.P. PUBLIC SERVICE COMMISSION …. Appellant(s)

Versus

MANOJ KUMAR YADAV & ANR. …. Respondent(s)

WITH

Civil Appeal Nos.2328-2330 of 2011
Civil Appeal No.2327 of 2011

J U D G M E N T

L. NAGESWARA RAO, J.

Civil Appeal No.2326 of 2011 and Civil Appeal Nos.2328-2330 of 2011:

The Appellant is aggrieved by the judgment of the High Court by which the results of the main written examinations of the Combined State/Upper Subordinate Service (Backlog/Special Recruitment) Examination, 2004 (hereinafter referred to as the “Backlog Examination, 2004”) and the Provincial Civil Service (P.C.S.) Examination, 2004 (hereinafter referred to as the “P.C.S. Examination, 2004”) were quashed.


2. An advertisement was issued by the Appellant inviting applications for appointment to posts under the Combined State/Upper Subordinate Services in February, 2004. The preliminary examination was conducted on 19.12.2004 and the results were declared on 30.06.2005. The preliminary examination consisted of two papers, namely General Studies and one optional subject. The main written examination was held between 19.12.2005 and 03.01.2006, the results of which were announced on 06.10.2006. In the main written examination, the candidates were required to take two papers of 200 marks each in General Studies, one paper in Hindi and another in English Essay, both carrying 150 marks. All these four papers were compulsory. Apart from the compulsory papers, the candidates had to take two optional subjects with two papers in each optional. Oral interviews were conducted between 09.11.2006 and 16.11.2006.


3. There was another advertisement issued by the Appellant in May, 2004 for the Backlog Examination, 2004, calling for applications from reserved category candidates for backlog posts. The preliminary examinations were conducted on 27.02.2005 and the results were announced in September, 2005. The main written examination was conducted between 19.05.2006 and 03.06.2006, the results of which were announced on 24.03.2007.

4. Writ Petitions were filed in the High Court challenging the declaration of results of the above two examinations mainly on the ground that the scaling method followed by the Appellant in awarding marks to the candidates was illegal, arbitrary and irrational. According to the Respondents, their actual marks were reduced due to the scaling method adopted by the Appellant.

5. Recruitment made to the posts of Civil Judge (Junior Division) in the State of U.P. was the subject matter of challenge in a Writ Petition filed in this Court in Sanjay Singh and Another v. U.P. Public Service Commission, Allahabad and Another (2007) 3 SCC 720. The examination was conducted by the Uttar Pradesh Public Service Commission in 2003. The result of the examination was challenged on the ground that the adoption of the scaling method was arbitrary. The complaint was that there was reduction of actual marks to the detriment of meritorious candidates. This Court in Sanjay Singh (supra) considered the point whether the scaling method adopted by the Commission was arbitrary and irrational and held that moderation is the appropriate method to offset examiner variability and the process of scaling can be followed where the candidates take different optional subjects.

6. The High Court by its judgment dated 25.05.2007 allowed Civil Writ Petition Nos.18775 of 2007, 19089 of 2007 and 20331 of 2007 by quashing the results of the main written examination of the Backlog Examination, 2004 declared by the Commission on 24.03.2007. There was a further direction by the High Court to the Appellant to declare the results of the Backlog Examination, 2004 afresh in the light of the observations made therein and the directions issued in the judgment of this Court in Sanjay Singh’s case. The High Court referred to the submissions made on behalf of the Appellant that the subjects of General Studies, Hindi and English Essay were compulsory subjects to be taken by all the candidates. There were two optional subjects with two papers each to be taken by the candidates in the main written examinations. The candidates had to choose the two options from a choice of 33 subjects mentioned in the advertisement. The Appellant submitted before the High Court that the scaling method, i.e. the Linear Standard Score Method, was applied by the Commission to the compulsory as well as the optional subjects. The pleadings and the submissions made on behalf of the Commission were also taken into account by the High Court for the purpose of holding that the adoption of the scaling method even for compulsory subjects was contrary to the judgment of this Court in Sanjay Singh’s case. On the basis of the above findings, the High Court allowed the Writ Petitions challenging the Backlog Examination, 2004 by a judgment dated 25.05.2007. The High Court allowed Writ Petition No.22659 of 2007 pertaining to the P.C.S. Examination, 2004 by following the judgment dated 25.05.2007 in Writ Petition No.18775 of 2007 by an order dated 15.06.2007. There is no dispute that the point involved in both the cases is similar.

7. It is relevant to state that the judgment of this Court in Sanjay Singh’s case was delivered on 09.01.2007. We are informed that the final result of the P.C.S. Examination, 2004 was also declared on 09.01.2007, whereas the result of the main examination in the Backlog Examination, 2004 was declared on 24.03.2007. While issuing notice, this Court by an order dated 20.08.2007 stayed the judgment of the High Court. The Appellant proceeded to make appointments on the basis of the interim order passed by this Court. The candidates who were appointed have been working for the past ten years.

8. The main contention of the learned Senior Counsel for the Appellant is that the High Court did not appreciate the ratio of the judgment in Sanjay Singh’s case in its proper perspective. According to him, it is true that this Court in Sanjay Singh’s case held that moderation is the appropriate method to be followed for examiner variability. However, it was also held that in case candidates have to take examination in different subjects, the scaling method can be followed. He submitted that the examinations in this case are different from the examinations in Sanjay Singh’s case, where all the candidates had to take the same papers. In the examinations with which we are concerned in this case, there were 33 different optional subjects out of which candidates had to choose two. He also submitted that the Appellant Commission was adopting the scaling method on the basis of expert advice taken by them in respect of all the examinations conducted by the Commission. He finally submitted that this case does not warrant interference as all the selected candidates have been working for the past ten years and are not parties before this Court.

9. The counsel appearing for the Respondents/Writ Petitioners submitted that, admittedly, the scaling method was followed by the Appellant – Public Service Commission due to examiner variability. They have pointed out the pleadings as well as the admissions made on behalf of the Public Service Commission before the High Court to support the said submission. The counsel further pointed out that the Appellant – Commission should not have adopted the scaling method for the compulsory subjects, i.e. English and Hindi. In any event, the learned counsel for the Respondents urged that they were adversely affected in view of the scaling method being followed, due to the reduction of marks actually scored by them in the examination. They could not be selected only due to the adoption of the scaling method in awarding marks.

10. Having considered the submissions made on behalf of the parties and after perusing the material on record, we are of the considered opinion that the Appellant committed an error in following the scaling method for both the examinations in issue. It would be relevant to refer extensively to the judgment in Sanjay Singh’s case, which dealt with the examinations conducted by the Appellant-Commission for recruitment to the posts of Civil Judge (Junior Division). The pattern of the examination for the said selection is similar to the exams in the instant case. In the said case, the Appellant relied upon the Proviso to Rule 50 of the U.P. Public Service Commission (Procedure and Conduct of Business) Rules, 1976 to contend that any formula or method or device to eliminate variation in marks can be adopted by the Commission. One of the points considered was whether the scaling method adopted by the Commission was arbitrary and irrational. There is a detailed discussion about the concepts of examiner variability and subject variability. The reasons given by this Court for the purpose of holding that moderation would bring considerable uniformity and consistency in case of examiner variability are as follows:

“23. When a large number of candidates appear for an examination, it is necessary to have uniformity and consistency in valuation of the answer-scripts. Where the number of candidates taking the examination are limited and only one examiner (preferably the paper-setter himself) evaluates the answer-scripts, it is to be assumed that there will be uniformity in the valuation. But where a large number of candidates take the examination, it will not be possible to get all the answer-scripts evaluated by the same examiner. It, therefore, becomes necessary to distribute the answer-scripts among several examiners for valuation with the paper-setter (or other senior person) acting as the Head Examiner. When more than one examiners evaluate the answer-scripts relating to a subject, the subjectivity of the respective examiner will creep into the marks awarded by him to the answer-scripts allotted to him for valuation. Each examiner will apply his own yardstick to assess the answer-scripts. Inevitably therefore, even when experienced examiners receive equal batches of answer-scripts, there is difference in average marks and the range of marks awarded, thereby affecting the merit of individual candidates. This apart, there is “hawk-dove” effect. Some examiners are liberal in valuation and tend to award more marks. Some examiners are strict and tend to give less marks. Some may be moderate and balanced in awarding marks. Even among those who are liberal or those who are strict, there may be variance in the degree of strictness or liberality. This means that if the same answer-script is given to different examiners, there is all likelihood of different marks being assigned. If a very well-written answer-script goes to a strict examiner and a mediocre answer-script goes to a liberal examiner, the mediocre answer-script may be awarded more marks than the excellent answer-script. In other words, there is “reduced valuation” by a strict examiner and “enhanced valuation” by a liberal examiner. This is known as “examiner variability” or “hawk-dove effect”. Therefore, there is a need to evolve a procedure to ensure uniformity inter se the examiners so that the effect of “examiner subjectivity” or “examiner variability” is minimised. The procedure adopted to reduce examiner subjectivity or variability is known as moderation. The classic method of moderation is as follows:

(i) The paper-setter of the subject normally acts as the Head Examiner for the subject. He is selected from amongst senior academicians/scholars/senior civil servants/judges. Where the case is of a large number of candidates, more than one examiner is appointed and each of them is allotted around 300 answer-scripts for valuation.

(ii) To achieve uniformity in valuation, where more than one examiner is involved, a meeting of the Head Examiner with all the examiners is held soon after the examination. They discuss thoroughly the question paper, the possible answers and the weightage to be given to various aspects of the answers. They also carry out a sample valuation in the light of their discussions. The sample valuation of scripts by each of them is reviewed by the Head Examiner and variations in assigning marks are further discussed. After such discussions, a consensus is arrived at in regard to the norms of valuation to be adopted. On that basis, the examiners are required to complete the valuation of answer-scripts. But this by itself, does not bring about uniformity of assessment inter se the examiners. In spite of the norms agreed, many examiners tend to deviate from the expected or agreed norms, as their caution is overtaken by their propensity for strictness or liberality or erraticism or carelessness during the course of valuation. Therefore, certain further corrective steps become necessary.

(iii) After the valuation is completed by the examiners, the Head Examiner conducts a random sample survey of the corrected answer-scripts to verify whether the norms evolved in the meetings of examiner have actually been followed by the examiners. The process of random sampling usually consists of scrutiny of some top level answer-scripts and some answer books selected at random from the batches of answer-scripts valued by each examiner. The top level answer books of each examiner are revalued by the Head Examiner who carries out such corrections or alterations in the award of marks as he, in his judgment, considers best, to achieve uniformity. (For this purpose, if necessary certain statistics like distribution of candidates in various marks ranges, the average percentage of marks, the highest and lowest award of marks, etc. may also be prepared in respect of the valuation of each examiner.)

(iv) After ascertaining or assessing the standards adopted by each examiner, the Head Examiner may confirm the award of marks without any change if the examiner has followed the agreed norms, or suggests upward or downward moderation, the quantum of moderation varying according to the degree of liberality or strictness in marking. In regard to the top level answer books revalued by the Head Examiner, his award of marks is accepted as final. As regards the other answer books below the top level, to achieve maximum measure of uniformity inter se the examiners, the awards are moderated as per the recommendations made by the Head Examiner.

(v) If in the opinion of the Head Examiner there has been erratic or careless marking by any examiner, for which it is not feasible to have any standard moderation, the answer-scripts valued by such examiner are revalued either by the Head Examiner or any other examiner who is found to have followed the agreed norms.

(vi) Where the number of candidates is very large and the examiners are numerous, it may be difficult for one Head Examiner to assess the work of all the examiners. In such a situation, one more level of examiners is introduced. For every ten or twenty examiners, there will be a Head Examiner who checks the random samples as above. The work of the Head Examiners, in turn, is checked by a Chief Examiner to ensure proper results.”

11. This Court also considered a situation where candidates have an option to take different subjects for which the scaling method is appropriate as follows:

“24. In the Judicial Service Examination, the candidates were required to take the examination in respect of all the five subjects and the candidates did not have any option in regard to the subjects. In such a situation, moderation appears to be an ideal solution. But there are examinations which have a competitive situation where candidates have the option of selecting one or few among a variety of heterogenous subjects and the number of students taking different options also vary and it becomes necessary to prepare a common merit list in respect of such candidates. Let us assume that some candidates take Mathematics as an optional subject and some take English as the optional subject. It is well recognised that marks of 70 out of 100 in Mathematics do not mean the same thing as 70 out of 100 in English. In English 70 out of 100 may indicate an outstanding student whereas in Mathematics, 70 out of 100 may merely indicate an average student. Some optional subjects may be very easy, when compared to others, resulting in wide disparity in the marks secured by equally capable students. In such a situation, candidates who have opted for the easier subjects may steal an advantage over those who opted for difficult subjects. There is another possibility. The paper-setters in regard to some optional subjects may set questions which are comparatively easier to answer when compared to some paper-setters in other subjects who set tougher questions which are difficult to answer. This may happen when for example, in Civil Service Examination, where Physics and Chemistry are optional papers, Examiner ‘A’ sets a paper in Physics appropriate to degree level and Examiner ‘B’ sets a paper in Chemistry appropriate for matriculate level. In view of these peculiarities, there is a need to bring the assessment or valuation to a common scale so that the inter se merit of candidates who have opted for different subjects, can be ascertained. The moderation procedure referred to in the earlier para will solve only the problem of examiner variability, where the examiners are many, but valuation of answer-scripts is in respect of a single subject. Moderation is no answer where the problem is to find inter se merit across several subjects, that is, where candidates take examination in different subjects. To solve the problem of inter se merit across different subjects, statistical experts have evolved a method known as scaling, that is creation of scaled score. Scaling places the scores from different tests or test forms on to a common scale. There are different methods of statistical scoring. Standard score method, linear standard score method, normalised equipercentile method are some of the recognised methods for scaling.

25. A. Edwin Harper Jr. and V. Vidya Sagar Misra in their publication Research on Examinations in India have tried to explain and define scaling. We may usefully borrow the same. A degree “Fahrenheit” is different from a degree “Centigrade”. Though both express temperature in degrees, the “degree” is different for the two scales. What is 40 degrees in Centigrade scale is 104 degrees in Fahrenheit scale. Similarly, when marks are assigned to answer-scripts in different papers, say by Examiner ‘A’ in Geometry and Examiner ‘B’ in History, the meaning or value of the “marks” is different. Scaling is the process which brings the marks awarded by Examiner ‘A’ in regard to Geometry scale and the marks awarded by Examiner ‘B’ in regard to History scale, to a common scale. Scaling is the exercise of putting the marks which are the results of different scales adopted in different subjects by different examiners onto a common scale so as to permit comparison of inter se merit. By this exercise, the raw marks awarded by the examiner in different subjects are converted to a “score” on a common scale by applying a statistical formula. The “raw marks” when converted to a common scale are known as the “scaled marks”. Scaling process, whereby raw marks in different subjects are adjusted to a common scale, is a recognised method of ensuring uniformity inter se among the candidates who have taken examinations in different subjects, as, for example, the Civil Services Examination.”

12. It is clear from the above that the process of scaling is a recognized method for ensuring uniformity amongst candidates who have taken examinations in different subjects. When there are a number of examiners evaluating the papers of a large number of candidates in an examination, there is a possibility of ‘examiner subjectivity’ or ‘examiner variability’. To minimise the examiner variability, this Court in Sanjay Singh’s case held that moderation would be the best method to be followed.
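[Editorial illustration, not part of the judgment: the following is a minimal sketch of how a linear standard score scaling of the kind referred to above is commonly understood to operate. Raw marks in each subject are expressed as deviations from that subject’s mean in units of its standard deviation, and then mapped onto a common target mean and spread. The subject names, raw marks, and the target mean of 50 and spread of 10 are assumptions for illustration only; they are not figures taken from the record.]

```python
from statistics import mean, pstdev

def linear_standard_score(raw_marks, target_mean=50.0, target_sd=10.0):
    """Illustrative linear standard score scaling (a sketch, not the
    Commission's actual formula).

    Each raw mark is converted to a z-score (deviation from the
    subject's mean in units of its standard deviation) and then
    re-expressed on a common scale with the chosen target mean and
    standard deviation, so marks from differently marked subjects
    become comparable.
    """
    mu = mean(raw_marks)
    sigma = pstdev(raw_marks)
    if sigma == 0:  # every candidate scored the same raw mark
        return [target_mean for _ in raw_marks]
    return [target_mean + target_sd * (x - mu) / sigma for x in raw_marks]

# Two hypothetical optional subjects marked on different standards:
geometry_raw = [62, 70, 55, 48, 66]   # stricter marking, lower raw marks
history_raw = [85, 90, 78, 95, 88]    # more liberal marking, higher raw marks

print(linear_standard_score(geometry_raw))
print(linear_standard_score(history_raw))
```

[On this sketch, a candidate who is one standard deviation above the average in a strictly marked subject and a candidate one standard deviation above the average in a liberally marked subject receive the same scaled mark, which is the sense in which scaling places raw marks from different subjects on a common scale.]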

13. In the P.C.S. Examination, 2004 and the Backlog Examination, 2004, the candidates had to take part in the main written examinations, which consisted of four compulsory subjects and two optional subjects. The compulsory subjects were common to all candidates and the two optional subjects were to be chosen from the available 33 subjects as mentioned in the advertisements. As per the judgment of this Court in Sanjay Singh’s case, the Commission could have followed the scaling method only for the optional subjects and not for the compulsory subjects. However, it is clear from the submissions made on behalf of the Appellant in the High Court that the scaling method was followed even for compulsory subjects. We approve the findings of the High Court that the evaluation of the P.C.S. and Backlog recruitment examinations, 2004 was contrary to the judgment of this Court in Sanjay Singh’s case.

14. Though we are in agreement with the view of the High Court that the examinations were not conducted in accordance with the principles laid down in Sanjay Singh’s case, we do not approve the directions given in the judgment to finalise the results afresh in accordance with the observations made therein. The exercise to be undertaken as per the said directions would result in displacement of a number of selected candidates not before this Court and alteration of the merit list, causing serious prejudice to those appointed and working for the last ten years. Therefore, we are of the opinion that the appointments made pursuant to the advertisements of 2004 for the ‘P.C.S.’ and ‘Backlog’ posts should not be disturbed.

15. It is settled law that in certain situations, on account of subsequent events, the final relief granted by this Court may not be the natural consequence of the ratio decidendi of its judgment. In such situations, the relief can be moulded by the Court in order to do complete justice in the matter. It is relevant to note the fact that Sanjay Singh’s case was also made prospective in operation and this Court declined to interfere with the selections already made in that case on the basis that relief can be moulded. In the instant case, the examinations were conducted by the Appellant on the basis of the pattern being followed by them since 1996. At the time when the examinations were conducted, a judgment of this Court in U.P. Public Service Commission v. Subhash Chandra Dixit (2003) 12 SCC 701, approving the scaling method adopted by the Commission, held the field. Moreover, the selected candidates were appointed on the basis of an interim order passed by this Court in 2007 and they have been working continuously since then. There are no allegations of any irregularities or malpractices in the conduct of the said examinations. The candidates who participated in the examinations cannot be found fault with for the error committed by the Appellant in adopting the scaling method. In view of the above, we do not deem it fit to disturb the appointments made pursuant to the selections in the examinations conducted in 2004.

16. Though we uphold the judgment of the High Court in declaring the adoption of the scaling method by the Appellant in the examinations as arbitrary, we set aside the directions given by the High Court to the Appellant to declare the results of the said examinations afresh. The appeals are disposed of accordingly.

CIVIL APPEAL NO.2327 OF 2011

17. This appeal, filed by a successful candidate in the P.C.S. Examination, 2004, is disposed of in terms of our judgment in Civil Appeal No. 2326 of 2011.

.....................................J.
[S.A. BOBDE]

.....................................J.
[L. NAGESWARA RAO]

New Delhi, February 16, 2018
