Multiple choice exams are:

  • A.) Internationally acceptable
  • B.) Not subject to one examiner’s opinion
  • C.) Viable in electronic formats
  • D.) In a format which enables a whole syllabus to be covered in a reasonable time frame
  • E.) Quick to mark, enabling candidates to get their results speedily
  • F.) All of the above, but I still don’t like them.

Exam formatting is hotly debated by candidates and academics alike. Candidates often say they dislike multiple choice, complaining that the format doesn’t allow them to display their knowledge. In addition, bodies running multiple-choice exams, like us, are often criticized when formats change from essay-based exams to multiple choice; people say such a change only benefits the exam board because the papers are quicker and cheaper to mark.

There is a modicum of truth in that: multiple-choice exams ARE quicker to mark, because they can be marked electronically. The results are, however, still cross-checked by examiners and administration staff. It’s also true that candidates can get their results much more quickly with multiple-choice papers. For some, it will mean an almost instant result if their trainer is using up-to-date technology. This is simply not possible with essay-style exams.
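To illustrate why electronic marking is so fast, here is a minimal sketch of how a multiple-choice paper can be scored against an answer key. Everything in it (the key, the responses, the score function) is invented for illustration; it is not our actual marking system, which layers human cross-checks on top of the automated step.

    # Illustrative only: score a candidate's responses against an answer key.
    answer_key = {1: "C", 2: "A", 3: "D", 4: "B"}   # hypothetical key
    candidate = {1: "C", 2: "A", 3: "B", 4: "B"}    # hypothetical responses

    def score(responses, key):
        """Count responses that match the key; unanswered questions score zero."""
        return sum(1 for q, correct in key.items() if responses.get(q) == correct)

    marks = score(candidate, answer_key)
    print(f"{marks}/{len(answer_key)} correct")     # prints "3/4 correct"

Because the scoring step is deterministic, examiners and administrators only need to verify the key and the process, rather than re-judge every answer, which is where the speed comes from.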

There are also other factors to consider.

Testing the whole syllabus

In order for candidates to demonstrate well-rounded knowledge of a subject, we want to test them on as much of the syllabus as we can. The testing methodology is therefore designed to cover as many areas as can feasibly be achieved within a set number of questions. Multiple-choice formats enable us to cover most of a syllabus within a reasonable time limit: long enough to require some analysis and deduction, but short enough to fit into practical delivery models (which are themselves driven by the consumer).

Questions based on scenarios test the candidates’ understanding of the subject. They require the candidate to spend more time on reading, understanding and analyzing the situation, and less time on writing. Candidates taking Practitioner exams have already passed the Foundation level, so they are familiar with the terminology of their subject. Foundation exams call on recall; higher-level exams require deductive analysis.

Essays – a diminishing testing style

The essay-style question is in decline across most of academia, for a number of reasons:

  • Subjectivity: In the past, examiners marking essays were given “marking guidelines” to help them assess candidates’ responses. These were guidelines only, and the mark awarded depended heavily on how well the candidate could express themselves, the format they used to answer the question, how many of the guideline points they mentioned, how well the examiner understood the essay’s argument, and so on. Much of the interpretation of intent and meaning was left to the examiner. We tried to minimize the impact of subjectivity by pairing markers and requiring them to agree on a final mark, but this was not a perfect science; it is an issue eradicated by objective tests.
  • Language barriers: Interest in APMG qualifications in international and multilingual markets adds complexity to essay-style examination. In the past, exams were translated, taken in the local language, then translated back into English to be marked by English examiners (due to a lack of qualified native-language examiners). This process introduced further subjectivity and margin for error, since translation is not exact. As native-language examiners became available the problem diminished, but it never disappeared entirely: market demand, training, independent invigilation and administration all create barriers to managing a foreign-language market. This has improved over time, but remains an ongoing issue to manage.
  • Marking: A lack of reliable security meant that systems were not robust enough to offer essay-style exams electronically, so much of the process was done on paper: scripts were copied, distributed to examiner marking teams, and so on. This lengthened the marking turnaround time and increased the security and privacy risks, which the market became increasingly unhappy about. The heavy overhead of managing a growing market meant this was a delivery model whose days were numbered.

Our training community prefers multiple choice

Our Accredited Training Community agreed it was better to prepare candidates for multiple-choice exams, because the format has precise usage and outcomes and leaves less room for subjective interpretation. This was supported by independent professional research within the academic testing community, which confirmed it as a viable, well-used and proven model for testing knowledge.

The multiple-choice format is used all over the world by respected educational establishments, and its validity as a testing method is proven. Based on our research and empirical evidence, it gives us the opportunity to reach more of the global community with a fair, impartial, cost-effective and accessible means of certification.

Multiple choice is not a lesser means of testing knowledge; it is in no way inferior to the essay-based model. Research, science, common sense and experience prove the opposite. Is it perfect? No.

No testing method will ever be foolproof, perfect or immune from criticism. But it does work, and it has made it possible to provide multilingual, thorough and affordable examinations to hundreds of thousands of people whose careers and working lives have been enriched as a result.

The ability to provide end-to-end electronic delivery of course study and examinations has opened up markets and reached students who would otherwise never have been able to afford or access professional education. This is not commercial exploitation; it is about extending accessibility and supporting global market demand.