This article originally appeared in The Bar Examiner print edition, Summer/Fall 2021 (Vol. 90, Nos. 2–3), pp. 45–48.

A column devoted to exploring topics and providing information especially for those new to bar admissions.

What You Might Like to Know About:

Terms Often Used in Reference to the Bar Examination

Those who are new to the bar admissions community are likely to encounter unfamiliar terms in the course of their work, both terms specific to the bar exam and terms pertaining to broader testing concepts. To assist them, we have assembled here a glossary of terms that are often used at NCBE educational events and in Bar Examiner articles.

We have suggested several articles to read for a few of these terms, and at the end of this glossary, we list additional articles that address the broader topics of bar exam testing basics, test development, and grading. To access even more articles, visit the Bar Examiner website, thebarexaminer.org, to search the article archives by keyword or to browse articles by topic.

Assessment: A term used to describe a standardized examination. For example, the Multistate Bar Examination (MBE) is an assessment. This word is also used in the law school context to describe an exam or another means of evaluating student performance.

CBT (computer-based testing): Administering a test via computer, typically in commercial testing centers run by professional proctors.

Courtesy seating: Allowing an applicant to sit for the Uniform Bar Examination (UBE) in a jurisdiction for geographical convenience even though the applicant does not intend to seek admission in that jurisdiction, as long as the jurisdiction is satisfied that the applicant is a bona fide candidate for admission in another UBE jurisdiction.

DIF (differential item functioning): Empirical methods used specifically to detect measurement bias at the level of the test item. (See Measurement bias.)
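By way of illustration, one widely used DIF method is the Mantel-Haenszel procedure, sketched below (the function name and data layout are hypothetical, and the source does not specify which DIF methods NCBE employs). Examinees are stratified by total score so that item performance is compared between members of two groups who have similar overall proficiency.

```python
# A sketch of the Mantel-Haenszel DIF procedure, one widely used method
# (shown for illustration; the data layout is hypothetical).
from collections import defaultdict

def mantel_haenszel_odds_ratio(records):
    """records: iterable of (total_score, group, correct) tuples, where
    group is "reference" or "focal" and correct is 1 or 0."""
    # Per stratum: [ref right, ref wrong, focal right, focal wrong]
    strata = defaultdict(lambda: [0, 0, 0, 0])
    for total, group, correct in records:
        offset = 0 if group == "reference" else 2
        strata[total][offset + (0 if correct else 1)] += 1
    num = den = 0.0
    for a, b, c, d in strata.values():
        n = a + b + c + d
        num += a * d / n
        den += b * c / n
    return num / den if den else float("nan")

# An odds ratio near 1.0 suggests the item functions similarly for both
# groups; values far from 1.0 flag the item for possible measurement bias.
```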

Equating: A statistical procedure that makes adjustments to examinees’ scores to compensate for variations in exam difficulty. Equating helps ensure that current examinees’ scores accurately reflect only their proficiency rather than any potential differences in the difficulty of the questions they answered, so that no examinee is unfairly penalized or rewarded by taking a test form that is more difficult or easier than another administration’s form. While there are many equating methods, the approach used for the MBE is to include a set of previously administered questions—known as equators—on each exam. NCBE then compares the performance of current examinees with the performance of previous examinees on the same set of questions. (See Equators.)

Recommended Reading:

Mark A. Albanese, PhD, “The Testing Column: Equating the MBE,” 84(3) The Bar Examiner 29–36 (September 2015).

Equators: A set of previously administered questions embedded in an exam to serve as a “mini-test,” with the items chosen to represent as closely as possible the content and statistical characteristics of the overall exam. The statistical characteristics of the equators from previous administrations are used to adjust scores for differences in difficulty between the current test form and previous forms after accounting for differences between current and previous examinee performance on the equator items.
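As a rough illustration of the logic described under Equating and Equators, the sketch below works in proportion-correct units and attributes to form difficulty whatever score difference remains after group proficiency differences, estimated from the shared equator items, are removed. This is a deliberate simplification with a hypothetical function name; operational equating uses more refined statistical models.

```python
# A deliberately simplified common-item mean-equating sketch
# (illustrative only; operational MBE equating is more sophisticated).
from statistics import mean

def equating_adjustment(cur_total, cur_equator, ref_total, ref_equator):
    """Each argument is a list of proportion-correct scores (0.0 to 1.0).
    Returns an additive adjustment to apply to current examinees' scores."""
    # The equator items are identical across forms, so any difference here
    # reflects group proficiency, not form difficulty.
    group_effect = mean(cur_equator) - mean(ref_equator)
    # The total-score difference mixes group proficiency with form difficulty.
    total_effect = mean(cur_total) - mean(ref_total)
    # What remains after removing the group effect is attributed to the
    # current form being easier (positive) or harder (negative).
    form_effect = total_effect - group_effect
    return -form_effect  # remove the inflation from an easier form, or vice versa
```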

Item analysis: The process of statistically analyzing each test question (also referred to as a test item) to ensure that it is performing in a way consistent with what is expected of a high-quality test question.
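Two statistics that commonly appear in an item analysis are an item’s difficulty (the proportion of examinees answering it correctly) and its discrimination (the correlation between performance on the item and performance on the rest of the test). The sketch below computes both; the function name and data layout are illustrative, not NCBE’s actual tooling.

```python
# Classic item-analysis statistics: difficulty (proportion correct) and
# discrimination (corrected item-total point-biserial correlation).
# Requires Python 3.10+ for statistics.correlation.
from statistics import mean, correlation

def item_statistics(responses):
    """responses: list of per-examinee lists of 0/1 item scores."""
    totals = [sum(r) for r in responses]
    results = []
    for i in range(len(responses[0])):
        item = [r[i] for r in responses]
        rest = [t - x for t, x in zip(totals, item)]  # total excluding this item
        results.append({
            "item": i + 1,
            "difficulty": mean(item),                   # proportion correct
            "discrimination": correlation(item, rest),  # higher is better
        })
    return results
```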

Jurisdiction-specific component: A separate test, course, or some combination of the two that a UBE jurisdiction may offer in person or online to candidates to assess their knowledge of jurisdiction-specific law.

KSAs: Knowledge, skills, and abilities. Establishing a connection between measurable KSAs required for entry-level legal practice and the KSAs assessed on the bar exam is important in establishing test validity. (See Practice analysis, Validity.)

Measurement bias: When scores on a test or a test item are systematically lower or higher for some definable group of examinees due to factors unrelated to the attributes that the test or test item is intended to measure.

Online testing: Delivering an exam via a secure, web browser–based application. Online testing requires that examinees have internet connectivity during the entirety of testing.

PII (personally identifiable information): Any data that can be used to identify a specific individual, such as a Social Security number, mailing address, email address, or phone number.

Passing score: Also referred to as a cut score. A value on the score scale that divides different groups of test takers and results in different consequences for those groups. For the bar exam, the passing score is the point on the score scale that separates the people who pass the exam from the people who fail the exam on a given attempt. Jurisdictions set their own passing scores for the bar exam. (See Standard setting.)

Practice analysis: A study that involves determining the knowledge, skills, and abilities necessary for entry-level practice in a profession.

Preliminary item analysis: An item analysis run on a sample of responses early in the scoring of a multiple-choice exam, conducted to help ensure that the scoring key for each item on the test is correct. (See Item analysis.)

Psychometrician: A professional, usually holding a PhD or other advanced degree in a quantitatively oriented field (e.g., educational measurement, industrial/organizational psychology, or statistics), whose work focuses on the construction of assessments, how they are scored, and the inferences that can be made regarding a person’s knowledge, skills, or other attributes. Work by NCBE’s psychometricians helps ensure the validity and reliability of test results and informs the organization’s decision-making groups and analyses.

Raw score: The sum of the number of scored questions answered correctly on a multiple-choice test, or the score assigned by a grader or graders to a written component of a test. (See Scaled score.)

Reliability: The degree to which scores for a group of examinees would be consistent over multiple (theoretical) testing sessions. The reliability of the MBE and the MPRE is calculated following each administration of these examinations. Reliability is expressed on a scale from 0.0 to 1.0, with 1.0 indicating that examinees would receive exactly the same scores were they to take the exam again. A 0.0 reliability would indicate that scores from one administration would have no bearing on the scores examinees would receive on a second administration. The reliability of the MBE has been 0.93–0.94 in recent administrations.
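One common way to estimate reliability from a single administration is coefficient alpha, an internal-consistency index; the sketch below is illustrative only, as the source does not say which coefficient is reported for the MBE and MPRE.

```python
# Coefficient alpha, a common internal-consistency reliability estimate
# (illustrative; not necessarily the coefficient used for the MBE or MPRE).
from statistics import pvariance

def coefficient_alpha(responses):
    """responses: list of per-examinee lists of 0/1 item scores."""
    k = len(responses[0])                                    # number of items
    item_vars = sum(pvariance(col) for col in zip(*responses))
    total_var = pvariance([sum(r) for r in responses])
    return (k / (k - 1)) * (1 - item_vars / total_var)
```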

Remote proctoring: Monitoring examinees using webcams, microphones, and/or AI (artificial intelligence, used for facial recognition, to monitor keystrokes, etc.). Remote proctoring is conducted on either a live basis or a “record and review” basis. Live remote proctoring requires internet connectivity and means that a proctor watches the examinee through a video connection during testing. With record and review remote proctoring, the examinee is recorded during testing, and the video may be reviewed after testing if any anomalies are detected or flagged, either by a live proctor (when live proctoring is used) or by AI (when it is not).

Remote testing: Delivering an exam using software that is downloaded and installed by examinees before testing. Remote testing does not require internet connectivity during testing, although internet connectivity is necessary at the beginning (to check examinee IDs) and at the end (for examinees to upload answers) of each remote testing session.

Scale: A set of numbers useful in making comparisons across multiple individuals, examples, or instances. For example, the Fahrenheit scale sets the freezing temperature of water at 32 degrees and the boiling temperature at 212 degrees, with numbers between and beyond based upon the linear relationship between those two points. The Celsius scale sets the freezing temperature of water at 0 degrees and the boiling temperature at 100 degrees, and, as with Fahrenheit, the numbers between and beyond are based upon that linear relationship. Weights expressed in pounds or kilograms, or heights expressed in inches or centimeters, are additional familiar examples. Performance on standardized tests is similarly assessed and expressed via a score scale. Since 1972, MBE scores have ranged from 0 to 200 on a scale originally based on the raw scores (number of items answered correctly) from the July 1972 administration. (See Raw score.)
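The linear relationship described above can be written as a simple two-point mapping; the Celsius-to-Fahrenheit conversion makes the idea concrete (the helper function below is ours, for illustration).

```python
# Any scale fixed by two reference points maps onto another such scale
# with a straight line (a hypothetical helper, for illustration).
def linear_rescale(x, src_lo, src_hi, dst_lo, dst_hi):
    """Map x from one scale to another using two shared anchor points."""
    return dst_lo + (x - src_lo) * (dst_hi - dst_lo) / (src_hi - src_lo)

# Celsius to Fahrenheit via the freezing and boiling points of water:
print(linear_rescale(0, 0, 100, 32, 212))    # 32.0
print(linear_rescale(37, 0, 100, 32, 212))   # 98.6
print(linear_rescale(100, 0, 100, 32, 212))  # 212.0
```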

Scaled score: Scaled scores are obtained by transforming raw scores to the scale of an exam. Scaling facilitates consistent meaning and interpretation of scores across time. (See Scale, Scaling.)

Scaling: Mathematically transforming scores from one set of numbers (for example, raw scores or number correct) to another set of numbers in order to best represent the performance of examinees. To ensure that written component scores on the bar exam accurately reflect examinee proficiency, they are scaled to the MBE. Scaling the written component involves linearly transforming raw essay scores onto the MBE scale, which preserves the rank-ordering of examinees’ written scores. Because MBE scores are equated and thus have consistent meaning over time, the MBE can serve as an anchor for statistically adjusting the written scores to account for fluctuations in written exam difficulty or grader stringency. When written scores are on the same scale as MBE scores, jurisdictions can easily combine them into a single total bar exam score. (See Written component of the bar exam.)

Recommended Reading:

Mark A. Albanese, PhD, “The Testing Column: Scaling: It’s Not Just for Fish or Mountains,” 83(4) The Bar Examiner 50–56 (December 2014).

Susan M. Case, PhD, “The Testing Column: Frequently Asked Questions About Scaling Written Test Scores to the MBE,” 75(4) The Bar Examiner 42–44 (November 2006).

Susan M. Case, PhD, “The Testing Column: Demystifying Scaling to the MBE: How’d You Do That?” 74(2) The Bar Examiner 45–46 (May 2005).
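A minimal sketch of the linear transformation described above, assuming the common approach of matching the written scores’ mean and standard deviation to those of the same examinees’ MBE scores (the exact operational procedure may differ in detail):

```python
# Linear scaling of raw written scores to the MBE scale by matching the
# mean and standard deviation of the same examinees' scaled MBE scores
# (a simplified sketch, not NCBE's exact operational procedure).
from statistics import mean, stdev

def scale_written_to_mbe(raw_written, mbe_scaled):
    """Both lists are ordered by examinee; returns scaled written scores."""
    w_mean, w_sd = mean(raw_written), stdev(raw_written)
    m_mean, m_sd = mean(mbe_scaled), stdev(mbe_scaled)
    # The linear map preserves each examinee's rank on the written component.
    return [m_mean + m_sd * (w - w_mean) / w_sd for w in raw_written]
```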

Score: A number obtained from an individual’s exam performance. (See Raw score, Scaled score.)

Standard setting: The process of establishing a passing score on an examination; this process is intended to determine how much knowledge and skill is required for minimally acceptable entry-level practice. On the bar exam, each jurisdiction sets its own passing score, which represents the minimum level of competence expected by that jurisdiction for entry-level law practice. (See Passing score.)

Recommended Reading:

Michael T. Kane, PhD, and Joanne Kane, PhD, “Standard Setting 101: Background and Basics for the Bar Admissions Community,” 87(3) The Bar Examiner 9–17 (Fall 2018).

Test form: A particular set of test items, or questions, administered at a given time. For example, the July 2020 MBE is a unique test form.

Test item: Test question.

Validity: The extent to which a test measures what it claims to measure, and the degree to which evidence and theory support the interpretations of test scores for particular uses. The specific claim of the bar exam is “to protect the public by helping to ensure that those who are newly licensed possess the minimum knowledge and skills to perform the activities typically required of an entry-level lawyer.” If the bar examination has strong validity evidence, that means it produces results that correspond to knowledge and skills that lawyers need for entry-level practice. Although practice analysis serves as the primary source of validity evidence for licensure examinations, NCBE also relies on various other sources of research-based evidence to support the interpretation of test scores.

Written component of the bar exam: The portion of the bar exam that requires examinees to provide written responses. Most jurisdictions use the Multistate Essay Examination (MEE) and the Multistate Performance Test (MPT) as their written component, but some jurisdictions either substitute or augment these two tests with their own questions requiring written responses. Answers to the written component of the bar exam are graded in the jurisdiction where the exam was administered.

LEARN MORE ABOUT …

Search thebarexaminer.org for articles on these topics and more:

  • Bar Exam Testing Basics
  • Bar Exam Test Development
  • Grading

