This article originally appeared in The Bar Examiner print edition, Fall 2025 (Vol. 94, No. 3), pp. 16–19.

By Bob Schwartz, JD, PhD

This is the second piece in a three-part series on the work the NCBE Psychometrics Department is doing based on data collected during NCBE’s October 2024 NextGen Prototype Examination.1 This Bar Examiner series covers the setting of the base scale of NextGen UBE,2 the standard-setting workshop, and the concordance study that links scores on legacy UBE and NextGen UBE.

In May 2025, the National Conference of Bar Examiners, in partnership with the Human Resources Research Organization (HumRRO), conducted a multiday standard-setting workshop to establish a recommended passing score range for the NextGen Uniform Bar Exam. The workshop brought together more than 80 legal professionals from across the country to apply their expertise to one of the most critical steps in implementing a new licensure exam: defining minimum competence.

This article provides an overview of the workshop and its results, with a day-by-day account of how standard setting was conducted, who participated, what criteria were used, and how the resulting recommendation will support jurisdictions in setting policy.

Purpose of Standard Setting

Standard setting provides a formal process for determining the level of performance that separates examinees who are minimally competent from those who are not. It is an essential component of any high-stakes licensing exam. The process must be fair, defensible, and appropriately reflective of the expectations of the profession.3

In the context of the NextGen UBE, the goal was to recommend a range of passing scores that, alongside other relevant evidence, will inform jurisdictions as they consider adopting or prepare to administer the new exam. The range reflects the variability inherent in expert judgment and accommodates the reality that no single score perfectly defines minimum competence. While each jurisdiction retains the authority to set its own passing score, the standard-setting results offer a nationally informed, evidence-based foundation that supports consistency and transparency in licensure decisions.

Composition of the Panel

NCBE invited jurisdictions that currently administer the legacy UBE or the Multistate Bar Examination (MBE) to nominate panelists. Ultimately, 84 panelists representing 40 jurisdictions participated. The group included practicing attorneys, judges, law school faculty, and bar admission professionals. More than 67% had been licensed for 16 years or more, and more than 80% had current or recent experience working with newly licensed lawyers.

This panel was selected to ensure diverse legal experience and broad familiarity with new lawyers’ expected capabilities. Panelists were organized into four rooms, with each reviewing a subset of exam content. Each room provided judgments on all item types; some content overlapped between rooms to ensure consistency across the panel.

Workshop Materials

Workshop materials came from the October 2024 Prototype Examination of the NextGen UBE. This full-length prototype was administered to over 2,000 candidates who had taken the MBE or legacy UBE in July 2024. It included 100 selected-response multiple-choice questions, three integrated question sets, and three performance tasks, mirroring the content structure of the operational NextGen UBE. These real candidate responses gave panelists authentic examples of examinee performance and provided meaningful data for evaluating the difficulty and functioning of exam items.

The Standard-Setting Workshop

Day 1: Defining the Minimally Qualified Candidate and Applying the Modified Angoff Method

At the heart of standard setting is the definition of the minimally qualified candidate (MQC), a term of art that describes the threshold of competence necessary for licensure. On the workshop’s first day, panelists participated in guided discussions to develop a shared understanding of the MQC, drawing on examples of what a new lawyer should and should not be expected to do without supervision.

The MQC was described as someone who demonstrates foundational knowledge of core legal concepts, can identify significant issues, can perform basic legal research, and can communicate in writing at a functional level. However, this candidate might still require guidance in complex analysis, nuanced communication, and professional judgment.

Panelists were also trained in the modified Angoff method,4 which was applied to the multiple-choice questions. For each item, panelists reviewed the legal content and estimated, as a percentage, how many MQCs would answer the question correctly, based on their expectations of overall MQC performance. These estimates were then aggregated across panelists and questions to produce a recommended passing score for the multiple-choice portion.
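
As a concrete illustration of that aggregation, the sketch below works through the standard Angoff arithmetic on invented numbers: each panelist's percentage estimates are averaged item by item, and the item averages are then summed to give the expected raw score of a just-passing candidate. The ratings are hypothetical, and details such as rounding and outlier screening are assumptions for illustration, not a description of NCBE's exact procedure.

```python
# Illustrative sketch of the modified Angoff aggregation. All ratings are
# hypothetical; they are not actual workshop data.

# ratings[p][i] = panelist p's estimate (as a proportion) of how many MQCs
# would answer item i correctly.
ratings = [
    [0.70, 0.55, 0.80],  # panelist 1
    [0.65, 0.60, 0.75],  # panelist 2
    [0.75, 0.50, 0.85],  # panelist 3
]

n_items = len(ratings[0])

# Average the estimates across panelists for each item...
item_means = [
    sum(panelist[i] for panelist in ratings) / len(ratings)
    for i in range(n_items)
]

# ...then sum the item averages: the result is the expected raw score of a
# just-passing (minimally qualified) candidate on this set of items.
cut_score = sum(item_means)

print(f"Per-item Angoff means: {[round(m, 3) for m in item_means]}")
print(f"Recommended cut score: {cut_score:.2f} out of {n_items} items")
```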

To support this work, panelists reviewed both multiple-choice formats: select-one items, which present four options with one correct answer, and select-two items, which present six options of which two are correct. These questions can appear either on their own or as part of integrated question sets.

Panelists completed an initial round of Angoff ratings, which was followed by group discussion and a second round of ratings.

Day 2: Paper Selection Method for Constructed Responses and Performance Tasks

On the second day, panelists were trained on the paper selection method5 and reviewed constructed-response items and performance tasks.

Panelists reviewed real examinee responses and identified those that represented just-passing performance, which informed the standard for written components. For each question, panelists selected two responses they believed exemplified the performance of an MQC. They discussed the selected responses’ strengths and weaknesses and revised their choices in a second round if desired. These selections were used to determine the score thresholds that aligned with minimum competence. For example, if the two papers each panelist selected as just-passing had previously been scored by trained graders as a 3 and a 4 (a fact unknown to the panelists), then the threshold for minimally competent performance on that item would be set between those two scores.
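
One way to picture that threshold arithmetic is the short sketch below. The per-panelist step (splitting the difference between the two selected papers' grader-assigned scores) follows the example in the text; averaging those midpoints across the panel is an assumption added for illustration, and every score shown is invented.

```python
# Hypothetical illustration of the paper selection threshold logic. The
# cross-panel averaging step is an assumption made for this sketch.

# Grader-assigned scores (unknown to the panelists) of the two responses
# each panelist selected as exemplifying just-passing work on one item.
selections = [
    (3, 4),  # panelist 1
    (3, 3),  # panelist 2
    (4, 4),  # panelist 3
]

# Each panelist's implied threshold sits midway between their two papers'
# scores; averaging the midpoints yields an item-level threshold.
midpoints = [(low + high) / 2 for low, high in selections]
threshold = sum(midpoints) / len(midpoints)

print(f"Per-panelist midpoints: {midpoints}")              # [3.5, 3.0, 4.0]
print(f"Item threshold for minimum competence: {threshold:.2f}")  # 3.50
```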

In this phase, panelists worked with two additional item formats. The first was integrated question sets, which included a mix of multiple-choice and short-answer constructed-response questions tied to a fact pattern. The second was performance tasks, which simulated real-world legal tasks such as drafting a memo or advising a client. These formats tested a range of legal skills and required panelists to make judgments about more complex and open-ended performances, necessitating different standard-setting methods than those used for multiple-choice questions.

Panelists evaluated these item types using real responses drawn from the October 2024 prototype, applying the paper selection method over two rounds of review and discussion.

Day 3: Review and Application of the Hofstee Method

To begin the final day of the workshop, panelists reviewed the results of the previous two days, including their judgments about multiple-choice and written item types. Following this, they participated in an additional activity known as the Hofstee method,6 which served as a policy-based reasonableness check.

It has been observed in standard-setting exercises in other contexts, such as K–12 education, that panelists may unintentionally recommend standards that are either too stringent or too lenient. To help ensure the recommended standards for the NextGen UBE were balanced and appropriate, panelists were asked to offer their views on reasonable limits for both passing scores and failure rates. These judgments were not used to impose a particular outcome but rather to provide guardrails that reflect broader professional and policy considerations.

Although the NextGen UBE is a criterion-referenced exam, meaning there is no predetermined pass rate, the Hofstee method allows standard setters to reflect on the practical impact of their judgments and to align recommended scores with both expectations for minimum competence and considerations of fairness.
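
In its usual formulation, the Hofstee method asks panelists for four judgments: the lowest and highest acceptable cut scores, and the lowest and highest acceptable failure rates. The compromise cut score falls where the line from (lowest cut, highest failure rate) to (highest cut, lowest failure rate) crosses the observed cumulative failure-rate curve. The sketch below shows one way to compute that point; every number in it is hypothetical and unrelated to the actual workshop.

```python
import bisect

# Minimal sketch of a Hofstee-style computation; the score scale and all
# judgments below are invented for illustration.

# Panel-averaged judgments: acceptable cut-score and failure-rate ranges.
c_min, c_max = 55.0, 70.0
f_min, f_max = 0.10, 0.40

# Observed (hypothetical) examinee scores, sorted for cumulative lookup.
scores = sorted([48, 52, 58, 61, 63, 66, 68, 72, 75, 81])

def fail_rate(cut):
    """Proportion of examinees scoring below the cut score."""
    return bisect.bisect_left(scores, cut) / len(scores)

def hofstee_line(c):
    """Failure rate the panel would tolerate at cut score c: the line
    from (c_min, f_max) down to (c_max, f_min)."""
    return f_max - (f_max - f_min) * (c - c_min) / (c_max - c_min)

# The compromise cut score is where the observed failure-rate curve
# crosses the Hofstee line; here we scan the range in steps of 0.1.
candidates = [c_min + 0.1 * k for k in range(int((c_max - c_min) * 10) + 1)]
compromise = min(candidates, key=lambda c: abs(fail_rate(c) - hofstee_line(c)))

print(f"Compromise cut score: {compromise:.1f} "
      f"(failure rate {fail_rate(compromise):.0%})")
```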

The results from this method complemented the other standard-setting approaches, contributing to a well-rounded and policy-informed recommendation for a passing standard.

Panelist Feedback

Throughout the workshop, panelists also completed quick pulse surveys designed to assess the clarity of the training, their confidence in making the required judgments, and the appropriateness of the standard-setting methods. Feedback indicated that a substantial majority of panelists felt confident in their understanding of the MQC and of the process overall, and affirmed the defensibility of the recommended standard.

Support for the Recommended Standard

The results of the three methods converged closely. Ratings across all rooms were consistent, and statistical analyses showed minimal influence from outliers. Panelists demonstrated a high degree of consistency in their ratings across question types and workshop sessions.

The Hofstee results supported the standard suggested by the Angoff and paper selection methods. The recommended standard also aligned with the panel’s expressed expectations for pass/fail rates and acceptable levels of performance.

Taken together, these multiple lines of evidence support a well-substantiated and defensible recommendation, firmly grounded in expert legal judgment and a rigorously applied standard-setting process.

Post-Workshop Steps

NCBE’s Passing Score Advisory Panel and Board of Trustees reviewed the recommended passing score range from the workshop. Together, they considered the standard-setting results alongside findings from a concordance study linking NextGen UBE and legacy UBE scores, as well as the potential effect of the recommended standard on examinees and jurisdictions.

Each jurisdiction will make its own decision about adopting a passing score for NextGen UBE. The standard-setting report and supporting materials are intended to inform those decisions and support consistency across jurisdictions.

Conclusion

The standard-setting process for the NextGen UBE was designed to be rigorous, transparent, and aligned with best practices in licensure testing. It drew on the expertise of a diverse panel of legal professionals and used established methods suited to the exam’s structure. The result is a recommended passing standard that reflects a consensus of professional judgment about minimum competence and provides a strong foundation for jurisdictions transitioning to NextGen UBE.

Notes

  1. NCBE, “NextGen Prototype Exam: Help Shape the Future of the Legal Profession” (September 13, 2024).
  2. Juan Chen, PhD; Xuan Wang, PhD; and Joanne Kane, PhD, “The Testing Column: Setting the Base Scale for the NextGen UBE,” 94(2) The Bar Examiner 39–41 (Summer 2025).
  3. Gregory J. Cizek and Michael B. Bunch, Standard Setting: A Guide to Establishing and Evaluating Performance Standards on Tests (Sage, 2007).
  4. William H. Angoff, “Scales, Norms, and Equivalent Scores,” in Educational Measurement, 2nd ed., 508–600 (Robert L. Thorndike ed., American Council on Education, 1971).
  5. Susan Cooper Loomis and Mary Lyn Bourque, “From Tradition to Innovation: Standard Setting on the National Assessment of Educational Progress,” in Setting Performance Standards: Concepts, Methods, and Perspectives, 175–217 (Gregory J. Cizek ed., Lawrence Erlbaum, 2001).
  6. Willem K.B. Hofstee, “The Case for Compromise in Educational Selection and Grading,” in On Educational Testing, 109–127 (Scarvia B. Anderson and John S. Helmick eds., Jossey-Bass, 1983).

Bob Schwartz, JD, PhD, is Managing Director of Psychometrics for the National Conference of Bar Examiners.
