This article originally appeared in The Bar Examiner print edition, Summer 2025 (Vol. 94, No. 2), pp. 1–3.
AI and the Bar Exam: A Responsible Path Forward
By Darin B. Scheer
Greetings, friends and colleagues in the bar admissions community. The closing plenary session at NCBE’s 2025 Annual Bar Admissions Conference, presented by Dean Andrew Perlman, was entitled “The AI Revolution in Law: Navigating Education, Practice, and Ethics.” With that discussion in mind, I wanted to share my thoughts about how NCBE is approaching the issue of artificial intelligence within the bar exam ecosystem.
AI is reshaping industries and institutions worldwide, and the legal profession is no exception. NCBE is evaluating how other high-stakes testing organizations have integrated AI into their content-generation and scoring processes as we explore how AI might responsibly support our mission. We believe AI has real potential to strengthen the bar exam ecosystem—not by replacing human expertise, but by supporting it. We also recognize that thoughtful, transparent, and principled research and implementation in this space are essential. From expanding access to high-quality study aids to enabling more nuanced, data-driven insights for jurisdictions, AI offers real promise when used responsibly. But our commitment to public trust, fairness, and psychometric rigor demands a careful, deliberate approach.
NCBE has not used AI to develop any operational content for either the current bar exam or the forthcoming NextGen UBE. Nor have we used AI in any operational scoring processes. Before using AI to assist in developing or scoring any operational items, NCBE would provide notice to jurisdictions at least a year in advance, along with a detailed explanation of how AI would, and would not, be used in that process. The rigor of the NextGen UBE, paired with its deeper measurement of skill application, makes effective AI use uniquely challenging, which is why NCBE is investing in long-term research into how AI might responsibly support our mission. That research is guided by several core principles.
AI as a Tool for Experts
First, although AI tools continue to increase in sophistication, they are not a substitute for human judgment. This is particularly true in a high-stakes domain such as legal licensure. Our drafting committees and test editors are at the heart of the bar exam. AI may assist someday, but it will never replace the legal insight, experience, and ethical awareness of the dedicated practitioners, law professors, and judges who serve on those committees. In exploring whether and how AI might be useful to our mission, NCBE’s goal is to develop tools that augment the expertise of human drafters, making the process more efficient without sacrificing quality or nuance. If, for example, today’s item development process involves ten stages of drafting and refinement, AI might ultimately prove useful in making the first two or three stages more efficient by helping to generate more raw material on the front end. That would free our legal experts to focus on what matters most—refining that raw material into finished products through multiple additional rounds of review, discussion, and revision.
Similarly, as we study potential AI-supported scoring methods for constructed responses, we maintain a strict “human-in-the-loop” approach, ensuring that content and measurement experts review, validate, and control all AI-assisted processes. The future of legal licensure must be innovative, but never at the expense of fairness, accuracy, or validity. NCBE recognizes that professional judgment simply cannot be automated.
NCBE’s Guiding Principles Regarding AI
NCBE applies the following core principles to its approach to AI research to ensure that these efforts align with our mission and values:
- mission-driven: support competent, ethical, and inclusive entry into the legal profession
- evidence-based: be grounded in best practices in psychometrics, fairness, and assessment science
- stakeholder-centric: be responsive to the needs of jurisdictions, candidates, and legal educators
- responsible: prioritize bias mitigation, data privacy, and expert oversight
- iterative: conduct ongoing research, gather feedback, and refine our approach over time
NCBE has developed a set of Responsible AI Guidelines that reflect these principles. The guidelines align with US federal AI governance frameworks, such as the National Institute of Standards and Technology’s AI Risk Management Framework, as well as the ABA Model Rules of Professional Conduct and other international best practices. Key principles of the NCBE Responsible AI Guidelines include ensuring that any AI-generated exam item would be
- fair and unbiased across legal jurisdictions and demographic groups;
- legally accurate and grounded in verified, up-to-date legal sources;
- transparent in both its development and validation processes;
- accountable, with full human oversight and ethical review; and
- resilient against misinformation and common AI-generated errors.
By actively reviewing other high-stakes testing organizations’ AI implementations and applying these best practices, NCBE is taking a measured approach to determining whether and how AI might responsibly be used in bar licensure.
A Commitment to Transparency
In addition to adhering to the above principles, transparency is central to everything we do. As we explore new technologies and methods, NCBE is committed to open, proactive communication with jurisdictions, candidates, and the broader legal community. We understand that the bar exam is not merely a test—it’s a gateway to a trusted profession. It must reflect the highest standards of fairness and reliability. Public communication and stakeholder input thus will feature prominently in NCBE’s evaluation of any AI-related initiatives. For example, we recently conducted a nonoperational pilot test of our first set of items generated jointly by AI and human experts. Participants were informed that the items were experimental and not for operational use. We collected psychometric and survey data to assess performance and refine our research strategies. This pilot test serves as a crucial milestone in our long-term R&D efforts and underscores our commitment to transparency and accountability.
AI Beyond the Bar Exam
NCBE is also exploring several exciting—and sometimes overlooked—ways that AI might support the broader bar exam ecosystem outside the exam itself. For candidates, this includes the potential to expand access to an even greater range of high-quality study aids, to deliver personalized feedback with clear rationales and exemplary responses, and to offer tailored guidance that helps them better prepare for the bar exam. For jurisdictions and law schools, AI might be used to draw actionable insights from empirical candidate performance data, creating opportunities for academic support systems to be even more responsive and equitable.
Cautious, Collaborative, and Candidate-Centered
As the legal profession evolves, so too must the systems that uphold its standards. However, change cannot come at the expense of the trust, transparency, or rigorous standards that the public and the legal community expect and deserve. Guided by the proper principles, AI has the potential to make the legal licensure system more responsive, equitable, and efficient. But the promise of AI must be approached with equal parts caution and collaboration. NCBE will continue to place the trust of jurisdictions and candidates at the center of our research in this area. Our goal will never be to replace human experts in bar licensure, but rather to determine how AI might, transparently and responsibly, be a tool to support the ongoing efforts of those invaluable experts.
Kindest regards,
Darin B. Scheer
Contact us to request a PDF file of the original article as it appeared in the print edition.