This article originally appeared in The Bar Examiner print edition, December 2014 (Vol. 83, No. 4), pp. 4–11.

By Erica Moeser

The results of the July 2014 bar examinations have trickled out over the past three months as jurisdictions completed their grading, and this time the ripple of reaction that normally attends such releases turned into a wave. The reason for this is that average scores on the Multistate Bar Examination (MBE) fell to the lowest level since the July 2004 MBE administration. (February results are characteristically lower.)

It was therefore inevitable that bar exam passing percentages would dip in many jurisdictions, because most jurisdictions follow the best practice of scaling their written scores to the MBE. The technical reasons for this have been set forth in our Testing Columns (including the column in this issue). Essay grading, particularly when it requires judgments about the performance of the very different test populations in February and July, challenges graders’ ability to maintain consistent standards. And graders change. The MBE score overcomes that: because it is equated, it provides a stable measure over time of the performance of each group that takes the test.

To the extent that individual law schools had large numbers of their graduates test in a given jurisdiction, the summer bar results were particularly troubling for them. Given the increased transparency that has occurred through efforts to promote disclosure of law school bar passage results to those who are contemplating the investment of time and money in a legal education, and given the sway that national rankings hold over law schools, it was perhaps preordained that the first thought to spring to many minds in legal education was Blame the Test.

The MBE, like other high-stakes instruments, is an equated test; raw scores are subjected to a mathematical procedure by psychometricians (experienced professionals, all with Ph.D.s) to adjust the final—or scaled—score to account for changes in difficulty over time. The result is that a scaled score on the MBE this past summer—say 135—is equivalent to a score of 135 on any MBE in the past or in the future.

Equating is done by embedding a set of test questions that have appeared on previous test forms into the current test form and then comparing the performance of the new group of test takers—here the July 2014 cohort—on those questions with the performance of prior test takers on those questions. The embedded items are carefully selected to mirror the content of the overall test and to effectively represent a mini-test within a test. The selection of questions for equating purposes is done with extreme precision involving professional judgments about both the content and the statistical properties of each question.
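To make the mechanics concrete, the sketch below shows one textbook form of common-item equating, known as chained linear equating, in Python. It is an illustration only, not NCBE’s operational procedure, which relies on more sophisticated psychometric models; the function names, the sample data, and the raw-to-scaled conversion constants are hypothetical.

```python
# A toy illustration of common-item (anchor) equating using chained linear
# equating: raw scores on the new form are mapped to the anchor-item scale
# (within the new group), then from the anchor scale to the old form's
# raw-score scale (within the group that previously saw those items), and
# finally onto the reporting scale. All names and numbers are hypothetical.

from statistics import mean, stdev


def linear_map(x, from_mean, from_sd, to_mean, to_sd):
    # Re-express x so that its z-score is preserved in the target distribution.
    return to_mean + to_sd * (x - from_mean) / from_sd


def chained_linear_equate(new_total, new_anchor, old_total, old_anchor,
                          old_scale_slope, old_scale_intercept):
    """Return a function mapping a raw score on the new form to a scaled score.

    new_total / new_anchor: total and anchor-item scores for the new group
    old_total / old_anchor: the same for the group that previously saw the anchors
    old_scale_slope / old_scale_intercept: the known raw-to-scaled conversion
        for the old form (hypothetical constants here)
    """
    def equate(raw):
        # Step 1: new-form raw score -> anchor scale, using the new group.
        v = linear_map(raw, mean(new_total), stdev(new_total),
                       mean(new_anchor), stdev(new_anchor))
        # Step 2: anchor scale -> old-form raw scale, using the old group.
        y = linear_map(v, mean(old_anchor), stdev(old_anchor),
                       mean(old_total), stdev(old_total))
        # Step 3: old-form raw score -> reported scale (already established).
        return old_scale_slope * y + old_scale_intercept
    return equate


# Hypothetical data: total raw scores and anchor-item subscores for each group.
july_2014_total = [118, 124, 131, 137, 142]
july_2014_anchor = [14, 16, 17, 19, 20]
prior_total = [121, 128, 134, 139, 145]
prior_anchor = [15, 17, 18, 20, 21]

to_scaled = chained_linear_equate(july_2014_total, july_2014_anchor,
                                  prior_total, prior_anchor,
                                  old_scale_slope=0.9, old_scale_intercept=22.0)
print(round(to_scaled(131), 1))  # scaled score for a raw score of 131 on the new form
```

The intuition matters for interpreting the July 2014 results: because the anchor items are identical across administrations, lower performance on them signals a less able group rather than a harder test form. Equating therefore adjusts for differences in form difficulty without masking a genuine decline in a group’s performance.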

Through our customary quality-control procedures for scoring our standardized tests, we had early notice of the drop in the average scaled score—long before results were released to jurisdictions. Because we realized the implications of such a drop, we pursued a process of replicating and reconfirming the mathematical results by having several psychometricians work independently to re-equate the test. We also reexamined the decision-making process by which the equating items had been selected in the first place to confirm that there had been no changes that would have contributed to the change in the mean MBE score. It was essential that we undertake this for the sake of the test takers. They deserve a fair test competently scored. All of our efforts to date have confirmed the correctness of the scaled scores as calculated and reported.

I then looked to two areas for further corroboration. The first was internal to NCBE. Among the things I learned was that whereas the scores of those we know to be retaking the MBE dropped by 1.7 points, the scores of those we believe to be first-time takers dropped by 2.7 points. (19% of July 2014 test takers were repeaters, and 65% were believed to be first-time takers. The remaining 16% could not be tracked because they tested in jurisdictions that collect inadequate data on the MBE answer sheets.) The decline for retakers was not atypical; however, the decline for first-time takers was without precedent during the previous 10 years.

Also telling is the fact that performance by all July 2014 takers on the equating items drawn from previous July test administrations was 1.63 percentage points lower than performance the last time those items were used, compared with a 0.57 percentage point increase for the corresponding comparison in July 2013.

I also looked at what the results from the Multistate Professional Responsibility Examination (MPRE), separately administered three times each year, might tell me. The decline in MPRE performance supports what we saw in the July 2014 MBE numbers. In 2012, 66,499 candidates generated a mean score of 97.57 (on a 50–150 scale). In 2013, 62,674 candidates generated a mean score of 95.65. In 2014, a total of 60,546 candidates generated a mean score of 93.57. Because many MPRE test takers are still enrolled in law school when they test, these scores can be seen as presaging MBE performance in 2014 and 2015.

My second set of observations went beyond NCBE. I understand that the number of law schools reporting a median LSAT score below 150 for their entering classes has escalated over the past few years. To the extent that LSAT scores correlate with MBE scores, this cannot bode well for law schools with a median LSAT score below the 150 threshold.

Specifically, I looked at what happened to the overall mean LSAT score as reported by the Law School Admission Council for the first-year matriculants between 2010 (the class of 2013) and 2011 (the class of 2014). The reported mean dropped a modest amount for those completing the first year (from 157.7 to 157.4). What is unknown is the extent to which the effect of a change to reporting LSAT scores (from the average of all scores to the highest score earned) has offset what would otherwise have been a greater drop. (LSAC Research Reports indicate that roughly 30% of LSAT takers are repeaters and that this number has increased in recent years.)

For the future, and further complicating matters, effective with the fall 2012 entering class, LSAC began reporting mean LSAT scores as whole numbers; therefore, the reported mean of 157 for fall 2012 (the class that will graduate in 2015) may represent anything from 156.5 to 157.4. For those matriculating in 2013 (the class of 2016), the LSAT mean dropped to a reported 156. Figures for the 2014 entering class are not yet available.

Beyond the national means lie the data that are specific to individual law schools, many of which have been struggling for several years with declining applications and shrinking enrollment figures. In some instances, law schools have been able to maintain their level of admission predictors—the undergraduate grade point average (UGPA) and the LSAT score. Some have reduced class sizes in order to accomplish this. To make judgments about changes in the cohort attending law school, it is useful to drill down to the 25th percentile of UGPA and LSAT scores for the years in question. There we see evidence of slippage at some schools, in some cases notwithstanding reductions in class size. And for matriculants below the 25th percentile, we know nothing; the tail of the curve leaves a lot of mystery, as the credentials of candidates so situated (presumably those last admitted) and the degree of change are unknown.

Another factor that bears consideration is the rising popularity of taking transfer students out of Law School A’s first-year class to bolster enrollment at Law School B, where Law School B is perceived by the transfer student to be the stronger of the two schools by reason of reputation. This mystic switch permits a law school to add to its enrollment without having to report the LSAT scores of the transferees, since the reported LSAT is a function of first-year matriculation, thus thwarting the all-important law school rankings. This process may weaken the overall quality of the student body of Law School B even as it weakens Law School A by poaching the cream of its crop. At the very least, it reduces the validity of comparisons of the mean LSAT scores and the mean MBE scores for Law Schools A and B because the cohort taking the bar exam from either school may be markedly different from the cohort of matriculants for whom an LSAT mean was calculated. Of course, this phenomenon does not account for the drop in the national MBE mean. It does, however, affect the statistics by which the performance of the two schools’ graduates is measured. (Note that the LSAT median, as reported in the rankings, is not the same thing as the LSAT mean.)

A brief list of other factors to consider could include the following:

  • The rise of experiential learning—a laudable objective—has also ushered in the greater use at some schools of pass/fail grading that may mask the needs of students at risk. Without grades for feedback, students may not realize they are at risk. In addition, the rise of experiential learning may have crowded out time for students to take additional “black-letter” courses that would have strengthened their knowledge of the law and their synthesis of what they learned during the first year.
  • There may be a trend in the direction of fewer required courses, or of fewer hours in a given required course, thereby permitting students to miss (or avoid) core subjects that will appear on the bar exam, or diminishing the amount of content coverage in a given course.
  • Bar preparation courses offered within law schools are increasingly being outsourced to commercial bar review companies, undercutting what could be a more productive relationship between such courses and sound, semester-long pedagogy that builds a deeper understanding of how the law is applied.
  • The decline in LSAT scores reported at the 25th percentile should already have prompted increased academic support beginning in the first year of law school, and that support may not have been delivered. Lack of academic support increases the chances that those in the bottom quartile of the class will struggle on the bar exam. In school after school, the law school grade-point average is the most consistent predictor of bar passage success; close attention to those in the bottom ranks of the class is therefore essential—and the earlier, the better—because they deserve the resources to put them on firmer footing as they move through law school and beyond.

Because it is an equated test, the MBE is a constant against which the performance of graduates can be gauged. It is a witness. It also serves as a sentinel. The July 2014 results communicated news with which law schools would rather not contend. So long as law school entrance predictors (the UGPA and the LSAT) continue to fall, and law schools do not adjust through pedagogy or by attrition, the news is not going to be encouraging.

In the September 2013 issue of this magazine, I included a chart showing the change in the number of matriculants from 2010 to 2012 as provided by the American Bar Association’s Section of Legal Education and Admissions to the Bar. Because of the relationship between enrollment changes and bar passage, the chart, updated to show an additional year of data and reworked to show how enrollment and the LSAT 25th percentile may contribute to the downward trend at a given school, follows. 2014 data is not yet available.

Change in First-Year Enrollment from 2010 to 2013 and Reported Changes to the LSAT Score at the 25th Percentile

Sources: Data for total first-year enrollment, 2010–2013, are from the American Bar Association Section of Legal Education and Admissions to the Bar, which obtains enrollment statistics from its annual questionnaire to law schools and makes the statistics available on its website at http://www.americanbar.org/groups/legal_education/resources/statistics.html. Data for 25% LSAT score, 2010–2012, are from the Law School Admission Council, LSAC Official Guide to ABA-Approved Law Schools. Data for 25% LSAT score, 2013, are from the American Bar Association Section of Legal Education and Admissions to the Bar, Standard 509 Information Reports, http://www.abarequireddisclosures.org/.

Changes in First-Year Enrollment and Average LSAT Score at the 25th Percentile, 2010‒2013

This graph shows two lines, one indicating the average LSAT score at the 25th percentile and the other showing the first-year enrollment in law schools, for the years 2010 through 2013. Both lines slope sharply downward from left to right. The 25% LSAT score line begins at around 155 in 2010 and ends at just above 152 in 2013. The enrollment line begins at around 56,000 in 2010 and ends at about 40,000 in 2013.

