This article originally appeared in The Bar Examiner print edition, June 2017 (Vol. 86, No. 2), pp 4–6.
By Erica Moeser
It’s time to chalk up another successful Annual Bar Admissions Conference. We are still relishing our experiences this May in rainy San Diego. (Who knew the heavens could open up as they did in sunny California?) This year’s edition drew representatives from every U.S. jurisdiction and a record turnout. With five plenary sessions and 20 breakout sessions, we offered more content than we ever have in the past. Evaluations of the program have been positive. Now it’s on to Philadelphia next April!
As always occurs following the administration of the bar examination, results trickle out as jurisdictions complete their grading processes. It is difficult to find much happy news when reading accounts of the percentages of candidates who passed and failed. It is also easy to distort the narrative if one is not careful with the facts.
We know that the February MBE mean was lower than in prior years; in fact, at 134.06, it was the lowest on record. As reference to our annual statistics issue of this magazine will confirm, February results are always lower than those we see in July. This is because new graduates spill out of law school in late spring, take the July examination, and pass more often than not; the February examination, by contrast, draws a far higher proportion of repeaters. “Repeaters,” as I use the term here, include both those who are retesting in a jurisdiction in which they were unsuccessful earlier and those who are taking a bar examination in an additional jurisdiction.
We at NCBE have incomplete information about who is repeating the examination. The advent of the universal identifier known as the NCBE Number has allowed us to track applicants longitudinally for the first time. With few exceptions (holdout jurisdictions whose reluctance, frankly, is difficult to understand), we are gaining ground. For February we were able to identify 87% of all test takers by their NCBE Numbers. Beyond the obvious test security advantages of the NCBE Number, we can now trace the flow of applicants across multiple examinations.
To the extent that we were able to track applicants as first-timers or repeaters (that is, by NCBE Number), for February 2017 we identified 24.1% as true first-timers, 65.7% as repeaters, and 10.2% as status unknown. For February examinations, this is the highest percentage of repeaters in at least six years.
State statistics typically count as repeaters only those applicants who are retaking the bar examination in that same jurisdiction; that is, an individual may have tested in three other jurisdictions, but when that individual tests in State #4, he or she is classified as a first-timer. For example, the February results from California reported that only 26% of test takers were first-timers in that state, meaning that 74% of its February 2017 test takers had previously failed the bar examination in California. (These repeater statistics are the ones we capture in our annual statistics issue.)
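To make the two definitions concrete, here is a minimal sketch, in Python, of how the same hypothetical candidate can be counted both ways. The records, jurisdiction names, and helper functions are invented purely for illustration and do not describe NCBE’s or any state’s actual data systems.

from dataclasses import dataclass

@dataclass
class Attempt:
    ncbe_number: str   # universal identifier assigned by NCBE
    jurisdiction: str  # where the attempt was taken
    exam: str          # administration, e.g., "Jul 2016"

# Hypothetical prior attempts by one candidate, linked via the NCBE Number.
prior_attempts = [
    Attempt("N0000001", "State A", "Jul 2016"),
    Attempt("N0000001", "State B", "Jul 2016"),
]

def is_repeater_longitudinal(prior_attempts: list) -> bool:
    """Longitudinal (NCBE Number) view: any earlier attempt, anywhere, makes a repeater."""
    return len(prior_attempts) > 0

def is_repeater_state(prior_attempts: list, jurisdiction: str) -> bool:
    """Typical state statistic: only earlier attempts in that same jurisdiction count."""
    return any(a.jurisdiction == jurisdiction for a in prior_attempts)

# The candidate now sits for the February 2017 examination in State C.
print(is_repeater_longitudinal(prior_attempts))      # True  -> a repeater in the longitudinal tally
print(is_repeater_state(prior_attempts, "State C"))  # False -> a first-timer in State C's own statistics

The point of the sketch is simply that the state-level classification looks only at attempts within its own borders, while the NCBE Number allows prior attempts anywhere to be counted.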
To state the obvious, perhaps: the decline in bar examination performance that began with the July 2014 results has created an accumulating group of repeaters. According to the 2017 Comprehensive Guide to Bar Admission Requirements, 33 of 51 jurisdictions (50 states plus the District of Columbia) permit applicants to take the bar examination an unlimited number of times. Such a policy (with which I take issue as a best practice, but that’s for another day) helps to fuel the growth of the repeater cohort. Because the repeater cohort tends to score more poorly than first-time takers, overall pass/fail results are that much more dismal.
I understand the anguish that failing the bar examination causes, and no one should take my expression of confidence in the Multistate Bar Examination as a failure to grasp the examination’s significance in the lives of individuals seeking a law license.
The MBE is a thoughtfully constructed test created by content experts, a majority of whom are tenured faculty members who teach, write, and render public service in the same areas in which they draft questions. The test development process is rigorous and spans years. The Conference employs sophisticated measurement approaches to constructing and equating each new test. Out of an abundance of caution, we have taken the additional step of sending our MBE data to an independent organization for a separate calculation of scaled scores, with that organization applying its own methodologies to make the determination. Unsurprisingly, the MBE passes muster each time.
Nevertheless, there are, and will continue to be, criticisms, many of which I have treated in previous columns and will not repeat here. One bit of rhetoric I have encountered lately takes issue with the MBE as a test of memory. For starters, that is not an apt characterization: the MBE tests analysis and reasoning. There is a kernel of truth in the charge, though, in that the MBE does require knowledge that must be remembered when taking the test.
For me, there is no defensible argument that a graduate should emerge from law school without assurance that he or she has acquired the knowledge required to enter the practice of law. While skills and values are certainly essential, anyone who makes the case that baseline knowledge is neither essential for practice nor appropriate for testing has got it all wrong.
The knock on memorization falters if it is a euphemism for dismissing knowledge as essential, or if it serves as an excuse for an institutional or individual failure to acquire fundamental knowledge of the core subject matter that belongs at the threshold of a lifetime general license to practice law.
Another suggestion that has surfaced to blame the MBE for the February results points to the fact that the examination now consists of 175 scored questions and 25 unscored pretest items, rather than the prior configuration of 190 scored questions and 10 pretest items. During the years that NCBE evaluated the wisdom of making such a change, questions of this type were raised internally, and significant modeling research was conducted. Beyond breadth-of-content issues, we were interested in predicting what a reduction in the number of scored items would mean for test reliability, a term of art in measurement that expresses the reproducibility of results.
When we authorized the change, we expected the results to be positive, and they are. Measurement literature and practice support that testing 25 items in each content area is adequate. The reliability of the February 2017 MBE was .92, the highest it has ever been for a February MBE. To understand why reliability could remain high despite a reduction in the number of live items per content area, consider that previously, MBE questions appearing for the first time were live items. Some performed well, of course, but others did not. Using items that have been pretested, as we do now, allows us to pick and choose among well-performing pretest items so that every live question is expected to contribute to the ultimate score. Our yield of high-quality test questions is therefore very high under the current model (a model, I should add, that comports with industry standards).
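For readers curious about the arithmetic, the Spearman-Brown formula, a standard tool in the measurement literature for predicting how reliability changes with test length, offers a rough illustration; the starting value of .92 for a hypothetical 190-item form is assumed here purely for the sake of the example and is not an NCBE figure or a description of NCBE’s actual modeling:

\[
\rho_{\text{new}} \;=\; \frac{k\,\rho_{\text{old}}}{1 + (k-1)\,\rho_{\text{old}}},
\qquad k = \frac{175}{190} \approx 0.92,
\]

so that, with an assumed \( \rho_{\text{old}} = 0.92 \),

\[
\rho_{\text{new}} \;\approx\; \frac{0.92 \times 0.92}{1 + (0.92 - 1)(0.92)} \;\approx\; 0.91.
\]

In other words, shortening the test alone would be expected to cost only about a hundredth of a point of reliability, a loss that better-performing, pretested live items can readily offset, which is consistent with the observed February 2017 figure of .92.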
It bears remembering that prior to the early 2000s there were no pretested items at all on the MBE. The move toward pretesting got under way during the early years of this century with the encouragement (and prodding!) of Dr. Susan Case and Dr. Michael Kane, top-flight measurement people who had recently joined the staff here.
Finally, some commentators feel compelled to lay the blame for declining bar examination passing percentages at the foot of the Uniform Bar Examination. There is no evidence to support that claim; the UBE has no such effect. With this, as with other such assertions, I am reminded that after a dog dies, its hair is left around forever. My job, from time to time, is to remind folks that these debunked assertions are just so much dog hair. Having said that, I note that of the eight jurisdictions in which the February MBE mean rose over the prior February, six administer the UBE. Results may reflect shifting patterns in where applicants choose to test, but they are not attributable to the UBE as a test.