This article originally appeared in The Bar Examiner print edition, Spring 2024 (Vol. 93, No. 1), pp. 58-60.

By Sonia Gipson Rankin

In 1886, Karl Benz’s Motorwagen ushered in the automobile era, an innovation complemented by Volvo’s introduction of the three-point seatbelt in 1959.1 Volvo’s decision to share its three-point seatbelt design with competitors, free of charge, stands as a landmark act of generosity and foresight.2 Recognizing the profound value of human life, Volvo set a precedent for prioritizing community well-being and safety innovations—over potential profits—across the industry. This journey from innovation to safety standard, mandated by US law in 1968, exemplifies how legal frameworks can channel technological advancements toward the public good. This historical evolution mirrors the current challenge faced by the legal community: integrating “legal seatbelts” for the burgeoning field of artificial intelligence (AI).

As AI intertwines with the legal system, it brings a double-edged sword of opportunities and challenges. AI’s promise lies in its potential to revolutionize the legal sector with unmatched efficiency and innovation, offering new avenues for automating routine tasks, enhancing legal research, and refining both case management and decision-making processes.

However, complex issues arise as well, including challenges to the traditional practice of law and to our conceptions of justice. AI also raises concerns about job displacement, privacy breaches, cybersecurity threats, and ethical dilemmas. Of particular concern is the potential for AI to diminish human oversight, transforming legal decision-making into a process driven more by data than by empathy and ethical judgment. AI algorithms’ “black box” nature further complicates the landscape, posing a threat to the transparency and accountability that form the bedrock of legal integrity.3 Yet it is the risk of algorithmic bias—where AI, trained on historical data, might inadvertently reinforce existing societal inequalities—that stands out as particularly concerning. Algorithmic bias is defined as consistent, reproducible errors in AI systems that result in “unfair” outcomes, often disadvantaging marginalized groups.4 Real-world instances, such as the misidentification of Black people by facial recognition technologies5 or the perpetuation of historical biases in criminal justice6 and healthcare algorithms,7 underscore the urgency of a balanced approach to AI that enhances, not undermines, justice.

In response to these challenges, initiatives such as the University of New Mexico-Santa Fe Institute Interdisciplinary Working Group on Algorithmic Justice, of which I am a member, are working toward equitable, transparent, and accountable AI development.8 Our group unites a diverse array of experts from law, computer science, philosophy, and the social sciences. Emphasizing the importance of thorough algorithmic audits, diverse datasets, and interpretable AI systems amenable to human oversight, our group has actively consulted with the New Mexico legislature9 and contributed to federal discussions on AI legislation,10 focusing on transparency and fairness in government algorithm use.
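To make the idea of an algorithmic audit concrete, here is a minimal sketch, in Python, of one such check: comparing a hypothetical risk-assessment tool’s false positive rate across demographic groups, since a persistent gap in error rates is one common signal of the algorithmic bias described above. The sample data, group labels, and choice of metric are illustrative assumptions only; they do not reflect the working group’s actual methods or any real system.

```python
# A minimal illustration of one step in an algorithmic audit: comparing a
# model's false positive rate across demographic groups. The records, group
# labels, and metric are hypothetical placeholders, not data from any real
# risk-assessment tool.

from collections import defaultdict

def false_positive_rates(records):
    """Return the false positive rate per group.

    Each record is a tuple (group, predicted_high_risk, actually_reoffended).
    A false positive is a person flagged as high risk who did not reoffend.
    """
    flagged = defaultdict(int)    # non-reoffenders flagged as high risk
    negatives = defaultdict(int)  # all non-reoffenders, per group
    for group, predicted, actual in records:
        if not actual:
            negatives[group] += 1
            if predicted:
                flagged[group] += 1
    return {g: flagged[g] / negatives[g] for g in negatives}

# Hypothetical audit sample: (group, predicted_high_risk, actually_reoffended)
sample = [
    ("Group A", True, False), ("Group A", False, False),
    ("Group A", False, False), ("Group A", True, True),
    ("Group B", True, False), ("Group B", True, False),
    ("Group B", False, False), ("Group B", True, True),
]

rates = false_positive_rates(sample)
for group, rate in sorted(rates.items()):
    print(f"{group}: false positive rate = {rate:.0%}")
# In this toy sample, Group B's non-reoffenders are wrongly flagged roughly
# twice as often as Group A's -- the kind of disparity an audit surfaces.
```

A real audit would examine many more measures (false negatives, calibration, outcome disparities) across far larger datasets, but the underlying question is the same: does the system err more often for some groups than for others?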

AI can be used to bend our society closer to justice.11 To direct AI toward such a path, and away from one that undermines it, several key strategies are required. First, there must be a concerted effort to embed ethical considerations into the fabric of AI development, ensuring that AI systems are designed with a clear understanding of legal norms and values.12 Second, promoting interdisciplinary cooperation is of the utmost importance; the complexity at the intersection of law and technology requires insights from a wide range of experts. Third, there should be an emphasis on public engagement and transparency, allowing for a broader discourse on how AI is reshaping our legal landscape. Finally, regulatory frameworks must evolve in tandem with AI advances, providing clear guidelines for the ethical development and use of AI in legal settings.

The journey toward mitigating AI biases and fostering a more equitable society demands collaboration in the spirit of Volvo’s approach to seatbelt innovation. The legal sector stands at a similar crossroads: inspired by Volvo’s example, legal practitioners, judges, court personnel, technologists, and policymakers must work together. This collaboration is vital for harnessing AI’s potential responsibly, ensuring that technological advances serve to enhance fairness and protect the vulnerable. By pooling knowledge and resources, the legal community can navigate AI’s complexities and wield these powerful tools with the care and precision needed to tackle the biases inherent in AI systems, upholding justice for all.

Together, we can develop AI systems that not only streamline legal processes but also uphold the highest standards of justice and equity. Together, we can ensure that AI use in the legal sector acts as a force for good, reinforcing the principles of fairness and inclusivity that are fundamental to our justice system.

Notes

  1. For more on the invention of the seatbelt, see Philip Seo, “Revisionist History: Seat Belts & Resistance to Public Health Measures,” The Rheumatologist (October 13, 2021), available at https://www.the-rheumatologist.org/article/revisionist-history-seat-belts-resistance-to-public-health-measures/. There had been a patent filed for the first version of the modern seatbelt a year before, in 1885.
  2. Sam England, “Why Volvo Gave Away the Patent for Their Most Important Invention,” Arnold Clark Newsroom (August 7, 2023), available at https://www.arnoldclark.com/newsroom/265-why-volvo-gave-away-the-patent-for-their-most-important-invention.
  3. A “black box” refers to a complex algorithm or machine learning system that generates outputs, such as predictions or decisions, without transparently revealing its internal reasoning process. See V. Hassija et al., “Interpreting Black-Box Models: A Review on Explainable Artificial Intelligence,” 16 Cognitive Computation 45–74 (2024) at 47, available at https://doi.org/10.1007/s12559-023-10179-8.
  4. Juliana Castaneda et al., “Dealing with Gender Bias Issues in Data-Algorithmic Processes: A Social-Statistical Perspective,” 15(9) Algorithms 303; 1–16 (September 2022) at 1, available at https://doi.org/10.3390/a15090303.
  5. “Black Plaintiffs File Lawsuits for Wrongful Arrests or Jailing Due to Facial Recognition Technology,” NBC News (September 25, 2023), available at https://www.nbcnews.com/news/nbcblk/black-plaintiffs-file-lawsuits-wrongful-arrests-jailing-due-facial-rec-rcna117168; Reid v. Bartholomew, Complaint, 1:2023cv04035 (N.D. GA 2023), https://dockets.justia.com/docket/georgia/gandce/1:2023cv04035/320196.
  6. Sonia M. Gipson Rankin, “Technological Tethereds: Potential Impact of Untrustworthy Artificial Intelligence in Criminal Justice Risk Assessment Instruments,” 78 Washington & Lee Law Review 647–724 (2021) at 647.
  7. E. Moses and S. M. Gipson Rankin, “Medical Artificial Intelligence Should Do No Harm,” Nature Reviews Electrical Engineering (2024), available at https://rdcu.be/dEN39; Ziad Obermeyer et al., “Dissecting Racial Bias in an Algorithm Used to Manage the Health of Populations,” 366(6464) Science 447–453 (October 25, 2019), available at https://doi.org/10.1126/science.aax2342; Carolyn Y. Johnson, “Racial Bias in a Medical Algorithm Favors White Patients Over Sicker Black Patients,” Washington Post (October 24, 2019), available at https://www.washingtonpost.com/health/2019/10/24/racial-bias-medical-algorithm-favors-white-patients-over-sicker-black-patients/.
  8. Santa Fe Institute, “Algorithmic Justice,” available at https://www.santafe.edu/research/projects/algorithmic-justice.
  9. New Mexico Legislature, “Minutes of the Fourth Meeting of the Courts, Corrections and Justice Committee” (September 27–29, 2023), available at https://nmlegis.gov/minutes/CCJminSep27.23.pdf.
  10. “Comment Submitted by Alfred Mathewson, Interdisciplinary Working Group,” Regulations.gov (October 17, 2019), available at https://www.regulations.gov/document/HUD-2019-0067-2823.
  11. In 2021, the Santa Fe Institute hosted a workshop titled “Can Algorithms Bend the Arc Toward Justice,” which explored fundamental questions about the nature of justice in algorithms and AI, integrating insights from social science and law. See “Can Algorithms Bend the Arc Toward Justice,” Santa Fe Institute Events Wiki, available at https://wiki.santafe.edu/index.php/Can_Algorithms_Bend_the_Arc_Toward_Justice. The title is in honor of Rev. Dr. Martin Luther King, Jr.’s famous quote: “The arc of the moral universe is long, but it bends toward justice.” “Our God is Marching On” (March 25, 1965), Retha Powers, Bartlett’s Familiar Black Quotations (Little, Brown, 2013).
  12. For example, the European Commission has incorporated the “Ethics by Design” approach into its ethics review process for AI projects, emphasizing the importance of individual well-being, societal and environmental impacts, and principles such as freedom, privacy, fairness, transparency, and accountability. P. Brey and B. Dainow, “Ethics by Design for Artificial Intelligence,” AI and Ethics (2023), available at https://doi.org/10.1007/s43681-023-00330-4.

Sonia Gipson Rankin is Professor of Law at the University of New Mexico School of Law. She earned her law degree from the University of Illinois College of Law and holds a bachelor of science degree in computer science from Morgan State University.

