Artificial Intelligence (AI) tools like ChatGPT have been growing in prominence in the legal industry. While these tools have proven valuable in various ways, a recent incident in the U.S. District Court for the Southern District of New York serves as a clear warning of the risks and challenges of relying solely on AI for legal research.
The Rise of AI in the Legal Field
In recent years, the use of AI and machine learning tools like ChatGPT has become more prevalent in the legal profession. These technologies enable lawyers and legal professionals to automate repetitive tasks, perform document analysis, and provide general legal information. Three benefits drive this adoption:
- Cost-Effectiveness: AI tools can process large quantities of information in a fraction of the time a human requires, cutting research and review costs.
- Efficiency: By automating processes like document review and data extraction, AI tools can free up time for lawyers to focus on more complex tasks.
- Accessibility: For the general public, AI chatbots can provide basic legal guidance, potentially bridging the gap for those who cannot afford professional legal consultation.
A Troubling Incident: Mata v. Avianca, Inc.
In Mata v. Avianca, Inc., No. 22-cv-1461 (S.D.N.Y.), the plaintiff's lawyers used ChatGPT for legal research, and the tool supplied citations to nonexistent court decisions. The court's ensuing inquiry revealed a number of fabricated decisions and incorrect citations.
Key Details and Timeline:
- Initial Submission: The plaintiff's lawyer filed papers citing several non-existent court decisions. The court directed counsel to file an affidavit attaching copies of the cited decisions.
- Bogus Decisions: Six of the attached decisions proved to be bogus, including one fabricated opinion attributed to the Eleventh Circuit Court of Appeals.
- Revelation: The plaintiff's counsel explained that the research had been performed using ChatGPT and admitted to being unaware that its output could be false.
- ChatGPT’s Response: Screenshots of the ChatGPT chats showed that the tool had assured the lawyer the cases were genuine and even stated they could be found on Westlaw and LexisNexis.
- Court’s Reaction: The court scheduled a hearing and ordered the plaintiff’s counsel to show cause why they should not be sanctioned for citing “fake” cases.
Important Lessons Learned
The incident demonstrates several crucial points:
- Inaccuracy and Fabrication: AI tools like ChatGPT can create fluent responses that appear legitimate but may include inaccuracies or even wholly fabricated information. This includes inventing non-existent court decisions, attributing fabricated quotations or holdings to real courts, or describing incorrect procedures for legal tasks.
- Question Phrasing: How a question is phrased can affect the answer an AI tool gives, but even carefully crafted prompts cannot guarantee correct responses.
- Need for Human Oversight: Despite the benefits of AI, human supervision and review remain critical. Any output generated by AI should be double-checked and verified through independent sources (a simple verification workflow is sketched below this list).
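To make that verification step concrete, here is a minimal Python sketch of a citation checklist: it scans a draft for citations in a few common federal reporter formats and prints each one as an item for a human to confirm in an independent source. The regex coverage, function name, and sample draft are illustrative assumptions for this sketch, not any firm's actual tooling; the Varghese citation is the fabricated Eleventh Circuit cite from the Mata filings, while "Doe v. Roe" is purely hypothetical.

```python
import re

# A few common federal reporter abbreviations; a production tool would
# need to cover many more (state reporters, parallel cites, WL/LEXIS numbers).
CITATION_RE = re.compile(
    r"\b\d{1,4}\s+(?:U\.S\.|S\. ?Ct\.|F\. ?Supp\. ?(?:2d|3d)?|F\.(?:2d|3d|4th)?)\s+\d{1,4}\b"
)

def citation_checklist(draft_text: str) -> list[str]:
    """Extract candidate reporter citations from a draft for manual review."""
    return sorted({m.group(0) for m in CITATION_RE.finditer(draft_text)})

if __name__ == "__main__":
    # "Varghese" is the fabricated Eleventh Circuit citation from the Mata
    # filings; "Doe v. Roe" is a hypothetical placeholder for this sketch.
    draft = (
        "See Varghese v. China Southern Airlines Co., 925 F.3d 1339 "
        "(11th Cir. 2019); cf. Doe v. Roe, 598 F. Supp. 3d 100 (S.D.N.Y. 2022)."
    )
    for cite in citation_checklist(draft):
        # The script only builds the checklist. A human must confirm each
        # entry in Westlaw, LexisNexis, or the court's own docket, because
        # no pattern match can distinguish a real case from a fabricated one.
        print(f"[ ] verify: {cite}")
```

The design point is that the script never claims a citation is valid; it only helps ensure that no citation in a draft escapes human review.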
In addition, several other considerations complicate the use of AI tools in the legal field:
- Lack of Specific Understanding: AI tools may lack the nuanced understanding of legal principles and jurisdiction-specific laws, leading to incorrect or overly general advice.
- Ethical Considerations: There may be ethical concerns related to client confidentiality, conflicts of interest, and unauthorized practice of law. These concerns are part of the ongoing debate within the legal community.
- Regulatory Compliance: Legal professionals must adhere to specific rules and regulations, which may be violated unintentionally through the use of AI. Compliance requirements may differ from state to state, adding to the complexity.
- Dependence on Quality of Data: AI tools rely on the quality and accuracy of the data they are trained on. Any biases or inaccuracies in this data can lead to incorrect conclusions, especially in a field as sensitive and precise as law.
- Liability Issues: Determining liability in the event of incorrect legal advice or actions taken based on AI-driven conclusions can be a legal grey area, potentially opening firms to risks and disputes.
Use AI Carefully When Preparing for Cases
The incident in the Mata v. Avianca case underscores the importance of exercising caution and understanding the limitations of AI in the legal field. ChatGPT and similar AI tools are not yet reliable for complex legal research, whether in litigation, transactional work, or other legal contexts.
Lawyers should heed the warning from this case and use AI tools as supplementary resources only, ensuring proper human oversight and validation.
While AI, including ChatGPT, has significant potential to revolutionize legal practice, this case serves as a sobering reminder that the technology is still evolving. Ensuring ethical compliance, accuracy, and reliability in legal practice must remain paramount, and the role of human expertise and judgment should not be underestimated.