Generative AI Tools for USD Law Students

Law students and Generative AI: FAQs

  1. Can I use Gen AI tools for my assignments at USD?
    At USD, including the law school, faculty members decide how and to what degree to incorporate gen AI tools into their courses, either generally or for specific assignments. Some faculty may explicitly coach students on how to use and cite these tools ethically; others may design assignments and exams that preclude their use; still others may use some combination of the two. Look for guidelines in the syllabus or in the instructions for the assignment. Within the parameters of instructor permission, students should also make sure they are using generative AI ethically. For Fall 2025 and Spring 2026, all USD law exams are in person using ExamSoft, which does not permit the use of ChatGPT or other gen AI tools.
     
  2. Are other schools using generative AI tools? 
    AI has been around in education for quite some time (Turnitin, Grammarly, IBM Watson), but the generative AI/large language model tools we're talking about here, like OpenAI's ChatGPT, Anthropic's Claude, and Google's Gemini, have charted a drastically different path. Initially, some schools and universities chose to ban ChatGPT and similar AI tools outright, either by blocking them on school networks and devices or by instituting AI-detection programs. By 2025, however, many schools had changed tactics from avoidance to education, providing students access to AI tools to increase AI literacy.

    Inside Higher Ed's 2025 Survey of Campus Chief Technology/Information Officers found that 27 percent of CTOs said their college offered students AI access through an institution-wide license, with CTOs at public nonprofit institutions especially likely to say so (42 percent). Although roughly half of the institutions represented in the survey did not offer students access to gen AI tools, 36 percent of those were considering ways to offer access. USD students, faculty, and staff have access to Gemini, Google's AI assistant, as well as NotebookLM, an AI-powered research assistant. The university has established specific training resources through its AI Training @ USD program, which provides both general and university-specific educational materials for users learning to navigate these new tools.
     
  3. Are other law students using generative AI? 
    As early as 2023, GPT-4 was able to pass the bar exam and the MPRE. Not only could it pass the bar, but it passed it well, reportedly scoring in the top 10% of test takers. So presumably, some law students are using ChatGPT in an attempt to get a better grade on exams. Students have also used generative AI to outline essays, write essays, create reference/citation lists, summarize text, and create slide decks. But ChatGPT is better at some of these tasks than others. And not everyone is jumping on the bandwagon. Students may have several reasons for their reservations: inadvertently running afoul of their school's honor code, concern about the accuracy and validity of research, and concern that it might diminish their capacity to learn traditional legal research skills. On law school exams, early versions of ChatGPT produced varying results, somewhere between a B and a C. Later models fared significantly better: by early 2025, ChatGPT's newest model at the time, called o3, earned grades ranging from A+ to B on eight spring finals given by faculty at the University of Maryland Francis King Carey School of Law.
     
  4. Are other lawyers using it? 
    Absolutely! With varying levels of success... By now you must have heard the story where it all began: Mata v. Avianca. In February 2022, Mata filed a personal injury lawsuit in the U.S. District Court for the Southern District of New York against Avianca, alleging that he was injured when a metal serving cart struck his knee during an international flight. The plaintiff's lawyers used ChatGPT to generate a legal brief, which contained numerous fake legal cases involving fictitious airlines, with fabricated quotations and internal citations. In June 2023, Judge P. Kevin Castel dismissed the personal injury case against Avianca and, noting the numerous inconsistencies and fabrications, held that Mata's lawyers had acted with "subjective bad faith" sufficient for sanctions under Federal Rule of Civil Procedure 11. Fast forward a few years, and we're still not out of the woods with fake cases. This database tracks legal decisions in cases where generative AI produced hallucinated content (typically fake citations) and puts the number at more than 300 cases and climbing.

    That said, it is clear that generative AI, just like e-discovery and computerized legal research before it, will increase efficiency, improve accuracy, reduce costs and, yes, take away some simpler legal tasks. An April 2023 Thomson Reuters survey of midsize and large firms found that 2-5% of firms were already using generative AI, 30-35% were considering whether to use it, and 51% agreed that they should be using generative AI tools in some form. A different study found that 40% of legal professionals use or plan to use generative AI. While a handful of jurisdictions have moved to restrict filings created solely by ChatGPT, it is likely that new regulations will incorporate the use of generative AI tools as the new standard, especially in areas like document review. The State Bar of California has issued Practical Guidance for the Use of Generative Artificial Intelligence in the Practice of Law.
     
  5. What does the ABA say about it? 
    On July 29, 2024, the American Bar Association (“ABA”) Standing Committee on Ethics and Professional Responsibility released its first opinion regarding attorneys' use of generative artificial intelligence. 

    Formal Opinion 512 states that to ensure clients are protected, lawyers and law firms using GAI must “fully consider their applicable ethical obligations,” which include duties to provide competent legal representation, to protect client information, to communicate with clients, and to charge reasonable fees consistent with time spent using GAI. “As GAI tools continue to develop and become more widely available, it is conceivable that lawyers will eventually have to use them to competently complete certain tasks for clients. But even in the absence of an expectation for lawyers to use GAI tools as a matter of course, lawyers should become aware of the GAI tools relevant to their work so that they can make an informed decision, as a matter of professional judgment, whether to avail themselves of these tools or to conduct their work by other means.”
     
  6. What should I know about hallucinations?  
    Hallucinations are outputs from generative AI that look coherent but are simply incorrect or, sometimes, outright falsehoods (e.g., a case that does not exist). In the legal research context, we see a few different types of hallucinations: citation hallucinations, hallucinations about the facts of cases, and hallucinations about legal doctrine. For many of the publicly available generative AI tools, hallucinations sometimes come from being trained on "bad data"; i.e., a model may be trained on internet sources like Quora or Reddit posts, which may contain inaccuracies. More often, hallucinations result from the nature of the prompt given to the model.

    Legal research vendors have worked aggressively to build products that limit hallucinations and increase accuracy. First, most have developed specialized models trained on narrower, domain-specific datasets; the idea is that "good data," and only good data, is allowed into the system. Second, most vendors use retrieval-augmented generation (RAG), which takes the user's question, retrieves relevant documents from a curated database, and attaches those documents to the question as "context" before the combined prompt is sent through the model (see the sketch below). Third, some products also use vector embeddings to identify concepts, representing phrases or even entire documents as numerical vectors; coupled with RAG, this increases precision and relevancy. Last but certainly not least, almost all vendors incorporate human feedback on responses.
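    To make the retrieval step concrete, below is a minimal Python sketch of the RAG pattern just described. Everything in it is illustrative: embed() is a toy bag-of-words stand-in for a trained embedding model, the three-document list stands in for a vendor's curated database, and the final prompt would in practice be sent to an actual language model.

        # Minimal RAG sketch: retrieve the most relevant "good data" documents
        # and attach them to the user's question as context for the model.
        import math
        from collections import Counter

        def embed(text):
            # Toy embedding: a sparse bag-of-words vector (stand-in for a real model).
            return Counter(text.lower().split())

        def cosine(a, b):
            # Cosine similarity between two sparse vectors.
            dot = sum(a[t] * b[t] for t in a)
            norm_a = math.sqrt(sum(v * v for v in a.values()))
            norm_b = math.sqrt(sum(v * v for v in b.values()))
            return dot / (norm_a * norm_b) if norm_a and norm_b else 0.0

        # A tiny curated "database" of trusted source text (hypothetical examples).
        documents = [
            "Rule 11 permits sanctions for court filings made in bad faith.",
            "A hallucinated citation refers to a case that does not exist.",
            "Retrieval-augmented generation grounds answers in retrieved sources.",
        ]
        doc_vectors = [embed(d) for d in documents]

        def build_prompt(question, k=2):
            # Rank documents by similarity to the question and keep the top k.
            q = embed(question)
            ranked = sorted(range(len(documents)),
                            key=lambda i: cosine(q, doc_vectors[i]), reverse=True)
            context = "\n".join(documents[i] for i in ranked[:k])
            # This augmented prompt, not the bare question, is what the model sees.
            return f"Answer using only this context:\n{context}\n\nQuestion: {question}"

        print(build_prompt("What is a hallucinated citation?"))

    Because the model is steered toward answering from the retrieved context rather than from whatever it absorbed in training, a well-built RAG pipeline narrows, though as the benchmarks below show, does not eliminate, the opportunity to hallucinate.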

    However, more recent benchmarking indicates that while legal generative AI models do reduce errors compared to general-purpose models like GPT-4, these legal AI tools still hallucinate at an alarming rate. A 2024 study found that the Lexis+ AI system produced incorrect information more than 17% of the time, while Westlaw's AI-Assisted Research and Ask Practical Law AI hallucinated more than 33% of the time.
     
  7. What does this mean for me as a USD law student? 
    In your classes, the most important thing is to understand what guidelines the faculty member has set and how to use these tools appropriately. For your jobs, firms will soon expect that their associates arrive with at least a baseline understanding of how generative AI operates, including a healthy dose of information literacy and knowledge of how to treat confidential client information.