
ChatGPT & Other Generative AI Tools for USD Law Students

Law students and ChatGPT: FAQs

  1. Can I use ChatGPT or other AI tools for my assignments at USD?
    At USD, including the law school, individual faculty members decide how and to what degree they incorporate AI tools into their courses generally and/or for specific assignments. Some faculty may explicitly coach students on how to use and cite these tools ethically; others may design assignments and exams that preclude their use; still others may use some combination of the two. Look for guidelines in the syllabus or in the instructions for the assignment. For Fall 2023, all USD law exams are in person using ExamSoft, which does not permit the use of ChatGPT or other AI tools.
  2. Are other schools using generative AI tools? 
    AI has been around in education for quite some time (Turnitin, Grammarly, IBM Watson), but the generative AI/large language models discussed here, like OpenAI’s ChatGPT 3.5 and its successor GPT-4, as well as Google’s Bard and Microsoft’s Bing Chat, have charted a drastically different path. Some schools and universities have chosen to ban ChatGPT and similar AI tools outright, whether by blocking them on school networks or devices, locking down exams, or instituting AI-detection programs. But there is no stopping students from accessing these tools on a personal phone or computer at home, even with exam protections or AI-detection programs in place. Large language models will have a transformative effect on education, and many schools are still figuring out what the future holds for them. In recent months, more schools have shifted to teaching with ChatGPT so that students aren’t left behind.
  3. Are other law students using ChatGPT? 
    We've all heard that GPT-4 can pass the bar exam. Not only can it pass, it passes well, scoring in the top 10% of test takers. So presumably some law students are using ChatGPT in an attempt to get a better grade. Students have used generative AI to outline an essay, write an essay, create reference/citation lists, summarize text, and create slide decks, though ChatGPT is better at some of these tasks than others. And not everyone is jumping on the bandwagon. Students may have several reservations: inadvertently running afoul of their school's honor code, concern about the accuracy and validity of research, and concern that relying on AI might diminish their legal research skills and make them less competitive in a firm environment. On law school exams, ChatGPT produces varying results depending on the prompt. If the prompt is fairly basic, e.g., describe the elements of a contract, ChatGPT has enough examples to generate a reasonable answer. If the prompt turns on terms used only in lecture, ChatGPT will not perform as well. In short, ChatGPT can be an easy way to a B- answer. GPT-4 fares significantly better than 3.5.
  4. Are other lawyers using it? 
    Absolutely! To varying levels of success... By now you have probably heard the story of the New York lawyer who used ChatGPT for a court filing and failed miserably when its fabricated citations came to light, or the judge who used ChatGPT to help write a ruling. Generative AI took many of us by surprise at the end of 2022, and initially not everyone was convinced that AI would infiltrate law firms. Fast forward a few months and it is clear that generative AI, just like e-discovery and computerized legal research before it, will increase efficiency, improve accuracy, reduce costs, and, yes, take over some simpler legal tasks.

    An April 2023 Thomson Reuters survey of mid-size and large firms found that 2-5% of firms were already using generative AI, 30-35% were considering whether to use it, and 51% of respondents agreed that firms should be using generative AI tools in some form. A different study found that 40% of legal professionals use or plan to use generative AI.

    While interest is high, almost two-thirds of legal professionals have serious concerns about security and privacy risks related to AI (e.g., client confidentiality). One thing that remains abundantly clear is that you should not use ChatGPT alone for legal research. While a handful of jurisdictions have moved to restrict filings created solely by ChatGPT, it is likely that future regulations will treat the use of generative AI tools as the new standard, especially for areas like document review.
  5. What should I know about hallucinations?  
    Hallucinations are outputs from generative AI that look coherent but are simply incorrect or sometimes outright fabrications (e.g., a cited case that does not exist). In the legal research context, we see a few different types of hallucinations: citation hallucinations, hallucinations about the facts of cases, and hallucinations about legal doctrine. For many publicly available generative AI tools, hallucinations sometimes come from training on "bad data"; a model may be trained on internet sources like Quora or Reddit posts, which can contain inaccuracies. More often, hallucinations result from the nature of the prompt given to the model.

    Legal research vendors have worked aggressively to build products that limit hallucinations and increase accuracy. First, most have developed specialized models trained on narrower, domain-specific datasets; the idea is that "good data," and only good data, is allowed into the system. Second, most vendors use retrieval-augmented generation (RAG), which searches a database of vetted sources for material relevant to the user's question and supplies that material to the model as "context" alongside the question. Third, some products also use vector embeddings to identify concepts, representing phrases or even entire documents as numerical vectors; coupled with RAG, this increases precision and relevance. Last but certainly not least, almost all vendors incorporate human feedback on responses.
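    The retrieval step described above can be sketched in miniature. This toy example is not any vendor's actual system: it stands in for real embedding models with crude word-count vectors, and the two-document "corpus" is invented for illustration. It shows the basic RAG pattern of retrieving the most similar trusted document and prepending it to the user's question as context:

    ```python
    # Toy RAG sketch: embed texts as vectors, retrieve the closest
    # "trusted" document, and build a grounded prompt for the model.
    import math
    import re
    from collections import Counter

    def embed(text):
        """Crude bag-of-words 'embedding': a sparse word-count vector."""
        return Counter(re.findall(r"[a-z]+", text.lower()))

    def cosine(a, b):
        """Cosine similarity between two sparse count vectors."""
        dot = sum(a[w] * b[w] for w in a)
        na = math.sqrt(sum(v * v for v in a.values()))
        nb = math.sqrt(sum(v * v for v in b.values()))
        return dot / (na * nb) if na and nb else 0.0

    # A tiny hypothetical corpus of vetted legal material.
    corpus = [
        "A contract requires offer, acceptance, and consideration.",
        "Negligence requires duty, breach, causation, and damages.",
    ]

    def build_rag_prompt(question):
        # Retrieval: pick the corpus document most similar to the question.
        q_vec = embed(question)
        best = max(corpus, key=lambda doc: cosine(q_vec, embed(doc)))
        # Augmentation: send the retrieved text along with the question,
        # so the answer is grounded in vetted material, not free recall.
        return (f"Context: {best}\n\n"
                f"Question: {question}\n"
                f"Answer using only the context above.")

    print(build_rag_prompt("What are the elements of a contract?"))
    ```

    Production systems use learned embedding models and large vector databases rather than word counts, but the grounding idea, answer from retrieved "good data" instead of the model's memory, is the same.
    
    
    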
  6. What does this mean for me as a USD law student? 
    In your classes, the most important thing is to understand what guidelines the faculty member has set and how to use these tools appropriately. For your jobs, firms will soon expect associates to arrive with at least a baseline understanding of how generative AI operates, including a healthy dose of information literacy and knowledge of how to treat confidential client information.