
Generative AI Tools for USD Law Students

Prompt engineering

Remember that generative AI does not "know" anything. It is merely generating output that plausibly responds to your prompt within its statistical model of human language. Consider these visual examples of three different prompts in Bing Image Creator, powered by DALL·E.

Prompt 1: "Create a picture of three forks on a wooded path"                                          

Image of three metal forks (the utensils) standing upright in a forest. The image is slightly absurd, since this is not what one would typically associate with a "fork in a path."

Prompt 2: "Create a picture of three branches on a wooded path"

Image of three vertical tree branches jutting upward from the ground on a path in a forest. This image is also nonsensical: tree branches would not typically grow like this.

Prompt 3: "Create a picture of one wooded path diverging into three smaller paths entering a beautiful lush forest in the style of Georgia O'Keeffe."

Image of one wooded path diverging into three smaller paths entering a beautiful lush forest, with green swirls of paint in the style of Georgia O'Keeffe.

 

Prompt engineering is essential for getting better results from existing generative AI tools. You might be thinking, "Sure, those pictures are kinda silly, but GPT-4 is pretty great already; I mean, heck, it passed the bar. Why should I take the time to improve on it?" You cannot use AI as a legal assistant; instead, think of it as a tool and use it in conjunction with human input and interaction. As generative AI tools become the norm, attorneys must learn to harness their strengths fully. Why are you even in law school if a computer can do your job? What will make your legal argument stronger than your opponent's if you're using the same generative AI tool they are? How will you discover novel legal arguments rather than relying on regurgitated word choice?

Consider using the RICE framework for better prompts:

  1. R: Role (assigning a role to the AI)
  2. I: Instructions (specific tasks for the AI)
  3. C: Context (providing necessary background information)
  4. E: Expectations (clarifying desired outcomes)
Poor prompt: "Draft one paragraph of the law prohibiting spite fences in Michigan."

Good prompt: "Draft for a layperson one paragraph of the law prohibiting spite fences in Michigan, and then apply that law to a situation where Landowner 1 is angry that Landowner 2, who is a neighbor of Landowner 1, built a privacy fence but it blocks Landowner 1's view of a lake. Write in a professionally friendly tone."
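For the technically inclined, the RICE structure can be sketched as a reusable template. The function below is only an illustration: it assembles the four RICE parts into a single prompt string, and the function name and the "You are / Task / Background / Output requirements" wording are assumptions of this sketch, not part of the framework itself.

```python
def rice_prompt(role, instructions, context, expectations):
    """Assemble a RICE-structured prompt: Role, Instructions, Context, Expectations."""
    return (
        f"You are {role}.\n"
        f"Task: {instructions}\n"
        f"Background: {context}\n"
        f"Output requirements: {expectations}"
    )

# The spite-fence example from the table above, broken into its RICE parts.
prompt = rice_prompt(
    role="a Michigan property-law attorney writing for a layperson",
    instructions=(
        "Draft one paragraph of the law prohibiting spite fences in Michigan, "
        "then apply that law to the dispute described below."
    ),
    context=(
        "Landowner 1 is angry that Landowner 2, a neighbor, built a privacy "
        "fence that blocks Landowner 1's view of a lake."
    ),
    expectations="One paragraph, professionally friendly tone, no legal jargon.",
)
print(prompt)
```

Writing prompts this way makes it harder to forget one of the four elements, which is most of what separates the "good prompt" from the "poor prompt" above.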

Follow-up prompting: Even with an excellent prompt, generative AI tools, and ChatGPT especially, work best with follow-ups. Follow-up prompting lets you tell the AI what works for you and what doesn't. ChatGPT remembers details from earlier prompts within the same conversation. If you don't like the initial response, tell ChatGPT exactly what you don't like and suggest a path for improvement. It is sometimes necessary to go back and forth with ChatGPT a few times before you are satisfied with the results.
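Under the hood, follow-up prompting works because the whole conversation history travels with each new message, so a correction is read in light of the earlier draft. A hedged sketch of that history, in the role/content message format used by chat-style APIs (ChatGPT's web interface maintains this list for you; the draft text here is a placeholder, not real model output):

```python
# Conversation history for follow-up prompting. Each turn is a dict with a
# "role" ("user" or "assistant") and the message text.
messages = [
    {"role": "user",
     "content": "Draft one paragraph of the law prohibiting spite fences in Michigan."},
    {"role": "assistant",
     "content": "(first draft returned by the model)"},  # placeholder, not real output
    # The follow-up: say exactly what you don't like and suggest an improvement.
    {"role": "user",
     "content": "Too formal. Rewrite for a layperson in a professionally friendly tone."},
]
```

Because the second user message rides along with the first draft, the model can revise rather than start from scratch.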

Few-shot prompting: Give the AI a few examples (also called "shots") of what you want it to do. Few-shot prompting allows the AI to learn from these 2-5 examples. It can be especially helpful for formatting assignments or briefing cases in the style you prefer. Here is an example of few-shot prompting for creating deposition questions.

Example of few-shot prompting:
Prompt: Prepare questions for a deposition.
Example input: Generate questions to ask a witness during a deposition for a car accident case.
Example outputs: Can you describe the events leading up to the accident? What were the weather and road conditions? Did you admit fault or make any statements about the accident at the scene?
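The pattern in the deposition example generalizes: a few-shot prompt is the task, then the worked examples, then the new input left open for the model to complete. A minimal sketch (the function name and the "Input:/Output:" labels are assumptions of this illustration, and the slip-and-fall case is a hypothetical new matter):

```python
def few_shot_prompt(task, examples, new_input):
    """Build a few-shot prompt: the task, 2-5 worked examples ("shots"),
    then the new input with its output left blank for the model to fill in."""
    parts = [task]
    for example_input, example_output in examples:
        parts.append(f"Input: {example_input}\nOutput: {example_output}")
    parts.append(f"Input: {new_input}\nOutput:")
    return "\n\n".join(parts)

prompt = few_shot_prompt(
    task="Prepare questions to ask a witness during a deposition.",
    examples=[(
        "Car accident case",
        "Can you describe the events leading up to the accident? "
        "What were the weather and road conditions? "
        "Did you admit fault or make any statements about the accident at the scene?",
    )],
    new_input="Slip-and-fall case at a grocery store",  # hypothetical new matter
)
print(prompt)
```

Ending the prompt with a bare "Output:" invites the model to continue in the same format as the shots it was just shown.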

Chain of thought: Chain-of-thought prompting encourages models to explain themselves by breaking a complex problem into intermediate steps that are solved individually, which increases the accuracy of the output. Here is an example of chain-of-thought prompting:

Standard prompting:
Q: Roger has 5 tennis balls. He buys 2 more cans of tennis balls. Each can has 3 tennis balls. How many tennis balls does he have now?
A: The answer is 11.
Q: The cafeteria has 23 apples. If they used 20 to make lunch and bought 6 more, how many apples do they have?
The standard output is 27, which is wrong.

Chain-of-thought prompting:
Q: Roger has 5 tennis balls. He buys 2 more cans of tennis balls. Each can has 3 tennis balls. How many tennis balls does he have now?
A: Roger started with 5 balls. 2 cans of 3 tennis balls each is 6 tennis balls. 5 + 6 = 11. The answer is 11.
Q: The cafeteria has 23 apples. If they used 20 to make lunch and bought 6 more, how many apples do they have?
The chain-of-thought output correctly reasons that 23 - 20 = 3 apples remained, and 3 + 6 = 9, so the cafeteria has 9 apples.
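The arithmetic the chain-of-thought answer walks through can be checked directly, and a common zero-shot variant of the technique (an addition here, not part of the example above) is simply to append "Let's think step by step" to the question:

```python
# Verify the worked example's arithmetic step by step.
apples = 23 - 20 + 6   # the cafeteria used 20 of its 23 apples, then bought 6 more
assert apples == 9     # the chain-of-thought answer; 27 was the wrong standard output

balls = 5 + 2 * 3      # Roger's 5 balls plus 2 cans of 3 balls each
assert balls == 11

# Zero-shot chain-of-thought trigger (a widely used variant, assumed here).
question = ("The cafeteria has 23 apples. If they used 20 to make lunch "
            "and bought 6 more, how many apples do they have?")
cot_prompt = question + " Let's think step by step."
```

The appended phrase nudges the model to produce the intermediate steps even without worked examples in the prompt.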

A variation of chain of thought prompting is tree of thought prompting.  Tree of thought prompting guides AI models like ChatGPT-4 to generate, evaluate, expand on, and decide among multiple solutions. 

Prompt engineering tips for legal research

Tips for law firm use

Taken from Mark G. McCreary, Ethical and Thoughtful Use of AI in the Legal Industry, Fox Rothschild LLP (July 2023)

  • Law firms should adopt a policy regarding the ethical and thoughtful use of AI that reinforces attorneys’ ethical obligations and sets guardrails around use of various AI tools.
  • Lawyers must be trained on how to use generative AI effectively and ethically. This training should cover the basics of generative AI, as well as the ethical considerations associated with its use.
  • Lawyers should not use generative AI to replace their own judgment and expertise. Generative AI should be used to augment the work of lawyers, not to replace them altogether. Lawyers should still be responsible for reviewing and approving all documents generated by generative AI.
  • It goes without saying that generative AI should not be used to create fraudulent or misleading documents. Lawyers should carefully review all documents created using generative AI to ensure that they are accurate and compliant with all applicable laws and regulations.
  • Lawyers should keep in mind that generative AI is not perfect and can sometimes produce inaccurate or misleading information. Users must verify the accuracy of any information generated by generative AI before relying on it in any legal proceeding.
  • Lawyers should make clients aware when generative AI is being used to create their documents. This will help to build trust and ensure clients are comfortable with the process. Users must stay current on the latest developments in generative AI and adopt best practices for using it in an ethical and responsible manner.

Generative AI Research Tasks

Generative AI tools can be used for the following types of research tasks:

Find:
  • Find some starting cases on a topic
  • Find cases matching a query

Learn/Investigate:
  • Learn about an area of law
  • Identify the most relevant, timely, & authoritative cases
  • Ensure there is no authority going the other way
  • Identify the relevant rule(s)
  • Ensure you’ve found all cases on point

Create/Synthesize/Summarize:
  • Summarize the leading authorities
  • Prune tangential authorities
  • Harmonize authorities
  • Reconcile authorities in conflict

Adapted from Rebecca Fordon, Cindy Guyer & Adam Lederer, From AND/OR to AI: Techniques for Prompting Generative AI Tools (May 21, 2024).

Evaluating AI-Generated Content

All AI-generated content should be evaluated before it is used or relied on. Always consider the following categories of factors:

Usage: “Did I use the right tool?”
  • Designed purpose of tool
  • Scope of training and/or RAG
  • Tool transparency

Input: “Did I use an effective prompt?”
  • Prompt engineering principles (RICE)
  • Influence of follow-up interactions
  • Missing perspectives from prompts

Output: “Did the tool give an acceptable response?”
  • Source & accuracy verification
  • Bias & perspective in response
  • Interaction dynamic between AI & user
  • Critical evaluation considering ultimate research objective

Adapted from Mary Ann Naumann, Re-Engineering Research: Integrating Generative AI & Prompt Engineering into Information Literacy Programs (June 30, 2024).