Remember that generative AI does not "know" anything. It is merely producing output that plausibly responds to your prompt within its statistical model of human language. Consider these visual examples of three different prompts in Bing Image Creator powered by DALL·E:
Prompt 1: "Create a picture of three forks on a wooded path"
Prompt 2: "Create a picture of three branches on a wooded path"
Prompt 3: "Create a picture of one wooded path diverging into three smaller paths entering a beautiful lush forest in the style of Georgia O'Keeffe."
Prompt engineering is essential for getting better results from existing generative AI tools. You might be thinking, "Sure, those pictures are kinda silly, but GPT-4 is pretty great already; I mean, heck, it passed the bar. Why should I take the time to improve on it?" Students cannot use AI as a legal assistant; instead, think of it as a tool to be used in conjunction with human input and interaction. As generative AI tools become the norm, attorneys must learn to harness their strengths fully. Why are you even in law school if a computer can do your job? What will make your legal argument stronger than your opponent's if you're using the same generative AI tool as they are? How will you discover novel legal arguments rather than relying on regurgitated word choice?
Consider using the RICE framework for better prompts:
| Poor prompt | Good prompt |
| --- | --- |
| Draft one paragraph of the law prohibiting spite fences in Michigan | Draft for a layperson one paragraph of the law prohibiting spite fences in Michigan and then apply that law to a situation where Landowner 1 is angry that Landowner 2, who is a neighbor of Landowner 1, built a privacy fence but it blocks Landowner 1's view of a lake. Write in a professionally friendly tone. |
Follow-up prompting: Even with an excellent prompt, generative AI tools, and GPT in particular, work best with follow-ups. Follow-up prompting works with the AI to identify what works for you and what doesn't. ChatGPT can remember details from previous prompts within the same conversation. If you don't like the initial response, tell ChatGPT exactly what you don't like and suggest a path for improvement. It is sometimes necessary to go back and forth with ChatGPT a few times before you are satisfied with the results.
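For readers experimenting with a chat-style API rather than the web interface, the back-and-forth described above is just a growing list of messages that gets resent with each turn. The sketch below illustrates that structure in Python; the role names follow a common chat-API convention, no real AI service is called, and the prompts are borrowed from the spite-fence example.

```python
# Sketch: follow-up prompting as a growing conversation history.
# No AI service is actually called; this only shows the message
# structure chat tools keep so later prompts can build on earlier ones.

conversation = [
    {"role": "user",
     "content": "Draft one paragraph of the law prohibiting spite fences in Michigan."},
    {"role": "assistant",
     "content": "(first draft returned by the tool)"},
]

def follow_up(history, feedback):
    """Append a follow-up prompt telling the tool what to improve."""
    history.append({"role": "user", "content": feedback})
    return history

follow_up(conversation,
          "Too formal. Rewrite for a layperson in a professionally friendly tone.")
```

Because the full history is sent with every turn, the tool appears to "remember" earlier prompts in the same conversation.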
Few-shot prompting: Enter a few examples (also called "shots") of what you want it to do. Few-shot prompting allows AI to learn from these 2-5 examples. Few-shot prompting can be especially helpful for formatting assignments or briefing cases in the style that you prefer. Here is an example of few-shot prompting for creating deposition questions.
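One way to see how the "shots" fit together is to assemble the prompt text yourself. This is a minimal Python sketch; the deposition facts and questions are invented placeholders, not real examples from this guide.

```python
# Sketch: building a few-shot prompt from example ("shot") pairs.
# The fact/question pairs below are invented placeholders.

examples = [
    ("Witness saw the accident.",
     "Q: Where were you standing when the collision occurred?"),
    ("Witness signed the contract.",
     "Q: Did you read the agreement before signing it?"),
]

def few_shot_prompt(examples, new_fact):
    """Concatenate instruction, example shots, and the new item to complete."""
    lines = ["Write one deposition question for each fact, in the style shown."]
    for fact, question in examples:
        lines.append(f"Fact: {fact}\n{question}")
    lines.append(f"Fact: {new_fact}\nQ:")
    return "\n\n".join(lines)

prompt = few_shot_prompt(examples, "Witness spoke with the landowner.")
```

The trailing "Q:" invites the model to continue the pattern the shots established.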
Chain of thought: Chain of thought prompting encourages models to explain themselves by breaking down complex problems into intermediate steps that are solved individually. This increases the accuracy of the output. Here is an example of chain of thought prompting:
A variation of chain of thought prompting is tree of thought prompting. Tree of thought prompting guides AI models like ChatGPT-4 to generate, evaluate, expand on, and decide among multiple solutions.
Taken from Mark G. McCreary, Ethical and Thoughtful Use of AI in the Legal Industry, Fox Rothschild LLP (July 2023)
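The generate, evaluate, expand, and decide loop of tree of thought prompting can be pictured as a small search over candidate arguments. The sketch below is a toy Python illustration only: `propose` and `rate` are stand-ins for steps the model itself would perform via prompts, and the candidate legal theories are invented placeholders.

```python
# Sketch: tree-of-thought as generate -> evaluate -> expand -> decide.
# `propose` and `rate` are toy stand-ins for what the model would do.

def propose(argument):
    """Toy expand step: branch one candidate into two refinements."""
    return [argument + " (narrow reading)", argument + " (broad reading)"]

def rate(argument):
    """Toy evaluation: prefer longer, more developed candidates."""
    return len(argument)

def tree_of_thought(seed_arguments, rounds=2, keep=2):
    candidates = list(seed_arguments)
    for _ in range(rounds):
        # Evaluate and prune, then expand the survivors.
        survivors = sorted(candidates, key=rate, reverse=True)[:keep]
        candidates = [c for s in survivors for c in propose(s)]
    # Decide: return the best remaining candidate.
    return max(candidates, key=rate)

best = tree_of_thought(["Spite-fence claim", "Nuisance claim", "Easement claim"])
```

In practice each of these steps is carried out through prompts to the model rather than Python functions; the point is the branching, pruning structure.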
Generative AI tools can be used for the following types of research tasks:
| Find | Learn/Investigate | Create/Synthesize/Summarize |
| --- | --- | --- |
Adapted from Rebecca Fordon, Cindy Guyer & Adam Lederer, From AND/OR to AI: Techniques for Prompting Generative AI Tools (May 21, 2024).
All AI-generated content should be evaluated before it is used or relied on. Always consider the following categories of factors:
| Usage: “Did I use the right tool?” | Input: “Did I use an effective prompt?” | Output: “Did the tool give an acceptable response?” |
| --- | --- | --- |
Adapted from Mary Ann Naumann, Re-Engineering Research: Integrating Generative AI & Prompt Engineering into Information Literacy Programs (June 30, 2024).