Microsoft Copilot is available at no cost to all USD faculty, students, and staff. It can assist with brainstorming, content refinement, image creation, and code generation.
Access it through any web browser or via the Copilot mobile app (iOS and Android). Copilot is comparable to the paid version of ChatGPT but offers stronger protection: signing in with your USD credentials ensures adherence to our privacy and data security standards. To get started, learn how to use Copilot.
Important notice: students' use of AI tools for academic work must comply with USD's Academic Integrity policy and the syllabus policy outlined by their instructor.
What are guardrails, and what does it mean to "train a model"? This LibTech blog post defines some of the common terms surrounding generative AI.
This tracker identifies the various civil litigation rules and procedures impacted by the recent explosion of generative artificial intelligence (AI) tools available to and used by litigators and courts across the country. Specifically, this tracker notes the individual rules and standing orders implemented by certain federal and state court judges, court administrations, and bar associations governing the use of generative AI in court filings.
Are other schools using generative AI tools?
AI has been present in education for quite some time (Turnitin, Grammarly, IBM Watson), but the generative AI/large language models we're talking about here, such as OpenAI's ChatGPT (GPT-3.5 and its successor GPT-4), Google's Bard, and Microsoft's Bing Chat, have charted a drastically different path. Some schools and universities have chosen to ban ChatGPT and similar AI tools outright, whether by blocking them on school networks and devices, locking down exams, or instituting AI-detection programs. But there is no stopping students from accessing these tools on a personal phone or computer at home, even with exam protections or AI-detection programs in place. Large language models will have a transformative effect on education, and many schools are still figuring out what the future holds for them. In recent months, more schools have shifted toward teaching students how to use ChatGPT so they aren't left behind.
Are law faculty using it?
Yes, faculty across law schools are using ChatGPT and other generative AI tools to draft syllabi, create slides, design learning outcomes, suggest hypotheticals, and even build grading rubrics. Of course, you should add greater context, nuance, and alignment with your course concepts and learning objectives rather than relying solely on AI-generated material. If you choose to use it, acknowledge your use of generative AI every time, e.g., "ChatGPT helped with the creation of this syllabus." Some faculty are also incorporating generative AI programs into their scholarship workflow. Many tools are designed for proofreading and grammar checking; some of the more prominent ones are Grammarly, QuillBot, and ProWritingAid. Write.law also offers a GPT for Legal Writers course for a small fee.
Are law students using it?
We've all heard that GPT-4 can pass the bar exam and the MPRE. Not only can it pass the bar, it passes it well, scoring in the top 10% of test takers. So presumably, some law students are using ChatGPT in an attempt to earn a better grade on exams. Students have also used generative AI to outline an essay, write an essay, create reference/citation lists, summarize text, and create slide decks, though ChatGPT is better at some of these tasks than others. And not everyone is jumping on the bandwagon. Students may have several reasons for their reservations: inadvertently running afoul of their school's honor code, concern about the accuracy and validity of the research, and worry that it might diminish their legal research skills and make them less competitive in a firm environment. On law school exams, ChatGPT produces varying results depending on the prompt. If the prompt is fairly basic, e.g., describing the elements of a contract, ChatGPT has enough samples to generate a reasonable answer. If the prompt is specific to terms used only in lecture, ChatGPT will not perform as well. In short, ChatGPT can be an easy way to a B- answer, and GPT-4 fares significantly better than GPT-3.5.