Guidelines for the use of AI in a law firm

AI Utopia
This is a quick summary of how to use and not use AI in a small law firm. You may find it applicable to your business as well.

Executive Summary

Warning: Before you post anything to an AI system, ask yourself if you would say it out loud in a room full of strangers.

Opportunity: AI can save time on routine tasks and boost overall efficiency.

What is AI?

Artificial intelligence (AI) is a broad category of software designed to perform tasks that normally require human judgment. This includes things like …

  • Self-driving cars,
  • Fraud detection systems (e.g., at credit card companies),
  • Recommendation engines (such as Spotify or Netflix), and
  • Image-generation tools.

What is an LLM?

A large language model (LLM) is a specific type of AI designed to work with language — reading, writing, summarizing, and generating text. ChatGPT, Claude, and Gemini are common examples.

Although some LLMs can generate images or analyze documents, their core function is predicting and producing text based on patterns in large volumes of training data.

AI doesn’t know anything.

AI doesn’t understand legal issues, facts, or client intent the way a human does. More generally, it doesn’t understand anything. It’s a very complicated language map. It knows all the ways the word “orange” is used, and how orange is related to apple, or fruit, or Trump, or the Denver Broncos, but it doesn’t know what an orange is.

When AI gives an answer, that answer is a statistical prediction of the text most likely to follow your query. The process mimics understanding, and it can be very convincing. But it’s an illusion. It doesn’t know or understand anything.

AI is not private.

With very limited exceptions, unless you’ve custom-built a private system with special security features, your AI workspace is not private. Before you post anything to AI, treat the information as if you were making it public and handing it to someone who wants to use it against you and against your client.

AI systems don’t know whether information is confidential, privileged, or sensitive, and even if they did, they don’t have the right safeguards in place to protect that information.

LLMs Hallucinate

A large language model is designed to predict how a competent-sounding answer would be worded. It doesn’t know whether the answer is true, and it doesn’t check. (Although you can ask it to check, which can be helpful, and sometimes amusing.)

Hallucination is not a bug, and it’s not an evil plan to deceive you. It’s a side effect of what the LLM is designed to do, which is to produce statistically likely text.

LLMs are not designed to verify facts, check sources, or confirm legal accuracy. When the model lacks sufficient information — or is pushed beyond what it can reliably infer — it will fill in the gaps with something that sounds correct.

In legal contexts, hallucinations commonly take the form of …

  • Invented case law or statutes
  • Incorrect citations that look real
  • Misstated legal standards
  • Confident summaries that subtly alter key facts
  • Procedural rules that are plausible but wrong

The output looks polished, confident, and professional — but there’s no guarantee it’s accurate.

Important: If an LLM gives you a case name, statute, or citation, check it with an officially recognized and reliable source. The LLM might have made it up.

Good and bad uses of AI

Appropriate Uses of AI

These uses are generally low-risk when no confidential information is included.

  • Explaining general legal concepts (e.g., “What does per stirpes mean?”)
  • Rewriting text for clarity or tone, using fully fictional or generic examples
  • Generating checklists such as “What assets might need retitling to avoid probate?” (i.e., without any client details).
  • Summarizing publicly available laws or IRS rules at a high level. (In this case, provide the text yourself. Don’t count on the AI to have the correct version.)
  • Converting formats:
    • image → text
    • long text → short summary
    • bullet points → paragraph
  • Drafting non-legal business content (emails, policies, training outlines)
  • Looking up public facts (e.g., “which county is this address in?”)
  • Looking for a different way to word something (so long as the text does not have client information)

Inappropriate Uses of AI

These uses create legal, ethical, or confidentiality risk:

  • Entering client information, such as an asset list, even if anonymized. (Personal information can sometimes be deduced even from “anonymized” data.)
  • Asking AI to:
    • apply law to a specific situation
    • recommend a legal strategy
    • draft or revise wills, trusts, or estate documents
  • Relying on AI for:
    • legal conclusions
    • compliance determinations
    • tax advice
  • Copying AI-generated citations, statutes, or case law without verification
  • Treating AI output as “research” rather than unverified draft text
  • A non-lawyer relying on AI to define or describe a legal concept for use in a legal document. Using AI does not authorize a non-lawyer to practice law.

Beware Default AI Settings

One of the biggest risks with modern AI tools is not deliberate misuse, but features that are enabled by default. Many everyday productivity tools now include AI-powered summaries, writing assistance, transcription, and note-taking that operate automatically unless turned off.

Rule of thumb: If a tool is giving you an AI-generated output, it is necessarily reading, analyzing, or listening to the underlying content.

What to watch for:

  • Automatic document summaries or side-panel “insights.”
  • Writing, spelling, or grammar assistants that work in real time.
  • AI-generated meeting notes, transcripts, or action items.
  • “Smart” features that suggest replies, rewrites, or next steps.

If you see these things, please report them to management so they can find a way to turn them off.

Conclusion: AI is a tool, not a decision-maker

AI is best understood and used as a drafting and reference assistant that sometimes makes mistakes.

It cannot replace professional judgment, ethical responsibility, or legal accountability.
