r/Ask_Lawyers 1d ago

Is AI taking over law?

I think the major issue with AI taking over law is that it can't be held accountable, but maybe others think differently. What are your thoughts? Do you guys use ChatGPT or StandardUnions to buy trained AI?

0 Upvotes

8 comments

19

u/theawkwardcourt Lawyer 1d ago

People have asked this question here a lot lately. I always give some variation on the same answer:

I have never used, and will never use, any AI to write anything.  Lawyers have repeatedly gotten in trouble for letting AI write their legal documents.

As I understand it, AI, in its current incarnation, doesn't know or understand anything in the sense that humans do. All it can do is identify and replicate patterns. That is some part of intelligence and of legal reasoning, but there's so much more that is required for truly intelligent decisionmaking. AI can't tell which parts of a pattern are meaningful, or extrapolate meaningfully about potential consequences. The result is that the AI will often just completely make up cases - something which, I'm sure you're aware, we are not allowed to do.

Corporations are spending so much money to develop AI so that they can replace human workers. They think it'll be good for their businesses, to be able to save on labor costs. It's a classic game-theory problem: It may be good for an individual business to get rid of most of their human employees, but if every business does it, it'll be devastating to the economy and human society at large. If people are suddenly unemployable, they'll have no mechanism to exert political power. Even if we worked out some kind of universal basic income, there would still be disastrous political consequences to people not having their work to use as a tool of political influence, and to hold their employers accountable. Not to mention that there'll be no one to pay for all the services being provided by AI, if everyone uses it to replace humans. This is not the oppressive cyberpunk dystopia I signed up for.

As companies seem more and more inclined to use AI to lay off employees, I am profoundly grateful to be a part of a profession with a conservative, protectionist institutional culture, and with the social power and incentive to protect its role in society. We need more of these, to resist the lunatic capitalist push to prioritize short-term profits above quality of service, employees' needs, and social welfare.

AI is fantastic if it can help detect cancers and write code, but it should never be a substitute for human judgments about how to resolve personal conflicts, prioritize human needs, or treat people under institutional power. These processes demand accountability and humanity, even if flawed. The decisions will be flawed anyway; but if we know that, we can adjust, in the light of mercy and compassion. The proliferation of AI into these spaces would inevitably lead to the idea that the decisions were being made perfectly, and mercy and compassion would be dispensed with entirely.

For lawyers specifically, there's an additional problem with AI: large language models train on all the data they have access to, including any that you give them. So if you input confidential client information into the machine, that's now a part of its data set, which you've disclosed in violation of your professional obligations. That information could emerge as part of the AI's output in some future use, possibly in ways that could compromise your client's confidentiality or other interests. I would argue that it's an ethical violation for an attorney to give any client data to any LLM AI.

11

u/Fluxcapacitar NY - Plaintiff PI/MedMal 1d ago

No, and it won't. Anybody who screams otherwise that the sky is falling is an idiot and knows nothing about the profession or industry.

4

u/AliMcGraw IL - L&E and Privacy 1d ago

In my state, new ethics guidelines on the use of AI have all but ruled it out. Putting any specific facts about the case into an AI breaks attorney-client privilege, and you will be rebuked or even sanctioned if you cite an imaginary, AI-created case citation.

I suppose it's kind of useful in helping you think through arguments and find any points you might have missed (if you're decent at prompting, or have a friend who's a decent prompt engineer), but you have to do that with material that's already public record or in public court filings, or you risk a breach of confidentiality and privilege.

3

u/Dingbatdingbat (HNW) Trusts & Estate Planning 1d ago

AI will not take over the law, ever, but it will change the way we practice law, similar to how computers fundamentally changed the way lawyers practice.

AI is a tool, and can be a useful tool, but it cannot replace a good lawyer. 


1

u/ChrisLawsGolden Lawyer 1d ago

As everyone here agrees, AI hasn't taken over the law.

But I'm apparently in the minority in believing that AI will have a tremendous impact as the technology matures in the years to come.

It's true that right now generative AI is primitive. The models hallucinate (i.e., make up complete bullshit), and the energy costs of training them are quite high. We haven't exactly figured out the best uses, especially for the legal field.

It should also be stressed that LLMs and generative AI are tools. They are only as good as the user of the tool. And it's true that, currently, the tools are rough around the edges; we don't have a great framework for their use.

With that said, the potential for AI is enormous. Generative AI is only in its infancy, and we are already seeing consistent progress. Even with these primitive models, there are some very useful cases for the legal field.

If AI development were permanently frozen, then I would agree that its usefulness may be limited, but technology doesn't stand still.