
How AI is Shaping the Future of Paralegals
Join us for an engaging exploration into the transformative role of AI in the legal field, specifically focusing on how it is reshaping the responsibilities of paralegals. We are joined by esteemed guests Joseph Vallette, Nancy Golden, and Bryan Hance from National University, who share their expertise on integrating AI into the paralegal studies curriculum. Discover how AI is influencing the way paralegals approach tasks, shifting the focus toward those requiring human judgment, and how this technological evolution is being embraced in educational settings to prepare students for a rapidly changing legal landscape.
Listen in as we navigate the challenges and opportunities presented by AI in law office management. Our conversation underscores AI’s potential to enhance traditional methods, offering a supplementary tool for generating ideas and refining procedures without replacing human expertise. We highlight the importance of developing critical thinking skills to craft precise AI prompts, ensuring the effective use of AI while maintaining the integrity of legal practices. The evolving roles of paralegals are also discussed, emphasizing diverse career paths that include traditional roles, e-discovery, and business administration.
Finally, we explore the role of AI in enhancing critical thinking and education, both within legal studies and broader academic contexts. Our guests share real-world examples of AI’s impact, including its role in improving efficiency in small to medium-sized law firms and creating interactive learning experiences that transcend traditional textbooks. Discover how educators at National University are guiding students to maximize AI’s potential, fostering an environment where students are equipped with cutting-edge skills and prepared for the dynamic demands of the job market.
Show Notes
- 0:10:12 – Impacts of AI on Legal Education (133 Seconds)
- 0:15:08 – Developing Specific Prompts in Technology (84 Seconds)
- 0:21:22 – AI in Law Office Management (62 Seconds)
- 0:34:27 – Effective Prompt Engineering in AI Education (133 Seconds)
- 0:47:53 – AI-Assisted Brainstorming and Decision Making (76 Seconds)
- 0:51:21 – AI Implementation in Law Firm Management (49 Seconds)
- 0:56:16 – Effective Use of AI in Education (110 Seconds)
Announcer (00:01)
You are listening to the National University Podcast.
Kimberly King (00:08)
Hello, I’m Kimberly King. Welcome to the National University Podcast, where we offer a holistic approach to student support, wellbeing, and success- the Whole Human education. We put passion into practice by offering accessible, achievable higher education to lifelong learners.
A very interesting and relevant conversation to be had today on the podcast about how AI is possibly changing the role of paralegals. According to a recent article, law firms are adapting to AI by restructuring legal support roles. This from a 2024 article in Legal Tech News. It reports that while some paralegals welcome AI’s assistance in handling low-level tasks, others are concerned about its impact on billable hours and job stability.
This has led some firms to redefine paralegal responsibilities, focusing more on tasks that require human judgment and less on those that can be automated. We are discussing how they are teaching this at National University. Stay with us.
On today’s episode, we’re discussing how AI is changing work for paralegals. And joining us is Joseph Vallette, Esquire. Joseph graduated from Emory University in 1986 with a BA in history and political science and earned his JD cum laude from New York Law School in 1991. He has over 30 years of experience practicing law in New York, working as a personal injury litigator,
and has extensive background in consulting with legal professionals on leveraging computer technology through automation for greater efficiency. He has taught paralegal students in a collegiate environment for over 28 years. Currently, he’s a part-time assistant professor at National University, where he teaches computers and the law, advanced legal technology, e-discovery, and law office administration to paralegal students and legal studies majors, as well as AI in the law to law students.
Kimberly King (02:14)
He has also redesigned all four technology courses for the Legal Studies program at National University, incorporating AI in their creation process and integrating AI into the curriculum for three of them.
Also joining us is Nancy Golden. Nancy is a California licensed attorney and an assistant professor of Paralegal Studies at National University. She earned her bachelor’s degree in psychology at UCLA in 1982 and her Juris Doctor from Whittier College School of Law in 1986. Nancy has been teaching paralegal students since 1991, when she started at the University of La Verne, and she’s been teaching at National University since 2007. Plus we also have Bryan Hance. Bryan is a licensed California attorney, professor, and academic program director for the pre-law and paralegal studies programs at National University. Previously, he was an assistant professor of law at Glendale University College of Law; a partner in the law firm of Lewis Brisbois Bisgaard & Smith, where he was involved in both litigation and transactional matters; associate general counsel at Pepperdine University; and the executive director of the Center for Conflict Resolution, handling mediations, arbitrations, and dispute resolution training. He received his BA from UCLA, his JD from Pepperdine University School of Law, and his LLM with honors and SJD from UCLA School of Law. Wow, welcome to the show. Thank you so much. We welcome all of you to this podcast. How are you?
Joseph Vallette (03:59)
Good, thanks.
Bryan Hance (04:01)
Great. I’m glad to be here.
Kimberly King (04:03)
Thank you. This is an interesting topic and so relevant in today’s environment. We’re talking about how AI is changing work for paralegals. So, Bryan, I’m going to start with you. What technology courses do you currently offer at National University?
Bryan Hance (04:20)
We actually offer four courses. So as the academic program director, my responsibility is to oversee the curriculum. And so one of the things that Joe and Nancy have been helping me with is to build up our technology courses. So we offer four courses. One is Computers and the Law. And that basically teaches computer technology using the latest software for time and billing, case management, litigation support, those kinds of things. Then we have a law office administration course, and that explores the organization and functioning of a law office. The third one is e-discovery, which is sort of the hot topic these days. And that is basically gathering, reviewing, and then producing electronic information, typically in response to litigation and investigation.
And then the fourth one is advanced legal technology. And so that one, we take more of a deep dive look at the technology, looking at basically the major technology platforms that are out there, the different software that’s used.
Kimberly King (05:32)
You know what I hear you saying basically is how can we be more efficient with the addition of AI? I would imagine with years and years of that backlog, it probably makes it such a faster way to get things done, doesn’t it?
Bryan Hance (05:41)
It makes a huge difference, a huge difference.
Kimberly King (05:45)
What inspired you to integrate AI into the paralegal studies curriculum?
Bryan Hance (05:52)
Well, that’s a good question. I think unless you’ve been living under a rock the last couple of years, AI has exploded, it’s everywhere. There are very few industries and sectors of our society that haven’t been impacted by artificial intelligence. But even before ChatGPT came out, burst onto the scene back at the end of 2022, I think, I had a couple of aha moments.
I, in my own practice, was starting to see firsthand how technology was going to have a huge impact on the practice of law. Back in the end of 2006, beginning of 2007, the rules governing federal court litigation changed. And it basically forced civil litigants to comply with new rules about all of this electronically stored information.
Bryan Hance (06:53)
And it was around the same time that I personally was litigating some pretty high-profile cases. I had 25 of the 650 clergy sexual abuse cases facing the Catholic church here in Los Angeles. And my cases were teed up to be the first to go to trial. And so we literally were in scramble mode. We had to heavily rely on technology to get those cases ready to go with the thousands of documents that we had. Now, I was working for a really large law firm at the time, so pulling in additional resources was not a big deal. But about 70% of law firms today are small or medium-sized, and so they just don’t have the bullpen of lawyers and paralegals and support staff to pull something like that off. So they have to turn to technology to help them.
The problem is, on average, lawyers in those firms spend about half their time practicing law. The other half of their time, they’re pushing paper and doing client development and those kinds of things. I think in a recent survey, about 95% of legal professionals agreed that it’s important to use new legal technology in their practices, but 50% of them said they didn’t have time to learn it. So it’s a catch-22. You want to learn the technology to save you time, but you don’t have time to learn it. But hopefully, AI is starting to change that, and it’s super easy to use and something even I can use.
Kimberly King (08:33)
Well, and it’s true. I was just going to ask you about the next question about why it’s important to learn AI for paralegal students. But really, this is something our students, having the phone, having technology, this group of students, if you think about the younger set, have really grown up with it, haven’t they? It’s really teaching the- I don’t even know what age to throw out there- but maybe 40 plus, 50 plus.
Bryan Hance (09:00)
Yeah, so another aha moment that I had, you know, to that point was a couple of video clips I watched a few years ago. One of them was a 60 Minutes episode. Some of you may have seen it, but it shows how AI was used to record and preserve the testimony of Holocaust survivors. And what they did was they brought in the Holocaust survivors, obviously, you know, they’re fewer and fewer these days, but they brought them in to record their testimony.
They had cameras all over the place and all this equipment, and they hit them with like four or five thousand different questions. And that way they could preserve not only what they had to say, their stories, but they also created a hologram so you could actually communicate back and forth with them. Obviously all of that’s recorded, but it looks very realistic. And, you know, at the end of the segment, it showed students with headphones and a laptop in a classroom asking questions. And I thought, there goes my teaching job, you know. All Samantha Madura has to do is to find the best law professors in the world, sit them down for a few weeks and pelt them with a few thousand questions. And then there goes my job.
Kimberly King (10:32)
But yeah, that is the personalized side and the expertise side. Boy, it sure does make things easier, again, looking at all of that back research and that history, which is just a treasure, isn’t it?
Bryan Hance (10:44)
Yeah, and when you talk about using the phone, the other video that I happened to catch was the launch of GPT-4o, where the presenters are interacting with the software and they’re no longer talking to a recording as they did in the 60 Minutes clip. Now AI has its own thought process and personality, and it’s now an app on your phone.
Bryan Hance (11:13)
So all of that to say, you know, it wasn’t a matter of if, but when, you know, we were gonna bring this instruction and technology to our program. And we know students are using it. The question is, how do we best teach it and teach them to use it responsibly?
Kimberly King (11:32)
Because it’s not going away. So it’s best to meet everybody and give them those tools and put boundaries around those, I guess. So what are the practical things that AI can do for practicing attorneys, and how can paralegal students assist them?
Bryan Hance (11:48)
Well, some simple ones are just generating drafts of documents, streamlining the drafting of employment manuals, those kinds of things, managing billing systems and office supplies, automating certain processes like calendaring things, scheduling things. Again, for those, particularly those small firms and medium-sized firms that may not have the staff or the funding, the resources to be able to do all those things, AI hopefully will be able to step in and really make their jobs easier.
Kimberly King (12:24)
Excellent. Next set of questions I’m going to direct to Nancy. As a self-professed technology-phobe, what were your initial thoughts on integrating AI into paralegal education? How have your views evolved since working with AI tools like Polly?
Nancy Golden (12:42)
Well, I was very, very intimidated at first. I didn’t know how difficult it would be to learn how to use it, how accurate it would be. I’m used to just teaching the traditional ways of teaching. But as I started experimenting with it, I mean, I just felt like I had to bite the bullet and try it.
And as I started experimenting with it, I saw all the different uses that it can have to assist. And it made me feel better that it really does not replace the human touch. You still need to verify. And I think my position is still safe for a few more years, without having AI totally teach the class.
Kimberly King (13:35)
You know, and that is everybody’s fear. Are we going to be replaced? And in every kind of format, no matter what we do, I think. That human element is still so very key, and we can never lose sight of that. What do you see as the biggest challenge for students and instructors when transitioning from traditional law office administration methods to AI-enhanced learning?
Nancy Golden (14:01)
Well, I think for the students- and I’m going to give a big shout out to Joe, he did an incredible job in setting up the law office management, or administration, class that I am teaching. It has very detailed instructions on how to set up Polly, who is your virtual assistant. And so that was really great. The problem is we also do have older adults in our classes, and there are challenges with them in learning how to set it up, feeling intimidated by the technology, and learning to trust the technology.
And for instructors like me, who- I’ve been teaching 33 years and I’ve spent most of that time perfecting my teaching methods- to change over, there was sort of a hesitation, a resistance to change. But I see, again, AI is more of a supplement. First of all, I think Bryan mentioned getting started with a document or procedure, and it’s been helpful as a starter and for giving good ideas. More as a mentor and not the end-all for everything.
Kimberly King (15:29)
Kind of like a prompt, or having somebody- like if you threw it out, it can-
Nancy Golden (15:31)
Yeah, the biggest, yeah. And we spend a lot of time in our class developing prompts- the old theory of garbage in, garbage out. So you have to learn to be very specific in your prompts in order to get something good that you can go ahead and start with and then edit afterwards.
Kimberly King (15:57)
You know, and it’s funny to think about it that way. I feel like, at least from my point of view, having kids now that are in their twenties- being the best of both worlds, I would say, before technology and after technology- we do have to learn to be specific. Even when you’re on social media, everything, you know, if you’re verbose, you really have to cut it down to maybe a hundred words, if that, or 30 words or something. So that’s a challenge. It’s like learning another language, isn’t it?
Nancy Golden (16:25)
Yes it is.
Kimberly King (16:27)
So tell me a little bit about your experience- what essential skills do paralegals risk losing if they rely too heavily on AI for administrative tasks?
Nancy Golden (16:38)
I think one of my fears is for their critical thinking skills, because they could become too lazy, and you still need those skills to edit whatever AI has helped you put out. And I think that is a big challenge, because you still need that. When I was in law school, they were just introducing Westlaw, which is a legal research platform. And you really had to be specific in your queries in order to get good research results.
But there was the fear that that was going to take over the book method. I think one of the issues that I’ve had- because I’ve also taught legal research for many years- is that I want my students to learn the books first, and then they can have better queries in Westlaw for their legal research. So I think learning some of the basics before completely relying on ChatGPT, and learning how to edit your queries and edit your results, is a big challenge.
Kimberly King (17:56)
And I would agree. I love that you’re talking about that critical thinking. It has really been lacking, I think, with this next generation, probably even before AI. But it’s just that sense of being curious and trusting yourself, asking questions just to find out more and probe on both sides- whether it’s political or legal- to know that you really ask questions from both points of view.
Nancy Golden (18:23)
And one of the things I love about ChatGPT is that you can build on that. You get a result and then you could build on your query and get a more refined result also, which is a big help.
Kimberly King (18:39)
Well, good for you. I love hearing about this and how we’re utilizing this now with the paralegals. Now, Joe, I’m going to go to you. Thank you very much, Nancy.
Nancy Golden (18:49)
Thank you.
Kimberly King (18:51)
How do you see AI transforming law office management in the coming years?
Joseph Vallette (18:57)
I would even step back and say it’s transforming the law itself. Bryan mentioned that, you know, we have four courses at National University. My idea in creating them was to give our paralegal students options, because there are three types of paralegals you can become. One, the traditional paralegal: you help the clients and whatnot, and you’re dealing with the Computers and the Law and the advanced computer courses. You could also be more high-tech and want to do the e-discovery. It’s more fast-paced than traditional work; it’s sort of concentrated. There are companies that hire paralegals just to do that. And then there’s the law office administration, the business side.
For me- we’ll talk about Polly later, and I call her the polymath- I sort of have that polymath mentality. I believe in the liberal arts and I draw from a lot of different things. So my whole process came from business, from Peter Drucker, a management consultant. He said that in the knowledge economy, paralegals are technologists. They’re the means of production. They’re the ones that use the technology to create what lawyers sell. Okay. And around that same time, in the late sixties, was the advent of the paralegal. Now the paralegal has taken over from the secretary, because lawyers realized that if they get a paralegal, they can bill for their time, but they can’t bill for a secretary’s time.
The problem is paralegals do what lawyers do. They forgot what secretaries do. Secretaries are the administrators- they were sent to legal secretarial schools, they learned how to administer an office- and that’s becoming more and more problematic, especially in the small and mid-sized firms that can’t afford to hire legal secretaries, or even administrative staff or file clerks. Now that everything is online, you don’t even need the file clerk staff or the mailroom staff.
I’m looking at it as, what can I best do for the students to give them options? Some students may not like the day-to-day dealing with clients. They might like the business side of the law better. And that’s where law office management and law office administration come in, because I’m recognizing that, because it’s generative AI, it’s a great brainstormer. So when you look at that, all I have to do is ask, you know, what do I do in this type of scenario? And if it’s asked correctly with the right prompts, you’re able to get a response.
Now that’s the technology we’ll talk about as we go through, but that was the main concept. So I feel that AI is going to be involved in all of it. Because again, when you’re writing documents and you have the white page in front of you, it’s easier to have AI create a first draft. When you’re dealing with a drop-off in the reading comprehension and writing capabilities of the younger generation, I’m going to be able to, as a lawyer, get professional results as long as the student, the paralegal, and the legal secretary know how to use the document, especially a good first draft. You know, again, it has to be edited.
You know, generative AI is only the creative thinking. It has no ability for critical thinking. We are the critical thinkers. So if we don’t know how, and don’t train our students to be the critical side of that, then AI is garbage in, garbage out, as Nancy said. You know, we have to be able to say, okay, are these good ideas? How can I further elaborate on that?
And you have to do that with the other legal research and whatnot. But the beauty of doing it with law office administration first is I don’t have to worry about anything going to court. I don’t have to worry about anything going to a client. This is all stuff that’s internal- basically, give me a list of this, and how do I process that, and what’s the procedure for this? And if something doesn’t work, you change it. There are no licenses on the line or malpractice on the line. So it was a good starting point for an introduction.
Kimberly King (23:20)
You know, and it kind of leads me to my next question, but I wonder about security breaches- the possibilities when you’re dealing with, you know, your cases. Do you have guardrails around that when you are working with AI? And I guess, what are those key challenges of integrating AI tools into law office administration?
Joseph Vallette (23:41)
Well, basically I go with the adage: if you’re getting something for free online, then you’re basically the product. So if you’re using free AI, don’t put anything online, because that’s the stuff they’re going to be using to train their system. But if you’re using something like Lexis AI, which is a legal research engine, or you use the paid version of ChatGPT, you can upload certain things that are not necessarily- you still have to be careful about a certain kind of confidentiality, but you have a higher tier.
There’s a hierarchy of things that you can do when you start paying for it. And again, it’s up to the individual using it to know exactly what is being used as training material and what is theirs. Again, if you don’t know what the situation is on the other side, then don’t use it. Don’t upload. You have to read the service agreement.
I’ve done that in the past- when you have those agreements that no one ever reads, I’ve actually copied and pasted them, put them into AI, and said, summarize all the possibilities where I can go wrong. And it gave me a great summary, and it was like three paragraphs. That’s how I use it. AI for me is my assistant. It’s like, I don’t want to read it all- tell me where my problems are, what’s in this that I should look at.
And sure enough, it gives me the list and I get a nice brief summary and then I’ll answer yes, I’ll accept.
Kimberly King (25:15)
My brother-in-law is an attorney in Los Angeles. He’s a district attorney in LA. And every time we do anything, he always looks at that small print. And I’m like, of course you do. But now we have an assistant. So I love that. Can you describe the most impactful AI-driven tasks students engage in during this course?
Joseph Vallette (25:36)
Basically, it’s just learning how to prompt engineer, you know, because one of those things is that everyone has a misconception with regard to AI. And the biggest is the name- they say artificial intelligence, but it’s really not intelligent. You know, when you look at human intelligence, there are nine different types- linguistic, spatial, mathematical, and so on. Everyone has all nine, but some are better at others, and you can improve them. And the intelligence that AI is mimicking is just the linguistic, you know, and it looks for patterns.
So it’s not really intelligent. It’s basically taking a lot of words and putting them back at you. You know, it’s the old thing of, you give a monkey long enough to type, it’ll write Shakespeare. Well, it’s the same type of thing. We’ve given a monkey a lot of information.
And now we have to be able to find the pattern and send it out. So the prompt engineering, I think, is the tool that needs to be taught to everyone. And the misconception, again, was the lack of intelligence. And how I found that was a very simple question. I asked ChatGPT, what color is George Washington’s white horse? And it said white. And I said, but what are you basing that on? And it insisted that it was white.
Well, according to the historical record, Washington’s white horse is gray. So as a subject matter expert, I know that there’s no such thing as a white horse in nature, except if it’s a genetic mutation, an albino. So it could not be white, it had to be gray, but it told me it was white. And then I pressed further- I said, what if I looked at the historical record? Oh yes, I’m sorry, it wasn’t white- the horse’s name was Nelson and it was a chestnut. It’s like, well, I know a chestnut’s not going to turn white.
The horse’s name was Blueskin and it was a gray. So again, I had to go several iterations in to be able to get the right answer. And then it said, well, it was a tautology, and logically the answer was in the question. So that’s why it did it. I said, but it’s not a good question. In law, that would be objected to as a leading question, and you can’t put an answer in the question. So it’s not a good question. So it came back and finally said, okay, you’re right. According to the historical record, it got it right.
And I said, okay, who’s buried in Grant’s tomb? And again, if you go back to Groucho Marx and You Bet Your Life- and you know, we’re going way back now- they used to do that as a giveaway. But in reality, it came back and said it was Grant and his wife.
I said, were they really buried? And it said, yes. I said, I thought you can’t get buried in a tomb- you’re interred. It goes, you’re right. No one’s buried in the tomb. But again, I went for, one, the leading question, and two, specificity of language, and it failed on both issues. So again, from the critical point of view, I knew even on something so basic, we needed to be able to watch the critical side, because it doesn’t do critical real well. It doesn’t do it at all. It creates. And if you don’t watch it- that’s how lawyers got in trouble in my neck of the woods, in New York, in federal court, because a lawyer asked ChatGPT to find a case that said X.
Well, it couldn’t find one, so it made it up. It did exactly what the person asked for. It found a case- now, it was a made-up case, but it did what it was asked. Now, if the lawyer had said, only find cases that were actually decided by a court, and if you can’t find one, tell me there is none.
That would have been a better approach, because then it would have identified it and said, no, there is no case. But a poor question- just like, what color is Washington’s white horse, or who is buried in Grant’s tomb- the lawyer said, find me a case, and it did exactly that. And then came the violation of law school 101: never put something in a brief or a memo that you haven’t looked up and determined actually exists. For failing to do that, that lawyer was sanctioned. And rightly so, you know, relying on a new technology and not being able to justify it. And it wasn’t meant for research- that’s Lexis AI, analytical AI. This was generative, so.
Kimberly King (30:15)
So that story- I love that. Again, you’re really teaching the students how to ask the right prompt and be very specific. So now that you’ve put that out there about what color Washington’s white horse is, does it change because you’ve already corrected it? Like, if I were to ask that right now in AI, or is it still based on my prompt?
Joseph Vallette (30:39)
It’s still gonna be based on your prompt, because I’ve done it before, and different AI engines do it a little differently, but not necessarily. So this one didn’t change it. Sometimes it’ll be a little bit more. But I looked at it and said, okay, how could I use this to my advantage? And I said, okay, if I were to ask you to not answer the question first, but to analyze my question, my prompt, could you do that? And it came back and said, yes.
So I said, okay, now you’re going to be a virtual assistant working in a law firm. And anytime I ask you a question about law office administration, before you answer, I want you to evaluate and analyze my prompt and return that. And that’s how this whole idea of Polly came about- being able to build this virtual assistant and virtual mentor into the system. Because now, instead of them just asking the question, I’m having them ask it but have the question evaluated. Because we were talking about critical thinking before- lawyers do three things really well based upon their training: they answer questions, they question answers, and they question questions.
And what Polly is trained to do on its own is to answer questions, and it will answer even if it has to make it up. It’s creative- it’ll write a novel. It quickly turns the white page into a lot of text. But on its own, it doesn’t question answers and it doesn’t question questions. And my thought process was, what if I could make it? If I can take that step back and say, okay, Polly, you’re no longer just going to answer my question- before you answer it, I want you to give me a better prompt. I want you to say what’s wrong with it, how I could improve it. But how do I do that?
So what I did was write a script. I said, okay, give me a hypothetical of a law firm. We’re going to call it Dewey Cheatham and Howe. We’re going to have so many lawyers, so many paralegals. They’re going to focus their law practice on personal injury law and contracts, or whatever I mentioned. And we’re going to be a small to mid-sized law firm in the California area.
And I want to be able to administrate this law firm, and you’re going to act as a law office administrator. And then when I ask a question, you’re going to respond as if you are the administrator, replying with administrative tasks. So I wrote the script. I had to upload it in two different sections- it’s about 40 pages.
Kimberly King (34:47)
Wow.
Joseph Vallette (34:48)
You know, it goes into such detail as what I’m using in the prompt, how I want the prompt, the information it’s using. So it’s an extensive type of thing. Unfortunately, they have to do it every time they do an assignment. You have to upload it every time, because we haven’t gotten there yet- we’re not yet able to have it do that on its own. But we’re very close, and we may be able to pull that off in the next iteration of things I’m working on right now. But this one, we had to do manually.
We upload the script, and then before it answers, it gives me the prompt. And then I have the student use that prompt and upload it to get the result. So when I’m talking to the student about the assignments, it’s: give me a brainstorming session, give me a workflow analysis, give me a checklist of things to do, give me a process map or a flowchart- things that they may not know how to do, but the AI does. But I have to ask the AI to give me that prompt, you know, to give a better example.
Joseph Vallette (34:52)
And then I say, have them give you three or four choices and then mix and match, because you’re going to be using a mnemonic that I’ve developed for them- being able to talk about the tone, the context, the input, the output, the things that you need to know for AI, and that you can mix and match. And the verbs, the words that go with it. What are the action words? What are the words that you need for a specific component? That’s all worked into this.
So they’re getting a master class on prompt engineering, but they’re also getting a practical skill, because they don’t have to know it all in four weeks. I’m giving them the end result, because that’s all programmed into Polly, but they’re able to see that their initial question wasn’t quite there yet. What did they need to do to improve it? And that’s what you need to do in a four-week course. You need them to be able to say, okay, I don’t know everything yet, but this is exciting.
I’ve got this stuff- why am I not getting those results when I type into ChatGPT? Well, because I told it how to respond to you. ChatGPT itself is just going to be, you know, generic. That’s the big process going forward: instructors need to understand how to make ChatGPT do what you need for the student. Because that’s going to be the biggest challenge of the new wave of teaching- the AI is going to give interactivity and immediate feedback, where now you have to rely on the submission and wait for the professor to grade it, depending on the ability of the professor and how much detail they’re willing to spend. I’ll tell you that in grading papers, grading discussion groups, and things like that, I’ve used AI to help me.
You know, because the administration wants you to respond to the students with more than great job, nice job, keep up the good work. They want feedback. And it’s like, okay, how do I do that? Well, I’ve uploaded the question that they were asked, I’ve uploaded the rubric for how they would be graded, and I’ve uploaded the actual discussion. And it’s actually a harder grader than I am.
Joseph Vallette (37:19)
So when I look at what it did, the feedback it gave, I read through it- I don’t just copy and paste from ChatGPT- but it’s like, okay, I’m not going to be this harsh, I’ll give them more points; yeah, this is a good point. And then I’ll copy and paste my edited version into the comments.
But again, it does what I would have done if I’d been given enough time to sit through every discussion, go detail by detail through what was there, and not get frustrated by the time it takes after doing 25 students. So again, being able to edit that and look at the overall approach- it’s another thing we’re going to see going forward: instructors are going to give better and better feedback because they’re going to be able to use their assistants to help them.
You don’t have to worry about the TA- the TA is going to be the AI, and they’re going to have much more control over it. Whereas if you have a teaching assistant that’s a graduate student somewhere, you may not know how good they are. You’re always going to know how good the AI is, because you’re going to be supervising it.
Kimberly King (38:32)
It really is great practical skills, as you’ve been talking about. And something you said kind of made me laugh, but it probably should be the headline of your class, or the very first thing you say: whatever you prompt it with- as you say, garbage in, garbage out- it could be making it up. So therefore, you know, do your research and check it. So how has student feedback shaped the development and use of AI in your curriculum as they’re learning all of this?
Joseph Vallette (39:01)
I think it’s one of those where I expect a lot from my students. So I think it was almost that we had to rein it back a little, to make it a little simpler. I think that’s where Nancy helped. You know, Nancy was instrumental in teaching this course after the revamp. So she got the feedback from the students and, you know, internally gave me the feedback. And, you know, as a group, we decided what comes in and how to change things. But I think we’re making it more receptive to the students while keeping the same stringent expectations. Instead of doing three exercises, they’ll do one. So reduce the number, but not the quality of what they’re learning.
Kimberly King (39:49)
Okay. So how does AI streamline essential administrative tasks like document generation, scheduling, and billing? We did talk a little bit about this, but is there a streamlining that’s essential when you’re using this?
Joseph Vallette (40:04)
Well, it’s not so much streamlining the documents as understanding the process. So if you’re trying to create a workflow, then basically, what is the workflow that currently exists and how can I improve it? It’s sort of a mix and match: you tell it what you’re currently doing and then ask ChatGPT to improve it. When you’re talking about writing a letter, you can give it the basic elements. One of my exercises, in another course,
is to write a declination letter to a client telling them that you’re not going to take the case because of X, Y, and Z, and that you want it to be based on New York law. And most of the students aren’t in New York, so I do that intentionally so they’re going outside that scope. And I also make the instructions specific- you know, for me, it’s: know that the statute of limitations is either two and a half or three years.
Make sure that you find the correct one and tell the client. And that’s where the grading usually comes in- because if they come back and the letter says the statute of limitations is two and a half or three years, I know they didn’t read the instructions and they didn’t do the assignment, because that’s a lawyer telling a paralegal:
I know the statute of limitations is somewhere between those- you have to find out what it is first before you tell the client. It’s not what you tell a client, because you’re opening yourself up to malpractice if they think it’s three years and it’s really two and a half, or vice versa. The letter is basically: if you still feel you can get a new lawyer to take the case, you still have time to do it, but recognize you only have three years from the date of the accident.
I mean, that’s the type of stuff you give them, not the general type of thing I would tell a paralegal to research. And then I also have them take that same letter that was written and translate it into Spanish, because the client only understands Spanish. And then I said, okay, make it a friendly but professional tone.
And you can also take it further- if you wanted to, you can say, write this as if it were at a 10th grade reading level or a third grade reading level. You know, that’s another thing that educators are going to find, especially those in the younger grades, if they ever realize it. My brother, for example, teaches seventh grade English. And he says the biggest problem he has with his group is that they have different reading levels. So they come in at seventh grade reading at third, fourth, fifth, or sixth grade level.
And he said, I can’t advance them more than one level. You know, the human mind can only do so much with the hours presented. So he says, is there a way of being able to give them material at their reading level so they can stay [unintelligible]. I said, yeah, just take a novel or something that’s in the public domain, throw it into ChatGPT, and have it rewrite it at a different reading level. And then you’ve solved the problem. His problem was it was too expensive to get so many different-level books for his class.
Joseph Vallette (43:14)
But he’s able to solve a simple problem by using ChatGPT for something it’s good at- translation into, you know, a different tone or a different level.
Kimberly King (43:26)
And again, those prompts being just super specific and then even adding in the Spanish language or any other language as well. I do a lot of contracting with law enforcement, with media relations, and everybody has a cadence. Everybody speaks in a different way. So when you prompt it, obviously with the language, but just dumbing it down or putting it at a different grade level, I think that’s so important.
Can you talk a little bit about what are some of the challenges in designing and refining Polly’s AI capabilities?
Joseph Vallette (43:57)
Well, it’s not so much designing its AI capabilities- it’s just programming it so that it highlights the capabilities that it already has. You know, I’m a technologist in the sense that I’m an application specialist, not an IT person that’s going to go in there and change code. So I’m not doing anything with the AI except giving it a list of parameters within which it’s supposed to respond. So basically I’m giving it the ultimate prompt before someone prompts it. I’m giving it the hypothetical, how it’s supposed to respond, that its first duty is to respond to the question, not to answer it. That type of thing is how I program it. So I guess the biggest challenge is to figure out what I want to put into that script.
Kimberly King (44:52)
Okay. So let me ask: do you see AI replacing traditional office managers, or, as we’ve talked about a little earlier, will it serve mainly as a support tool for paralegals?
Joseph Vallette (45:04)
Well, you’ve got to look at it from the point of view of the small law firm. The small law firm has just as many administrative needs as the large law firm. The large law firm can hire a law office administrator, who usually has an MBA, and they earn a salary comparable to most lawyers, or at least the younger lawyers. So you’re talking about someone with a specific skill set. Small to mid-sized firms can’t afford that.
But if my paralegals know how to use AI, they can do the majority of what a skilled law office administrator could do. And then what they’d be able to do is take the place of the administrator in the small firm. They’re not going to take the place of the administrator in the large firm, assuming that the large firm’s administrator knows how to use AI too. Because again, what I tell my students is that AI is not going to replace the paralegal. It’s not going to replace the lawyer.
But it is going to replace the paralegal and the lawyer who don’t know how to use AI. Because they’re going to be like the person that didn’t know how to use the computer in the eighties- the ones that stuck with the typewriter- and they’re going to be phased out. You know, it’s here. So you need to know how to use it. And when you’re dealing with administration, it’s a different set of parameters than if you’re doing research.
When you’re dealing with administration, it’s basically taking the administrative arts and having a way to quickly say: how do I manage the file? How do I purge the file after the case is over? How do I bill? What’s the best procedure for putting dates in a calendar when I receive a motion? Because what happens when you get a notice of motion that comes in and says it’s returnable to court on May 5th? Okay.
If all the student does is go to the calendar, put May 5th as the return date, and then everyone forgets about it, May 5th comes around and it’s, okay, where are the opposition papers? We didn’t do them. Why? Because we put in May 5th, but we didn’t put in the others. So what’s the procedure and the process to administrate a simple date? So let’s say, okay, we need at least two weeks before to be able to do an opposition paper, which means I want to be reminded a week before that. So that’s three weeks. And then I need a start date, and then I need a return date, and then I need a reply date, and then I need, you know, a court date.
So what dates do I enter, and how far apart should they be? That could be an office decision, but the thought process from the administration point of view is: I put into AI, what dates should I consider when I’m doing a motion? And then you may add dates yourself. You don’t have to rely on what AI says, but it’s a great brainstorming session.
And because it has no critical thinking skills, it’s a great brainstormer. Because normally, if you ever go into a brainstorming session, you always have some guy saying, you can’t do that; no, that’s not going to work; you can’t do that. Well, that’s not what you want in a brainstorming session. I want all the ideas, no matter how off they may be, because they may spark another idea. So again, there’s none of that. It’s giving you all the ideas, good and bad. And then when you get that list, it’s up to you to say, I don’t need that, this is redundant, this is good, I want this one. What if we also include that? Regenerate, but include this date. Boom, it’ll regenerate and you get a whole new set. And what if we consider this? Regenerate and consider this. And it’s like, which one do you like better? I’ll take this one.
It’s basically an assistant that’s giving me my thoughts back in front of me quicker than I can do it myself. And that’s how I look at it. These are my thoughts, you know, and as a subject matter expert creating the courses, I look at this and say, okay, if I were writing a textbook, this is what I’d want. And that’s another thing we’re doing with AI for our students- we don’t want them to have to purchase textbooks if they don’t have to. So a lot of the information that we’re using in our lessons, I get from the AI and put into a Rise lesson that they’re able to go through, read, and interact with, because it includes interactivity. I create games using the AI. I’ve done Who Wants an A Plus, which is off of the Who Wants to Be a Millionaire game format.
We have scenarios and simulations in which they’re given three options, and based upon which option they choose, they get feedback. I’ve created a crossword puzzle based on the terms- I’ve given it the set of terms and it creates the crossword puzzle that they can use to learn terminology. So there are unlimited things that you can do as a professor trying to interact with your students, using AI to make that course material more than the talking head.
The thing I hate most is the talking head. I think you lose people in this day and age. It’s like watching television- a passive activity. And for learning, you need to have that interactivity. And I’m old school- I grew up with my intelligence on the linguistic level and reading comprehension, and, you know, going to school and reading 16 novels in a semester was not unheard of. Now it is. So it’s, how do you get them to interact with what they’re reading? And that’s where, again, with the law office administration course, I expect more from the students from a scholastic point of view.
So realistically, having AI gives them the capability to be more realistic, because again, it’s the interactivity. They’re writing the letter in that course, or they’re writing the information, using AI to do it rather than just reading. How do you create a flowchart? How do you create a checklist? It’s basically relying on what’s already been done before.
Kimberly King (51:48)
Wow. I love that you are thinking outside the box too with the crossword puzzles and the games and anything we can do to keep our students engaged. I love that. So what are some real world possibilities of law firms successfully adopting AI for office management?
Joseph Vallette (52:07)
I think you’d see a lot more efficiency in the smaller to mid-sized firms if they could, because again, I do think it’s one of the biggest deficiencies right now. You know, if you actually went and looked at some of the old-timers- I’m old enough to have worked with the greatest generation and the silent generation- I saw how they practiced and managed. Everything was done with Bates stampers and date stampers, and they had folders where they used rulers and cross-outs and check marks, and everything was managed. Then ‘82 to ‘85 comes around. They bring these computers in, they get rid of the typewriter, they put in the computer, and they set it up: you don’t have to worry about typewriting anymore, the computer will do it. Okay. They didn’t teach them how to save anything onto the computer, but that’s another story.
And then we started getting into the nineties. We went to case management: buy this case management program, this practice management program, and you don’t have to worry about collecting all this information the hard way anymore. But no one ever taught them how to take the information they were collecting and put it into the case management or practice management system. So basically, they fell behind, because they stopped doing what they had done to be successful up to that point. Now they weren’t managing it anymore, and they were getting into trouble.
You know, we always had the red book. I don’t know if the lawyers had a red book sitting on their desk or the secretaries kept it, but that was the calendar. It was basically a diary, and people would write in what they were supposed to do for a certain day. And then what would invariably happen is the day would pass, something didn’t get done, and no one transferred what was on that page to a page going forward.
So in the early 2000s, malpractice insurance companies stopped insuring attorneys if they still used the red book, because too much was happening. They were getting these programs, but no one was teaching them how to use them. We had the problem, like I said, with the cases, the dates. What dates need to go in? How does it trigger whoever’s supposed to get it? Something comes in the mail- law offices are paperless now, especially since the advent of COVID- so it comes into the mailroom and gets scanned.
How does it get sent to the attorney, the paralegal, you know, is it by email? Is it a text? Is it some other notification? Is it through the calendar? You know, what’s the process and then what dates go into the calendar? Who sees what first? Does the paralegal see it first? Does the lawyer see it first? Who’s doing the research? Who’s doing the papers?
You know, again, that’s administration, and it’s sort of a lost art, a forgotten art. But I think that’s where a course like this comes into play, because it equalizes things with the larger firms that have customized programs built in and things of that nature. It’s the great equalizer, allowing small firms to compete on a larger scale, because they will have the same information that the larger firm has.
Kimberly King (55:31)
So my last question is, again, and this has all been great advice, by the way, for all three of you, but what advice would you give to other institutions that are looking to integrate AI into their legal studies programs?
Joseph Vallette (55:48)
I think first you need a subject matter expert who also knows how to do the AI. Because if they’re not comfortable with using the prompts to be able to generate their course, I don’t think the integration of the AI is going to be as effective. You know, they might be able to create the course, but it’s not going to be AI-responsive.
When I ask for assignments, I’m not asking them to give me the final product. I’m asking, give me the AI interaction. I want to see how they interacted with the AI and where they went wrong, not what the end product looks like, because everyone can hand in something that looks pretty and meets the criteria- but how did they get there? That’s the part that I look at. And again, it’s not just the subject matter expert, but also the people teaching the course.
They need to have at least a desire to learn how to prompt, know how to do it effectively, and make sure that they’re not just putting in simple questions. You said before that everything is streamlined- I didn’t want to interrupt then, but I think with prompting you have to go in the opposite direction. The more you give it, the better.
And I think we have to sort of rethink how we’re doing it, because you have to give it- almost tell it- everything: I want you to act like a law office administrator, and I want you to respond in a certain tone, and I want you to provide a certain type of document, and I want it to be a certain length. Again, the more you give it, the more it will respond with what you want, but you have to know what it can do for you before you can ask it. And that’s the give and take of the instructor with the AI. And then again, they have their knowledge base and how they want to present it. How can they best present it using the AI? Like doing Who Wants to Be a Millionaire. Okay, I didn’t want Who Wants to Be a Millionaire with the dollar amounts, so give me Who Wants an A Plus. Okay, I got 12 questions going from a D to an A plus.
And I’m developing a new course for the law school now in which I’m actually taking that concept, and the last question is interacting directly with AI. We’re going to the next step in the new course. I don’t know how it’s going to work yet, but when they answer the question, AI is going to give them feedback directly. So it’s already been less than a year since we released law office administration, and now we’re getting to a point where we may actually get more interactive with the student, which I think is a great thing. Because if they can get the immediate interaction- almost the appearance of a professor or a senior partner or a paralegal responding to them with their answer- boom, they get the immediate feedback. It’s sticky. They want to learn more. It’s not watching a passive screen and falling asleep and saying they watched the whole thing- it’s on in the background, yep, I got my hour in.
Kimberly King (59:57)
Yeah, no, you know what, and it is all about that immediate feedback. Bryan, I think you have your hand up there.
Bryan Hance (59:04)
Yeah, so Joe’s done a terrific job pointing out a lot of the benefits and some of the limitations of AI. On the academic side of the house, you know, as you’re talking about other programs that might want to do this- yes, we absolutely want to equip students with cutting-edge AI technology and skills and tools so that they’re competitive in the job market. Joe also talked about how faculty can use AI to make their jobs easier, to grade papers and provide personalized feedback to students. But there are some downsides that we are seeing as administrators and faculty. Oftentimes- and I’m seeing a huge increase in my classes- students are using AI. And if they’re not permitted to do so under a policy, it’s considered cheating.
Bryan Hance (01:00:02)
And it’s a challenge a lot of times for faculty to identify what’s AI and what’s not. We have a lot of very good students who are excellent writers, and so sometimes it’s hard to distinguish what’s their thought process and what’s not. Going back also to the blank page that Joe mentioned- yes, it can be a benefit for students if they’re stuck trying to start an assignment.
Assuming you get the prompt correct, you can put in a good prompt and get some ideas back. But at the same time, I think there are some benefits to staring at that white page for a while, right? It forces you to think, to problem-solve. And we want to really encourage our students, both at the undergraduate and the Juris Doctor level, to learn how to think, to be able to think on their feet.
Students can’t go into court and say, hold on, your Honor, let me go on my phone and check with, you know, ChatGPT and the software and see if I can get you an answer. I’ll get back to you in just a minute. You can’t do that. You’ve got to be able to think and speak with, you know, knowledge and be able to articulate your thoughts. And those are skills that…
We need to go back to one of the first things we talked about: learning how to use AI responsibly and intelligently, not just opening the floodgates, especially in the field of law.
Kimberly King (1:01:38)
And you know what, again, it really does come back to critical thinking, and also having that as a guide for us, while making sure that the information that comes back to us has checks and balances. And again, yeah, being able to quickly think on your feet. This has been so interesting, and again, it’s so timely, it’s relevant. So thank you all. Thank you, Bryan and Nancy and Joe, for all that work that you’re putting in here. And hopefully, you know, you’ll…
you will be the beacon for other universities to jump on board with this. We thank you for sharing your knowledge, and if you want more information, you can visit National University’s website at nu.edu. Thank you so very much for your time, all of you.
Joseph Vallette (01:02:22)
You’re welcome.
Bryan Hance (01:02:23)
Thank you.
Kimberly King (01:02:28)
You’ve been listening to the National University Podcast. For updates on future or past guests, visit us at nu.edu. You can also follow us on social media. Thanks for listening.
Show Quotables
“Generative AI is only [good for] creative thinking. It has no ability for critical thinking. We are the critical thinkers… We have to be able to say, okay, are these good ideas? How can I further elaborate on that?” – Joseph Vallette, https://shorturl.at/SWPXy
“You’ve got to be able to think and speak with knowledge and be able to articulate your thoughts… We need to [focus on] how to use AI responsibly and intelligently, not just open the floodgates, especially in the field of law.” – Bryan Hance, https://shorturl.at/SWPXy