Bridgetower Media Newswires//February 10, 2025//
IN BRIEF
The recent rise of artificial intelligence has had an obvious economic impact, but some lawyers may be wondering whether it is just a passing fad or something that will reshape their practice. Data compiled by the National Bureau of Economic Research offer an answer: a quarter of American workers are using AI at least weekly, and more than 10% use it every workday.
The legal profession is famously slow to adapt, but with lawyers’ clients and their employees widely using AI, even attorneys who work outside traditional tech circles need to shed that Luddite attitude in 2025.
Lawyers must understand how the tech will impact their clients to ensure continued regulatory compliance and protect against other risks. And embracing AI can help attorneys stay competitive as their rivals look to leverage these tools.
AI is nothing new, of course; it has been in commercial use for decades. All of us have long been interacting with AI tools, or receiving services built on AI, often without realizing it. A recent wave of technical developments has made these technologies directly available to American workers and consumers for tasks from the mundane to the complex.
Those not yet fully familiar with AI tools may think of them as primarily used by engineers and other tech employees. While that is certainly true, AI adoption now extends far beyond tech teams.
Executives may be using tools to generate slide decks from notes and business data. Sales teams may use AI to draft outreach messages for clients or potential clients. Marketing teams may generate images or text for advertisements or social media posts. HR may be exploring tools for automated resume review or for employee assessments or disciplinary action. IT may leverage AI to assist with cybersecurity.
Expert users of AI tools know that, while undoubtedly helpful, they are far from perfect. This understanding is, unfortunately, not as widespread as tool usage. As with any scenario in which risk outpaces the ordinary user’s understanding, regulators have taken notice and are racing to catch up.
AI usage thus introduces significant and evolving legal risks, especially if employees are operating without the constraints of a corporate AI policy or legal department oversight.
Potential algorithmic bias, general merchantability concerns due to error risk in products incorporating AI, data privacy and cybersecurity questions, intellectual property risk and more follow where AI goes.
These risks can be, and often are, mitigated, but doing so requires in-house lawyers or their outside counsel to understand the technology well enough to understand the risks. For example, hiring teams increasingly use AI systems for automated resume review, to surface quality applicants for further review or a phone screen. While this is within the capabilities of an AI tool, a company should first understand what data was used to train the particular tool chosen and what information it prioritizes when evaluating a resume.
Without proper oversight on training, an AI tool might inadvertently learn to discriminate against certain applicant groups, leading not only to lost potential high-quality employees but to potential lawsuits from rejected applicants or oversight agencies.
Ignorance of AI laws and regulations can lead to hefty fines or reputational damage for clients and lost clients for lawyers.
Importantly, the risk perspective is in flux as lawmakers and regulators worldwide scramble to address new risks. From Europe’s AI Act to Colorado’s landmark AI law, AI laws are already rolling out. With proposed U.S. state and federal legislation and regulations, new rules are constantly emerging that will affect businesses across industries, even before accounting for how a new administration will affect this regulatory landscape.
As clients and their employees clamor to use these tools, lawyers taking a purely reactive posture will find themselves constantly responding to the newest regulation. If they are instead positioned to understand the technology, or engage outside counsel who are, lawyers can provide issue-spotting more effectively, anticipate regulation and its impact on clients, and ensure continued regulatory compliance as the ground shifts.
But understanding AI goes beyond the legal risks and into potential business opportunities. Law firms and legal departments are increasingly turning to AI tools to improve efficiency and cut costs.
Legal research and predictive analytics tools can help find the most persuasive case law, ensure briefs make the best arguments, or anticipate how a judge’s track record could affect those arguments. Discovery tools can sift through mountains of documents in seconds to flag relevant information for review, and contract review tools can identify how and where a contract deviates from the norm.
By embracing these technologies, lawyers can focus on higher-value tasks that require human judgment as well as keep pace with opposing counsel. Ignoring this trend risks falling behind competitors.
Lawyers looking to use these tools should, however, also ensure they meet a threshold level of understanding of the technology. Ethical obligations require lawyers to competently counsel clients even when AI tools are used and ensure that privilege concerns are addressed when providing information to a tool.
Understanding will also help avoid tool misuse. Many are already familiar with tales of lawyers filing AI-written briefs that cite nonexistent cases. These stories are unsurprising to those familiar with how AI operates. Such mishaps are easily avoided by ensuring that you or your outside counsel understand, at a high level, how a particular AI tool operates and what it can and cannot do reliably. This ensures you gain the efficiency advantages the tools offer without winding up as a cautionary tale.
As the pace of AI innovation accelerates, staying informed isn’t just an advantage — it’s a necessity. By approaching AI with curiosity and caution, lawyers can ensure they are well-positioned to continue to advise their clients in light of these changes.
Andrew “A.J.” Tibbetts is an intellectual property and technology shareholder at Greenberg Traurig. A former software engineer, he counsels on matters related to software-implemented tech across a range of industries, from networking, financial technology and natural language processing to life sciences, AI, medical records and medical devices.