
Artificial Intelligence (AI) is a recent technological advancement that is changing the world for better and for worse. AI usage has been linked to increasing environmental damage and to educational challenges for teachers and students, and it is now quickly becoming an area of legal concern. Without a landmark ruling or heavily relied-on precedents, AI usage raises significant legal questions that have yet to be answered.
Generative vs. Agentic Artificial Intelligence: A Split in Legal Responsibility
Artificial intelligence services are traditionally provided by companies; for example, ChatGPT (an artificial intelligence chatbot) is run by OpenAI, a San Francisco-based artificial intelligence company. Lawsuits over AI usage can therefore run from user to parent company, user to user, user to non-user, and so on. When it comes to legal exploration of AI usage, most notably copyright infringement and intellectual property, nuance is necessary. AI programs that are generative (prompt-based, or needing input from the user to create) will require different legal considerations than programs that are agentic. IBM defines agentic AI as “focused on decisions as opposed to creating the actual new content, and doesn’t solely rely on human prompts nor require human oversight. Early-stage agentic AI examples include things like autonomous vehicles, virtual assistants, and copilots with task-oriented goals.”
As of now, the distinction between generative and agentic artificial intelligence is a key factor in how judges rule on a case-by-case basis. The problem with having so few artificial intelligence cases is that the varied uses of artificial intelligence, the potential legal loopholes, and the propensity for copyright infringement all make it difficult to weed out the underlying applicable rule. Major cases involving AI discuss at length the intention behind the AI usage, the result of that usage for companies (particularly in data creation or software design), and the definition of the type of AI used. This all opens a very case-specific rabbit hole of alleged infringement, but also of conduct that could be legally fine if replicated differently.
Challenges with AI and Legal Accountability
The primary issue is that the case law is light; the secondary issue is that applying that case law to upcoming AI cases is difficult because AI is constantly changing, meaning even generative artificial intelligence can be used slightly differently for the same process.
In February of this year, a Delaware court decided the first major copyright case involving AI, Thomson Reuters Enterprise Centre GMBH v. ROSS Intelligence Inc. Thomson Reuters owns a legal research platform called Westlaw. Ross Intelligence was building its own AI-based legal research platform but needed data, and it inquired about licensing Westlaw’s content. Thomson Reuters refused; after all, once built, the two companies would be offering competing legal research search engines and platforms. The basis for the copyright infringement lawsuit is that another platform, “LegalEase, gave those lawyers a guide explaining how to create those questions using Westlaw headnotes, while clarifying that the lawyers should not just copy and paste headnotes directly into the questions. D.I. 678-36 at 5–9. LegalEase sold Ross roughly 25,000 Bulk Memos, which Ross used to train its AI search tool.” Ultimately, Ross was able to access Westlaw’s information and, in turn, used that information for its own programming. In 2023, Circuit Judge Bibas denied Thomson Reuters’ motions, yet he later revisited the case and ruled in favor of the plaintiff. Jackson Walker News observes that the court found Ross’s AI was “not generative AI” and that the process by which the service is used is similar to Westlaw’s generation of legal content. “This supported a finding that Ross used the Westlaw headnotes to build a competing product, i.e., a for-profit legal research tool that serves the same purpose as, and is a potential market substitute for, Westlaw.”

On a slightly different note, utilizing AI in the legal field has not always produced acceptable results. In Massachusetts, an attorney was sanctioned $2,000 for including AI-generated citations; the Maryland State Bar Association News reports that the attorney was “citing fictitious cases in court pleadings that were produced by an AI.” Earlier this month, an attorney from Gordon Rees Scully Mansukhani used artificial intelligence in a court filing. Reuters reports that the firm, “representing a creditor in an Alabama hospital bankruptcy case, in a Thursday filing,” offered apologies to all involved parties “after one of its lawyers submitted a court filing with inaccurate and non-existent citations that were generated by AI.”
What Now?
The problem with AI is twofold: how the service is provided to the user (agentic versus generative), and how companies (and their members) use that service, which has become more problematic over the past year. The legal community needs more discourse, and more decided cases, to create a basis of comparison and something of a foundational criterion for AI.
As artificial intelligence enters everything from our offices to our homes, the law must follow. But legal standards have to move more quickly, or by the time they arrive, the next issue, or even the next infringement involving artificial intelligence, may have already happened.