Court Ponders Treatment of Innovations Made With AI
A legal test could follow, spurring philosophical and legal debates about the meaning of intelligence and how AI systems will be viewed and regulated in the future
LIKE SO MANY other issues involving artificial intelligence (AI), the Thaler v. Vidal patent case has received outsized international media attention despite its rather pedantic administrative procedural question: Under the Patent Act, can an AI software system be "listed" as the inventor on a patent application?
It’s an interesting question, and its timing says a lot about how courts are just now beginning to confront the challenges posed by advanced AI technologies, which have been spreading throughout the economy for the better part of a decade.
But Thaler’s debate may have more to do with a second query, one that the three appellate judges of the U.S. Court of Appeals for the Federal Circuit who decided Thaler posed but did not answer: Are inventions made by people "with the assistance of AI" eligible for patent protection?
Answering the former question was relatively straightforward for the judges, led by Chief Judge Kimberly Moore, who is perhaps the preeminent active jurist in the U.S. when it comes to patent law matters.
Addressing the latter query, however, is much more complex, and could have implications far beyond patent law. If presented to a future court, the AI assistance question could raise all sorts of metaphysical and philosophical issues for a judge to consider, including what intelligence, creativity, and ideation—the capacity to form ideas—mean in a world full of AI technologies.
Thaler reached the Federal Circuit following a decision by the U.S. Patent and Trademark Office rejecting Dr. Stephen Thaler's patent applications in 2020, as patent offices in other countries have done. His applications named only "Device for the Autonomous Bootstrapping of Unified Science" (DABUS), an AI system employing various machine learning techniques, as the sole inventor instead of himself and his colleagues. In the Federal Circuit’s Aug. 5, 2022, decision, the court affirmed the Patent Office's rejection of Thaler’s applications on the ground that they were incomplete for not listing a human. "Nothing in the Patent Act indicates Congress intended to deviate from the default meaning" of "individual," the court wrote. "To the contrary, … the Patent Act supports the conclusion that 'individual' in the Act refers to human beings." To underscore their reasoning, the judges also pointed to the statute's use of “whoever.”
Notably, the Federal Circuit’s decision mirrors that of a U.S. Copyright Office tribunal, which earlier in 2022 rejected Thaler's argument that DABUS could be identified on a copyright application as the author of a graphical work, on the ground that copyright law requires human authorship.
What about an AI assistant?
Pondering whether inventions made by people "with the assistance of AI" are eligible for patent protection does not appear to ask whether an AI system itself may be patented. Indeed, the U.S. Patent Office has been granting patents for many years to inventors who create AI technologies (let’s hope the nation’s patent court is not questioning the validity of all those previously issued patents).
The court’s query also does not seem to ask whether inventions made by people using an AI system as a tool are patent eligible. Patents are issued all the time for discoveries made by scientists and engineers using software, often embedded in analytical and testing devices, that aid an inventor’s work.
What the judges seem to be questioning, if we reductively parse their query, is how the law should treat relative contributions to an invention from advanced AI systems powered by, for example, modern, massive, deep learning models, which by many measures do things humans can’t. Answering that query may require rethinking what it means to discover or innovate.
In patent law, evaluating whether something is patent eligible requires an understanding of its conception, which involves understanding the origin of a permanent idea of a complete and operative solution to a problem. Most people have the faculty for forming ideas, and can exercise this capacity. But invention requires something beyond a mere idea to solve a problem. The law does not recognize a theoretical or abstract solution. What is required instead is something more concrete, a mental picture of how something will work in practice. The law is well settled that only an inventor (or multiple inventors working together) can be credited with conception.
Let us consider a hypothetical AI example. Say an AI-powered system identifies a candidate drug molecule for cancer therapy. The system consists of knowledge bases, databases, and massive machine learning models trained with datasets containing, among other things, relevant feature information about human physiology, pharmacological properties of molecules and molecular structure, and the three-dimensional folding structure of certain proteins (information recently developed by DeepMind, which John Moult, a leading expert, said was the first time in history that a serious scientific problem has been solved by AI). In this scenario, a human researcher reduces the idea to practice by synthesizing the recommended molecule in the lab using known chemical and physical processes, tests it in the lab to see how certain kinds of cells react to it (and how much is needed for that reaction, a process that itself may involve other AI tools), and conducts clinical trials to confirm the drug's safety and efficacy in humans.
Should this drug therapy invention, made by a person with the assistance of AI (or, arguably, made by an AI system with the assistance of a person), be eligible for patent protection? If the AI system were a human being, no question; many new drug therapies have been patented by joint inventors before.
A different framework for evaluating AI assistance?
As courts will do, we can expect a judge at some point in the future to craft a legal test for what "with the assistance of AI" means (assuming Congress has not already enacted legislation to clarify the point). The test may default to normative meanings of "individual" and “whoever” used by lawmakers when drafting laws. This human-first test would require treating any AI system, no matter its form, capacity, or role in an endeavor, as no more than a helper or tool, something used by a person. After all, the AI system in the drug therapy scenario merely applied math to find the most statistically likely candidate drug, right? In this sociocentric framework, anything the AI system contributes to a discovery would redound to the human who used the system. Such a test would ignore the manner in which the claimed invention was made. In that way, the precedential body of law governing patent eligibility, starting with questions of inventorship, conception, abstractness, etc., can be applied.
Or the court could fashion a different test, one that compares the relative contributions of an anthropomorphized construct of the AI system and the human innovator, putting them on relatively even footing for purposes of analysis. This approach is consistent with the tendency, when discussing AI, to anthropomorphize systems that embody it and judge them using measures used to assess humans. Machine learning outputs, after all, are called “decisions”; large language models are described as “writing” prose (with comparisons to favorite poets); bots “chat”; and neural networks “learn.” The graphic at the top of this article, like many others used in media to illustrate AI, supposes the artificial as something human-like. The court’s pondering of AI as “assisting” someone in an intellectual endeavor seems rooted in the same narrative.
It’s not hard to understand why. Consider robotic devices with human-like faces or limbs. Looking at something like Boston Dynamics' Atlas robot, with its array of sensors giving it awareness of its surroundings and the ability to navigate them, one cannot help but think about the human things it might be capable of performing. Researchers at Columbia University recently demonstrated that a robot arm can self-learn the relationship between its physical self and its environment using cameras and mirrors, creating, in essence, a self-image stored in its neural network. When an AI-powered robot arm broke a kid’s finger playing chess, some of the rhetoric that followed seemed directed at the robot as an autonomous being that intended harm.
Even machine learning algorithms operating in the cloud (a remote data center) are treated as the disembodied brains of something akin to human, especially those making decisions about people (decisions people used to make). Lawmakers have even introduced legislation that asks how much of an algorithm's actions "replicates human activity," when determining if it should be regulated.
So under this test, traditional rules of conception, reduction to practice, and inventorship might still be applied, but to both the person and the AI systems. The analysis could suggest a new form of joint inventorship, one that requires naming, on a patent application, a person but also an identification of the AI system that helped make the discovery (e.g., “Inventor Y, with the assistance of System X”). The existing body of law governing patent eligibility could then test the joint discovery for patent eligibility. This test does not care whether most of an invention can be traced to the AI system, though the human joint inventor would get all the benefit of the patent rights that arise from the joint discovery, even if their contribution was small in comparison.
Alternatively, the court might choose to evaluate an AI's contribution using an entirely different frame of reference, one that transcends what we use to describe human intelligence. As Rob Toews observed in his excellent article Reflecting On ‘Artificial General Intelligence’ And AI Sentience (Forbes, July 24, 2022), "AI is its own distinct, alien, fascinating, rapidly evolving form of cognition." Indeed, we know that machine learning models, by amalgamating large datasets, do more than just memorize. Arguably, they can “learn” by finding a trend, a correlation, a connection of many dots, a meaningful separation of data in higher-order dimensions that would be impossible for humans to discover. How can we apply traditional notions of intelligence, creativity, and ideation, or for that matter “individual” and “whoever,” to such fully trained, large neural networks? Prompted with information about the world, such systems discover things. If we only use human intelligence as “the ultimate anchor and yardstick for the development of artificial intelligence,” Toews cautions, “we will miss out on the full range of powerful, profound, unexpected, societally beneficial, utterly non-human abilities that machine intelligence might be capable of."
With this in mind, a court might evaluate an AI-generated contribution to a discovery using measures that cannot be applied to human innovation (and, presumably, humans evaluated by such measures would fail every time). This alternative test may require a framework that looks at patent eligibility from a different perspective when an AI system is involved. Default notions of “individual” and “whoever” would no longer be used to exclude AI under this new paradigm.
Whatever test courts come up with, there is an urgency here: patent applications are being filed all the time by companies based on discoveries that only AI systems, powered by the most advanced machine learning models (ones that are getting better and better at generalizing across domains and tasks), could make. The expectation of securing intellectual property rights for those AI-assisted discoveries helps drive research investment leading to greater innovation, more competitiveness, even improved national security. Delays in resolving the AI assistance question add to commercial and legal risks for companies and, frankly, the nation.
That said, courts should proceed with caution. A tweaking of the nation’s patent laws to address the AI assistance question could spill into other areas of law where the capacity of machine learning to do things humans can’t may be an issue.