When AI Replaces Lawyers, Who Will Fight for the Public's Rights?
And why the asymmetries in power between AI tech companies and the public today spell trouble for lawyers battling tech in the future
The digital realm is overtaking and redefining everything familiar even before we have had a chance to ponder and decide.
--Shoshana Zuboff (The Age of Surveillance Capitalism)
Today there’s a growing sense that the corporate innovators driving AI’s development pursue their discoveries with unbending zeal, that their quest to discover artificial general intelligence (AGI), or something AGI-adjacent, has imposed a daunting array of potential risks upon all of us. At the same time, there’s a sense of futility, that we have few options but to navigate the modern digital world subject to the AI creations unleashed upon us. Whether it’s the AI technology operating in cars, phones, and wearables; the machine-learning algorithm that steers us when we go online; or the data-based medical device in our doctors’ offices that diagnoses and treats us, many feel powerless against the whims of tech’s AI ambitions.
We are right to be concerned. AI’s risks of harm are well documented; many of those risks are as threatening to society as anything seen since AI technologies began spreading economy-wide a little over a decade ago. They include perils like the erroneous allocation or withholding of government services from those who need them most, insidious bias in models that reinforces and amplifies discrimination and stereotypes, ever-watchful surveillance of people online and in the real world, infringement of artists’ and authors’ substantive intellectual property rights, amplification of disinformation that erodes trust in institutions and neighbors, the looming specter of job losses due to automation, and perhaps the ominous possibility of AI systems contributing to a future loss of human life. “There’s some chance – above zero – that AI will kill us all,” Elon Musk told reporters on Capitol Hill following a Senate hearing on AI in September 2023. “I think [the risk is] low but there’s some chance.” He added, understatedly, “The consequences of getting AI wrong are severe.”
Feeling powerless against AI tech companies and their AI systems is not new. Nor is the manner in which AI companies have accumulated their power: by exploiting the open and mostly unregulated markets that capitalist economic frameworks provide, like those found in Western democracies. “Power” may be defined broadly as the ability of companies to change the incentive structures of citizens (consumers and others), as well as to influence policy- and lawmakers, to bring about outcomes favorable to the companies. In short, sociopolitical power. In the ever-transforming AI landscape, the asymmetries in knowledge (e.g., big, proprietary datasets) and resources (e.g., money to build massive computing infrastructure to process all that data and to pay for the lobbyists and lawyers who maintain power) have contributed to a dramatic accumulation of sociopolitical power in the hands of some tech companies, one expected to grow more pronounced absent some kind of intervention. This is not to suggest that simply possessing sociopolitical power leads, ipso facto, to power being asserted for self-interested or nefarious reasons. But the reactions of state and federal lawmakers in the last few years to the perceived power of big tech suggest at least a common concern that the asymmetries in knowledge and resources have crossed some sort of line. Underpinning those reactions is also an understanding that, short of an industry-wide, conscientious shift in priorities that elevates public interests above financial ones, tech’s power may become too great to control.
That’s why many have called for legislative and administrative institutions in the US to use their powers to impose much-needed laws and regulations on the AI industry and its developers. And in fact, lawmakers in some instances have granted consumers the authority to take certain prescribed actions vis-à-vis the tech companies, nudging the power imbalance back toward the public. This granted authority generally takes the form of statutory rights, one purpose of which is to give certain people specific authority where historically they had little or none, at least in comparison to those most would regard as powerful. The rights possessed by people, given by action of law, endow them with specific kinds of authority that can affect the dynamic that would otherwise progressively tilt power (in economic, cultural, religious, social, and political terms) toward a minority made up of the most powerful. Recently enacted and proposed state and federal data privacy laws, for example, provide several new rights related to AI companies and their data-based systems (e.g., the right to opt out, the right to know when an AI is being used, etc.). The White House’s AI Bill of Rights sets forth a number of additional rights (the White House calls them “principles”): the right to be protected from unsafe or ineffective systems, freedom from algorithmic discrimination, protection from abusive data practices, notice that an AI is in use, explanations as a means to challenge adverse outcomes, and so on. Look closely and you’ll notice that many of these rights relate to people’s sovereignty or autonomy (or liberty, if you will). The idea is that people should be free to make their own choices and take their own actions, without covert influences steering them toward the choices and actions corporations prefer.
But these personal rights, and the authority they confer, subsist only when lawmakers act. Unfortunately, even if the White House and Congress took immediate action to enact federal laws around AI, such as guardrails steering AI toward greater social benefit and lower risk (without jeopardizing competitiveness or national security), the asymmetries entrenched in the current socio-technical-economic landscape would likely still tilt the balance in tech’s favor. The public could fight back and repel unwelcome AI technologies with superior technology, which, after all, is how wars are won. But where can people source such technology when creating and training resource-intensive AI models demands specialized skills and a trove of money?
So if the AI industry won’t voluntarily change its approach to AI, the White House and Congress don’t act in a timely and meaningful way before the worst harms from AI technology happen, and the public can’t create for itself more power to counter big tech’s, what options do people like you and me have in the face of high-risk AI technologies and the worst harms they might inflict? Are we expected to take the drastic step of opting out of meaningful participation in the modern digital society, relinquishing the many conveniences the digital economy offers (a step that, for many, would have dire consequences)?
For now, existing laws will have to do, but they’re no good unless they are asserted and enforced, and that will require specialized lawyers: tech-savvy, independent, public interest lawyers who will advocate for the public’s interests against the powerful AI tech industry. And if there ever was a time for these lawyers to seek out and engage the powerful on the people’s behalf, leveraging the power of the courts to impose fines, assess damages, and order injunctions, it is now.
In some contexts, this is already happening. Lawyers at the FTC and the Department of Justice are engaged in a battle with big tech, bringing antitrust lawsuits against two of the world’s largest AI tech-enabling companies (Amazon and Google, respectively), which, depending on their outcomes, could close the power gap by weakening those companies’ monopoly power (the FTC alleges that Amazon’s actions toward rivals and sellers help it “illegally maintain its monopoly [market] power”; monopoly power is different from, but not entirely unrelated to, social power). Independent lawyers are also engaged: in states whose lawmakers have enacted strong data privacy laws, they are bringing large class action data privacy lawsuits, in effect asserting the public’s statutory rights in their personal and private data.
There is and will be a growing need for specialty lawyers like these, ones who possess the legal skills and the technical understanding of AI to represent people whose power is far less than that of the wealthy and well-connected individuals, groups, and companies who have concentrated power to pursue their AI-specific goals. Call them public interest advocates, “guardian[s] of our freedom” (Walters v. National Association of Radiation Survivors, 473 U.S. 305, 371 (1985) (Stevens, J., dissenting)), a new AI lawyer corps, or something else; they will be lawyers practicing in human rights, civil rights, privacy, publicity, intellectual property, trade, and labor and employment law, to name a few areas. They will practice in non-profit organizations, in the pro bono programs of for-profit organizations, in law firms, and in government agencies. They will represent the powerless (or the less powerful) in AI-related cases involving technologies that can lead to discrimination; voter suppression; deceptive trade practices; adverse employment decisions (hiring, promoting, firing); collection and use of biometric data; unlawful use of others’ visual, literary, and performing arts; and misappropriation of trade secrets, among other cases. In the future, they may even be called upon to represent the less powerful in direct legal disputes between humans and truly autonomous AGI systems, where responsibility and liability may trace back not to a person’s actions but to an algorithm acting independently.
This lawyer corps will consist of lawyers who graduate from law schools that, if they don’t already, mandate AI training before students can graduate, just as they mandate training in basic civil procedure. And not just training in the use of the latest legal tech applications (the AI tools that augment the repetitive knowledge tasks lawyers and paralegals perform daily), but workshop-style training that prepares new lawyers to find and address the risks of harm to people’s rights, regardless of the area of law in which they intend to practice. The respective practice bars should also mandate continuing legal education in AI technologies. So should the judiciary, which will decide complicated questions of fact and law raised by AI technologies, such as evidentiary issues involving machine versus natural intelligence, machine “intent,” machine expertise and testimony, the provenance and flow of data, and the adequacy of humans-in-the-loop and other risk-mitigation measures.
The development and training of AI lawyers is needed now to help close the power gap, before doing so becomes impossible: when lawyers advocating on behalf of the less powerful are simply replaced by AI technologies made by the very parties they are fighting in court. According to Geoffrey Hinton, the so-called godfather of AI, we’re moving into a period when, for the first time ever, some AI systems are “more intelligent than us.” Indeed, it has been argued that most or all creative tasks and knowledge-based jobs, including those of lawyers, are susceptible to being replaced by AI within just a few years. After all, many of the same mental processes involved in legal work can arguably be performed by generative language models and inference-based decision systems trained on all the relevant statutes, judicial decisions, lawyer briefs, findings of fact and conclusions of law, contracts, patents, policies, academic books, and law review articles ever written on a subject. Not even the best human lawyer can, faced with a hypothetical query from a judge containing a complex and unique set of facts, effortlessly and nearly instantly produce the best, most persuasive arguments and precedent, supported by the most relevant case law across jurisdictions, or offer colorable arguments for changing precedent, backed by reasoned analysis drawn from every relevant opinion or law review article ever written. AI models truly can be said to know everything, if they’ve been fed all known data.
This is not to suggest that the legal profession is or will be the last line of defense protecting the public’s interests from harmful AI technologies. That role, ultimately, may rest in the hands of those with the desire to make ethical AI systems. But if legal advocacy is ever reduced to algorithms, the powerful AI tech companies may continue to gain power at the expense of all of us. And no army of public interest lawyers will be able to stop them.