From Courtroom Bottlenecks to Algorithmic Advocacy

Building an AI-First Legal Support Infrastructure

We talk about justice as if it were a noble principle — blindfolded statues, marble courts, words etched in constitutions. But for billions, justice isn’t a principle. It’s a line item they can’t afford. It’s a parking ticket they don’t have time to fight, a landlord who pockets their deposit, a benefits office that loses their paperwork and shrugs. Justice, for them, is not blind. It’s missing.

The numbers are staggering: 1.4 billion legal needs unmet globally each year. But behind each statistic is a fracture in an ordinary life — hours lost, wages withheld, shelter threatened, dignity eroded. These are not cinematic trials. They are quiet defeats, endured by people who know that hiring a lawyer will cost more than the claim is worth, or who sense that the system was never designed to hear their voice.

This is the paradox we live with: in an age of exponential computation, we still ration fairness. In a world where large language models can generate thousand-word briefs in seconds, people still forgo their rights because asserting them is too costly, too complex, too slow.

What Advocacy AI offers is not perfection, but leverage. A way to give ordinary people back the strength to stand in systems that otherwise exhaust or ignore them. A prosthetic for fairness — persistent, affordable, accessible to anyone with a phone.

The question is not whether such systems will exist. They are already here, messy and uneven, some riddled with bias, others quietly transformative. The question is whether we will build them well — with guardrails, with equity, with the courage to treat access to justice not as a luxury good, but as public infrastructure as vital as clean water and roads.

What Advocacy AI Is — and Why It Matters

Advocacy AI is not a courtroom Perry Mason and it is not a science-fiction fantasy of robo-lawyers. At its core, it is far simpler and more practical: a persistent, low-cost system that handles the procedural, factual, and negotiation work of legal advocacy — with human lawyers stepping in only where law or prudence requires them. Think of it less as replacing the lawyer, and more as replacing the endless paperwork, citations, and bureaucratic labyrinths that prevent most people from asserting their rights in the first place.

The stakes for defining this clearly are high, because legal advocacy isn’t just about winning cases. It’s about whether fairness is gated by wealth, whether disputes choke the system with needless backlog, and whether people enforce their rights at all when the cost of fighting outweighs the benefit.

Advocacy AI is designed to push back against that resignation.

The Field in 2025: Messy, Uneven, Real

Already, we can see early versions emerging. Clio Duo integrates with Claude AI to help lawyers draft and review documents, while ensuring that AI output can be securely handed off to human counsel. CoCounsel, another entrant, has gone a step further: its AI-powered citators automatically verify whether precedent is valid, tackling one of the most tedious but critical elements of legal work. Legal aid organizations, chronically underfunded, are experimenting with landlord–tenant bots that prepare filings in minutes — a process that once consumed staff time they never had enough of.

But alongside these promising steps are the failures and risks that prove this is no silver bullet. In 2023, Mata v. Avianca became infamous when lawyers filed briefs with AI-fabricated citations that didn’t exist. In early 2025, Indiana courts sanctioned attorneys for submitting AI-generated cases that were similarly fictitious. Outside the courtroom, audits from the GAO and UNEP flagged biases in AI-driven insurance claim denials — raising the alarm that algorithmic advocacy could entrench injustice just as easily as it could expand access.

The risks don’t end there. In Canada, landlord–tenant bots that began as filing assistants are now under Competition Bureau scrutiny after evidence surfaced they may have been co-opted into AI rent-setting tools — raising the specter of algorithmic collusion. What began as a tool for tenant empowerment was quickly bent into a lever for landlords to tighten the screws.

The lesson is unmistakable: capability is accelerating faster than governance. Systems that can generate legal briefs, file disputes, or negotiate with bureaucracies already exist — but without guardrails, they risk being weaponized or misused.

Why It Matters Now

This messy frontier is exactly why Advocacy AI matters. It isn’t a theoretical tool on the horizon — it’s a practice already emerging in uneven fits and starts. Done well, it can expand equity, unclog inefficiency, and give people resilience in the face of unjust systems. Done poorly, it risks creating a two-tiered future where the wealthy use AI to sharpen their advantage while everyone else is left navigating automated gatekeepers.

That’s the knife’s edge we stand on. Advocacy AI is not about imagining what’s possible in 2050. It’s about deciding, here and now, how we shape the tools already slipping into the justice system — whether they widen the cracks, or help close them.

Social, Ethical, Economic & Environmental Lens

The idea of Advocacy AI carries a dangerous temptation: to assume that because the tool can scale cheaply, it will automatically scale fairly. History tells us otherwise. Without careful design and oversight, these systems can just as easily reinforce inequities as relieve them.

Bias and Risk of Reinforcing Inequity

The most immediate risk is bias baked into the data. If the training corpus underrepresents certain dialects, communities, or case types, the AI will perform unevenly. A bot designed to help tenants in eviction disputes, for example, might falter when the tenant is an immigrant whose paperwork is irregular, or when the landlord–tenant law differs by jurisdiction.

Organizations like OneTrust have begun piloting structured audits to measure these disparities, but the challenge is vast: how do you ensure an AI trained on case law from wealthy jurisdictions serves just as well in rural or marginalized contexts? Without correction, Advocacy AI could become a mirror of existing injustice — cheaper, faster, and just as exclusionary.
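
What would such an audit actually compute? At its simplest: success rates sliced by group, and the gap between the best- and worst-served. A minimal sketch, with an illustrative record format rather than any specific vendor's schema:

```python
from collections import defaultdict

def outcome_parity(cases: list[dict]) -> dict[str, float]:
    """Success rate per group, the core number of a disparity audit.

    Each case is a dict like {"group": "rural", "won": True}; the field
    names are illustrative, not an established audit schema.
    """
    wins: dict[str, int] = defaultdict(int)
    totals: dict[str, int] = defaultdict(int)
    for case in cases:
        totals[case["group"]] += 1
        wins[case["group"]] += int(case["won"])
    return {group: wins[group] / totals[group] for group in totals}

def max_disparity(rates: dict[str, float]) -> float:
    """Gap between the best- and worst-served groups: the headline figure."""
    return max(rates.values()) - min(rates.values())
```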

This is the uneasy reality: the very tools meant to democratize justice could calcify its inequities if left unexamined.

Building for Inclusivity

And yet, the inverse is also true: with deliberate design, Advocacy AI could bring entire populations into the justice system for the first time. Inclusivity isn’t a nice-to-have here — it’s existential.

At a minimum, that means:

- Multilingual support that covers legal registers, not just everyday speech.
- SMS- and voice-first access for people without smartphones or broadband.
- Plain-language explanations of filings, deadlines, and rights.
- Accommodations for disability and low literacy.

These aren’t features; they are lifelines. Designing for the edges ensures the system works at the center.

Economic Ripples

The labor market will not escape unscathed. The World Economic Forum estimated that by 2025, 85 million jobs would be displaced globally by AI, while 97 million new roles would emerge. The legal field is already feeling that churn. A new category — legal AI specialist — is beginning to appear, blending law, compliance, and prompt engineering.

For the U.S. alone, estimates suggest 266 million lawyer-hours could be freed each year by AI handling routine filings, discovery prep, and procedural motions. That does not mean 266 million hours of unemployment — it means a chance to redirect scarce human expertise toward complex, high-stakes litigation, negotiation, and reform. The knife-edge here is whether institutions reinvest those freed hours in access for the many, or in higher margins for the few.

Who Pays? Models and Trade-offs

Funding models will shape equity more than technology will. The candidates already on the table each cut differently:

- Publicly funded, AI-augmented legal aid, extending the public-defender principle to civil disputes.
- Pooled defense subscriptions, where tenants, workers, or consumers share the cost of collective coverage.
- Hybrid billing, with small per-case fees subsidized by public funds.
- Pure market pricing, the fastest to launch and the likeliest to shut out the people who need it most.

The trade-off is stark: efficiency without equity is exploitation by another name.

The Balance Point

Taken together, these lenses show both peril and possibility. Advocacy AI could either widen the cracks — embedding bias, deepening inequality — or it could stitch them closed, extending justice to billions who have never truly had it.

The truth is unsettling but clarifying: technology itself is neutral only in theory. In practice, it will reflect the incentives we build around it. The next decade will decide whether Advocacy AI becomes an accelerant of inequity, or a scaffolding of fairness that future generations take for granted, the way we now take for granted clean water or universal schooling.

Rollout Roadmap (2025–2035) — Anecdotes, Precedents & Stress Tests

The path from promise to practice isn’t abstract. It’s measured in the kinds of disputes people actually face: the ticket that eats into rent money, the deposit that never comes back, the benefit denied on a technicality. The rollout of Advocacy AI will succeed or fail not in conference rooms, but in the lives of people like Rosa, Ethan, Linda, and Malik.

Phase 1 (2025–2027) – Easy Wins

Every technology earns its place with early victories. For Advocacy AI, that means tackling the low-stakes but high-volume disputes that clog courts and quietly drain lives: parking tickets, airline refunds, FOIA requests.

In 2025, U.S. Department of Transportation chatbots began helping passengers claim refunds after a rule change on delayed flights. It was a small but telling proof: automation can cut through forms that once took hours.

For Rosa, it was more personal. She’s a nurse, working back-to-back double shifts. After one exhausting week, she returns to find a $150 parking ticket on her windshield — even though she had paid the meter. Fighting it the old way would mean losing hours she didn’t have. Paying it would mean skipping groceries for the week.

Instead, she pulls out her phone, snaps a photo of her receipt, and uploads it. Advocacy AI checks the jurisdiction’s rules, drafts an appeal citing the exact statute, and files electronically. Days later, the fine is overturned. Rosa doesn’t miss a shift, doesn’t sacrifice groceries, and doesn’t carry the quiet bitterness of knowing the system was stacked against her.

A small win? Yes. But to Rosa, it was oxygen. And to the public, it was proof that AI could stand with ordinary people, not just corporations.

Phase 2 (2027–2030) – Mid-Stakes Expansion

With trust earned, the scope widens: telecom billing disputes, landlord–tenant deposit recoveries, small claims. These aren’t life-or-death, but they are meaningful — hundreds or thousands of dollars that can destabilize a household.

In Canada, legal aid bots piloted in 2025 cut filing prep for tenants from hours to minutes. But the same technology also triggered scrutiny, as regulators uncovered evidence of landlords experimenting with AI-driven rent-setting tools. The lesson: capability cuts both ways.

Ethan knows this firsthand. When he moved out of his apartment, his landlord refused to return his $1,800 deposit, claiming vague “damage.” For Ethan, that money wasn’t a bonus — it was next month’s rent for the new place. Weeks passed. Emails went unanswered. The dread in his chest grew: was he about to fall behind, just because someone stronger in the system could stall him?

With Advocacy AI, he uploads his lease, inspection photos, and correspondence. The system ingests tenancy law, drafts the claim, and files with the board. Within two weeks, the deposit is ordered returned. Ethan feels more than relief — he feels vindicated. For once, the playing field didn’t tilt automatically against him.

Phase 3 (2030–2032) – Adversarial Arenas

By now, Advocacy AI will face harder tests: benefits appeals, utility disputes, immigration paperwork. These are asymmetric-power arenas, where institutions have lawyers on retainer and ordinary people face exhaustion.

The risks are real. The ABA in 2025 already warned of AI flooding courts with frivolous claims if left unchecked. Counter-AI tactics — stonewalling, delay, or automated rejection — are likely. But if designed with guardrails, this phase proves whether Advocacy AI can survive in the rough water of contested power.

Linda’s story makes the stakes clear. A disabled veteran, she lives on a razor-thin budget. When her benefits are cut off due to “missing paperwork,” panic sets in. She knows she submitted the documents. She knows the agency is wrong. But she also knows how long appeals take, and how quickly unpaid rent leads to eviction.

On the edge of despair, she tries Advocacy AI. The system scans her case, retrieves the agency’s own timestamped receipt of her paperwork, and files an appeal citing their error. It escalates automatically to ensure review. Days later, before rent is due, Linda’s benefits are restored.

She doesn’t just feel relief. She feels seen — by a system that had ignored her until a machine demanded otherwise.

Phase 4 (2032–2035) – Full-Spectrum Advocacy Short of Litigation

By the early 2030s, Advocacy AI will extend across nearly the full range of civil disputes short of courtroom litigation: tax disputes, insurance denials, multi-claim coordination. Here, the system’s ability to negotiate AI-to-AI becomes decisive.

Insurance pilots in 2025 already tested structured AI-to-AI negotiations, with humans stepping in only for outliers. Imagine scaling that to millions of claims, unclogging a process long notorious for delay and denial.

For Malik, this is not a thought experiment. A hurricane rips through his neighborhood, leaving his home battered and unlivable. When he files a claim, his insurer denies it — citing “pre-existing damage.” The subtext is clear: stall, delay, hope he gives up.

But Advocacy AI doesn’t give up. It compiles weather data showing storm severity, inspection records proving the home’s prior condition, and even testimony from neighbors. It negotiates directly with the insurer’s AI, citing precedent and compliance obligations. Within a week, the denial is reversed, and Malik has the funds to rebuild.

For him, it is more than money. It is the difference between despair and recovery. Between watching his family displaced, and knowing the system — for once — worked in their favor.

Why This Roadmap Matters

These phases aren’t just technical milestones. They are stress tests of public trust. Rosa’s ticket, Ethan’s deposit, Linda’s benefits, Malik’s claim — each is a crack where lives can splinter. Each successful resolution is a stitch, binding people back into a fabric of justice they had begun to doubt.

By 2035, if this roadmap succeeds, Advocacy AI won’t be seen as an app or a novelty. It will be understood for what it truly is: infrastructure. The quiet backbone of fairness, carrying lives that would otherwise slip through the gaps.

Technical Milestones by Phase

For Advocacy AI to evolve from scattered pilots into civic infrastructure, the technology must advance in deliberate steps. Each phase brings specific technical capabilities that make broader deployment possible. These aren’t speculative fantasies; most are already in development. What matters is sequencing them into a coherent arc.

Phase 1 (2025–2027): Foundations of Trust

The first stage is about grounding Advocacy AI in reliability. Early use cases like parking tickets and refund claims don’t demand brilliance — they demand accuracy.

In practice, that means:

- Jurisdiction-aware form selection, so the right document reaches the right office.
- Citations verified against live statute and case databases, never generated from memory.
- Deadline tracking and electronic filing with confirmation receipts.

These may sound mundane, but they are transformative. A system that files accurately, with the right citations, in the right place, every time, earns trust.
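
To make the citation requirement concrete, here is a minimal sketch of a verification gate. The `registry` and `efiling_client` objects are hypothetical stand-ins (a citator database and a court e-filing interface), not any real vendor's API; the point is the invariant that nothing is transmitted until every cited authority resolves.

```python
from dataclasses import dataclass

@dataclass
class Citation:
    reporter: str  # e.g. "F.3d"
    volume: int
    page: int

@dataclass
class DraftFiling:
    venue: str
    body: str
    citations: list[Citation]

def unresolved_citations(filing: DraftFiling, registry) -> list[Citation]:
    """Return every citation the registry cannot confirm exists."""
    return [c for c in filing.citations if not registry.lookup(c)]

def file_if_clean(filing: DraftFiling, registry, efiling_client) -> str:
    """Refuse to transmit a filing that contains any unverifiable citation."""
    bad = unresolved_citations(filing, registry)
    if bad:
        # A fabricated citation is worse than a late filing (cf. Mata v. Avianca).
        raise ValueError(f"{len(bad)} citation(s) failed verification; filing blocked")
    return efiling_client.submit(filing.venue, filing.body)  # e-filing receipt id
```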

Phase 2 (2027–2030): Scaling Across Venues

Once reliability is proven, the challenge becomes breadth. Different disputes land in different venues — small claims courts, tenancy boards, telecom regulators. Coordinating them requires new capabilities:

- Venue routing that maps each dispute to the forum with jurisdiction over it.
- Output in each venue's native format: direct e-filing where APIs exist, print-ready forms where they don't.
- Case state tracked across forums, so a deadline in one venue doesn't slip while another is active.

At this stage, Advocacy AI becomes less like a calculator and more like a paralegal with perfect recall — capable of carrying cases across multiple forums simultaneously.
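
As a sketch of the venue-routing capability, assuming each forum is wrapped in an adapter behind a common interface (the `VenueAdapter` protocol below is illustrative, not an existing library):

```python
from typing import Protocol

class VenueAdapter(Protocol):
    def accepts(self, dispute_type: str, region: str) -> bool: ...
    def submit(self, documents: dict) -> str: ...  # returns a tracking id

class DisputeRouter:
    """Send each dispute to the first adapter that claims jurisdiction.

    Adapter order encodes preference, e.g. a specialist tenancy board
    ahead of a general small-claims court.
    """

    def __init__(self, adapters: list[VenueAdapter]):
        self.adapters = adapters

    def route(self, dispute_type: str, region: str, documents: dict) -> str:
        for adapter in self.adapters:
            if adapter.accepts(dispute_type, region):
                return adapter.submit(documents)
        raise LookupError(f"no known venue accepts {dispute_type!r} in {region!r}")
```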

Phase 3 (2030–2032): Adversarial Readiness

By the time Advocacy AI enters high-stakes arenas like benefits appeals and immigration paperwork, it must be ready for resistance. Institutions will not stand still; counter-AI tactics are inevitable. Readiness means:

- Detecting stonewalling and engineered delay, and escalating automatically when deadlines slip.
- Retrieving the institution's own records (timestamps, receipts, prior rulings) and turning them into evidence.
- Recognizing automated rejections and answering them with the specific statute or regulation they violate.

This is where the system graduates from “assistant” to “advocate.” It doesn’t just file paperwork — it strategizes against opposition.
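
One way to picture the anti-stonewalling logic is a deadline watchdog that always has a next move. A minimal sketch, with an entirely hypothetical escalation ladder and grace period:

```python
from datetime import date, timedelta

ESCALATION_LADDER = [
    "formal_follow_up",
    "supervisor_review",
    "ombudsman_complaint",
    "tribunal_appeal",
]

def next_action(response_due: date, today: date, attempts: int,
                grace: timedelta = timedelta(days=3)) -> str:
    """Pick the next move once a counterparty misses its own deadline.

    The ladder and grace period are illustrative; real values would come
    from each jurisdiction's procedural rules.
    """
    if today <= response_due:
        return "wait"
    if today - response_due <= grace:
        return "send_reminder"
    # Past the grace window: climb the ladder rather than silently stall.
    return ESCALATION_LADDER[min(attempts, len(ESCALATION_LADDER) - 1)]
```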

Phase 4 (2032–2035): Full-Spectrum Negotiation

The final pre-litigation frontier is direct negotiation, where most disputes are resolved today. Advocacy AI must meet counterpart systems as equals, which demands:

- Structured AI-to-AI protocols, so offers, evidence, and concessions are exchanged in a verifiable format.
- Settlement positions grounded in precedent and compliance obligations, not bluffing.
- Automatic escalation to humans for outliers: novel facts, bad-faith counterparts, stakes above a set threshold.

At this stage, millions of disputes can be resolved in days instead of months, with courts reserved for true outliers.
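
At its barest, meeting a counterpart system as an equal is an alternating-concession loop with a hard escape hatch to human review. A toy sketch, every number an assumption:

```python
def negotiate(open_claim: float, open_offer: float,
              claimant_floor: float, payer_ceiling: float,
              concession: float = 0.05, max_rounds: int = 20) -> float | None:
    """Toy AI-to-AI settlement loop.

    Both sides concede a fixed fraction per round; positions that cross
    settle at the midpoint, and anything that fails to converge returns
    None, i.e. is escalated to human review.
    """
    claim, offer = open_claim, open_offer
    for _ in range(max_rounds):
        if offer >= claim:                               # zone of agreement
            return (claim + offer) / 2
        claim = max(claimant_floor, claim * (1 - concession))
        offer = min(payer_ceiling, offer * (1 + concession))
    return None                                          # outlier: human review

# e.g. negotiate(12_000, 4_000, 8_000, 9_500) settles in the low 8,000s
```

The design choice that matters is the last line: anything that fails to converge leaves the automated track entirely.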

Ongoing Challenge: Language Nuance

Even with these advances, one stubborn gap remains: language. 2025 studies of machine translation continue to show significant loss of nuance in legal registers. Idioms, statutory phrasing, and cultural context resist automation. No roadmap can ignore this. Addressing it will require continuous iteration, human oversight, and perhaps specialized models trained solely on legal multilingual corpora.

The cumulative picture is startling. What once seemed like the domain of science fiction — an AI that can draft, file, argue, and negotiate on behalf of ordinary people — is already piecing itself together. None of these milestones alone is world-changing. But together, they form the skeleton of a system that could, within a decade, carry disputes end-to-end outside the courtroom.

For those willing to look closely, the realization is unavoidable: this isn’t wishful thinking. It is already happening. And it is technically, institutionally, humanly doable.

Building the Ecosystem (Governance + Adoption Strategy)

Technology alone will not close the justice gap. We’ve seen too many promising tools wilt when they met the real-world gauntlet of governance, funding, and adoption. Advocacy AI will be no different. If we treat it as a consumer app, it will fragment, stall, or — worse — be captured by the very institutions it was meant to check. If we treat it as infrastructure, with the right incentives and guardrails, it can endure.

Public Infrastructure: Foundations of Equity

Some pieces must be universal. Just as governments fund public defenders, they will need to mandate AI-augmented legal aid. This isn’t charity; it’s baseline equity. Every citizen should be able to access Advocacy AI for common disputes, regardless of income.

One promising model is pooled defense subscriptions: groups of citizens (like tenants in a building or workers in a sector) contribute small fees to a shared Advocacy AI service, ensuring collective coverage. It’s the legal equivalent of pooled health insurance — cheaper, fairer, and harder for institutions to steamroll individuals.
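
The arithmetic is what makes pooling plausible. A back-of-envelope sketch, where every input is an assumption chosen for illustration:

```python
def pooled_monthly_fee(members: int, disputes_per_member_year: float,
                       cost_per_case: float, overhead: float = 0.15) -> float:
    """Break-even monthly fee for a pooled Advocacy AI subscription.

    Inputs are illustrative: e.g. a 200-unit building whose members
    average 0.4 disputes a year at $30 of compute and review per case.
    """
    expected_annual_cost = members * disputes_per_member_year * cost_per_case
    return expected_annual_cost * (1 + overhead) / (members * 12)

print(round(pooled_monthly_fee(200, 0.4, 30.0), 2))  # 1.15 (dollars per month)
```

Under these assumptions, coverage stops being a purchasing decision and starts behaving like a utility bill, which is exactly the point.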

Market Incentives: Catalyzing Innovation

At the same time, private markets must play a role. Hybrid billing models — where users pay small fees per resolved case, subsidized by public funds — can create sustainable growth without pricing out the vulnerable. Specialist registries, where vetted AI providers are certified for housing, benefits, or insurance cases, can accelerate innovation while keeping quality standards intact.

The balance is delicate: enough private incentive to drive speed and experimentation, but enough public guardrails to prevent predation.

Institutional Buy-In: Making AI a Partner, Not a Threat

Courts, bar associations, and legal institutions will resist unless Advocacy AI is framed as augmentation, not replacement. Procurement mandates can help — requiring that agencies adopt certified Advocacy AI systems for filings, discovery, or mediation. This signals legitimacy and accelerates adoption across the justice system.

The key is trust. If lawyers and judges see Advocacy AI as a way to reduce backlog and free human expertise for higher-order disputes, resistance softens. If they see it as a zero-sum replacement, the system hardens against it.

Regulatory Barriers: Clearing the Path

No rollout happens without friction. Already, the ABA’s 2025 filing rules restrict AI-drafted submissions unless they include human sign-off and disclosure. Some regions have outright banned AI filings, fearing abuse. These barriers are not trivial.

The challenge is to build governance that prevents abuse without freezing innovation. Rate limits, disclosure requirements, and evidence-linking can prevent spam filings and fabricated cases. What matters is harmonization: if every jurisdiction sets its own rules, Advocacy AI fragments into regional silos. If common standards are agreed upon, scaling becomes viable.

Metrics: Measuring What Matters

The final piece of ecosystem-building is accountability. If we can’t measure outcomes, we can’t prove value. Clear metrics must be tracked from the outset:

- Time to resolution, against the pre-AI baseline for the same dispute type.
- Cost per resolved case, both to the user and to the public purse.
- Outcome parity across income, language, and geography, so gains aren't captured by the already-advantaged.
- Filing quality: rejection rates, sanctions, and decisions later overturned.

Metrics are more than dashboards; they are political ammunition. Policymakers and funders will only back Advocacy AI if they can point to numbers showing justice made faster, fairer, cheaper.

Rolling Up Sleeves

This is the inflection point: Advocacy AI can either become another overhyped promise — celebrated at launch, forgotten in practice — or it can be woven into the fabric of governance. Getting there requires pragmatism: funding models that balance equity and efficiency, regulatory frameworks that enable without smothering, institutions that see augmentation instead of threat.

The technology is within reach. The question is whether we have the political will, the institutional imagination, and the sheer determination to treat justice not as an app, but as infrastructure.

Safeguards, Privacy & Security

Advocacy AI is powerful, but so was every technology that came before it. And history is clear: power without guardrails rarely bends toward justice. If we fail to build safeguards, the very tools meant to democratize fairness could be weaponized against the people who need them most.

Preventing Scarcity Pricing

The first danger is economic. If access to Advocacy AI is priced like a premium service, we will have created nothing new — just another legal luxury. Scarcity pricing would gut the very promise of the system, turning fairness into a subscription tier.

The countermeasure is pooled coverage. By spreading costs across groups — tenants, workers, consumers — and leaning heavily on automated preparation, we ensure access remains broad and cheap. Think less “individual retainer” and more “public utility.” The moment Advocacy AI becomes a profit-maximizing gatekeeper, its legitimacy collapses.

Preventing Abuse

The second danger is procedural. A system capable of filing thousands of cases could just as easily flood courts with spam or bad-faith claims. Without checks, Advocacy AI could be hijacked by corporations, political actors, or malicious individuals to overwhelm the very system it is meant to support.

That’s why rate limits matter — capping the number of filings an AI can make in a given window. Evidence-linking ensures that every claim is anchored in verifiable documentation, reducing frivolous filings. Bad-faith detection algorithms can flag actors who repeatedly attempt to abuse the system, locking them out before damage spreads.

It’s not paranoia; it’s prudence. The stronger the tool, the greater the incentive to bend it toward abuse.
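
These checks compose naturally. A minimal sketch of such a filing gate, with the limits stated as assumptions rather than recommended values:

```python
import time
from collections import defaultdict, deque

class FilingGate:
    """Rate limiting plus evidence-linking, per actor.

    The defaults (5 filings per 24 hours, at least one exhibit per claim)
    are assumptions for illustration, not recommended policy.
    """

    def __init__(self, limit: int = 5, window_seconds: float = 86_400.0):
        self.limit = limit
        self.window = window_seconds
        self.history: dict[str, deque] = defaultdict(deque)

    def allow(self, actor_id: str, evidence_ids: list[str],
              now: float | None = None) -> bool:
        now = time.time() if now is None else now
        if not evidence_ids:
            return False                        # unanchored claims never go out
        q = self.history[actor_id]
        while q and now - q[0] > self.window:   # forget filings outside the window
            q.popleft()
        if len(q) >= self.limit:
            return False                        # flood protection
        q.append(now)
        return True
```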

Privacy and Security

Legal disputes are among the most sensitive data humans generate. Medical records, financial statements, immigration status — all pass through these systems. If Advocacy AI is compromised, the fallout would be catastrophic.

The baseline must be zero-knowledge proofs and end-to-end encryption, so that no third party — not even the AI provider — can read the underlying evidence. Immutable logging systems like those offered by Varonis ensure every access attempt is recorded and auditable. Transparency is not optional; it is survival.
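
Immutable logging needs no exotic machinery. A generic hash-chain sketch (not any vendor's product) shows the core property: altering one entry invalidates every entry that follows.

```python
import hashlib
import json
import time

class AuditLog:
    """Append-only, hash-chained access log: tampering with one entry
    breaks every later hash, so silent edits are detectable."""

    def __init__(self):
        self.entries: list[dict] = []
        self._last_hash = "0" * 64

    def record(self, actor: str, action: str, resource: str) -> str:
        entry = {"ts": time.time(), "actor": actor, "action": action,
                 "resource": resource, "prev": self._last_hash}
        digest = hashlib.sha256(
            json.dumps(entry, sort_keys=True).encode()).hexdigest()
        entry["hash"] = digest
        self.entries.append(entry)
        self._last_hash = digest
        return digest

    def verify(self) -> bool:
        prev = "0" * 64
        for e in self.entries:
            body = {k: v for k, v in e.items() if k != "hash"}
            expected = hashlib.sha256(
                json.dumps(body, sort_keys=True).encode()).hexdigest()
            if e["prev"] != prev or e["hash"] != expected:
                return False
            prev = e["hash"]
        return True
```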

Guarding Against Over-Reliance

Finally, there is the danger of blind trust. A world where AI files every dispute without human oversight risks errors becoming systemic, invisible until too late. The American Bar Association’s 2025 rules already require human sign-off and disclosure on AI-generated filings for precisely this reason. Fail-safes must remain: random audits, disclosure mandates, escalation to human review when cases exceed predefined thresholds.
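
A sketch of what such fail-safes might look like in code. The thresholds and the audit rate are invented for illustration; they are not drawn from the ABA rules or any court's practice:

```python
import random

def needs_human_review(case_value: float, novelty_score: float,
                       value_cap: float = 5_000.0,
                       novelty_cap: float = 0.8,
                       audit_rate: float = 0.02) -> bool:
    """Route a case to a human reviewer instead of fully automated handling.

    All three thresholds are invented for illustration.
    """
    if case_value > value_cap:         # stakes too high for automation alone
        return True
    if novelty_score > novelty_cap:    # case looks unlike anything seen before
        return True
    return random.random() < audit_rate  # random audit of routine cases
```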

The aim is not to throttle progress, but to prevent fragility. Over-reliance without oversight is not empowerment — it’s abdication.

Vigilance as Civic Duty

These safeguards are not footnotes. They are the difference between Advocacy AI becoming a civic scaffold and Advocacy AI becoming another instrument of inequity. Abuse, scarcity pricing, privacy failures, blind trust — each is a known failure mode, seen in past technologies.

The challenge is not to imagine new dangers, but to have the courage to confront the familiar ones before they metastasize. If Advocacy AI is to stand as civic infrastructure, it must be defended like civic infrastructure: with vigilance, redundancy, and the assumption that anything exploitable eventually will be.

Regional Pilots & Global Adaptability

Justice does not look the same everywhere. Neither should Advocacy AI. Its future will not be shaped by a single legal system, but by dozens of experiments unfolding across the world — each revealing what’s possible, and what pitfalls to avoid.

High-Digital Environments: API-Ready Courts

In places like Estonia, Denmark, and Singapore, the ground is already fertile. Courts and agencies there have invested for years in digital-first infrastructure: API-based filings, integrated registries, seamless identity verification.

For these nations, Advocacy AI can plug in almost directly. A filing generated by AI doesn’t need to be printed, signed, and mailed; it can be transmitted, logged, and tracked through existing government APIs. In Singapore, where the Smart Nation initiative has turned digital governance into a national brand, it is easy to imagine Advocacy AI being adopted as a natural extension of state services — efficient, secure, tightly coupled with public systems.

These environments function as “early bloomers,” showing what Advocacy AI can achieve when institutions are already digital by default.

Mid-Digital Environments: Bridging Analog and Digital

Then there are Canada (British Columbia), New Zealand, and Portugal — jurisdictions with partial digitization. Some courts accept e-filings; others still drown in paperwork. Some agencies run modern case-management systems; others cling to legacy platforms.

Here, Advocacy AI must straddle worlds. It can automate e-filing where APIs exist, but it must also generate PDF forms, prepare print packages, or even schedule courier delivery for analog systems. It’s not glamorous, but it’s realistic.
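
Straddling worlds is an engineering pattern, not a mystery: try the digital channel, degrade gracefully to paper. A sketch, where `api_client` and `pdf_renderer` are hypothetical interfaces:

```python
class FilingChannel:
    """Prefer a court's e-filing API; fall back to a printable package.

    `api_client` and `pdf_renderer` are hypothetical interfaces, since no
    common court API exists across jurisdictions.
    """

    def __init__(self, api_client=None, pdf_renderer=None):
        self.api_client = api_client
        self.pdf_renderer = pdf_renderer

    def submit(self, venue: str, form_data: dict) -> dict:
        if self.api_client and self.api_client.supports(venue):
            receipt = self.api_client.file(venue, form_data)
            return {"channel": "api", "receipt": receipt}
        # Analog fallback: render the official form and queue physical delivery.
        package = self.pdf_renderer.render(venue, form_data)
        return {"channel": "paper", "package": package,
                "next_step": "print_and_courier"}
```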

British Columbia offers an instructive case: its Civil Resolution Tribunal (CRT) has already digitized small claims and tenancy disputes, creating a natural beachhead for AI. But because Canada’s broader legal infrastructure is fragmented, scaling beyond provincial boundaries remains challenging. These “bridge” environments remind us that Advocacy AI must be versatile, able to thrive in patchwork ecosystems where modernization is uneven.

Stress Tests: Large, Complex Systems

The United States and the United Kingdom present another type of proving ground. Both have well-developed legal frameworks and enormous caseloads, but also entrenched bureaucracy, uneven digitization, and political resistance.

Here, small claims courts become the crucible. If Advocacy AI can survive in the chaos of U.S. small claims — with its inconsistent procedures, overwhelmed judges, and litigants navigating without counsel — it can survive almost anywhere. Success in such environments doesn’t just prove technical capability; it proves resilience under pressure.

These are the stress tests: where Advocacy AI must show it can function even in the least forgiving systems.

Low-Digital Environments: Justice by SMS

Finally, there are the places where formal legal systems barely reach. Large parts of Africa and South Asia still rely on informal dispute resolution, SMS-based government services, and mobile-first platforms.

In 2025, legal aid pilots in East Africa demonstrated the power of SMS/USSD-capable AI: tenants sending a simple text could receive draft eviction defenses tailored to local law, without needing smartphones or broadband. For many, it was their first meaningful access to legal recourse.

These contexts show that Advocacy AI need not wait for perfect digitization. By designing lightweight, mobile-first systems, it can meet people where they are. The lesson is simple: sophistication is less important than accessibility.
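
The engineering that follows from this is spare by design: one question per message, state held server-side, replies trimmed to a single SMS segment. A toy intake flow, with made-up prompts:

```python
QUESTIONS = [
    "Reply 1 for an eviction notice, 2 for a deposit dispute, 3 for other.",
    "What date did you receive the notice? (DD/MM/YYYY)",
    "Reply YES for a draft response by SMS, or AGENT to reach a human.",
]

def handle_sms(body: str, session: dict) -> str:
    """One turn of a stateless-carrier intake dialogue.

    The session dict (stored server-side, keyed by phone number) carries
    the answers; every reply fits a single 160-character SMS segment.
    """
    step = session.get("step", 0)
    if step > 0:
        session[f"answer_{step}"] = body.strip()
    session["step"] = step + 1
    if step < len(QUESTIONS):
        return QUESTIONS[step][:160]
    return "Thanks. Your draft defence is being prepared and will arrive shortly."
```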

A Global Mosaic

Taken together, these pilots and environments reveal a mosaic. Advocacy AI is not one system, but many — adapted to the contours of each legal culture, each infrastructure, each economic reality.

What unites them is not uniformity, but possibility. Whether through API-ready courts in Estonia, hybrid tribunals in Canada, or SMS-driven pilots in Kenya, the same principle emerges: people want disputes resolved fairly, affordably, and quickly. The details differ; the human need does not.

That is the source of cautious optimism. Advocacy AI is not a parochial experiment or a startup sideshow. It is a tool already finding footholds across the globe, shaped by necessity as much as by innovation. The question is not whether it can adapt, but how fast we will let it.

Conclusion

The numbers should haunt us: 1.4 billion legal needs unmet each year. Billions of hours lost. Billions of dollars wasted in backlogs. Entire communities left to absorb injustice as a cost of daily life. And yet, the tools to change this — to recover as much as $20 billion in global legal productivity — are already here.

The question is whether we will treat Advocacy AI as what it must become: civic infrastructure. Not a gadget. Not an app. Not a boutique service for the already-privileged. But a backbone of fairness as vital as clean water, electricity, or roads. Infrastructure that people can depend on every day, invisible when it works, devastating when it fails.

This is the fork in the road: pilot now, or watch the chasm widen. If Advocacy AI is left to markets alone, it will tilt toward scarcity pricing, corporate capture, and inequity. If it is left to bureaucracy alone, it will stagnate, another promise smothered in red tape. But if we move with urgency — combining governance, safeguards, and adoption strategies — we can make it what it was always meant to be: the scaffolding of justice in a digital age.

The stakes are not abstract. Rosa’s parking ticket. Ethan’s lost deposit. Linda’s denied benefits. Malik’s insurance claim. Behind every unmet legal need is a life bent out of shape by a system too costly to engage. Advocacy AI is not about convenience. It is about dignity.

So the provocation is simple: In a world where AI could unlock $20 billion in legal productivity, will we ensure it is shared equitably — or will we allow it to harden inequity into code?

We do not get to avoid this choice. Delay is a decision. Inaction is complicity. We can seize this moment to rebuild justice as something universal and accessible, or we can watch it rot deeper into a luxury good.

The technology is already here. The need is undeniable. The only question that remains is whether we have the will to make fairness as common — and as unquestioned — as turning on the tap.

- Iarmhar

November 23, 2025

Follow on X to be notified when new essays are posted.