Ask ChatGPT, Claude, or Gemini a concrete legal question — the difference between an S-corp and a C-corp for a solo practitioner, the enforceability of a non-compete clause in California, how equitable distribution works in a specific state — and you will often see the answer cite or paraphrase a handful of named law firms. Not the biggest firms, necessarily. Not the most prestigious. The firms that happen to have published the piece of practice-area content the model found most useful.
That selection mechanism is the Generative Engine Optimization (GEO) opportunity for law firms, and it is unusually tractable. A small or mid-sized firm that publishes careful, substantive legal writing on the topics its practice areas actually cover can, within a year or two, become a named source in AI answers for those topics. The signals the models reward are closely aligned with what good legal writing already looks like. The inputs that work for other industries — reviews, schema, product content — are not the lever here. Writing is.
What follows is a walk-through of why law firms are well-suited to this, the one discipline that separates firms that get cited from firms that do not, and the common mistakes that show up in audits of legal websites.
Why law firms are structurally well-placed
Three features of legal content align unusually well with how language models weight source quality.
Topic authority matches domain expertise. Models infer authority from signals including backlink profiles, outbound citations to authoritative sources, topical depth within a narrow subject, and consistency of voice across a content corpus. Law firms, almost by definition, have narrow topical depth within their practice areas. A family law firm that has published three hundred pieces on equitable distribution, custody arrangements, and grounds for divorce in its jurisdiction produces the kind of topic coverage models treat as authoritative.
The citation chain is native to the work. Good legal content cites statutes, case law, regulations, and secondary sources. Those outbound citations are themselves a quality signal models use to evaluate a source. A piece of legal content that cites the relevant statute, the controlling appellate decision, and the model rule looks materially different to a language model than a piece of marketing content that cites nothing.
The audience question is specific. Users who ask a language model a legal question are usually asking a specific, well-scoped question: what happens if, what are my rights, how do I. Specific questions have good answers. The firms with specific, well-scoped pieces of writing about those questions are the firms that end up in the composed answer.
None of that is true by default. It becomes true when a firm commits to publishing substantive content, consistently, on the topics its practice areas cover. The firms that do that well tend to show up. The firms that rely on thin service-page copy and undifferentiated blog posts do not.
The one discipline that separates cited firms from invisible ones
The discipline is depth. Not volume, not SEO polish, not keyword density. Depth.
A piece of content that gets cited in AI answers about a legal topic tends to have several characteristics in common. It addresses a specific, well-scoped question. It is written in the firm's own voice, not a ghost-written freelance-pool voice. It cites the actual legal authorities in play and links to them where reasonable. It anticipates the reader's follow-up questions and answers them. It distinguishes itself from adjacent questions that look similar but require different analysis. It is long enough to do the topic justice and not a word longer.
In aggregate, the firms that consistently produce content of that shape build a Knowledge Depth and Sentiment & Authority profile that is difficult for a competitor to dislodge in a language model visibility audit. The firms that publish thin, undifferentiated content — or worse, run a blog on autopilot with outsourced writers who do not practice law — do not build that profile, and often end up with Recognition without Authority, which is the worst combination in the category.
Two things are worth emphasizing about this depth discipline.
It is compatible with modest volume. A firm publishing two substantive pieces per month on well-scoped topics within its practice areas will typically out-perform a firm publishing ten thin pieces per month on every keyword a content strategist suggested. The signal density per piece is what matters; more mediocre content does not compound.
It is not the same as academic writing. Legal content that gets cited by language models is written for the intended reader, which is usually a prospective client or referrer, not a law review editor. The analysis needs to be rigorous, but the tone needs to be accessible. Firms that produce treatise-grade writing aimed at peer lawyers often do less well in AI answers than firms that produce careful, accessible writing aimed at the person who is actually asking the question.
What gets measured in a law firm audit
A GEO audit of a law firm looks at the same six dimensions as any other brand audit, but certain dimensions matter more than others for legal practice visibility.
Knowledge Depth is the dimension on which most firms stand to gain the most. It measures whether the model, when asked about the firm's practice areas or the lawyers at the firm, produces a substantive and accurate description. A firm with a deep content corpus tends to score well here because the model has material to draw from.
Sentiment & Authority is the second high-leverage dimension for law firms. It tracks whether the model cites the firm as a source on category-level questions, not just in response to direct queries about the firm. This is where the citation payoff actually shows up — the difference between "ChatGPT knows the firm exists" and "ChatGPT cites the firm when asked about equitable distribution in the firm's jurisdiction."
Contextual Recall measures whether the firm surfaces in category queries. For a law firm, the question that matters is not "what does this firm do" but "who should I consult for a matter of this type in this jurisdiction." A firm that has built authority on its practice-area topics tends to get named in those category-level answers; a firm that has not does not.
Recognition and Competitive Context are usually adequate for established firms; they become weaknesses for newer or rebranded firms that have not yet accumulated citation history.
AI Discoverability is a technical layer — schema, crawl access, robots.txt — and is a blocker if it fails but not a differentiator if it works. Most law firm websites pass this check with minor corrections.
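A quick way to verify the crawl-access part of that layer: Python's standard robots.txt parser can ask whether the AI crawlers' published user-agent tokens are allowed to fetch a given article. This is a minimal sketch; the domain and URL are placeholders, and the crawler tokens should be checked against each vendor's current documentation.

```python
import urllib.robotparser

# Published AI crawler user-agent tokens at the time of writing; verify
# against each vendor's current documentation before relying on them.
AI_CRAWLERS = ["GPTBot", "ClaudeBot", "Google-Extended", "PerplexityBot"]

# Hypothetical firm domain and article URL, for illustration only.
SITE = "https://www.example-firm.com"
ARTICLE = SITE + "/insights/equitable-distribution-explained"

parser = urllib.robotparser.RobotFileParser()
parser.set_url(SITE + "/robots.txt")
parser.read()  # fetches and parses the live robots.txt

for agent in AI_CRAWLERS:
    verdict = "allowed" if parser.can_fetch(agent, ARTICLE) else "BLOCKED"
    print(f"{agent}: {verdict}")
```

If any of those come back BLOCKED, the fix is a one-line robots.txt change, not a content program.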
The common failure patterns
Firms that run their first audit tend to fall into one of three profiles.
The thin-content firm. Practice-area pages exist but are generic — a few paragraphs each, written years ago, lightly maintained. The model recognizes the firm's name but has no material to draw from when asked about practice areas. The result is adequate Recognition and weak Knowledge Depth and Authority.
The outsourced-blog firm. The firm publishes regularly, but the content is produced by an outsourced writer who does not practice law. The pieces are competent English but lack the specific authority of work written by an actual practitioner. The model treats the content as adequate but not distinctive; citation in AI answers is rare.
The restricted-content firm. The firm takes publishing seriously but restricts content behind contact forms, insists on PDF downloads instead of HTML, or serves content in a way the models cannot parse. The quality exists but is invisible. This failure mode is the most frustrating because the substance is there and the fix is usually technical.
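A crude first diagnostic for this failure mode is to request the page the way a crawler would and look at what comes back. A minimal sketch, using a hypothetical URL; a real audit inspects more than this, but the principle is the same:

```python
import urllib.request

# Hypothetical article URL, for illustration only.
URL = "https://www.example-firm.com/insights/non-compete-enforceability"

req = urllib.request.Request(URL, headers={"User-Agent": "GPTBot"})
with urllib.request.urlopen(req, timeout=10) as resp:
    content_type = resp.headers.get("Content-Type", "")
    body = resp.read().decode("utf-8", errors="replace")

print(content_type)              # want text/html, not application/pdf
print(body.lower().count("<p"))  # rough proxy for how much prose is in the raw HTML
```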
A small number of firms show up with a strong profile across the board. They share a pattern: one or two partners take content seriously, write it themselves or edit it heavily, publish in open HTML on the firm's domain, and sustain the effort over a multi-year horizon.
A practice-area content program that actually works
For a firm serious about building GEO visibility over a twelve to twenty-four month horizon, a defensible program has a handful of components.
Identify the twenty questions that drive the practice. For each practice area, what are the specific questions prospective clients actually ask in the intake meeting? Those are the topics worth publishing on. Not the abstract practice-area headers — the concrete, scoped questions that land in consultations.
Commit to practitioner-authored content. The partner or senior associate who handles the matter type should write the piece, or the piece should be written from a detailed interview with them and edited for accuracy by them. The quality differential is visible in the text and visible in how the model treats the content.
Publish in HTML on the firm's own domain. Not in a PDF. Not behind a form. The content should be accessible to any crawler, including AI crawlers, with relevant schema (Article, LegalService, FAQPage where it fits) marking up the authorship and topic.
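As a concrete illustration, here is a minimal JSON-LD sketch built from the schema.org Article and LegalService types. The headline, names, URL, and dates are placeholders; adjust everything to match the actual page before publishing.

```python
import json

# Illustrative JSON-LD for a practice-area explainer. The types (Article,
# LegalService) are schema.org vocabulary; every value below is a placeholder.
article_schema = {
    "@context": "https://schema.org",
    "@type": "Article",
    "headline": "How Equitable Distribution Works in New Jersey",
    "author": {"@type": "Person", "name": "Jane Doe", "jobTitle": "Partner"},
    "publisher": {
        "@type": "LegalService",
        "name": "Example Firm LLP",
        "url": "https://www.example-firm.com",
    },
    "datePublished": "2025-01-15",
    "dateModified": "2025-04-02",  # bump on each quarterly review
}

# The tag to embed in the page's <head>.
print('<script type="application/ld+json">')
print(json.dumps(article_schema, indent=2))
print("</script>")
```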
Keep content current. Law changes. A piece written before a statute was amended and never updated is worse than no piece at all — it teaches the model outdated law, which can then be cited in answers to current questions. A quarterly review cadence for the existing content corpus catches the drift.
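That cadence is easy to support mechanically. A minimal sketch, assuming the site publishes a standard sitemap.xml with lastmod dates: list every piece whose last modification predates the review window.

```python
import urllib.request
import xml.etree.ElementTree as ET
from datetime import date, timedelta

# Hypothetical sitemap URL, for illustration only.
SITEMAP = "https://www.example-firm.com/sitemap.xml"
NS = {"sm": "http://www.sitemaps.org/schemas/sitemap/0.9"}
cutoff = date.today() - timedelta(days=90)  # roughly one review quarter

with urllib.request.urlopen(SITEMAP) as resp:
    tree = ET.parse(resp)

# Flag every page whose <lastmod> predates the review window.
for url in tree.findall("sm:url", NS):
    loc = url.findtext("sm:loc", namespaces=NS)
    lastmod = url.findtext("sm:lastmod", namespaces=NS)
    if lastmod and date.fromisoformat(lastmod[:10]) < cutoff:
        print(f"stale: {loc} (last modified {lastmod})")
```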
Earn citations, do not chase backlinks. Good legal content tends to be cited by other legal content, by trade publications that cover the practice area, and by adjacent firms writing on related topics. Those citations are what models treat as authority signals. Link-building campaigns, in the 2018 SEO sense, do not replicate the same signal.
What to stop doing
Three habits from the pre-GEO legal marketing playbook are worth interrogating.
Stop treating attorney bio pages as the marketing centerpiece. Bio pages are necessary for Recognition, but they are not what gets cited. The practice-area and topic-specific content is what the model uses when composing an answer. A firm that over-invests in bios and under-invests in topical content has the ordering exactly wrong for the AI-answer era.
Stop relying on press mentions as a proxy for authority. A ranking in a legal directory or a mention in a trade publication is a signal, but a weaker one than being the firm that actually wrote the authoritative explainer on the topic. Paid awards and directory listings, historically a large share of legal marketing spend, produce lower returns than the same dollars put into content.
Stop treating the firm blog as a marketing afterthought. The blog, if it is anything, is the firm's primary GEO asset. It deserves editorial attention at a level comparable to what the firm would spend on a major matter. Firms that treat it that way show up in AI answers; firms that treat it as a marketing checkbox do not.
The patience question
Building GEO visibility in a legal practice is a multi-year project. The signals compound, but slowly. A firm that starts a serious practice-area content program in Q2 of one year will usually not see the compounding effect until Q3 or Q4 of the following year, and the full payoff is often a two to three year curve.
That horizon is worth setting explicitly with firm leadership. The payoff, when it lands, is durable in a way that paid channels are not — being cited by name in category-level AI answers about your practice areas produces a steady inflow of prospective clients at the top of the funnel, with the language model doing the qualification work. Firms that establish that position early in the AI-answer era tend to hold it.
For the broader framework on how AI visibility is measured, see What Is AI Brand Visibility? A 2026 Primer. For the accounting-and-professional-services cousin of this discussion, see GEO for Accounting and Professional Services.
If you want to see how the five major language models currently describe your firm — and where the Knowledge Depth and Authority gaps actually sit — you can run an audit in about two minutes, free for seven days, no credit card required.
See how AI describes your brand
BrandGEO runs structured prompts across ChatGPT, Claude, Gemini, Grok, and DeepSeek — and scores your brand across six dimensions. Two minutes, no credit card.