Programmatic SEO Strategies for Landing Pages
What programmatic SEO is (and what it isn’t)
Programmatic SEO (often shortened to pSEO) is a publishing system for creating landing pages at scale—hundreds or thousands of highly specific pages—by combining structured data with templated pages and a repeatable rollout process.
The goal isn’t “make more pages.” The goal is: match long-tail search intent with pages that are genuinely useful, and do it in a way your team can maintain without drowning in manual content ops.
Definition: templates + structured data + scalable publishing
pSEO works when you treat your website like a product: you model information as data, design a page type around user intent, and then generate pages from that dataset with consistency and quality controls.
Templates: A modular page layout (sections/blocks) designed for a specific intent (e.g., “compare,” “find a provider,” “browse locations,” “pricing by plan”). Good templates aren’t one-size-fits-all—they use conditional modules so pages change meaningfully based on what data exists.
Structured data: A dataset with clear entities and attributes (and relationships) that can power unique on-page content: specs, pricing, availability, ratings, feature flags, locations, integrations, benchmarks, “vs” matchups, etc.
Scalable publishing: A controlled workflow that handles generation, internal links, indexation rules, and QA—so you can ship in cohorts and avoid flooding Google with low-value URLs.
Put differently: pSEO is database-driven content + intent-driven template design + operations discipline. If any one of those is missing, pSEO usually turns into a site-wide quality problem.
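To make those three ingredients concrete, here is a minimal sketch of the loop in Python: a template designed around one intent, filled from structured rows. The field names (service, city, provider_count) are illustrative assumptions, not a required schema.

```python
# Minimal sketch of the pSEO loop: structured data in, templated pages out.
# Field names here are illustrative, not a required schema.

def render_page(template: str, row: dict) -> str:
    """Fill a page template from one dataset row."""
    return template.format(**row)

TEMPLATE = "{service} in {city}: compare {provider_count} providers"

rows = [
    {"service": "Payroll software", "city": "Austin", "provider_count": 14},
    {"service": "Payroll software", "city": "Denver", "provider_count": 9},
]

pages = [render_page(TEMPLATE, row) for row in rows]
```

In practice the "template" is a full page layout with conditional modules, and the "rows" come from a governed dataset; the shape of the system, however, stays this simple.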
pSEO vs. “AI spam” and doorway pages
A lot of people hear “programmatic SEO” and think “auto-generated pages.” That’s where projects get into trouble. pSEO is not a loophole—it’s a way to publish at scale without sacrificing usefulness.
What pSEO is:
Intent-matched pages that answer a specific query pattern (and are meaningfully different from each other).
Data-backed content users can verify and act on (tables, comparisons, filters, availability, pricing, specs, real differences).
Unique value per URL because the underlying inputs differ—not because you swapped city names in a paragraph.
Indexation-safe scaling (you don’t publish 50,000 URLs and “hope Google figures it out”).
What pSEO isn’t:
Doorway pages: pages made primarily to rank for variations and funnel users to the same destination with no distinct value per page.
“Swap-a-word” templates: identical copy with a few tokens changed (city, industry, tool name) and nothing else that’s uniquely helpful.
Thin content at scale: pages that look complete but provide little beyond a headline, a generic intro, and a CTA.
Uncontrolled faceted index bloat: letting filters, parameters, or combinations create millions of crawlable URLs with near-identical content.
Here’s a practical litmus test: if you removed the templated prose, would the remaining data still make the page worth bookmarking? If not, you’re probably building pages that exist to rank—rather than pages that deserve to rank.
When pSEO works best: marketplaces, directories, comparisons
pSEO shines when your business naturally has repeatable “units” of information people search for—and you can express those units as data.
Marketplaces & directories: “plumbers in Austin,” “best CRM consultants,” “coworking spaces in London.” These win when each page has inventory depth, sorting, filters, reviews, and clear next steps.
Comparisons & alternatives: “Notion vs Confluence,” “Shopify alternatives,” “best email tool for startups.” These work when you can standardize feature/pricing data and add real decision support (tables, use cases, constraints).
Product/feature matrices: “API monitoring with OpenTelemetry,” “SOC 2 compliant password managers,” “invoice software with multi-currency.” These win when attributes are explicit and verifiable.
Location and service combinations: “{service} in {city},” “{product} for {industry}.” These only work when you can add localized/segment-specific value (not just a location token).
If your niche doesn’t have consistent attributes, or every page would rely on generic copy to fill space, pSEO is the wrong tool. But if you have (or can build) a solid dataset and a template that delivers outcomes—compare, shortlist, choose, contact—then pSEO becomes a durable growth system, not a content stunt.
The pSEO prerequisites checklist (before you build anything)
Programmatic landing pages can be a growth lever—or an expensive way to publish thousands of URLs that don’t rank, don’t convert, and clog your crawl budget. Before you touch templates, run this go/no-go checklist. If you can’t confidently pass most of it, fix the inputs first (keywords, intent, data), or you’ll just scale problems.
1) Validate keyword patterns (modifiers, entities, combinations)
pSEO only works when you have keyword patterns that scale cleanly. A “pattern” is a repeatable query structure you can map to a dataset and a consistent page type.
Common scalable patterns (choose 1–2 to start):
[service] + [location] (e.g., “payroll software in Texas”)
[category] + [feature] (e.g., “CRM with workflow automation”)
[brand] + alternative (e.g., “Notion alternatives”)
[X] vs [Y] (e.g., “Stripe vs Square”)
best [X] for [Y] (e.g., “best invoicing tool for freelancers”)
Go / no-go checks:
You can name the entity behind the URL. If you can’t define what a page “is,” you can’t model it. Examples:
Location pages: entity = location (city/state), attributes = industries served, regulations, pricing, availability
Comparison pages: entity = pair of products, attributes = features, pricing, integrations, reviews
Directory pages: entity = item (tool/provider/listing), attributes = category, tags, ratings, regions
The pattern creates enough unique, real pages. A pattern that produces 30 pages is not pSEO—just a small landing page set. A pattern that produces 300,000 pages is dangerous unless you have the data and internal links to support it.
There’s meaningful long-tail demand. Don’t get tricked by one head term’s volume. You want a distribution where hundreds/thousands of variants have some intent and aren’t all zero-volume noise. Validate with a keyword tool, then spot-check in the SERP (next step).
The modifiers actually change intent. “Cheap,” “near me,” “for teams,” “open source,” “HIPAA compliant” can indicate different needs. If modifiers don’t change the user’s job-to-be-done, you may end up creating near-duplicates.
Decision: If you can’t define (a) the entity, (b) the URL rule, and (c) why the modifier changes what users need, it’s a no-go pattern—don’t scale it.
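The decision above can be encoded as a simple gate. This sketch uses the page-count bounds mentioned earlier (30 is too small to be pSEO, 300,000 is dangerous without supporting data and links) as defaults; tune them to your own site and data.

```python
# Go/no-go check for a keyword pattern: it passes only if it names an entity,
# defines a URL rule, and yields a sane page count. The 30 / 300_000 bounds
# mirror the guidance in the text; adjust them per project.

def pattern_go_no_go(entity, url_rule, page_count: int) -> bool:
    if not entity or not url_rule:
        return False                  # can't model what you can't define
    if page_count <= 30:              # just a small landing page set, not pSEO
        return False
    if page_count >= 300_000:         # index-bloat risk without data and links
        return False
    return True
```

Running every candidate pattern through a gate like this before any template work keeps the "no-go" decision explicit instead of aspirational.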
2) Confirm search intent per pattern (SERP sanity check)
Most pSEO projects fail here: teams validate volumes, then generate pages that don’t match search intent. Your job is to confirm that Google is rewarding the same page type you plan to ship.
Run a lightweight SERP analysis on a representative sample (at least 10–20 queries across head, mid, and long tail) to validate keyword patterns before scaling.
What to check in each SERP:
Dominant page type: are the top results landing pages, listicles, product pages, category pages, marketplaces, or UGC/forum threads?
Content format signals: do ranking pages have tables, filters, “top picks,” maps, pricing modules, or deep editorial guides?
Local intent: map pack present? “Near me” behavior? If yes, your pages may need LocalBusiness signals, addresses, or a different approach.
Brand bias / aggregator lock: if the SERP is dominated by major directories (Yelp, G2, Capterra) or Google-owned surfaces, you’ll need a sharper value prop and stronger internal linking to compete.
Evidence of templated ranking: do you see many similar pages from the same domain ranking across variants? That’s a green flag—if those pages look genuinely useful (not swap-a-word text).
Go / no-go rule: If your intended programmatic landing pages don’t resemble what’s already ranking (structure, modules, depth), treat it as a no-go until you change the page type—or the keyword set.
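One way to make the "dominant page type" check less subjective: label the top results for each sampled query (manually or via any rank-tracking export; collection is not shown here), then require a clear majority before calling the pattern a go. The labels and the 60% threshold below are assumptions to adapt.

```python
# Quantify the dominant page type across sampled SERPs. Labels ("landing",
# "listicle", etc.) and the threshold are illustrative choices.
from collections import Counter

def dominant_page_type(serp_samples, threshold=0.6):
    """serp_samples: one list of page-type labels per sampled query."""
    counts = Counter(label for serp in serp_samples for label in serp)
    label, hits = counts.most_common(1)[0]
    share = hits / sum(counts.values())
    return (label, share) if share >= threshold else (None, share)

samples = [
    ["landing", "landing", "listicle"],
    ["landing", "landing", "landing"],
    ["landing", "listicle", "landing"],
]
```

If the function returns `None`, intent is mixed across the sample and the pattern needs to be split or shelved.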
3) Define the page’s job: rank, convert, or both
Be brutally clear about what each programmatic set is supposed to do. pSEO pages often fail because they try to do everything with the same template.
Rank-first pages: goal is to capture long-tail demand and introduce users to your solution. These need strong relevance + usefulness (definitions, comparisons, data modules, FAQs) and clear next steps.
Convert-first pages: goal is to drive signups/leads fast. These need crisp positioning, proof (logos, testimonials), and minimal friction—but still must satisfy intent or they won’t rank.
Rank + convert pages: hardest mode. Only attempt if you can reliably support intent with real data and still keep a focused CTA.
Practical mapping: Write one sentence per template: “When someone searches [pattern], they want [outcome]. This page will help by [primary value], and then drive action via [CTA].” If you can’t write that sentence, your template brief isn’t done.
4) Ensure you can add unique value beyond the template
Here’s the uncomfortable truth: Google doesn’t reward pages for being generated—it rewards pages for being useful. Templates are the container. Your data and modules are the product.
Minimum uniqueness requirements (set these as hard gates):
Unique primary data: each page must have a minimum number of non-empty, query-relevant fields (e.g., pricing, availability, feature coverage, ratings, supported integrations).
At least one “reason to exist” module: a table, filter, comparison summary, pros/cons, benchmarks, screenshots, or a curated shortlist—something the user can act on.
SERP-aligned FAQs: 3–6 questions that show you understand the query (not generic marketing FAQs). Pull from “People also ask,” support tickets, and on-site search.
Conditional sections: modules should appear only when data exists, so pages don’t look identical (e.g., show “Compliance” only for regulated industries; show “Top integrations” only when you have verified integrations).
Red flags that mean “no-go (for now)”:
Your dataset is mostly empty (lots of nulls) or copied from the same source without differentiation.
Every page would share the same intro with only the H1 swapped—classic thin/duplicate pattern.
You can’t explain why a user would bookmark or compare your page versus what already ranks.
Go / no-go framework summary:
Pattern: repeatable + maps to a clear entity and URL rule
Intent: SERP shows your page type can rank for that pattern
Job: each template has a single, explicit purpose (rank/convert/both)
Value: you have enough unique fields + modules to avoid thinness at scale
If you pass these prerequisites, you’re ready to design the system (data model → templates → internal links → rollout). If you don’t, don’t “just publish a smaller batch” and hope—it’ll still teach Google that your programmatic landing pages aren’t the answer.
Find keyword patterns that scale (without cannibalization)
In programmatic SEO, keyword research isn’t about finding “a few big keywords.” It’s about finding repeatable query patterns you can satisfy with a dataset and a modular template—then making sure every URL has a clear job and doesn’t compete with its neighbors.
The win looks like this: you publish hundreds (or thousands) of pages that each match a specific intent. The failure mode is just as common: you generate pages that are too similar, overlap in intent, and trigger cannibalization—or worse, you build pages Google treats as doorway-like because they don’t add unique value.
Step 1: Start with pattern types (not individual keywords)
Most scalable pSEO sets fall into a handful of patterns. You’re looking for combinations where (1) the intent is stable, (2) the “entity slots” can be filled from data, and (3) the SERP rewards landing pages (not just blog posts).
[Service + Location] → classic location pages (e.g., “payroll software in Austin”, “IT support in Toronto”).
[Category + Feature] → directory/collection pages (e.g., “CRM with SMS”, “project management with Gantt charts”).
[Brand + Alternative] → high-intent mid/late-funnel pages (e.g., “Notion alternatives”, “Asana alternative for agencies”).
[X vs Y] → head-to-head comparison pages (e.g., “HubSpot vs Salesforce”).
[Best X for Y] → curated list intent (e.g., “best invoicing software for freelancers”).
Your job is to pick 1–2 patterns to start, not five. Each additional pattern multiplies template complexity, internal linking rules, and QA surface area.
Step 2: Validate intent on the SERP (a quick sanity check)
Before you generate anything, verify that the SERP actually rewards the page type you plan to create. A pattern can look perfect in a keyword tool and still fail because the SERP is dominated by a different format (guides, videos, job boards, Google local pack, etc.).
Use SERP analysis to validate the pattern before scaling, and check:
Page type match: Are top results landing pages, lists, directories, or editorial posts? If your planned template doesn’t match, expect slow traction.
Intent stability: Do results look consistent across multiple keywords in the pattern (e.g., 10 locations or 10 features), or does intent change per modifier?
Dominant content elements: Are rankings driven by tables, filters, maps, pricing, reviews, “top picks,” or deep explanations? Those are clues for what your template must include.
Brand vs non-brand mix: If the SERP is mostly brands (or mostly aggregators), you’ll need stronger differentiation and authority to compete.
If the SERP intent is inconsistent, that’s a red flag. You may need to split the pattern into smaller sets (separate templates) or abandon it.
Step 3: Estimate true coverage (and avoid “inflated” page counts)
pSEO math is seductive: “We have 200 cities and 50 services, so that’s 10,000 pages.” In reality, only a subset deserves an indexable URL.
Use this quick coverage framework:
Head vs long tail distribution: Often, 10–20% of combinations drive 80–90% of the opportunity. Start with the high-signal subset.
Data completeness threshold: If your dataset can’t populate the key modules (pricing, availability, providers, feature support, ratings), those combinations shouldn’t ship as indexable pages.
SERP competitiveness: Some modifiers are “winner-take-most.” If you can’t realistically compete yet, gate them for later cohorts.
Intent uniqueness: If multiple combinations collapse into the same intent, don’t create multiple URLs (this is where cannibalization begins).
A practical starting target for most SMBs: 50–200 pages in the first cohort across one pattern, then expand only after you see indexation + early impressions.
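The coverage framework can be run as a filter over the full combination matrix. In this sketch, `demand` stands in for keyword-tool data and `complete_fields` for dataset completeness; both field names and thresholds are illustrative.

```python
# Score each combination and ship only the high-signal subset as cohort one.

def first_cohort(combos, min_demand=10, min_fields=6, cohort_cap=200):
    eligible = [
        c for c in combos
        if c["demand"] >= min_demand and c["complete_fields"] >= min_fields
    ]
    # Start with the highest-demand subset, not the full matrix.
    return sorted(eligible, key=lambda c: c["demand"], reverse=True)[:cohort_cap]

combos = [
    {"slug": "payroll/austin", "demand": 320, "complete_fields": 9},
    {"slug": "payroll/waco", "demand": 4, "complete_fields": 8},     # low demand
    {"slug": "payroll/dallas", "demand": 210, "complete_fields": 3}, # thin data
]
```

The "200 cities × 50 services = 10,000 pages" math collapses quickly once both gates apply, which is exactly the point.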
Step 4: Prevent cannibalization with cluster rules (one intent per URL)
At scale, you don’t “avoid cannibalization” by hoping Google figures it out. You avoid it by designing a system where each page has a unique role in the topic map.
That starts with keyword clustering: group terms by intent and map each cluster to exactly one canonical target (one URL, one template). If you want a scalable approach, build cleaner topic clusters to avoid pSEO cannibalization before you generate the URL set.
Use these guardrail rules:
Rule 1: One primary keyword cluster → one indexable URL. If “payroll software Austin” and “payroll services Austin” have the same SERP and intent, pick one page as the destination and treat the other as a variation (on-page copy + synonyms), not a separate page.
Rule 2: Don’t let templates compete. If you have a “/locations/” template and a “/industries/” template, decide which one owns “Service + Location + Industry” intent. Otherwise you’ll end up with three weak pages instead of one strong one.
Rule 3: Enforce a single canonical per entity combination. If “X vs Y” exists, don’t also create “Y vs X” as a separate indexable page. Choose one canonical and redirect or canonicalize the duplicate.
Rule 4: Use hierarchy, not duplication. If a “Service + Location” page exists, your “Service + Location + Neighborhood” pages should only be indexable if they have truly distinct intent and content depth. Otherwise they become doorway-ish satellites.
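Rules 1 and 3 lend themselves to automation. The sketch below collapses "X vs Y" and "Y vs X" into a single canonical slug (alphabetical ordering is one arbitrary but deterministic choice) and fails loudly when a keyword gets assigned to two URLs.

```python
# One intent cluster -> one indexable URL, and one canonical slug per pair.

def canonical_vs_slug(a: str, b: str) -> str:
    first, second = sorted([a.lower(), b.lower()])
    return f"/compare/{first}-vs-{second}/"

def assign_cluster_urls(clusters) -> dict:
    """clusters: {canonical_url: [keywords...]}; enforces one URL per keyword."""
    keyword_to_url = {}
    for url, keywords in clusters.items():
        for kw in keywords:
            if kw in keyword_to_url:
                raise ValueError(f"'{kw}' mapped to two URLs: cannibalization risk")
            keyword_to_url[kw] = url
    return keyword_to_url
```

Running this before URL generation turns "hope Google figures it out" into a build-time error.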
Canonical targets: when multiple URLs are inevitable
Sometimes you do need multiple URLs that are closely related (especially with location pages and comparisons). The key is to be explicit about the preferred page and why.
Comparison pages: Keep “X vs Y” indexable; treat “X versus Y,” “X compared to Y,” and “Y vs X” as duplicates that canonicalize (or 301) to the chosen URL.
Location pages: If you have city and state pages, ensure each has a distinct purpose:
State page = overview + directory + internal links to cities.
City page = local proof, availability, pricing ranges, case studies/testimonials, and FAQs tied to that city.
Feature vs use-case pages: If “CRM with SMS” and “CRM for sales texting” overlap, pick one primary and fold the other language into the same page.
Practical “no-go” signals (don’t scale these patterns yet)
SERP intent is mixed across sampled modifiers (a blend of listicles, definitions, and local-pack results). You’ll struggle to build one template that wins.
Your data can’t differentiate pages beyond swapping the H1 (no pricing, no availability, no inventory, no comparative attributes, no unique local facts).
The pattern creates near-duplicates by design (e.g., too many synonyms or permutations that map to the same result set).
You’re relying on facets to create indexable pages without strict rules (a fast path to crawl traps and index bloat).
Bottom line: scalable pSEO keyword sets are engineered. Lock down the pattern, prove intent on the SERP, model coverage realistically, and use keyword clustering rules to assign one intent per URL. That’s how you scale landing pages without turning your site into a self-competing mess.
Choose (and maintain) the right data sources
Programmatic SEO lives or dies on data quality. Templates can scale output, but only a solid dataset can scale usefulness. If your pages feel like “swap the city name” clones, it’s usually not a writing problem—it’s a coverage, accuracy, and data freshness problem.
Before you generate hundreds (or thousands) of landing pages, treat your dataset like product infrastructure: define what’s true, what’s required, what can be missing, and how it stays current. That’s how you avoid thin content, stale pages, and rankings that never stick.
1) Pick SEO data sources you can actually maintain
Most teams can generate pages. Few can keep them correct six months later. When evaluating SEO data sources, prioritize maintainability over novelty.
Internal databases (best for defensibility): Your CRM, product DB, inventory, usage data, pricing, availability, reviews, tickets. This is often the most “unique value” you can publish because competitors can’t copy it.
Vendor/partner APIs (best for freshness): Great when real-time updates matter (locations, pricing, listings, job posts). Watch rate limits, uptime, and terms.
Spreadsheets / Airtable (best for MVP): Perfect for launching a first cohort. The risk is governance—without validation rules, spreadsheets turn into silent data debt.
Scraped data (only if verified): Scraping can bootstrap coverage, but scraped-and-published is where pSEO gets reckless. If you scrape, add verification, citations where appropriate, and a process for removals/changes.
Decision rule: if you can’t explain who owns the dataset, how it updates, and how errors get fixed, don’t scale it into indexed pages.
2) Define “good enough” data requirements (coverage, uniqueness, accuracy)
To rank, your pages need more than a template—they need distinct information that matches intent. Set minimum requirements per page type before you publish.
Coverage: Do you have enough rows (entities) to justify the project? And enough fields (attributes) to avoid thin pages?
Uniqueness: Which fields will vary meaningfully across pages? If the only variable is the H1, you’re building a doorway-page factory.
Accuracy: Can you trust the data? Wrong pricing, wrong availability, wrong address = bad conversions and reputational damage, not just SEO risk.
Data freshness: How quickly does the real world change compared to your update cadence? “Best restaurants” and “software pricing” go stale fast; “definitions” don’t.
A practical approach is to create a field-level requirements matrix per template:
Required: pages won’t publish without it (e.g., name, category, primary feature set, location, slug).
Recommended: pages can publish, but are flagged for enrichment (e.g., pricing, screenshots, benchmarks, reviews).
Optional: used to unlock conditional modules (e.g., integrations list, “works with” data, FAQs, comparison candidates).
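The three tiers translate directly into a publish decision per row. The tier contents below are examples pulled from the list above; set your own per template.

```python
# Field-level requirements matrix as a publish gate. Tier lists are examples.
REQUIRED = ["name", "category", "location", "slug"]
RECOMMENDED = ["pricing", "reviews"]

def publish_decision(row: dict) -> str:
    if any(not row.get(f) for f in REQUIRED):
        return "hold"             # don't publish without required fields
    if any(not row.get(f) for f in RECOMMENDED):
        return "publish-flagged"  # ship, but queue for enrichment
    return "publish"
```

A "publish-flagged" queue is what keeps "Recommended" from silently becoming "Optional" over time.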
3) Model your dataset like a search engine: entities, entity attributes, and relationships
Programmatic landing pages need a clean data model so Google (and users) can understand what each URL represents. This is where structured data starts: not schema markup yet, but your underlying tables and relationships.
Think in three layers:
Entities: the “things” you can build pages about (e.g., City, Service, Tool, Job, Provider, Integration).
Entity attributes: properties that describe the entity (e.g., city population, service price range, tool features, provider response time). These attributes should fuel on-page modules like tables, pros/cons, “what you get,” and FAQs.
Relationships: how entities connect (e.g., Tool ↔ Integration, Service ↔ City, Provider ↔ Category). Relationships are the backbone of internal links, “related” modules, and comparisons.
What becomes a URL? Use the entity/relationship model to define URL “units” that map to a single, stable intent. Examples:
Single-entity pages: /tools/notion/ (Tool entity)
Two-entity combination pages: /plumbers/austin-tx/ (Service + City)
Relationship pages: /integrations/slack/asana/ (Integration relationship)
Comparison pages: /compare/asana-vs-trello/ (Tool vs Tool relationship)
Keep combinations honest: the more dimensions you stack (service + city + “cheap” + “24/7” + neighborhood), the more you risk generating pages that have no distinct content and no real search demand.
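The entity/relationship model can start as plain tables, with URL builders deriving one slug per stable intent unit. The paths here echo the examples above; everything else is an illustrative assumption.

```python
# Entities as tables, relationships as tuples, URLs derived from both.
tools = {"notion": {"name": "Notion"}, "asana": {"name": "Asana"}}
integrations = [("slack", "asana")]  # Tool <-> Integration relationship

def tool_url(tool_id: str) -> str:            # single-entity page
    return f"/tools/{tool_id}/"

def integration_url(a: str, b: str) -> str:   # relationship page
    return f"/integrations/{a}/{b}/"

urls = [tool_url(t) for t in tools] + [integration_url(a, b) for a, b in integrations]
```

Because relationships are explicit, the same tables later power internal links and "related" modules without a second data model.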
4) Design the dataset to power modular, non-thin pages
Your template will only be as good as the fields it can render. If you want pages that feel genuinely specific, your dataset needs to support modules that answer real questions—not just headings.
High-leverage fields that tend to improve usefulness:
Comparable metrics: price ranges, response times, delivery windows, ratings, review count, availability, SLAs.
Feature-level detail: not “has feature,” but how (limits, tiers, requirements, compatibility).
Evidence fields: sources, “last verified” timestamps, methodology notes, citations, policies.
Local/context fields: service area, regulations, seasonality, typical timelines, neighborhood coverage.
Decision helpers: who it’s best for, alternatives, “if you care about X, choose Y,” common pitfalls.
This is also where you can build “defensibility” into pSEO: proprietary scoring, internal benchmarks, aggregated insights, or curated comparisons based on your product usage data (when compliant and anonymized).
5) Governance: prevent silent decay with freshness, validation, and audit trails
Scaling pages without data governance is how you end up with thousands of outdated URLs, inconsistent claims, and support tickets asking why your site is wrong. Put basic guardrails in place early.
Update cadence by field: “pricing weekly,” “availability daily,” “feature list monthly,” “company description quarterly.” Tie cadence to data freshness requirements, not convenience.
Validation rules: enforce data types, allowed values, and ranges (e.g., price cannot be negative; state must be from a known list; coordinates must be valid).
Completeness scoring: calculate a per-row score so you can decide whether to publish, enrich, or hold back.
Error handling: if an API fails or a required field goes missing, decide whether the page should (a) show a fallback module, (b) be temporarily noindexed, or (c) return a 404/410.
Audit trails: store “last updated,” “source,” and “change history” for each entity. This protects you operationally and makes debugging ranking drops far faster.
Pro tip: add a visible “Last updated” or “Data verified on” element when it improves trust. It won’t magically rank a page, but it can increase conversions and reduce bounce when users care about recency.
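Completeness scoring and error handling combine into one decision per row: publish, noindex, or remove. The field weights and thresholds below are illustrative, not prescriptive; the structure is what matters.

```python
# Per-row completeness score driving the publish / noindex / remove decision.
FIELD_WEIGHTS = {"pricing": 3, "availability": 3, "reviews": 2, "description": 1}

def completeness(row: dict) -> float:
    total = sum(FIELD_WEIGHTS.values())
    earned = sum(w for field, w in FIELD_WEIGHTS.items() if row.get(field))
    return earned / total

def page_action(row: dict) -> str:
    score = completeness(row)
    if score >= 0.8:
        return "publish"
    if score >= 0.5:
        return "noindex"   # keep live for users, hold back from the index
    return "remove"        # 404/410 until the data recovers
```

Weighting fields (pricing and availability count triple here) encodes the idea that not all missing data is equally damaging.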
6) A quick preflight checklist before you scale publishing
Before generating the next 500 URLs, run this sanity check on your dataset:
Do we have enough unique fields to make each page materially different (not just a swapped keyword)?
Do we have an owner for each major dataset and a documented update cadence?
Can we model entities and relationships cleanly so one URL maps to one intent?
Do we have validation rules to prevent broken pages (missing required fields, invalid values)?
Can we trace every key value back to a source (internal DB, API, verified collection)?
Do we have a plan for freshness (what happens when data changes or expires)?
Get this right, and templates become a force multiplier instead of a liability: your pages won’t just exist at scale—they’ll stay accurate, differentiated, and worth indexing.
Design page templates that can rank (and convert)
In programmatic SEO, page templates are your “publishing engine.” But ranking doesn’t come from the template existing—it comes from the template consistently producing pages that match intent, demonstrate unique information value, and support solid on-page SEO without looking like a thousand copy-paste variants.
The winning approach is modular: build a library of reusable content modules, then use conditional logic so each page renders a different mix based on what your dataset actually knows. That’s how you avoid thin pages and lift conversion rate at the same time.
Template anatomy: the repeatable “rank + convert” skeleton
Most high-performing pSEO landing pages follow a similar anatomy. Use this as a baseline, then adjust modules by page type (location, comparison, integration, directory listing, etc.).
H1 + promise: The exact query match, plus a clear outcome.
Intro (2–4 sentences): Confirm intent, preview what’s on the page, and mention what makes your data useful.
Primary data module: The “main reason this page exists” (table, list, comparison block, directory results, pricing snapshot).
Secondary value modules: Deepen usefulness (filters, pros/cons, use cases, benchmarks, screenshots, availability, methodology).
FAQs: Answer predictable objections and long-tail questions (and earn more SERP real estate when combined with schema later).
Conversion modules: CTA, lead capture, “next step” flow, trust signals.
Related pages: Keep users moving (and help crawlers discover the programmatic set).
If you only nail one thing: ensure the primary data module is genuinely helpful and not generic filler. Templates don’t rank. Useful modules do.
Block-based templating: build modules, not “one big page”
Think in content modules you can mix and match. This prevents every URL from looking identical and lets you scale without rewriting the entire page type.
Core modules (usually required)
Hero (H1 + summary + CTA)
Main table/list (your dataset output)
Methodology or “How we calculate this” (short, but specific)
FAQs (SERP-aligned)
Optional modules (conditional)
Filters (price range, features, availability, rating, size)
Comparison widget (“X vs Y” or “Top alternatives to X”)
Use-case fit (“Best for teams like…”)
Local context (if location-based): coverage area, response times, regulations, seasonality
Proof: reviews, case studies, logos, compliance badges, uptime/status
Media: screenshots, short demo video, map, charts
Design rule: A module should exist because it satisfies intent or increases trust—not because you need more words.
Conditional sections & fallbacks: how to make every page meaningfully different
Conditional logic is your anti-thin-content system. Instead of printing the same blocks everywhere, you set rules for when a section appears, what it says, and what happens when data is missing.
Use a three-tier approach:
Hard requirements (publish gate): If these fields are missing, the page should not be indexable yet.
Entity name / primary keyword target
Unique attributes that justify the URL (e.g., location data, feature coverage, pricing tier, integration availability)
At least one differentiating dataset-driven module (table/list/comparison) with a minimum item count
Conditional enhancements (render if present): Only show sections when you have real supporting data.
Show “Pricing” only if you have current pricing fields and a last-updated timestamp
Show “Availability in {city}” only if you have verified coverage or service radius
Show “Top alternatives” only if you have a comparison set that’s not circular and not identical across pages
Smart fallbacks (avoid empty sections): If a data field is missing, switch to a different module that still helps the user.
No reviews? Show “What to look for” buyer checklist (templated, but tailored with 1–2 data-driven points)
No local stats? Show “Typical timelines + factors that affect cost” (with any available benchmarks)
Not enough items for a table? Show a curated subset + a CTA to get a full list (or don’t publish/index)
Practical uniqueness test: If you place 10 pages side-by-side, can a user spot why each one exists within 5 seconds—based on the data modules, not the wording? If not, the template is too flat.
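The three-tier logic is straightforward to express as a page builder: hard gates decide indexability, conditional modules render only when data exists, and fallbacks replace empty sections. Module names below are examples, not a fixed taxonomy.

```python
# Three-tier conditional rendering: publish gate, enhancements, fallbacks.

def build_page(row: dict) -> dict:
    # Tier 1: hard publish gate (minimum item count in the primary module)
    if not row.get("name") or len(row.get("listings", [])) < 3:
        return {"indexable": False, "modules": []}

    modules = ["hero", "main_table"]

    # Tier 2: conditional enhancement, rendered only with real data
    if row.get("pricing"):
        modules.append("pricing")

    # Tier 3: smart fallback; no reviews means a buyer checklist instead
    modules.append("reviews" if row.get("reviews") else "buyer_checklist")

    return {"indexable": True, "modules": modules}
```

Because the module list varies with the data, ten pages built this way pass the side-by-side uniqueness test on structure alone.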
On-page SEO at scale: make templates “SEO-safe” by default
Programmatic pages fail when teams treat on-page SEO like a one-time checklist. At scale, you need rules—so every URL ships with consistent fundamentals and fewer edge-case mistakes.
Title tags: Use a stable pattern, but vary with data where it matters.
Good: “{Service} in {City} (Pricing, Availability & Top Providers) | {Brand}”
Avoid: repeating the exact same suffix/prefix across thousands of pages with no informational lift
Meta descriptions: Treat as a CTR lever, not a summary.
Include one data-backed detail when possible (count, price range, update date, score).
Headings (H2/H3): Make modules scannable and intent-aligned.
Prefer: “Pricing in {City}”, “Best {Category} for {Use Case}”, “Compare {X} vs {Y}”
Avoid: generic headings like “Overview” repeated everywhere unless the content truly differs
Media & alt text: Use charts, maps, screenshots to add non-text value.
Alt text should describe the asset and context (“Pricing comparison chart for {Service} in {City}”).
URL & indexability rules: Keep URLs human-readable and stable; avoid infinite combinations.
Example: /{service}/{state}/{city}/ (not /{service}?city={city}&sort=rating&minPrice=…)
One more safeguard: create a “keyword-to-template mapping” so each query pattern has one primary page type. That reduces accidental overlap where two templates chase the same intent and dilute rankings.
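Both safeguards (title rules and the keyword-to-template mapping) are small enough to live in code. The pattern names, brand, and title format below are assumptions modeled on the examples above.

```python
# One primary template per query pattern, plus a data-varied title rule.
PATTERN_TO_TEMPLATE = {
    "service+location": "location_page",
    "x-vs-y": "comparison_page",
}

def build_title(service, city, brand, provider_count=None):
    # Vary the stable pattern with a data-backed detail when it exists.
    detail = f" ({provider_count} Providers Compared)" if provider_count else ""
    return f"{service} in {city}{detail} | {brand}"
```

Generating titles from a rule means a data refresh updates thousands of titles at once, and the "no informational lift" failure mode becomes a code review comment instead of a site-wide audit.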
Conversion elements: bake conversion rate wins into the template
Ranking is pointless if the page can’t convert. The best pSEO pages treat conversion as a module system too—so the template guides users to a next step without turning the page into an ad.
Primary CTA (above the fold): One action tied to intent (“Get quotes”, “Compare plans”, “Book a demo”, “See providers”).
Contextual CTAs inside modules: Place actions where users make decisions (e.g., below a comparison table, next to filters, after “Top picks”).
Trust signals: Stats, logos, verification badges, methodology notes, review counts, “last updated” timestamps.
Comparison tables: Convert well because they reduce choice overload.
Include 3–7 columns max, highlight differences, and keep “best for” or “ideal for” visible.
Filters (with guardrails): Great UX, but don’t let them create index bloat.
Let users filter client-side or via non-indexable parameters, and only index curated “filter states” that have proven search demand.
Lead capture for high-intent patterns: If the query is transactional, add a lightweight form, but keep it optional and fast.
Template decision that matters: Don’t force the same CTA everywhere. A “{Brand} alternatives” page likely needs a different offer than a “{Service} in {City}” page. Same system, different conversion modules.
A simple template QA rule: require “data weight,” not word count
Teams often try to solve thin content by inflating copy. That’s the wrong lever in pSEO. Instead, enforce a minimum “data weight” per page—i.e., the amount of unique, query-relevant information.
Minimum unique fields per URL (e.g., 6–10 attributes that can vary meaningfully)
Minimum primary module payload (e.g., at least 8 listings, 5 comparisons, or 1 chart + 1 table)
At least one unique differentiator (score, pricing band, availability, feature coverage, “best for”)
No empty sections (if data missing, swap module or hide it)
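As a sketch, these gates can be enforced in code before a page is marked indexable. The `PageData` shape, field names, and thresholds below are illustrative assumptions, not a required schema:

```python
from dataclasses import dataclass, field

# Hypothetical page record; field names are illustrative, not a real CMS schema.
@dataclass
class PageData:
    attributes: dict                                   # entity attributes that vary per page
    listings: list = field(default_factory=list)
    comparisons: list = field(default_factory=list)
    charts: int = 0
    tables: int = 0
    differentiators: list = field(default_factory=list)  # e.g. ["pricing_band", "score"]

def passes_data_weight(page: PageData, min_unique_fields: int = 6) -> bool:
    """Return True only if the page clears the 'data weight' bar."""
    # 1) Minimum unique, non-empty attributes
    populated = [v for v in page.attributes.values() if v not in (None, "", [])]
    if len(populated) < min_unique_fields:
        return False
    # 2) Minimum primary module payload: listings OR comparisons OR chart + table
    has_payload = (len(page.listings) >= 8
                   or len(page.comparisons) >= 5
                   or (page.charts >= 1 and page.tables >= 1))
    if not has_payload:
        return False
    # 3) At least one unique differentiator
    return len(page.differentiators) >= 1
```

Pages that fail the gate stay noindex (or unpublished) until the data backlog catches up.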
Build your templates this way, and you’ll ship landing pages that look less like “generated pages” and more like a scalable product experience—useful to users, readable to crawlers, and designed to convert.
Structured data and metadata: make pages eligible for rich results
In programmatic SEO, structured data and metadata are your consistency layer. They help search engines understand thousands of similar pages without guessing what each one represents, and they can unlock richer SERP treatments (FAQs, breadcrumbs, product details, lists). Done right, this also creates cleaner architecture signals at scale—so Google can crawl, index, and categorize your pages predictably.
Schema types that usually fit pSEO landing pages
Don’t “schema everything.” Pick types that match the page’s job and what’s visibly on the page. The most common winners for programmatic landing pages:
BreadcrumbList: reinforces hierarchy (Hub → Category → Entity) and often improves breadcrumb display in SERPs.
ItemList: ideal for directory/list pages (“X in Y”, “Best X for Y”) where you show a list of entities with consistent fields.
FAQPage: works when your FAQs are genuinely helpful and present on-page (not hidden). Can improve CTR when eligible.
Product (or SoftwareApplication for SaaS): for pages that represent a specific offer with real attributes (pricing, features, reviews).
LocalBusiness (and subtypes): for location/service pages that represent a real business presence with NAP data.
Guardrail: Rich results are not guaranteed. But accurate schema markup increases eligibility and reduces ambiguity—especially when you’re publishing at volume.
Generate JSON-LD from your dataset (safely)
Use JSON-LD (not microdata) because it’s easier to generate from a dataset and less fragile in templated environments. Treat your schema generation like code: deterministic inputs, validation, and strict fallbacks.
Map your data model to schema fields. Start with the minimum viable fields you can populate reliably across most pages (e.g., name, url, image, aggregateRating only if you truly have it).
Only output fields you can stand behind. If a value is missing, omit it—don’t invent it. “Unknown” or fabricated values can create trust and compliance problems.
Make it match what users see. If schema says you have 120 reviews or a $49 price, those should be visible and consistent on-page.
Validate continuously. Run template-level tests with Google’s Rich Results Test / Schema Validator and bake checks into your QA process (more on that later).
At scale, the biggest risk isn’t “no schema”—it’s wrong schema replicated across thousands of URLs. Your fix is conditional logic: output a schema block only when required fields exist and pass basic sanity checks.
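A minimal sketch of that conditional logic for an ItemList block, assuming a simple item dict shape (`name`, `url`) coming from your dataset; the validation rules are illustrative:

```python
import json

def item_list_jsonld(page_url: str, items: list):
    """Emit ItemList JSON-LD only when required fields exist and pass sanity checks.
    Returns None (i.e., render no schema block) rather than inventing values."""
    if not items:
        return None
    for it in items:
        # Require a name and an absolute URL on every item
        if not it.get("name") or not str(it.get("url", "")).startswith("http"):
            return None
    data = {
        "@context": "https://schema.org",
        "@type": "ItemList",
        "url": page_url,
        "itemListElement": [
            {"@type": "ListItem", "position": i + 1,
             "name": it["name"], "url": it["url"]}
            for i, it in enumerate(items)
        ],
    }
    return json.dumps(data, ensure_ascii=False)
```

The key design choice: missing or malformed data suppresses the whole block instead of shipping partial schema across thousands of URLs.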
Breadcrumbs, hreflang (when needed), and canonical rules
Structured data works best when it aligns with clean metadata signals. Three pieces matter most in pSEO:
Breadcrumbs (UI + schema): Show breadcrumbs on-page and mirror them with BreadcrumbList JSON-LD. This reinforces your hub-and-spoke structure and reduces “orphaned page” vibes.
hreflang (only if you truly have international variants): If you publish the same template across countries/languages, implement hreflang with correct self-referencing and return tags. If you’re not intentionally targeting multiple locales, skip it—half-implemented hreflang is worse than none.
Canonical tags: These are non-negotiable in pSEO. Define rules upfront so you don’t accidentally create thousands of near-duplicates competing with each other.
Self-canonical for the primary, indexable version of the page.
Canonical to the parent when a child variant doesn’t add unique value (e.g., minor parameter changes).
Canonical + noindex for “helper” URLs you need for UX but don’t want indexed (common in filtered/faceted experiences).
Rule of thumb: if two URLs satisfy the same intent, pick one to index. Everything else should consolidate via canonical tags (and often noindex), not “hope Google figures it out.”
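These rules are easiest to keep consistent when they live in one function applied to every generated URL. A sketch, assuming a single allowed parameter (`page`) and treating every other parameterized URL as a non-indexable helper; the allowed set is an assumption to tune:

```python
from urllib.parse import urlsplit, parse_qs

# Illustrative rulebook: which query parameters may appear on indexable URLs.
ALLOWED_PARAMS = {"page"}

def canonical_and_robots(url: str):
    """Return (canonical_url, robots_directive) for a generated URL."""
    parts = urlsplit(url)
    clean = f"{parts.scheme}://{parts.netloc}{parts.path}"
    if not parts.query:
        return url, "index,follow"        # primary page: self-canonical
    params = set(parse_qs(parts.query))
    if params <= ALLOWED_PARAMS:
        return clean, "index,follow"      # e.g. pagination: canonical to the clean URL
    return clean, "noindex,follow"        # filtered/faceted helper URL: consolidate
```

Running every URL through one rulebook like this is what prevents "surprise" canonicals when templates evolve.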
Sitemaps at scale: index, segmented sitemaps, and faster debugging
If you’re generating hundreds or thousands of URLs, your sitemap strategy becomes an ops tool—not a checkbox. The goal is simple: help discovery, control indexation, and isolate issues by template.
Use segmented sitemaps (by template or page type): e.g., /sitemap-locations.xml, /sitemap-alternatives.xml, /sitemap-comparisons.xml. This makes it obvious which cohorts are indexing and which are failing.
Create a sitemap index that references each segmented file. This scales cleanly as you add new templates.
Only include canonical, indexable URLs. If a URL is noindex, parameterized, or non-canonical, keep it out of your sitemaps. Sitemaps are a “please index this” list—treat them like one.
Roll out in cohorts. Add URLs to sitemaps in batches so you can measure impact and catch systemic template problems early (before you flood the index with low-value pages).
In practice, segmented sitemaps plus consistent canonical tags are what keep programmatic launches from turning into index bloat.
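A minimal generator for segmented sitemaps plus a sitemap index, assuming each page record carries `url`, `canonical`, `indexable`, and `template` fields (an illustrative data model):

```python
from xml.sax.saxutils import escape

def sitemap(urls: list) -> str:
    body = "".join(f"<url><loc>{escape(u)}</loc></url>" for u in urls)
    return ('<?xml version="1.0" encoding="UTF-8"?>'
            '<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">'
            f"{body}</urlset>")

def sitemap_index(sitemap_urls: list) -> str:
    body = "".join(f"<sitemap><loc>{escape(u)}</loc></sitemap>" for u in sitemap_urls)
    return ('<?xml version="1.0" encoding="UTF-8"?>'
            '<sitemapindex xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">'
            f"{body}</sitemapindex>")

def build_segments(pages: list) -> dict:
    """One sitemap per template; include only canonical, indexable URLs."""
    segments = {}
    for p in pages:
        if p.get("indexable") and p.get("canonical") == p.get("url"):
            segments.setdefault(p["template"], []).append(p["url"])
    return {t: sitemap(urls) for t, urls in segments.items()}
```

Note how the filter mirrors the rules above: noindex and non-canonical URLs never enter a sitemap file in the first place.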
Quick implementation checklist (so you don’t scale mistakes)
Schema markup is present and matches visible content on every indexable template.
JSON-LD is generated from real dataset fields (no invented ratings/prices/reviews).
Conditional schema sections only render when required fields exist.
Breadcrumbs are consistent in UI, internal links, and schema.
Canonical tags follow one rulebook across templates (no “surprise” canonicals).
Sitemaps are segmented by template/cohort and include only canonical indexable URLs.
Bottom line: pSEO wins when search engines can understand your pages as a well-structured system. Schema and metadata are how you communicate that system—cleanly, repeatedly, and without drama.
Internal linking: the difference between indexed and invisible
In programmatic SEO, internal linking isn’t a nice-to-have—it’s how Google discovers your pages, decides which ones matter, and allocates crawl budget. You can generate 10,000 landing pages, but without a deliberate site architecture, you’ll end up with a familiar outcome: a small % indexed, an even smaller % ranking, and a long tail of “Crawled – currently not indexed.”
The goal is simple: create a system that (1) makes every valuable page reachable within a few clicks, (2) funnels authority to the pages that should rank, and (3) scales without turning your site into a spammy lattice of exact-match anchors.
Hub-and-spoke architecture for pSEO (how to build topic hubs that actually work)
pSEO pages don’t want to live as orphans. They want to live inside topic hubs that explain the category and then route users (and crawlers) to the right specific page.
A clean hub-and-spoke model looks like this:
Hub (category / directory root): “Email marketing integrations”
Sub-hub (facet group or segment page): “Email marketing integrations for Shopify” / “Integrations with real-time sync”
Spokes (programmatic landing pages): “Klaviyo + Shopify integration”, “Mailchimp + Shopify integration”, etc.
Practical rules that keep this scalable:
Every generated page must belong to exactly one hub. If it fits multiple hubs, you’ll create duplicates and split signals.
Hubs should link down; spokes should link sideways and back up. Hubs distribute authority. Spokes reinforce topical neighborhoods and help discovery.
Keep click depth tight: aim for Hub → Sub-hub → Spoke (2–3 clicks from a crawl entry point like your nav, sitemap, or a strong hub page).
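Click depth is easy to audit automatically with a breadth-first search over your internal link graph. A sketch, where the `links` adjacency dict is an assumed export of your site's link data:

```python
from collections import deque

def click_depths(links: dict, entry: str) -> dict:
    """BFS over the internal link graph; depth = clicks from the entry point."""
    depths = {entry: 0}
    queue = deque([entry])
    while queue:
        page = queue.popleft()
        for target in links.get(page, []):
            if target not in depths:
                depths[target] = depths[page] + 1
                queue.append(target)
    return depths

def too_deep(links: dict, entry: str, max_depth: int = 3) -> set:
    """Pages unreachable, or deeper than max_depth clicks, from the entry."""
    depths = click_depths(links, entry)
    all_pages = set(links) | {t for ts in links.values() for t in ts}
    return {p for p in all_pages if depths.get(p, max_depth + 1) > max_depth}
```

Run this per hub during rollout: anything in the `too_deep` set is effectively an orphan for crawl purposes.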
If you’re building this at scale, it helps to document link rules the same way you document your templates. For more on operationalizing link rules and keeping anchors consistent across hundreds of pages, see Scale internal linking safely across hundreds of pages.
Contextual links vs. faceted navigation links (and how crawl traps happen)
Most pSEO sites accidentally rely on faceted navigation for discovery (filters like “price,” “rating,” “color,” “remote,” “2026,” etc.). That’s risky because faceting multiplies URLs, creates near-duplicates, and wastes crawl budget on pages you never intended to rank.
Use both link types—but with different jobs:
Contextual links (editorial/module links): Pass meaning and relevance. These are your primary SEO links.
Faceted navigation links (filters/sorts): Improve UX. Only a small subset should become indexable landing pages.
Guardrails that prevent crawl traps:
Decide which facets are “SEO facets” vs. “UX facets.” SEO facets get dedicated, curated URLs. UX facets stay non-indexable (or are blocked from generating infinite combinations).
Limit indexable combinations. Index “/integrations/shopify/” and “/integrations/real-time-sync/” if they map to real demand—don’t index “/integrations/shopify/?sort=price&page=9&trial=true”.
Use pagination carefully. Paginated hub lists are fine for users, but don’t let them become your only discovery path. Your strongest spokes should be linked from the first page of a hub (or from curated modules).
Automated anchor text rules (how to scale without over-optimizing)
At pSEO scale, the easiest mistake is repeating the same exact-match anchors everywhere (“best accounting software for startups” linking to that page from 400 URLs). That pattern looks manufactured, and it’s not great for users either.
Instead, define anchor text like you’d define a schema: a small set of allowed formats, chosen dynamically based on context.
Anchor text playbook (simple and safe):
Primary format (majority of links): Brand/entity name or page label (“Klaviyo”, “Shopify integration”, “Austin, TX”)—clean, natural, not stuffed.
Secondary format (some links): Partial-match anchors that describe the relationship (“Integrates with Shopify”, “Alternative to Intercom”).
Context format (sprinkled in): Sentence-style anchors (“See supported Shopify triggers and actions”).
Implementation rule: For any given target page, cap exact-match anchors to a small percentage across the site. In templates, randomize (or conditionally choose) anchors based on module type: tables use concise labels; FAQs use natural language; “Related” modules use entity names.
Also: don’t link everything from everywhere. A good default is 3–8 contextual links per page (not 30), chosen by relevance and hierarchy—not by “we can generate it.”
Breadcrumbs + related pages + “people also compare” modules (your scalable link engine)
The most reliable pSEO linking system is a combo of three modules that can be generated from your dataset and your taxonomy:
Breadcrumbs (always on): These reinforce site architecture, reduce click depth, and help Google understand hierarchy.
Example: Home → Integrations → Shopify → Klaviyo + Shopify
Rule: Breadcrumbs should map to real hub/sub-hub pages (not invented labels with no destination).
Related pages module (entity neighbors): Link to “similar” or “adjacent” entities based on shared attributes.
Example: On “Klaviyo + Shopify,” link to other “Email tools that integrate with Shopify.”
Rule: Prefer a tight relevance threshold (shared category + shared key attribute) over “popular pages.”
People also compare / Alternatives (intent neighbors): This captures comparison intent and distributes authority to high-converting pages.
Example: On “Intercom alternatives,” link to “Zendesk vs Intercom,” “Help Scout vs Intercom,” and the primary “Customer support software” hub.
Rule: Keep it curated by rules (top N by similarity, category, or user behavior), not by generating every possible comparison.
These modules do double duty: they improve UX and they create predictable crawl paths so your important pages aren’t dependent on a sitemap alone.
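The related-pages rule (same category plus at least one shared key attribute, top N by overlap) can be sketched as a small selection function; the dict shape is an assumed data model:

```python
def related_pages(current: dict, candidates: list, top_n: int = 5) -> list:
    """Pick entity neighbors: same category AND >= 1 shared key attribute,
    ranked by attribute overlap (tight relevance, not 'popular pages')."""
    scored = []
    for c in candidates:
        if c["slug"] == current["slug"] or c["category"] != current["category"]:
            continue
        overlap = len(set(current["attributes"]) & set(c["attributes"]))
        if overlap >= 1:
            scored.append((overlap, c["slug"]))
    scored.sort(key=lambda t: (-t[0], t[1]))  # most overlap first, stable by slug
    return [slug for _, slug in scored[:top_n]]
```

The same scoring idea extends to "people also compare" modules by swapping the attribute set for comparison intent signals.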
A simple internal linking checklist for pSEO launches
No orphan pages: every landing page is linked from at least one hub/sub-hub and at least one “related” module.
Click depth control: priority pages reachable in ≤3 clicks from a hub or primary navigation path.
Anchor governance: exact-match anchors are the exception, not the default.
Facet index rules: only pre-approved facet URLs are indexable; everything else stays non-indexable and non-crawlable as needed.
Template consistency: breadcrumbs + related module render reliably (and don’t break when data is missing).
If you treat internal linking as a first-class system—not a last-minute plugin—you’ll see the difference quickly: faster discovery, more stable indexation, and better distribution of rankings across the long tail (instead of a few “lucky” pages doing all the work).
Scalable publishing workflow (from generation to live)
Programmatic SEO only “scales” if your publishing system scales with it. The fastest way to kill a pSEO project is to generate 5,000 URLs and treat publishing like a button click—then discover weeks later you shipped duplicates, thin pages, broken canonicals, and index bloat.
What you want instead is an SEO workflow that behaves like engineering: staged releases, clear owners, QA gates, and monitoring by template and cohort—so you can expand confidently without accumulating invisible technical debt.
1) Staging → review → publish: approvals and roles
Separate “page creation” from “page release.” In pSEO, your CMS or database can contain tens of thousands of generated pages, but indexation should only happen after a page passes the gates.
Data owner: validates entity completeness, freshness, and edge cases (missing fields, conflicting values, bad geos).
SEO owner: signs off on template rules (titles, canonicals, schema, internal linking, indexability).
Content/reviewer: spot-checks a statistically meaningful sample for usefulness, intent match, and “reads like a real page.”
Engineering/Web: ensures performance, rendering, routing, and crawl controls (facets, pagination, parameters).
Operationally, treat each template like a product feature release:
Generate to staging (or a non-indexable environment) using the latest dataset snapshot.
Run automated QA (duplicates, thinness rules, canonical logic, schema validation).
Manual sampling: review the top entities, the long-tail entities, and known “weird” edge cases.
Approve + promote only the cohort that passed, not the entire dataset.
2) Batch launches: publish in cohorts to validate impact
Cohort-based publishing keeps you honest. If a template has an issue, you want to find it after 50 pages—not after 50,000.
A practical rollout sequence:
Cohort 1 (≈50 pages): pick a representative mix (high-volume head terms + mid-tail + long-tail). Validate rendering, intent match, and early crawl/index behavior.
Cohort 2 (≈200 pages): expand coverage. Watch for duplication patterns you didn’t see at 50 (especially intros, FAQs, and repeating modules).
Cohort 3 (≈500 pages): stress test internal linking, sitemap ingestion, and crawl load.
Scale to 1,000+: only after you can predict outcomes at smaller scale (indexation rate, quality signals, conversions).
Tip: cohort by template + pattern (e.g., “service + city” vs “city + category”) so you can isolate what’s working. If one pattern underperforms, you can pause it without stalling the entire program.
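A sketch of cohort construction for this rollout sequence; shuffling with a fixed seed keeps each cohort a representative, reproducible sample of the whole URL universe rather than just the head terms (the sizes mirror the sequence above):

```python
import random

def build_cohorts(urls: list, sizes=(50, 200, 500), seed: int = 7) -> list:
    """Split a URL universe into rollout cohorts of increasing size."""
    rng = random.Random(seed)     # deterministic, so reruns produce the same cohorts
    pool = urls[:]
    rng.shuffle(pool)             # mix head, mid-tail, and long-tail URLs
    cohorts, start = [], 0
    for size in sizes:
        cohorts.append(pool[start:start + size])
        start += size
    cohorts.append(pool[start:])  # the remainder ships only after earlier cohorts validate
    return cohorts
```

In practice you would build cohorts per template + pattern (as the tip suggests), so a failing pattern can be paused independently.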
3) Indexation strategy: control what Google sees (and when)
Your indexation plan is your safety plan. The goal is to make it easy for Google to discover your best pages, while preventing low-value or unfinished pages from polluting your site’s quality footprint.
Use “noindex until ready” for generated pages. Create pages in production, but keep them noindex until they pass your QA gates (and have the required data modules populated). Then flip them to indexable in controlled cohorts.
Sitemaps: publish only indexable URLs in XML sitemaps. Keep them segmented by template or pattern (e.g., /sitemap-service-city.xml, /sitemap-alternatives.xml) so you can troubleshoot and roll back quickly.
Canonical rules: ensure every page self-canonicals unless there’s a deliberate consolidation target. Don’t canonical thousands of pages to a hub as a “cleanup” hack—fix the page set instead.
Parameter/facet controls: if filters generate URL variants, define which are indexable and block the rest (noindex, robots rules, or parameter handling). Otherwise, you’ll create crawl traps that drown out your real landing pages.
Submission: use Google Search Console to submit your segmented sitemaps and monitor ingestion. Avoid relying on “Request indexing” as a scaling tactic—it doesn’t scale.
What to look for in Google Search Console during rollout:
Indexing > Pages: spikes in “Crawled - currently not indexed” can indicate thinness, duplication, or low perceived value at scale.
Sitemaps report: track “Discovered URLs” vs “Indexed URLs” per sitemap to understand which templates Google trusts.
Crawl stats: sudden crawl spikes often point to faceted navigation traps, infinite URL spaces, or redirect loops.
4) Monitoring: templates, data drift, and performance alerts
Publishing is the start of the feedback loop, not the finish line. pSEO pages can rot quietly as data changes, entities go out of date, or templates evolve.
Set monitoring at three levels:
Template-level: performance and quality by template (indexation rate, impressions, CTR, average position, engagement, conversions).
Cohort-level: each rollout batch as its own experiment (did cohort 2 index faster than cohort 1 after template changes?).
Entity/data-level: alerts when critical fields go missing, stale, or contradictory (e.g., pricing becomes null; location mismatch; “last updated” too old).
Practical alerts to implement:
Indexation drop alert: if indexed URLs in a template sitemap fall by X% week-over-week, investigate duplication, canonicals, or quality signals.
Thin-page alert: if required fields coverage drops below a threshold (e.g., <80% pages have the primary data module populated), pause publishing until data is fixed.
Data drift alert: if key attributes change drastically (e.g., category counts, availability, ratings), re-render affected pages and re-check on-page claims.
Internal link coverage alert: ensure each new page has a minimum number of inbound internal links from hubs/related modules—otherwise it’s technically “published” but practically invisible.
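Two of these alerts as a sketch; the thresholds (10% week-over-week indexation drop, 80% field coverage) mirror the examples above and should be tuned per site:

```python
def indexation_drop_alert(indexed_by_week: list, threshold: float = 0.10) -> bool:
    """Fire when indexed URLs for a template sitemap fall week-over-week
    by more than the threshold."""
    if len(indexed_by_week) < 2 or indexed_by_week[-2] == 0:
        return False
    drop = (indexed_by_week[-2] - indexed_by_week[-1]) / indexed_by_week[-2]
    return drop > threshold

def thin_page_alert(pages: list, field: str = "primary_module",
                    min_coverage: float = 0.80) -> bool:
    """Fire when required-field coverage across a cohort drops below threshold;
    the response is to pause publishing until the data is fixed."""
    if not pages:
        return True
    covered = sum(1 for p in pages if p.get(field))
    return covered / len(pages) < min_coverage
```

Both checks run off data you already collect (Search Console exports and your page dataset), so they slot into a nightly job easily.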
If you’re building this as an automation pipeline, treat each step like a deployable stage (with logs and rollback): keyword set → dataset snapshot → generation → QA → cohort publish → sitemap update → monitoring. For a more automated, repeatable approach, see Automate the brief-to-publish workflow for pSEO cohorts.
QA checklist: prevent the most common pSEO failures
Most programmatic SEO projects don’t fail because “templates don’t work.” They fail because pSEO QA wasn’t treated like a real release process—so you ship duplicate content, thin content, broken canonicals, crawl traps, or factually wrong pages at scale.
Use the checklist below as a copy/paste preflight. The goal is simple: automate what you can, manually sample what you must, and only index what meets your bar.
0) Preflight rules (set these before you generate anything)
Define a “publishable” page spec: required data fields, required modules, and what makes the page uniquely valuable (not just “has words”).
Set your QA gates: what triggers an automatic block (e.g., missing required fields, 4xx/5xx, duplicate title tags, broken canonicals, low similarity uniqueness score).
Use cohorts: ship 50–200 pages first, review results and defects, then expand. Scaling broken pages just creates bigger cleanup later.
Indexation safeguard by default: new templates/pages start as noindex until they pass QA + manual sampling.
1) Duplicate content detection (near-duplicates, not just exact copies)
Duplicate content in pSEO rarely looks identical. It’s usually near-duplicate: same intro, same headings, same “benefits,” same FAQs—only the entity name changes.
Near-duplicate similarity check (automated)
Run similarity scoring across pages in the same template set (e.g., SimHash/MinHash, shingle-based similarity, or embedding similarity).
Fail threshold (example): flag pages that are > 80–90% similar in main content area (exclude nav/footer, but include intro + body modules).
Cluster duplicates by “most similar to” so you can fix patterns (usually a templated intro or repeated FAQ block).
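A minimal shingle-based version of that check, using Jaccard similarity over k-word shingles (SimHash/MinHash scale better across large sets, but this shows the core idea):

```python
import re

def shingles(text: str, k: int = 5) -> set:
    """k-word shingles of the main content area (nav/footer excluded upstream)."""
    words = re.findall(r"[a-z0-9]+", text.lower())
    if len(words) < k:
        return {tuple(words)}
    return {tuple(words[i:i + k]) for i in range(len(words) - k + 1)}

def similarity(a: str, b: str, k: int = 5) -> float:
    """Jaccard similarity between two pages' shingle sets."""
    sa, sb = shingles(a, k), shingles(b, k)
    return len(sa & sb) / len(sa | sb) if sa | sb else 1.0

def near_duplicates(pages: dict, threshold: float = 0.85) -> list:
    """Return page pairs above the similarity threshold, flagged for review."""
    slugs = sorted(pages)
    return [(x, y) for i, x in enumerate(slugs) for y in slugs[i + 1:]
            if similarity(pages[x], pages[y]) >= threshold]
```

Swap-a-word pages (same copy, different city token) score very high here, which is exactly the pattern you want clustered and fixed at the template level.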
Template intro + boilerplate controls
Cap templated intros (e.g., 1–3 short sentences) and force the page to earn uniqueness via data modules (tables, attributes, comparisons, local details).
Use conditional sections: only render blocks if the page has enough data to make the block specific.
Require at least X unique fields per entity (see “Thin page rules” below).
Unique SERP surface area checks
Ensure title tags and H1s are not duplicated across the set.
Ensure meta descriptions aren’t identical boilerplate (not a ranking factor, but a strong QA smell).
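A quick duplicate-title audit, normalizing case and whitespace so near-identical titles get caught too (the same function works for H1s):

```python
def duplicate_titles(pages: list) -> dict:
    """Map each duplicated title (normalized) to the URLs sharing it."""
    by_title = {}
    for p in pages:
        key = " ".join(p["title"].lower().split())  # collapse whitespace, lowercase
        by_title.setdefault(key, []).append(p["url"])
    return {t: urls for t, urls in by_title.items() if len(urls) > 1}
```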
Canonical and duplication logic (technical)
If multiple URLs represent the same entity (e.g., parameterized URLs, alternate paths), define a single canonical target and enforce it.
If you have unavoidable near-duplicates (e.g., “service in city” where your data is sparse), consider consolidating into fewer, stronger pages rather than shipping hundreds of weak variants.
2) Thin content rules (make “indexable” a strict bar)
Thin content in pSEO is usually a data problem: pages render, but they don’t deliver enough information to satisfy intent. The fix is to define a minimum “usefulness spec” and enforce it.
Minimum unique fields (automated gate)
Create a required-field list per template (e.g., for a location page: city, service availability, pricing range, lead time, at least 3 providers, at least 5 FAQs).
Fail pages that don’t meet minimums; keep them in a “data backlog” until enrichment is available.
Track “% of pages passing required fields” as a core QA KPI.
Required modules (render logic gate)
Define which modules must appear for indexable pages (e.g., primary table + comparison block + FAQ + CTA).
If a required module has no data, don’t replace it with fluff—either (a) pull alternative data, (b) show a meaningful fallback, or (c) block indexation.
Intent match check (manual sampling)
For each cohort, manually review 10–20 pages across the distribution (head terms + long tail) and ask: “If I searched this query, does this page complete the job?”
Verify your template matches what Google is rewarding (lists vs. detail pages vs. comparisons). If you didn’t do this earlier, fix it now before scaling.
Content-to-template ratio (quality signal)
Measure the percentage of the page that is templated boilerplate vs. data-driven unique content (tables, entity attributes, user-generated fields, unique FAQs).
If boilerplate dominates, that’s a scaling risk—improve data coverage or shrink boilerplate.
3) Technical SEO QA (the stuff that quietly nukes performance)
Technical SEO issues scale brutally in pSEO. A single wrong rule can create thousands of broken URLs, incorrect canonicals, or index bloat.
Status codes & indexability
All indexable pages return 200. No soft-404 templates. No “empty state” pages indexed.
Noindex until ready: validate that “noindex” is removed only after passing QA gates (and never removed via a manual one-off that your system can’t reproduce).
Confirm robots.txt isn’t blocking critical assets or template paths you want crawled.
Canonical tags (make them boring and consistent)
Self-referential canonicals on standard pages.
Parameterized/faceted variants either (a) canonical to the clean URL, or (b) are noindex, depending on your strategy.
Spot-check canonicals on a sample of pages with edge cases (missing data, special characters, duplicates).
Facets, filters, and pagination (prevent crawl traps)
Define which filter combinations are allowed to be indexed (usually a small, intentional subset).
Block infinite URL combinations: control parameters, enforce canonical rules, and consider disallow rules where appropriate.
For pagination: implement consistent internal links, and make sure page 1 is the canonical primary target unless you have a deliberate alternative.
Sitemaps at scale
Generate segmented sitemaps by template/type (and optionally by cohort/date).
Only include URLs that pass QA and are intended to be indexed.
Monitor sitemap ingestion vs. indexation in Google Search Console to catch systemic problems early.
Redirects and URL hygiene
No redirect chains. No internal links pointing to redirects.
Consistent slug rules (case, hyphens, encoding) to prevent accidental duplicates.
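One shared slug function across all templates is the simplest way to enforce those rules. A sketch (ASCII, lowercase, hyphen-separated):

```python
import re
import unicodedata

def slugify(value: str) -> str:
    """Normalize any label to one canonical slug form, so variants like
    'Austin, TX' and 'austin--tx' can't become two different URLs."""
    # Strip accents to plain ASCII
    value = unicodedata.normalize("NFKD", value).encode("ascii", "ignore").decode()
    # Collapse every run of non-alphanumeric characters into a single hyphen
    value = re.sub(r"[^a-z0-9]+", "-", value.lower())
    return value.strip("-")
```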
Performance & rendering basics
Templates should be fast enough to crawl (watch TTFB and JS rendering bloat).
Verify core content renders without requiring complex client-side hydration that bots might delay or misread.
4) Content QA (especially if you use AI anywhere)
If AI is part of the workflow, treat it like an assistant—not a source of truth. Your QA should prevent confident nonsense from reaching indexed pages.
Hallucination and factual accuracy checks
Only allow AI to write using fields present in your dataset (or citations you can verify). If a claim can’t be grounded, it should not be stated as fact.
Spot-check “high risk” fields: pricing, availability, legal/regulatory statements, guarantees, and competitor comparisons.
Lock down brand names, product terms, and regulated language via a style/compliance dictionary.
Policy and compliance
Add disclaimers where needed (e.g., pricing subject to change; not legal/medical advice).
Ensure you have the rights to display any third-party data (reviews, logos, scraped info).
Readability and formatting consistency
Headings follow one structure per template (H1 once, logical H2/H3 hierarchy).
No “template scars” like leftover placeholders, repeated sentences, broken tables, or empty FAQ accordions.
5) Internal QA process: automate + sample (the scalable combo)
Automated QA gates (run on every build)
Required fields present
Uniqueness score above threshold (near-duplicate detection)
Indexability rules correct (noindex, canonical, robots)
200 status, no broken links, no mixed canonical signals
Schema validation passes (if you output JSON-LD)
Manual sampling (run per cohort)
Review 10–20 pages per cohort across entity types, locations, and long-tail modifiers.
Check intent satisfaction, UX, and “would I trust this?” credibility.
Do at least one “SERP compare” for a few target queries: does your page type match what’s ranking?
Launch gate
Only after passing automated checks + manual sampling do pages move from noindex → indexable and get added to the indexable sitemap.
If defect rate is high, stop the rollout—fix the template/data—then relaunch the cohort.
6) Quick failure-mode cheatsheet (symptom → fix)
Many pages crawled, few indexed → tighten thin-content gates, improve unique data modules, remove low-value URLs from sitemaps.
Pages indexed but don’t rank → intent mismatch, weak internal linking, or templates too similar; enrich data and add differentiators.
Rankings unstable across similar pages → duplicate/near-duplicate clusters and cannibalization; consolidate, canonicalize, or differentiate.
Crawl stats spike → facet crawl traps or parameter explosion; restrict indexable filters, enforce canonicals/noindex.
If you’re automating production, pair this checklist with a formal quality-control workflow. See Keep quality high when scaling SEO content production for a deeper set of review gates and guardrails that prevent pSEO from turning into a thin/duplicate content factory.
Pitfalls to avoid (and what to do instead)
Most programmatic SEO projects don’t fail because “Google hates templates.” They fail because teams scale low-information pages, create uncontrolled URL sprawl, and ship without guardrails—turning a publishing system into an avoidable SEO risk. Here are the failure modes that get pSEO stalled, deindexed, or quietly ignored—plus the safer patterns that keep growth compounding.
1) Doorway pages and “swap-a-word” templates
What it looks like: Hundreds of pages where the only difference is the city/service/feature token, with the same paragraph blocks, same offers, and no meaningful differentiation. This is where pSEO starts to resemble doorway pages—pages built primarily to rank for slightly different queries and funnel users to the same destination.
Why it fails: If the page doesn’t satisfy the query better than a category page (or a competitor), Google has no reason to index it, rank it, or keep it indexed. At scale, near-duplicates also dilute quality signals across the whole set.
Do this instead (guardrails):
Set a “publishability” threshold: a page only goes live/indexable if it meets minimum unique data requirements (e.g., X unique attributes populated, Y items in a list, Z review/price/availability fields).
Use modular, conditional sections: render blocks only when you have data (and hide/replace with a helpful fallback when you don’t). If every page shows the same modules in the same order with the same copy, you’re asking for duplication.
Make the page’s job explicit: is it a directory list, a comparison, a “best for” page, or a location landing page? Mixed intent often produces generic copy that matches nothing.
Prefer “one strong page” over “ten weak pages”: if an entity doesn’t have enough data to stand alone, roll it up into a higher-level hub page.
2) Index bloat: millions of low-value URLs
What it looks like: You generate every possible combination (“CRM for dentists in Austin under $50 with Zapier integration…”) and accidentally create an ocean of URLs—most with no search demand and little content. Google crawls a fraction, indexes fewer, and trust erodes.
Why it fails: Index bloat wastes crawl budget, clutters reporting, and makes it harder for your best pages to be discovered and maintained. It also creates long-term technical debt: more URLs to monitor, update, canonicalize, and internally link.
Do this instead (guardrails):
Start with a capped universe: only generate pages where (a) keyword pattern has validated intent, and (b) you can actually add unique value with your data.
Use cohort rollouts: publish 50–200 pages, validate indexing + performance, then scale. If cohort #1 doesn’t work, cohort #10 won’t magically fix it.
Noindex until “ready”: keep new sets noindex until QA checks pass and internal links are in place; then flip to indexable in controlled batches.
Segment sitemaps: separate by template/type so you can see which sets index and which ones drag.
Prune aggressively: if a page has no impressions/clicks after a reasonable window and can’t be improved, merge it into a hub or remove it.
3) Faceted navigation creating crawl traps
What it looks like: Filters (price, feature, rating, location, “in stock”) create infinite URL variations: ?price=, &sort=, &brand=, multi-select facets, pagination parameters, and combinations that multiply endlessly. Googlebot spends time crawling junk variations instead of your money pages. That’s the classic recipe for crawl traps.
Why it fails: Facets can be great for users—but if you let every combination become indexable, you’ll create duplicate/near-duplicate pages and spiderwebs of internal links that explode your URL count.
Do this instead (guardrails):
Decide which facets deserve indexable landing pages: only “SEO facets” with proven demand (e.g., “CRM with QuickBooks integration”) get clean, static URLs and dedicated templates.
Keep user facets non-indexable by default: use noindex,follow for parameter/faceted URLs; block truly infinite patterns in robots.txt when appropriate (carefully), and avoid linking to parameterized combinations sitewide.
Canonicalize to the nearest meaningful page: facets that don’t deserve indexation should canonicalize back to the core category/hub.
Control internal links: don’t output “all filter combinations” as crawlable links. Link only to curated, high-intent facet pages.
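One way to enforce these facet rules is a small URL classifier that runs in your rendering layer or crawl audits. This is a sketch under assumptions: the curated facet list, "infinite" parameter names, and URL shapes are all hypothetical (and in practice, curated winners should get clean static URLs rather than parameters):

```python
# Sketch of a facet-indexation policy: a curated allowlist of facet
# combinations stays indexable; infinite parameters get blocked; everything
# else is noindex,follow. All names here are illustrative.
from urllib.parse import parse_qs, urlparse

CURATED_FACETS = {("integration", "quickbooks"), ("industry", "nonprofits")}
INFINITE_PARAMS = {"sort", "page", "price_min", "price_max"}

def facet_policy(url: str) -> str:
    """Return 'index', 'noindex_follow', or 'robots_block'."""
    query = parse_qs(urlparse(url).query)
    if INFINITE_PARAMS & set(query):
        return "robots_block"          # truly infinite pattern
    facets = {(k, v) for k, vals in query.items() for v in vals}
    if not facets:
        return "index"                 # core category page, no parameters
    if facets <= CURATED_FACETS:
        return "index"                 # proven-demand facet combination
    return "noindex_follow"            # user-facing filter, keep out of index
```

The same function can drive both the meta robots tag at render time and a QA check in your crawler.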
4) Cannibalization and competing templates
What it looks like: Multiple pSEO page types target the same query space: a “/integrations/zapier/” page, a “/features/zapier-integration/” page, and a “/compare/tool-a-vs-tool-b/” page all trying to rank for overlapping intents. Or city pages overlap with “near me” pages and category hubs.
Why it fails: Google gets mixed signals about which URL should rank, impressions scatter, CTR drops, and you end up “ranking” with the wrong page type (bad conversion) or not ranking at all.
Do this instead (guardrails):
One intent per URL, one primary template per intent: decide the canonical page type for each pattern and enforce it.
Write explicit canonical + redirect rules: when two URL patterns overlap, pick a winner early and consolidate signals.
Enforce keyword-to-template mapping: if a query includes “vs,” it maps to comparison; if it includes “alternative,” it maps to alternatives; etc. Don’t let multiple generators create pages for the same cluster.
Use clustering to prevent overlap at scale: Build cleaner topic clusters to avoid pSEO cannibalization.
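Keyword-to-template mapping is easy to enforce mechanically: an ordered rule list where each query matches exactly one template. A minimal sketch, with illustrative patterns and template names:

```python
# Sketch of keyword-to-template routing: rules are checked in priority
# order and the first match wins, so a query can never map to two
# templates. Patterns and template names are illustrative.
import re

TEMPLATE_RULES = [
    (r"\bvs\.?\b", "comparison"),
    (r"\balternatives?\b", "alternatives"),
    (r"\b(in|near)\b", "location"),
    (r"\bpricing\b", "pricing"),
]

def route_query(query: str) -> str:
    for pattern, template in TEMPLATE_RULES:
        if re.search(pattern, query.lower()):
            return template
    return "category"  # default hub/category template
```

Because the rules are ordered, "CRM alternatives in Austin" deterministically routes to one template instead of two generators both claiming it.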
5) Over-reliance on AI without data grounding
What it looks like: AI-generated intros, “benefits,” FAQs, and comparisons written without strong underlying data—leading to vague content, inaccuracies, and repetitive patterns. The pages read fine at a glance, but they don’t add real information.
Why it fails: At pSEO scale, small inaccuracies become systemic. Thin, generic copy also becomes a footprint—easy for users (and algorithms) to recognize as low-value templating.
Do this instead (guardrails):
Data first, AI second: generate copy only from approved fields (pricing, availability, specs, reviews, supported features, locations served, benchmarks). If the data isn’t there, don’t “invent” it.
Require citations/traceability: every generated claim should map to a dataset field or a vetted source.
Lock down repeatable sections: keep boilerplate minimal and move value into data modules (tables, lists, comparisons) that naturally differ by page.
Add QA gates specific to automation: near-duplicate detection, hallucination checks, and “thin thresholds” before indexation. For more on review gates when automating, see Keep quality high when scaling SEO content production.
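A common way to implement the near-duplicate check is word-shingle Jaccard similarity across rendered page text. This is a minimal sketch; the shingle size and threshold are hypothetical defaults you'd calibrate against known duplicates on your own site:

```python
# Sketch of a near-duplicate QA gate using word-shingle Jaccard similarity.
# Shingle size (3) and threshold (0.8) are illustrative starting points.

def shingles(text: str, n: int = 3) -> set:
    words = text.lower().split()
    return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}

def similarity(a: str, b: str) -> float:
    sa, sb = shingles(a), shingles(b)
    if not sa or not sb:
        return 0.0
    return len(sa & sb) / len(sa | sb)

def is_near_duplicate(a: str, b: str, threshold: float = 0.8) -> bool:
    return similarity(a, b) >= threshold
```

Run it pairwise within a template cohort (or against a sampled baseline) before flipping pages to indexable; pages that only differ by the entity name will score very high.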
Quick “safety rails” checklist (copy/paste for your rollout)
Doorway pages check: Would this page still be useful if the user never clicked to another page? If not, it’s a doorway risk—add data/modules or don’t publish.
Index bloat check: Do we have a hard cap on generated URLs per pattern + a pruning plan? If not, pause scaling.
Crawl traps check: Are facets and parameters prevented from creating infinite indexable URLs?
Cannibalization check: Does every query pattern map to exactly one template + canonical URL rule?
AI grounding check: Can every factual statement be traced to a field or vetted source?
Bottom line: pSEO is safest when it behaves like an engineered system—controlled inputs, controlled templates, controlled indexation. Scale the pages that deserve to exist, and put guardrails everywhere your URL count can explode.
How to add unique value to every generated page
The fastest way to kill a programmatic SEO project is to confuse “scalable” with “interchangeable.” Google doesn’t punish templates. It ignores (or devalues) pages that don’t add unique value beyond swapping a keyword into the same copy.
Use this rule of thumb: if a visitor can’t make a better decision on your page than on a generic listicle or aggregator, your page is at risk of being “thin” even if it’s long.
1) Make your pages data-first (not text-first)
The safest path to non-thin pSEO is data-driven content: original, structured facts that change meaningfully from URL to URL. Your template should “render” insights from the dataset—not generate paragraphs to fill space.
Proprietary metrics and scoring: Create a repeatable score (e.g., “Value score,” “Compliance readiness,” “Fastest setup,” “Shipping reliability”) with a transparent formula and inputs per entity.
Pricing, availability, and plans: Show ranges, minimums, what’s included, or plan-to-feature mappings. Even “starting at” + last updated date is better than nothing.
Performance benchmarks: Time-to-value, average adoption, response times, uptime history—anything you can source responsibly and update on a cadence.
Reviews and sentiment summaries: Aggregate ratings (with source attribution), common pros/cons by category, and “best for” tags derived from review text.
Freshness signals: Add “Last updated,” data source, and update cadence. Stale pSEO pages quietly lose trust.
Operational guardrail: Define “minimum viable data” per template. If a record lacks the required fields (e.g., price + 3 features + 1 differentiator + 1 review), either enrich it before publishing or hold it back (draft/noindex) until it meets the bar.
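That "minimum viable data" bar can be a literal function in your pipeline. A sketch, assuming a hypothetical required-field map per template (the field names and the three-feature rule mirror the example above but are illustrative):

```python
# Sketch of a "minimum viable data" gate per template. Required fields and
# the feature-count rule are hypothetical; tune per page type.

REQUIRED = {"comparison": ["price", "features", "differentiator", "reviews"]}

def publish_status(record: dict, template: str) -> str:
    """Return 'publish' or 'hold' (keep the page draft/noindex)."""
    missing = [f for f in REQUIRED[template] if not record.get(f)]
    if missing:
        return "hold"          # enrich the record before publishing
    if len(record.get("features", [])) < 3:
        return "hold"          # e.g., require at least 3 features
    return "publish"
```

Records that return "hold" go to an enrichment queue instead of the live site, which is exactly the "fails closed" behavior you want.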
2) Use comparison tables that actually help people decide
Most templated pages fail because they’re descriptive, not decisive. Comparison tables are a high-leverage module because they compress decision-making into a scannable format—and the content naturally varies by entity.
Feature-by-feature matrices: Map core features, limits, integrations, support, and security to checkmarks or values (not vague “Yes/No” when you can provide specifics).
Plan comparisons: If you’re comparing products, show each product’s most comparable plan tier with clear assumptions (and link to details).
Use-case fit: Add “Best for” rows based on segments (SMB vs enterprise, regulated industries, team size, budget band).
Explain the why: One short “How we chose these criteria” expandable note prevents the table from feeling arbitrary.
Template tip: Make the table conditional. If you don’t have enough attributes to populate a meaningful matrix, swap the module for a “Key differences” block with the handful of fields you do trust.
3) Add filters and interactive UX—then control indexation hard
Filters can turn a directory into a real product. They also create crawl traps if you let every combination produce indexable URLs. The goal is: users get infinite flexibility, but Google only indexes what matters.
Index only high-intent filter pages: Predefine a small set of filter combinations with proven demand (e.g., “CRM with WhatsApp integration,” “Accounting software for nonprofits”). Everything else stays non-indexable.
Use “static” landing pages for winners: When a filter combo consistently performs, promote it to a dedicated URL with curated copy, unique modules, and internal links.
Prevent crawl bloat: Block endless parameter variations via canonical rules, noindex for low-value facets, and careful handling of pagination/sorting parameters.
Show counts and coverage: “47 providers match” is simple, data-driven value that changes per page and improves trust.
Reality check: If your pSEO project includes faceted navigation, your unique value plan must include a crawl/indexation plan. Otherwise, you’ll scale problems faster than rankings.
4) Build FAQs from real demand (SERPs + support + on-site queries)
Generic FAQs are filler. High-performing pSEO FAQs answer objections and clarify intent. Source them from places where people already ask:
SERP questions: “People also ask,” related searches, and competitor headings for the same query pattern.
Support tickets and sales calls: The questions that block conversion are gold for landing pages.
On-site search and chat logs: If users keep asking “Does this work with X?” or “How long does setup take?” your pages should answer it.
Make FAQs modular and conditional:
Global FAQs: Apply to the template set (e.g., “How do we calculate the score?”).
Category FAQs: Apply to a subset (e.g., “Is this compliant with SOC 2?”).
Entity-specific FAQs: Only appear when your data supports a factual answer.
Important: Don’t invent answers. If AI helps draft, it should only rephrase known facts from your dataset or vetted sources. Unsupported claims are a long-term quality tax.
5) Add localized or contextual insights (without faking “local”)
If you’re generating location pages or market-specific landing pages, avoid the classic “swap-a-city” intro. Instead, add context that truly differs by location/segment:
Benchmarks and norms: Typical price ranges, lead times, adoption patterns, or seasonality by region/industry.
Regulations and requirements: Licensing, data residency, taxes, compliance considerations—only if you can keep it updated.
Availability and coverage: Service areas, delivery windows, languages supported, or regional support hours.
Local proof: Case studies, customer logos, or testimonials filtered by region (when legitimate).
Guardrail: If you can’t source real localized data, don’t pretend. Publish fewer pages that are accurate, not more pages that are fluffy.
6) Add an editorial layer where it matters most
Not every page needs human-written narrative—but your “money pages” often benefit from a light editorial pass that makes them feel earned.
“Best picks” modules: 3–5 recommended options with short, specific reasons tied to the table criteria.
Expert notes: One paragraph: what to watch out for, common pitfalls, or selection tips for that category/use case.
Explain the tradeoffs: “If you need X, you’ll likely compromise on Y.” This is hard to fake and easy to differentiate.
Practical workflow: Apply editorial effort by cohort: ship the template with strong data modules first, then upgrade the top-performing 10–20% of pages with deeper notes, screenshots, examples, and tighter conversion copy.
7) Use a “uniqueness checklist” as a publishing gate
If you want to scale safely, define what “unique value” means in measurable terms. Before a page goes indexable, require it to pass a simple checklist:
Unique data present: at least N entity-specific fields rendered (not just the entity name repeated).
Decision support included: a comparison table, ranked list, or “best for” logic appears where relevant.
Intent satisfied: the page answers the core query without forcing a user to click elsewhere for basics.
FAQs are demand-backed: questions map to SERP/support/on-site data, not generic filler.
No empty modules: conditional sections only show when you have real content; no “TBD” vibes.
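To make the checklist enforceable rather than aspirational, encode each item as a named check and block indexation until none fail. A sketch with hypothetical page fields and thresholds:

```python
# Sketch of the uniqueness checklist as a publishing gate: returns the
# names of failed checks, empty list = safe to index. All field names
# and the minimum-field threshold are illustrative.

def uniqueness_failures(page: dict, min_unique_fields: int = 5) -> list:
    checks = {
        "unique_data": page["unique_field_count"] >= min_unique_fields,
        "decision_support": page["has_comparison_module"],
        "demand_backed_faqs": all(f["source"] in {"serp", "support", "onsite"}
                                  for f in page["faqs"]),
        "no_empty_modules": page["empty_module_count"] == 0,
    }
    return [name for name, passed in checks.items() if not passed]
```

The failure names double as a triage report: you can see at a glance whether a held-back cohort needs data enrichment or template work.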
That’s how you avoid thinness in practice: not by writing longer intros, but by ensuring every generated URL ships with useful, query-matching, data-backed differences that a human can validate.
Measuring success: what to track in the first 30–90 days
Most pSEO projects don’t fail because the pages can’t be generated—they fail because teams scale before they know which templates and cohorts actually work. The fix is simple: measure performance by template and by cohort (the batch you launched together), then iterate before you publish the next 500 URLs.
Below is a 30–90 day measurement framework built for programmatic launches. It focuses on the SEO metrics that tell you whether you’re (1) getting discovered, (2) getting indexed, and (3) earning clicks and business value—without creating index bloat.
Set up reporting so you can see “template performance,” not just site averages
If you only look at domain-wide charts, you’ll miss the signal. pSEO wins and losses are usually concentrated in a few templates, modifiers, or data segments.
Tag every URL with a template ID (in your CMS, route pattern, or a hidden HTML meta field). Examples: /cities/ pages vs /alternatives/ pages vs /compare/ pages.
Launch in cohorts (e.g., 50 → 200 → 500 pages) and keep a changelog: template version, data snapshot date, internal linking rules, schema changes.
Segment in Google Search Console (GSC) using URL prefixes, patterns, or Search Console “Performance” filters so you can report per template/cohort.
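Tagging by route pattern is simple to script against a GSC performance export. A sketch, assuming hypothetical route patterns and a `(path, clicks, impressions)` row shape from your export:

```python
# Sketch of template-level reporting from a GSC performance export:
# classify each URL by route pattern, then aggregate clicks/impressions.
# Route patterns and the row format are assumptions about your site.
import re
from collections import defaultdict

TEMPLATES = [("cities", r"^/cities/"),
             ("alternatives", r"^/alternatives/"),
             ("compare", r"^/compare/")]

def template_of(path: str) -> str:
    for name, pattern in TEMPLATES:
        if re.match(pattern, path):
            return name
    return "other"

def by_template(rows):
    """rows: iterable of (path, clicks, impressions) tuples."""
    totals = defaultdict(lambda: {"clicks": 0, "impressions": 0})
    for path, clicks, impressions in rows:
        t = totals[template_of(path)]
        t["clicks"] += clicks
        t["impressions"] += impressions
    return dict(totals)
```

Adding a cohort column to the rows and grouping on (template, cohort) gives you the per-launch view the rest of this section relies on.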
Days 1–14: Coverage (are your pages discoverable and indexable?)
In the first two weeks, your #1 job is to confirm you didn’t create invisible pages or low-quality index bloat. This is where index coverage and crawl diagnostics matter more than rankings.
Index coverage by template/cohort
Indexed / Submitted (from GSC Sitemaps + Pages reports)
Crawled - currently not indexed and Discovered - currently not indexed counts (watch for spikes after launches)
Duplicate, Google chose different canonical (often a sign of thin/near-duplicate templating)
Sitemap ingestion health
Submitted URLs vs discovered URLs per sitemap (segmented sitemaps make this obvious)
Errors: 404s, soft 404s, redirect chains, “Submitted URL blocked by robots.txt”
Crawl stats signals (early warning)
Sharp crawl spikes + low indexation = Google is sampling and not convinced
High crawl on faceted/filter URLs = you may have a crawl trap
Go/no-go rule: If a cohort has poor index coverage after 10–14 days (and you’re confident the tech is sound), pause the next cohort. Fix the template/data first—don’t “outpublish” the problem.
Days 15–45: Performance (are you earning impressions and clicks?)
Once pages are being crawled and indexed, shift to query matching and SERP traction. You’re not looking for instant head-term rankings—you’re looking for distribution: lots of pages earning long-tail impressions and a subset converting that into clicks.
Impressions by template/cohort
Are impressions concentrated in a few pages, or spread across the set? Spread = pattern validation.
Do some modifiers (e.g., “near me,” “pricing,” “best,” “vs”) outperform others? That’s your roadmap.
CTR by template (and by top query group)
Low CTR with solid impressions often means your title/meta doesn’t match intent, or SERPs demand richer formats (lists, tables, comparisons, local packs).
Compare CTR across templates—this is one of the fastest ways to diagnose “wrong page type” at scale.
Average position is a supporting metric, not the goal
Use it to spot outliers (templates that never break into the top 20), not as the primary KPI.
Long-tail wins
Count how many unique queries each cohort captures (GSC “Queries” export). pSEO should expand query coverage quickly.
Track “Top 3 / Top 10 keyword count” for the cohort over time—growth here is a leading indicator.
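Both long-tail metrics fall out of one pass over a GSC Queries export. A sketch, assuming a `(cohort, query, avg_position)` row shape and a top-10 position cutoff (both assumptions about your export and reporting choices):

```python
# Sketch of long-tail tracking per cohort: count unique queries captured
# and how many of them rank top 10. Row shape is an assumption about
# your GSC export.

def cohort_query_stats(rows):
    """rows: iterable of (cohort, query, avg_position) tuples."""
    stats = {}
    for cohort, query, position in rows:
        s = stats.setdefault(cohort, {"queries": set(), "top10": set()})
        s["queries"].add(query)
        if position <= 10:
            s["top10"].add(query)
    return {c: {"unique_queries": len(s["queries"]),
                "top10_queries": len(s["top10"])}
            for c, s in stats.items()}
```

Plot these two counts per cohort over time; a cohort whose unique-query count grows while top-10 count stays flat usually needs content or link work rather than more pages.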
Go/no-go rule: If you’re getting indexed but impressions stay flat across most pages, you likely have an intent mismatch, insufficient unique value, or internal linking that isn’t surfacing pages effectively. Don’t publish more until you can explain why the current set isn’t being pulled into relevant SERPs.
Days 30–90: Quality + business impact (are these pages worth keeping indexed?)
At 30–90 days, you should start judging whether the project is creating real value. This is where you prevent “lots of pages, little impact” scenarios.
Engagement and satisfaction signals
Scroll depth / time on page (directional, not absolute)
On-page interactions with tables, filters, or comparison modules
Pogo-sticking indicators (e.g., very short sessions paired with low CTR can indicate poor intent match)
Conversions and assisted conversions
Primary conversions: demo requests, trials, lead forms, purchases
Assists: first-touch landings that later convert via branded or other pages
Measure by template/cohort: some pSEO templates are “rank-first” and assist-heavy; others should convert directly.
Quality thresholds to protect the index
Pages with zero impressions after a reasonable window (often 45–90 days depending on authority) should be audited.
Pages with impressions but persistently poor CTR may need intent-aligned rewrites (titles/meta, above-the-fold modules).
Pages that are thin due to missing data should be improved or removed from indexation until they meet your minimum content rules.
The iteration loop: decide what to scale, what to fix, and what to prune
This is the system part: treat templates like products. Each cohort gives you feedback. Your job is to ship a better “v2” before you multiply output.
Diagnose by failure mode (template-level)
Not indexed: canonical/duplication, thinness, weak internal links, or crawl traps.
Indexed but no impressions: intent mismatch, wrong page type, missing entities/attributes in copy and headings.
Impressions but low CTR: titles/meta not aligned, SERP requires comparisons/ratings/pricing, snippet uncompetitive.
Traffic but low conversions: weak CTAs, missing trust signals, no “next step” module, wrong funnel stage.
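The four failure modes above form a simple decision tree you can run over every page or template. A minimal sketch; the 1% CTR threshold is a hypothetical cutoff, not a benchmark:

```python
# Sketch of the failure-mode triage above as a classifier. The CTR
# threshold (1%) is illustrative; calibrate against your own SERPs.

def failure_mode(indexed: bool, impressions: int, clicks: int,
                 conversions: int) -> str:
    if not indexed:
        return "not_indexed"        # canonicals, thinness, links, crawl traps
    if impressions == 0:
        return "no_impressions"     # intent mismatch or wrong page type
    if clicks / impressions < 0.01:
        return "low_ctr"            # titles/meta or snippet uncompetitive
    if clicks and conversions == 0:
        return "low_conversion"     # CTAs, trust signals, funnel stage
    return "healthy"
```

Aggregating the modes by template tells you which fix (template, data, or metadata) would lift the most pages at once.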
Improve the template before the dataset
Small template changes (conditional modules, better intros, stronger comparison tables, clearer FAQs) can lift an entire cohort at once—this is the compounding advantage of pSEO.
Enrich data where it moves rankings
If top-performing SERPs show attributes you don’t have (pricing, availability, reviews, benchmarks), add them to the model. Data quality and coverage are ranking features in disguise.
Prune or de-index underperformers (don’t hoard URLs)
Merge near-duplicates into a canonical “best” page.
Noindex pages that fail minimum quality thresholds until they’re upgraded.
Kill pages that represent fake demand (no impressions) or unusable data (no value).
Only then scale the next cohort
The goal is controlled expansion: each batch should launch with a template that’s measurably better than the last.
A simple 30–90 day pSEO scorecard (copy/paste)
Coverage: Index coverage %, “Not indexed” reasons by template, sitemap ingestion success rate
Performance: Impressions growth, clicks growth, CTR by template, long-tail query count per cohort
Quality: Engagement with core modules, pages failing thin/duplicate rules, pages with zero impressions
Business: Conversions + assisted conversions by template, CTA click-through rate, lead quality notes (if applicable)
Decision: Scale / Fix / Prune (with the template version you’re shipping next)
When you measure like this, “template performance” becomes obvious—and you stop scaling the wrong pattern. That’s how pSEO turns into an engineering-grade growth system instead of a content factory.
Putting it on autopilot: the pSEO system architecture
Programmatic SEO only “scales” when the whole machine scales—not just page generation. If your workflow still relies on ad-hoc keyword picks, messy spreadsheets, and one-off template edits, you’ll ship thousands of URLs… and spend months untangling indexation, cannibalization, and thin-page problems.
Think of pSEO as end-to-end SEO: a repeatable publishing system with clear inputs (keyword patterns + structured data), deterministic outputs (templates + metadata + links), and hard gates (QA + index controls). That’s what autopilot SEO looks like in practice.
Workflow map: keyword research → data → templates → internal links → publish
Here’s the system architecture you’re aiming for. Each step has an “automation surface area” and a quality control point—so you can move fast without breaking trust or crawlability.
Keyword pattern discovery (and intent validation)
Input: seed topics + entities (locations, integrations, industries, competitors, use cases).
Output: validated patterns (e.g., [category] in [location], [tool] alternative, [X] vs [Y]) + a rule for “one intent per template.”
Gate: don’t scale a pattern until you’ve verified the SERP intent and what the ranking pages actually look like.
Use SERP analysis to validate keyword patterns before scaling
Topic assignment (prevent cannibalization before it happens)
Input: the full keyword list for a pattern + your planned URL set.
Output: clusters mapped to a single canonical URL/page type (no competing templates for the same intent).
Gate: collision checks (two URLs targeting the same cluster; the same query mapping to multiple templates).
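The collision gate is a set check over your planned assignments, run before anything is generated. A sketch, assuming a hypothetical `(query, cluster_id, url, template)` assignment table:

```python
# Sketch of a pre-generation collision check: flag clusters claimed by
# more than one URL, and queries mapped to more than one template.
# The assignment-row shape is an assumption about your planning data.
from collections import defaultdict

def find_collisions(assignments):
    """assignments: iterable of (query, cluster_id, url, template) tuples."""
    cluster_urls = defaultdict(set)
    query_templates = defaultdict(set)
    for query, cluster, url, template in assignments:
        cluster_urls[cluster].add(url)
        query_templates[query].add(template)
    return {
        "cluster_collisions": {c: sorted(u) for c, u in cluster_urls.items()
                               if len(u) > 1},
        "template_collisions": {q: sorted(t) for q, t in query_templates.items()
                                if len(t) > 1},
    }
```

An empty result on both keys is the pass condition for this gate; anything else means two generators are about to compete for the same intent.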
Data ingestion + modeling (your content quality engine)
Input: internal DB, spreadsheets, APIs, or verified third-party data.
Output: a clean entity model (entities, attributes, relationships) that determines what becomes a URL and what modules a page can render.
Gate: coverage, freshness, accuracy, and “unique fields” required to make pages meaningfully different.
Template + module system (render pages, not placeholders)
Input: design system + SEO requirements + data model.
Output: modular templates with conditional sections (only show blocks when data exists) and fallbacks that still add value.
Gate: thin-content rules (minimum unique attributes, required primary module, and a “fails closed” behavior—no publish if requirements aren’t met).
Internal linking rules (make pages discoverable and non-orphaned)
Input: site architecture plan (hubs, categories, geographic trees, comparisons) + linking policy.
Output: consistent links via breadcrumbs, hub pages, related modules, and contextual in-content links.
Gate: anchor text governance and limits (avoid auto-spam footprints across thousands of pages).
Publishing + indexation controls (cohorts, not floods)
Input: approved pages from staging.
Output: cohort-based launches (e.g., 50 → 200 → 500) with segmented sitemaps and monitoring by template.
Gate: noindex-until-ready, then flip to index only when the page passes QA and has required modules/data.
QA + monitoring loop (keep the machine healthy)
Input: crawl data, index coverage, template-level performance, data drift signals.
Output: alerts, pruning lists, template improvements, and data enrichment tasks.
Gate: near-duplicate detection, thin-page thresholds, and crawl trap prevention (especially around facets/filters).
Where SEO automation helps most (and where humans must review)
The goal isn’t to remove humans—it’s to use SEO automation for repeatable steps and reserve human attention for judgment calls that affect rankings and trust.
Automate (high leverage)
Pattern expansion: generate combinations, estimate coverage, and flag duplicates/collisions.
Data pipelines: ingest, validate, and log changes (with audit trails and rollbacks).
Page rendering: titles, H1s, schema JSON-LD, tables, and conditional modules from structured data.
Internal links: rules-based hubs, breadcrumbs, “related” modules, and consistent anchors with guardrails.
Publishing ops: cohort scheduling, segmented sitemaps, and noindex/index toggles.
QA checks: status codes, canonicals, empty states, near-duplicate thresholds, and thin-page rule enforcement.
Keep human review (non-negotiable)
SERP reality checks: does this query want a list, a local pack, a comparison, a calculator, or an editorial guide?
Page “job to be done”: what must the page help the user decide or do?
Template UX and conversion logic: CTAs, trust signals, and the clarity of the primary module.
Editorial value layers: explanations, “best picks,” caveats, and nuance that competitors can’t auto-generate.
Compliance and factual accuracy (especially if any AI generation is involved).
If you’re aiming for true content workflow automation, design the workflow so it fails closed: when data is missing or intent is unclear, the system should pause, queue it for review, or keep the URL noindexed—rather than publishing a weak page that bloats the index.
Minimum viable pSEO stack vs. an end-to-end platform
You can build pSEO with a patchwork stack, but the operational overhead tends to sneak up on you. The tradeoff is simple: tools are cheap; coordination is expensive.
Minimum viable pSEO stack (good for proving a pattern)
Keyword research + clustering tool
Airtable/Sheets as the dataset + light validation
CMS templates (or static pages) with basic conditional rendering
Manual internal linking strategy applied in batches
Manual QA sampling + basic crawl checks
End-to-end platform approach (what you need to scale safely)
Unified workflow from keyword patterns → data model → templates → links → publishing
Built-in QA gates (duplicate/thin checks, empty states, canonical rules, index controls)
Cohort-based rollout tooling with segmented sitemaps and monitoring by template
Governance: permissions, approvals, audit logs, and repeatable runbooks
That’s the difference between “we generated 10,000 pages” and “we built an end-to-end SEO system that can publish 10,000 pages and maintain quality, discoverability, and performance.” When you treat pSEO like production software—inputs, templates, tests, releases—you get scale without the chaos.