TL;DR: Over roughly twelve weeks we published 314 blog posts on autorank.so/blog using a Programmatic SEO playbook — scraping competitor sitemaps and rewriting their articles with AI. We did it deliberately, to test whether mass AI content can rank in 2026. Over the most recent 28 days of Search Console data we got 5 clicks. Five. The diagnosis was a mix of a hard technical bug (an SEO plugin that was silently disabled the entire time), a content-quality verdict from Google (“Discovered – currently not indexed” on most posts), and the deeper truth that the pSEO playbook just doesn’t work the way it used to. Here’s the full data, what we changed, and what we’d do differently.
What we did
The plan, executed February through April 2026:
- Scraped SEObot’s blog sitemap → 203 article URLs
- Scraped Outrank.so’s blog → 144 articles, filtered to 88 topically relevant
- For each, fetched the source HTML, fed it to Claude Sonnet, and produced a unique-title rewrite that kept the substance but changed the wording
- Published every output via the WordPress REST API to autorank.so/blog with a featured image, focus keyword, meta description, and tags (a rough sketch of that publish call follows this list)
- Ran daily for ~12 weeks. Total published: 314 blog posts
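For concreteness, here is a minimal sketch of what that publish step looks like against the WordPress REST API. It is illustrative, not our exact pipeline code: the credentials are placeholders, and writing the `rank_math_*` values through the `meta` field assumes those keys have been registered for the REST API (our pipeline wrote them through its own publishing layer).

```python
# Minimal sketch of the publish step (illustrative, not our exact pipeline code).
# Assumes WordPress application-password auth and that the Rank Math meta keys
# are exposed to the REST API -- they are not by default.
import requests

WP = "https://autorank.so/wp-json/wp/v2"
AUTH = ("bot-user", "app-password-here")  # placeholder credentials

def publish_post(title, html, excerpt, focus_keyword, tag_ids, media_id):
    payload = {
        "title": title,
        "content": html,
        "excerpt": excerpt,          # doubles as the meta-description source
        "status": "publish",
        "tags": tag_ids,             # existing tag term IDs
        "featured_media": media_id,  # image previously uploaded via /media
        "meta": {                    # only works if these keys are REST-registered
            "rank_math_focus_keyword": focus_keyword,
            "rank_math_description": excerpt,
        },
    }
    resp = requests.post(f"{WP}/posts", json=payload, auth=AUTH, timeout=30)
    resp.raise_for_status()
    return resp.json()["link"]
```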
The thesis was the standard pSEO bet: pick keywords competitors already rank for, publish “good enough” content, capture some fraction of their traffic over time. We weren’t trying to copy them — every output had a unique title, fresh paragraphs, our own framing — but the topical scope was deliberately a clone.
What happened (the brutal data)
Here’s the Google Search Console performance for those 314 posts over the most recent 28 days (April 4 → May 1, 2026):
| Metric | Value |
|---|---|
| Total impressions | 25,378 |
| Total clicks | 5 |
| CTR | 0.02% |
| Average position | 77.2 (page 8+) |
| Posts with at least 1 click | 3 of 314 |
The impression distribution is the more useful chart. Of 314 posts:
| Posts | % of total | Bucket |
|---|---|---|
| 251 | 80% | Zero impressions in 28 days |
| 20 | 6% | 1–9 impressions |
| 23 | 7% | 10–99 impressions |
| 13 | 4% | 100–999 impressions |
| 7 | 2% | 1,000+ impressions |
Eighty percent of what we published was completely invisible to Google. Not “ranking poorly” — actually never shown to a human user. Google indexed some, refused to index others (“Discovered – currently not indexed”), and for the rest the verdict was effectively “your content isn’t worth our index slot.”
The 7 posts that did get over 1,000 impressions all clustered in the 70–95 position range — pages 7 through 10. Plenty of impressions, zero clicks, because nobody scrolls to page 8.
The smoking gun: an SEO plugin that wasn’t doing SEO
While diagnosing the failure, we found something embarrassing: the entire time we’d been publishing, our SEO plugin (Rank Math) had its frontend module silently disabled. Specifically, in the plugin’s bootstrap code:
```php
public function init_frontend() {
    if ( $this->container['registration']->invalid ) {
        return; // ← bailed out for months
    }

    $this->container['frontend'] = new \RankMath\Frontend\Frontend();
}
```
The setup wizard had never been completed (no `rank_math_registration_skip` option set, no account connected), so `registration->invalid` was true and the `Frontend` class was never instantiated. The downstream effect:
- No `<meta name="description">` on any post
- No proper `<meta name="robots">` output (only WordPress’s bare default)
- No `og:*` tags (broken Facebook/LinkedIn sharing)
- No `twitter:*` card tags
- No JSON-LD schema (no Article, no BreadcrumbList, no rich-results eligibility)
- No structured data of any kind
The `rank_math_focus_keyword` and `rank_math_description` values were being written to the database correctly via our publishing API. The plugin just never read them. Every one of those 314 posts shipped to Google with the bare HTML `<title>` WordPress generates by default and nothing else.
Position 77 is exactly what you’d expect for content with no SEO signals attached. Google has to guess what the page is about from the body text alone, and 314 generic AI-written articles don’t differentiate themselves from thousands of similar pages on the same topics.
So how much of the failure was the plugin bug?
This is the question we keep coming back to. Three plausible attributions:
1. “Mostly the plugin bug” — most pSEO content with proper SEO meta would have ranked at position 30–50 instead of 77. With the plugin fixed, rankings should improve substantially over the next few weeks.
2. “Mostly the content quality” — Google’s helpful-content classifier would have downranked these articles regardless of meta. Position 77 is what you get for “yet another rewrite of the SEMrush-vs-Ahrefs article.” The plugin bug is real but masks the underlying problem.
3. “Both, equally” — the plugin bug compounded with weak content. With the plugin fixed, rankings improve some but not enough to actually drive traffic without a content quality lift.
We think it’s #3. We’re going to know definitively in 4–6 weeks once Google has had time to recrawl every page with the new meta. If the average position drops from 77 to ~40 we’ll attribute roughly half to the plugin bug. If it stays at 70+, the content was the bigger problem all along.
What we changed (the recovery work)
Today we shipped a recovery pass that addresses the technical issues and makes a first pass at the content problem:
- Enabled Rank Math’s frontend module. One option update (`rank_math_registration_skip = 1`) and the plugin started outputting all the meta tags it had been silently dropping. Every one of the 63 indexable posts now has a proper meta description, `og:` tags, Twitter cards, and Article + BreadcrumbList JSON-LD schema. Rich-results eligible.
- Made the sitemap a real sitemap index. Previously the FastAPI app and the WordPress install each had their own sitemap, with only the FastAPI one declared in `robots.txt`. Google had to discover the WP sitemap on its own. Now `/sitemap.xml` is a `<sitemapindex>` pointing at both, and `robots.txt` declares both explicitly (a sketch of the new index route follows this list).
- Noindexed the 251 zero-impression posts. Google had already decided not to show them; making it explicit cleans up the site-quality signal that was likely dragging down the rest of the domain.
- Resubmitted sitemaps to Google + IndexNow ping. Tells Google + Bing “come refetch — there’s new metadata.”
- Added “Related reading” internal-link blocks to all 63 keepers. Each post now links to its 3 most topically similar peers — a real topical hub instead of 314 islands.
- Rewrote the top 2 posts. The 8,415-impression “best SEO reporting tools” article got a comparison table, a “How we tested” methodology section, and “Verdict” lines per tool. The “How to Block AhrefsBot” article got an answer-first structure with a comparison table of the 5 blocking methods, copy-paste code for each, and a “Will blocking AhrefsBot hurt my SEO?” section addressing the most common reader concern.
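To make the sitemap change concrete, here is roughly what the new `/sitemap.xml` handler in the FastAPI app looks like. It is a sketch rather than the production code, and the two child sitemap URLs are illustrative placeholders for our actual paths.

```python
# Sketch of the /sitemap.xml handler: a sitemap index pointing at both child
# sitemaps. The child URLs below are illustrative, not our exact paths.
from fastapi import FastAPI, Response

app = FastAPI()

CHILD_SITEMAPS = [
    "https://autorank.so/pages-sitemap.xml",       # FastAPI app's own pages (hypothetical path)
    "https://autorank.so/blog/sitemap_index.xml",  # WordPress/Rank Math sitemap (hypothetical path)
]

@app.get("/sitemap.xml")
def sitemap_index() -> Response:
    entries = "".join(
        f"<sitemap><loc>{url}</loc></sitemap>" for url in CHILD_SITEMAPS
    )
    xml = (
        '<?xml version="1.0" encoding="UTF-8"?>'
        '<sitemapindex xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">'
        f"{entries}"
        "</sitemapindex>"
    )
    return Response(content=xml, media_type="application/xml")
```

`robots.txt` then lists the same two child sitemaps as `Sitemap:` lines, so crawlers can find them either way.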
What pSEO got wrong in 2026
Honest take, having now run the experiment ourselves and dug into the data:
1. The “good enough” bar is much higher than it was
When pSEO worked best (2018–2022), Google’s quality systems weren’t great at distinguishing decent content from great content. A correctly-structured article with the right keyword density was sometimes enough. In 2026, Google’s helpful-content classifier looks at the article and asks: “Does this provide unique value, or is it a rewrite of something that already exists?” Mass-rewritten content fails that test — the system was specifically designed to catch it.
2. New domains compound the disadvantage
Established sites with topical authority can publish similar pSEO content and have it rank because Google trusts the domain. A new domain (autorank.so was three months old when we ran this) starts from zero authority. Google’s bar for “do I trust this site enough to surface this content?” is higher, and “another rewrite of an existing article” doesn’t pass it.
3. The competitive shape of “best X tools” is tougher than it looks
Most pSEO playbooks target listicles (“10 best X for Y”) because they’re easy to template. But that’s also where every pSEO operation in the world is competing. The top-ranking results in those SERPs have built up signals over years — backlinks, brand searches, click-through rates, time-on-page. Out-publishing them with rewrites doesn’t move you up; you have to outdo them on the underlying signals.
4. SEO meta isn’t optional in 2026
Our plugin bug took us from “competing badly” to “completely invisible.” A 60% effort job on meta would have at least put us in the page-3-to-5 range where we could measure real movement. If you’re publishing at scale, monitor the actual head HTML being shipped — don’t assume the plugin is doing its job.
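Here is the kind of check we now run after publishing: fetch the live post and verify the `<head>` actually contains what the SEO plugin is supposed to emit. A minimal sketch — the marker strings reflect what Rank Math normally outputs and would need adjusting for a different stack.

```python
# Rough post-publish sanity check: does the live page's <head> contain the
# tags the SEO plugin should be emitting? Marker strings are illustrative.
import sys
import requests

REQUIRED_HEAD_MARKERS = [
    '<meta name="description"',
    '<meta name="robots"',
    '<meta property="og:title"',
    '<meta name="twitter:card"',
    'application/ld+json',   # JSON-LD schema block
]

def check_head(url: str) -> list[str]:
    head = requests.get(url, timeout=15).text.split("</head>", 1)[0]
    return [m for m in REQUIRED_HEAD_MARKERS if m not in head]

if __name__ == "__main__":
    missing = check_head(sys.argv[1])
    if missing:
        print(f"FAIL: missing {missing}")
        sys.exit(1)
    print("OK: all expected head tags present")
```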
5. Internal linking matters more than people think
314 posts published, zero internal links between them. Each article was an island. Google interprets that as “these aren’t connected to a topical hub” — it weakens the authority signal across all of them. A simple “Related reading” block on every post (linking to 3 topically related peers) materially changes how Google sees the cluster.
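For illustration, one straightforward way to pick those 3 peers is TF-IDF over the post text plus cosine similarity — a sketch of the approach, not necessarily our exact implementation.

```python
# Sketch: pick the 3 most topically similar peers per post using TF-IDF
# vectors and cosine similarity (illustrative, not our exact pipeline).
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def related_posts(posts: list[dict], top_n: int = 3) -> dict[str, list[str]]:
    """posts: [{"slug": ..., "title": ..., "text": ...}, ...]"""
    corpus = [f'{p["title"]} {p["text"]}' for p in posts]
    tfidf = TfidfVectorizer(stop_words="english", max_features=20_000)
    sims = cosine_similarity(tfidf.fit_transform(corpus))
    related = {}
    for i, post in enumerate(posts):
        # highest-similarity peers, excluding the post itself
        ranked = sims[i].argsort()[::-1]
        related[post["slug"]] = [posts[j]["slug"] for j in ranked if j != i][:top_n]
    return related
```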
What we’d do differently
If we ran this again with what we know now:
- Publish 30 high-effort articles, not 300 rewrites. One genuinely useful, opinionated, original-data piece beats ten generic rewrites every time. The 30 wouldn’t all rank, but the ones that did would compound — earning links and brand mentions that lift the whole domain.
- Build a topical cluster, not a buffet. Pick 2–3 narrow topic areas and dominate them with hub-and-spoke linking, instead of writing about every SEO topic that exists. Topical authority is a real ranking factor.
- Test the meta tags before publishing 100 posts. A single `curl | grep robots` on a sample post would have caught the Rank Math bug before we’d published anything at scale. Build that into the publish pipeline.
- Don’t publish anything you wouldn’t link to from your homepage. If an article isn’t good enough that you’d recommend it to a real reader, it’s also not good enough to help your domain. The 251 zero-impression posts we just noindexed never should have been published in the first place.
- Get backlinks before publishing at scale. Domain authority is the lever — it determines whether Google trusts you enough to surface your content. New-domain pSEO without authority work first is rolling a boulder uphill.
What we’re watching for
The next 4–6 weeks tell us whether the recovery work was enough or whether we need to escalate to deeper content rewrites:
- Average position — should drop from 77 toward 50 if the meta tag fix is meaningful
- “Discovered – currently not indexed” count — should fall as Google recrawls and re-evaluates
- Total clicks — the only metric that matters, currently 5 over 28 days
- Rich-results impressions — Article schema is now flowing; we should see SERP appearances we weren’t getting before
- Indexed pages count — should drop as the 251 noindexed posts fall out, then stabilize
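To track these numbers without manual GSC exports, a small query against the Search Console API does the job. A sketch, assuming a service account that has been granted read access to the property:

```python
# Weekly check against the Search Console API for the blog's clicks,
# impressions, and impression-weighted average position. Assumes a service
# account with access to the property; file name is a placeholder.
from google.oauth2 import service_account
from googleapiclient.discovery import build

SCOPES = ["https://www.googleapis.com/auth/webmasters.readonly"]
SITE = "sc-domain:autorank.so"

def blog_performance(start: str, end: str, key_file: str = "gsc-service-account.json"):
    creds = service_account.Credentials.from_service_account_file(key_file, scopes=SCOPES)
    gsc = build("searchconsole", "v1", credentials=creds)
    rows = gsc.searchanalytics().query(
        siteUrl=SITE,
        body={
            "startDate": start,
            "endDate": end,
            "dimensions": ["page"],
            "dimensionFilterGroups": [{
                "filters": [{"dimension": "page", "operator": "contains", "expression": "/blog/"}],
            }],
            "rowLimit": 1000,
        },
    ).execute().get("rows", [])
    clicks = sum(r["clicks"] for r in rows)
    impressions = sum(r["impressions"] for r in rows)
    # impression-weighted average position across the blog
    avg_pos = sum(r["position"] * r["impressions"] for r in rows) / max(impressions, 1)
    return {"pages": len(rows), "clicks": clicks,
            "impressions": impressions, "avg_position": round(avg_pos, 1)}
```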
If by mid-June the picture hasn’t materially improved, the diagnosis was wrong and we move to plan B: aggressive pruning down to 10–15 articles plus a single comprehensive pillar page per topic cluster. We’ll publish a follow-up either way.
Bottom line
The pSEO playbook that worked in 2020 is mostly a trap in 2026. It’s still possible to rank with AI-assisted content, but the bar has moved: genuinely useful, original, authored content with a clear point of view, written by someone with credibility on the topic. Mass rewrites with weak SEO meta on a young domain produce exactly what we got — 5 clicks in 28 days.
The good news: the technical recovery work is cheap. The actual content lift is the hard part, and that’s the work ahead.
If you’re building an AI content platform and want to do this without making our mistakes, that’s literally what we built Autorank for. We learned the lesson; the product reflects it.