
The standard SEO audit is broken

Most SEO audits in the wild today end the same way. A consultant runs Screaming Frog, exports the report, layers it with their own findings, and hands the client a 60-page PDF. Severity labels — Critical, High, Medium, Low — get sprinkled across the issues. There's a one-page executive summary. Sometimes a roadmap.

And then nothing happens. The client thanks the consultant, files the PDF in a Google Drive folder, and goes back to whatever they were doing before. Six months later, somebody references "the audit we got from those folks last year" and notes that they should probably do something about it.

The audit failed. Not because the findings were wrong — usually they're correct, often they're insightful. The audit failed because the document didn't end with executable work. Issues are not deliverables. Severity labels are not sequencing decisions. A 60-page PDF is not a project plan.

Three things audits should produce, but usually don't

1. Implementation specifications, not problem descriptions

"Your title tags are too long" is not a deliverable. A deliverable is: "Update the title tag on the following 47 product pages from {{template A}} to {{template B}}, where the new template incorporates the primary keyword in the first 60 characters and removes the redundant brand name. Code change required in theme/templates/product.liquid, line 12. Acceptance criteria: all 47 pages have titles between 50-60 characters with the primary keyword in the first 60 characters; brand name appears once at the end, not at the start."

That's a deliverable. A developer can pick it up and ship it without asking the auditor a follow-up question.
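
The same concreteness applies to verification. Here is a minimal sketch of what checking that spec could look like as a script, assuming the 47 URLs sit one per line in a urls.txt file; PRIMARY_KEYWORD and BRAND are illustrative placeholders, since a real spec would map each URL to its own primary keyword.

```python
# Hypothetical acceptance check for the title-tag spec above. Assumes the
# 47 page URLs are listed one per line in urls.txt. PRIMARY_KEYWORD and
# BRAND are single placeholders; a real spec maps each URL to its keyword.
from html.parser import HTMLParser
from urllib.request import urlopen

PRIMARY_KEYWORD = "widget"  # illustrative placeholder
BRAND = "Acme"              # illustrative placeholder


class TitleParser(HTMLParser):
    """Collects the text inside the page's <title> element."""

    def __init__(self):
        super().__init__()
        self.in_title = False
        self.title = ""

    def handle_starttag(self, tag, attrs):
        if tag == "title":
            self.in_title = True

    def handle_endtag(self, tag):
        if tag == "title":
            self.in_title = False

    def handle_data(self, data):
        if self.in_title:
            self.title += data


def check_title(url):
    """Return a list of acceptance-criteria failures for one page."""
    parser = TitleParser()
    with urlopen(url) as resp:
        parser.feed(resp.read().decode("utf-8", errors="replace"))
    title = parser.title.strip()
    failures = []
    if not 50 <= len(title) <= 60:
        failures.append(f"length {len(title)} outside 50-60")
    if PRIMARY_KEYWORD.lower() not in title.lower():
        failures.append("primary keyword missing")
    if title.lower().startswith(BRAND.lower()):
        failures.append("brand name at the start")
    elif title.lower().count(BRAND.lower()) != 1:
        failures.append("brand name should appear exactly once")
    return failures


if __name__ == "__main__":
    with open("urls.txt") as f:
        for url in (line.strip() for line in f if line.strip()):
            problems = check_title(url)
            print(f"{url}: {'OK' if not problems else '; '.join(problems)}")
```

A check like that doubles as the developer's definition of done: run it before shipping, run it after, hand the output back to the auditor.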

2. Sequencing, not prioritization

"This is High priority" tells you nothing about when to do it. The right output is a sequence: this fix first, then this one, then these three in any order. The sequence should reflect dependencies (you can't optimize internal linking until you've fixed the canonicalization issues), risk (don't change the URL structure during peak traffic season), and quick-win economics (knock out the trivially easy fixes first to build momentum).

3. Acceptance criteria, not vague "improvements"

How do you know a fix shipped correctly? An auditor who can't answer that has handed you something incomplete. Every recommendation should specify what success looks like — concretely, testably. "Schema validates in Google's Rich Results Test." "Mobile Lighthouse Performance score increases from 67 to 90+." "Internal link count to /pricing/ increases from 4 to 20+." Without acceptance criteria, neither you nor your developer can tell when the work is done.
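
To show how one of those criteria can become a test rather than a sentence, here is a hedged sketch that counts internal links to /pricing/ across a set of pages; the page list is a stand-in for a real crawl of the site.

```python
# Hedged sketch of one acceptance criterion from above as a test:
# "Internal link count to /pricing/ increases from 4 to 20+."
# The hard-coded page list stands in for a real crawl of the site.
from html.parser import HTMLParser
from urllib.parse import urlparse
from urllib.request import urlopen


class LinkCounter(HTMLParser):
    """Counts <a> tags whose href path matches the target path."""

    def __init__(self, target_path):
        super().__init__()
        self.target = target_path.rstrip("/")
        self.count = 0

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            href = dict(attrs).get("href") or ""
            if urlparse(href).path.rstrip("/") == self.target:
                self.count += 1


def internal_links_to(pages, target_path):
    total = 0
    for url in pages:
        counter = LinkCounter(target_path)
        with urlopen(url) as resp:
            counter.feed(resp.read().decode("utf-8", errors="replace"))
        total += counter.count
    return total


pages = ["https://example.com/", "https://example.com/features/"]  # placeholders
links = internal_links_to(pages, "/pricing/")
print(f"{links} internal links to /pricing/ (acceptance: 20 or more)")
```

Wire a check like this into the re-test step and "the work is done" becomes something you can run, not something you have to take on faith.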

Why most consultants skip the spec work

Spec writing is hard. It requires the consultant to understand the implementation enough to describe it concretely. It exposes assumptions ("oh, you're on a custom CMS — that changes the recommendation"). It takes 3-5x longer than just listing problems.

And it shifts who looks competent. A vague audit lets the consultant stay clever and abstract. A spec-heavy audit forces them to commit to specific positions that can be tested when the work ships.

For consultants who plan to maintain a long retainer relationship, the vague approach is actually rational — it keeps clients dependent on follow-up calls and "implementation support." For clients trying to ship the work, it's the opposite of useful.

What to demand from any auditor you hire

Three questions to ask any SEO consultant before you sign an audit engagement:

  1. Will every recommendation come with an implementation specification a developer can execute without follow-up questions? If yes, ask for a sample. If they hedge, you have your answer.
  2. Will the audit specify acceptance criteria so we can verify each fix shipped correctly? Same logic. Ask for a sample.
  3. Will you re-test after we ship the fixes, and tell us honestly if anything we shipped didn't deliver the result you expected? Most won't commit to this. The good ones will, and they'll mean it.

If the consultant can't answer those three questions affirmatively, you're paying for expensive feedback. Sometimes that's what you need. Often it's not.

An audit that ends with a PDF is incomplete. An audit that ends with shipped fixes is complete. Almost no consultants in the market work this way. Hire the ones who do.

The alternative: what every Palmetto audit produces

This is the ethos behind every audit we deliver. Not because the work is more impressive that way, but because the work is more useful that way. Every recommendation has a spec. Every spec has acceptance criteria. We re-test after fixes ship and tell you honestly when something we recommended didn't deliver — including the moments where we got the diagnosis wrong.

If that sounds like the kind of audit you've been hoping someone would deliver, that's what we do.

Working through this in your business?

30-minute calls. No pitch deck. We'll either be useful or we won't, and you'll know in the first 10 minutes.

Start a conversation