Field Notes • April 2026

The Receipts

What goes into a LinkedIn post when every claim has to hold up.

530 words · 8 research collections · 6,000+ verified artifacts · 3 draft variants

The Post

We recently published a LinkedIn post introducing who we are and what we do. It's 530 words. It took a full working session to produce. Here's why.

The post makes specific claims: that corrective exercise programming and AI verification share a structural discipline, that transparent pricing frameworks can be built on research rather than convention, that federal grant evaluation methodology transfers across domains. Each of those claims had to be something we could actually defend.

This is the story of how we built it.

The Evidence Base

Before writing a single word, we inventoried every substantive claim the post might make and mapped each one to a verified research collection. If a claim couldn't trace to evidence we'd already built, it didn't go in the post.

8 research collections · 6,000+ verified artifacts · 60+ independent sources · 0 unsupported claims
Claim in post: Corrective exercise & scapular dyskinesis
Research base: Exercise science research — 8 peer-reviewed sources on shoulder rehabilitation, kinetic chain mechanics, and corrective programming
Scale: 3,745 artifacts
Verifiable at: Training Research

Claim in post: Federal health grant evaluation
Research base: Dual-source evaluation framework — federal grant rubric criteria + nonprofit program outcomes across 17 independent sources
Scale: 778 artifacts
Verifiable at: Health Grant Assessment

Claim in post: Transparent pricing built on research
Research base: Pricing psychology research — NYOP/PWYW models, trust-based purchasing behavior, transparency effects on conversion
Scale: 675+ artifacts
Verifiable at: BAF Training

Claim in post: AI verification methodology
Research base: Demonstrated across portfolio — the verification architecture visible in every case study is itself the evidence
Scale: Portfolio-wide
Verifiable at: Full Portfolio

Why this matters: Most LinkedIn posts make claims from authority or experience. This one makes claims from evidence. Every substantive assertion in 530 words traces to research we built, indexed, and can show you. That's not typical content creation. That's the same standard we apply to client work.

The Process

We didn't write one draft and call it done. We built three distinct variants, each grounded in different evidence and structured around different opening strategies:

Variant A: The Paradox

Opens with two concrete projects — shoulder program and grant evaluation — side by side. Drops the reader into the incongruity with zero setup. Selected.

Variant B: The Discovery

Opens with pricing research, then reveals the same methodology applied to federal grants. Names "balance" explicitly as the connecting principle.

Variant C: The Algorithm

Opens with LinkedIn's own recommendation engine — 8 personal training jobs, 0 AI roles — as a live demonstration of the categorization problem the post is about.

Each variant was tested against a voice standard: does this sound like something you'd say to a friend who asked what you do, or does it sound like a LinkedIn post? If it sounded like LinkedIn, it was wrong.

Variant A was selected because the paradox hits hardest in concrete terms. The specificity — "scapular dyskinesis," not "exercise science" — carries more credibility than any claim of expertise could.

The Logging Discipline

A good personal trainer logs every session. Not because they enjoy paperwork, but because the log is the program. Sets, reps, RPE, movement quality notes, regression and progression decisions — the record is what turns individual sessions into a coherent training arc.

We log every step of our AI work for the same reason. Research sources are indexed. Decisions are documented. Verification steps are recorded. When we say "3,745 verified artifacts from 8 peer-reviewed sources," that number isn't estimated. It's counted, because the system that produced it was designed to count.
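The counting discipline described above can be sketched in miniature: if every artifact is appended to a ledger at the moment it is verified, the headline totals fall out of a length check rather than an estimate. This is a hypothetical illustration, not our actual tooling; the names (`Collection`, `log_artifact`) are invented for the sketch.

```python
from dataclasses import dataclass, field

@dataclass
class Collection:
    """One research collection: indexed sources plus verified artifacts."""
    name: str
    sources: int                                    # independent sources indexed
    artifacts: list = field(default_factory=list)   # verified artifact IDs

    def add(self, artifact_id: str) -> None:
        # Each artifact is recorded as it is verified, so totals are counted,
        # never estimated.
        self.artifacts.append(artifact_id)

    def count(self) -> int:
        return len(self.artifacts)

ledger: dict[str, Collection] = {}

def log_artifact(collection: str, artifact_id: str, sources: int = 0) -> None:
    """Record one verified artifact against its collection."""
    ledger.setdefault(collection, Collection(collection, sources)).add(artifact_id)

# Example: three artifacts logged against one collection of 8 sources
for i in range(3):
    log_artifact("exercise-science", f"artifact-{i}", sources=8)

print(ledger["exercise-science"].count())  # 3
```

The point of the sketch is the design choice: because the record is built at verification time, a figure like "3,745 verified artifacts" is a query against the ledger, not a claim about it.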

This case study exists because of that logging discipline. We can tell you exactly how the post was built — the evidence inventory, the variant testing, the voice calibration — because we have the record. The same record-keeping that makes our client deliverables auditable makes our own process transparent.

The connection: The discipline of logging sessions, verifying sources, and checking whether things actually worked rather than just feeling like they worked — that's the thread running through everything we do. Personal training. AI verification. Grant evaluation. Content creation. The domain changes. The discipline doesn't.

The Algorithm Paradox

While building this post, we assessed the LinkedIn jobs feed. The recommendation algorithm shows 8 personal training positions and 0 AI-related roles — despite the profile headline reading "AI Implementation Specialist." The algorithm sees one category and stops looking.

But when you actively search for "prompt engineer" or "AI consultant," 90+ relevant positions appear. LinkedIn's own system even flags a "top applicant" match for an AI evaluation role. The market recognizes both identities. The algorithm just can't hold them at the same time.

That observation didn't make it into the post directly. But it informed the post's thesis: balance isn't a concept you explain. It's the structural condition of existing between categories that machines treat as separate.

What This Demonstrates

This isn't a case study about a LinkedIn post. It's a demonstration of what it looks like when the same methodology used for federal grant evaluations and corrective exercise programs is applied to something as simple as introducing yourself online.

The post is 530 words. The evidence behind it is 6,000+ verified research artifacts built across eight separate research efforts over eighteen months. The ratio is the point.

If you're curious about what that methodology looks like applied to your work, the portfolio has the full picture. And if you'd rather just talk about it, we're here for that too.

View the portfolio →     Start a conversation →