The Problem
A grant review organization needed evaluators for Washington DC's FY2026 economic development programs: scoring small business applications for awards of up to $90,000 each against a published rubric. The challenge wasn't understanding the rubric. It was demonstrating the ability to apply it rigorously, with verifiable evidence, in a field where we hadn't worked before.
A resume listing "research experience" wouldn't differentiate us. The evaluator's actual job is structured assessment — reading applications, mapping evidence to criteria, scoring consistently, and flagging gaps. The only credible proof of that capability is a working example.
The Approach
Rather than submitting qualifications and hoping for an interview, we built the deliverable itself: a retrospective evaluation of a real grant awardee, scored against the actual published criteria, using only publicly available information.
- Program Research. Compiled all publicly available DC DMPED program documentation — RFAs, scoring rubrics, NOFAs, and program pages — into a structured research base. This gave us the exact evaluation criteria reviewers use.
- Awardee Selection. Identified a real FY24 Great Streets Retail awardee — Arcay Chocolates, a Venezuelan-owned artisan chocolate business in Georgetown — whose public footprint was rich enough to support a substantive evaluation.
- Source Compilation. Gathered 15 independent public sources: DMPED press releases, SEC financial filings, local press coverage, business directory listings, community bond offering data, and the company's own website.
- Structured Assessment. Evaluated the business against all five Section F.1 scoring criteria (Business Summary, Financial Viability, Job Creation, Corridor Impact, Business Growth Plan) with evidence-backed scores and explicit reviewer notes on information gaps.
- Verification Architecture. Built a citation system where every factual claim links to its source — readers can verify any assertion in two clicks without taking anything on trust.
The Build
The assessment draws on two distinct research bases: one covering the grant programs themselves (scoring rubrics, eligibility requirements, program goals), and one covering the business being evaluated (financial data, press coverage, community engagement, growth trajectory). Keeping these research bases separate ensures that rubric interpretation doesn't bleed into business evaluation — the same discipline a reviewer would apply when handling a real application.
| Source Category | Examples | What It Provides |
|---|---|---|
| Program Documentation | FY2026 Great Streets RFA, Growth Fund RFA, NOFAs | Scoring rubric, eligibility criteria, program objectives |
| Financial Filings | SEC Form C (EDGAR), SMBX bond prospectus | Revenue, assets, debt structure, use of funds |
| Government Records | DMPED press releases, Mayor's office announcements | Prior award history, dual-program status confirmation |
| Local Press | The Georgetowner, NBC4 Washington, The Eagle | Community engagement, corridor involvement, business narrative |
| Business Listings | Georgetown DC directory, Union Market, Yelp | Location verification, customer reception, operating details |
Key design decision: Information gaps are treated as findings, not failures. Where public data can't fill a scoring requirement — like specific job creation commitments — the assessment flags exactly what would need to come from the application itself. This is how experienced reviewers actually work: identifying what's missing is as important as evaluating what's present.
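The gap-as-finding design can be sketched as a data structure. This is an illustrative sketch, not the actual scoring code: the class name, fields, and the 20-point scale are assumptions for the example; only the "Job Creation" criterion and the idea of recording gaps alongside evidence come from the assessment described above.

```python
from dataclasses import dataclass, field

@dataclass
class CriterionScore:
    """One rubric criterion, scored with its evidence trail.

    Gaps are first-class findings: anything only the application
    itself could supply is recorded, not silently penalized.
    """
    criterion: str
    max_points: int                 # hypothetical scale, not from the RFA
    awarded_points: int
    evidence: list[str] = field(default_factory=list)   # public-source citations
    gaps: list[str] = field(default_factory=list)       # application-only data

    @property
    def is_complete(self) -> bool:
        # A score is complete only when no required evidence is missing.
        return not self.gaps

# Example record for the Job Creation criterion from the write-up;
# point values and source labels are illustrative.
job_creation = CriterionScore(
    criterion="Job Creation",
    max_points=20,
    awarded_points=12,
    evidence=["DMPED FY24 awardee press release", "SMBX bond prospectus"],
    gaps=["Specific FTE hiring commitments (application-only data)"],
)
```

A reviewer reading this record sees not just a number but exactly which claims are publicly verified and which would need to come from the application itself.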
The Outcome
The completed assessment includes a visual scoring dashboard, detailed criterion-by-criterion analysis, reviewer notes flagging data gaps, and a cross-program comparison showing how evaluation frameworks differ between Great Streets and the Growth Fund.
Every factual claim in the document is linked to its source: dotted underlines in the body text jump directly to the relevant passage in the original source document, and footnotes provide full citations with links to the source pages.
The assessment was sent to the reviewer organization alongside an introductory email — artifact-first, not resume-first. The work speaks for itself: rather than claiming evaluation capability, we demonstrated it.
Technical Details
The assessment was built from two independent research compilations totaling over 1,000 structured data points drawn from 18 harvested sources. Research was processed through a multi-stage pipeline that extracts, normalizes, and indexes text at the sentence level — enabling precise citation and source-faithful phrasing throughout the deliverable.
The citation architecture uses three layers: bidirectional footnote anchors for internal navigation, source links with verification badges in footnotes, and text-fragment links (#:~:text=) on key claims that navigate directly to the relevant passage in the source document. This architecture was adapted from legal research citation standards used in our regulatory analysis work.
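The third layer, text-fragment links, can be generated mechanically. A sketch, assuming a helper like the one below (the function and the example URL are illustrative; the real system would also emit the prefix/suffix disambiguation the fragment syntax supports for passages that appear more than once):

```python
from urllib.parse import quote

def text_fragment_link(url: str, passage: str) -> str:
    """Build a URL with a text fragment (#:~:text=) that scrolls
    supporting browsers directly to the quoted passage."""
    # Percent-encode everything; '-' and ',' carry meaning in the
    # text-fragment syntax, so they must not pass through unencoded.
    encoded = quote(passage, safe="")
    return f"{url}#:~:text={encoded}"

link = text_fragment_link(
    "https://example.gov/press-release",  # placeholder URL
    "Arcay Chocolates",
)
# → https://example.gov/press-release#:~:text=Arcay%20Chocolates
```

Browsers that support scroll-to-text fragments highlight and scroll to the passage; browsers that don't simply load the page, so the link degrades gracefully.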
The scoring methodology applies the published Section F.1 criteria from the FY2026 Great Streets Retail RFA. Each criterion was evaluated against available public evidence, with conservative scoring where application-specific data would be required for a complete assessment. The cross-program comparison draws on both the Great Streets and Growth Fund program documentation to illustrate evaluation framework differences.
Structured Research That Speaks for Itself
Whether you need grant evaluation, regulatory analysis, or evidence-grounded research — we build work products with built-in verification so you never have to take our word for it.