The question this answers:
What data do we need to collect, when, and how?
What your grant program looks like without an evaluation and data collection framework
Your grant program has been running for two years. An evaluation is commissioned. The evaluator asks for outcomes data.
You have application forms that asked about expected outcomes in free text. You have acquittal reports that asked whether projects were completed. You have some photos and case studies.
None of it answers the evaluator’s questions. Expected outcomes weren’t standardised. Actual outcomes weren’t collected. There’s no baseline to compare against.
The evaluation becomes a salvage operation, trying to extract insight from data that was never designed to support it.
What the deliverable actually is
A practical outcomes framework that sets out:
- What data to collect: Tied directly to your outcomes hierarchy
- When to collect it: At application, during delivery, at acquittal, post-completion
- How to collect it: Specific questions, formats, and methods
- Who is responsible: Applicants, grant managers, external evaluators
The framework includes recommended questions for:
- Application forms (baseline data, expected outcomes)
- Progress reports (early indicators, course correction)
- Acquittals (outputs delivered, immediate outcomes)
- Post-completion follow-up (sustained outcomes, where relevant)
It’s designed so evaluation is built into the workflow, not bolted on at the end.
What good looks like vs what bad looks like
Bad: An acquittal form that asks: “Please describe the outcomes achieved by your project.”
You’ll get narrative. Some will be specific, most will be vague, none will be comparable. The evaluator can’t aggregate it. The minister can’t cite it.
Good:
| Data point | Collection stage | Question format | Purpose |
|---|---|---|---|
| Number of direct participants | Application (expected) / Acquittal (actual) | Number field | Output measurement, comparison to target |
| Participant demographics | Acquittal | Checkboxes (age, location, cohort) | Disaggregated reporting, equity analysis |
| Participant-reported skill gain | Acquittal | Scale 1-5: “I gained new skills through this project” | Short-term outcome indicator |
| Participant-reported social connection | Acquittal (or post-survey) | Scale 1-5: “I made new connections in my community” | Outcome indicator aligned to program logic |
Data is structured, comparable, and tied to the outcomes hierarchy. Evaluation has something to work with from day one.
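To make the contrast concrete, here is a minimal sketch of what "structured and comparable" buys you. The field names, records, and figures are hypothetical illustrations modelled on the data points in the table above, not a prescribed schema:

```python
from dataclasses import dataclass
from statistics import mean

# Hypothetical record mirroring the table's data points.
@dataclass
class AcquittalRecord:
    participants_expected: int   # captured at application
    participants_actual: int     # captured at acquittal
    skill_gain: int              # 1-5 scale: "I gained new skills through this project"
    social_connection: int       # 1-5 scale: "I made new connections in my community"

# Three illustrative projects reporting against the same structured fields.
records = [
    AcquittalRecord(50, 62, 4, 5),
    AcquittalRecord(120, 98, 3, 4),
    AcquittalRecord(30, 30, 5, 4),
]

# Because every project reports the same fields, program-level
# aggregation is a one-liner per indicator -- something free-text
# acquittal narratives can never support.
total_reached = sum(r.participants_actual for r in records)
avg_skill_gain = mean(r.skill_gain for r in records)
pct_of_target = total_reached / sum(r.participants_expected for r in records)

print(total_reached, round(avg_skill_gain, 1), round(pct_of_target, 2))
```

With free-text outcome descriptions, none of these program-level figures can be computed at all; with structured fields, they fall out of the data directly.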
Why it matters
Evaluation can only work with the data it’s given. If outcome data isn’t collected, or isn’t collected in a usable format, evaluation becomes guesswork.
A data collection framework ensures you’re gathering the right information, at the right time, in a format that supports analysis. It reduces burden on grant recipients by being specific about what’s needed.
And it means that when the minister asks whether the program worked, you have an answer based on evidence, not anecdote.
Other Outcomes Architecture & Learning Frameworks Deliverables
How to Connect Funding Decisions to Grant Program Outcomes → An outcomes architecture that maps how your program’s funding logic connects to the outcomes it claims to achieve. If the connection between what you fund and what you measure doesn’t hold, the program cannot demonstrate value regardless of how well individual projects perform.
What Can Your Existing Data Actually Say About Outcomes? → A diagnostic assessment of what your current data can and cannot support. Where evidence is missing, where collection is duplicated or misaligned, and which specific changes would materially improve your ability to demonstrate program outcomes. This is the starting point for programs that have been running without evaluation design.