The question this answers
How does what we fund connect to the outcomes we’re supposed to achieve?
What the problem looks like without an outcomes hierarchy
Your grant program funds community projects. The minister announces it will “strengthen social cohesion and improve community wellbeing.” Grant agreements are signed. Projects are delivered. Acquittals are submitted.
Two years later, someone asks: Did it work? Did social cohesion improve? Is there evidence of better wellbeing?
You have acquittal reports showing projects were delivered. You have photos of events and attendance numbers. But you have no way to connect any of it to the outcomes that justified the program in the first place.
The link between activity and impact was never mapped. So it can’t be demonstrated.
What I deliver
A logic model that shows, step by step, how the program is expected to create change:
- Activities: What gets funded (events, services, infrastructure, training)
- Outputs: What gets delivered (number of events, participants reached, facilities built)
- Short-term outcomes: What changes immediately (skills gained, connections made, access improved)
- Longer-term outcomes: What changes over time (sustained participation, behaviour change, community capacity)
- Impact: The ultimate goal (stronger social cohesion, improved wellbeing)
The deliverable is a clear diagram with supporting notes explaining the logic. Not an academic theory of change. A practical tool you can put in a brief, attach to a business case, or hand to an evaluator.
What good looks like vs what bad looks like
Bad: “The grant program aims to support community organisations to deliver projects that contribute to positive social outcomes.”
This is a sentence, not a logic model. It doesn’t explain how activities lead to outcomes. It can’t be tested or evaluated.
Good:
| Level | Example | Indicator |
|---|---|---|
| Activities | Community events funded | Number of grants awarded |
| Outputs | Events delivered; residents attend | Number of events delivered; number of attendees |
| Short-term outcome | Attendees report new social connections | % reporting new connections (post-event survey) |
| Longer-term outcome | Increased participation in community activities | Repeat attendance rates; community group membership |
| Impact | Stronger local social cohesion | Community wellbeing survey (longitudinal) |
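For teams that keep program logic in a machine-readable form alongside the diagram, the table above can be expressed as structured data, which makes the "each step can be measured" property checkable. This is a minimal illustrative sketch, not a prescribed schema; the class and field names are assumptions.

```python
from dataclasses import dataclass

@dataclass
class Level:
    """One level of the outcomes hierarchy, paired with its indicator."""
    name: str
    example: str
    indicator: str

# The example hierarchy from the table above, expressed as data.
hierarchy = [
    Level("Activities", "Community events funded",
          "Number of grants awarded"),
    Level("Outputs", "Events delivered; residents attend",
          "Number of events delivered; number of attendees"),
    Level("Short-term outcome", "Attendees report new social connections",
          "% reporting new connections (post-event survey)"),
    Level("Longer-term outcome", "Increased participation in community activities",
          "Repeat attendance rates; community group membership"),
    Level("Impact", "Stronger local social cohesion",
          "Community wellbeing survey (longitudinal)"),
]

# The check evaluators care about: every level has a measurable indicator.
assert all(level.indicator for level in hierarchy)
for level in hierarchy:
    print(f"{level.name}: measured by {level.indicator}")
```

Holding the hierarchy as data rather than only as a diagram means reporting templates and evaluation plans can be generated from the same source, so the logic model and the indicators never drift apart.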
Now the logic is visible. Each step can be measured. Evaluation has something to work with.
Why it matters
Grant programs are funded to achieve outcomes, not just distribute money. But if the link between funding and outcomes isn’t mapped, it can’t be demonstrated.
An outcomes hierarchy makes the program’s logic explicit. It tells everyone, from program staff to ministers to evaluators, what the program is trying to achieve and how. It’s the foundation for meaningful reporting and credible evaluation.
Without it, you’re left with activity reports and hope.
Other Outcomes Architecture & Learning Frameworks Deliverables
What Outcomes Data Should Your Grant Program Be Collecting? → An evaluation framework designed into the program architecture so data collection happens through existing touchpoints: application forms, progress reports, and acquittals. Evaluation is built into the workflow, not created as a separate reporting burden after funding decisions are already made.
What Can Your Existing Data Actually Say About Outcomes? → A diagnostic assessment of what your current data can and cannot support. Where evidence is missing, where collection is duplicated or misaligned, and which specific changes would materially improve your ability to demonstrate program outcomes. This is the starting point for programs that have been running without evaluation design.