What Can Your Existing Data Actually Say About Outcomes?

The question this answers:
We’re already mid-stream. What can our current data tell us, and what’s missing?
What your grant problem looks like without a gap analysis for existing programs
Your grant program has been running for three rounds. An evaluation is coming. Or budget pressure is mounting. Or a new minister wants to know whether it’s working.

You suspect the data isn’t great. But you don’t know exactly what you have, what’s missing, or what would need to change to get a credible answer.

You’re flying blind, hoping the evaluation will somehow extract insight from whatever exists.
What I deliver
A short, focused report that assesses your current data against your program’s intended outcomes. It answers:

  • What can your current data support? Which outcomes can be credibly measured with existing information?

  • Where are the gaps? Which outcomes have no data, or data that’s too inconsistent to use?

  • What’s the impact of the gaps? What questions can’t be answered? What risks does this create?

  • What changes would help? Which gaps can be closed going forward, and at what cost?
The report is practical, not academic. It tells you what you’re working with and what to prioritise.
What bad looks like vs what good looks like
Bad: No assessment. The evaluation is commissioned and discovers the gaps too late. The final report is full of caveats. The findings are inconclusive. The investment in evaluation is largely wasted.
Good:

  • Increased community participation
    Data available: attendance numbers (inconsistent format)
    Gap: no baseline; no post-event follow-up
    Impact: cannot measure change over time
    Recommended action: standardise the attendance field; add a 3-month follow-up question in the next round

  • Improved social connection
    Data available: acquittal narrative only
    Gap: no structured outcome data
    Impact: cannot quantify or compare
    Recommended action: add a scaled question to the acquittal form

  • Strengthened community organisations
    Data available: none collected
    Gap: complete gap
    Impact: cannot evaluate this outcome
    Recommended action: add a capacity indicator to the application and acquittal

  • Outputs delivered
    Data available: strong (events, participants, locations)
    Gap: none
    Impact: can report outputs confidently
    Recommended action: maintain current approach
Now you know where you stand. You can make informed decisions about what to fix, what to accept, and how to position the evaluation.
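A gap matrix like the example above can also be held as structured data, so the priorities fall out mechanically. The sketch below is illustrative only: the field names, helper functions, and rows are hypothetical, chosen to mirror a subset of the example table, not a real tool.

```python
from dataclasses import dataclass

@dataclass
class OutcomeAssessment:
    """One row of the gap-analysis matrix (hypothetical field names)."""
    outcome: str
    data_available: str
    gap: str
    impact: str
    action: str

# Illustrative rows mirroring the example table above
assessments = [
    OutcomeAssessment(
        outcome="Increased community participation",
        data_available="Attendance numbers (inconsistent format)",
        gap="No baseline; no post-event follow-up",
        impact="Cannot measure change over time",
        action="Standardise attendance field; add 3-month follow-up question",
    ),
    OutcomeAssessment(
        outcome="Strengthened community organisations",
        data_available="None collected",
        gap="Complete gap",
        impact="Cannot evaluate this outcome",
        action="Add capacity indicator to application and acquittal",
    ),
    OutcomeAssessment(
        outcome="Outputs delivered",
        data_available="Events, participants, locations",
        gap="None",
        impact="Can report outputs confidently",
        action="Maintain current approach",
    ),
]

def complete_gaps(items):
    """Outcomes with no usable data at all -- these cannot be evaluated."""
    return [a.outcome for a in items if a.gap == "Complete gap"]

def measurable_now(items):
    """Outcomes the existing data can already support."""
    return [a.outcome for a in items if a.gap == "None"]

print("Cannot evaluate:", complete_gaps(assessments))
print("Measurable now:", measurable_now(assessments))
```

Filtering on the gap field is the point of the exercise: "complete gap" outcomes need a collection change or an honest caveat, while "no gap" outcomes can go straight into the evaluation.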
Why it matters
Not every gap can be closed. Some grant programs are too far along to change data collection meaningfully. Some gaps are too expensive to fix. Some outcomes were never going to be measurable.

A gap analysis gives you clarity. It prevents the unpleasant surprise of an evaluation that can’t answer the questions it was commissioned to answer. It lets you set realistic expectations with stakeholders. And it identifies where modest changes now could make a material difference to what you can demonstrate later.

Better to know the gaps than to discover them in a final report.

Other Outcomes Architecture & Learning Frameworks Deliverables
How to Connect Funding Decisions to Grant Program Outcomes → An outcomes architecture that maps how your program’s funding logic connects to the outcomes it claims to achieve. If the connection between what you fund and what you measure doesn’t hold, the program cannot demonstrate value regardless of how well individual projects perform.
What Outcomes Data Should Your Grant Program Be Collecting? → An evaluation framework designed into the program architecture so data collection happens through existing touchpoints: application forms, progress reports, and acquittals. Evaluation is built into the workflow, not created as a separate reporting burden after funding decisions are already made.
