Designing Grant Programs for Outcomes and Systems: A Case Study

What happens when a funder stops counting outputs and starts measuring change?

A Case Study by Geoffrey Clow | Expert Grant Program Advisory

The Dirty Secret

Here’s something nobody in government wants to say out loud: most grant programs are not designed to change anything.

They’re designed to be defensible.

Every year, billions flow out across Australia. State, Commonwealth, local. Community wellbeing. Economic participation. Place-based initiatives. Housing. Health. The phrase “capacity building” appears in approximately ten thousand guidelines documents, and nobody can tell you what it means.

Everyone works hard. Program managers juggle impossible workloads. Assessment panels plough through applications written in a dialect of English that exists nowhere else on earth. Grantees spend weeks on acquittals that prove exactly one thing: money was spent. Not whether anyone’s life got better. Not whether the system shifted. Just that receipts exist.

Then we do it all again next financial year.

This is not a story about bad people. The people in this system are, by and large, trying. The system itself is the problem. It rewards compliance over curiosity. It mistakes activity for impact. It treats “risk management” as a synonym for “trust no one” and wonders why grantees game their reports.

This case study follows one funder that decided to stop pretending.

Meet APIF: A Funder You'll Recognise

The Australian Public Impact Fund doesn’t exist. But you’ve met it.

APIF is a composite: part state department, part Commonwealth program, part council community grants scheme. It uses an off-the-shelf grants management system.


It runs annual rounds. It publishes guidelines that run to 50 pages because legal got nervous and policy wanted “flexibility.” It receives PDF acquittals that tick boxes and say nothing.


APIF’s staff are competent and exhausted. Their applicants are skilled at writing applications and increasingly cynical about whether any of it matters. Their ministers want good news stories. Their finance teams want clean audits. Nobody is asking the only question that should matter:

What actually changed?

Until, one day, someone did.

The Brief: Three Non-Negotiables

APIF’s leadership, to their credit, got uncomfortable. They’d read enough evaluation reports full of weasel words. They’d seen enough “successful” programs that left no trace. They asked for a review with three conditions:

  1. Outcomes become the organising principle. Not a reporting afterthought. Not a box in the acquittal template. The thing the whole program is built around.

  2. Administrative noise goes down. If we’re asking grantees to do more thinking about outcomes, we’re asking them to do less pointless paperwork somewhere else. Net burden must fall.

  3. At least one portfolio genuinely tackles systems change. Not “coordination” as a buzzword. Actual structural shift in how services connect, how data flows, how the system learns.

The constraints were real: multiple portfolios, three tiers of government looking over shoulders, political cycles that reward announcements over results, and a grants platform that was paid for and wasn’t going anywhere.

This wasn’t a blank canvas. It was a renovation while people were still living in the house.

Design Move 1: Outcomes at the Centre (For Real This Time)

Most grant programs claim to be “outcomes-focused.” What they mean is: there’s a field in the acquittal form that says “outcomes.” 

APIF did something different. They started the entire redesign by asking: What does success look like for people and places, not for the grant program?

This sounds obvious. It isn’t.

We ran a workshop with the executive team. Smart people. Committed people. We asked them to list the KPIs they reported to the minister. Every single one was an activity measure. Applications received. Grants awarded. Funds disbursed. Acquittals completed. Projects delivered.

Not one measure answered: Did anyone’s life get better?

There was a long silence.

Then we started again.

For each policy domain, we built an outcomes map. Not in a boardroom, but with grantees, community organisations, and the people programs are supposed to serve. We borrowed from Lotterywest’s Impact Guides. We drew on the outcomes-based contracting work happening in pockets across the country. We asked the unsexy question: How would we know if this worked, two years from now?

“Number of workshops delivered” became “proportion of participants in stable employment 12 months later.”

“Services provided” became “reduction in repeat crisis presentations.”

It’s not complicated. But it requires admitting that what you’ve been measuring was never the point.
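An outcomes map of this kind can live as plain data alongside the program guidelines. The sketch below is illustrative only: the domains, indicators, and targets are invented for this example, not drawn from APIF's actual map.

```python
# A minimal, hypothetical outcomes map: each policy domain pairs the old
# activity measure with the outcome indicator that replaced it.
OUTCOMES_MAP = {
    "employment_pathways": {
        "old_activity_measure": "number of workshops delivered",
        "outcome_indicator": "proportion of participants in stable employment 12 months later",
        "target": 0.60,  # hypothetical target proportion
        "review_horizon_months": 24,
    },
    "crisis_support": {
        "old_activity_measure": "services provided",
        "outcome_indicator": "reduction in repeat crisis presentations",
        "target": 0.25,  # hypothetical 25% reduction
        "review_horizon_months": 24,
    },
}

def indicators_for_report(outcomes_map):
    """Return the standardised outcome indicators a short report would use."""
    return [domain["outcome_indicator"] for domain in outcomes_map.values()]

print(indicators_for_report(OUTCOMES_MAP))
```

The point of holding the map as data, rather than prose in a 50-page guideline, is that the same indicators can then drive the application form, the report template, and the dashboard without being retranslated at every stage.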

Design Move 2: A Lifecycle That Learns

Grants programs have a lifecycle: design, application, assessment, contracting, monitoring, reporting, acquittal. In most programs, each stage is a compliance checkpoint. A hurdle to clear. A box to tick.

APIF rebuilt each stage around a different question: What are we learning?


Design: Grant programs now start with a theory of change and testable assumptions. Not “we’ll fund good projects and good things will happen.” Actual hypotheses. “We believe that connecting young people to mentors in their first six months of employment will improve retention. We’ll test this by tracking outcomes across funded and unfunded cohorts.”
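Once the cohorts are actually tracked, that mentoring hypothesis reduces to a simple comparison. A minimal sketch, with invented data: a real test would need matched cohorts and far larger samples.

```python
# Hypothetical retention check: compare a funded (mentored) cohort against an
# unfunded comparison cohort at the six-month review point.

def retention_rate(cohort):
    """Share of participants still employed at the review point."""
    return sum(1 for person in cohort if person["still_employed"]) / len(cohort)

funded = [
    {"id": "F1", "still_employed": True},
    {"id": "F2", "still_employed": True},
    {"id": "F3", "still_employed": False},
    {"id": "F4", "still_employed": True},
]
unfunded = [
    {"id": "U1", "still_employed": True},
    {"id": "U2", "still_employed": False},
    {"id": "U3", "still_employed": False},
    {"id": "U4", "still_employed": True},
]

effect = retention_rate(funded) - retention_rate(unfunded)
print(f"Retention difference: {effect:+.0%}")  # prints "Retention difference: +25%"
```

The arithmetic is trivial; the discipline is in deciding, at design time, that this comparison is the test, so the data is collected from day one rather than reconstructed at acquittal.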


Application: Forms got shorter. The old 20-page narratives about organisational history and governance structures? Gone. The new question: “How will your work contribute to these shared outcomes, and how will we know?”


Assessment: Panels stopped rewarding the best writers and started rewarding the clearest thinkers. Assessment criteria shifted from “shovel-ready” to “learning-ready.” Can this applicant adapt if the data shows something unexpected? Do they have the relationships to know what’s actually happening on the ground?


Contracting: Milestones became flexible. Not “deliver three workshops by March” but “demonstrate progress toward participant employment outcomes, with latitude to adjust method based on what you’re learning.” This required trust. It required program managers to actually know their grantees.


Reporting: The old acquittal was 20 pages of retrofitted justification. The new report is two pages: a handful of standardised outcome indicators, plus a narrative section that answers one question: What did you learn, and what will you do differently?


Here’s the thing about reducing paperwork: you can only do it if you’re clear about what actually matters.

Complexity is often a symptom of confusion. APIF got simpler because they got clearer.

Design Move 3: Systems Change (The Hard Part)

“Systems change” is the grants sector’s favourite buzzword and least-practised discipline.

It’s easy to fund projects. Projects have start dates and end dates. They have budgets and deliverables. They can be evaluated on their own terms. If they fail, someone else’s project might succeed. The portfolio hedges its bets.

Systems change is different. It means funding the spaces between organisations. The relationships. The coordination. The shared infrastructure. It means accepting that no single project will move the dial, and that the funder’s job is to help the system learn, not just to pick winners.

APIF chose one portfolio, regional employment, to pilot genuine systems work.

Instead of funding 15 separate employment programs that would compete for the same cohort of participants and learn nothing from each other, they commissioned a systems analysis first. Where were the gaps? Where was effort duplicated? Where were organisations holding information that others needed?

Then they funded differently. Action-learning networks instead of standalone projects. Shared data infrastructure. Coordination roles that sat between organisations, not within them. Multi-year commitments that gave relationships time to form.


Program managers had to change too. The old job was compliance: check that milestones were met, chase late acquittals, manage risk by managing paper. The new job was coaching: help grantees learn, connect them to each other, surface patterns across the portfolio, bring the hard questions to the executive.


This is not comfortable. It’s slower. It’s harder to explain to a minister who wants a ribbon-cutting. But it’s the only way to address problems that no single organisation can solve alone.

Design Move 4: Culture Eats Guidelines for Breakfast

You can redesign every template, rebuild every process, and still change nothing. Because culture is the water everyone swims in. If the culture says “don’t admit failure,” no reporting template will surface honest learning. If the culture says “grantees are supplicants,” no co-design workshop will build trust.

APIF knew this. So alongside the technical redesign, they ran a parallel culture project.

They called them Impact Labs: informal sessions, modelled on Lotterywest’s Impact Cafés, that brought together program staff, grantees, and community members. No PowerPoints. No hierarchy. Just honest conversation about what was working, what wasn’t, and what everyone was afraid to say.


The first few sessions were awkward. People were used to the performance. Grantees were used to telling funders what funders wanted to hear. Funders were used to pretending they had all the answers.


But over time, something shifted. Staff started admitting they didn’t know if their programs worked. Grantees started sharing failures without fear of losing funding. The conversation moved from “how do we look good” to “how do we get better.”

This isn’t soft stuff. This is the hard infrastructure of learning. Without it, outcomes frameworks become another compliance burden. With it, they become a shared language for improvement.

Technology: The Lie We Tell Ourselves

Here’s a comfortable fiction the sector loves: “We’d track outcomes properly if only we had better systems.”

Nonsense.

APIF didn’t buy a new system. They used the same platform they’d had for years. The same one that had been collecting data nobody looked at, generating reports nobody read, and producing dashboards that measured exactly the wrong things.

The problem was never the technology. The problem was that nobody had decided what actually mattered.

Most grant systems are configured to answer questions like: How many applications did we receive? How quickly did we process them? What percentage of funds were acquitted on time? These are operational metrics. They tell you whether the machine is running. They tell you nothing about whether the machine is pointed in the right direction.

APIF reconfigured their system to answer different questions: Which programs are shifting employment outcomes? Where are place-based collaborations gaining traction? Which grantees are learning and adapting, and which are just surviving? Where should the next dollar go if we actually want impact?


The before and after looks like this:

Monday morning, old system: Program manager opens inbox. Seventeen acquittal reminders to chase. A compliance report showing 94% of milestones met. No idea whether any of it matters. The system is green. The community outcomes are invisible.


Monday morning, new system: Program manager opens dashboard. Three programs showing strong outcome signals, two showing early warning signs, one showing an unexpected pattern worth investigating. A grantee in the struggling cohort has flagged a pivot in their approach. The question isn’t “are we compliant?” but “what are we learning?”
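The triage behind a dashboard like that can be surprisingly simple. A sketch with hypothetical program names, indicator values, and thresholds: the real categorisation would be tuned per portfolio.

```python
# Classify each program by the trend in its standardised outcome indicator
# across recent reporting periods. Thresholds here are illustrative.

def classify(history):
    """history: outcome indicator values over recent reporting periods."""
    if len(history) < 2:
        return "insufficient data"
    trend = history[-1] - history[0]
    if trend > 0.05:
        return "strong outcome signal"
    if trend < -0.05:
        return "early warning"
    return "worth investigating"  # flat or noisy: look closer, don't punish

programs = {
    "youth_mentoring": [0.48, 0.55, 0.61],
    "regional_jobs_hub": [0.52, 0.47, 0.44],
    "skills_bridge": [0.50, 0.58, 0.49],
}

for name, history in programs.items():
    print(name, "->", classify(history))
```

Note what the categories are for: "early warning" triggers a conversation, not a penalty. The moment a falling indicator threatens funding, grantees stop reporting honestly and the dashboard goes blind.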

This isn’t about technology. It’s about what you decide to pay attention to.

The sector has spent decades building increasingly sophisticated systems for tracking the wrong things. We can tell you to the cent how much was spent. We can tell you to the day when acquittals were submitted. We can produce compliance dashboards that glow green while communities stay stuck.

APIF stopped asking “is the money accounted for?” and started asking “is the money working?” The technology followed. It always does, once you’re clear about the question.

Early Signals

We’re not going to pretend APIF solved poverty or fixed housing or transformed regional employment in 18 months. That’s not how systems change works, and anyone who tells you otherwise is selling something.


But within two funding cycles, three things shifted:

Less waste. When you track outcomes, you see which projects aren’t working. APIF stopped reflexively re-funding “zombie” programs that had outlived their usefulness. Funds moved to higher-value work. Not because anyone was punished, but because the data made the case.


Better conversations. Grantees reported something unexpected: relief. The old system forced them to pretend everything was going well. The new system gave them permission to say “this isn’t working, and here’s what we’re trying instead.” Program managers became allies instead of auditors.


Stronger systems. In the regional employment portfolio, the action-learning networks started to stick. Duplication fell. Referral pathways improved. Organisations that had competed for years started sharing data. The system got smarter, not just the individual projects.

Are these results audited and peer-reviewed? No. They’re patterns drawn from experience, consistent with what happens when funders take outcomes and systems seriously. The rigorous evidence base is still being built, across Australia and internationally. APIF is contributing to it, not waiting for it.

This Isn't Innovation. This Is Remembering

There’s a temptation to frame outcomes-and-systems grantmaking as cutting edge. Experimental. Ahead of its time.

It isn’t. It’s how the big wins actually happened.

The Green Revolution, the most consequential grant-funded initiative of the twentieth century, didn’t emerge from thousands of small, disconnected projects with annual acquittals. Rockefeller and Ford foundations invested patiently, over decades, in research infrastructure, technology diffusion, field institutions, and the relationships between them. They funded systems, not just activities. They backed evidence-building, not just service delivery. They accepted that transformation takes time and that learning is the work, not a reporting requirement.

The result was one of the largest reductions in human suffering in history.

Nobody asked Norman Borlaug to submit a PDF acquittal proving he’d delivered the agreed number of wheat varieties by March.


Somewhere along the way, we forgot this. Grantmaking became a compliance exercise. Risk management meant trusting no one. “Accountability” became a synonym for paperwork rather than results. We built elaborate systems to track whether money was spent as promised, and stopped asking whether the spending changed anything.


APIF isn’t doing something new. It’s returning to first principles. It’s asking the question that Rockefeller and Ford asked: what would it take to actually shift the system, and are we willing to fund that way?

The uncomfortable truth is that most funders aren’t. Not because they don’t want to. Because the short political cycles, the annual budget rounds, the audit culture, and the fear of headlines all push toward safe, small, forgettable grants that change nothing and offend no one.

APIF decided to be uncomfortable instead.

Why This Matters (Regardless of Your Tier)

If you’re Commonwealth: Portfolio-level outcome dashboards give you a defensible story for ministers and central agencies. Not “we spent the money” but “here’s what changed.” In an era of tightening budgets and increasing scrutiny, that’s not a nice-to-have.


If you’re State: Outcomes-oriented, systems-aware programs help you coordinate across the silos that fragment effort in housing, health, justice, and education. Place-based work only works if someone is tracking place-based outcomes.


If you’re Local: Councils can simplify processes for community organisations while seeing clearer links between small grants and long-term community wellbeing. Less admin, more impact, better relationships with your community.

The Offer

I’m Geoffrey Clow, founder of Expert Grant Program Advisory.

I work alongside your existing policy and grant administration teams, bringing the outcomes-and-systems lens, co-design methods, and practical templates now proven in leading Australian and international practice.

I don’t replace your people. I help them do work that matters.

If your grant program is ready to move from counting outputs to measuring change, start with a conversation.

Request a consultation →

This case study is a composite drawn from real grant program design work across Australian government and philanthropic funders by Geoffrey Clow. Names and details have been fictionalised. The patterns are real.
