An Open Protocol for Grantmaking: The Case for Shared Design

Why Australia's Major Public Funders Should Co-Design the Infrastructure Layer of Grants

Then Compete on Everything Else

A White Paper by Geoffrey Clow | Founder & Principal Consultant, EGA – Expert Grant Program Advisory

Why Is the Same Organisation Filling Out Three Applications for the Same Work?

I once sat in a meeting where a community health organisation was preparing three grant applications. Same quarter. Same region. Same population. Three different departments.

The CEO had printed the guidelines side by side. She was highlighting the overlaps. There were a lot of overlaps. Organisational capacity. Financial viability. Community need. Risk management approach. Governance structure. Stakeholder engagement plan. Evidence base.

“It’s the same questions,” she said. “Just in a different order, with different word limits, using different definitions of the same things.”

She was right. One department wanted “outcomes” described using a logic model. Another wanted a “theory of change.” The third wanted “key performance indicators aligned to program objectives.”

She was describing the same work three times in three different languages for three departments that had never spoken to each other about how they designed their programs.

Each department believed its approach was unique because its policy objectives were unique.

Each was wrong.

The policy objectives were different. The infrastructure underneath them was not. And that CEO, the one who should have been running services, spent the best part of a fortnight translating identical information into three incompatible formats. Her reward, if successful, would be three different reporting frameworks describing the same activities in three different ways for the rest of the funding period.

This is not an edge case. This is Tuesday.


What Does Grant Program Fragmentation Actually Cost?

Grant program design in Australia is treated as a bespoke exercise. Every new grant program is built from scratch, as if the fundamental mechanics of grantmaking have never been solved before. As if nobody has ever designed an assessment rubric, defined an outcomes framework, or structured a reporting template. Each design team starts with a blank page and a sense of originality that is, frankly, unearned.

The result is a cascade of problems so embedded in the system they have become invisible. Like mould behind the plasterboard. You cannot see it, but you can smell it, and eventually the wall falls down.

 

For applicants and grantees, fragmentation means burden. The Queensland Audit Office found that some councils engage with up to eight different state funding agencies, each with its own processes. Nearly 200 community groups engaged with at least three. One community group was navigating twelve. Twelve. That is not a grants ecosystem. That is an obstacle course with paperwork.

The administrative cost of this fragmentation falls entirely on the organisations least resourced to bear it. Nobody in Treasury is filling out twelve different application forms. The small community organisation with two staff members is. And every hour spent translating the same project into twelve different formats is an hour not spent delivering the services the grants were supposed to fund.

 

For government, fragmentation means blindness. When every grant program defines its outcomes differently, measures impact differently, and reports differently, you cannot see what the grants system is actually achieving. Try asking a simple question: across all Commonwealth programs that fund community mental health, what are we getting for the money? You will find that nobody can answer it. Not because the data does not exist, but because it exists in dozens of incompatible formats that were never designed to talk to each other.

I have watched a senior official try to brief a minister on the cumulative impact of grants in a particular region. The briefing was mostly caveats. Different grant programs, different measures, different reporting periods, different definitions of “success.” The minister asked a reasonable question. The system could not answer it. That is not a technology problem. It is a design problem. And it was designed in.

 

For the public, fragmentation means waste. The Centre for Public Integrity noted that since 2019, every single ANAO performance audit of grant administration found the relevant programs to be flawed. Not most. Not many. Every one. That is not bad luck. That is a systems failure. And systems failures do not get fixed one program at a time.

 

To be clear about what this white paper is not arguing: there is good work being done on operational standardisation within grant programs. Consistent forms. Reliable assessment processes. Better reporting templates. That work matters and it should continue. But it has a ceiling.

A grant program can standardise its own processes beautifully and still be completely incompatible with the grant program next door. Operational improvement within programs is necessary. It is not sufficient. This white paper is about the layer underneath: the shared design logic across programs and across funders that does not currently exist. That is a different problem at a different altitude, and it requires a different conversation.

 

For grant program designers, fragmentation means loneliness. Every new grant program is designed in isolation, without access to the design patterns, outcome definitions, evidence frameworks, or assessment logics that other programs have already worked through. I have been in rooms where someone asks, “Has anyone else done something like this?” and the honest answer is: probably, but we have no way of knowing, because nobody keeps track.

Imagine if every restaurant had to invent cooking from first principles. No shared techniques. No common understanding of heat, seasoning, or mise en place. Every kitchen reinventing the sauté. Every chef figuring out from scratch that onions should be diced before they go in the pan. That is how we design grant programs. And then we wonder why so many of them leave a bad taste.

Why Do Banks Share Payment Rails but Government Funders Build Everything from Scratch?

There is a concept in technology and industry that government grantmaking has not yet discovered: pre-competitive collaboration.

The idea is simple. Competitors in an industry agree to build shared infrastructure together, then go back to competing on the things that actually differentiate them.

Banks agree on payment rails. Telecommunications companies agree on data protocols. Airlines agree on baggage handling standards. Pharmaceutical companies jointly fund basic research into disease mechanisms before each developing their own drugs.

The logic is not complicated. Some problems are not competitive. Payment systems are not where banks differentiate themselves. Baggage handling is not where airlines build their brand. Disease biology is not where pharma companies win market share. These are infrastructure problems. Solving them independently is wasteful, produces worse results, and makes everyone’s job harder for no reason.

Now think about grantmaking.

Imagine if every bank built its own ATM network. Different cards. Different PINs. Different screen layouts. Different withdrawal limits. You would need a different card for every bank, and when you finally found an ATM that accepted yours, you would feel grateful. You might even write a thank-you note to the bank for the privilege of accessing your own money.

That is the current grants system. And the thank-you note is not hypothetical. It is called an acquittal report.

The way a department defines community outcomes is not where it differentiates itself. The data standard it uses to collect applicant information is not a source of competitive advantage. The structure of its assessment rubric is not a policy position. The format of its reporting template is not an ideological statement.

These are infrastructure. They are the rails on which grant programs run. And building them separately, program by program, department by department, is exactly as wasteful as it would be if every bank built its own payment network from scratch. The only difference is that banks eventually worked this out, and government grantmaking has not. Yet.

What Would a Shared Design Layer for Australian Grants Actually Look Like?

What if the major public funders in Australia, Commonwealth, state, and territory, agreed to co-design the infrastructure layer of grantmaking as shared public goods, then competed only on policy priorities, delivery models, and political emphasis on top of that layer?

Not a single system. Not a centralised grants authority. Not a one-size-fits-all template imposed from Finance. God knows we have had enough of those.

Something more like an open protocol. Shared design. A set of shared standards, frameworks, and design components that individual programs can adopt, adapt, and build on without having to reinvent the sauté every time. Not shared delivery. Not shared platforms. Shared design: the outcomes logic, the assessment dimensions, the evidence standards, the data structures, the proportionality settings that sit underneath every grant program and that every department currently invents from scratch.

Think of it as the grantmaking equivalent of the internet protocol stack. TCP/IP does not dictate what websites look like or what content they carry. It provides the shared infrastructure that makes all of it possible. Nobody would build a website by inventing their own internet. But that is essentially what we do every time we design a grant program.

What the shared layer would include:

A common outcomes taxonomy. A structured, hierarchical framework of outcomes that grant programs across all levels of government can reference. Not a rigid template. A shared vocabulary. When three departments fund community wellbeing, they should be able to describe what they mean using the same language, even if their specific targets and priorities differ.

The US federal government’s GREAT Act and the associated data standards work at Grants.gov show what this looks like in practice: standardised data elements across all federal grant programs, designed to reduce burden and enable cross-program analysis. Australia has GrantConnect, which tells you a grant exists. It does not help you understand what it is trying to achieve in terms that connect to anything else. That is the difference between a noticeboard and an operating system.
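As a rough illustration of the idea, here is a minimal sketch of a hierarchical outcomes taxonomy in Python. Every code, label, and category here is an invented placeholder, not a proposed standard; the point is only that branches connect back to one shared tree.

```python
# A hypothetical outcomes taxonomy: hierarchical codes, shared labels.
# All codes and names are illustrative, not a real classification.
TAXONOMY = {
    "CW": {
        "label": "Community wellbeing",
        "children": {
            "CW.1": {"label": "Social connection", "children": {}},
            "CW.2": {"label": "Mental health", "children": {
                "CW.2.1": {"label": "Access to community mental health services",
                           "children": {}},
            }},
        },
    },
    "EC": {
        "label": "Economic participation",
        "children": {
            "EC.1": {"label": "Employment readiness", "children": {}},
        },
    },
}

def resolve(code: str) -> str:
    """Walk the hierarchy and return a human-readable path for a code."""
    parts = code.split(".")
    node = TAXONOMY[parts[0]]
    labels = [node["label"]]
    prefix = parts[0]
    for part in parts[1:]:
        prefix = f"{prefix}.{part}"
        node = node["children"][prefix]
        labels.append(node["label"])
    return " > ".join(labels)

# Three departments funding "community wellbeing" can each reference a
# different branch of the same tree, and the branches still connect:
print(resolve("CW.2.1"))
```

A department's specific targets sit at the leaves; the shared vocabulary is the tree itself.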

 

Standard data schemas for applications and reporting. Agreed formats for collecting the information that every grant program needs: organisational details, financial information, project descriptions, risk assessments, progress updates, financial acquittals.

Not identical forms. Interoperable structures. An organisation that provides its ABN, financial statements, and governance details to one program should not have to re-enter that information for another. The RMG 412 guidance already says officials should apply the “report once, use often” principle. Beautiful words. In practice, the infrastructure to make it happen does not exist, so it remains an aspiration pinned to a noticeboard in a corridor nobody walks down.
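To make "report once, use often" concrete, here is a small sketch of the interoperable-structure idea: one organisational record captured against a shared schema, with each program layering only its own policy-specific questions on top. The field names and sample values are invented for illustration.

```python
# Sketch of a shared applicant data schema. One base record, entered
# once; programs add their own questions. Fields are hypothetical.
from dataclasses import dataclass, asdict

@dataclass
class OrganisationRecord:
    abn: str
    legal_name: str
    governance_type: str        # e.g. "incorporated association"
    latest_financials_year: int

def build_application(record: OrganisationRecord,
                      program_questions: dict) -> dict:
    """Pre-populate the shared fields; the program supplies the rest."""
    return {**asdict(record), **program_questions}

org = OrganisationRecord("12 345 678 901", "Example Community Health",
                         "company limited by guarantee", 2024)

# Two departments, two sets of policy questions, one base record:
app_a = build_application(org, {"target_cohort": "young people 12-25"})
app_b = build_application(org, {"service_region": "Far North Queensland"})
assert app_a["abn"] == app_b["abn"]   # entered once, reused everywhere
```

The real version would live in a published schema, not application code, but the shape of the burden reduction is the same.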

 

A shared assessment dimensions library. A catalogue of assessment dimensions, with validated rubrics and scoring guidance, that program designers can draw from. Merit. Feasibility. Value for money. Organisational capacity. Track record. Community need. Innovation. Equity impact.

These dimensions appear in almost every grant program. They are defined inconsistently, scored differently, and weighted without reference to what works. I have personally assessed applications where “value for money” was defined three different ways in the same grant round because nobody had written it down properly. A shared library would not prescribe which dimensions a program uses. It would ensure that when two programs both assess “value for money,” they mean the same thing and measure it the same way. A radical proposition, apparently.
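A shared dimensions library could be as simple as one definition and one rubric per dimension, referenced by every program that uses it. The sketch below is illustrative: the definitions, anchors, and weights are invented, and a real library would carry validated rubrics rather than these placeholders.

```python
# Hypothetical shared assessment dimensions library: each dimension has
# one definition and one scoring rubric, used by every adopting program.
DIMENSIONS = {
    "value_for_money": {
        "definition": "Expected outcomes relative to total requested funding.",
        "rubric": {  # score -> anchor description
            1: "Costs clearly exceed plausible benefits.",
            3: "Costs broadly proportionate to expected outcomes.",
            5: "Strong outcomes at demonstrably low cost.",
        },
    },
    "organisational_capacity": {
        "definition": "Governance, staffing and systems to deliver as proposed.",
        "rubric": {
            1: "No credible delivery capability demonstrated.",
            3: "Adequate capability with identified gaps.",
            5: "Proven capability delivering comparable programs.",
        },
    },
}

def weighted_score(scores: dict, weights: dict) -> float:
    """Programs choose which dimensions to use and how to weight them;
    the dimensions themselves mean the same thing everywhere."""
    return sum(scores[d] * weights[d] for d in scores)

total = weighted_score(
    {"value_for_money": 4, "organisational_capacity": 3},
    {"value_for_money": 0.6, "organisational_capacity": 0.4},
)
# 4 * 0.6 + 3 * 0.4 = 3.6
```

Weighting stays a program-level choice; only the meaning of each dimension is shared.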

 

Trust-based defaults and proportionality settings. Here is where it gets interesting. Pre-configured settings for different levels of grant size and risk, establishing defaults for reporting frequency, acquittal requirements, evidence thresholds, and monitoring intensity. Small grants get light-touch defaults. Large grants get more intensive ones. Programs can override defaults where policy needs require it, but the override is documented rather than the norm.

Trust-based philanthropy has spent a decade demonstrating that reducing compliance burden does not reduce accountability. It increases transparency and strengthens relationships. The evidence is in. The practice is proven. But in government grantmaking, trust-based approaches are still treated as experimental, even radical, because they are not built into the infrastructure. They are optional add-ons that require a brave program designer to argue for them. Bravery should not be a prerequisite for sensible design. Build the trust into the defaults. Make distrust the thing you have to justify.
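One way to picture "trust in the defaults, distrust must be justified" is a configuration table keyed to grant size, where any departure is recorded. The thresholds and requirement names below are purely illustrative assumptions, not recommended settings.

```python
# Hypothetical proportionality defaults: light-touch settings for small
# grants, escalating with size. Overrides are allowed but always logged.
DEFAULTS = [
    # (grant value ceiling, reporting frequency, acquittal requirement)
    (50_000,    "annual",      "self-certified statement"),
    (500_000,   "six-monthly", "financial acquittal"),
    (5_000_000, "quarterly",   "audited acquittal"),
]

def settings_for(grant_value: int, override: dict = None) -> dict:
    for ceiling, frequency, acquittal in DEFAULTS:
        if grant_value <= ceiling:
            base = {"reporting": frequency, "acquittal": acquittal,
                    "overrides": []}
            break
    else:  # above the largest ceiling: most intensive defaults apply
        base = {"reporting": "quarterly", "acquittal": "audited acquittal",
                "overrides": []}
    if override:
        # Departing from the trust-based default is documented, not assumed.
        base["overrides"].append(override)
        base.update({k: v for k, v in override.items()
                     if k in ("reporting", "acquittal")})
    return base

# A $30,000 grant gets light-touch defaults unless someone writes down why not:
print(settings_for(30_000))
```

The override log is the accountability mechanism: the audit trail shows who chose distrust, and why.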

 

Evidence tiers and impact measurement protocols. A shared framework defining what counts as evidence at different levels of rigour. What constitutes self-reported outcomes versus independently verified results versus rigorous evaluation. Programs focused on innovation can accept lower evidence thresholds. Programs delivering established interventions can require higher ones. But the framework is common, so “evidence-based” means the same thing across the system and not, as it currently does, whatever the program designer decided it meant on the day they wrote the guidelines.
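An evidence-tier framework reduces, at its simplest, to one ordered scale that every program references. The tier names below are illustrative placeholders for whatever the shared framework would actually define.

```python
# Hypothetical shared evidence tiers: one ordered scale, so
# "evidence-based" means the same thing across programs.
from enum import IntEnum

class EvidenceTier(IntEnum):
    SELF_REPORTED = 1           # grantee-reported outcomes
    INDEPENDENTLY_VERIFIED = 2  # third-party-checked data
    EVALUATED = 3               # formal evaluation with comparison group

def meets_threshold(evidence: EvidenceTier,
                    program_minimum: EvidenceTier) -> bool:
    """An innovation program sets a low minimum; a program delivering an
    established intervention sets a high one -- on the same shared scale."""
    return evidence >= program_minimum

assert meets_threshold(EvidenceTier.INDEPENDENTLY_VERIFIED,
                       EvidenceTier.SELF_REPORTED)
assert not meets_threshold(EvidenceTier.SELF_REPORTED,
                           EvidenceTier.EVALUATED)
```

Programs still set their own thresholds; they just stop inventing their own scales.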

 

Learning loops. This is the one that should be obvious but somehow is not. Built-in mechanisms for capturing what programs learn about design, delivery, and impact, and feeding those lessons back into the shared infrastructure.

I once asked a program team whether they had looked at what a similar program in another department had learned from its evaluation. They looked at me like I had suggested they read someone else’s diary. The evaluation existed. It was published. Nobody in their design team had read it. Not because they were lazy. Because the system has no mechanism for connecting what one program learns to what the next program designs. Every kitchen reinventing the sauté, remember. Except this time it is $50 million of public money and the communities being served by the program cannot tell you why this round feels exactly like the last one.

How Do You Build Shared Standards Without Creating a One-Size-Fits-All Disaster?

Building shared grantmaking infrastructure is not just a technical exercise. Get the principles wrong and you end up with something worse than what you started with: a standardised system that nobody uses because it was designed by a committee that has never run a grant round.

 

Adopt, adapt, or explain. Grant programs should be expected to use shared infrastructure by default, adapt it where their needs genuinely diverge, or explain why they need a completely bespoke approach. This is how the UK Government Digital Service’s Service Standard works: a presumption of reuse, not a mandate. The explanation is the accountability mechanism. Most programs will discover, when forced to articulate why they need a bespoke outcomes framework, that they do not actually need one. They just never had a shared one to use.

 

Interoperable, not identical. Programs targeting regional infrastructure, early childhood development, and arts funding will rightly look different. The goal is not uniformity. The goal is that they share enough common structure to reduce burden on applicants, enable cross-program learning, and support system-wide accountability. A children’s hospital and an orthopaedic clinic look different, but they both use the same medical records system. Nobody thinks this is controversial.

 

Designed for the recipient, not just the administrator. The current system is designed around the needs of the administering entity: its risk appetite, its reporting cycles, its accountability requirements. The shared layer should start with the experience of the organisation receiving the grant and work backwards. I know this sounds obvious. I also know that in twenty years of grants work, I have almost never seen it done.

 

Open and contributory. The shared infrastructure should be genuinely open. Any participating funder can propose improvements, contribute validated tools, and share what they have learned. This is how open-source software works: the protocol improves because everyone who uses it contributes to it. The alternative is what we have now: thousands of program designers, each sitting alone, each convinced they are solving a unique problem, each too busy reinventing the wheel to notice the one spinning perfectly well in the next office.

 

Systems-thinking as a default. Every grant program sits inside a system of other programs, other funding sources, other government interventions, and other community assets. The shared layer should make it easy for program designers to see what else exists in the same space, avoid duplication, and design for complementarity. Systems-grantmaking is not a niche methodology. It is the only honest acknowledgement of how the world actually works. Everything else is theatre.

What Happens to Priya's Whiteboard When Three Departments Share One OS?

Let me make this concrete.

Three Australian Government departments. Employment. Health. Community services. Each funds grant programs supporting people experiencing disadvantage. Today, each designs independently.

 

Today:

A community organisation runs employment services, primary health access, and family support. It applies to all three. Three different application formats. Three different outcomes frameworks. Three different reporting regimes.

The organisation’s CEO, let’s call her Priya, spends the first two weeks of every quarter on grant administration. Not service delivery. Administration. She has a whiteboard in her office with three columns, one for each funder, tracking which definitions of “community outcomes” each department uses. She once accidentally submitted Department A’s outcomes framework in Department B’s progress report. Nobody noticed. Make of that what you will.

Priya’s finance officer maintains three separate spreadsheets because the acquittal templates are incompatible. The board receives three different progress reports that describe the same work in three different languages. The chairman, who volunteers his time, has stopped trying to reconcile them. He just signs.

And the three departments? They have no mechanism to see that they are funding the same communities, addressing overlapping needs, or creating gaps between their programs. An evaluator working for Department A recommended more mental health support. Department B was already funding it. Nobody knew.

 

With a shared OS:

All three programs use the common outcomes taxonomy. They reference different branches of it, but the branches connect. Priya’s application data flows through a standard schema: she enters her organisational information once, her financial details once, her governance structure once. Each program adds its own policy-specific questions on top. The shared assessment dimensions, organisational capacity, financial viability, track record, are assessed once and recognised by all three.

Reporting uses the shared evidence framework. Priya reports outcomes against the common taxonomy. Each department filters for its own policy priorities. But the underlying data is the same. An evaluator can now ask: what is the combined impact of these three programs on this community? Are they complementary or duplicating? What would happen if we coordinated them?

Priya’s reporting burden drops by roughly two-thirds. Her finance officer maintains one set of records. Her board receives one coherent report. Her chairman reads it. And Priya gets back two weeks a quarter to do the thing the grants were supposed to fund: delivering services to people who need them.

The departments get something too: the ability to see what they are collectively achieving. For the first time, someone can answer the minister’s question.
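The minister's question becomes answerable because the cross-program query is now trivial: group reported results by shared outcome code, across funders. The codes and figures below are invented purely to show the shape of that query.

```python
# Sketch: once outcomes share one taxonomy, cross-program aggregation is
# a simple group-by. Programs, codes and numbers are invented examples.
from collections import defaultdict

reports = [
    {"program": "Employment", "outcome_code": "EC.1", "people_assisted": 120},
    {"program": "Health",     "outcome_code": "CW.2", "people_assisted": 80},
    {"program": "Community",  "outcome_code": "CW.2", "people_assisted": 45},
]

by_outcome = defaultdict(int)
for r in reports:
    by_outcome[r["outcome_code"]] += r["people_assisted"]

# Two departments funding the same outcome in the same region become
# visible in one query, instead of two incompatible spreadsheets:
print(dict(by_outcome))  # {'EC.1': 120, 'CW.2': 125}
```

Today, producing that one dictionary takes an evaluator weeks of manual reconciliation, if it can be done at all.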

Outcomes Funds and Pay-for-Results: A Module, Not a Religion

One of the advantages of shared infrastructure is that it makes more sophisticated funding models possible without requiring every program to adopt them.

Outcomes funds and pay-for-results contracts require precisely the kind of infrastructure that most grant programs lack: clear outcome definitions, agreed measurement protocols, validated evidence tiers, and transparent pricing models. Building this infrastructure for a single social impact bond is expensive and slow. Most of the cost is not the bond. It is the negotiation over what counts as a result and how to measure it. If that infrastructure already existed as part of the shared layer, a program designer who wanted to incorporate outcomes-based elements could plug in rather than build from scratch.

This does not mean every grant program should use pay-for-results models. Most should not. But here is the thing: the infrastructure that makes them possible, clear outcomes, reliable measurement, transparent standards, is the same infrastructure that makes all grant programs better. You do not build a kitchen to make one dish. You build a kitchen that makes every dish possible.

Building the outcomes infrastructure once, collaboratively, is simply less stupid than building it repeatedly, separately, and inconsistently.

What Happens When AI Meets a Fragmented Grants System?

Artificial intelligence does not change the argument for shared grantmaking infrastructure. It makes the argument urgent.

AI tools are already being used to draft grant applications. They will soon be used to screen them, assess them, and generate reports. Every one of these use cases works better with standardised data. An AI system that screens applications against eligibility criteria needs those criteria to be structured and machine-readable. An AI system that assesses merit needs the assessment dimensions to be clearly defined and consistently applied.
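"Structured and machine-readable" criteria are worth making concrete. Once eligibility rules are data rather than PDF prose, automated screening becomes a simple, traceable check rather than a black box. Every criterion name, rule, and region in this sketch is an invented example.

```python
# Sketch of machine-readable eligibility criteria: named rules evaluated
# per application, so every automated decision is auditable. All rules
# here are hypothetical illustrations.
CRITERIA = [
    ("has_abn",           lambda a: bool(a.get("abn"))),
    ("is_not_for_profit", lambda a: a.get("entity_type") == "not-for-profit"),
    ("in_eligible_region", lambda a: a.get("region") in {"QLD", "NSW"}),
]

def screen(application: dict) -> dict:
    """Return a per-criterion result: a traceable record, not a verdict."""
    return {name: check(application) for name, check in CRITERIA}

result = screen({"abn": "12 345 678 901",
                 "entity_type": "not-for-profit",
                 "region": "VIC"})
# {'has_abn': True, 'is_not_for_profit': True, 'in_eligible_region': False}
```

The screening logic is trivial; what makes it possible is that the criteria exist as structured data in the first place, which is precisely what the shared layer provides.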

Without shared infrastructure, AI will do what technology always does to fragmented systems: amplify the fragmentation. Each department will build or buy its own AI tools, trained on its own idiosyncratic data, producing results that cannot be compared, aggregated, or validated across the system. The dashboards will look fantastic. The underlying mess will be unchanged. That is the grants sector in a nutshell, actually. Beautiful interfaces on top of structural dysfunction.

With shared infrastructure, AI becomes genuinely useful. A common data schema means applicant information can be pre-populated, validated, and cross-referenced. A common outcomes taxonomy means impact can be measured and compared at scale. Standard assessment dimensions mean AI-assisted screening produces consistent, auditable results.

The choice is not whether to adopt AI. That ship has sailed, and it is being captained by a large language model that writes better application prose than most humans. The choice is whether to give it a coherent foundation or let it amplify incoherence at scale. One of these options costs more. The other costs more in ways that are harder to measure and easier to ignore, which, in government, means it will be chosen by default.

How Do You Actually Build This? A Five-Year Path That Starts with One Portfolio

This is not a five-year transformation program with a steering committee, a consulting firm, and a logo. It is a series of practical steps that build on work already underway.

 

Year 1: Start in one portfolio. Choose a portfolio where multiple programs fund overlapping populations. Map what exists: the outcome definitions, assessment dimensions, data requirements, and reporting frameworks currently in use. You will discover, as everyone who does this exercise discovers, that roughly 80% is common and roughly 20% is genuinely unique. Design the first version of the shared layer for that 80%. Test it on one new program and one existing program being refreshed. Measure what changes.

 

Year 2: Validate and extend. Evaluate the first implementations. Did applicant burden drop? Did data quality improve? Did anyone notice? Refine the shared layer based on what the first users learned. Extend to a second portfolio. Begin cross-portfolio mapping to identify where outcomes connect across policy domains. Publish the shared infrastructure as open resources that state and territory governments can adopt if they choose.

 

Year 3: Scale to cross-agency use. Establish a cross-agency governance mechanism for the shared layer. Make it lightweight and contributor-driven, modelled on open-source governance rather than traditional intergovernmental committees. Anyone who has sat through an intergovernmental committee meeting will understand why this distinction matters. Integrate with GrantConnect to enable “apply once, use often” for core organisational data. Develop the trust-based defaults and proportionality settings.

 

Years 4-5: Mature the ecosystem. The shared layer becomes the expected starting point for new program design. Programs that deviate explain why. The evidence framework supports outcomes-based funding models where appropriate. Learning loops produce annual insights about what works across the grants system. Australia has, for the first time, a coherent picture of what its grants investment is achieving. Priya’s whiteboard has been retired.

Why Hasn't This Happened Already?

There are real barriers to this. Departments protect their autonomy. Ministers want visible, branded programs. The machinery of government creates incentives for differentiation, not collaboration. Internal systems, procurement contracts, and existing processes all resist standardisation. And nobody gets promoted for building infrastructure that makes someone else’s program work better. The performance review does not have a box for “contributed to a shared public good that is invisible to my minister.”

These barriers are genuine. They are also the same barriers that existed before every pre-competitive collaboration in every other sector. Banks did not naturally want to share payment infrastructure. Telcos did not naturally want to agree on data standards. The barriers were overcome because the cost of not collaborating eventually became too obvious to ignore.

Grantmaking is approaching that point. The applicant burden is unsustainable. The administrative duplication is wasteful. The inability to measure system-wide impact is indefensible. The ANAO keeps finding the same problems in different programs because they are the same problems, generated by the same fragmented approach to design, repeated across the same system that never learns from its own audits.

And the arrival of AI is about to make the cost of fragmentation dramatically, embarrassingly visible. Because when someone finally builds the dashboard that shows ministers what the grants system is actually achieving, the answer, without shared infrastructure, will be: “We don’t know. The data is incompatible.” That is not a conversation anyone wants to have. But someone will have it, and soon.

Someone will need to facilitate the conversation that comes next. Someone who understands grant program design well enough to know where the common ground is, and experienced enough to know where the genuine differences lie. Someone who can translate between policy teams who think in outcomes, technology teams who think in data, and grants officers who think in processes.

That is not a technology vendor. It is not a Big Four consultancy. It is not an auditor.

It is a design problem. And design problems need designers.

Is This About Better Grant Programs, or a Better System?

There is an entire conversation happening right now about making individual grant programs work better.

Standardising forms. Streamlining assessment. Improving reporting. That conversation is useful and overdue. But it is a conversation about optimising each kitchen separately.

This paper is about something else entirely: shared design. The question of whether every kitchen needs to invent cooking from first principles, or whether there are foundations that should be shared across the whole system.

That is not an operational improvement. It is a structural shift. And it is the shift that makes all the operational improvements actually add up to something.

The upstream question is different. It is not: how do we make this program’s application form better? It is: why does every program build its own application form in the first place?

The answer, when you trace it back, is not that programs need different forms. It is that nobody has built the shared infrastructure that would make common forms possible.

Nobody has defined the common outcomes taxonomy, the standard data schemas, the shared assessment dimensions, the trust-based defaults, or the learning loops that would allow programs to differentiate where they genuinely need to and standardise where they do not.

That infrastructure does not exist yet. But it could. And the first funders who build it together will discover something that pre-competitive collaborators in every other sector have already learned: the shared rails do not constrain innovation. They enable it.

The banks that share payment infrastructure did not become less competitive. They became free to compete on the things that actually matter to their customers.

Grant programs could do the same. Shared design at the foundation level. Competition on policy priorities, delivery models, community engagement, political vision. The things that actually should be different. No more competing on application form layouts, reporting templates, and definitions of “value for money.”

That is not a loss of sovereignty. It is a gain of sanity.
