Policy tells you why.
Guidelines tell people how.
Most grant programs confuse the two.
A Case Study by Geoffrey Clow | Expert Grant Program Advisory
Why Can't Applicants Tell If They're Eligible?
Here’s a fun experiment. Ask anyone who’s ever applied for a grant to describe the guidelines in their own words.
Watch their face.
You’ll see a flicker of something between confusion and low-grade trauma. They’ll mention length. They’ll mention jargon. They’ll mention the bit where they read the same paragraph four times and still couldn’t tell if they were eligible.
Then ask the people who wrote those guidelines whether applicants understood them.
Watch their face.
Same expression. Different trauma.
Grant guidelines are the most important document in any program and the least likely to be read properly.
They sit at the exact point where policy meets delivery, where intent meets interpretation, where a minister’s announcement meets a community group trying to figure out if this is for them.
And most of them are terrible.
Not because the people who write them are incompetent. Because guidelines are asked to do too many jobs at once, written by people who know too much, reviewed by people who add caveats, and published in a format that assumes everyone reads from start to finish.
Nobody reads from start to finish.
The Two Problems Nobody Separates
Most advice about grant guidelines focuses on communication. Use plain English. Keep it short. Put the closing date on the first page. Make sure the phone number works.
That advice is correct. It’s also insufficient.
Because guidelines have two distinct problems, and fixing one doesn’t fix the other.
Problem one: the language.
Guidelines are written in policy speak. They use words like “capacity building” and “strategic alignment” and “demonstrable community benefit” as if everyone agrees what those mean. They hedge. They qualify. They include phrases like “applications will be assessed on their merits” without specifying what merits, assessed how, by whom, against what standard.
The result: applicants guess. Good applicants self-exclude because they’re not sure they fit. Poor-fit applicants apply because the language was vague enough to let them believe they might. Staff spend weeks answering the same questions. Assessment panels argue about what the criteria actually mean.
This is a communication failure. Plain English helps. Structure helps. Testing the document with real humans helps.
Problem two: the design.
But here’s the thing. You can write the clearest, most accessible guidelines in Australia, and if the underlying design is broken, you’ll just communicate the confusion more efficiently.
If the program doesn’t know what outcomes it’s trying to achieve, the guidelines can’t either. If eligibility is fuzzy because nobody wanted to make hard decisions, plain English just makes the fuzziness more readable. If assessment criteria can’t distinguish quality because they were written to avoid conflict, no amount of formatting will fix it.
Most guidelines are a communication layer over a design vacuum.
Meet ADPF: A Funder You'll Recognise
The Australian Direct Participation Fund doesn’t exist. But you’ve met it.
ADPF is a composite: part state department economic participation program, part Commonwealth community grants scheme, part local council quick-response fund. It runs annual rounds. It publishes guidelines that run to 38 pages. It receives hundreds of enquiries every round asking questions that should have been answered in the document.
ADPF’s guidelines have been through seven revisions over five years. Each revision added content. Legal wanted disclaimers. Policy wanted alignment with the new framework. The minister’s office wanted the key messages up front. Governance wanted clearer acquittal requirements. IT wanted instructions for the portal.
Nobody took anything out.
The result is a document that technically contains all the information an applicant needs, in the same way that a phone book technically contains your friend’s address. Good luck finding it.
ADPF’s program team knows the guidelines don’t work. They can see it in the enquiries log. They can see it in the applications that miss the point. They can see it in the assessment panels debating what “demonstrated capacity” actually means.
But every time they try to fix it, they end up adding more words.
The Brief: Two Non-Negotiables
ADPF’s leadership, to their credit, tried something different. They commissioned a guidelines review with two conditions:
- Applicants must understand what the program is for, whether it’s for them, and how to succeed. Not after reading 38 pages. Within the first two.
- The guidelines must resolve ambiguity, not create it. If staff are still interpreting eligibility case-by-case, if panels are still debating what criteria mean, the guidelines have failed.
The constraint was real: the policy settings weren’t changing. The budget wasn’t changing. The assessment framework wasn’t being rebuilt. This was a guidelines fix, not a program redesign.
But here’s what ADPF learned: you can’t fix guidelines without confronting design. The two are inseparable.
Design Move 1: Find Out What You're Actually Funding
Before touching anything, we asked ADPF a simple question: what does a good grant application look like?
Not in theory. In practice. Show us five applications from last round that were exactly what the grant program should be funding.
There was a long pause.
They could show us applications that scored well. They could show us applications that were funded. But “exactly what the program should be funding” turned out to be harder to answer than expected.
We pushed further. What’s the difference between a good application and an adequate one? What separates “funded” from “funded and we’re excited about it”?
More pauses.
This is the design vacuum. ADPF had been assessing applications for years without ever articulating, precisely, what they were looking for. The guidelines couldn’t communicate it because nobody had pinned it down.
So we pinned it down.
Working with program staff and assessment panel members, we built a profile of the ideal application. Not a template. A profile. What problem is it solving? For whom? With what evidence that the approach works? What does success look like in 12 months?
Once that was clear, the guidelines had something to say.
Design Move 2: Eligibility That Decides, Not Defers
ADPF’s eligibility section was three pages long and still didn’t answer the most common questions.
It included phrases like “organisations that can demonstrate alignment with program objectives” and “applicants with sufficient organisational capacity to deliver the proposed project.” These sound like eligibility criteria. They’re not. They’re judgment calls dressed up as rules.
Real eligibility is binary. In or out. You either meet the criterion or you don’t.
“Registered not-for-profit with an ABN” is eligibility. “Sufficient organisational capacity” is an assessment criterion pretending to be eligibility.
When eligibility is fuzzy, three things happen:
- Staff spend hours on the phone explaining what “sufficient capacity” might mean in different scenarios.
- Applicants who aren’t eligible apply anyway, hoping they’ll scrape through.
- Assessment panels inherit decisions that should have been resolved at the gate.
We rebuilt ADPF’s eligibility as a set of binary filters. Twelve questions, yes or no. If you answer no to any of them, you’re not eligible. If you answer yes to all of them, you are.
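For readers who think in systems: the binary-filter logic is simple enough to express in a few lines of code. This is an illustrative sketch only, with hypothetical criteria, not ADPF's actual twelve questions.

```python
# Illustrative sketch: hypothetical criteria, not ADPF's actual list.
# Binary eligibility means every check is a plain yes/no -- no judgment calls.

ELIGIBILITY_CHECKS = [
    "Registered not-for-profit with an ABN",
    "Operating in the program's service region",
    "Incorporated for at least 12 months",
]

def is_eligible(answers: dict[str, bool]) -> bool:
    """Eligible only if every binary filter is answered 'yes'.

    Any unanswered question defaults to 'no': the filter decides,
    it never defers.
    """
    return all(answers.get(check, False) for check in ELIGIBILITY_CHECKS)

# One 'no' anywhere means ineligible -- no weighing, no discretion.
print(is_eligible({check: True for check in ELIGIBILITY_CHECKS}))        # True
print(is_eligible({"Registered not-for-profit with an ABN": True}))      # False
```

The point of the sketch is what it *doesn't* contain: no scores, no weightings, no "sufficient capacity" clause that needs a phone call to interpret.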
The hard decisions, the edge cases, the “what about…” scenarios, were resolved in the design phase. We wrote a decision log explaining how each edge case would be treated and why. That log lives behind the guidelines, not in them.
The eligibility section went from three pages to one. Enquiries about eligibility dropped by half.
Design Move 3: Criteria That Distinguish
ADPF’s assessment criteria were a classic of the genre:
- Alignment with program objectives (20%)
- Project design and feasibility (25%)
- Organisational capacity (20%)
- Value for money (20%)
- Community benefit (15%)
Five criteria that could apply to literally any grant program in Australia. They sound reasonable. They’re almost useless.
What does “alignment with program objectives” look like in a strong application versus an adequate one? What’s the difference between a 3 and a 4 on “value for money”? If two panel members disagree, what’s the tiebreaker?
ADPF’s panels had developed informal norms over time. Experienced assessors knew what “good” looked like. New assessors didn’t. The guidelines offered no help.
We rebuilt the criteria around the ideal application profile. Instead of “alignment with program objectives,” the criterion became: “The application identifies a specific problem affecting a defined cohort, with evidence that the problem exists and that the proposed approach has a reasonable prospect of addressing it.”
Instead of a percentage weighting and a vague descriptor, each criterion got a rubric. A 1 looks like this. A 3 looks like this. A 5 looks like this. If you’re torn between two scores, here’s how to decide.
The rubrics went to assessors, not applicants. But the criteria themselves, rewritten in plain language, went into the guidelines. For the first time, applicants could see what the panel was actually looking for.
Design Move 4: Structure for Scanning, Not Reading
Nobody reads 38 pages.
People scan. They look for the bits that apply to them. They skip to eligibility. They search for “how much.” They scroll past anything that looks like boilerplate.
The old ADPF guidelines were structured like a legal document: definitions, background, objectives, eligibility, assessment criteria, conditions of funding, acquittal requirements, complaints process, contact details. Logical, if you assume linear reading. Useless for how people actually behave.
We restructured around questions.
Is this grant for me? (Eligibility, in plain terms, on page one.)
What can I apply for? (Funding amounts, project types, what’s in and out of scope.)
How will my application be assessed? (Criteria, in plain language, with examples of strong applications.)
What happens if I’m successful? (Conditions, reporting, acquittal. Moved to the back because you don’t need it until you’ve won.)
How do I apply? (Portal instructions, closing date, contact details.)
Each section answers one question. If you only care about eligibility, you read one page. If you want the detail on assessment, it’s there. If you need the conditions of funding, they’re at the back, not cluttering up the decision about whether to apply.
The document went from 38 pages to 14. Not because we removed information. Because we removed repetition, repositioned detail, and stopped trying to make everyone read everything.
Design Move 5: Write Like a Human
Policy language is a dialect. It has its own grammar, its own vocabulary, its own strange habit of never saying anything directly.
“The program seeks to support initiatives that contribute to enhanced community resilience and wellbeing outcomes through the delivery of targeted interventions aligned with government priorities.”
Translation: “We fund projects that help communities.”
Policy language survives in guidelines because it’s safe. It’s hard to argue with. It covers multiple interpretations. It can be defended in estimates.
It’s also the reason applicants email you asking what you actually mean.
We rewrote ADPF’s guidelines in delivery language. Not dumbed down. Clear.
“This program funds community organisations to run projects that help people find and keep jobs. We’re looking for projects that work with people who face real barriers to employment, like long-term unemployment, disability, or caring responsibilities. We want to fund things that have a reasonable chance of working, not just good intentions.”
That paragraph would give some policy teams heart palpitations. It makes commitments. It names things. It could be quoted in a complaint.
It also tells applicants what the program is for.
The test we applied: could someone with no prior knowledge of the program read each sentence and understand it? If the answer was no, we rewrote it.
Early Signals
Let’s be clear: new guidelines didn’t solve poverty. They didn’t transform employment outcomes. They’re guidelines. They tell people how to apply for money. That’s it.
But within one funding round, the phone stopped ringing.
Enquiries dropped by 40%. Not because staff got better at answering them. Because applicants stopped needing to ask. The questions that remained were genuine edge cases, the weird stuff, not “can you please explain what you mean by ‘demonstrated capacity’?” repeated forty times a week.
The applications got better. Not because applicants got smarter. Because the form stopped rewarding waffle and started rewarding answers. The proportion of clearly ineligible applications dropped. The proportion that actually addressed the criteria rose. Panels spent less time on applications that should never have made it to the room.
And the panels themselves? Faster. Less arguing about what criteria meant. More time on the actual question: is this worth funding? Moderation sessions that used to run three hours finished in ninety minutes. Feedback letters were easier to write because there was something real to say.
Is this peer-reviewed? No. But it’s what happens when guidelines are designed instead of inherited. Every time.
The Uncomfortable Truth
Here’s the thing nobody wants to admit in the planning meeting.
Most guidelines aren’t designed. They’re inherited.
Someone wrote a version years ago. Someone else added a section when legal got nervous. Policy updated it when the framework changed. Each round, someone tweaks the dates, updates the funding amounts, and hits publish. Nobody asks whether the document actually works. Nobody tests it with real applicants. Nobody checks whether the criteria can distinguish quality.
And then, when things go wrong, when panels can’t agree, when complaints come in, when the minister asks why that project got funded, people look at the guidelines expecting answers. But the guidelines can’t help. Because they were never designed to. They were accumulated. Layer by layer, year by year, caveat by caveat, until the thing is forty pages long and says nothing clearly.
ADPF decided to stop. Not because they had extra budget or a special mandate. Because someone got sick of answering the same questions every round and finally asked: is there a better way?
There is. It just requires someone to admit the current guidelines don’t work, which is harder than it sounds when your name is on the last revision.
What This Means For You
Here’s a quick test.
If your guidelines are over 20 pages, something is wrong. Nobody is reading all of that. They’re searching for the bit that applies to them and guessing the rest.
If staff spend more than an hour a day answering enquiries during the application period, something is wrong. The document isn’t doing its job. Your team is doing the document’s job, one phone call at a time.
If your assessment panels argue about what criteria mean instead of how to apply them, something is wrong. The criteria are decoration. The real assessment framework lives in the heads of your most experienced panellists, and when they leave, it walks out the door with them.
If you can’t describe, in two sentences, what a great application looks like, something is wrong. And if you can’t, your guidelines definitely can’t, which means applicants are guessing. Some of them guess well. Most don’t.
These aren’t communication problems you can fix with plain English and better formatting. They’re design failures. The only fix is to go back to the foundations and ask what you’re actually trying to fund, who should be applying, and how you’ll know quality when you see it.
That’s uncomfortable. It’s also the only thing that works.
The Offer
I’m Geoffrey Clow, founder of Expert Grant Program Advisory.
I design guidelines that work: clear enough that applicants understand what you want, precise enough that staff don’t spend weeks interpreting edge cases, structured so assessment panels can actually distinguish quality.
I don’t just rewrite your document in plain English. I find out what’s underneath it, what’s missing, what’s been deferred, what nobody wanted to decide. Then I fix that first.
The guidelines follow.
If your current guidelines are producing enquiries you shouldn’t have to answer, applications you shouldn’t have to assess, and decisions you struggle to defend, let’s talk.
This case study is a composite drawn from real guidelines development work across Australian government and philanthropic funders. Names and details have been fictionalised. The patterns are real.
