The question this answers
How do we make sure the right grant applications get funded?
What the problem looks like without defined weightings and decision rules
Your guidelines list five assessment criteria. Assessors score each one. The highest scores get funded.
But no one defined what each criterion really means. One assessor thinks “community benefit” means number of people reached. Another thinks it means depth of impact. A third thinks it means targeting disadvantaged groups.
And when two applications tie on total score, there’s no rule for which one wins. Someone makes a call. It’s not documented. Six months later, the unsuccessful applicant asks why they missed out, and no one can explain it.
The criteria existed. The framework didn’t.
What the deliverable actually is
A complete assessment framework that includes:
- Criteria definitions: What each criterion means, specifically and unambiguously
- Weightings: How much each criterion contributes to the overall score, with rationale
- Decision thresholds: Minimum scores required to be fundable (e.g., must score at least 3/5 on every criterion)
- Tie-break logic: How to decide between applications with equal scores
- Edge case guidance: How to handle common difficult situations (incomplete information, innovative but risky projects, first-time applicants)
Delivered as a document ready to brief assessors, include in guidelines (where appropriate), and hand to audit or governance.
What good looks like vs what bad looks like
Bad: “Applications will be assessed against the following criteria: community benefit, organisational capacity, value for money, project design, and sustainability.”
This tells assessors what to score. It doesn’t tell them how to score it, how to weight it, or what to do when it’s ambiguous.
Good:
| Criterion | Definition | Weight | Threshold | Tie-break priority |
|---|---|---|---|---|
| Community benefit | The extent to which the project addresses a demonstrated need in the target community, with evidence of who will benefit and how | 30% | Minimum 3/5 | 1st |
| Project design | The clarity, feasibility, and logic of the project plan, including realistic timelines and identified risks | 25% | Minimum 3/5 | 2nd |
| Organisational capacity | Evidence the applicant can deliver, including relevant experience, staffing, and governance | 20% | Minimum 3/5 | 3rd |
| Value for money | The relationship between the funding requested and the outcomes expected, including in-kind contributions and leverage | 15% | No minimum | 4th |
| Sustainability | The extent to which benefits will continue beyond the funding period | 10% | No minimum | 5th |
Tie-break rule: Where applications have equal total scores, rank by community benefit score. If still tied, rank by project design score. If still tied, prefer first-time applicants.
Now every assessor is working from the same framework, and every decision can be explained.
Why it matters
Assessment is where fairness lives or dies. If criteria are vague, assessors fill the gaps with their own interpretation. Different assessors score differently. Similar applications get different outcomes.
A robust criteria framework creates consistency. It makes scoring defensible. And it protects the people making decisions by giving them a structure to follow rather than forcing them to improvise.
Other Assessment Design Deliverables
Do Your Assessment Scores Mean What You Think They Mean? → A scoring architecture where each level is defined by an evidence threshold, not an adjective. Assessors match evidence to descriptors. There is no interpretation step. Score variation becomes a design failure to fix, not a moderation problem to manage.
Are Your Panel Processes Protecting the Program Or Exposing It? → A decision architecture for panels. Who decides what, on what basis, with what constraints, and what gets recorded. Designed so the process produces defensible outcomes by structure, not by relying on experienced panellists to compensate for missing design.