Services

I advise on and design grant programs that can answer the question: 

 

What actually changed?

My work sits across three tiers:

 

Tier 1 is the core offer: outcomes-first grant program design.

Tier 2 is the set of specialist design disciplines that deliver it, from guidelines and eligibility through to assessment, application design, and outcomes architecture.

Tier 3 activates when money is large, scrutiny is public, or failure has consequences.

You can engage at any level. Most clients start with a specific problem in Tier 2 and discover the upstream design work that would have prevented it.

Founder & Principal Consultant, Geoffrey Clow

EGA – Expert Grant Program Advisory

Tier 1: What I Design

Outcomes-First Grant Programs

This is grant program design:

 

  • Outcomes before activities. The program starts with what you’re trying to change, not what you’re going to fund.

  • Systems before documents. Design choices are resolved before guidelines are written and announcements lock you in.

  • Learning before compliance. The program is built to adapt and improve, not just survive audit.

Most grant programs are designed to be defensible. The good ones are designed to be effective. I help funders build programs that can be both.

Tier 2: How I Design Them

Outcomes-first grant program design is built from specific design disciplines. Each one addresses a point in the grant lifecycle where poor upstream decisions create downstream problems. The patterns are predictable: confused applicants, inconsistent decisions, indefensible processes, data that cannot demonstrate impact.

These capabilities can be engaged individually or as part of a full program design. Either way, the principle is the same: resolve the design problem before it becomes an operational one.

To see how these design disciplines work together in practice, read the case study: Designing Grant Programs for Outcomes and Systems →

Guidelines Design

Problem: Most grant guidelines are written in policy language that was never translated for the people who actually need to use them.

Risk: Confused applicants flood your enquiries line. Good applicants self-exclude. Poor-fit applications increase. Rejection rates climb. Questions follow about whether the grant program was ever clearly designed.

Solution: I redesign grant guidelines so they communicate clearly: what the program is for, whether it’s for you, and how to respond well. The output is a plain-English rewrite, but the value is in the structural decisions behind it.

Benefit: Reduced noise. Fewer incomplete applications. Stronger alignment between program intent, applications, and outcomes. 


What you get

“Why don’t grant applicants understand your guidelines?” → Guidelines restructured so they communicate program design, not policy language. Applicants understand what the program is for, whether they are a fit, and what evidence they need to provide. Clarity comes from the structural logic behind the guidelines, not from simpler words.

“How do you brief stakeholders on grant guidelines changes?” → A design rationale document that explains the structural decisions behind the guidelines for executives, ministers, audit committees, and internal teams. This is not a summary of what changed. It explains why the program communicates differently and what that solves.

“Are your grant guidelines compliant?” → A compliance review ensuring the redesigned guidelines meet CGRG or relevant state framework requirements. Program design that cannot pass governance review is not usable design.

“Why do grant guidelines get worse each round?” → A design decision record showing the logic behind every structural change, suitable for audit, governance review, or onboarding new program staff. This is not tracked changes on a document. It is a record of why the program communicates the way it does.


Eligibility Design

Problem: Fuzzy eligibility criteria push judgement calls onto staff instead of resolving them up front at the design stage. When rules are vague, eligibility becomes interpretive rather than procedural.

Risk: Inconsistent decisions. Edge cases that cannot be defended. Complaints, audits, and reviews that trace back to criteria that were never precise enough to apply reliably.

Solution: I design eligibility rules to be genuinely binary: in or out. The hard decisions are made at the design stage, before applications open, rather than being pushed onto assessors after the fact.

Benefits: Cleaner gates. Fewer borderline cases. Less staff time spent debating eligibility. Decisions that are consistent, explainable, and defensible under audit, FOI, review, or public scrutiny.

 

What you get

“Why can’t your staff agree on who’s eligible?” → Eligibility architecture designed so routine decisions are determined by structure, not interpretation. Ambiguity is resolved at the design stage. Staff apply rules rather than exercise judgement. When eligibility outcomes are consistent regardless of who applies them, the design is working.

“Why are ineligible applicants still getting through?” → Front-end eligibility logic built into the application pathway so ineligible applicants are filtered before they invest time and before staff need to intervene. The program enforces its own rules rather than relying on staff to catch what the design should have prevented.

“Why do your eligibility rules sound clear but resolve nothing?” → Eligibility rules designed to be genuinely binary, including explicit treatment of edge cases and commonly disputed scenarios. The hard decisions are made during design, documented, and defensible. When a borderline case arrives, the answer already exists.

Application Design

Problem: Most grant application forms reward narrative skill over evidence, and AI is making that worse. The forms collect data that can’t be reused across the lifecycle, leaving you with vague responses and fragmented information.

Risk: You spend more time on weaker applications. You cannot reliably distinguish solid projects from plausible writing. Decisions become harder to defend under audit, complaint, or scrutiny. Funding skews toward grant‑literate applicants rather than genuine capability, and your data cannot support meaningful portfolio analysis or outcome reporting.

Solution: I design grant application forms as decision tools, not questionnaires. Questions are structured to elicit evidence, force specificity, reduce subjectivity, and resist gaming. The form is built to serve the entire lifecycle: eligibility, assessment, contracting, reporting, evaluation, and learning.

Benefits: You get more reliable applications, less staff time spent interpreting vague responses, fairer access for applicants, data that can be reused across systems and programs, and processes that are easier to defend, explain, and audit.


What you get

Are you funding good writers, not good projects? → An intelligently designed grant application form that functions as a decision engine, not a questionnaire. Conditional logic creates evidence pathways scaled to project size and risk. Structured prompts force specificity and internal consistency, making weak proposals and AI-generated responses expose themselves without staff needing to detect them. Assessors compare evidence, not narrative skill.

Why do you keep asking applicants for the same information? → A lifecycle data architecture built into the form so every question maps to eligibility, assessment, contracting, reporting, evaluation, and cross-program analysis. Information is collected once, structured for reuse, and eliminates the duplication most programs never notice until reporting season.

Why do small organisations give up before they finish your application? → Proportionate evidence pathways built through conditional logic so the form automatically scales. Small projects follow a short pathway with minimal evidence requirements. Complex projects provide deeper substantiation including implementation plans, governance, and risk management. The burden of proof matches the funding risk.

What does a grant application design specification actually look like? → A structured blueprint covering question wording, sequencing, conditional logic rules, field types, validation rules, evidence requirements, and data structure. This is the build-ready design that a grants platform (Fluxx, SmartyGrants, Foundant, or similar) can implement directly. You are not receiving a prettier form. You are receiving a decision architecture ready for implementation.

Assessment Design

Problem: Most assessment frameworks evolved rather than being designed. Criteria get inherited from previous rounds, scoring scales go undefined, and panel processes rely on experienced people compensating for missing structure.

Risk: The grant program funds the wrong projects and cannot explain why it funded the right ones. Assessment outcomes reflect assessor judgement rather than program logic. When results are questioned, there is no design to point to.

Solution: I design assessment frameworks backwards from program intent. Criteria, scoring architecture, and panel processes are engineered so the right applications score highest by structure, not by luck or assessor skill.

Benefit: Funding decisions that are correct by design and defensible under scrutiny because the architecture holds up, not just the paperwork.

 

What you get

Does Your Assessment Framework Pick the Right Applications? → Assessment criteria engineered backwards from program intent. Every criterion exists because a funding decision depends on it. Weightings and decision logic are structural, not advisory. The framework makes the decision architecture visible so assessors execute program logic rather than substitute their own.

Do Your Assessment Scores Mean What You Think They Mean? → A scoring architecture where each level is defined by an evidence threshold, not an adjective. Assessors match evidence to descriptors. There is no interpretation step. Score variation becomes a design failure to fix, not a moderation problem to manage.

Are Your Panel Processes Protecting the Program Or Exposing It? → A decision architecture for panels. Who decides what, on what basis, with what constraints, and what gets recorded. Designed so the process produces defensible outcomes by structure, not by relying on experienced panellists to compensate for missing design.

Outcomes Architecture & Learning Frameworks

Problem: Grant programs often cannot demonstrate impact because the data needed was never designed in at the start.

Risk: When budgets tighten or ministers ask whether the program was worthwhile, the answer is incomplete. Programs that cannot show outcomes are easier to cut.

Solution: I design outcomes and evaluation frameworks at the outset, working upstream of grants management platforms and tools so whatever system you use produces data that is meaningful, defensible, and usable.

Benefit: Grant programs that can answer the question ‘what actually changed?’ with evidence, not anecdote: when ministers ask, when auditors look, and when communities want to know their experience mattered.

 

What you get

How to Connect Funding Decisions to Grant Program Outcomes → An outcomes architecture that maps how your program’s funding logic connects to the outcomes it claims to achieve. If the connection between what you fund and what you measure doesn’t hold, the program cannot demonstrate value regardless of how well individual projects perform.

What Outcomes Data Should Your Grant Program Be Collecting? → An evaluation framework designed into the program architecture so data collection happens through existing touchpoints: application forms, progress reports, and acquittals. Evaluation is built into the workflow, not created as a separate reporting burden after funding decisions are already made.

What Can Your Existing Data Actually Say About Outcomes? → A diagnostic assessment of what your current data can and cannot support. Where evidence is missing, where collection is duplicated or misaligned, and which specific changes would materially improve your ability to demonstrate program outcomes. This is the starting point for programs that have been running without evaluation design.

Tier 3: When the Stakes Are High

Assurance & Edge Capabilities

These capabilities activate when money is large, scrutiny is public, or failure has consequences. This is where senior decision-makers lean in.

AI‑Augmented Grantmaking

Problem: AI is being bolted onto grant design, assessment and reporting faster than governance, program architecture and decision frameworks are being redesigned. That creates probity, fairness and data-quality risks at the same time as pressure to “do more with less”.

Risk: If you cannot clearly explain or defend how AI influenced a decision or workflow, accountability does not sit with the system. It sits with you in audits, reviews, parliamentary scrutiny, and public challenge.

Solution: I design AI-augmented grantmaking end-to-end. That means governance, decision architecture and workflows are rebuilt first, and AI is introduced only where it strengthens decision quality, integrity and outcomes, and delivers genuine efficiency gains. The goal is AI that takes mechanical work off the pile entirely, freeing your team for the decisions that require judgement, not AI that adds another review layer.

Benefit: AI that improves productivity and insight, scales with volume, and remains transparent, defensible, auditable and aligned with legislative, policy and equity obligations.

 

What you get

Where can AI be used in grant programs? → An AI governance architecture that defines where AI strengthens decision quality and where it introduces risk. Every AI touchpoint has a defined purpose, boundary, and accountability structure. The program knows exactly what AI does, what it doesn’t, and who is responsible at each point.

Who decides what’s in an AI-assisted grant program? → A decision architecture mapping human and AI roles across the full lifecycle. Where AI handles mechanical work entirely, where it assists human judgement, and where human oversight is non-negotiable. Designed so AI removes work from the pile rather than adding a review layer.

Is your AI use in your grant program audit-ready? → An explainability and audit architecture built into every AI-influenced step. Every decision that AI touches can be reconstructed, explained, and defended under audit, FOI, parliamentary scrutiny, or public challenge. Accountability is designed in, not documented after the fact.

What makes grant program data AI-ready? → Grant program architecture designed to produce the structured, consistent data that AI requires. Guidelines, forms, and reporting rebuilt so AI operates on reliable inputs rather than inheriting the ambiguity and inconsistency of legacy design.

Fraud, Risk & Probity

Problem: Fraud and integrity controls are usually designed after a grant program launches, if they’re designed at all. The gaps only become visible when something goes wrong: an adverse audit finding, a media story, a ministerial brief nobody wants to write.

Risk: Preventable incidents, adverse audit findings, and personal exposure for decision-makers held accountable for frameworks that failed to manage risk.

Solution: I design fraud, risk, and probity controls at the architecture stage, before vulnerabilities become incidents.

Benefit: Controls that prevent problems rather than documenting them after the damage is done.

 

What you get

Do you actually understand your grant program’s fraud risks? → Fraud and corruption risk architecture designed around how your program actually operates. Vulnerabilities are identified at the design stage, with risk treatments built into program structure rather than layered on as compliance documentation.

Are your grant program integrity controls mapped to real risks? → A control architecture where every identified risk has a defined control, a named owner, and a monitoring mechanism. Gaps are visible by design. Accountability is structural, not assumed.

Would your grant decisions stand up to a probity complaint? → Probity architecture built into panel and decision-making processes. Conflict management, confidentiality, and conduct requirements are designed into how decisions are made, not issued as guidance that people are expected to read and follow independently.

What if your grant program could tell you what actually changed?

If you’re standing up a new grant program, rerunning a legacy round, or feeling uneasy about whether a design will hold up under scrutiny, talk to me early. That’s when this work has the most impact.