What happens when a funder asks what AI is actually for, before turning it on?
A Case Study by Geoffrey Clow | Expert Grant Program Advisory
The Pressure and the Panic
Every grantmaker in Australia is having the same conversation right now.
It starts in a leadership meeting. Someone mentions AI. Someone else mentions the efficiency dividend. A third person mentions that other departments are “already using it.” Nobody is quite sure what “using it” means, but nobody wants to be left behind.
Then the questions start.
Can we use AI to assess applications? Can we use it to write feedback letters? Can we use it to check acquittals? Can we use it to detect fraud? Can we use it to do more with less, because we’ve been told to do more with less, and there’s no more less left to do more with?
The answers, from vendors and consultants and breathless LinkedIn posts, are always yes. Yes, AI can do that. Yes, AI can do this. Yes, AI will transform your grant program, reduce your costs, improve your decisions, and possibly make you a coffee.
What nobody asks, until it’s too late: should it?
What Happens When You Automate a Mess?
Here’s the uncomfortable truth about AI in grantmaking.
Most grant programs are not ready for it. Not because the technology isn’t capable. Because the programs themselves don’t know what they’re doing.
AI is very good at automating processes. It’s very good at finding patterns. It’s very good at doing things faster.
But “faster” is not the same as “better.” And if your process is broken, AI will break it faster.
Consider: if your eligibility criteria are fuzzy, AI will make fuzzy decisions at scale. If your assessment criteria can’t distinguish quality, AI will automate indistinction. If your guidelines are vague, AI will generate vague feedback letters with impressive efficiency.
The fantasy is that AI will fix your program. The reality is that AI will amplify whatever your program already is.
If your program is well-designed, clear about what it’s trying to achieve, with sharp eligibility, meaningful criteria, and genuine outcome measures, AI can help. It can take mechanical work off the pile. It can surface patterns humans would miss. It can free your team for the decisions that actually require judgment.
If your program is a compliance machine running on inherited templates and unexamined assumptions, AI will make it a faster compliance machine. Congratulations. You’ve automated mediocrity.
Meet AIFG: A Funder You’ll Recognise
The Australian Innovation and Futures Grant doesn’t exist. But you’ve met it.
AIFG is a composite: part Commonwealth innovation fund, part state economic development program, part council small business grants scheme. It processes thousands of applications per year. It has a small team. It has been told, repeatedly, to do more with less.
AIFG’s leadership is under pressure. The minister wants faster turnaround times. Finance wants lower cost-per-assessment. The audit committee wants to know about AI governance. The staff want to know if they’re about to be replaced.
Everyone has read the same articles about AI transforming government. Everyone has seen the same vendor demonstrations. Everyone is nervous about being left behind, and equally nervous about being the department that ends up in the newspaper.
AIFG’s executive director asked a different question: what is AI actually for, in our context, and how do we use it without creating a mess we can’t explain?
The Brief: Three Non-Negotiables
AIFG commissioned an AI-augmented grantmaking review with three conditions:
- Governance first. Any AI use must be explainable, auditable, and defensible. If we can’t explain how a decision was influenced by AI, we can’t use AI for that decision.
- Genuine efficiency, not theatre. AI must take work off the pile entirely, not add a review layer to everything. If humans still have to check every AI output, we haven’t saved anything.
- No harm to applicants. AI must not disadvantage applicants based on factors that have nothing to do with merit. If we can’t demonstrate fairness, we don’t proceed.
The constraint was real: AIFG wasn’t going to stop using AI because of risk. The pressure was too great. But they weren’t going to bolt it on blindly either.
They wanted a design, not a demo.
Design Move 1: Map the Decisions First
Before talking about AI, we mapped AIFG’s decision architecture.
Every grant program is a series of decisions. Some are mechanical: does this applicant have an ABN? Did they submit before the deadline? Is the requested amount within the funding range?
Some are judgment calls: is this project likely to achieve its stated outcomes? Does this organisation have the capacity to deliver? Is this value for money compared to other applications?
Some are hybrid: is this applicant eligible under criterion 4(b), which sounds binary but requires interpretation?
AI is good at mechanical decisions. It’s dangerous for judgment calls. It’s treacherous for hybrids that look mechanical but aren’t.
AIFG had never mapped this before. They had processes, but they’d never asked: which of these decisions are genuinely rule-based, and which require human judgment?
We built a decision matrix. Every decision point in the lifecycle, from application receipt to acquittal sign-off, categorised by type:
- Mechanical: Can be automated with high confidence. Human review only on exception.
- Assisted: AI provides input, human decides. The AI output is one factor, not the answer.
- Human-only: Too consequential, too contextual, or too risky for AI involvement.
This matrix became the foundation for everything that followed. It told AIFG where AI could help and where it would create problems.
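A matrix like this can be made operational so that nothing gets automated by accident. The sketch below is illustrative, not AIFG's actual matrix: the decision names and categories are invented for the example, and the key design choice is that any unmapped decision defaults to human-only.

```python
from enum import Enum

class DecisionType(Enum):
    MECHANICAL = "mechanical"   # automate; human review on exception only
    ASSISTED = "assisted"       # AI provides input, a human decides
    HUMAN_ONLY = "human_only"   # no AI involvement

# Illustrative decision matrix for a grant lifecycle (names are invented).
DECISION_MATRIX = {
    "abn_check": DecisionType.MECHANICAL,
    "deadline_check": DecisionType.MECHANICAL,
    "amount_in_range": DecisionType.MECHANICAL,
    "eligibility_4b": DecisionType.ASSISTED,    # looks binary, needs interpretation
    "merit_assessment": DecisionType.ASSISTED,
    "funding_decision": DecisionType.HUMAN_ONLY,
}

def may_automate(decision: str) -> bool:
    """True only for decisions explicitly classed as mechanical.

    Unknown decisions fall back to HUMAN_ONLY: the safe default is
    a human, not the machine.
    """
    return DECISION_MATRIX.get(decision, DecisionType.HUMAN_ONLY) is DecisionType.MECHANICAL
```

The fail-safe default matters more than the data structure: a new decision point added to the lifecycle is human-only until someone deliberately classifies it otherwise.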
Design Move 2: Automate the Mechanical, Entirely
The vendors will sell you AI that processes applications faster. That’s fine, as far as it goes. But speed isn’t the problem most funders actually have.
The problem is that assessors spend hours on extraction before they can spend minutes on judgment. They read the same application three times to find the budget rationale. They flip between attachments trying to reconcile what the applicant said in section 3 with what they said in section 7. They do this hundreds of times per round, and by the end they’re exhausted and pattern-matching rather than thinking.
AIFG deployed AI differently. Not to decide faster, but to see better.
Summarisation that actually helps. AI reads each application and produces a structured summary: what problem is the applicant trying to solve, what’s their proposed approach, what evidence have they provided, what are the stated outcomes, what’s the budget breakdown. Assessors get this summary alongside the original application. They’re not starting from scratch. They’re starting from a map.
Inconsistency detection. AI flags applications where the narrative doesn’t match the budget, where section 3 contradicts section 7, where the letters of support don’t align with the stated partnerships. These aren’t disqualifying flags. They’re “look closer here” flags. The assessor still decides what it means.
Pattern recognition across the portfolio. AI identifies applications that look structurally similar to previously successful projects, and applications that are outliers worth a closer look. Not to prejudge them, but to help assessors calibrate: is this a familiar type of project, or something genuinely new?
Value-for-money signals. AI compares proposed costs against similar funded projects from previous rounds. Not to generate a score, but to surface questions: this project is proposing to do X for half what similar projects cost. Is that efficiency or under-scoping? This one is twice the typical cost. What’s driving that?
None of this replaces judgment. All of it supports it. Assessors still read. Assessors still think. Assessors still decide. But they’re deciding with better information, and they’re not exhausted from extraction by the time they get to the hard calls.
The key principle: AI should make humans smarter, not redundant. If your AI implementation means assessors do the same work but faster, you’ve missed the point. If it means assessors do better work because they can see patterns they couldn’t see before, you’re getting somewhere.
Design Move 3: Assist the Judgment, Carefully
Assessment is where AI gets dangerous.
Vendors will tell you AI can assess applications. They’ll show you demos where AI scores applications against criteria, generates assessment reports, ranks projects by predicted impact.
What they won’t tell you: that’s not how accountability works in government.
When a funding decision is challenged, through FOI, complaint, or audit, someone has to be able to explain why this application was funded and that one wasn’t. The explanation can’t be “the AI said so.” The explanation has to be grounded in criteria, evidence, and human judgment.
AI can assist that judgment. It can’t replace it.
Here’s how we designed it for AIFG:
AI reads first. Every application goes through the same structured extraction described above: the problem, the proposed approach, the evidence provided, the stated outcomes.
This extraction goes to human assessors as a structured summary. Not a score. Not a recommendation. A summary.
Humans assess. Assessors read the AI summary. They read the original application. They apply the criteria. They make the judgment.
AI supports consistency. After assessors score, AI flags outliers: applications where different assessors gave very different scores, applications where the score seems inconsistent with the stated reasoning. These go to moderation.
AI never decides. At no point does AI determine whether an application is funded. That’s a human decision, made by humans accountable for it.
AIFG can now process applications faster. Assessors spend less time on extraction and more time on judgment. But every decision is still defensible, because every decision is still made by a human.
Design Move 4: Govern the Black Box
AI systems are opaque. Even the engineers who build them often can’t explain exactly why they produce a particular output.
That’s a problem for public accountability.
AIFG needed to know: what is the AI system doing? What data is it using? What assumptions is it making? Where might it go wrong?
We built a governance layer with three components:
Decision logs. Every AI-influenced step is logged. What input went in, what output came out, what human decision followed. If someone asks “why was this application flagged for review?”, there’s a record.
Bias testing. Before deployment, and periodically after, the AI outputs are tested for patterns. Are applications from certain postcodes being treated differently? Are certain types of organisations systematically scored higher or lower? Are there patterns that can’t be explained by merit?
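Bias testing can start with simple rate comparisons long before anything sophisticated. The sketch below compares the rate at which applications from each group pass an AI-assisted step against the best-performing group; the 0.8 threshold is the common "four-fifths" screening heuristic, used here as a prompt for investigation, not as proof of bias or a legal standard.

```python
def disparity_ratios(pass_rates_by_group: dict[str, float]) -> dict[str, float]:
    """Ratio of each group's pass rate to the highest group's pass rate."""
    best = max(pass_rates_by_group.values())
    return {group: rate / best for group, rate in pass_rates_by_group.items()}

def groups_to_investigate(pass_rates_by_group: dict[str, float],
                          threshold: float = 0.8) -> list[str]:
    """Groups whose pass rate falls well below the best-off group.

    A flagged group is a question for humans -- there may be a
    merit-based explanation -- but the pattern must be examined.
    """
    ratios = disparity_ratios(pass_rates_by_group)
    return [group for group, ratio in ratios.items() if ratio < threshold]
```

For example, if metro applicants pass an AI eligibility screen at 50% and regional applicants at 35%, the regional ratio is 0.7 and the group gets flagged for review.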
Override protocols. Humans can override AI at any point. When they do, they record why. Those overrides are reviewed quarterly. If humans are constantly overriding the AI in the same way, the AI is wrong, and gets retrained.
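A log entry for this kind of governance layer might capture, at minimum, the AI input, the AI output, the human decision, the named human owner, and any override rationale. The field names below are illustrative, not AIFG's schema; the one rule baked in is that an override without a recorded reason is rejected outright.

```python
import json
from dataclasses import asdict, dataclass, field
from datetime import datetime, timezone

@dataclass
class DecisionLogEntry:
    application_id: str
    step: str                  # e.g. "inconsistency_flag"
    ai_input_ref: str          # pointer to the exact input the model saw
    ai_output: str             # what the model produced
    human_decision: str        # what the accountable human decided
    decided_by: str            # a named human owner, never "the AI"
    overrode_ai: bool = False
    override_reason: str = ""  # mandatory when overrode_ai is True
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat())

    def __post_init__(self) -> None:
        # Enforce the override protocol: no silent overrides.
        if self.overrode_ai and not self.override_reason:
            raise ValueError("An override must record why.")

    def to_json(self) -> str:
        """Serialise for the audit trail."""
        return json.dumps(asdict(self))
```

Entries like this are what turn "why was this application flagged for review?" into a query rather than a reconstruction exercise, and the accumulated override reasons are the raw material for the quarterly review.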
This isn’t foolproof. No AI governance framework is. But it means AIFG can answer the questions that will come from audit, from estimates, from journalists, from applicants who want to know why they weren’t funded.
“We used AI” is not an answer. “Here’s exactly how we used AI, here’s the human decision that followed, and here’s the audit trail” is an answer.
Design Move 5: Draw the Lines That Don’t Move
Some decisions are not appropriate for AI involvement. Full stop.
Not because AI couldn’t technically do them. Because the consequences of getting them wrong are too high, or the context is too complex, or the accountability has to sit with humans without qualification.
AIFG drew the following lines:
AI does not make funding decisions. Ever. AI can inform, summarise, flag, and assist. The decision itself is human.
AI does not assess First Nations applicants without cultural oversight. The risk of bias, of inappropriate criteria, of context that AI can’t understand, is too high. These applications go to assessors with relevant expertise, with AI involvement limited to mechanical tasks only.
AI does not generate applicant-facing communications without human review. Feedback letters, rejection notifications, and acquittal responses are reviewed by humans before sending. The reputational risk of AI-generated tone-deafness is not worth the efficiency gain.
AI does not access personally identifiable information beyond what’s necessary. The system sees what it needs to see for the task. It doesn’t see medical information, personal circumstances, or sensitive disclosures unless that’s specifically required and specifically governed.
These lines are not negotiable. They don’t move when someone asks for faster turnaround. They don’t move when budget gets cut. They’re the commitments AIFG makes to applicants and to the public about how AI will and won’t be used.
When AI Gets It Wrong, Who Answers?
Here’s what the vendors won’t tell you, and what most AI strategies politely avoid:
AI in grantmaking is not primarily a technology decision. It’s an accountability decision.
When something goes wrong, and something will go wrong, who is responsible?
If an application is incorrectly marked ineligible by an automated system, who answers the complaint? If a pattern of bias emerges in AI-assisted assessments, who explains it to the minister? If a funding decision is overturned on review because the AI summary missed critical information, whose name is on the brief?
The answer is never “the AI.” The answer is always a human. Usually the program manager, the SES officer, the executive director.
AI doesn’t absorb accountability. It obscures it.
AIFG’s governance framework exists to make accountability clear again. To ensure that every AI-influenced decision has a human owner, a documented rationale, and a trail that can be followed when questions come.
This is not anti-AI. This is pro-accountability. The two are not in conflict unless you’re trying to use AI to avoid responsibility rather than to do better work.
Early Signals
AIFG is one year into their AI-augmented approach. It’s too early for definitive results. But early signals are encouraging.
Genuine time savings. The mechanical automation freed 300+ hours per round. Not theoretical hours. Actual hours that staff now spend on assessment and applicant support.
Faster processing, same quality. Average time from application to decision dropped by three weeks. Assessment consistency, measured by inter-rater reliability, stayed the same or improved slightly.
No complaints about AI… yet. Applicants don’t know which parts of the process are AI-assisted, because the human decisions are still human. There’s been no increase in complaints, FOI requests, or review applications.
Staff aren’t panicking. The early fear that AI would replace assessors hasn’t materialised. Assessors are doing more interesting work, not less work. The boring mechanical tasks are gone. The judgment calls remain.
Audit is satisfied, for now. The governance framework passed its first internal audit review. Auditors noted that the documentation and accountability trails were clearer than in some fully human processes.
These are early signals, not proof. The real test will come when something goes wrong, as it eventually will. When that happens, AIFG will find out whether their governance framework actually holds.
The Funder’s Choice
Every funder in Australia will use AI in grantmaking within the next five years. The pressure is too great, the technology too capable, the efficiency dividends too tempting.
The question is not whether. It’s how.
Option one: bolt it on. Buy a vendor product. Plug it into your existing processes. Hope the efficiency gains outweigh the governance gaps. Deal with the problems when they surface in estimates or on the front page.
Option two: design it in. Map your decisions first. Know what’s mechanical and what’s judgment. Automate entirely where you can, assist carefully where you must, draw lines that don’t move. Build governance that can answer the questions that will come.
Option one is faster to implement. Option two is faster to defend.
AIFG chose option two. Not because they were technology-averse. Because they understood that AI without governance is a liability, and AI with governance is an asset.
What This Means For You
If you’re being pressured to use AI in your grant program, ask these questions first:
Do you know which decisions are mechanical and which require judgment? If you can’t map this, you can’t govern AI use.
Can you explain how AI will influence each decision? If the answer is “it just helps,” you’re not ready.
Who is accountable when AI gets it wrong? If the answer is unclear, don’t proceed.
Will your AI use survive an FOI request? If you can’t show the trail from AI input to human decision, you’re exposed.
Are you automating something worth automating? If the underlying process is broken, AI will break it faster.
These aren’t technology questions. They’re design questions. The technology is the easy part.
The Offer
I’m Geoffrey Clow, founder of EGA – Expert Grant Program Advisory.
I design AI-augmented grantmaking that’s governed, explainable, and auditable. That means mapping your decisions first, not buying a product first. It means governance frameworks that can answer the hard questions. It means AI that takes mechanical work off the pile entirely, freeing your team for work that actually requires judgment.
I don’t sell AI hype. I don’t promise transformation in 90 days. I help funders use AI in ways they can defend when the questions come.
If you’re under pressure to “do something with AI” and you want to do it properly, let’s talk.
This case study is a composite drawn from real AI governance and grantmaking design work. Names and details have been fictionalised. The patterns, and the risks, are real.
