The question this answers
Where exactly can AI be used, and who gets to decide?
What the problem looks like without an AI decision matrix
AI gets introduced piecemeal. Someone uses it to help draft assessor comments. Someone else uses it to rank applications. A third team uses it to flag duplicates.
None of this is documented. None of it has been approved. No one has thought through the probity implications. And when an applicant lodges a complaint or a journalist asks how decisions were made, no one can explain what AI touched and what it didn’t.
The risk isn’t that AI was used. The risk is that no one can say where, how, or under what authority.
What I deliver
A decision matrix that sets out:
- Every stage of your grant lifecycle where AI could be used (design, triage, assessment, moderation, reporting, acquittal, evaluation)
- For each stage: what’s permitted, what’s prohibited, what requires approval
- The level of risk for each use case (low, medium, high)
- The probity implications (fairness, transparency, equity impacts)
- Whether AI can take the work off the pile entirely, or whether human review is required
- Who has authority to approve each type of use
- What documentation is required when AI is used
It’s not a policy statement. It’s not a set of principles. It’s a practical tool your team can use to answer two questions: “Can we use AI for this?” and “Will it actually save us time, or just add a review layer?”
What good looks like vs what bad looks like
Bad: “AI may be used to support assessment processes where appropriate, subject to human oversight.”
This tells you nothing. It’s not usable. It protects no one.
Good:
| Stage | Use case | Permitted | Risk | Probity impact | Scales? | Approval required | Documentation |
|---|---|---|---|---|---|---|---|
| Triage | AI flags incomplete applications | Yes | Low | Minimal if human reviews flags | No – still requires human check | Program manager | Log of flagged applications |
| Triage | AI rejects incomplete applications (black and white rules) | Yes | Low | Minimal if rules are unambiguous | Yes – no human review needed | Program manager | Rejection log with reasons |
| Assessment | AI generates draft scores | No | High | Significant: removes human judgement from scoring | N/A | N/A | N/A |
| Assessment | AI summarises application for assessor reference | Yes, with conditions | Medium | Moderate: assessor may over-rely on summary | No – assessor still reads full application | Assessment panel chair | Summary attached to record |
| Moderation | AI identifies scoring outliers | Yes | Low | Minimal if used to prompt review, not override | Partial – humans only review flagged items | Moderation lead | Outlier report retained |
| Reporting | AI aggregates outcome data across cohort | Yes | Low | Minimal if data is structured | Yes – no human review needed | Program manager | Report retained |
A matrix like this means anyone on your team can check whether a proposed use is permitted, what the probity implications are, whether it will actually scale, and what they need to document.
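Because the matrix is structured data, a team can encode it directly and query it rather than re-reading a policy document. A minimal sketch in Python, with entries transcribed from the example rows above (the `MatrixEntry` schema, field names, and string values are illustrative assumptions, not a prescribed format):

```python
from dataclasses import dataclass
from typing import Optional

@dataclass(frozen=True)
class MatrixEntry:
    stage: str
    use_case: str
    permitted: str        # "yes", "no", or "conditional"
    risk: str             # "low", "medium", "high"
    scales: bool          # True if AI takes the work off the pile entirely
    approver: str
    documentation: str

# Illustrative entries transcribed from the example matrix
MATRIX = [
    MatrixEntry("triage", "flag incomplete applications", "yes", "low",
                False, "Program manager", "Log of flagged applications"),
    MatrixEntry("triage", "reject incomplete applications", "yes", "low",
                True, "Program manager", "Rejection log with reasons"),
    MatrixEntry("assessment", "generate draft scores", "no", "high",
                False, "N/A", "N/A"),
]

def check_use(stage: str, use_case: str) -> Optional[MatrixEntry]:
    """Answer 'Can we use AI for this?' by looking up the matrix."""
    for entry in MATRIX:
        if entry.stage == stage and entry.use_case == use_case:
            return entry
    return None  # not in the matrix: treat as not yet approved

entry = check_use("assessment", "generate draft scores")
```

A lookup that returns `None` is the useful failure mode: a use case that isn't in the matrix hasn't been assessed, so it defaults to "seek approval" rather than "assume permitted".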
Why it matters
When the Ombudsman asks how AI was used in your grant program, you don’t want to be reconstructing the answer from emails and vendor contracts.
But there’s another question that matters just as much: did AI actually help, or did it just add work?
A decision matrix helps you distinguish between AI that takes mechanical work off the pile entirely and AI that just adds a review layer. For a 1,000-application round, that difference is the difference between coping and drowning.
You want to hand over a document that shows you thought about it before you started, considered the probity implications, designed for genuine efficiency, and kept records.
That’s what a decision matrix gives you. Not innovation theatre. Defensible design that actually scales.
Other AI‑Augmented Grantmaking Deliverables
Who decides what’s in an AI-assisted grant program? → A decision architecture mapping human and AI roles across the full lifecycle. Where AI handles mechanical work entirely, where it assists human judgement, and where human oversight is non-negotiable. Designed so AI removes work from the pile rather than adding a review layer.
Is your AI use in your grant program audit-ready? → An explainability and audit architecture built into every AI-influenced step. Every decision that AI touches can be reconstructed, explained, and defended under audit, FOI, parliamentary scrutiny, or public challenge. Accountability is designed in, not documented after the fact.
What makes grant program data AI-ready? → Grant program architecture designed to produce the structured, consistent data that AI requires. Guidelines, forms, and reporting rebuilt so AI operates on reliable inputs rather than inheriting the ambiguity and inconsistency of legacy design.