# Responsible Use of AI
CE Canvas includes AI features that help you plan engagement, draft documents, and analyse community feedback. This guide explains what each feature does, where human judgement is essential, and how your data is protected.
AI in CE Canvas is a support tool, not a substitute for professional judgement or community insight. Always review what it produces before you rely on it.
## What AI does in CE Canvas
| Feature | What it does |
|---|---|
| EVA Chat (Ask EVA) | Answers questions about your project, helps draft canvas sections, refines plan content, and suggests stakeholder approaches — conversationally |
| Draft Canvas | Generates a first draft of all 12 Engagement Canvas sections at once, based on your uploaded resources |
| Regenerate (canvas sections) | Produces a draft for a single canvas section using your resources and other completed sections as context |
| Draft My Document | Drafts all sections of the Engagement Plan using your resources and canvas content |
| Create Report | Generates an Engagement Outcomes Report from your results data and project context |
| AI Content Helper | Suggests improvements to in-app text fields — clearer language, better structure, more inclusive phrasing |
All AI features are optional. Nothing is published or saved without your action.
## Where human judgement is always required
AI can generate plausible-sounding content that is factually wrong, contextually inappropriate, or missing crucial local knowledge. There are specific areas in community engagement where human review is non-negotiable:
**What is genuinely open to community influence.** EVA cannot know which decisions are fixed and which are open. The Negotiables / Non-Negotiables section of the Engagement Plan and the Decision Context section of the canvas must reflect your actual project constraints — not what sounds reasonable in a draft.

**Stakeholder sensitivities and community history.** EVA has no knowledge of historical tensions, prior broken commitments, cultural protocols, or trust issues in your community. The Known Sensitivities canvas section must be completed from your own knowledge.

**The engagement level you can honestly deliver.** Choosing Inform, Consult, Involve, Collaborate, or Empower is a commitment. EVA can suggest a level based on the project type, but only you can confirm whether your team, timeline, and organisational mandate can genuinely deliver it.

**Closing the loop commitments.** The specific promises you make to communities about when and how they’ll hear what happened to their input must come from you. EVA can help structure the language — the commitment itself is human.

**Any content shared publicly or with communities.** Review all AI-generated content before it goes outside your team. Check facts, verify context, and ensure the tone is appropriate for your community.
## Practical review checklist
Before using any AI-generated content:
- Does this accurately reflect our actual project constraints and scope?
- Does it match what I know about this community and its history with our organisation?
- Are the stakeholder groups correct and complete for this specific project?
- Is the engagement level one we can genuinely deliver?
- Have I removed or corrected anything that sounds plausible but is actually wrong?
## How your data is handled
- AI requests in CE Canvas are processed via secure, enterprise-grade APIs
- Your project data is not used to train AI models
- Data is encrypted in transit
- Requests are processed temporarily to generate a response and are not retained by the AI service
- All platform data is hosted in Australia
- AI features follow the same access controls as the rest of CE Canvas — team members only see data from projects they have access to
## Reporting a problem
If AI generates content that is inaccurate, biased, or inappropriate, contact us at support@cecanvas.com. Include the context or prompt you used, what was generated, and why it was a problem. Your feedback directly informs how we improve and safeguard AI features.
## Our principles
CE Canvas’s approach to AI is guided by the Australian Government’s AI Ethics Principles:
- Human-centred — people remain in control; AI assists, it does not decide
- Fair and inclusive — we work to reduce bias and promote equitable engagement outcomes
- Private and secure — data is handled responsibly and kept safe
- Transparent — you always know when and how AI is being used
- Accountable — we take responsibility for our AI systems and welcome feedback