Why bother with an AI policy?
AI tools now draft your newsletters, tag support tickets and design campaign visuals.
Great—until someone pastes a donor list into a public model, or a chatbot gives dangerous advice.
A concise AI policy:
- Protects beneficiaries & data (GDPR, safeguarding, brand).
- Gives staff permission to innovate — no more shadow AI.
- Reassures funders & trustees you have risk in hand.
Need the legal foundations first? See AI, GDPR & Data Protection.
What An AI Policy Is (And Isn't)
It’s not a 40-page technical manual. It’s a living, two-to-five-page document that:
- Names the tools you allow.
- Defines what data can (and cannot) enter them.
- Sets human-review checkpoints.
- Explains training & incident response.
If a new tool passes the same rules, add it to the list. If a breach occurs, the policy shows who signs off on next steps.
Principles Before Pages
Step-By-Step Build
1. Map current AI use
Quick survey: what tools, what data, what outputs? Shadow usage surfaced early is safer than surprise leaks later.
2. Classify data
Public, anonymised, personal, special-category. Anything at personal level or above triggers a DPIA (template link below).
3. Tier your tools
- Tier 1 (Team/Enterprise GPT, Claude 4) ➜ personal data allowed with safeguards.
- Tier 2 (free models) ➜ no personal data.
- Tier 3 (open-source local) ➜ needs on-prem security sign-off.
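The tiering rules above can be expressed as a simple lookup. This is a minimal sketch: the tool names, tier numbers and data classes are illustrative assumptions, not a vetted approved list.

```python
# Illustrative mapping of tool tiers to permitted data classes.
# Tier 3 (local/open-source) allows nothing until security sign-off.
TIER_RULES = {
    1: {"public", "anonymised", "personal"},  # enterprise tools, with safeguards
    2: {"public", "anonymised"},              # free models: no personal data
    3: set(),                                 # on-prem: needs security sign-off first
}

# Hypothetical approved-tool register (tool name -> tier).
APPROVED_TOOLS = {
    "enterprise-gpt": 1,
    "claude-team": 1,
    "free-chatbot": 2,
    "local-llm": 3,
}

def data_allowed(tool: str, data_class: str) -> bool:
    """Return True if this data class may enter the named tool."""
    tier = APPROVED_TOOLS.get(tool)
    if tier is None:
        return False  # unlisted tools are never approved
    return data_class in TIER_RULES[tier]
```

A rule table like this keeps the policy testable: when a new tool is approved, it is added to the register rather than rewritten into prose.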
4. Assign roles
Data Steward (usually the DPO), AI Lead (digital/ops manager), Reviewer (content or service lead).
5. Draft the policy
Use the outline in Section 5.
6. Run a DPIA
For any personal-data workflow.
7. Train staff
15-minute lunch-and-learn beats a 50-slide deck.
8. Launch + log
Every AI output in an “AI Lab” Slack/Teams channel.
9. Quarterly review
What to stop, start and keep.
Policy Outline (copy/paste starter)
1. Purpose & scope
“This policy governs AI tools used by [Your Charity] staff, volunteers and contractors.”
2. Definitions
(AI, personal data, DPIA)
3. Approved tool list
| Tool | Licence tier | Data allowed | Owner |
|------|--------------|--------------|-------|
4. Data handling rules
- No personal data in Tier 2 tools.
- All prompts anonymised unless Tier 1.
5. Human review checkpoints
External copy, visuals, chatbots.
6. DPIA triggers & process
Use the template for any new personal-data use.
7. Incident response
Notify the Data Steward within 24 h; follow the escalation ladder.
8. Training & review cadence
Quarterly “Stop/Start/Keep” meeting; annual trustee sign-off.
Implementation Tips
- Start with a pilot — write policy around your first real use-case so language stays practical.
- Keep it visible — pin in Teams/Slack; print a one-pager for noticeboards.
- Use checklists — staff tick “Tool, Data Type, Reviewer” before hitting submit.
- Celebrate compliance wins — shout out the first team that logs an AI success, along with the policy reference number.
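The "Tool, Data Type, Reviewer" checklist from the tips above can be enforced with a few lines of code, for example as a pre-submit check in a form or bot. The field names here are assumptions, not part of any published template.

```python
# Hypothetical pre-submit checklist: every AI task entry must name
# the tool used, the data type involved, and a human reviewer.
REQUIRED_FIELDS = ("tool", "data_type", "reviewer")

def missing_checklist_fields(entry: dict) -> list:
    """Return the checklist fields that are absent or blank.

    An empty result means the entry is ready to submit.
    """
    return [field for field in REQUIRED_FIELDS if not entry.get(field)]
```

For example, an entry that names a tool and data type but leaves the reviewer blank comes back with `["reviewer"]`, which a Slack/Teams bot could flag before the output is logged.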
Common Pitfalls To Avoid
Next Actions & Resources
- Download the Konekt AI Policy Template → see link above.
- Read the GDPR deep-dive → konekt.group/blog/ai-gdpr
- Pilot a no-personal-data workflow from Blog 5 → konekt.group/blog/ai-experiments
- Add “AI Policy review” to next board agenda (use briefing pack → konekt.group/blog/ai-governance)
Questions? Email [email protected] or DM @KonektInsights.
AI Strategy Services
Our AI Strategy Pack is designed to help your business confidently navigate the AI landscape—identifying clear, actionable opportunities to integrate AI where it matters most. With 77% of businesses now exploring or investing in AI, those with a defined strategy are set to lead (McKinsey, 2023).
This service supports you in identifying impactful use cases, prioritising initiatives, and aligning AI adoption with your business goals. You'll walk away with a tailored AI roadmap, opportunity scorecard, and key recommendations across tools, workflows and governance—empowering your teams to take action.