Action for Happiness AI Policy
1. Purpose of the Policy
This policy outlines our commitment to using artificial intelligence (AI) ethically, safely and transparently - harnessing it to drive innovation and increase effectiveness in support of our mission and charitable aims (enhancing individual and collective well‑being), while protecting the dignity, safety, rights and trust of everyone we engage with.
2. Scope
This policy applies to all trustees, employees, volunteers, contractors, partners and service providers working on behalf of Action for Happiness, across all use cases - including internal operations, content creation, digital services, partnerships and public‑facing tools.
3. Definitions
- AI: computer systems or applications capable of performing tasks normally requiring human intelligence (e.g. machine learning models, generative AI, rule‑based systems).
- Generative AI: AI that produces text, image, audio or video outputs.
- High‑risk AI: systems whose use could significantly affect people, in line with how the term is defined in UK and EU regulatory frameworks.
4. Principles & Ethical Commitments
Building on the UK’s five principles for responsible AI regulation, our approach balances responsibility with innovation:
- Human-centred Oversight – AI supports, but does not replace, human judgement and compassion.
- Fair, Ethical & Inclusive – we aim to avoid bias, harm, or misuse, and will only use AI in ways consistent with our values.
- Transparency & Explainability – we will be clear where AI plays a meaningful role in creating or delivering content.
- Safe & Secure – we will comply with data protection law, safeguarding and sector standards.
- Innovative & Learning-led – staff are encouraged to explore AI tools that could save time, improve quality or spark creativity - provided they follow this policy.
- Public Benefit – we will ensure that the benefits of AI for our charitable objectives outweigh the risks it creates.
5. Governance & Roles
- Board Oversight: Trustees will review and approve all major AI initiatives and will review this policy regularly.
- Designated Lead: Our named AI Lead (currently our Director, Mark Williamson) will oversee adherence to this policy, including any training, system assessments or impact audits which may be required.
- Cross‑team collaboration: staff with expertise in safeguarding, outreach/communications, technology and community building will regularly review AI use and any issues that arise.
6. Use Policy: Permitted & Prohibited Uses
Permitted (with oversight):
- Drafting social posts and public communications (always reviewed by humans).
- Research support (summaries, idea generation, literature scoping).
- Administrative automation, e.g. scheduling and routine communications (non‑sensitive contexts).
- Creating and editing assets, such as videos and audio, for sharing with our community or on public platforms.
- Generating appropriate and/or tailored content to be shared with users in relevant services/programmes, ensuring this is transparent and has been carefully tested.
- Analysing our audience and understanding their behaviour or needs, e.g. user analytics, audience segmentation, targeting for campaigns.
Prohibited without explicit board approval:
- AI that makes significant decisions affecting individual supporters or beneficiaries.
- Use of AI on non-anonymised sensitive personal data, or on special category data shared in a safeguarding context.
- Creating content that could be misused to produce deepfakes, disinformation or AI‑generated likenesses of individuals.
7. Data Protection & Security
- Strict compliance with the UK GDPR and the Data Protection Act 2018.
- Data minimisation: only the data necessary for the intended purpose is used.
- Consent: any personal data is used only with explicit, informed consent.
- Security: organisational IT policies apply to AI tools; maintain secure access, encryption and vendor due diligence.
- Staff must only use Action for Happiness-approved accounts when analysing personal data.
- We seek to prevent large language model (LLM) providers from training new models on sensitive AFH data and special category data.
8. Risk Assessment & Innovation
- New AI tools or use cases will undergo a risk/impact assessment, considering potential bias, errors, safeguarding needs, reputational impact and potential for leaking sensitive content.
- Where AI is used in published material, attribution will be included where relevant.
- Staff are encouraged to share learnings, so we can adopt what works, learn from any issues and avoid pitfalls.
- Staff are encouraged to be flexible, innovative and open to the ways AI can support us in achieving our mission, while remaining aware of the potential risks.
9. Monitoring, Review & Evaluation
- Regular reviews of AI system performance, risks and emerging best practices.
- Annual policy review by senior leadership in consultation with the Board of Trustees.
- Open channels for staff feedback and incident reporting if AI outputs raise issues.
10. Training & Awareness
- AI awareness training for relevant staff and volunteers, where needed for their roles.
- Role‑specific training for people working directly with AI tools.
- Updates/refresher sessions as technology and policy evolve.
11. Integration with Other Policies
This policy complements and aligns with existing AFH policies, including:
- Privacy Policy: https://actionforhappiness.org/privacy
- Safeguarding Policy: https://actionforhappiness.org/safeguarding
- Accountability Principles: https://actionforhappiness.org/accountability
- DEIB Policy: https://actionforhappiness.org/deib
12. Compliance & Breach Management
- Breaches or non‑compliance are taken seriously. Actions may include retraining, suspension of AI tool use or other disciplinary measures.
- Any significant incident or external risk (e.g. reputational harm) must be escalated to Trustees and documented transparently.
13. Transparency & Public Commitment
We commit to making this policy publicly available on our website.
This policy was drafted by AI and edited by a human.
Where feasible, we will note which parts of our public content or services are AI-assisted, including tool usage and human oversight.
This policy reflects current UK charity sector guidance - including the Charity Digital Code of Practice, Charity Excellence AI frameworks and Charity Commission principles.
Last updated: 24 September 2025
