AI Usage Policy
Last updated: June 2025
At Synergy Safeguarding Ltd, we are committed to using Artificial Intelligence (AI) tools in a way that is ethical, transparent, and aligned with our safeguarding principles. This policy outlines how we use AI technologies across our services, communications, and internal operations.
1. Purpose of AI Use
We use AI tools to:
Enhance the accessibility and inclusivity of our materials.
Support content creation, such as drafting training materials, blog posts, and policy templates.
Improve operational efficiency, including scheduling, analytics, and customer support.
We do not use AI for automated decision-making in safeguarding.
2. Ethical Principles
Our use of AI is guided by the following principles:
Transparency: We disclose when AI-generated content is used in our materials.
Human Oversight: All AI-assisted outputs are reviewed and approved by qualified professionals.
Fairness and Inclusion: We actively mitigate bias and ensure AI tools support inclusive learning environments.
Privacy and Security: We do not input personal, sensitive, or confidential data into AI tools unless the data has been fully anonymised and the processing complies with UK GDPR.
3. Third-Party AI Tools
We may use third-party AI platforms (e.g. for content generation or accessibility enhancements). We ensure these tools:
Have clear privacy policies.
Do not retain or misuse the data we share with them.
Are reviewed for compliance with UK data protection laws.
4. Limitations and Accountability
AI is a tool, not a substitute for professional judgment. We do not rely on AI for:
Legal advice.
Safeguarding decisions.
Final content without human validation.
If you believe AI has been used inappropriately in our services or communications, please contact us at hazel@synergysafeguarding.co.uk.
5. Policy Review
This policy is reviewed annually or in response to significant changes in AI regulation or usage. We welcome feedback from clients, partners, and community members.