Purpose of policy
At MDCVS, we recognise the potential of Artificial Intelligence (AI) to enhance our services, improve efficiency, and support our mission. This policy outlines how AI is used within our organisation to ensure it aligns with our values, remains ethical, and benefits our staff, volunteers, and the communities we serve.
This policy applies to all staff, volunteers, and partners using AI tools as part of their roles. It covers AI-driven services such as research assistance, chatbots, automated recommendations, and any other AI applications we adopt. It sets out clear principles for responsible AI use, ensuring accuracy, transparency, data protection, and human oversight.
By adhering to this policy, we ensure AI remains a tool that supports—not replaces—human expertise while upholding our commitment to fairness, privacy, and inclusivity.
1. AI Usage Principles
Our AI-driven services (such as research assistance tools, a chatbot guide, and bespoke resource recommendations) are governed by clear principles. These principles ensure the AI benefits all users and operates ethically. They are informed by sector best practices (e.g. The National Lottery Community Fund’s AI principles):
- Human-Centric and Empowering: AI enhances human expertise; it does not replace it. For example, our AI research tool gathers information swiftly, allowing staff to focus on personalised advice.
- Human Oversight: Significant AI-driven processes involve human review to ensure outputs are appropriate and mission-aligned. However, we advise users that chatbots can make mistakes: although they are hugely useful, anyone using a chatbot should check important information before relying on it.
- Transparency and Accountability: We inform users when AI-generated content is provided, such as our chatbot identifying itself as an AI assistant. We take responsibility for AI outputs and address any issues that arise.
- Inclusivity and Accessibility: Our AI services are designed to be unbiased and accessible, catering to users regardless of digital literacy, background, or ability. We offer alternative support methods, like phone or email assistance, to ensure inclusivity.
- Privacy and Data Security: We protect user privacy by minimising data collection and ensuring no personal data is used without consent. Our AI systems comply with UK data protection laws, including GDPR. Our website states where information is stored, e.g. chat sessions on the user's device.
- Ethical Use and Fairness: AI applications align with our charity’s ethical values. We conduct small-scale trials before full deployment and assess the environmental and societal impacts of AI use.
- Continuous Improvement: We monitor AI usage and outcomes, regularly updating systems and policies to adapt to best practices and regulations. User feedback is actively sought to evolve our approach.
2. AI Risks
We have assessed potential risks that AI could pose in our services and developed strategies to mitigate each risk. The following register identifies key operational, ethical, reputational, and compliance risks specific to our charity’s AI use, along with how we address them:
- Misinformation: AI tools may produce incorrect information.
Mitigation: Use up-to-date content and undertake testing and verification of AI content; staff review AI-generated outputs before a system is released.
- Bias and Fairness: AI systems can inadvertently carry biases.
Mitigation: Use diverse training data (where applicable), conduct bias reviews, and provide mechanisms for users to report biased AI responses. Ensure human review and validation processes are in place.
- Data Privacy and Security: AI applications could pose data privacy risks.
Mitigation: Conduct Data Protection Impact Assessments, exclude personal data from AI training, secure AI systems per IT security policies and minimise data collection. We advise users not to enter personal or confidential information.
- Reputational Damage: AI errors could harm our reputation or that of the organisations we support.
Mitigation: Ensure human accountability, set usage boundaries, and have an incident response plan for AI errors. Advise users accordingly.
- Dependency on AI: Over-reliance on AI could weaken human skills.
Mitigation: Treat AI as a support tool, provide staff training, and have fall-back procedures for AI system failures. Advise users of the same.
- Accessibility and Digital Exclusion: Some users may struggle with AI tools, while others may benefit significantly from them.
Mitigation: Design AI to be user-friendly, adhere to accessibility standards, and offer alternative support methods, including personal coaching.
3. Internal AI Policy
Our internal AI policy guides staff and volunteers on responsible AI use. Staff are required to be familiar with our AI usage principles as outlined in this policy and to ensure they are followed in practice. Staff are made aware that:
- AI should assist, not replace, human decision-making and they remain responsible for verifying AI-generated content before using it in decision-making, reports, or communications. Critical decisions (e.g., legal, financial, or HR-related) require human review.
- Staff must verify AI outputs against reliable sources before acting on them. AI-generated information should not be assumed to be correct without validation. Employees should cross-check facts, figures, and legal/statutory guidance.
- AI should be used as a tool to support accomplishing a task and staff are encouraged to utilise their own critical thinking, independent judgment, knowledge and collaboration.
- Sensitive, confidential, or personal information must not be entered into AI tools, and data protection regulations must be followed at all times.
Project specific guidance around the use of AI will be added to this policy as Appendices as appropriate.
4. Compliance and Data Protection
MDCVS is committed to protecting the privacy and security of our users, staff, and stakeholders. Our use of AI is guided by strict data protection principles to ensure compliance with UK laws, including the UK General Data Protection Regulation (UK GDPR) and the Data Protection Act 2018.
- Minimal Data Collection: We only collect and process the data necessary to provide services. Personal or sensitive data is never entered into AI tools unless explicitly required and with consent.
- User Privacy: AI-generated interactions, such as chatbot conversations, are handled in ways that protect user privacy.
- Transparency: We clearly inform users when AI is being used and explain how it interacts with their data.
- Security Measures: We take appropriate technical and organisational measures to safeguard data.
- Third-Party AI Tools: If external AI services are used, we ensure they meet data protection standards and do not compromise user privacy.
E-portal-Essex approach to AI – APPENDIX ONE
E-portal-Essex utilises Artificial Intelligence (AI) to provide information and guidance to other charities and community groups through an online portal. This framework outlines our principles and approach to using AI on this portal, ensuring a high standard of service in line with the AI usage principles laid out in this policy.
Delivery Framework
Our approach to AI guides staff and volunteers on responsible AI use. Roles and responsibilities for AI oversight are clearly assigned – for example, a project lead must approve any new AI tool deployment, and a data protection officer reviews compliance aspects. Trustees and senior management remain ultimately accountable for AI use; they are kept informed through regular AI updates and training sessions, building a shared understanding of AI at board level.
- Governance and Oversight: The project working group oversees AI initiatives to ensure alignment with our objectives. Roles for AI oversight are assigned. The Project/Operational Lead is responsible for deployments, with trustees and senior management remaining accountable.
- Ethical Use and Accountability: AI systems undergo a risk/ethics review before implementation. Staff are briefed on AI ethics, and static AI-generated content is reviewed and approved by a staff member before publication.
- Transparency and Communication: We maintain an inventory of AI tools and communicate their use to staff and the public. Third-party AI services are vetted, recognising that in a fast-moving industry their terms and conditions can change without notice.
- Staff Training and Guidance: We provide training on effective and safe AI use, including guidelines on data input, quality assurance, appropriate use cases, and monitoring. We discuss ethical AI practice within the project team.
- Review and Continuous Improvement: The AI policy is reviewed quarterly to stay current with best practices and regulations. Employee feedback is incorporated to refine our governance and guidelines, as is sector best practice.
Public AI Statement – APPENDIX TWO
E-portal-Essex
This section provides a public-facing explanation of our AI approach, intended for our website and user communications.
Our Commitment to Responsible AI
We use Artificial Intelligence (AI) in our support portal to help you find information and resources more quickly. We work hard to ensure our use of AI is responsible, transparent, and centred on your needs. AI is a tool to enhance our services, such as instantly answering questions through a chatbot or suggesting tailored funding opportunities, but we always keep people in charge of important decisions. Our commitment is that AI will never replace human support; instead, it will assist our team in providing you with better and faster help.
Transparency and Inclusivity
- Our AI is designed to be fair, inclusive, and beneficial to all users. We will be transparent about when and how AI is involved in our services. We regularly use AI to help draft or improve our content, but the final product is always the responsibility of a human.
- We actively work to prevent any bias or discrimination in AI outputs, and we regularly check the AI’s suggestions to ensure they are accurate and appropriate.
Data Protection and GDPR
- We comply with all UK regulations, ensuring AI does not process sensitive data without consent.
- Users can request human assistance at any time, whether to check or query information or to discuss a complex query.
Continuous Improvement
- We regularly assess AI performance and welcome user feedback to refine and improve our approach.
- The technology, its use, and its regulation are changing rapidly, so we strive to incorporate charity-sector best practice.
- For any AI-related concerns or feedback, please contact the E-portal team at hello@e-portal-essex.co.uk.