What Should Your AI Policy Cover?
One issue we consistently hear from our users and the wider industry community is that implementing a formal AI usage policy is a daunting challenge, but one that increasingly needs to be met, especially as compliance and regulation catch up to the technology.
A strong AI policy provides clarity for employees on how to safely and effectively use generative AI tools in the workplace. It eliminates ambiguity around inappropriate usage and ensures the organization's risks are managed effectively and in line with legal and compliance requirements.
The key is to keep the policy concise, clear, and focused on the most critical risks. The following is a good entry point for starting the discussion at your company:
1. Permitted Use
Clearly define when and how employees can use generative AI tools:
- Usage must be aligned with the employee’s role and job responsibilities.
- Consider requiring line-manager approval to ensure visibility.
- Generative AI tools should only be accessed via approved corporate accounts.
- All outputs generated by AI must be reviewed for accuracy, relevance, and compliance with company standards.
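To make the "approved tools via corporate accounts" rule concrete, here is a minimal sketch of how access might be gated in practice. Everything here is illustrative: the tool names, the domain, and the `is_permitted` helper are hypothetical examples, not a reference to any real product or API.

```python
# Hypothetical sketch: gate generative AI tool access on an approved-tool
# allowlist and a corporate account domain. All names are illustrative.

APPROVED_TOOLS = {"chatgpt-enterprise", "copilot-business"}  # example allowlist
CORPORATE_DOMAIN = "example.com"                             # example domain

def is_permitted(tool: str, account_email: str) -> bool:
    """Allow usage only for an approved tool accessed via a corporate account."""
    on_allowlist = tool in APPROVED_TOOLS
    corporate_account = account_email.lower().endswith("@" + CORPORATE_DOMAIN)
    return on_allowlist and corporate_account

print(is_permitted("chatgpt-enterprise", "alice@example.com"))  # True
print(is_permitted("chatgpt-enterprise", "alice@gmail.com"))    # False: personal account
print(is_permitted("random-chatbot", "alice@example.com"))      # False: unapproved tool
```

In a real deployment this kind of check would typically live in single sign-on or network-access controls rather than application code, but the policy logic is the same: both conditions must hold.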
2. Prohibited Use
Outline situations where generative AI usage is unacceptable:
- Do not use AI tools to share confidential, proprietary, or sensitive company information.
- Avoid using AI for decisions with significant legal, ethical, or operational consequences without thorough human oversight.
- Never upload company data, client information, or any sensitive material unless explicitly approved by authorized personnel.
- Personal accounts for AI systems must not be used for any work-related activities.
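One lightweight way to support the "never upload sensitive material" rule is to screen outgoing prompts before they reach an external tool. The sketch below is an assumption-laden illustration: the patterns and the `screen_prompt` helper are examples only, and a real deployment would rely on the organization's own DLP tooling and data classifiers rather than a few regexes.

```python
import re

# Hypothetical sketch: flag obviously sensitive material in a prompt before it
# is sent to an external AI tool. Patterns below are illustrative examples.
SENSITIVE_PATTERNS = [
    re.compile(r"\bconfidential\b", re.IGNORECASE),       # document markings
    re.compile(r"\b\d{16}\b"),                            # card-number-like digit runs
    re.compile(r"\bclient[-_ ]?id\s*[:=]", re.IGNORECASE),  # client identifiers
]

def screen_prompt(prompt: str) -> bool:
    """Return True if the prompt looks safe to send, False if it should be blocked."""
    return not any(p.search(prompt) for p in SENSITIVE_PATTERNS)

print(screen_prompt("Summarise this public press release."))        # True
print(screen_prompt("CONFIDENTIAL: Q3 revenue figures attached."))  # False
```

Pattern matching like this catches careless mistakes, not determined misuse, which is why the policy still requires explicit approval from authorized personnel for any sensitive data.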
3. Accountability and Monitoring
- Employees must take responsibility for the content they create or use with AI tools.
- Organizations should reserve the right to monitor AI usage to ensure compliance with policies and prevent misuse.
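If you do reserve the right to monitor usage, recording usage *metadata* (who, which tool, for what purpose) is often enough for compliance auditing without retaining prompt content. The sketch below assumes a simple in-memory log and an illustrative `log_ai_usage` helper; the field names are hypothetical.

```python
import json
from datetime import datetime, timezone

# Hypothetical sketch: record AI usage metadata (not prompt content) so that
# activity can be audited against the policy. Field names are illustrative.
audit_log: list[dict] = []

def log_ai_usage(user: str, tool: str, purpose: str) -> dict:
    """Append one metadata-only audit entry and return it."""
    entry = {
        "user": user,
        "tool": tool,
        "purpose": purpose,
        "timestamp": datetime.now(timezone.utc).isoformat(),
    }
    audit_log.append(entry)
    return entry

entry = log_ai_usage("alice@example.com", "chatgpt-enterprise", "draft release notes")
print(json.dumps(entry, indent=2))
```

Logging metadata rather than content also keeps the monitoring itself compliant with data-protection obligations, since no confidential prompt text is stored.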
4. Training and Awareness
Employees should be well-informed about:
- Proper usage of generative AI tools.
- Potential risks and how to mitigate them.
- Ethical and security considerations.
Regular training sessions help ensure staff remain up to date on best practices and evolving risks.
5. Legal Requirements and Standards Compliance
Ensure that AI usage aligns with:
- Applicable laws and regulations in every country (and, where relevant, state) in which the business and its users operate.
- Industry standards, such as ISO/IEC 42001 and the NIST AI Risk Management Framework.
This is a rapidly developing area of law and policy, so it is essential that company legal teams stay abreast of it.
6. Continuous Review
AI technologies evolve rapidly, and so should your policy:
- Conduct periodic reviews to ensure the policy reflects the latest technological and regulatory developments.
- Update guidelines as needed to address new challenges or opportunities.
It is essential that all layers of the business are represented in drafting and reviewing the policy: executive leadership; legal, compliance, security, and risk teams; and the teams building or using AI services.
Feel free to get in touch with us if you need advice on forming a policy tailored to your company's specific risks and needs.