AI ACCEPTABLE USE POLICY
[ORGANIZATION NAME]
Effective Date: [__/__/____]
Version: [____]
TABLE OF CONTENTS
- Purpose and Scope
- Definitions
- Permitted Uses
- Prohibited Uses
- Risk Classification
- Data Protection Requirements
- Procurement and Approval
- Human Oversight and Accountability
- Transparency and Disclosure
- Compliance with Applicable Law
- Violations and Enforcement
- Policy Review
1. PURPOSE AND SCOPE
This AI Acceptable Use Policy ("Policy") establishes the standards, restrictions, and obligations governing the use, procurement, development, and deployment of artificial intelligence ("AI") systems by [ORGANIZATION NAME] ("Organization") and all personnel, contractors, and third-party vendors who access or use AI tools on behalf of the Organization.
Applicability: This Policy applies to all employees, independent contractors, consultants, temporary workers, interns, agents, and third-party service providers who use, procure, develop, or deploy AI systems in connection with Organization business.
2. DEFINITIONS
"AI System" means a machine-based system that, for explicit or implicit objectives, infers from input data how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments, consistent with the definition in the EU AI Act, Regulation (EU) 2024/1689, Art. 3(1).
"High-Risk AI System" means an AI system that makes or materially supports consequential decisions affecting natural persons in areas including employment, credit, housing, education, healthcare, insurance, or law enforcement, as referenced in Colorado SB 24-205, § 6-1-1701(7) and EU AI Act Annex III.
"Algorithmic Discrimination" means any condition in which the use of an AI system results in unlawful differential treatment or disparate impact on the basis of a protected classification, as defined in Colorado SB 24-205, § 6-1-1701(1).
"Deployer" means any person doing business in the applicable jurisdiction that deploys a high-risk AI system.
3. PERMITTED USES
The following uses of AI are permitted subject to compliance with this Policy:
☐ Internal productivity and workflow automation (document summarization, scheduling, data analysis)
☐ Customer service augmentation with required human-in-the-loop review
☐ Marketing content generation with human editorial review and AI disclosure
☐ Code development assistance with security review before deployment
☐ Research and data analysis using properly licensed datasets
☐ Administrative functions that do not involve consequential decision-making
4. PROHIBITED USES
The following uses of AI are strictly prohibited:
☐ Processing protected health information (PHI) through unapproved AI tools in violation of HIPAA, 45 C.F.R. Parts 160 and 164
☐ Making final employment, credit, housing, or insurance decisions without meaningful human review
☐ Real-time remote biometric identification in publicly accessible spaces (prohibited for law enforcement purposes under EU AI Act Art. 5(1)(h))
☐ Social scoring of individuals based on social behavior or personality characteristics (prohibited under EU AI Act Art. 5(1)(c))
☐ Inputting attorney-client privileged communications, trade secrets, or material nonpublic information into external AI tools
☐ Generating deepfakes or synthetic media for deceptive purposes
☐ Circumventing security controls, content filters, or safety guardrails on AI systems
☐ Using AI to engage in unauthorized surveillance of employees or third parties
☐ Deploying AI systems that manipulate persons through subliminal or deceptive techniques (prohibited under EU AI Act Art. 5(1)(a))
5. RISK CLASSIFICATION
All AI systems used by the Organization must be classified according to the following risk tiers:
| Risk Level | Description | Approval Required | Examples |
|---|---|---|---|
| Unacceptable | Prohibited by law or policy | Not permitted | Social scoring, manipulative AI |
| High | Consequential decisions affecting individuals | AI Governance Committee + Legal | Hiring tools, credit scoring, medical diagnosis |
| Limited | Interaction with individuals requiring transparency | Department Head + IT Security | Chatbots, content recommendation |
| Minimal | Low-risk productivity tools | Manager approval | Spell-check, calendar scheduling |
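The risk tiers above can also be encoded in procurement tooling. The following is an illustrative sketch only: the tier names and approvers mirror the table, but the data structure and function are hypothetical examples, not part of this Policy.

```python
# Illustrative sketch: encoding the Section 5 risk tiers for automated
# intake routing. Tier names and approvers mirror the table above; the
# RISK_TIERS mapping and required_approvals() are hypothetical, not
# part of this Policy.

RISK_TIERS = {
    "unacceptable": {"permitted": False, "approval": None},
    "high": {"permitted": True, "approval": ["AI Governance Committee", "Legal"]},
    "limited": {"permitted": True, "approval": ["Department Head", "IT Security"]},
    "minimal": {"permitted": True, "approval": ["Manager"]},
}

def required_approvals(risk_level: str) -> list[str]:
    """Return the approvers for a tier; raise if the use is not permitted."""
    tier = RISK_TIERS[risk_level.lower()]
    if not tier["permitted"]:
        raise ValueError("Use is prohibited by law or policy and cannot be approved.")
    return tier["approval"]
```

A routing tool built on such a mapping can reject unacceptable-tier requests automatically while directing the remaining requests to the correct approvers.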
6. DATA PROTECTION REQUIREMENTS
6.1 No personal data, as defined under GDPR Art. 4(1), CCPA Cal. Civ. Code § 1798.140(v), or applicable state privacy laws, shall be processed through an AI system without a completed data protection impact assessment ("DPIA") pursuant to GDPR Art. 35 or equivalent risk assessment.
6.2 All AI-processed data must comply with the Organization's data retention and destruction policy and applicable minimization principles under GDPR Art. 5(1)(c).
6.3 Biometric data shall not be collected or processed by any AI system without prior written informed consent in compliance with the Illinois Biometric Information Privacy Act, 740 ILCS 14/15(b).
6.4 AI vendors must execute a data processing agreement that prohibits using Organization data to train, improve, or fine-tune models for the vendor's or any third party's benefit.
7. PROCUREMENT AND APPROVAL
7.1 All AI tools and systems must be approved through the Organization's AI procurement process before deployment.
7.2 The requesting department shall complete an AI Use Case Assessment Form, including:
- Purpose and business justification
- Data types to be processed
- Risk classification under Section 5
- Vendor security and privacy assessment
- Regulatory impact analysis
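The form fields listed above can be represented as a structured record for intake tracking. This is a minimal sketch under stated assumptions: the field names are illustrative, and the authoritative form is the one maintained by the Organization.

```python
# Illustrative sketch: the Section 7.2 AI Use Case Assessment Form as a
# structured record with a completeness check. Field names are assumed
# for illustration; the Organization's official form governs.
from dataclasses import dataclass, fields

@dataclass
class AIUseCaseAssessment:
    purpose: str                  # purpose and business justification
    data_types: str               # data types to be processed
    risk_classification: str      # risk tier under Section 5
    vendor_assessment: str        # vendor security and privacy assessment
    regulatory_analysis: str      # regulatory impact analysis

def is_complete(form: AIUseCaseAssessment) -> bool:
    """A form is ready for review only when every field is filled in."""
    return all(getattr(form, f.name).strip() for f in fields(form))
```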
7.3 High-risk AI systems require review and written approval by [TITLE OF AI GOVERNANCE OFFICER OR COMMITTEE].
8. HUMAN OVERSIGHT AND ACCOUNTABILITY
8.1 All high-risk AI systems must incorporate meaningful human oversight mechanisms consistent with EU AI Act Art. 14 and Colorado SB 24-205, § 6-1-1703(3)(b).
8.2 No AI-generated output that constitutes a consequential decision shall be communicated to an affected individual without human review and approval by a qualified decision-maker.
8.3 Each deployed AI system shall have a designated Responsible Individual who maintains accountability for the system's performance, outputs, and compliance.
9. TRANSPARENCY AND DISCLOSURE
9.1 The Organization shall disclose to consumers and affected individuals when they are interacting with an AI system, pursuant to EU AI Act Art. 50(1) and Colorado SB 24-205, § 6-1-1703(3)(a).
9.2 When an AI system has been a substantial factor in making a decision adverse to a consumer, the Organization shall provide notice including:
- That an AI system was used
- A description of the AI system's role in the decision
- The consumer's right to appeal and obtain human review
- Contact information for inquiries
9.3 AI-generated content published externally shall bear clear disclosure consistent with the Organization's AI-Generated Content Disclosure Policy.
10. COMPLIANCE WITH APPLICABLE LAW
This Policy is designed to facilitate compliance with, at minimum:
- EU AI Act, Regulation (EU) 2024/1689
- Colorado AI Act, SB 24-205, C.R.S. § 6-1-1701 et seq. (effective June 30, 2026)
- CCPA/CPRA, Cal. Civ. Code § 1798.100 et seq. (ADMT regulations effective January 1, 2026)
- Illinois BIPA, 740 ILCS 14/1 et seq.
- GDPR, Regulation (EU) 2016/679
- FTC Act, 15 U.S.C. § 45 (unfair or deceptive acts or practices)
- Title VII of the Civil Rights Act, 42 U.S.C. § 2000e et seq. (AI in employment)
- Equal Credit Opportunity Act, 15 U.S.C. § 1691 et seq.
- Fair Housing Act, 42 U.S.C. § 3601 et seq.
11. VIOLATIONS AND ENFORCEMENT
11.1 Any employee who violates this Policy may be subject to disciplinary action up to and including termination of employment.
11.2 Third-party vendors who violate this Policy may be subject to contract termination and indemnification obligations.
11.3 Reports of suspected violations should be directed to [COMPLIANCE OFFICER NAME/TITLE] at [EMAIL ADDRESS] or through the Organization's anonymous reporting mechanism.
12. POLICY REVIEW
This Policy shall be reviewed and updated no less than annually or upon material changes in applicable law, technology, or organizational AI usage. The next scheduled review date is [__/__/____].
ACKNOWLEDGMENT
I, [________________________________], acknowledge that I have received, read, and understand this AI Acceptable Use Policy, and I agree to comply with its terms.
Signature: [________________________________]
Printed Name: [________________________________]
Title: [________________________________]
Date: [__/__/____]
This policy template does not constitute legal advice. Organizations must consult qualified legal counsel to customize this policy to their specific operational, regulatory, and jurisdictional requirements.