AI System Impact Assessment

System Name: [____________________________________]

Organization: [____________________________________]

Assessment Date: [__/__/____]
Next Review Date: [__/__/____]


1. System Overview & Deployment Context

1.1 System Description

What is the system?

[________________________________]

What decisions or recommendations does it make?

[________________________________]

What data inputs does it use?

[________________________________]

Who deployed it (vendor/internal)?

[________________________________]

1.2 Deployment Scope

Geographic jurisdictions where the system is used:

☐ New York City (NYC LL 144 applies)
☐ European Union / EEA (EU AI Act applies)
☐ United States—Federal EEOC jurisdiction
☐ Other: [____________________________________]

Operational context:

☐ Employment hiring/screening
☐ Employment promotion or wage decisions
☐ Layoff or termination decisions
☐ Biometric identification
☐ Access to essential services (public or private)
☐ Law enforcement
☐ Educational/vocational training
☐ Other: [____________________________________]

Frequency of use:

  • Expected decision volume per [week/month/year]: [__________]
  • First deployment date: [__/__/____]
  • Significant model changes since launch: ☐ Yes ☐ No

2. Affected Stakeholder Mapping

2.1 Categories of Affected Persons & Groups

Direct beneficiaries/subjects of decisions:

[________________________________]

Demographic categories at elevated risk:

☐ Race/ethnicity (specify: [____________________])
☐ Sex/gender identity
☐ Age (40+)
☐ Disability status
☐ Religion
☐ National origin
☐ Genetic information
☐ Pregnancy-related status
☐ Intersectional combinations
☐ Other vulnerable groups: [____________________]

Downstream affected parties (e.g., rejected candidates, monitored employees, community members):

[________________________________]

Estimated number of persons affected per cycle:

[__________]


3. Harm Taxonomy & Risk Identification

3.1 Potential Harms

Mark each applicable risk and describe how it may materialize:

Discrimination & Disparate Impact

☐ Discriminatory screening: System rejects candidates at disproportionate rates based on a protected characteristic.

Risk description: [________________________________]

Protected classes most at risk: [____________________]

☐ Proxy discrimination: System uses facially neutral data that correlates with protected status (e.g., zip code as a proxy for race).

Risk description: [________________________________]

☐ Intersectional bias: Compounded harm affecting individuals in multiple protected categories (e.g., Black women, older disabled workers).

Risk description: [________________________________]

Fundamental Rights Violations (EU FRIA)

☐ Right to non-discrimination: System violates Article 21 of the EU Charter of Fundamental Rights.

☐ Right to a fair trial/due process: System lacks transparency or an appeal mechanism.

☐ Right to respect for private and family life: System collects or uses sensitive personal data.

☐ Right to freedom of expression and information: System restricts access to information or opportunities.

☐ Right to work/employment: System unfairly blocks access to employment.

☐ Right to education: System affects access to training or skill development.

☐ Other fundamental rights: [____________________]

Algorithmic Opacity & Autonomy Risks

☐ Lack of interpretability: End users cannot understand why decisions were made.

☐ Autonomous decision-making: System operates with minimal human oversight.

☐ Feedback loops: Biased historical data reinforces discriminatory patterns.

☐ Adversarial manipulation: System is vulnerable to gaming or evasion.

Accessibility & Accommodation Gaps

☐ Failure to accommodate disability: System does not offer alternative assessment methods (e.g., an accessible video interview format).

☐ Accessibility barriers: Interface or process is inaccessible to individuals with disabilities.

3.2 Severity & Likelihood Assessment

For each identified risk:

Risk | Severity (High/Medium/Low) | Likelihood (Probable/Possible/Remote) | Overall Level
[Risk 1] | ☐ High ☐ Med ☐ Low | ☐ Probable ☐ Possible ☐ Remote | ☐ Critical ☐ High ☐ Medium ☐ Low
[Risk 2] | ☐ High ☐ Med ☐ Low | ☐ Probable ☐ Possible ☐ Remote | ☐ Critical ☐ High ☐ Medium ☐ Low
[Risk 3] | ☐ High ☐ Med ☐ Low | ☐ Probable ☐ Possible ☐ Remote | ☐ Critical ☐ High ☐ Medium ☐ Low
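The severity-by-likelihood rollup in the table above can be expressed as a simple lookup. A minimal sketch; the specific cell values below are an assumption for illustration, not a regulatory standard, and should be adjusted to your organization's risk methodology:

```python
# Hypothetical severity x likelihood -> overall risk level mapping.
# The cell assignments are illustrative only.
RISK_MATRIX = {
    ("High", "Probable"):   "Critical",
    ("High", "Possible"):   "High",
    ("High", "Remote"):     "Medium",
    ("Medium", "Probable"): "High",
    ("Medium", "Possible"): "Medium",
    ("Medium", "Remote"):   "Low",
    ("Low", "Probable"):    "Medium",
    ("Low", "Possible"):    "Low",
    ("Low", "Remote"):      "Low",
}

def overall_level(severity, likelihood):
    """Look up the overall risk level for one identified risk."""
    return RISK_MATRIX[(severity, likelihood)]

print(overall_level("High", "Probable"))  # Critical
```

Encoding the matrix once keeps the rollup consistent across every risk entered in the table and makes the methodology itself auditable.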

4. Regulatory Compliance Checklist

4.1 NYC Local Law 144 (AEDT Bias Audit)

Applies if: Using automated employment decision tool in NYC hiring/promotion context.

Requirement | Status | Evidence / Proof
Independent third-party bias audit completed | ☐ Yes ☐ No ☐ N/A | Audit report dated: [__/__/____]
Audit tests for disparate impact by race/ethnicity, sex, and intersectional categories | ☐ Yes ☐ No ☐ N/A | Outcome metrics reported: ☐ Selection rates ☐ Score distributions ☐ Impact ratios
Four-fifths rule compliance verified (each group's selection rate is at least 80% of the highest group's rate) | ☐ Yes ☐ No ☐ N/A | [____________________]
Audit results publicly posted for 6+ months | ☐ Yes ☐ No ☐ N/A | URL: [____________________]
10 business days' notice provided to candidates before use | ☐ Yes ☐ No ☐ N/A | Notice dated: [__/__/____]
Notice in plain language + translated versions | ☐ Yes ☐ No ☐ N/A | Languages: [____________________]
Alternative process option offered to applicants | ☐ Yes ☐ No ☐ N/A | Process description: [____________________]

Audit vendor/firm: [____________________________________]
Audit completion date: [__/__/____]
Next audit due: [__/__/____]
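The four-fifths rule check referenced in the table above can be sketched numerically. A minimal illustration with hypothetical candidate counts; this is not a substitute for the independent bias audit LL 144 requires:

```python
# Hypothetical four-fifths (80%) rule check: each group's selection rate
# is compared against the highest group's selection rate.
# All numbers below are illustrative.

def selection_rates(outcomes):
    """outcomes: {group: (selected, total)} -> {group: selection rate}"""
    return {g: sel / tot for g, (sel, tot) in outcomes.items()}

def impact_ratios(rates):
    """Ratio of each group's rate to the highest group's rate."""
    top = max(rates.values())
    return {g: r / top for g, r in rates.items()}

outcomes = {               # (selected, applicants) -- hypothetical
    "group_a": (48, 100),  # selection rate 0.48
    "group_b": (30, 100),  # selection rate 0.30
}
rates = selection_rates(outcomes)
ratios = impact_ratios(rates)
# group_b's impact ratio is 0.30 / 0.48 = 0.625, below the 0.8 threshold
flagged = [g for g, r in ratios.items() if r < 0.8]
print(flagged)  # ['group_b']
```

Impact ratios computed this way are the same metric the audit table asks to be reported; a ratio below 0.8 signals potential disparate impact that warrants investigation, not an automatic legal conclusion.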

4.2 EU AI Act Article 27 (FRIA)

Applies if: Deploying a high-risk AI system in the EU/EEA as a body governed by public law, a private entity providing public services, or a deployer of high-risk systems for creditworthiness assessment or life/health insurance risk pricing (Annex III, points 5(b) and 5(c)).

FRIA Element | Completed | Date | Reference
(a) Deployment process description | ☐ Yes ☐ No | [__/__/____] | Section [__]
(b) Frequency & duration of use | ☐ Yes ☐ No | [__/__/____] | Section [__]
(c) Affected categories of persons/groups identified | ☐ Yes ☐ No | [__/__/____] | Section 2
(d) Specific risks to fundamental rights assessed | ☐ Yes ☐ No | [__/__/____] | Section 3
(e) Human oversight measures implemented | ☐ Yes ☐ No | [__/__/____] | Section 5
(f) Risk mitigation & governance measures documented | ☐ Yes ☐ No | [__/__/____] | Sections 5–6
Notification to market surveillance authority (if required) | ☐ Yes ☐ No ☐ N/A | [__/__/____] | [____________________]

4.3 EEOC & Federal Anti-Discrimination Laws

Applies to: All AI employment decisions (Title VII, ADEA, ADA, GINA, FCRA).

Compliance Area | Status | Supporting Documentation
Vendor claims independently verified (not relying on vendor "bias-free" assertions alone) | ☐ Yes ☐ No | [____________________]
Disparate impact analysis conducted on outcomes | ☐ Yes ☐ No | Analysis date: [__/__/____]
Job-relatedness & business necessity documented | ☐ Yes ☐ No | [____________________]
Less discriminatory alternatives considered and documented | ☐ Yes ☐ No | [____________________]
ADA accessibility & accommodation process in place | ☐ Yes ☐ No | Alternative method: [____________________]
Human review mechanism for automated rejections | ☐ Yes ☐ No | Override authority: [____________________]
Audit trail maintained for all decisions | ☐ Yes ☐ No | Retention period: [__________] years

5. Mitigation Measures & Governance

5.1 Human Oversight & Decision-Making

What human review occurs before consequential decisions?

☐ Individual review of all system outputs
☐ Sampling-based review ([_____]% of decisions)
☐ Review only when system output falls in uncertain range
☐ Review only upon appeal/challenge
☐ No meaningful human review

Human reviewer authority & training:

  • Reviewer role/title: [____________________________________]
  • Decision authority: ☐ Can override system ☐ Can recommend but not override
  • Training on bias & discrimination: ☐ Yes ☐ No | Last date: [__/__/____]
  • Documentation requirement: ☐ Yes ☐ No | Format: [____________________]

5.2 Transparency & Explainability

How is the decision communicated to the affected person?

☐ Full explanation of factors/score provided
☐ Limited explanation (top factors only)
☐ System decision only, no explanation
☐ Other (describe): [____________________________________]

How can an affected person request information about the decision?

  • Appeal process: [____________________________________]
  • Response timeline: [__________] business days
  • Right to human review: ☐ Yes ☐ No
  • Right to explanation: ☐ Yes ☐ No

5.3 Data & Model Governance

Training data:

  • Data sources: [____________________________________]
  • Time period covered: [__________] to [__________]
  • Known biases or limitations: [____________________________________]
  • Regular validation/testing: ☐ Yes ☐ No | Frequency: [____________________]

Model updates:

  • Version control maintained: ☐ Yes ☐ No
  • Major changes trigger reassessment: ☐ Yes ☐ No
  • Performance monitoring (accuracy, fairness): ☐ Yes ☐ No | Metrics: [____________________]

5.4 Monitoring & Accountability

Ongoing fairness monitoring:

  • Selection rates by protected class tracked: ☐ Yes ☐ No
  • Performance monitoring schedule: [____________________________]
  • Threshold for escalation/intervention: [____________________________]
  • Responsible team: [____________________________________]

Complaint & escalation process:

  • Internal escalation contact: [____________________________________]
  • External escalation (regulatory): ☐ NYC DCWP ☐ EEOC ☐ EU AI Office ☐ Other: [____________________]
  • Documentation retention period: [__________] years
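The escalation threshold tracked in Section 5.4 can be sketched as a periodic monitoring job. A minimal illustration; the 0.8 threshold, group names, and monthly counts below are assumptions, and the actual threshold should follow your documented policy:

```python
# Hypothetical ongoing fairness monitor: flag any monitoring period in which
# some group's selection rate falls below a set fraction of the highest
# group's rate. All counts below are illustrative.
ESCALATION_THRESHOLD = 0.8  # assumed; set per your escalation policy

def periods_to_escalate(history, threshold=ESCALATION_THRESHOLD):
    """history: {period: {group: (selected, total)}} -> periods needing escalation."""
    flagged = []
    for period, counts in history.items():
        rates = {g: s / t for g, (s, t) in counts.items()}
        top = max(rates.values())
        if any(r / top < threshold for r in rates.values()):
            flagged.append(period)
    return flagged

history = {  # hypothetical monthly selection counts
    "2025-01": {"group_a": (40, 100), "group_b": (38, 100)},  # ratio 0.95, ok
    "2025-02": {"group_a": (45, 100), "group_b": (20, 100)},  # ratio ~0.44, flag
}
print(periods_to_escalate(history))  # ['2025-02']
```

Running a check like this on the monitoring schedule recorded above gives the responsible team a concrete, documented trigger for intervention rather than an ad hoc judgment.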

6. Accessibility & Accommodation

6.1 ADA / EU Accessibility Requirements

Alternative assessment methods available:

☐ Yes — Specify: [____________________________________]

☐ No — Justification: [____________________________________]

Process for requesting accommodation:

[____________________________________]

Timeframe for accommodation provision: [__________] business days


7. Documentation & Record-Keeping

7.1 Assessment Documentation

Assessment completed by:

  • Name: [____________________________________]
  • Title/Role: [____________________________________]
  • Email: [____________________________________]
  • Date: [__/__/____]

Internal legal review:

☐ Yes — Reviewed by: [____________________] | Date: [__/__/____]

☐ No — Justification: [____________________________________]

External legal/compliance review:

☐ Yes — Firm/Consultant: [____________________] | Date: [__/__/____]

☐ No

7.2 Records to Maintain

Required documentation to retain:

☐ This assessment (original & updated versions)
☐ Vendor documentation & claims
☐ Independent bias audit report (NYC LL 144)
☐ FRIA report (EU AI Act)
☐ Disparate impact analyses
☐ Decision logs/audit trail
☐ Human review documentation
☐ Complaint records
☐ Accommodation requests & resolutions

Retention period: [__________] years (per applicable law: NYC LL 144 = 3 years, EEOC = 1+ years, EU = per GDPR/FRIA requirements)


8. Regulatory Notification & External Submissions

8.1 Notifications Required

Regulator | Applicability | Notification Requirement | Submitted | Date
NYC Department of Consumer & Worker Protection | NYC LL 144 applies | Complaint mechanism; public audit posting | ☐ Yes ☐ No | [__/__/____]
EU AI Office / National Authority | EU AI Act applies | FRIA template notification | ☐ Yes ☐ No | [__/__/____]
EEOC | Federal hiring | Position statement filed (if a discrimination charge is received) | ☐ N/A ☐ Yes ☐ No | [__/__/____]

9. Attestation & Sign-Off

I confirm that this assessment has been completed accurately and that identified risks and mitigation measures have been reviewed and approved for deployment.

Organization Representative:

  • Name (Print): [____________________________________]
  • Title: [____________________________________]
  • Signature: [____________________________________]
    Date: [__/__/____]

Internal Legal/Compliance Officer (if applicable):

  • Name (Print): [____________________________________]
  • Title: [____________________________________]
  • Signature: [____________________________________]
    Date: [__/__/____]

Sources and References

NYC Local Law 144 (AEDT)

EU AI Act Article 27 (FRIA)

EEOC Guidance & Federal Anti-Discrimination Laws


About This Template

Compliance documents are what regulated businesses use to prove they follow the rules that apply to their industry, whether that is privacy, anti-money-laundering, consumer protection, or sector-specific requirements. Regulators look for consistent policies, up-to-date records, and clear evidence of employee training. The cost of getting compliance paperwork right is almost always smaller than the cost of an enforcement action, fine, or public disclosure.

Important Notice

This template is provided for informational purposes. It is not legal advice. We recommend having an attorney review any legal document before signing, especially for high-value or complex matters.

Last updated: April 2026