Classify your AI use cases against the EU AI Act and produce a defensible risk file

Map each of your AI systems to the EU AI Act risk tiers (prohibited, high-risk, limited-risk, minimal-risk), identify your conformity obligations, and produce the documentation expected by Articles 6–15 and the Article 50 transparency rules.

Get Started — $29/mo

Where teams get stuck

  • High-risk classification triggers conformity assessments, technical documentation, and post-market monitoring — most teams have none of it
  • Provider, deployer, and importer obligations differ, yet most internal analyses collapse them into a single role
  • It is often unclear where general-purpose AI (GPAI) model obligations end and downstream deployer obligations begin
  • Article 50 transparency duties (e.g. disclosing chatbot interactions, labelling deepfakes) apply even to otherwise minimal-risk systems and are often missed
  • Fines for prohibited-practice violations reach up to €35 million or 7% of global annual turnover, whichever is higher

What you walk away with

  • Per-system risk classification (prohibited / high-risk / limited / minimal) with rationale
  • Role mapping (provider, deployer, importer, distributor) per AI system
  • Article 9–15 technical-documentation checklist for any high-risk systems
  • Article 50 transparency obligations mapped to your user-facing surfaces
  • Prioritised remediation roadmap with effort estimates

How it works

  1. Catalogue your AI systems

    List each AI system with its purpose, inputs, outputs, deployment context, and your role (provider vs deployer).

  2. Run the Compliance Auditor (AI Act profile)

    The specialist classifies each system against Annex III and Articles 6–15, and flags any Article 5 prohibited practices.

  3. Add governance architecture

    Pair with the Automation Governance Architect for operating-model changes or the Agentic Identity Trust Architect for agent decision audit trails.

  4. Produce the technical file

    The output includes a conformity checklist and a gap-to-technical-file map you can hand to engineering.
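Steps 1 and 2 above can be sketched as a minimal catalogue entry plus a first-pass Annex III screen. The category names follow Annex III of Regulation (EU) 2024/1689; everything else here (field names, the `screen_annex_iii` function, the screening logic) is an illustrative assumption, not the product's actual schema or classifier.

```python
# Annex III high-risk categories (Regulation (EU) 2024/1689), as short keys.
ANNEX_III_CATEGORIES = {
    "biometrics",
    "critical_infrastructure",
    "education_vocational_training",
    "employment_hr",
    "essential_services",
    "law_enforcement",
    "migration_asylum_border",
    "justice_democratic_processes",
}

def screen_annex_iii(system: dict) -> str:
    """Naive first-pass tier screen: any system whose declared domain
    falls in an Annex III category is flagged as potentially high-risk."""
    if system["domain"] in ANNEX_III_CATEGORIES:
        return "potentially high-risk (Annex III)"
    return "limited/minimal (pending transparency check)"

# Hypothetical catalogue entry from step 1.
cv_screener = {
    "name": "CV screening assistant",
    "purpose": "rank job applicants",
    "inputs": ["CVs", "job description"],
    "outputs": ["ranked shortlist"],
    "deployment_context": "internal HR tooling",
    "role": "deployer",  # provider vs deployer changes the obligations
    "domain": "employment_hr",
}

print(screen_annex_iii(cv_screener))
```

A real classification also has to apply the Article 6 carve-outs and the prohibited-practice tests, which is why the screen above is only a starting point.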


Frequently asked questions

Which enforcement deadlines matter most?

Prohibited-practice rules have applied since 2 February 2025, and general-purpose AI model obligations since 2 August 2025. The full high-risk regime applies from 2 August 2026. PnotL's output is tagged by applicable date so you can prioritise.

Can I cover both AI Act and GDPR in one run?

No — run the AI Act assessment first, then the GDPR audit. They share inputs but map to different frameworks, and combining them produces weaker findings in both.

What counts as a high-risk system under Annex III?

Annex III lists eight categories: biometrics, critical infrastructure, education and vocational training, employment and HR, access to essential services, law enforcement, migration/asylum/border control, and administration of justice and democratic processes. The specialist tests your system against each.

Simple, transparent pricing

Starter

$29/month

5 expert runs

Get Started

Professional

$49/month

20 expert runs

Get Started

Business

$99/month

50 expert runs

Get Started