Conformity Assessments:
Before a high-risk AI system can be deployed, it must undergo a conformity assessment to verify that it meets the Act's requirements. Some systems can be self-assessed by their providers, while others require verification by an independent third party (a notified body).
Key requirements for high-risk systems include:
- Risk management (identifying, evaluating, and mitigating risks throughout the system's lifecycle)
- Data governance (training, validation, and testing datasets must be relevant, representative, and examined for biases)
- Documentation (technical documentation demonstrating compliance)
- Transparency (users must be informed that they are interacting with an AI system)
- Human oversight (humans must be able to monitor the system and intervene, minimizing erroneous outputs)
- Robustness, accuracy, and cybersecurity (consistent performance and resilience to errors and attacks)
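To make the checklist above concrete, here is a minimal illustrative sketch of how a provider might track which requirement areas remain open before a conformity assessment. The requirement names and data structure are hypothetical conveniences, not terms from the Act's legal text.

```python
# Illustrative only: a toy pre-deployment checklist mirroring the
# requirement categories listed above. The keys are hypothetical
# labels, not official terminology from the AI Act.

REQUIREMENTS = [
    "risk_management",
    "data_governance",
    "documentation",
    "transparency",
    "human_oversight",
    "robustness_accuracy_cybersecurity",
]

def outstanding(assessment: dict) -> list:
    """Return the requirement areas not yet marked as satisfied."""
    return [req for req in REQUIREMENTS if not assessment.get(req, False)]

# Example self-assessment: two areas still unresolved.
assessment = {
    "risk_management": True,
    "data_governance": True,
    "documentation": False,
    "transparency": True,
    "human_oversight": True,
    "robustness_accuracy_cybersecurity": False,
}
print(outstanding(assessment))
# → ['documentation', 'robustness_accuracy_cybersecurity']
```

In practice, each area would map to detailed evidence (test reports, dataset audits, technical documentation) rather than a single boolean; the sketch only shows the bookkeeping idea.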
