AI Regulation (Updated to 2025)
Switzerland
Pioneering Ethical AI
Switzerland is known for its balanced approach to AI regulation, focusing on ethical guidelines, innovation, and international cooperation. While it does not have standalone AI laws yet, its existing data protection and consumer rights frameworks play a key role in shaping AI governance.
Overview of AI Regulation and Governance in Switzerland
TECHNOLOGY-NEUTRAL APPROACH
Switzerland adopts a technology-neutral and proportional approach to AI regulation. Unlike the EU, Switzerland does not currently have a dedicated AI law. Instead, existing legal frameworks are adapted to address AI-related risks, ensuring responsible innovation and alignment with international standards. The Swiss Financial Market Supervisory Authority (FINMA) plays a central role, particularly in the financial sector, focusing on risk management, third-party oversight, and transparency.
Current Market Size and Projections:
- In 2024, the AI market in Switzerland was projected to reach approximately US$1.74 billion. (S-PRO)
- By 2025, this market size is expected to grow to US$2.31 billion. (Statista)
- Looking ahead to 2030, forecasts suggest the market will expand to US$7.71 billion, reflecting a compound annual growth rate (CAGR) of 27.26% from 2025 to 2030. (Statista)
- AI Robotics: The AI Robotics segment in Switzerland is anticipated to reach US$409.50 million by 2025, with an expected CAGR of 23.55%, leading to a market volume of US$1.179 billion by 2030. (Statista)
- Generative AI: This emerging sector is projected to grow at a remarkable CAGR of 41.46% between 2025 and 2030, resulting in a market volume of US$6.40 billion by 2030.
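The projections above follow the standard compound-growth formula, future value = present value × (1 + CAGR)^years. As a quick sanity check of the rounded Statista figures cited above (a sketch; the function name is ours):

```python
def project(present: float, cagr: float, years: int) -> float:
    """Compound a market value forward at a constant annual growth rate."""
    return present * (1 + cagr) ** years

# Overall AI market: US$2.31bn in 2025 at 27.26% CAGR -> ~US$7.71bn by 2030
print(round(project(2.31, 0.2726, 5), 2))   # 7.71

# AI Robotics: US$409.50m in 2025 at 23.55% CAGR -> ~US$1,179m by 2030
print(round(project(409.50, 0.2355, 5)))    # 1179
```

Both cited 2030 figures are consistent with their stated 2025 baselines and growth rates.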
Key Regulations and Governance Aspects
1. What Does It Involve?
- Principles-Based Approach: Provides flexibility for institutions to create governance structures tailored to AI-related risks.
- Risk Management: Institutions must classify and assess risks of AI applications, ensuring transparency, accountability, and completeness.
- Third-Party Oversight: Ensures proper due diligence, contractual clarity, and ongoing monitoring of outsourced AI activities.
- Inventory Completeness: Institutions must maintain comprehensive, regularly updated inventories of all AI applications.
- International Standards: Aligns with OECD AI Principles and adopts their definitions for AI systems.
2. Who Is Impacted?
- Financial Institutions: Banks, insurance companies, and other entities supervised by FINMA.
- Third-Party Providers: External vendors providing AI solutions or services.
- Employees and Customers: Individuals affected by the deployment and use of AI applications, particularly in high-impact scenarios.
- Management and Boards: Responsible for oversight and ensuring compliance with FINMA’s supervisory expectations.
3. When Is It Due?
- FINMA Supervisory Communication 08/2024: Reflects the most recent observations and expectations regarding AI governance. Institutions are expected to align with these guidelines immediately.
- Ongoing Compliance: No fixed deadlines for legislation, but institutions must implement robust governance as part of their regulatory obligations.
4. Cost of Non-Compliance
Non-compliance with FINMA’s supervisory expectations or international standards can lead to:
- Reputational Damage: Public trust and credibility may be severely impacted.
- Regulatory Sanctions: FINMA has the authority to impose fines, restrict activities, or revoke licenses for institutions failing to manage AI-related risks.
- Operational Disruption: Incomplete inventories or poor third-party oversight may expose institutions to cybersecurity threats or service failures.
- Legal Liability: Institutions may face lawsuits or compensation claims from customers or employees harmed by the improper use of AI.
Key Regulatory Components
1. Risk Classification and Management
- Defining Risk Criteria: Institutions must establish their own criteria for classifying AI applications based on their potential impact on business operations, customers, and employees. These criteria should reflect the technology-neutral and principles-based approach of Swiss regulation.
- Impact-Based Classification: AI systems are categorized based on their significance, particularly those that could lead to material business risks, regulatory challenges, or ethical concerns.
- Governance Requirements:
- Institutions are expected to implement tailored frameworks for assessing, managing, and mitigating risks associated with AI.
- Classification must consider both operational risks (e.g., disruption of services) and reputational risks (e.g., biases in AI-driven decisions).
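Because FINMA's approach is principles-based, each institution defines its own classification criteria. As a purely illustrative sketch, an impact-based scheme might combine operational and reputational ratings into tiers; the tier names, scoring scale, and thresholds below are hypothetical, not prescribed by FINMA:

```python
from dataclasses import dataclass
from enum import Enum

class RiskTier(Enum):
    LOW = "low"
    MATERIAL = "material"   # triggers enhanced governance and documentation
    HIGH = "high"           # triggers senior-management review

@dataclass
class AIApplication:
    name: str
    operational_impact: int   # 1-5: potential for service disruption
    reputational_impact: int  # 1-5: e.g. bias in AI-driven decisions
    customer_facing: bool

def classify(app: AIApplication) -> RiskTier:
    """Hypothetical impact-based classification across both risk dimensions."""
    score = max(app.operational_impact, app.reputational_impact)
    if app.customer_facing:
        score += 1  # customer-facing systems rated one notch higher
    if score >= 5:
        return RiskTier.HIGH
    if score >= 3:
        return RiskTier.MATERIAL
    return RiskTier.LOW

print(classify(AIApplication("credit-scoring", 3, 4, True)).value)  # high
```

Taking the maximum (rather than the average) of the two dimensions reflects the idea that a single severe risk should dominate the classification.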
2. Third-Party Management
- Vendor Selection and Due Diligence: Institutions outsourcing AI-related activities must conduct thorough assessments of third-party capabilities. This includes reviewing their technical expertise, compliance with ethical standards, and past performance.
- Contractual Safeguards:
- Contracts must clearly define responsibilities, including liability for failures, compliance requirements, and penalties for breaches.
- Specific clauses should address data protection, algorithmic transparency, and accountability.
- Monitoring and Testing:
- Continuous oversight of third-party activities to ensure adherence to agreed standards.
- Regular performance reviews and audits to validate compliance with governance frameworks.
3. Inventory Completeness
- Centralized AI Inventory:
- Institutions are required to maintain an up-to-date, centralized record of all AI systems used across the organization.
- The inventory must include details such as the purpose of the AI, its risk classification, its operational dependencies, and any third-party involvement.
- Procedures for Maintenance:
- Mechanisms must be established for identifying and cataloging new AI applications as they are introduced.
- Regular reviews and updates to ensure completeness and accuracy.
- Integration Across Departments:
- Coordination among IT, compliance, and operational teams to monitor AI usage throughout the organization.
4. Transparency and Accountability
- Documentation Standards:
- Institutions must create detailed documentation for each AI system, covering its development, intended use, and decision-making processes.
- Documentation should include risk assessments, testing procedures, and mitigation strategies.
- Market Transparency:
- Institutions are encouraged to communicate transparently about their AI usage, particularly in areas that directly impact customers (e.g., credit scoring or automated customer service).
- Stakeholder engagement to address concerns related to AI practices.
- Internal Accountability:
- Clear allocation of roles and responsibilities for AI governance within the organization.
- Boards and senior management must ensure proper oversight and integration of AI systems into the broader governance structure.
5. Alignment with International Standards
- OECD AI Principles:
- Switzerland adopts the OECD’s definitions and principles for AI systems, focusing on trust, fairness, transparency, and human-centric design.
- Global Collaboration:
- Institutions are encouraged to align with international best practices to ensure cross-border interoperability of AI systems.
- Sectoral Adaptation:
- Regulatory expectations vary by sector, with tailored approaches for areas such as financial services, healthcare, and public administration.
Switzerland AI Governance Developments
Switzerland's approach to AI governance remains dynamic and adaptive. Although there is currently no dedicated AI law, ongoing discussions and developments signal a commitment to refining and strengthening the regulatory framework as new challenges emerge. This strategy spans monitoring current guidelines, integrating AI into sectoral regulations, international collaboration, possible dedicated legislation, ethical AI practices, stronger oversight and enforcement, research and development, and emerging risks.
Monitoring Existing Guidelines
- Evaluation of Effectiveness: FINMA and other regulators assess risk management, third-party oversight, and inventory completeness.
- Feedback Loops: Regular stakeholder consultations refine supervisory expectations.
- Supervisory Adjustments: Updated guidance based on industry practices and technological advancements.
Integration into Sectoral Regulations
- Healthcare & Insurance: Stricter oversight for AI-driven diagnostic tools and risk modeling to ensure transparency, safety, and fairness.
- Autonomous Vehicles: New safety standards and liability frameworks as AI in transportation evolves.
- Public Administration: Enhanced transparency and accountability in AI applications by public institutions.
International Collaboration
- Alignment with Frameworks: Deepening ties with the EU AI Act and OECD AI Principles for cross-border interoperability.
- Global Standards: Active participation in the development of ISO and other international AI standards.
- Data Sharing: Strengthening international data sharing agreements for responsible innovation.
Dedicated AI Legislation
- Exploration of a Swiss AI Law: Considering new legislation to cover gaps not addressed by current frameworks.
- Public Consultation: Involving industry experts, academics, and civil society to shape future regulations.
Ethical AI Focus
- Fairness & Inclusivity: Developing guidelines to mitigate bias and ensure inclusivity across all demographics.
- Green AI: Encouraging energy-efficient algorithms to minimize environmental impact.
- Human-Centric Approach: Preserving human oversight and accountability in critical applications.
Enhanced Oversight & Enforcement
- AI Audits & Certifications: Introducing certification schemes to verify compliance with ethical and technical standards.
- Mandatory Audits: Regular audits for high-risk AI systems to ensure transparency and accountability.
- Supervisory Units: Potential establishment of dedicated AI supervisory units within regulatory bodies.
Research & Development Initiatives
- Public-Private Partnerships: Fostering collaborations between government, academia, and industry for responsible AI innovation.
- Increased Funding: More investment in R&D programs to develop trustworthy AI solutions.
- Educational Programs: Training initiatives for regulators, industry leaders, and developers in AI ethics and governance.
Addressing Emerging Risks
- Generative AI: Formulating policies to regulate advanced AI systems like LLMs and generative models, while addressing societal and ethical implications.
- Cybersecurity: Enhancing requirements to defend AI systems against cyberattacks and data breaches.
- Autonomy & Decision-Making: Assessing risks and liabilities arising from autonomous AI decisions.