Jen Gennai

Mastering Responsible AI

Jen Gennai is a leading voice and pioneer in AI Risk Management, Responsible AI, and AI Ethics.
  • Founded Responsible Innovation at Google, one of the first institutions worldwide to adopt AI Principles to shape how AI is developed and deployed responsibly.
  • Informed governmental and private AI programs on how to manage and implement AI risks.
  • Contributed to EU AI Act, UK Safety Principles, G7 Code of Conduct, OECD AI Principles & NIST.
  • International Panellist & Speaker (Davos, World Economic Forum, UNESCO, etc.).
Jen Gennai - speaking at re:publica (Berlin, Germany - June 2023)


Responsible AI for me means taking deliberate steps to ensure technology works the way it’s intended to and doesn’t lead to malicious or unintended negative consequences.

Jen Gennai – Responsible AI Powerhouse

Recognized Industry Leader in AI

Jen founded the Responsible Innovation team at Google, tasked with integrating ethical considerations into AI development. Her team worked with product and engineering teams, leveraging expertise in ethics, human rights, user research, racial justice, and gender equity to validate that Google’s AI products align with commitments to fairness, privacy, safety, and societal benefit.

She co-authored Google’s AI Principles in 2017, which serve as the ethical charter guiding the development and deployment of AI across the company. These principles include commitments to fairness, accountability, privacy, and avoiding harmful or biased outcomes.

Jen’s contributions to AI ethics have made her a prominent figure in the field, recognized for her efforts to balance innovation with ethical responsibility. Her work has been highlighted in various forums and conferences, underscoring her influence in shaping responsible AI practices.

 

Focus on AI Ethics
Jen’s focus is on ensuring that AI technologies are developed and deployed in ways that are beneficial to society and do not exacerbate inequalities or cause harm.

This involves:

  • Implementing Fairness Testing: Establishing rigorous testing protocols to identify and address biases in AI systems (see the illustrative sketch after this list).
  • Enhancing Transparency: Making AI systems’ decision-making processes more understandable and interpretable to users.
  • Governance and Accountability: Creating structures that hold AI systems accountable to ethical standards and principles.
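
As an illustration of what a basic fairness test can look like in code, the sketch below computes a demographic parity gap over model decisions grouped by a sensitive attribute. It is a minimal sketch under stated assumptions: the group labels, sample outcomes, and 0.10 tolerance are hypothetical, and real fairness testing combines multiple metrics, far larger datasets, and human review.

```python
# Minimal, illustrative fairness check: demographic parity difference.
# The group labels, sample decisions, and 0.10 tolerance are hypothetical.

def positive_rate(outcomes):
    """Share of positive decisions (1 = approved) in a list of 0/1 outcomes."""
    return sum(outcomes) / len(outcomes)

def demographic_parity_gap(outcomes_by_group):
    """Largest difference in positive-decision rates between any two groups."""
    rates = {group: positive_rate(o) for group, o in outcomes_by_group.items()}
    return max(rates.values()) - min(rates.values()), rates

# Hypothetical model decisions, grouped by a sensitive attribute.
decisions = {
    "group_a": [1, 1, 0, 1, 0, 1, 1, 0],
    "group_b": [1, 0, 0, 0, 1, 0, 0, 0],
}

gap, rates = demographic_parity_gap(decisions)
print(f"Positive rates by group: {rates}")
print(f"Demographic parity gap: {gap:.2f}")
if gap > 0.10:  # illustrative tolerance, not a regulatory threshold
    print("Flag for human review: disparity exceeds the agreed tolerance.")
```

In a real testing protocol, a check like this would run alongside other metrics (for example equalized odds or calibration) and qualitative review, rather than acting as a single pass/fail gate.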

Through her leadership, Jen Gennai is at the forefront of integrating ethical considerations into the rapidly evolving field of AI, ensuring that technological advancements contribute positively to society.

Influence in AI Policymaking & Frameworks

Other expert contributions include ISO/IEC JTC 1/SC 42 (AI standards), the G7 Code of Conduct, the White House AI Commitments, and the UK Safety Principles.

In The Press

Also featured in Wired, Aufbruch, CNN Chile, the Wall Street Journal, the Washington Post, etc.

International Panellist & Speaker

Also participated in UNESCO (Chile), Moral & Machines (Germany), Digital Summit (Ireland), Impact CEE (Poland), Google I/O & Google CEO forum (US), the United Nations (US), Brookings (US), World 50 (US), AI.LA (US), VentureBeat (US), etc.

AI Expert Witness in Regulatory & Policymaker Engagements

Engagements include the UK, Austria, Ireland, Chile, the US, the Netherlands, Germany, Mexico, Brazil, Canada, Uruguay, Singapore, Sweden, the European Commission, Lithuania, Romania, and the Czech Republic.

Latest AI Resources

How Can T3 Help You?

Harnessing the Power of AI Responsibly: Balancing Innovation with Risk

Working with senior AI and compliance advisors who are at the forefront of AI supervisory dialogue, we can support the following activities:

1. Tailored Frameworks

 

We guide you through relevant frameworks like the OECD AI principles, national regulations, and industry-specific standards. We don’t just provide templates but help you operationalize them within your organizational structure.

2. Governance Structures

 

We help establish clear roles, responsibilities, and escalation pathways for AI risk. This may involve setting up AI oversight committees or integrating AI risk into existing risk management structures.

3. Selecting the Right Tools

We assess your needs and recommend the best mix of in-house, open-source, and cloud-based tools for model validation, bias detection, explainability, and ongoing monitoring, taking budget and existing infrastructure into account.
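
As one example of what ongoing monitoring can involve, the sketch below uses the Population Stability Index (PSI) to flag drift between a model input's baseline distribution and recent production traffic. The bins, distributions, and 0.2 alert threshold are hypothetical conventions rather than prescriptions; open-source and cloud-based monitoring tools typically automate checks like this across many features.

```python
# Illustrative ongoing-monitoring check: input drift via the Population
# Stability Index (PSI). Bins, data, and the 0.2 threshold are hypothetical.
import math

def psi(expected_fractions, observed_fractions, eps=1e-6):
    """PSI between a baseline and a live distribution over the same bins."""
    total = 0.0
    for e, o in zip(expected_fractions, observed_fractions):
        e, o = max(e, eps), max(o, eps)  # avoid log(0) on empty bins
        total += (o - e) * math.log(o / e)
    return total

# Hypothetical binned distributions of one model input feature.
baseline = [0.25, 0.35, 0.25, 0.15]  # from training/validation data
live = [0.15, 0.30, 0.30, 0.25]      # from recent production traffic

score = psi(baseline, live)
print(f"PSI = {score:.3f}")
if score > 0.2:  # common rule of thumb, not a formal standard
    print("Alert: material input drift; trigger model revalidation.")
```

The same pattern extends to output distributions and performance metrics, with alerts feeding the escalation pathways described under Governance Structures above.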

4. Smart Practices and Training

We go beyond theory. We share best practices, case studies, and practical methodologies for embedding AI risk management into your development, deployment, and monitoring processes. We offer general AI awareness training for all relevant employees, along with role-specific deep dives for developers, risk analysts, and business leaders involved in AI projects. Training includes practical exercises to help teams think critically about AI-specific risks (bias, security) as they pertain to their particular products and business lines.

 

Click here to access our Responsible AI trainings

5. Leveraging Existing Resources

Our focus is synergy. We identify how AI risk management can fit into existing risk and compliance processes, avoiding redundant efforts and maximizing the use of current personnel.

6. Governance & Compliance

We help you balance central oversight with distributed accountability, empowering product teams without compromising risk management.

Our Compliance Experts actively engage with stakeholders, including regulatory bodies, customers, and partners, to discuss AI utilization and compliance.