Panel Discussion on Ethical AI Governance: It’s More Than Just Code

Cambodia’s AI Readiness Assessment Report, finalized in July 2025, directly addresses gender inclusion by identifying the significant gender gap in STEM fields and the AI workforce as a major national challenge: women account for only 16.68% of STEM graduates, compared to 83.32% for men. Closing this gap is therefore critical to ensure fairness and to prevent AI systems from inheriting and amplifying existing societal biases.

Moderated by Ms. Wen-Ling (Amy) Lai, Program and Strategy Associate of ODC, the session featured Ms. Heang Omuoy, CEO of Asia Digital Technology Innovation (ADITI), and Ms. Adrienne Ravez-Men, Co-Founder of Global Innovation & Change. It offered a deep dive into the principles of ethical AI governance, viewed specifically through the essential lenses of gender and inclusion.

The core argument was clear: for AI to be a force for good, driving both sustainability and positive social impact, its governance must be proactively built, not retrospectively applied. The foundation of this ethical framework rests on two pillars: transparency and strong security. AI models must be built for transparency because users, stakeholders, and regulators need to understand how an AI reached a particular decision, ensuring the process is not an opaque black box.

When AI recommendations or decisions affect people’s lives, from loan applications to hiring processes, this clarity is paramount for trust. Strong security, in turn, underpins accountability: when systems are secure, the pathways of data input and use are traceable, making it possible to follow errors, biases, or misuse back to their source. This accountability is also where the principle of inclusivity begins to solidify the foundation of ethical governance. By ensuring that AI is built on secure, traceable, and understandable processes, we can help ensure that its use is fair, responsible, and capable of generating lasting social benefit.
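To make the traceability point concrete, the sketch below shows one common way decision-level audit logging can be implemented. It is a minimal illustration, not something presented at the panel: the model name, input fields, and decision labels are all hypothetical.

```python
# A minimal sketch of decision-level audit logging. The model version,
# feature names, and decision labels are hypothetical examples.
import hashlib
import json
from datetime import datetime, timezone

AUDIT_LOG = []  # stand-in for an append-only, access-controlled store


def record_decision(model_version, features, decision, reasons):
    """Log one AI decision with enough context to trace it later."""
    entry = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model_version": model_version,
        # Hash the raw inputs so the record is verifiable without
        # storing sensitive personal data in the log itself.
        "input_hash": hashlib.sha256(
            json.dumps(features, sort_keys=True).encode()
        ).hexdigest(),
        "decision": decision,
        "reasons": reasons,  # human-readable factors behind the outcome
    }
    AUDIT_LOG.append(entry)
    return entry["input_hash"]


# Example: a hypothetical loan-screening decision
record_decision(
    model_version="loan-screener-v1.3",
    features={"income": 420, "employment_years": 2, "region": "rural"},
    decision="refer_to_human_review",
    reasons=["income below automatic-approval threshold"],
)
print(AUDIT_LOG[-1]["decision"])  # refer_to_human_review
```

A log like this is what makes the panel’s notion of accountability operational: when a decision is challenged, the record shows which model version produced it and why, without exposing the underlying personal data.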

The speakers were quick to caution against viewing AI as an autonomous, infallible solution. They reminded the audience that AI is just a tool: it can be wielded for better or for worse. This reality is why human monitoring matters and must remain a central feature of any AI implementation strategy. Ethical AI governance, therefore, cannot simply be a set of rules applied to code; it must go hand in hand with comprehensive, proper digital training for users.
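As one concrete pattern for keeping humans in the loop, the sketch below routes low-confidence model outputs to a human reviewer instead of deciding automatically. The confidence threshold, queue, and case labels are illustrative assumptions, not details from the discussion.

```python
# A minimal human-in-the-loop routing sketch, assuming a model that
# returns a label with a confidence score. The 0.90 threshold and the
# review queue are illustrative assumptions.
from dataclasses import dataclass


@dataclass
class Prediction:
    label: str
    confidence: float  # model's estimated probability, in [0, 1]


REVIEW_QUEUE = []  # stand-in for a real human-review workflow


def decide(case_id, pred, threshold=0.90):
    """Accept confident predictions; escalate uncertain ones to a person."""
    if pred.confidence >= threshold:
        return pred.label  # automated path, still subject to audit
    # Uncertain cases go to a human reviewer rather than being
    # decided automatically.
    REVIEW_QUEUE.append({"case": case_id, "suggested": pred.label})
    return "pending_human_review"


print(decide("A-102", Prediction("approve", 0.97)))  # approve
print(decide("A-103", Prediction("approve", 0.62)))  # pending_human_review
print(len(REVIEW_QUEUE))                             # 1
```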

Regardless of how well an AI system is governed, the human operators, decision-makers, and users must understand the tool’s capabilities, limitations, and ethical guardrails. Without this educated human layer of monitoring and intervention, any governance framework risks being undermined by user error, intentional misuse, or simple lack of understanding. The quality of the human element directly determines the quality of the AI’s impact.

A core theme of the discussion was the necessity of broadening the definition of inclusivity beyond gender balance alone. The speakers stressed that while gender is a crucial factor to consider when training AI, it is not the only factor driving bias or inequality. To build truly ethical and fair systems, AI developers and policymakers must prioritize data that captures the full spectrum of human diversity.

This comprehensive approach mandates attention to data reflecting different social backgrounds, socio-economic statuses, and, critically, people with lower digital literacy. When datasets disproportionately represent affluent, digitally sophisticated, or urban populations, the resulting AI models will inevitably exhibit bias against those who are marginalized or digitally excluded.

This bias risks replicating and even amplifying real-world inequalities in the digital realm. The speakers made a powerful case that recognizing the multifaceted nature of human experience, from economic standing to digital access, is just as important as gender representation for achieving a sustainable and just future for AI.
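One practical way to catch this kind of skew early is a simple representation audit of the training data. The sketch below is a minimal illustration under assumed column names and an arbitrary 10% floor; real audits would use richer fairness metrics and domain-appropriate thresholds.

```python
# A minimal dataset representation audit, assuming a tabular training
# set with self-reported demographic columns. Column names and the
# 10% floor are illustrative assumptions.
from collections import Counter


def representation_report(rows, column, floor=0.10):
    """Report each group's share of the data and flag underrepresented ones."""
    counts = Counter(row[column] for row in rows)
    total = sum(counts.values())
    shares = {group: n / total for group, n in counts.items()}
    flagged = [g for g, s in shares.items() if s < floor]
    return {"shares": shares, "underrepresented": flagged}


# Example: a toy sample skewed toward urban, high-digital-literacy users
sample = (
    [{"region": "urban", "digital_literacy": "high"}] * 85
    + [{"region": "rural", "digital_literacy": "low"}] * 15
)
print(representation_report(sample, "region"))
# {'shares': {'urban': 0.85, 'rural': 0.15}, 'underrepresented': []}
print(representation_report(sample, "digital_literacy", floor=0.20))
# flags 'low' as underrepresented at a 20% floor
```

An audit like this does not fix bias by itself, but it surfaces gaps, such as too few rural or low-digital-literacy users, before a model trained on the data hardens them into decisions.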

The overarching challenge identified by the panel is the vast gap between the speed of AI innovation and the pace at which governance is established. Technology often advances at an exponential rate, while regulatory bodies, laws, and ethical frameworks move on a much slower, linear timeline. It is precisely because AI innovation outpaces governance that a passive approach is unacceptable.

The panel concluded that it is critical for us to foresee the risks of AI and proactively establish preventive mechanisms. This involves creating dynamic, forward-looking policies that are built around the foundational principle of being inclusive from the outset.

Rather than waiting for new forms of bias or harm to manifest before creating rules, policymakers and developers must anticipate potential negative consequences across diverse population groups and integrate ethical safeguards into the development lifecycle. This preventive, inclusive, and future-oriented strategy is the only way to ensure that AI serves humanity broadly and justly.