Hi there! This is Nick, and in these monthly articles, I share my understanding of key concepts in responsible AI and invite you into my thought process. It’s great for those who have just started caring about responsible and trustworthy AI, and if you are more experienced, you may find new inspiration here as well.
This article is my attempt to help you understand what AI governance is and what elements of AI governance are related to each other in what way. If you are new to AI governance or if you don’t even know what it means, this is for you!
So, let’s get right into it.
What is AI governance?
AI governance describes the set of actions that need to be completed and the rules that need to be adhered to when making use of AI systems. If done right, this enables trustworthy and responsible use and development of AI. In other words: AI governance is the set of concrete activities that makes Responsible AI possible.
When taking a closer look at AI governance, you can split it into a few core elements:
Putting ethical principles into practice: First and foremost, effective AI governance brings the principles behind responsible and trustworthy AI to life. These principles define how and when to differentiate between good and bad actions and guide decision-making, especially in situations where there is no law or clear rule. Think of them as the desired state when someone is using AI. An example is the implementation of bias mitigation strategies, such as relying on representative data when training a machine learning model and testing AI systems for bias to ensure fairness.
Avoiding harm and managing risks: One of the most important elements of AI governance is risk management and the implementation of measures that ensure no person gets harmed, be it in their safety and health or in their rights. Risks vary widely in type and dimension and depend heavily on what kind of technology is used and how it is used. Therefore, it is important to assess the impact and risks of each AI use case.
Complying with policies and regulations: Most often, AI governance is thought of as a compliance function. While governing AI includes much more than complying with certain policies, compliance is another core element, as policies and laws usually try to enforce the other elements listed here. Organizations that care less about AI governance are externally forced to implement governance measures that comply with the law. Those that already have appropriate governance structures in place will have a much easier time complying with legal requirements. Either way, AI governance helps you adhere to the rules and avoid fines.
Having oversight and the ability to monitor AI: Through the common practices in governing AI, you are able to oversee AI models, use cases, and initiatives almost naturally. That creates business intelligence for informed decisions on how your organization can and should make use of AI. Further, AI governance not only helps you assess the status quo and review what you have done, but also creates monitoring functions that help you continuously align the use of AI with your objectives, laws, and technological progress.
Enabling people to understand AI: The last element, and in my opinion an often underrated one, is the creation of AI literacy. Not only developers should understand how AI systems work, but also those using them and even those affected by them. Training and awareness programs that build a sufficient understanding of a complex technology and its risks will always be the most impactful way to get people to use AI responsibly. This also creates a shared vision among people of how AI should and shouldn’t be used. At the same time, this element is quite difficult to integrate, because literacy is hard to measure and depends on each individual.
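To make the bias-testing idea from the first element more concrete, here is a minimal sketch of one common fairness check: the demographic parity difference, i.e. the gap in positive-prediction rates between groups. The function name, the data, and the groups are all illustrative, not from any real system or standard library.

```python
# Hypothetical bias check: demographic parity difference between groups.
# All names and data below are illustrative, not from a real system.

def demographic_parity_difference(predictions, groups):
    """Absolute gap between the highest and lowest positive-prediction rate."""
    rates = {}
    for pred, group in zip(predictions, groups):
        hits, total = rates.get(group, (0, 0))
        rates[group] = (hits + pred, total + 1)
    positive_rates = [hits / total for hits, total in rates.values()]
    return max(positive_rates) - min(positive_rates)

# Example: predictions (1 = approved) for applicants from groups "A" and "B".
preds = [1, 0, 1, 1, 0, 1, 0, 0]
groups = ["A", "A", "A", "A", "B", "B", "B", "B"]

gap = demographic_parity_difference(preds, groups)
print(f"Demographic parity difference: {gap:.2f}")  # 0.75 vs. 0.25 -> 0.50
```

A governance control might then require this gap to stay below an agreed threshold before a model goes live; what counts as an acceptable threshold is an organizational decision, not a technical constant.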
AI governance in the AI strategy
The AI strategy, be it in a formal or informal version, describes how and for what purpose you want to develop and use AI systems. Usually, this is part of the business strategy of an organization.
The AI governance system, which brings all the elements mentioned above to life, should be aligned with your overall strategy on AI. This means it is mostly determined by the AI strategy, though it can influence parts of the strategy as well.
A car analogy
To help you understand the relationships between AI strategy, policy and governance, let’s try to create an “analogy”:
Imagine you are on the road, sitting in a car. That car has a specific use case for you: bringing you from your home to your office, from A to B.
This car is essentially the machine or system that has a given purpose for you and that helps you achieve a goal, similar to an AI system. You can, to some extent, control your car and how it will impact you and other road users around you.
Your destination and the way you use your vehicle reflect your AI strategy that sets out various objectives for you.
When driving the car, you need to follow several rules that make sure that you can use your car for the intended purposes without disturbing or harming other road users and the surrounding environment (the AI policy and regulation).
While the rules of traffic usually work quite well to control the behavior of people and their cars, there are still cases in which rules get broken. A driver who forgets to indicate a lane change, another who runs a red light to get to their destination in time, and another who nearly crashes into a cyclist in their blind spot while taking a turn.
These rules get broken both intentionally and unintentionally, which can lead to traffic chaos and, in the worst case, to fatal crashes.
To avoid these unwanted outcomes, there are also various assistance systems built into your car: lane control, warnings if you exceed the speed limit, automated braking, power steering, or airbags in case of an accident.
Some of these assistance systems won’t overrule your human decision (e.g. you can still speed, even when a warning pops up on your dashboard), some are so useful that you don’t want to turn them off even when you don’t care about their safety function (e.g. power steering makes turning the car much easier), and some are so important that you as a normal driver cannot easily turn them off (e.g. the airbag that deploys in a crash).
And there are even external measures that affect how the car is driven, e.g. speed bumps, or that mitigate at least some damage in the worst cases, e.g. the guard rails on a highway or an emergency lane next to the road in case the brakes fail.
Both the rules and the assistance systems in the vehicle also depend on the type and purpose of the vehicle, i.e. the capabilities of an AI system or model, which are in turn directed by your AI strategy. Is it a small, affordable car that brings you from A to B? Is it an expensive car that allows you to go really fast? Or is it a truck with lots of horsepower to transport goods?
Essentially, the assistance systems in your vehicle, as well as other external measures on and off the road, reflect AI governance measures, which may (or may not) be directly visible to you. They not only keep you and others safe, but also make handling your vehicle easier. Which assistance systems or governance measures you can influence also depends a bit on whether you are on the user’s side (e.g. the driver of the car) or the builder’s side (e.g. the car manufacturer).
This is complemented by proper training in the driving school (the AI literacy) and proper inspection of the car before it gets on the road (keyword: audits and certification) — most Germans, for instance, don’t know the certification body TÜV for its ISO/IEC 42001 certification on AI management systems, but rather for the regular inspections of their cars every 2 years.
Obviously, AI systems are not cars (at least not yet), and AI governance is a whole different domain. However, I hope this “analogy” gave you another perspective and helped you sort your thoughts on AI governance and its interactions with AI strategy and policy.
How governing AI works
Let’s move on from cars and go one level deeper into how governing AI works. I’d like you to think about the relationships between AI systems and individual governance activities as follows:
You have a number of AI systems that are either developed by yourself or that you purchase from other vendors. How you keep track of these AI systems depends a bit on how you make use of AI. Usually, you want to track and oversee single AI use cases instead of systems.
Your AI use cases need to meet various requirements and carry certain risks that need to be mitigated. If you are making use of third-party AI, the same goes for your AI vendors. The requirements come from the governance framework that you want or need to apply (think policies, regulation, and the objectives of the ethical principles).
Both fulfilling the requirements and managing risks can be broken down into actions or controls. This is where the actual work gets done: conducting risk and impact assessments, training your staff on the basics of AI, testing a system for a certain metric, or implementing a new procedure for monitoring AI in operation. That can happen both on an organizational and on a use-case level, and includes one-off and recurring tasks. The quality of your AI governance will depend on the quality of your controls.
Most often, someone else will require proof that you comply with the requirements or that you can manage the risks associated with an AI system or use case. This can be internal management, an authority, or an auditor certifying against a given standard. Therefore, you will gather evidence, usually written documentation, on how you implemented your controls, i.e. what actions you took.
You iterate from here, repeating actions for each AI use case and improving or updating where needed to deploy AI systems confidently.
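The relationships described above can be sketched as a tiny data model: use cases hold requirements, requirements break down into controls, and controls accumulate evidence. The class names, fields, and example data are my own illustrative labels under these assumptions, not a standard schema.

```python
# Minimal sketch of the use case -> requirement -> control -> evidence chain.
# All names and data are illustrative, not a standard governance schema.
from dataclasses import dataclass, field

@dataclass
class Control:
    name: str
    recurring: bool                                # one-off vs. recurring task
    evidence: list = field(default_factory=list)   # written documentation

    def record(self, doc: str):
        """Attach a piece of evidence, e.g. a filed assessment report."""
        self.evidence.append(doc)

@dataclass
class Requirement:
    source: str                                    # policy, regulation, or principle
    controls: list = field(default_factory=list)

@dataclass
class UseCase:
    name: str
    requirements: list = field(default_factory=list)

    def open_items(self):
        """Controls that still lack evidence of implementation."""
        return [c.name for r in self.requirements for c in r.controls
                if not c.evidence]

# Example: one use case, one requirement, two controls.
chatbot = UseCase("Customer support chatbot")
req = Requirement(source="Internal AI policy, section 4")
req.controls = [Control("Risk and impact assessment", recurring=False),
                Control("Operational monitoring", recurring=True)]
chatbot.requirements.append(req)

req.controls[0].record("assessment-2025-06.pdf")
print(chatbot.open_items())  # -> ['Operational monitoring']
```

The iteration the paragraph above describes then amounts to re-running `open_items()` per use case and closing the gaps, with recurring controls re-opening on their schedule.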
A complex undertaking
For all of that, various people from different disciplines and with different motivations will need to take on different responsibilities to fulfil these tasks, especially in big companies. This can make AI governance quite complex. The complexity also depends on the way you make use of AI and on the organizational context. While global standards on how to govern AI are in the making, each organization will need to adapt to its individual situation and objectives. Nevertheless, some issues in governing AI still require practical solutions that have yet to be invented, e.g. mitigating Shadow AI effectively or managing dependencies on third-party AI providers.
On a positive note, for most organizations, governing AI is much simpler than the countless academic discussions on the topic may suggest. By now, there is also a large body of practical research and guidelines, a growing community of AI governance practitioners who share experiences and best practices, and useful tools and methods that make AI governance less complex.
What’s next
If you want to get started with AI governance, check out my last two articles, in which I describe how you can go from collecting your AI use cases and choosing a governance framework to translating requirements into day-to-day execution. Or check out the other articles and resources below.
With that, I end this first “season” on Notes on Responsible AI, as I will go into a summer break now to spend some time cycling (will try not to end up in a car’s blind spot though 😉).
See you soon!
— Nick
Connect with me
📚 Good resources to continue learning
Learn why trust matters, not only in technologies
Understand what principles stand behind trustworthy and responsible AI
Learn why pure self-regulation of AI doesn’t make sense