Hi there! This is Nick, and in these monthly articles, I share my understanding of key concepts in responsible AI and invite you into my thought process. It’s great for those who have just started caring about responsible and trustworthy AI, but you can also find new inspiration here if you are more experienced.
There are many good reasons for making responsible AI a strategic priority in your organization: avoiding harmful outcomes, building trust, adhering to ethical principles and complying with the law.
In organizational AI governance, we want to put the concepts behind trustworthy and responsible AI into practice using a number of methods and tools. Essentially, we want to make the abstract principles more tangible and actionable in daily business.
This can be quite challenging, however, because we are faced with lots of new technologies as well as new unknowns. On top of that, AI governance involves plenty of people with all sorts of backgrounds, all of whom we need to bring together.
In this article, I show you how to take the first steps in governing AI systems at your organization. These steps are similar no matter the organization’s size, though the scope and depth of a single step may vary with size. If you are currently tasked with AI governance, following these steps can help you get started quickly and in a structured way.
With great governance comes great advantage
When governing AI effectively, you can quickly become a leader in trustworthy and responsible AI. As of now, this will probably differentiate you from 90% of your competition and let you reap the benefits of responsible AI while your competitors are still exploring generative AI.
I won’t go into the great benefits of responsible AI in this article, as I’ve described them in my last articles, which you can read here. But I do want to speak about the advantages of AI governance that tend to be overlooked.
Having a system to manage and govern your AI activities and systems not only helps you to comply with rules and audit your systems, but also to keep track of changes effectively. These changes include the rapid technological advancements in AI, new AI-specific regulation, and also market sentiments.
Further, collecting and centralizing all information around AI to perform “classical” governance tasks is a great opportunity to recycle these insights for strategic business decision-making.
How does that benefit you? It gives you a new form of business intelligence that can help you to understand …
… what is going on in your organization;
… which AI initiatives add value and which do not;
… what your AI systems are capable of;
… what risks you are exposed to and what this can cost you;
… what markets are demanding in AI;
… what scenarios you are prepared for and are anticipating.
If you think of AI governance as a mere compliance activity, you are stuck in the old way of thinking about governance. This can be totally fine for your organization, but be aware that it limits your outcomes. AI governance can be more than just a cost center; it can turn into an attributable revenue driver.
Those who understand this first will be much more effective in governing AI, and they are, in my view, the actual AI-first thinkers: both pro innovation and pro value-adding responsible AI.
Requirements for good AI governance
To actually benefit from AI governance and to make some of the steps to govern AI possible at all, you do need to keep a few things in mind. Here are five important factors to consider that help you to build a great AI governance program:
Be proactive: Being in a position in which you are proactive, rather than reactive, allows you to avoid incidents and to adapt to major changes. For that, you need someone in your organization who leads and coordinates the AI governance program and pushes this agenda internally. Preferably, this would be someone whose voice is already highly valued, who can get buy-in, and who understands how to align business with governance goals. This does not need to be a single person, and it does not mean that every governance task is owned by that function. Their main job would be bringing together all relevant stakeholders (e.g. information security, data protection, management, technical staff, risk management) at one table to discuss how to prepare for different kinds of scenarios.
Monitor change: Your organization is constantly changing, your AI models and systems may change, your vendors’ AI systems may change, or regulation may be introduced. Just take a look at the last two years in the markets — a lot has changed. You need to be able to understand current technological advancements (this alone can be a full-time job), regulatory updates and customer demands to get better at anticipating changing requirements for your AI governance.
Be flexible and adaptive: This is directly related to being proactive and being able to monitor change. AI governance is not a static, one-off activity. Changing conditions, be they internal or external, can quickly influence the way you deal with AI and the way you should govern it. The whole space is still maturing, constantly revealing new best practices. Be prepared for updates to your AI governance from the beginning. And make sure to keep feedback from important stakeholders coming in while you are building out your AI governance system, as this helps to improve it.
Standardize, but make it yours: There is a lot of potential to standardize the processes around AI development, use, and governance, allowing you to be more efficient. Taking one of the more popular AI governance frameworks out there is a great way to quickly get up to speed, and to govern AI in a way that is recognized by many as the “right way” to do it. But AI governance is also highly individual. So, take your time along the way to customize such standard frameworks where needed: aggregate, cut, simplify, or extend certain parts and processes. That enables you to be both efficient and effective with your own framework.
Don’t kill innovation: This cannot be stressed enough. Poor governance slows down innovation and reduces the efficiency and utility of your AI. Do try to make your AI governance support innovation and development as much as possible. Speak with your developers and learn how they could best meet governance requirements without largely disrupting their workflows. Thankfully, most people I’ve met in the AI governance space are well aware of this and are advocates for innovation. Be one as well.
Starting with AI Governance (Part I)
Now, let’s get into how you actually start governing AI. This is a very condensed description of how many organizations out there are approaching their AI governance programs — combined with my learnings from the past years working in the field, my conversations with many smart people leading AI governance in their companies, and what the literature says. I may go into more detail in the future, but this should be a sufficient basis for you to get going.
AI governance is a very individual topic: organizations are motivated by different reasons and operate in different contexts. Therefore, the scope and depth of these steps and the single activities within them, and sometimes also their order, vary. Some of these steps are even interchangeable in their order or should be executed in parallel. Be aware that most tasks in AI governance can take weeks or months to complete, if they are ever fully completed at all. As said, AI governance is not a static activity.
In the end, these organizations all achieve similar outcomes in their AI governance — all roads lead to Rome. Take this approach, think about it, and adapt it to your situation.
Step 1: Determine your position
This is especially important when you are starting completely from scratch: understand what role AI is playing in your organization. Take your time to get this right.
You need to understand how your AI initiatives are structured, meaning whether AI projects are centrally coordinated or whether individual teams and departments make their own decisions on how to make use of AI.
Also, try to answer the question of why you want (or need) to govern AI, as this can influence the scope of your AI governance program later on.
This exercise can already give you an indication of who the internal “responsible AI champions” could be, who could shepherd the governance program (if not you), and where buy-in will be needed.
Step 2: Know your AI
You’ll never be able to govern AI if you don’t know what systems are used, developed, sold, or planned.
Get that understanding and overview of your AI systems by gathering them through intake forms sent out to individual teams, by screening your IT tool repositories, by screening existing policies around AI, and by finding ways to identify shadow AI.
Tip: Go one step further and do collect AI use cases rather than just the systems. Map the use cases to the individual systems and teams. That will help you, e.g., with EU AI Act compliance later on. Use case cards can be a good way to document such AI activities.
This collection or inventory of AI use cases will be a) a key element of your new business intelligence, and b) the starting point for the governance work behind the individual AI systems and use cases. If you decide to share that inventory with your whole company, it can also help to create more transparency on which systems staff are allowed to use.
Collecting and analyzing AI use cases can be a very demanding task. Depending on your organization’s size as well as your existing documentation structure, it can take weeks until you have a proper overview of all relevant and correct information. Don’t underestimate the manual (research) effort, combined with the time you need to build up your structure for assessing and documenting the use cases. Make sure to have enough people on your team and try to facilitate your work with the helpful resources, methods and tools that are already out there.
If you fall under the scope of the EU AI Act and don’t yet have something like an AI registry, you had better start soon. The EU AI Act already prohibits some AI uses. Therefore, every organization within the scope of the Act must already know what types of AI use cases it has.
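To make the inventory from this step more tangible, here is a minimal sketch of how such a use-case registry could look in code. Everything here is a hypothetical illustration: the `UseCase` fields, the example entries, and the simplified risk tiers (which only loosely follow the EU AI Act’s risk-based approach) are my assumptions, not an official taxonomy or compliance tool.

```python
from dataclasses import dataclass
from enum import Enum

class RiskTier(Enum):
    # Simplified tiers, loosely inspired by the EU AI Act's risk-based approach
    PROHIBITED = "prohibited"
    HIGH = "high"
    LIMITED = "limited"
    MINIMAL = "minimal"

@dataclass
class UseCase:
    name: str            # e.g. "CV pre-screening for job applicants"
    system: str          # which AI system or tool backs this use case
    owner_team: str      # the team accountable for the use case
    purpose: str         # short description of the intended use
    risk_tier: RiskTier  # assessed tier (hypothetical classification)

def needs_immediate_action(inventory: list[UseCase]) -> list[UseCase]:
    """Return the use cases that are prohibited or high-risk,
    i.e. those that need governance attention first."""
    urgent = {RiskTier.PROHIBITED, RiskTier.HIGH}
    return [uc for uc in inventory if uc.risk_tier in urgent]

# Illustrative inventory entries (fictional systems and teams)
inventory = [
    UseCase("Meeting-notes summarizer", "VendorLLM", "Sales",
            "Summarize internal call transcripts", RiskTier.MINIMAL),
    UseCase("CV pre-screening", "InHouseRanker", "HR",
            "Rank incoming job applications", RiskTier.HIGH),
]

for uc in needs_immediate_action(inventory):
    print(uc.name)
```

In practice, such a registry would live in a dedicated tool or database rather than in code, but even a simple structured list like this gives you the filtering and reporting that the business-intelligence angle above depends on.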
Step 3: Create your governance framework
Once you have an idea of what you want to achieve and what use cases you have, you can start selecting a standard governance framework as a foundation.
Do begin with the legal framework that is applicable in your situation. Why? Because for most organizations, it’s easier to start with compliance, and because this ensures you are working on the things you will have to do anyway. For most, regulatory compliance is also the trigger for AI governance, if it is not demanded by clients.
Second, continue building out your governance framework by extending or adapting requirements to align with your goals. Take a look at the popular AI governance frameworks (e.g. the NIST AI RMF or ISO/IEC 42001), but also at best practices in your industry and at frameworks from adjacent fields (e.g. security or data protection). Pick the parts that are relevant, useful, and fitting for your organization, depending on the way you are dealing with AI, i.e. whether you are mainly procuring third-party AI systems or mainly developing AI yourself.
For some companies, the second part might be the starting point, e.g. because they want to become thought-leaders in responsible AI or because there is no (significant) AI law applicable.
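One lightweight way to keep this customization auditable is a traceability mapping from your internal requirements back to their framework sources. In the sketch below, only the framework names (NIST AI RMF, ISO/IEC 42001, EU AI Act) are real; the requirement texts and the mapping itself are purely illustrative assumptions.

```python
# Hypothetical traceability map: internal requirement -> frameworks it draws on.
# This lets you show where each part of your custom framework came from.
requirement_sources = {
    "Maintain an AI use-case inventory": ["ISO/IEC 42001", "EU AI Act"],
    "Assess and document risks per use case": ["NIST AI RMF"],
    "Define human-oversight responsibilities": ["NIST AI RMF", "ISO/IEC 42001"],
}

def coverage(framework: str) -> list[str]:
    """List the internal requirements that draw on a given framework."""
    return [req for req, sources in requirement_sources.items()
            if framework in sources]

print(coverage("NIST AI RMF"))
```

A mapping like this also makes the later “extend or cut” decisions easier to defend: you can see at a glance which framework a requirement came from before you simplify or drop it.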
Step 4: Become AI literate
AI literacy is a key foundation of any AI governance. This is, in my opinion, a very interesting factor that is often given too little attention.
All of your AI governance efforts are worth nothing if your teams are not acting responsibly and not complying with your guidelines. Reasons for that could be that a) your staff is not aware of how to use AI systems properly and what risks are associated with them, b) they are not aware of your guidelines and governance goals, or c) they don’t understand how your governance setup helps and how it relates to their day-to-day work.
Therefore, you need to raise awareness of responsible AI and become AI literate as a whole organization. An AI literacy program is a great way to equip your workforce with new skills, prepare employees for new technologies so they can be more productive, and make your AI governance effective.
Obviously, the effectiveness of the AI literacy program also depends on the way you are training your employees. Is it just some presentation or PDF with black and white text that employees just need to read or acknowledge? Or is it a more interactive training with quizzes that actually challenge your employees to understand what’s behind AI and AI governance?
Do try to make this part fun, interactive and also role-specific. Train your whole workforce on the basics, and include your organizational context and your AI systems in the trainings. Highlight your (planned) AI governance activities, to make the workforce aware of the way you are tackling specific risks, the way you are promoting responsible AI, and everyone’s responsibilities.
Some roles may require additional training, especially those developing, deploying or assessing the AI systems. You will likely need external training for that as well. Try to also give your staff the means to engage with other people outside your organization, allowing them to identify and adopt best practices and to bring these into your company.
Your AI literacy program will need to be continuously updated, not only because technology and regulation are progressing, but also because your AI governance program will progress fast in the first few months.
Outlook Part II: Progressing into execution
There is much more to governing AI than I can cover in a single article. I’ll continue with the remaining steps in the next article, so make sure to subscribe for free and come back next month.
I will only say this much: to progress into a mode of pure execution, you will need to map out all your governance requirements, processes, and roles and responsibilities across the AI lifecycle. Most will do themselves a favor by formalizing all of that into an AI policy before the frontline work begins.
A message to AI
I’ve asked Claude about the steps to begin with when starting out with AI governance. Here is the response I got:
Claude would start by establishing a governance committee — a good move! This can be part of the “Determine your position” step, and it is helpful in bigger companies in which you are not the only person setting the agenda.
Afterward, Claude names lots of things that are indeed important but would come in at a later point in time, e.g. incident management or audits. We will get into those as well. It also brings in training and awareness — great!
What’s next
Thanks for reading Notes on Responsible AI today! Are you already governing AI in your organization? How did you implement your governance program so far? Do these steps sound familiar to you?
Please let me know by sending me a message or by commenting on today’s post!
Next up, we will continue with the remaining steps to get into a mode of effective, continuous AI governance. Subscribe for free, if you don’t want to miss out on that.
See you next month,
— Nick
Connect with me
📚 Good resources to continue learning
Living repository on AI literacy best practices published by the European AI Office within the AI Pact initiative
Paper “Use case cards: a use case reporting framework inspired by the European AI Act” published in the Ethics and Information Technology journal
The GSMA “Responsible AI Maturity Roadmap”, a consolidation of the AI governance process coming from telecommunication firms, accompanied by a list of best practices and a step-by-step guide
The ISO/IEC 42001 (first AI management system standard)