For over two years now, I have been studying how AI can be used responsibly and what it takes to govern AI so that it works to our benefit. This is a complex topic and I still have a lot to learn, not least given the sheer amount of literature and resources available.
With these monthly articles, I want to share my understanding of key concepts in responsible AI and give you a chance to follow my thought process. This can be useful if you have just started caring about responsible and trustworthy AI; and if you are more experienced, you may still find some new inspiration here.
Some background first
I have always been fascinated by the idea of artificial intelligence, by machines becoming as smart as humans. AI is that exciting, mysterious, and abstract thing with a pinch of humanity to it, a concept that has fascinated many of us not just today but for decades. In the end, AI systems are the logical consequence of the evolution of software and are "just" automated algorithms with applied statistics, making possible everything that is computable. But that doesn't sound as sexy, does it?
I view AI systems as very useful technologies for becoming more efficient and productive while maintaining quality and reducing the input of labor (or manual work). This is attractive because we could produce and sell more while freeing up time that humans can spend on more useful or more pleasurable activities. This is a rather economic perspective, to be fair. And while it sounds good in theory, there is quite some skepticism among economists about the productivity promises of AI (I can recommend reading Daron Acemoğlu and David Autor if you want to dive into that).
AI offers seemingly endless possibilities, or at least that has been the pitch. You and I know that the big promised benefits of generative AI, for instance, have yet to materialize. Obviously, there is still a big gap between the capabilities of AI systems and those of humans. But there are numerous narrow applications where AI systems have been in use for years, because they are a good fit and handle some tasks much faster than a human can. These opportunities should be exploited for the greater good.
However, as with every technology, there are consequences and risks associated with using AI systems, and they can be used to advantage or to disadvantage people. Also, a machine that seems to act intelligently, that is "out of control" and that is blindly trusted by its users can be quite frightening, in my opinion. A lack of understanding of this technology and its responsible use, combined with a lack of proper control mechanisms, makes it more likely that a risk materializes and someone ends up disadvantaged, especially given how quickly AI tools are being adopted right now.
I believe that AI systems without proper governance, especially those that are broadly available and generate content, are worsening existing structural problems in our society. One example, and in my opinion the biggest societal issue right now, is the speed at which mis- and disinformation spreads in the digital space, increasing polarization within society, eroding trust, and making conflicts more probable (concerns also highlighted in the recent "Global Risks Report 2025" by the World Economic Forum). It was an issue ten years ago, and it has become an even bigger one with the support of AI.
But responsible use of AI is also key when zooming in on the smaller, more tangible level. Whatever the use case may be, the consequences and risks of using AI should be evaluated and acted upon to avoid adverse outcomes, like unfair treatment of or harm to people.
That is why AI systems need to be properly governed to make sure they are trustworthy and used responsibly — and this is not solely the duty of a single government or firm (more on that later).
But AI governance also needs to be done appropriately, respecting the situation and context in which an AI system is used (e.g. high-stakes vs. low-stakes situations). If done poorly, governance not only slows down innovation but also reduces the efficiency and utility of AI. You can chat with "the world's most responsible AI model", GOODY-2, to see how useful the answers to your questions turn out to be …
Today, I'll start by exploring the role of trust and the meaning behind trustworthy and responsible AI.
Why trust matters
Trust is this invisible thing that everyone knows, that feels good and makes you optimistic. You don't really think about trust when it's there, but you notice it immediately when it's gone. But why do we need trust?
It's best to first think about a scenario in which there is no trust.
You can do a little thought experiment for that: Imagine you wake up tomorrow and nobody trusts anyone anymore. What would this world look like to you? How do you know that the first person you speak to is telling the truth? Is that person predictable in their decision-making? Is the person even a real person? You might not know the answers to such questions even where trust exists, but a lack of trust would change everyone's behavior dramatically.
Or maybe you have been in a situation in which your trust was abused by another person or a business; in other words, they revealed themselves to be untrustworthy. How did that affect your relationship with that person or business? Most likely it got worse, or you stopped interacting with them altogether.
Mistrust corrodes our ability to interact with others, leading to situations marked by skepticism, slow decision-making, discrimination, or instability, to name a few. In short, a lack of trust makes it far more complicated to work with each other.
Or put differently: trust makes everything simpler and basically lets you outsource parts of your decision-making. It is essentially the backbone of the economy, the unwritten agreement that makes things work in moments of uncertainty and when we rely on others. In the digital space, trust becomes even more important because we cannot see each other face to face.
And from a business point of view, being trustworthy signals that you are less risky to work with, or at least very good at managing potential risks. This makes it attractive for business partners and customers to keep engaging with you, as you are the safe and reliable choice. Who doesn't want that? To keep it that way, you need to actively invest in your trustworthiness. Trust is one of your most valuable assets: it wins and retains business relationships and employees, and it increases your brand value. A nice side effect is that you become more efficient as well. Interestingly, most organizations are well aware of this, yet many fail to invest in the measures that improve trustworthiness. I'd say this is a relatively easy way for a firm to differentiate itself.
But sometimes, (mis)trust can be a dilemma. If you want to explore further how this affects the way we interact with others, I can recommend the web game "The Evolution of Trust" by Nicky Case, which explains the topic with some basic game theory.
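If you like to tinker, here is a minimal sketch of the kind of iterated trust game that web game is built around, essentially a repeated prisoner's dilemma. The payoff values and the two example strategies below are illustrative assumptions of mine, not taken from Nicky Case's game.

```python
# Illustrative payoffs for (my move, their move); True = trust/cooperate, False = cheat.
PAYOFFS = {
    (True, True): (2, 2),    # mutual trust: both gain
    (True, False): (-1, 3),  # I trust, they cheat: I lose, they win big
    (False, True): (3, -1),  # I cheat a trusting partner
    (False, False): (0, 0),  # mutual distrust: nobody gains
}

def always_cheat(my_history, their_history):
    return False

def tit_for_tat(my_history, their_history):
    # Start by trusting, then mirror the partner's previous move.
    return their_history[-1] if their_history else True

def play(strategy_a, strategy_b, rounds=10):
    history_a, history_b = [], []
    score_a = score_b = 0
    for _ in range(rounds):
        move_a = strategy_a(history_a, history_b)
        move_b = strategy_b(history_b, history_a)
        pay_a, pay_b = PAYOFFS[(move_a, move_b)]
        score_a += pay_a
        score_b += pay_b
        history_a.append(move_a)
        history_b.append(move_b)
    return score_a, score_b

if __name__ == "__main__":
    print(play(tit_for_tat, tit_for_tat))   # repeated mutual trust pays off for both
    print(play(tit_for_tat, always_cheat))  # one defector leaves both far worse off
```

Playing tit_for_tat against itself shows how repeated mutual trust compounds, while a single always_cheat player leaves both sides well below what two trusting partners would earn, which is the dilemma in a nutshell.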
Now, how does all of this relate to AI systems? Technology or AI per se doesn't need our trust, as trust is placed in people. Technology can only be trustworthy in the sense of having a certain reliability, transparency, quality, and security. But speaking about trustworthy technology implicitly includes the trust we place in the people behind it (especially those developing or governing it). This means that a trustworthy technology amplifies the trust between people or organizations. Again, it's a way to invest in your own trustworthiness. As both users and providers increasingly rely on AI systems (sometimes even too much: hello, automation bias 👋🏼), we need to be sure that these systems are trustworthy, work to our benefit, and do not hinder our ability to interact.
What trustworthy and responsible AI means (to me)
When you start learning about trustworthy and responsible AI, the terminology can be rather confusing. Just as with the word "intelligence", there is no official or unified definition of these two concepts, and people often use both terms to express the same idea. What you will usually see is that responsible AI serves as the umbrella term in this space (I often use it that way as well).
I like to think about these terms as follows:
Trustworthy AI means that the AI system is developed in adherence to certain standards that ensure quality, compliance, and alignment with ethical principles. This part focuses on the system itself. Ultimately, the trustworthiness of a technology gives you a fair reason to use it (or to "trust it", as some would say).
Responsible AI means that the AI system is designed and used responsibly, taking into account the purpose and consequences of using the system and its alignment with set goals or values. This part focuses on the context of use, or the way you want to use the system.
You could say that trustworthy and responsible AI together mean aligning the outcomes of AI systems with organizational or societal values and goals, while maximizing benefits and reducing risks.
When done right, this combination allows you to avoid both over-reliance on AI systems and not using them at all.
You can take a look at the definitions from standardization institutes or governance frameworks, such as the one from NIST, to get a better feeling for the meaning as you continue to learn.
While you could spend hours searching for the perfect definition, it's more important to put these words into action and to understand that trustworthy and responsible AI involves various stakeholders along the value chain of AI systems:
Developers of AI need to ensure certain quality standards are upheld during development;
Deployers of AI need to validate this, monitor the systems and communicate, especially with users;
Users of AI need a sufficient understanding of the technology, including its proper use, as well as critical reflection skills;
Policymakers of AI need to regulate where necessary (i.e. where self-regulation or self-governance is not working out).
This is not an exhaustive list of responsibilities, but it should give you a first idea. Being dependent on a single stakeholder's governance measures is a risk, especially when, for example, a few influential AI model providers with little self-regulation serve a large group of deployers of AI systems.
This is where your own proactivity becomes important. What can you do to improve your trustworthiness and that of the systems you are using? Or how can you at least make sure that you are acting responsibly within your means?
You could ignore all of the above and hope for the best, and there is a fair chance that nothing goes wrong. But there is also a fair probability that something will go wrong, leading to an AI incident and to the loss of trust — one of your most valuable assets.
A message to AI
To my own surprise, I've never asked an AI model about trust and what it thinks about the meaning of trustworthy and responsible AI. Let’s see what it spits out.
What Gemini says on trust and its importance:
What ChatGPT says on trust in technology:
What’s next
Maybe the whole scene around responsible AI excites you as well, and perhaps you’ve gained a new perspective now. Or maybe you disagree with me on something? Either way, please share your thoughts with me.
I will release a new edition every month, and in the next articles I will go into more detail about the characteristics of trustworthy AI, ethics, governance, regulation, standards, and risk management. Subscribe for free if you don't want to miss out.
— Nick
Connect with me
📚 Good resources to continue reading
A repository of the latest (public) AI incidents by AIAAIC
The “Global Risks Report 2025” just published by the World Economic Forum with some numbers on the most pressing issues globally
The NIST glossary of terminology around trustworthy and responsible AI
A very interesting article from Sebastian Hallensleben (who leads the CEN-CENELEC JTC21 which creates the standards around the EU AI Act) on ecosystems of digital trust (German article)
A report from McKinsey on why digital trust matters with some interesting numbers on the ROI and the risks of (mis)trust
A study from Workday with some numbers around AI and trust
“What does it mean to trust a technology?” by Jack Stilgoe, to learn more about the trustworthiness of tech
“The Importance of Distrust in AI” by Tobias Peters and Roel Visser, exploring the psychology behind (not) trusting an AI system
The “2025 Trust Barometer” from Edelman with some numbers and insights around trust in general