Hi there! This is Nick, and in these monthly articles, I share my understanding of key concepts in responsible AI and invite you to follow my thought process. It’s great for those who have just started caring about responsible and trustworthy AI, but you can also find new inspiration here if you are more experienced.
Before we start: There is something to celebrate, as this newsletter has now crossed the 🎉 100-subscription-mark 🎉 — how cool is that, given my almost non-existent marketing on here?! It’s great to see that you are interested in responsible AI and in my writings! Thank you for subscribing and sharing so far!
There are three key drivers behind trustworthy AI systems, as I’ve touched upon in last month’s article — ethics, non-harmfulness, and lawfulness. As promised, we will now take a first look at the regulation of AI development and usage, and why we do not just self-govern. And of course, we will also take a look at the EU AI Act.
But let’s start with a broader discussion on regulation first.
Why do we need to regulate at all?
Similar to the discussion around why trust matters when interacting with others, regulation ideally makes interactions simpler and helps us solve situations in which there is uncertainty or a lack of trust.
That is because laws are the actual, written rules that apply to everyone and come with unwanted consequences if they are not adhered to.
While ethical principles are the foundation for regulation and can help handle situations in which no law applies, not everyone acts with integrity. This is due to misaligned incentives. In the worst case, someone’s incentives are completely contrary to yours, potentially harming you, while that other person does not care a single bit about your outcome.
Self-regulation may only work well if it sufficiently aligns with your incentives, for instance when you use it as a competitive edge or reputational asset to attract customers. Misaligned incentives, combined with internal policies that are too flexible and a lack of enforcement of those policies, as is the case with ethical principles, lead to self-governance that is bound to fail.
Contrary incentives become a serious problem if there are large imbalances, such as between Big Tech and the rest of society. There have been many examples of how disproportionate control over highly impactful technologies can lead to adverse outcomes. Of course, this now applies to AI model providers as well.
Two former board members of OpenAI made some big press last year by stating that self-regulation does not work when profit incentives put too much pressure on a company, eventually leading to outcomes that are not aligned with societal interests. They also mentioned that if anyone had been able to successfully self-govern AI model development and usage, it would have been OpenAI.
Just a few days ago, OpenAI published a new version of its GPT-4o model, capable of generating detailed images. That led to a new wave of debates and accusations of intellectual property theft (again), as many of the generated images and memes are close look-alikes of the styles of renowned illustrators. Prior to that, many book authors also raised their voices because their books had been used by vendors to train AI models, without permission or appropriate compensation for the content owners. All of that to the profit of the model providers.

You may think, “Hey, that’s just some images and text. Maybe some original content has been used for the training, yes, but it’s not like anyone really got harmed?!” — yes and no. Either way, work on which people may have spent years of their lives was taken for unauthorized commercialization. That is a serious infringement and could harm creators financially.
The fact that big AI companies already ignore the law and people’s rights in these “less harmful” IP infringements is a concerning place to start mass AI development and usage from. Essentially, these vendors are currently outlaws who care about their own incentives and only align them with the law when it is convenient or marketable. The question now is: Do we trust our AI model or system providers enough when it also comes to safety-related risks? Will they protect all of us, or only those who pay for it?
I have the feeling that the large-scale use of AI technologies is now uncovering issues that were major problems before, but that we may have forgotten about. There has been theft of intellectual property before. There have been privacy infringements before. There have been scams and false information that harmed people before. There has been (subliminal) manipulation before. And there has been physical harm to people before because of faulty or misused technology. We struggled to avoid these incidents before, and we will struggle even more as AI systems accelerate and multiply them, unless we find a working solution. (AI-specific) regulation can be one such solution.
Regulation creates accountability and obligations to meet certain quality standards, especially when looking at product safety or AI regulation. When you know that there is at least one external authority with large enforcement power taking a closer look at what you have done, the fines suddenly enter your profit calculation — creating alignment of incentives again.
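To make that shift concrete, here is a minimal back-of-the-envelope sketch in Python. All numbers are hypothetical and purely illustrative, not legal or financial advice: without credible enforcement, the expected fine is close to zero and ignoring the rules looks “rational”; once an authority can actually detect and sanction violations, compliance becomes the cheaper option.

```python
# Purely illustrative: hypothetical numbers, not legal or financial advice.
def expected_violation_cost(fine: float, detection_probability: float) -> float:
    """Expected cost of breaking the rules once an enforcement authority exists."""
    return fine * detection_probability

compliance_cost = 400_000   # hypothetical cost of putting governance measures in place
fine = 15_000_000           # hypothetical fine for a violation

for p in (0.0, 0.05, 0.2):
    risk = expected_violation_cost(fine, p)
    choice = "comply" if compliance_cost < risk else "ignore the rules"
    print(f"p(detection) = {p:.2f}: expected fine = {risk:,.0f} -> cheaper to {choice}")
```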
And there is a nice side effect to it: If you end up building a product, such as an AI system, under constraints, such as certain quality standards required by the law, you eventually end up with a product that is better and safer. I’d argue that makes your product a more attractive choice for customers in the long term.
To sum up, regulation essentially protects others from you, you from others, and sometimes even you from yourself. Ideally, it creates fair outcomes and smooths interactions between people and economic actors. If done well, laws can also save us time, as we do not need to think through complex considerations over and over again, as we “just need to comply” with the rules. In the context of AI, regulation has the power to set new standards for how we as a society actually want to develop and use AI responsibly.
What’s the cost of regulation?
Regulation is an important instrument which becomes even more important when dealing with AI. But laws are far from perfect. So what are the risks and costs of regulation?
A poorly designed regulation can lead to outcomes that leave us worse off or that don’t improve the current situation. Here are some reasons for such a lack of effectiveness:
Lack of enforcement power: When people and organizations can act illegally because the consequences are negligible or because no one prosecutes them, laws lose the very strength that ethical principles lack — their enforcement power. That is what we currently see with large language models being trained on IP-protected content.
Poor and complex policymaking process: The design process itself can cause ineffectiveness as well, especially when the regulation is meant to apply to large markets and various countries. The number of people in the EU, for instance, and the numerous legislative institutions make it difficult to produce a law that properly reflects the interests of society and that arrives on time. This is an almost impossible task, making some laws seem useless to many people. At the same time, huge corporations have the resources to lobby their interests into the law. The EU AI Act has been a prime example of that complexity. Check out Gabriele Mazzini’s TED Talk about the design process of the EU AI Act to better understand this one. He was the lead author of the EU AI Act — and he was not so happy with the outcome in the end.
Risk of deindustrialization and destruction of innovation: The classics — if the compliance burden becomes too big and organizations have sufficient flexibility to move, they may relocate to other markets or focus their business there. Or they won’t invest in new (AI) technologies, risking a loss of competitiveness while becoming increasingly dependent on those who can “bypass” regulation thanks to their vast resources. That makes the regulatory burden relatively larger for the many smaller players with limited resources, and smaller for the few large and powerful ones.
Economies powered by administration and consulting: An increased compliance burden and stifled innovation also mean that more people spend more time and resources on administration and bureaucracy. Moreover, additional policies that are hard to comprehend and difficult to operationalize lead to more (and often expensive) consulting work. Less time can then be spent on productive, outcome-focused work. Essentially, there is a big risk of shifting the focus away from creating value-adding products and services towards fulfilling regulatory requirements. The key question here is to what degree the latter will be value-adding rather than value-eroding for society.
Vagueness: Last but not least, regulation can be less effective when it is written vaguely. A key advantage of the law is that it creates legal certainty, or clear rules of how to play the game. But you also want a law to be as flexible as possible to cover many cases, especially when regulating rapidly changing AI technologies. If that balance doesn’t work out, companies may remain hesitant to invest in AI technologies: they are afraid to make the first move because they cannot interpret the law correctly in their context. They would rather not take the risk of being punished due to a lack of understanding on top of the already large risk of being an innovator. That is basically the same dynamic as deflation in an economy (something you want to avoid at all times). Unfortunately, I see this uncertainty around some of the EU AI Act’s rules in companies I talk to every week. Having read the EU AI Act more times than I can count, and having read plenty of lawyers’ opinions on it, I still struggle to interpret some of its provisions. This vagueness could ultimately also leave too much room for interpretation, so the law may fail to cover the cases it was intended to regulate.
To conclude, there is a lot that can go wrong on the policy side when regulating complex markets and technologies, leading to ineffective regulation.
Is (European) regulation now a liability or an asset?
I’d argue that regulation per se is value-adding, as there are usually quite good reasons for laws that benefit society, such as the ones discussed above.
However, if the circumstances of the economy are, let’s say, unfavorable, regulation may become a big issue. When there are existing structural problems and misalignments, I see regulation as the trigger that makes companies decide not to start up in Europe or not to invest in developing their own technologies. It’s the “final push”, so to say. Companies and people may then opt for foreign technology providers that have sufficient resources to deal with the regulation, and who may be incumbents anyway.
For various reasons, Europe lacks a competitive advantage in many areas of tech, or at least in commercializing it. Combined with large dependencies on a few other countries with big tech vendors, this can be a danger to Europe’s (digital) sovereignty, especially in times of big geopolitical tensions.
By now, Europe’s regulation has also become an instrument to protect against foreign companies and competition. It can set the standards for new technologies even if we are only using them rather than developing them, as is the case with AI and the EU AI Act. And, to a certain extent, regulation is also part of what makes the quality of living in Europe so good for many people.
I don’t have a definitive answer to the question of whether regulation is more of a liability or an asset for Europe. I tend to say that, at least for AI regulation, it’s an asset. Maybe you have a more definitive answer? Please let me know in the comments.
The state of AI regulation
The landscape of AI-specific regulations that are already in force or currently being implemented is still quite manageable. Most prominently, there is the European AI Act as the first big law with international reach. I’ll focus on that in a minute, and very likely also in future articles.
Many countries have been hesitant so far, but the number of AI legislation efforts has risen in the past months. In fact, even the US has quite a lot going on, at least at the state level. There is a number flying around of 781 AI bills currently in the making, regulating all kinds of AI systems, use cases (e.g. deepfakes or facial recognition), or sectors — contrary to what you may think about the US and its stance on regulation. The Colorado AI Act and New York City’s hiring law requiring bias assessments of AI are the most prominent ones here that are no longer in draft or already in effect.
If you want to explore what is going on around the world in terms of AI regulation, I can recommend Techie Ray’s “Global AI Regulation Tracker”. It’s a nice, interactive resource to get a quick overview of important and upcoming public policies. (Make sure to also check out his Substack!)
A brief overview of the EU AI Act
The EU AI Act mandates various new rules depending on the combination of how you are dealing with AI, the risk of the AI use case, and the type of AI system or model. This applies provided that the AI system is used for professional purposes and not solely for private ones.
What makes it especially interesting is that it does not only apply to organizations based in the European Union, but also to those outside the EU that distribute, sell, or deploy AI systems in the EU.
The EU AI Act describes certain use cases that pose a risk to the health and safety of a person. There are four main categories for these use cases:
Those with an unacceptable risk: These AI use cases, such as social scoring or predictive policing, have been prohibited since February 2025.
Those with a high risk: These use cases are the most heavily regulated ones, such as safety components within different products, operating critical infrastructure, candidate evaluation, or credit scoring. Providers of AI systems for these use cases in particular need to put lots of governance measures in place by August 2026.
Those which require transparency measures (sometimes called “limited risk”): Providers and deployers of AI systems with these use cases need to fulfil some lighter requirements, also by August 2026, for example when your AI system interacts with people, like a chatbot does.
All other use cases and systems (sometimes called “minimal risk”): These can be used without any further regulatory requirements.
If you are providing AI systems with a high risk, you will definitely need to build up the standard repertoire of AI governance: having a quality and risk management system in place, keeping logs, documenting your development, implementing human oversight measures, and much more. Some obligations also apply to those in different roles, like deployers or distributors of the AI system.
And if you also happen to be a provider of a general-purpose AI model (e.g. GPT-4o), or if you partly modify one, you will need to comply with another set of obligations on that — already by August 2025.
Most companies will likely not be providers of high-risk AI use cases or providers of general-purpose AI models. But there is still a lot of compliance work to be done, as you would at least need to classify all of your AI systems and use cases.
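If you wonder what such a classification exercise could look like in practice, here is a minimal, purely illustrative sketch of an internal AI use-case inventory in Python. The four risk tiers and the dates follow the overview above; the example use cases and the simple mapping are my own assumptions and certainly not a legal classification method.

```python
# Purely illustrative sketch of an internal AI use-case inventory.
# Risk tiers and dates follow the overview above; the mapping of concrete
# use cases to tiers is simplified and NOT a legal classification.
from dataclasses import dataclass

RISK_TIERS = {
    "unacceptable": "Prohibited since February 2025 (e.g. social scoring, predictive policing)",
    "high": "Extensive provider obligations by August 2026 (risk management, logging, human oversight, ...)",
    "transparency": "Lighter transparency duties by August 2026 (e.g. disclose that users interact with AI)",
    "minimal": "No further requirements under the EU AI Act",
}

@dataclass
class UseCase:
    name: str
    tier: str  # one of the keys in RISK_TIERS

inventory = [
    UseCase("Social scoring of citizens", "unacceptable"),
    UseCase("CV screening for hiring decisions", "high"),
    UseCase("Customer support chatbot", "transparency"),
    UseCase("Spam filter for internal email", "minimal"),
]

for use_case in inventory:
    print(f"{use_case.name}: {use_case.tier} -> {RISK_TIERS[use_case.tier]}")
```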
Anyway, the EU AI Act is dominating AI governance discussions in Europe right now. Thankfully, it brings greater awareness to trustworthy and responsible AI. Unfortunately, the trigger was yet another regulation rather than the other important aspects of AI governance.
A message to AI
What Claude says on why we would need AI regulation
What ChatGPT considers to be the pros and cons of Europe’s regulation:
ChatGPT lets us win something for Europe this time — namely responsible AI. 😉
What’s next
That’s it for today. This one took a broader perspective on AI regulation, as I feel like there is a lot of “hate” against something like the EU AI Act out there. This specific law may not be perfect, yes, but it gets everyone moving in the right direction of responsible AI. What do you think?
Next up, we get into operationalizing AI governance, standardization and managing risks. Make sure to subscribe.
See you next month.
— Nick
Connect with me
📚 Good resources to continue learning
The Economist article with two ex-OpenAI board members (Helen Toner and Tasha McCauley) on why AI companies shouldn’t self-govern
Techie Ray’s global AI regulation tracker (also check out his Substack)
Gabriele Mazzini’s TED Talk on how he helped to shape the EU AI Act
The EU AI Act and a quick overview of it on the European Commission’s site
A self-assessment tool for the EU AI Act #1
A self-assessment tool for the EU AI Act #2 (also on Substack)
The AI bills in the USA, currently in draft
Congrats on the milestone Nick. One thing I'd add on the pros / cons of regulation in AI: Tech regulation in particular can be misused over time to entrench competitive positions in favour of large, incumbent organisations. A regulation in a new domain sets out to prevent harm, and so creates a variety of standards conformance, inspection, reporting, and other compliance requirements. At first, industry decries the burden this will have on all businesses, how it will hinder innovation and increase costs. But as the largest companies over time allocate resources and implement regulatory mechanisms to comply, they internalise the cost and spread it over existing programs. It becomes simply a marginal cost. And so, their position changes - it's no longer an impediment, it's an important differentiator - highlighting that those companies who can't demonstrate compliance with an array of certifications/compliances are less trustworthy. At that point the well-meaning regulation has entrenched a competitive moat, a cost of entry that other smaller, probably more nimble and perhaps even more trustworthy companies can't afford to cross. This restricts market access and competition in the medium term. Then with fewer competitor alternatives, little differentiation against their fellow incumbents, and tight relationships with regulators, those large companies may be able to wind back their efforts on compliance while maintaining the ticked checkboxes of compliance. Unfortunately, it seems this may be where the EU AI Act is headed, and quite likely where the extraordinary number of other legislative initiatives in AI around the world seem to be heading too. Despite what may well be very good intentions, I doubt a vast patchwork of regulatory instruments will increase AI safety, but it certainly will have an impact on competitiveness of smaller market participants.
Great article Nick - very comprehensive. And really appreciate the shout out :)