Doing Things Right
AI Ethics and Governance
The main difference between traditional IT and AI-driven technology is that the former allows decisions to be fully tracked. With IFs, THENs, and ELSEs, most actions (and mistakes) can be reproduced. Artificial Intelligence, however, is built around the promise that it reacts flexibly to varying inputs, without always following a reproducible logic.
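To make the contrast concrete, here is a minimal sketch of a traditional rule-based decision. The refund-approval rule and its thresholds are hypothetical, chosen purely for illustration; the point is that every outcome can be traced to an explicit branch.

```python
# A traditional rule-based decision: every outcome maps to an explicit
# branch, so the same input always produces the same, auditable result.
def approve_refund(amount: float, days_since_purchase: int) -> bool:
    if days_since_purchase > 30:
        return False  # outside the return window
    elif amount > 500:
        return False  # large amounts need manual review
    else:
        return True   # within policy: approve automatically

print(approve_refund(120.0, 10))  # True: in window, small amount
print(approve_refund(120.0, 45))  # False: outside the return window
```

An AI model making the same call offers no such branch-by-branch audit trail, which is exactly what creates the governance challenges discussed below.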
First of all, AI can make simple mistakes that are hard to detect, because many AI models don’t expose the logic behind their actions. If that leads to bad customer experiences or missed opportunities, it damages the bottom line. A badly set up AI chatbot can turn many people away without anyone noticing, precisely because it can be deployed so quickly, while in fact requiring more testing than conventional solutions.
The bigger and riskier part is that AI creates ethical and legal liabilities far beyond those of traditional IT systems. Using AI makes many data protection and other regulations (e.g. the EU AI Act) relevant, and as many AI models handle data in the cloud, their actions can pose additional risks. Understanding and managing those risks is essential. Establishing solid AI governance and ethics rules is therefore imperative for any organization using the technology. We at 9senses can help you navigate this part of AI too.

AI vs. Human Error
While we typically tolerate human error, society is far less ready to accept technology failure, particularly when it endangers lives or creates unfair outcomes. Self-driving vehicle manufacturers have experienced that problem first-hand: while human driving is a dynamic process in which we inherently accept risk, we don’t accept the same error rate from fully automated vehicles. When in doubt, self-driving cars are thus more prone to hitting the brakes where human drivers wouldn’t blink. As a result, being rear-ended by surprised drivers has become the predominant cause of accidents involving self-driving vehicles, simply because they exercise so much caution.
When setting up AI solutions, particularly those that can impact lives, this societal context needs to be reflected in rules and guidelines.
Key AI Governance and Ethics Topics
There are a few key aspects to keep in mind when introducing AI to your business. While we think that the benefits of AI by far outweigh its risks when managed properly, not spending sufficient time on assessing and managing those aspects can become very costly.
Data Protection
AI poses entirely new risks for data protection. Many processes, for example the use of large language models, take place in the cloud, and unlike conventional processing, the way data is handled there is much less transparent. If sensitive customer or employee data is processed that way, many jurisdictions require explicit consent. Additional risk stems from the fact that AI processes are not necessarily fully traceable and reproducible. Evaluating the risks and possible scenarios and covering them in data management and protection policies is thus essential.
Governance (and the EU AI Act)

Just as we set rules for human behavior, we have to establish the same for AI systems. Solid governance rules and controls are thus required, particularly as some aspects fall under binding regulatory frameworks. Among them are data protection regulations, but equally more specific AI-related rules, such as the EU AI Act.
Liability Risks
Decisions made by AI that create negative outcomes for others will become a liability for organizations like any other act; there will be no hiding behind “the AI did it.” Quite the contrary: as society is far less willing to accept machine failures than human error, court decisions may be harsher when AI-driven systems are responsible for bad outcomes. Careful risk evaluation and extensive safety precautions are therefore even more essential as soon as we, for example, control machines using Artificial Intelligence. This should not stop us from using AI, as it can actually help prevent harm, but using it wisely is essential.
Reputation Risk

This is one of the key risks of using Artificial Intelligence without properly reviewing its possible impacts. By inviting a “black box” into the company’s decision-making, these risks increase manifold compared to well-defined business processes. They can arise either from poorly set up processes producing unsatisfactory results, or from overly autonomous AI systems making decisions that deviate from publicly or internally defined standards of behavior.