How (not) to do responsible AI

It is a good principle to be responsible. Everyone working with AI technology wants to be responsible, not least big tech companies like Google, Microsoft, and AWS. Unfortunately, such an assertion is not as straightforward as it might seem. “Principles are helpful to provide guidance to an organization, especially in situations where there are no explicit rules. They can be applied to create new rules or to bring discussions to a higher level, above personal interests and opinions.”

When I work with organizations designing principles, I always point out that there are some meta-rules for formulating them. You can’t just create a principle like: “We want to be the best producers of monoclonal antibody products in the whole wide world because we are in a unique position.” While it sounds good, it would either have no effect or be downright damaging, because it brings more confusion than direction.

The nature of a principle

The first meta-rule concerns the format of a principle: it should be easily understandable, and it should be a single sentence that guides what actions to take. Another important meta-rule is that the principle should not be too specific. My favorite meta-rule, however, is that it should not be trivial. A principle should specify real tradeoffs. This is also the most difficult meta-rule to comply with. To test whether a principle is trivial, try formulating its reverse. If the reverse is obviously meaningless, the principle is trivial. For instance, “we want to be the best partner of our clients” is a trivial principle: its reverse, “we want to be the worst partner of our clients,” is meaningless. When you say that you want to be the best, you don’t provide any principled guidance or any tradeoffs to guide action. It is like saying that you want to stay alive, which is a great principle, but it doesn’t make you do anything that you wouldn’t already do automatically.

In the worst-case scenario, you get subjectivity and randomness: any action can be interpreted as leading to a successful partnership with clients, whether it involves taking them to events, offering discounts, or providing high-quality services. Trivial principles can thus create more confusion and arbitrary decision-making than benefit, as everyone may end up going in their own direction.

Contrast this with another hypothetical formulation of a principle: “We want to be the fastest to offer new services to our clients.” If we flip this around, it becomes “We don’t want to be the fastest to offer new services to our clients.” This could be a meaningful principle for a risk-averse organization that values tried and tested services. The principle offers a real tradeoff and is, therefore, not trivial. The first formulation would be great for a software start-up but not necessarily for a nuclear power company, while the opposite holds for the second formulation.

A non-trivial principle aligns the organization around a clear goal, while a trivial principle only states what is already trivially true. That might bring consensus and joy to the formulators of the principle, but in effect it will often create more confusion than real value.

Promoting responsible AI

If we return to the question of responsible AI, the underlying principle used by many organizations today can be paraphrased as: “We promote responsible AI.” Does this pass the triviality test? Well, does “we promote irresponsible AI” make sense to you? It does not, so the principle fails the triviality test and therefore has no real value.

Maybe the problem is that it is too general, and we have to break it down into something more specific. Let’s look at Google’s introduction to responsible AI. Here, responsible AI is broken down into dimensions: fairness, accountability, safety, and privacy. Let’s try using those more specific dimensions to see if the principle becomes less trivial: “We want to promote fair AI.” Turning this around yields “We want to promote unfair AI,” which again is meaningless. So is “We promote unaccountable/unsafe/non-private AI.” These are all self-evident statements that no one will publicly disagree with.

Microsoft is not much different. Its Principles of responsible AI are described thus: “Microsoft outlines six key principles for responsible AI: accountability, inclusiveness, reliability and safety, fairness, transparency, and privacy and security.” AWS does not differ much either; its Building AI responsibly at AWS defines six core dimensions of responsible AI: fairness, explainability, privacy and security, robustness, governance, and transparency.

These are all self-evident qualities that we would expect any organization to uphold, whether it is developing AI, sneakers, or fishhooks. The result is that there is no real guidance in them, and we are lulled into a feel-good state of virtue signaling. That is not helpful. To be blunt, it resembles marketing more than actual responsibility.

Fair AI in practice

Let’s dive a little deeper to understand the problem when the principle is applied in practice. The principle “We want fair AI” places all the weight on the concept of fairness. The problem is that, in practice, this opens the floodgates to subjectivity. If you ask politicians like Donald Trump what is fair, all opposition is unfair, but do we really want to promote AI that never allows opposition to politicians? If we ask our “fair” chatbot about negative stories about Donald Trump or Joe Biden, it would have to edit the response based on what they would perceive as fair. This is obviously absurd.

In practice, what happens is that someone applies a subjective measure of what constitutes fairness to the development of AI, but there is no certainty that others in the organization, or consumers, share that perception of fairness. They can consequently be misled by the promise of responsible AI, so it is better not to pretend that we are building “fair” AI and to drop this as a principle to follow.

The path to responsible AI

Instead, we should try to formulate non-trivial principles that are clear about what we mean. Let me give an example: “We promote AI that favors minorities.” That is a good formulation because the reverse, “We promote AI that does not favor minorities,” is not meaningless. Such a principle would focus on the equal value of all people regardless of their status. Some consumers think the first formulation is fair, while others think the second is. Stating it clearly gives the consumer a clear choice.

Another example is “We value transparency over efficiency in AI development.” This is a non-trivial choice, since it states that it is more important for customers to know in detail what is going on than for the system to work as well as possible. That might make a lot of sense in a government context, while the reverse may make more sense for an IT infrastructure provider.

Bringing clarity to responsibility

To promote the responsible use of AI, we need to start thinking a bit more clearly about the principles we propose and not hide behind well-meaning political slogans. If we don’t, we are lulled into a feel-good state that promotes obscurantism and hidden agendas. Indeed, vague AI principles reduce transparency because they defer decisions about responsibility and fairness to a new class of “ethics experts” who make subjective and opaque judgments behind the scenes. We, as consumers, have very little insight into who these people are and how they acquired their ethical superiority.

