Responsible: AI Vs Data Vs Humans

Rahul Kharat
4 min readMay 7, 2021

There has been a lot of talk about Responsible AI for quite some time now. Every other AI player is building and offering solutions around it. Countless experts are voicing their opinions, and regulatory measures are being set up. This article is my small addition, or responsibility, towards Responsible AI.

Let’s start by defining Responsible AI. It has multiple components, such as Fairness, Explainability, Interpretability, and Privacy (regulatory compliance).

Here, I am giving my perspective on these different components as they apply to solutions developed using AI.

Fairness

Let’s start with an example of fairness. Some time back, many of us heard news of AI being biased towards race, gender, or some other attribute. When we say AI is biased, we mean the AI model is biased, i.e. its predictions are biased. The question is: why was the model biased? It is definitely not a function of the ML model used, assuming we take care of all the necessary parameters while training it. Then what could be the reason for the bias? It is the data that was fed in. If we study the data by exploring the distribution of variables against the target, we will realize that the bias came from the data itself.
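As a minimal sketch of that kind of exploration (assuming a pandas DataFrame with a hypothetical binary target such as "approved" and an illustrative sensitive column such as "gender"; none of these names come from the article), one quick check is to compare the target rate across groups:

```python
import pandas as pd

# Hypothetical loan-approval dataset; file and column names are illustrative only.
df = pd.read_csv("loan_applications.csv")

# Approval rate per group: a large gap suggests the historical data,
# not the model, already encodes the disparity.
print(df.groupby("gender")["approved"].mean())

# The same check against every categorical column helps separate
# ordinary skewness from a disparity on a sensitive attribute.
for col in df.select_dtypes(include="object").columns:
    print(col)
    print(df.groupby(col)["approved"].mean(), "\n")
```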

So how come the data is biased? Where did this data come from? It is the outcome of actions taken by humans; it is basically human generated, or rather generated through human actions. Does that mean humans were historically biased? Perhaps, but is that the right conclusion in every case? The point is that data will never be uniformly distributed across the given variables. There will always be skewness, and not every skewed variable should be concluded to be a bias.

The important point to note is that variables which could lead to social or personal bias should not be part of the algorithm or the training data at all, because those variables are not, and should not be, decision variables. Proxy variables that are highly correlated with these sensitive variables will in any case capture the historical patterns. In this way we also address the privacy concern by not using any personal identifiers in the training data. There are many open questions about how to eliminate this data at the point of extraction itself, so that nobody at all has access to it. That is the larger question that needs to be addressed, but I limit my thoughts here to how, at least while building such models, we can take responsibility for eliminating such biases and variables and build Responsible AI solutions.
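A minimal sketch of this idea, reusing the same hypothetical DataFrame and column names as above: drop identifiers and sensitive attributes before training, and then flag any remaining features that still correlate strongly with a sensitive attribute, since those are the likely proxies.

```python
import pandas as pd

df = pd.read_csv("loan_applications.csv")  # hypothetical dataset

# Columns we decide up front are identifiers or sensitive attributes.
sensitive_cols = ["gender", "race", "marital_status"]
identifier_cols = ["applicant_id", "name", "phone"]

# Training data with no personal identifiers or sensitive attributes.
X = df.drop(columns=sensitive_cols + identifier_cols + ["approved"])
y = df["approved"]

# Flag numeric features that still correlate strongly with a sensitive
# attribute: such proxies can re-introduce the historical pattern.
gender_encoded = df["gender"].astype("category").cat.codes
proxy_strength = X.select_dtypes(include="number").corrwith(gender_encoded).abs()
print(proxy_strength.sort_values(ascending=False).head())
```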

In one line: respect privacy while building an AI solution, and fairness will automatically get addressed.

Explainability and Interpretability

Let’s talk about the other important components: Explainability and Interpretability.

I really wonder why we first want Auto Feature Engineering and AutoML, and in doing so turn the solution-building process into a black box, and then, after building the solution, want to explain why and how everything was done. We could have taken the transparency approach from the beginning by keeping our solutions simple and effective.

Anyway, whatever approach we take, the question now is how we explain and interpret the solution or decision coming from the Responsible AI solution.

The following four very important questions need to be answered:

  1. Why was the decision/prediction made?
  2. How was the decision/prediction made?
  3. Can we quantify the uncertainty in the decision made?
  4. Can we explain mitigations/alternatives to handle the uncertainties?

If we are able to answer these questions and explain the answers to a non-data-scientist, then the majority of the work is done.
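As a minimal sketch of what answering the first three questions might look like in code (assuming scikit-learn and a tabular binary-classification problem; the data, model choice, and numbers here are illustrative, not from the article), one could pair an interpretable model with a model-agnostic importance measure and the predicted probability as a simple expression of uncertainty:

```python
from sklearn.datasets import make_classification
from sklearn.inspection import permutation_importance
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Illustrative data standing in for a real tabular problem.
X, y = make_classification(n_samples=1000, n_features=5, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# 1/2. "Why/how was the prediction made?" An interpretable model makes each
# feature's contribution explicit through its coefficients.
model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print("coefficients:", model.coef_[0])

# A model-agnostic view of the same question: how much does the score drop
# when each feature is shuffled?
imp = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)
print("permutation importances:", imp.importances_mean)

# 3. "Quantify the uncertainty." For a classifier, the predicted probability is
# the simplest (calibration-dependent) expression of it.
proba = model.predict_proba(X_test[:1])[0, 1]
print(f"predicted probability of the positive class: {proba:.2f}")
```

Question 4 is largely a domain exercise: given the quantified uncertainty, state the fallback (for example, routing low-confidence cases to a human reviewer).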

I hope that if we address the above concerns in our Responsible AI solutions, we will soon see Gartner reports with more than 50% of AI solutions going into actual implementation, up from the currently reported figure of around 20%.

Transparency

One very important point I want to highlight is the “manipulation” concern mentioned in the data privacy regulation proposals currently submitted to the European Union. They talk about manipulating decisions through algorithms, feeds, and so on. One point to note is the difference between manipulated decisions and informed decisions. Let me give an example from what I learnt in the “Pricing” subject during my MBA. There is a concept called “decoy pricing”, which is fundamentally used to manipulate human decisions in favour of the business and has been used for ages now; I don’t know why even that was allowed. So finally, to make our AI responsible: make the solution completely transparent, with all the rationale explained behind each recommendation/decision/prediction, and then let humans take the final decision!

Fairness does not mean everyone gets the same. Fairness means everyone gets what they need — Rick Riordan


Rahul Kharat

Explorer | AI Consultant, CXO Advisory | 18 Patents | 2 International Publications | www.linkedin.com/in/irahulkharat |