AI Explainability: How to Be Data Protection Compliant

AI can be the perfect assistant to a human decision maker or data controller, or it can operate entirely on its own as an automated decision-making process with no human intervention. Either way, it must be used correctly and in a way that can be easily explained to data subjects. Making AI explainability central to a business's or organisation's approach to data protection compliance is therefore vital.

Human-in-the-loop and human-out-of-the-loop are the names attached to these two modes of AI decision making. Human-out-of-the-loop systems are the more complex, and are often referred to as ‘black box’ systems because they are solely automated. Where these automated decisions have a direct impact on an individual, the GDPR protects that person's data rights. Article 22 GDPR helps businesses and organisations remain data protection compliant, especially where a decision could have a legal or similarly significant effect on an individual.

Article 22 requires a human element in the decision-making process wherever a decision could have such an effect on an individual. This human review must be meaningful and have genuine influence on the final decision. That shapes how data controllers can use AI effectively while remaining within the boundaries of data protection compliance.

What is Article 22?

In essence, Article 22 gives every data subject the right not to be subject to a decision based solely on automated processing where that decision produces legal effects concerning them or similarly significantly affects them. The same applies to profiling, where a whole host of personal characteristics is evaluated in order to make predictions about a person: their economic situation, behaviour, reliability, location, health, work performance, and a number of other factors. Article 22 helps keep an organisation data protection compliant by ensuring that AI always has a human element to safeguard the process.

How does Article 22 ensure fairness and AI explainability?

Many processes and decisions in modern life are made through automated means, and AI has become a central part of many organisations' core processes and functions. Article 22 ensures that there is always a human role to play when an automated decision could have an adverse impact on an individual. One example can be seen with Uber in 2020, where an algorithm was used to deactivate drivers' apps, preventing them from working, when fraudulent activity was detected. When the case went to court, Uber convinced the court that the algorithm was merely a tool assisting a human team, which made the final decision. That human element was crucial in allowing the decisions to stand, because meaningful human review was in place.

Providing functioning, explainable AI is therefore essential as organisations roll out AI-assisted decision making across multiple parts of a business. Under Article 22 you can no longer pay lip service to meaningful human review: humans must play a genuine part in reviewing the information and data that an AI system has processed. It is also integral to an organisation's credibility to be able to demonstrate, clearly and transparently, how and why AI is used in these processes, so that there is fairness and data protection at every stage.

What is meaningful human review?

In practice it can be difficult for data controllers to know exactly what is meant by meaningful human review. What can be said for sure is that a human (or humans) cannot simply rubber-stamp an automated decision. Instead, the information must be collated and analysed to reach a final decision that weighs all considerations, including any external factors that an algorithm might have missed. This human judgement and nuance is often the final piece of the puzzle when determining the outcome for a specific person, whether in relation to work performance, healthcare, financial services, or any other area where AI is used.
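To make this concrete, here is a minimal sketch in Python of what such a human-in-the-loop gate might look like. Every name in it (AutomatedRecommendation, HumanReview, finalise_decision) is hypothetical, and the rules it enforces are one possible reading of ‘meaningful’ review rather than a definitive implementation: decisions with a significant effect must carry a human review, and a review with no recorded rationale is treated as rubber-stamping and rejected.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class AutomatedRecommendation:
    """What the model proposes, before any human sees it."""
    subject_id: str
    outcome: str              # e.g. "deactivate_account"
    significant_effect: bool  # would this legally or similarly significantly affect the person?
    model_rationale: str      # the system's explanation, shown to the reviewer

@dataclass
class HumanReview:
    """The reviewer's own, independent assessment."""
    reviewer_id: str
    final_outcome: str        # the reviewer decides, and may differ from the model
    rationale: str            # reasoning in the reviewer's own words
    external_factors: list    # considerations the algorithm could not see

def finalise_decision(rec: AutomatedRecommendation,
                      review: Optional[HumanReview]) -> str:
    """Return the final outcome, refusing to finalise significant
    decisions without meaningful human review."""
    if not rec.significant_effect:
        # Low-impact decisions may remain fully automated.
        return rec.outcome
    if review is None:
        raise ValueError("Significant decisions require human review (Article 22).")
    if not review.rationale.strip():
        # An approval with no reasoning is indistinguishable from a rubber stamp.
        raise ValueError("The review must record the reviewer's own reasoning.")
    return review.final_outcome
```

The key design choice is that, for significant decisions, it is the human's outcome, not the model's, that is returned, so the reviewer genuinely owns the result rather than merely confirming it.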

Documentation at every stage of the process is equally important, to demonstrate that human review has indeed taken place and that there is no sole reliance on human-out-of-the-loop decision making. As automated systems become a more central part of everyday life, organisations must maintain a genuine, credible human presence within the process, especially where the final decision could affect an individual adversely.
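As an illustration only, the sketch below shows one simple way such documentation could be kept: an append-only JSON-lines audit trail with one timestamped record per stage of a decision. The file name, stage names, and fields are all assumptions made for this example.

```python
import datetime
import json

def log_stage(path: str, stage: str, **details) -> None:
    """Append one timestamped record per decision stage to a JSON-lines file."""
    record = {
        "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "stage": stage,  # e.g. "model_recommendation", "human_review", "final_decision"
        **details,
    }
    with open(path, "a", encoding="utf-8") as f:
        f.write(json.dumps(record) + "\n")

# Documenting each stage of one (fictional) decision:
log_stage("decisions.jsonl", "model_recommendation",
          subject_id="D-1042", outcome="flag_for_fraud", model="risk-v3")
log_stage("decisions.jsonl", "human_review",
          subject_id="D-1042", reviewer="analyst-7",
          rationale="Pattern matches a known billing error, not fraud.")
log_stage("decisions.jsonl", "final_decision",
          subject_id="D-1042", outcome="no_action")
```

An append-only record of this kind gives a regulator, or the data subject themselves, a stage-by-stage account of who reviewed what, and why.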
