Automated Decision Making: the role of meaningful human reviews

In the first detailed element of our AI framework blog series, Reuben Binns, our Research Fellow in AI, and Valeria Gallo, Technology Policy Adviser, explore how organisations can ensure human involvement in AI-assisted decisions is ‘meaningful’, so that those decisions are not classed as solely automated by mistake.


This blog forms part of our ongoing work on developing a framework for auditing AI. We are keen to hear your views in the comments below or you can email us.

Artificial Intelligence (AI) systems[1] often process personal data to either support or make a decision. For example, AI could be used to approve or reject a financial loan automatically, or support recruitment teams to identify interview candidates by ranking job applications.



Article 22 of the General Data Protection Regulation (GDPR) establishes stricter conditions for AI systems that make solely automated decisions, i.e. decisions made without human input which have legal or similarly significant effects on individuals. AI systems that only support or enhance human decision-making are not subject to these conditions. However, a decision will not fall outside the scope of Article 22 just because a human has ‘rubber-stamped’ it: human input needs to be ‘meaningful’.

The degree and quality of human review and intervention before a final decision is made about an individual is the key factor in determining whether an AI system is solely or non-solely automated.

Board members, data scientists, business owners, and oversight functions, among others, will be expected to play an active role in ensuring that AI applications are designed, built, and used as intended.

The meaningfulness of human review in non-solely automated AI applications, and the management of the associated risks, are key areas of focus for our proposed AI Auditing Framework, and they are what we will explore further in this blog.

What’s already been said?

Both the ICO and the European Data Protection Board (EDPB) have already published guidance relating to these issues. The key messages are:
  • human reviewers must be involved in checking the system’s recommendation and should not “routinely” apply the automated recommendation to an individual;
  • reviewers’ involvement must be active and not just a token gesture. They should have actual “meaningful” influence on the decision, including the “authority and competence” to go against the recommendation; and
  • reviewers must ‘weigh up’ and ‘interpret’ the recommendation, consider all available input data, and also take into account other additional factors.

Are there additional risk factors in complex systems?

The meaningfulness of human input must be considered in any automated decision-making system, however basic (e.g. simple decision trees). In more complex AI systems, however, we think there are two additional factors that could potentially cause a system to be considered solely automated:

1. Automation bias
2. Lack of interpretability

What do we mean by automation bias?

AI models are based on mathematics and data, and because of this people tend to think of them as objective and to trust their output.

The terms ‘automation bias’ and ‘automation-induced complacency’ describe the tendency of human users to rely routinely on the output generated by a computer decision-support system and to stop using their own judgement, or to stop questioning whether the output might be wrong. If this happens when using an AI system, there is a risk that the system may unintentionally be classed as solely automated under the law.

What do we mean by lack of interpretability?

Some types of AI systems, for example those using deep learning, may be difficult for a human reviewer to interpret.

If the inputs and outputs of an AI system are not easily interpretable, and other explanation tools are not available or reliable, there is a risk that a human will not be able to meaningfully review its output.

If meaningful reviews are not possible, reviewers may start simply to agree with the system’s recommendations without judgement or challenge, which would mean the decisions were ‘solely automated’.

Distinguishing solely from non-solely automated AI systems

Organisations should take a clear view on the intended use of any AI application from the beginning. They should specify and document clearly whether AI will be used to enhance human decision-making or to make solely automated decisions.

The management body should review and sign-off the intended use of any AI system, making sure that it is in line with the organisation’s risk appetite. This means board members need to have a solid understanding of the key risk implications associated with each option, and be ready and equipped to provide an appropriate degree of challenge.

The management body is also responsible for ensuring that clear lines of accountability and effective risk management policies are in place from the outset. If AI systems are only intended to support human decisions, then such policies should specifically address additional risk factors such as automation bias and lack of interpretability.

It is possible that organisations may not know in advance whether a partly or a fully automated AI application would best meet their needs. In such cases, their risk management policies and Data Protection Impact Assessments (DPIAs) should reflect this explicitly, and include the risks and controls for each option throughout the AI system’s lifecycle.

How can you address the additional risk factors?

Automation bias
You may think automation bias can be addressed chiefly by improving the effectiveness of the training and monitoring of human reviewers. While training is a key component of effective AI risk management, controls to mitigate automation bias should be in place from the start.

During the design and build phase, business owners, data scientists and oversight functions should work together to develop design requirements that support a meaningful human review from the outset.

They must think about which features they would expect the AI system to consider, and which additional factors human reviewers should take into account before finalising their decision. For instance, the AI system could consider measurable properties, such as how many years’ experience a job applicant has, while a human reviewer assesses skills that cannot be captured in application forms.

If human reviewers can only access or use the same data used by the AI system, then arguably they are not taking into account other additional factors. This means that their review may not be sufficiently meaningful and the decision may end up being considered solely automated under GDPR.

Where needed, organisations will have to think about how to capture these additional factors, for example by having human reviewers interact directly with the person the decision is about in order to gather such information.
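For illustration only, the sketch below shows one way the recommendation, the reviewer’s final decision and the additional factors considered could be recorded for each case; the structure and field names are hypothetical, not a prescribed format.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone
from typing import List

@dataclass
class HumanReviewRecord:
    """Hypothetical record of a human review of an AI recommendation."""
    case_id: str
    model_recommendation: str      # e.g. "reject"
    final_decision: str            # decision after human review
    additional_factors: List[str]  # information considered beyond the model's inputs
    reviewer_rationale: str        # why the recommendation was accepted or overridden
    reviewer_id: str
    reviewed_at: datetime = field(default_factory=lambda: datetime.now(timezone.utc))

    @property
    def overrode_model(self) -> bool:
        """True when the reviewer went against the system's recommendation."""
        return self.final_decision != self.model_recommendation

# Example: a reviewer overrides the model after speaking to the applicant
record = HumanReviewRecord(
    case_id="APP-1042",
    model_recommendation="reject",
    final_decision="accept",
    additional_factors=["applicant interview", "portfolio of recent work"],
    reviewer_rationale="Skills shown at interview are not captured in the application form.",
    reviewer_id="reviewer-07",
)
```

Records like this also support the monitoring discussed later in this blog, because they show whether reviewers ever go beyond the system’s own inputs.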

Those in charge of designing the front-end interface of an AI system must understand the needs, thought process, and behaviours of human reviewers and enable them to effectively intervene. It may therefore be helpful to consult and test options with human reviewers early on.

However, the features the AI system will use will also depend on the data available, the type of model(s) selected, and other system-building choices. Any assumptions made in the design phase will need to be tested and confirmed once the AI system has been trained and built.

Interpretability

Interpretability should also be considered from the design phase.

Interpretability is challenging to define in absolute terms and can be measured in different ways. For example:
  •  Can the human reviewer predict how the system’s outputs will change if given different inputs?
  •  Can the human identify the most important inputs contributing to a particular output?
  • Can the human identify when the output might be wrong?
This is why it is important for organisations to define and document what interpretability means, and how to measure it, in the specific context of each AI system they wish to use.

Some AI systems are more interpretable than others. For instance, models that use a small number of human-interpretable features (e.g. age and weight) are likely to be easier to interpret than models that use a large number of features, or that involve heavy ‘pre-processing’[2].

The relationship between the input features and the model’s output can also be simple or complicated. Simple ‘if-then’ rules, such as those that describe decision trees, will be easier to interpret. Similarly, linear relationships (where the value of the output increases in proportion to the input) may be easier to interpret than relationships that are non-linear (where the output is not proportional to the input) or non-monotonic (where the output may increase or decrease as the input increases).
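As a purely illustrative sketch (the data and feature names are invented, and scikit-learn is used only as an example toolkit), a small decision tree’s learned logic can be printed as plain if-then rules that a human reviewer could follow:

```python
# Illustrative only: train a small decision tree and print its if-then rules.
from sklearn.datasets import make_classification
from sklearn.tree import DecisionTreeClassifier, export_text

# Invented example data with two human-readable feature names
X, y = make_classification(n_samples=200, n_features=2, n_informative=2,
                           n_redundant=0, random_state=0)
feature_names = ["years_experience", "relevant_qualifications"]

tree = DecisionTreeClassifier(max_depth=2, random_state=0).fit(X, y)

# export_text prints the learned tree as nested if-then rules
print(export_text(tree, feature_names=feature_names))
```

A model with many more features, or with deeper and more tangled rules, would quickly become much harder for a reviewer to read in this way.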

One approach to addressing low interpretability is the use of ‘local’ explanations, using methods like Local Interpretable Model-agnostic Explanations (LIME), which provide an explanation of a specific output rather than of the model in general. LIME fits a simpler surrogate model to summarise the relationships between input and output pairs that are similar to those in the system you are trying to interpret. In addition to summarising individual predictions, LIME can sometimes help detect errors (e.g. by showing which specific part of an image has led a model to classify it incorrectly). However, such explanations do not represent the actual logic underlying the AI system and can be misleading if misused.
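For illustration only, the sketch below shows how a local explanation for a single prediction might be generated with the open-source lime package; the dataset, feature names, class names and model are placeholders rather than a recommended configuration.

```python
# Illustrative only: explaining a single prediction with LIME.
# Assumes `pip install lime scikit-learn`; data and model are placeholders.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from lime.lime_tabular import LimeTabularExplainer

X, y = make_classification(n_samples=500, n_features=4, random_state=0)
feature_names = ["income", "years_experience", "existing_debt", "age"]

model = RandomForestClassifier(random_state=0).fit(X, y)

explainer = LimeTabularExplainer(
    training_data=X,
    feature_names=feature_names,
    class_names=["reject", "accept"],
    mode="classification",
)

# A 'local' explanation: which features pushed this one output up or down
explanation = explainer.explain_instance(X[0], model.predict_proba, num_features=4)
print(explanation.as_list())
```

The printed feature weights describe the surrogate model’s view of this one case only, which is why, as noted above, they can be misleading if treated as the system’s actual logic.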

Many statistical models can also be designed to provide a confidence score alongside each output, which could help a human reviewer in their own decision-making. A lower confidence score indicates that the human reviewer needs to have more input into the final decision. (See our blog on Accuracy of AI system outputs for more details.)
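As one hypothetical way of using such scores, low-confidence outputs could be routed for closer human scrutiny; in the sketch below, the threshold and the scikit-learn-style predict_proba interface are assumptions made for illustration.

```python
# Illustrative only: use the model's confidence score to decide how much
# human scrutiny an individual case should receive.
CONFIDENCE_THRESHOLD = 0.8  # assumed value; set according to your own risk appetite

def triage_for_review(model, case_features):
    """Return the model's recommendation, its confidence, and a review flag."""
    probabilities = model.predict_proba([case_features])[0]
    confidence = float(probabilities.max())
    recommendation = int(probabilities.argmax())
    needs_close_review = confidence < CONFIDENCE_THRESHOLD
    return recommendation, confidence, needs_close_review
```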

Assessing the interpretability requirements should be part of the design phase, allowing explanation tools to be developed as part of the system if required.

Organisations should try to maximise the interpretability of AI systems but, as we will explore in future blogs, there will sometimes be trade-offs to make (e.g. interpretability vs. accuracy).

This is why risk management policies should establish a robust, risk-based, and independent approval process for each AI system. They should also set out clearly who is responsible for the testing and final validation of the system before it is deployed. Those individuals should be accountable for any negative impact on interpretability and the effectiveness of human reviews, and should only provide sign-off if the AI system is in line with the adopted risk management policy.

Training

Training is pivotal to ensuring an AI system is considered non-solely automated.

As a starting point, human reviewers should be trained:
  • to understand how an AI system works and its limitations;
  • to anticipate when the system may be misleading or wrong, and why;
  • to apply a healthy level of scepticism to the AI system’s output, and be given a sense of how often the system could be wrong;
  • to understand how their own expertise is meant to complement the system, and be provided with a list of factors to take into account; and
  • to provide meaningful explanations for either rejecting or accepting the AI system’s output – a decision they should be responsible for. A clear escalation policy should also be in place.

In order for the training to be effective, it is important that human reviewers have the authority to override the output generated by the AI system and are confident that they will not be penalised for doing so. This authority and confidence cannot be created by policies and training alone: a supportive organisational culture is also crucial.

We have focussed here on the training of human reviewers; however, it is worth noting that organisations should also consider whether any other functions, e.g. risk or internal audit, require additional training to provide effective oversight.

Monitoring

The analysis of why, and how often, a human reviewer accepted or rejected the AI system’s output will be a key part of an effective risk monitoring system.

If risk monitoring reports flag that human reviewers are routinely agreeing with the AI system’s outputs, and cannot demonstrate they have genuinely assessed them, then their decisions may effectively be classed as solely automated under GDPR.
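For illustration, a monitoring report could compute an agreement rate from logged review decisions and raise a flag when agreement looks routine; the data format and alert threshold below are assumptions, not prescribed values.

```python
# Illustrative only: flag when human reviewers appear to be rubber-stamping
# the AI system's recommendations.
from typing import List, Tuple

AGREEMENT_ALERT_THRESHOLD = 0.98  # assumed value; near-universal agreement may signal rubber-stamping

def agreement_rate(decisions: List[Tuple[str, str]]) -> float:
    """decisions: (model_recommendation, final_decision) pairs from review logs."""
    if not decisions:
        return 0.0
    agreed = sum(1 for recommendation, final in decisions if recommendation == final)
    return agreed / len(decisions)

def routine_agreement_flag(decisions: List[Tuple[str, str]]) -> bool:
    """True when agreement is so high that reviews may not be meaningful."""
    return agreement_rate(decisions) >= AGREEMENT_ALERT_THRESHOLD

# Example: 3 of 4 recommendations accepted -> 75% agreement, no flag raised
log = [("reject", "reject"), ("accept", "accept"), ("reject", "accept"), ("accept", "accept")]
print(agreement_rate(log), routine_agreement_flag(log))
```

Such a flag would be a prompt for further investigation (for example, sampling cases to check the recorded rationales), not proof in itself that reviews are not meaningful.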

Organisations need to have controls in place to keep risk within target levels, including, if necessary, stopping the processing of personal data by the AI system, either temporarily or permanently.

Your feedback

We are keen to hear your thoughts on this topic and welcome any feedback on our current thinking. In particular, we would appreciate your views on the following two questions:

1)   What other technical and organisational controls do you think organisations should put in place to reduce the risk of AI systems falling within the scope of GDPR Article 22 by mistake?

2)   Are there any additional risk factors, in addition to interpretability and automation bias, which we should address in this part of our AI Auditing Framework?

Please share your views by leaving a comment below or by emailing us at AIAuditingFramework@ico.org.uk





Dr Reuben Binns, a researcher working on AI and data protection, joined the ICO on a fixed term fellowship in December 2018. During his two-year term, Reuben will research and investigate a framework for auditing algorithms and conduct further in-depth research activities in AI and machine learning.


Valeria Gallo is currently seconded to the ICO as a Technology Policy Adviser. She works with Reuben Binns, our Artificial Intelligence (AI) Research Fellow, on the development of the ICO Auditing Framework for AI. Prior to her secondment, Valeria was responsible for analysing and developing thought leadership on the impact of technological innovation on regulation and supervision of financial services firms.
Footnotes

[1] ‘AI system’ refers to Artificial Intelligence software which generates ‘outputs’ or ‘recommendations’ relating to a decision, for instance, whether or not to grant a customer a loan or invite an applicant to an interview (elsewhere, these may be referred to as ‘decision support systems’). Such recommendations will often be based on the outputs of a ‘machine learning model’ trained on data to generate predictions or classifications.

[2] Pre-processing is a practice in machine learning that involves modifying the training data so that it is more useful and effective in the learning process.


Comments

  1. You Mention that we have solely automated and automated with a human intervention. what does that mean for a business. what is the risk of using a solely automated system?

  2. in my industry sector (motor insurance) the use of automated decision making is common (underwriting etc.) but you still have to provide the (prospective) customer with the opportunity to challenge the decision - what if your algorithm is inaccurate? what if there are other factors that you cannot take into account in automated decision making? And how do you balance the requirements of DPA18/GDPR if you rely solely on automated decision making? Full automation does not necessarily improve the customer journey and is unlikely to lead to retention as there is still a desire within the population for some human interaction.

  3. Personally, I think that when financial gain is the motivation without a primary industry, then the policy must be default without, then assessment of requirement to monitored authorisation.

  4. The interpretation with more than one variable is conditional. This is important when a couple of related variables are used. Thus achievement given something else can relate to social position. Elaborating this in guidance would be helpful as the conditioning essential to statistical reasoning is not how unfamiliar people will interpret associations.

  5. The overwhelming majority of AIs will be $1 chips in IoT products. https://www.theregister.co.uk/2019/06/04/neural_networks_microcontroller/

    For example a fire alarm sensor might use an integrated AI to detect when people are cooking to reduce the number of false alarms. An AI in a thermostat might vary the home temperature according to which individuals are in the house. Or an AI in a doorbell might categorise visitors and announce this to the home owner - "it's the post woman".

    It is unreasonable and disproportionate to require human reviewers for such cheap devices, even if as in the post woman example the decision affects an individual.

    Please ensure that there is very simple guidance for the 99% of AIs which are cheap tools and restrict onerous compliance to the tiny proportion of AIs that deal with matters such as insurance or banking.

  6. • A clear definition of what AI is would add clarity.
    • Could it be said that if ‘human review’ is given due care during the problem definition, AI model creation, data management, testing and deployment, that review should not be required, just compliance monitoring?
    • Should monitoring results be stored as immutable data on a blockchain?
    • The review process could be somewhat subjective.
    • An ‘AI Risk’ pro forma checklist would be useful!
    • Does there have to be an underlying assumption that the backend software included no ‘bias’ or the stochastic element?
    • Agree: Training IS important.
    • Pragmatism is key.
