When it comes to explaining AI decisions, context matters

Alex Hubbard, Senior Policy Officer at the ICO, looks at some of the key themes identified in the ICO and The Alan Turing Institute’s interim report about explanations of AI decisions.

Explainability of AI decisions is a key area of the AI auditing framework. This guidance from Project ExplAIn will inform our assessment methodology.

If an Artificial Intelligence (AI) system makes a decision about an individual, should that person be given an explanation of how the decision was made? Should they get the same information about a decision regarding criminal justice as they would about a decision concerning healthcare?

These are just two of the issues we have been exploring with public and industry engagement groups over the last few months.

In 2018, the Government tasked the ICO and The Alan Turing Institute (The Turing) with producing practical guidance for organisations, to help them explain AI decisions to the individuals affected. This work has been titled ‘Project ExplAIn’.

The ICO and The Turing conducted research, including citizens’ juries and industry roundtables, to gather views from a range of stakeholders with various, and sometimes competing, interests in the subject.

Today, we published the findings of this research in a Project ExplAIn interim report.

Key themes

The report identifies three key themes that emerged from the research:
  1. the importance of context in explaining AI decisions;
  2. the need for education and awareness around AI; and
  3. the various challenges to providing explanations.

1. Context

The strongest message from the research is that context is key. The importance of providing explanations to individuals, and the reasons for wanting them, change dramatically depending on what the decision is about.

People who took part in the citizens’ juries (jurors) felt that explaining an AI decision to the individual affected was more important in areas such as recruitment and criminal justice than in healthcare. Jurors wanted explanations of AI decisions made in recruitment or criminal justice settings so that they could challenge them, learn from them, and check they had been treated fairly. In healthcare settings, however, jurors preferred to know that a decision was accurate rather than why it was made.

Jurors said they only expected an explanation of an AI decision if they would also expect a human to explain a decision they had made. They also wanted explanations of AI decisions to be similar to human explanations. But the industry roundtables questioned whether AI decisions should be held to higher standards, because human explanations could sometimes misrepresent the truth in order to benefit the explainer or to appease the individual.

2. Education

The findings showed a need to engage and inform the public about the use, benefits and risks of AI decision-making, through education and awareness-raising. Although there wasn’t a clear message on who should be responsible for this, delivering a balanced message was seen as important, suggesting a need for a diverse range of voices.

3. Challenges

Industry roundtable participants generally felt confident they could technically explain the decisions made by AI. However, they raised other challenges to ‘explainability’ including cost, commercial sensitivities (eg infringing intellectual property) and the potential for ‘gaming’ or abuse of systems.

The lack of a standard approach to establishing internal accountability for explainable AI decision systems also emerged as a challenge.

What’s next?

The findings set out in the Project ExplAIn interim report will feed directly into our guidance for organisations. This will go out for public consultation over the summer and will be published in full in the autumn.

The ICO has said many times that data protection is not a barrier to the use of innovative and data-driven technologies. But these opportunities cannot be taken at the expense of being transparent and open with individuals about the use of their personal data.

The guidance will help organisations to comply with data protection law but will not be limited to this. It will also promote best practice, helping organisations to foster individuals’ trust, understanding, and confidence in AI decisions.

As well as informing the ICO and The Turing’s work on Project ExplAIn, it is hoped that these findings will help others in their own thinking, research and development of explainable AI decisions.

All materials and reports generated from the citizens’ juries are freely available to access. The Project ExplAIn guidance will also inform the ICO’s AI auditing framework, which is currently being consulted on and is due to be published in 2020.

Comments

  1. Having now read the above-cited document, I am curious about the lack of common sense in the approach of this research! Little did I read of our number one priority, 'the rights and freedoms of natural persons', et al...!

    Next, the research is flawed in its scope. By this I mean that the initial study and data-gathering exercise was conducted in England (which does a disservice by falsely representing the opinions of citizens in NI, Wales and Scotland), then purports the findings to be representative of the UK.

    That being said, here we are barely one year into GDPR implementation and already having discussions about the future! Remember the old adage "A good paper system will make a better digital system"? We are still being tested in traditional data protection, but already we want to progress to explaining AI to the public at large?

    Next, let's be honest (1st Data principle) here: AI is being developed to save money for corporations, and to increase efficiency and profit! Not to ensure, or comply with, the rights of Data Subjects. At present there is little appetite to appease individuals' rights, in any sector of society, let alone GDPR. How easy is it to find GDPR compliance content on any website? It's never up there on the first page with the 'how to pay' link, is it? Look at the current breach caseload of the ICO! Now you want to attempt to introduce AI explanations. Good luck with that one!

    Paradoxically, it will be a Human who decides to use AI on behalf of a company. It will be a Human who decides on 'Explanations' or not, but more worrying is the fact that it will be a Human who designs and operates the AI software, which, as any good hacker knows, is the greatest (and easiest) vulnerability in any system. Who will decide on a decision appeal? Humans?

    The ICO is doing a brilliant job as it is; don't be led by the nose into areas such as AI, which is great for building cars with minimal Human input, but when it comes to 'Life Impacting' daily decisions, more measured reflection is required from wise and ethically informed Human Individuals (HI for short).

    Finally, in the EU or out doesn't really matter; however, the whole idea of a person's rights is to let them know that you genuinely care about them, not to distance them with yet another layer of technology which will undoubtedly alienate and isolate the Human population. Oh, and by the way, I am a huge supporter of Robotics and Automation, even Machine Intelligence, when it's being used in the correct context of 'Engineering', not 'Social Engineering'.

    These ARE the views and opinions of James F Stevenson (GDPR-f & GDPR-p), honestly expressed in plain sight on the front page :)

  2. Explainability is a subset of Accountability. The USACM has published a list of 7 principles on algorithmic accountability that are worth referencing here:

    Awareness
    Users know where automated decisions have been made, and owners are aware of the potential harm automated decision making can do.

    Access and redress
    Users have access to a way to correct wrong decisions.

    Accountability
    If you deploy an algorithm, you are responsible for its decisions.

    Explanation
    You should be able to explain the algorithm in human terms.

    Data provenance
    You should be able to say where the data came from (and also where the training set data came from).

    Auditability
    You need to keep records of decisions made.

    Validation and testing
    Keep checking the algorithm is behaving as it should.
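
    To make these more concrete, here is a minimal Python sketch (illustrative only, not part of the USACM text) of how a single decision record might capture the explanation, accountability, provenance and auditability principles; every name and field in it is hypothetical.

        # Illustrative only: a hypothetical per-decision record touching several
        # of the USACM principles listed above.
        from dataclasses import dataclass, field, asdict
        from datetime import datetime, timezone
        import json

        @dataclass
        class DecisionRecord:
            subject_id: str         # who the decision is about
            decision: str           # the outcome, eg "loan_declined"
            explanation: str        # human-readable reasons for the outcome (explanation)
            model_version: str      # which system produced it (accountability)
            input_sources: list     # where the input data came from (data provenance)
            training_data_ref: str  # where the training data came from (data provenance)
            redress_contact: str    # how to challenge the decision (access and redress)
            timestamp: str = field(
                default_factory=lambda: datetime.now(timezone.utc).isoformat())

        def log_decision(record: DecisionRecord, path: str = "decisions.log") -> None:
            """Append the record to a simple append-only audit log (auditability)."""
            with open(path, "a") as f:
                f.write(json.dumps(asdict(record)) + "\n")

    A log like this is only a starting point; awareness, redress and validation still need human processes built around it.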

  3. • Explainability and ‘context’ is a tricky and complex area. If eBay recommends a product to me based on an AI decision, do I care? No. If AI finds me guilty of a traffic offence, do I care (and want an explanation)? Yes! Where does one draw the line? What incentive is there for commerce to comply? How does different/updated/deleted input data affect the output and therefore the possible required breadth of ‘Explainability’?
    • Does, or should, the software (and computing power) exist to truly ‘explain’ every AI decision made? Probably not. Should it?
    • How will Explainability and any related solution be recorded, secured, managed, monitored?
    • Education: this is a tricky one. Who funds it, creates it, moderates it, checks it, inspects it, delivers it, measures its success, maintains it, updates it?

