A call for participation: Building the ICO’s auditing framework for Artificial Intelligence

Simon McDougall, Executive Director for Technology Policy and Innovation, invites comment from organisations on the development of an auditing framework for AI.



Applications of Artificial Intelligence (AI) are starting to permeate many aspects of our lives. I see new and innovative uses of this technology every day: in health care, recruitment, commerce . . . the list goes on and on.

We know the benefits that AI can bring to organisations and individuals. But there are risks too. And that’s what I want to talk about in this blog post. 

The General Data Protection Regulation (GDPR) that came into effect in May was a much-needed modernisation of data protection law.

Its considerable focus on new technologies reflects the concerns of legislators here in the UK and throughout Europe about the personal and societal effect of powerful data-processing technology like profiling and automated decision-making. 

The GDPR strengthens individuals’ rights when it comes to the way their personal data is processed by technologies such as AI. They have, in some circumstances, the right to object to profiling and they have the right to challenge a decision made solely by a machine, for example.

The law requires organisations to build in data protection by design and to identify and address risks at the outset by completing data protection impact assessments. Privacy and innovation must sit side-by-side. One cannot be at the expense of the other.

That’s why AI is one of our top three strategic priorities.

And that’s why we’ve added to our already expert tech department by recruiting Dr. Reuben Binns, our first Postdoctoral Research Fellow in AI. He will head a team from my Technology Policy and Innovation Directorate to develop our first auditing framework for AI.

The framework will give us a solid methodology to audit AI applications and ensure they are transparent and fair, and that the necessary measures to assess and manage the data protection risks arising from them are in place.

The framework will also inform future guidance for organisations to support the continuous and innovative use of AI within the law. The guidance will complement existing resources, not least our award-winning Big Data and AI report.

But we don’t want to work alone. We’d like your input now, at the very start of our thinking.
Whether you’re a data scientist, an app developer or the head of a company that relies on AI to do business, and whether you’re from the private, public or third sector, we want you to join our open discussion about the genuine challenges arising from the adoption of AI. This will ensure the published framework is both conceptually sound and applicable to real-life situations.

We welcome your thoughts on the plans and approach we set out in this post. We will shortly publish another article here to outline the proposed framework structure, its key elements and focus areas.

On this new blog site you will be able to find regular updates on specific AI data protection challenges and on how our thinking in relation to the framework is developing. And we want your feedback. You can leave us a comment or email us directly.

The feedback you give us will help us shape our approach, research and priorities. We’ll use it to inform a formal consultation paper, which we expect to publish by January 2020. The final AI auditing framework and the associated guidance for firms are on track for publication by spring 2020.

We look forward to working with you!

Simon McDougall is Executive Director for Technology Policy and Innovation at the ICO where he is developing an approach to addressing new technological and online harms. He is particularly focused on artificial intelligence and data ethics.
He is also responsible for the development of a framework for auditing the use of personal data in machine learning algorithms.


Comments

  1. It is great and timely to move towards a framework to audit AI applications and ensure they are transparent and fair. We believe that this can take two routes:


    · Encouraging more Explainable AI (XAI) systems, which have the inherent ability to explain their rationale, characterise their strengths and weaknesses, and convey an understanding of how they will behave in the future. These XAI models can be combined with state-of-the-art human-computer interface techniques capable of translating models into understandable and useful explanation dialogues for the end user. Producing formats that can only be understood and analysed by AI experts does not address these issues, as it would not allow the ICO and other stakeholders to test the generated models for bias, fairness, transparency and so on. Hence, XAI should produce formats and outputs that can easily be understood and analysed by a lay user or a domain expert. This would allow users and stakeholders to understand the AI's reasoning, determine when to trust or distrust it, satisfy the points about transparency and causality above, and address system bias, fairness and safety.

    · Developing interpreter models which can be applied to any complex AI model (including deep learning, random forests, etc.) to generate interpretable (i.e. easy to understand and analyse) models that can explain the model's general operation, hence enabling the auditing of AI models against the parameters set in the framework, as well as the rationale behind any given decision. Such model interpreters should therefore be able to explain the operation of complex AI models from both a global and a local point of view, providing a complete framework for the auditing and validation of any AI model. A rough code sketch of this idea is given below.
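
    As a rough, non-authoritative sketch of what such an interpreter could look like in practice, a shallow surrogate decision tree can approximate a black-box model globally, while simple feature perturbations can explain a single decision locally. The dataset, model and library choices below are my own illustrative assumptions, not anything proposed by the ICO.

```python
# Illustrative sketch only: a black-box random forest is approximated globally
# by a shallow surrogate decision tree, and one prediction is explained locally
# by perturbing each feature in turn. Dataset and parameters are arbitrary
# assumptions for demonstration.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.tree import DecisionTreeClassifier, export_text

data = load_breast_cancer()
X, y, feature_names = data.data, data.target, list(data.feature_names)

# Complex, hard-to-audit model
black_box = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)

# Global view: a depth-3 tree trained to mimic the black box's own predictions
surrogate = DecisionTreeClassifier(max_depth=3, random_state=0)
surrogate.fit(X, black_box.predict(X))
print("Global surrogate rules:")
print(export_text(surrogate, feature_names=feature_names))

# Local view: how much does each feature move this single prediction?
x = X[:1].copy()
base = black_box.predict_proba(x)[0, 1]
effects = {}
for i, name in enumerate(feature_names):
    x_perturbed = x.copy()
    x_perturbed[0, i] = X[:, i].mean()        # replace value with dataset average
    effects[name] = base - black_box.predict_proba(x_perturbed)[0, 1]

print("\nTop local feature effects for one decision:")
for name, delta in sorted(effects.items(), key=lambda kv: -abs(kv[1]))[:5]:
    print(f"  {name}: {delta:+.3f}")
```

    The fidelity of such a surrogate to the original model would of course need to be checked before its rules are trusted as an explanation.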

  2. Following on from the comments about 'XAI', and from the ICO's recent successful Citizens' Jury on the same topic:

    Explainability is a valuable and useful attribute if AI systems can be developed to allow for it. However, the nature of explainability involves something of a trade-off between two competing factors that cannot be fully pursued in parallel.

    An explanation can be evaluated in two ways: according to its interpretability, and according to its completeness.

    (a) The goal of interpretability is to describe the internals of a system in a way that is understandable to humans.

    (b) The goal of completeness is to fully describe the operation of a system in an accurate way.

    When explaining an AI system, a 'complete' explanation could be given by revealing all the mathematical operations and parameters in the system. The challenge facing explainable AI is in creating explanations that are both complete and interpretable: it is difficult, if not impossible, to achieve interpretability and completeness simultaneously. The most accurate explanations are not easily interpretable to people; conversely, the most interpretable descriptions are generalised, partial, probabilistic and/or unable to provide good predictive power.
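
    To make this concrete, here is a small illustration of my own (not part of the jury work or any ICO material): the deeper a surrogate tree mimicking a black-box model is allowed to grow, the more faithfully it reproduces the model's behaviour (completeness), but the more rules a human has to read (interpretability). The dataset and depth values below are arbitrary assumptions.

```python
# Illustrative only: fidelity (how completely a surrogate tree reproduces a
# black-box model's predictions) rises with tree depth, while the number of
# leaves (a crude proxy for interpretability) grows. Dataset and depths are
# arbitrary assumptions.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import accuracy_score

X, y = load_breast_cancer(return_X_y=True)
black_box = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)
target = black_box.predict(X)   # the behaviour the explanation must reproduce

print("depth  fidelity  leaves")
for depth in (1, 2, 3, 5, 8, None):
    surrogate = DecisionTreeClassifier(max_depth=depth, random_state=0).fit(X, target)
    fidelity = accuracy_score(target, surrogate.predict(X))   # completeness proxy
    leaves = surrogate.get_n_leaves()                         # interpretability proxy
    print(f"{str(depth):>5}  {fidelity:8.3f}  {leaves:6d}")
```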

    As a result of the above, I think that ICO guidance should address the balance between interpretability and completeness and give an indication of the correct balance to be struck in particular circumstances. When is an explanation 'good enough'?

  3. Interesting comments above. I see the big players in the AI space being challenged by interpretability and by concerns around their IP. I'd like to see systems that really do remove an individual's data from a model when they object to profiling, and that give individuals access to check and rectify the integrity of the data being used.

  4. Will such a framework be mandatory I wonder?

  5. Awesome post with a great piece of information. Check out this article I found online regarding Artificial Intelligence. Hope you find it useful.

