Developing the ICO AI Auditing Framework: an update

Simon McDougall, Executive Director for Technology Policy and Innovation, reflects on the progress made in developing the ICO approach to auditing Artificial Intelligence (AI), and on some of the broad themes emerging from the feedback received so far.


We launched this blog in March to provide regular updates on the development of the ICO Auditing Framework for AI, and to encourage organisations to engage with us on this work. Since then, we have set out the proposed overall structure of the framework and explored the data protection challenges and possible controls in relation to five specific risk areas:
  1. Meaningful human reviews in non-solely automated AI systems;
  2. Accuracy of AI system outputs and performance measures;
  3. Known security risks exacerbated by AI;
  4. Explainability of AI decisions to data subjects; and
  5. Human biases and discrimination in AI systems.
This ongoing and more informal approach to consultation is new, but we are delighted with the positive response and engagement it has generated so far.
The blog readership in particular has exceeded our expectations, with each post receiving more than 10,000 views. In addition, over 50 organisations and individuals have so far shared their thoughts with us, either by commenting publicly on the individual posts or via the project mailbox. Personally, I would have liked to see more comment and discussion in each blog’s comments section, but in practice most people have preferred to give us their feedback directly by email. This is still enormously valuable, of course, and we will keep trying to innovate in how we engage with people as we launch other initiatives.
The visibility generated by the blog has also created additional opportunities for the AI framework team to discuss our work with both small and large stakeholder groups at policy forums, roundtables and conferences. The breadth and depth of this engagement is helping to shape our thinking and approach. We have received detailed technical feedback on a number of aspects of AI and data protection, but some common, broader themes have also emerged.
A number of people asked us to clarify what we mean by “AI”. In practice, we use the term because it has become a mainstream way for organisations to refer to a range of technologies that mimic human thought. Some of these are quite new and ground-breaking, such as modern deep learning approaches to object recognition, while others, such as decision trees, have been around for much longer. But all of them have been referred to as AI at some point.
Before we launched the blogs we gave some thought to working on our own definition of AI, but we decided to focus our efforts on the underlying data protection issues instead. This is because, while AI has become an umbrella term, any technology underneath it will involve or enable the automated processing of large amounts of data. If this processing involves the use of personal data for the purposes of profiling, identification or decision-making, then we are concerned with any new or heightened data protection risks that may arise. The primary objective of our framework is to give us a solid methodology for assessing whether organisations using any such technologies are compliant with data protection law. Where necessary, the framework will also call out risks that are specific to particular technologies, such as facial recognition.
Some respondents have also argued that a number of issues, such as security, are not unique to AI. We agree that only a few of the risks arising from the use of AI will be completely new. For those that are not, our job is to understand how and to what extent they can be exacerbated by AI, and whether additional controls are required to identify, measure, mitigate and monitor them. We do not expect organisations to redesign their risk management practices from scratch, but we do expect them to review those practices and make sure they remain fit for purpose when AI is used to process personal data.

The post on human reviews in non-solely automated AI systems has attracted the most interest to date, generating more than 20,000 views. A number of stakeholders highlighted the trade-offs that having a human in the loop may entail: either a further erosion of privacy, if human reviewers need to consider additional personal data in order to validate or reject an AI-generated output, or the possible reintroduction of human biases at the end of an automated process. Trade-offs are a key area of risk in our framework, so we will make sure this feedback is reflected in the associated blog post, which we will publish soon.

Finally, some of the comments also highlighted an important shortcoming in our initial approach. We were aware that AI may be developed or run partially or completely by third parties, rather than in-house, but the feedback strongly suggests that this is the case most of the time. As we finalise our framework, we will need to give further consideration to the challenges this presents for data controllers in exercising adequate oversight and control.

I want to conclude by thanking all the stakeholders who have engaged with us so far, and by encouraging you to continue to do so until the end of October, when this initial consultation phase will close. Your feedback is genuinely valuable in improving our work, and we appreciate the time you are taking to help us.

Our plan remains to publish the formal consultation paper on our AI Auditing Framework no later than January 2020.

Simon McDougall is Executive Director for Technology Policy and Innovation at the ICO, where he is developing an approach to addressing new technological and online harms. He is particularly focused on artificial intelligence and data ethics.


Comments

  1. I doubt that most AIs will be third-party. I think that the majority of AIs will be self-contained $1 chips in every kind of IoT device. For example, a toaster that learns how each user prefers their bread cooked, scales that report their weight-loss progress, or mugs that track your coffee consumption. I would like you to make sure that your auditing framework is limited to the 1% of large-scale AIs which need it. See:
    https://www.theregister.co.uk/2019/06/04/neural_networks_microcontroller/

