Content Type: Research and analysis item

Towards auditable AI systems

Abstract

Artificial Intelligence (AI) systems play an ever-growing role in decision and control systems across diverse applications, among them security- and safety-critical domains such as mobility, biometrics and medicine. AI technologies such as deep neural networks offer new opportunities, for example superior performance compared to traditional IT technologies. At the same time, they pose new challenges with regard to IT security, safety, robustness and trustworthiness. To meet these challenges, a generally agreed-upon framework for auditing AI systems is required, comprising evaluation strategies, tools and standards; however, these are either still under development or not yet ready for practical use. This whitepaper first summarizes the opportunities and challenges of AI systems and then presents the state of the art of AI system auditability, focusing on the following aspects: the AI life cycle; online learning and model maintenance in the presence of drift; adversarial and backdoor poisoning attacks and defenses against them; verification; auditing of safety-critical AI systems; explaining black-box AI models; and AI standardization. Despite substantial progress on all of these aspects, an overarching open issue remains: the (often multi-faceted) trade-offs between desired characteristics of the system, e.g. robustness, security, safety and auditability, on the one hand, and characteristics of the AI model, ML algorithm, data and further boundary conditions on the other. These trade-offs restrict the scalability and generalizability of current AI systems. To eventually leverage the opportunities of AI technologies in a secure, safe, robust and trustworthy way, two strategies should be combined:

    1. Taking the above-mentioned trade-offs into account, favorable boundary conditions should be selected for the given task;
    2. Available technologies should be advanced through substantial investment in R&D to eventually enable secure and safe AI systems even under complex boundary conditions and thereby improve scalability and generalizability.

As a first step, the focus should be on selected security- and safety-critical use cases. Available standards, guidelines and tools should be exploited, and interdisciplinary exchange between researchers and industry should be further promoted, in order to find the best combination of available criteria and tools for achieving auditable, secure, safe and robust AI systems for each specific use case. As a second step, insights from these use cases should be used to generalize the results and to build up a modular toolbox that can subsequently be applied to other use cases. On this basis, technical guidelines and, subsequently, standards should be developed. Ideally, the outcome will be a generally applicable set of criteria and tools for making AI systems sufficiently auditable, safe and secure.
