Explainable artificial intelligence for unsupervised models


Buxton, R. 2022. Explainable artificial intelligence for unsupervised models. Lower Hutt, N.Z.: GNS Science. GNS Science report 2021/54. 24 p.; doi: 10.21420/3CGA-9E81.

Abstract

Explainable Artificial Intelligence (XAI) is a branch of Computer Science that is gaining popularity because it addresses the fundamental question of ‘trust in AI’: that is, when users should trust the outputs of Artificial Intelligence (AI) models (Ribeiro et al. 2016). GNS Science has been studying XAI since 2020 as part of a collaboration with Callaghan Innovation, whose focus has been on XAI applied to supervised AI models. For the 2020/21 financial year, GNS Science chose to examine what approaches might be available to AI practitioners who develop and require unsupervised models. The work takes the form of a review: first a general discussion of unsupervised approaches, including excerpts from publications, followed by a similar discussion of XAI approaches and of unsupervised approaches applied to XAI. (auth)
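
As a minimal illustration of the kind of approach such a review covers (an assumed example, not code from the report), one common pattern is to fit an interpretable surrogate model to the output of an unsupervised model, for instance explaining k-means cluster assignments with a shallow decision tree:

# Illustrative sketch only (not from the report): explain an unsupervised
# clustering by training an interpretable surrogate on its cluster labels.
from sklearn.datasets import load_iris
from sklearn.cluster import KMeans
from sklearn.tree import DecisionTreeClassifier, export_text

# Example data; feature names are used only to make the rules readable.
data = load_iris()
X, feature_names = data.data, list(data.feature_names)

# Unsupervised step: group the observations into three clusters.
clusters = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X)

# Explanation step: a shallow tree mimics the cluster assignments,
# turning the clustering into a small set of human-readable rules.
surrogate = DecisionTreeClassifier(max_depth=2, random_state=0).fit(X, clusters)
print(export_text(surrogate, feature_names=feature_names))

The printed rules approximate, rather than reproduce, the clustering; the trade-off between faithfulness of the surrogate and its interpretability is one of the central themes in the XAI literature cited above.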