TY - JOUR
T1 - Explainable Artificial Intelligence for Tabular Data
T2 - A Survey
AU - Sahakyan, Maria
AU - Aung, Zeyar
AU - Rahwan, Talal
N1 - Funding Information:
The work of Maria Sahakyan was supported by Khalifa University, Abu Dhabi, UAE, which provided a Ph.D. scholarship and research facilities.
Publisher Copyright:
© 2021 IEEE.
PY - 2021
Y1 - 2021
N2 - Machine learning techniques are gaining increasing attention due to their widespread use in various disciplines across academia and industry. Despite their tremendous success, many of these techniques suffer from the 'black-box' problem, which refers to situations where the data analyst is unable to explain why the model arrives at a certain decision. This problem has fuelled interest in Explainable Artificial Intelligence (XAI), which refers to techniques that can easily be interpreted by humans. Unfortunately, many of these techniques are not suitable for tabular data, which is surprising given the importance and widespread use of tabular data in critical applications such as finance, healthcare, and criminal justice. Also surprising is the fact that, despite the vast literature on XAI, there are still no survey articles to date that focus on tabular data. Consequently, although existing survey articles cover a wide range of XAI techniques, it remains challenging for researchers working on tabular data to go through all of these surveys and extract the techniques that are suitable for their analysis. Our article fills this gap by providing a comprehensive and up-to-date survey of the XAI techniques that are relevant to tabular data. Furthermore, we categorize the references covered in our survey, indicating the type of model being explained, the approach used to provide the explanation, and the XAI problem being addressed. Our article is the first to provide researchers with a map that helps them navigate the XAI literature in the context of tabular data.
AB - Machine learning techniques are gaining increasing attention due to their widespread use in various disciplines across academia and industry. Despite their tremendous success, many of these techniques suffer from the 'black-box' problem, which refers to situations where the data analyst is unable to explain why the model arrives at a certain decision. This problem has fuelled interest in Explainable Artificial Intelligence (XAI), which refers to techniques that can easily be interpreted by humans. Unfortunately, many of these techniques are not suitable for tabular data, which is surprising given the importance and widespread use of tabular data in critical applications such as finance, healthcare, and criminal justice. Also surprising is the fact that, despite the vast literature on XAI, there are still no survey articles to date that focus on tabular data. Consequently, although existing survey articles cover a wide range of XAI techniques, it remains challenging for researchers working on tabular data to go through all of these surveys and extract the techniques that are suitable for their analysis. Our article fills this gap by providing a comprehensive and up-to-date survey of the XAI techniques that are relevant to tabular data. Furthermore, we categorize the references covered in our survey, indicating the type of model being explained, the approach used to provide the explanation, and the XAI problem being addressed. Our article is the first to provide researchers with a map that helps them navigate the XAI literature in the context of tabular data.
KW - Black-box models
KW - explainable artificial intelligence
KW - machine learning
KW - model interpretability
UR - http://www.scopus.com/inward/record.url?scp=85117052096&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85117052096&partnerID=8YFLogxK
U2 - 10.1109/ACCESS.2021.3116481
DO - 10.1109/ACCESS.2021.3116481
M3 - Article
AN - SCOPUS:85117052096
SN - 2169-3536
VL - 9
SP - 135392
EP - 135422
JO - IEEE Access
JF - IEEE Access
ER -