TY - JOUR
T1 - Visagreement
T2 - Visualizing and Exploring Explanations (Dis)Agreement
AU - Silva, Priscylla
AU - Guardieiro, Vitoria
AU - Barr, Brian
AU - Silva, Claudio
AU - Nonato, Luis Gustavo
N1 - Publisher Copyright:
© 1995-2012 IEEE.
PY - 2025
Y1 - 2025
N2 - The emergence of distinct machine learning explanation methods has raised a number of new issues to investigate. The disagreement problem is one such issue, as there are scenarios in which the outputs of different explanation methods disagree with one another. Although understanding how often, when, and where explanation methods agree or disagree is important for increasing confidence in the explanations, few works have investigated this problem. In this work, we propose Visagreement, a visualization tool designed to assist practitioners in investigating the disagreement problem. Visagreement builds upon metrics that quantitatively compare and evaluate explanations, providing visual resources to uncover where and why methods mostly agree or disagree. The tool is tailored to tabular data with binary classification and focuses on local feature importance methods. In the provided use cases, Visagreement proved effective in revealing, among other phenomena, how disagreements relate to the quality of the explanations and to machine learning model accuracy, thus helping users decide where and when to trust explanations. To assess the effectiveness and practical utility of Visagreement, we conducted an evaluation with four experts, who assessed the tool's effectiveness, usability, and impact on decision-making. The experts confirmed the tool's effectiveness and user-friendliness, making it a valuable asset for analyzing and exploring (dis)agreements.
AB - The emergence of distinct machine learning explanation methods has raised a number of new issues to investigate. The disagreement problem is one such issue, as there are scenarios in which the outputs of different explanation methods disagree with one another. Although understanding how often, when, and where explanation methods agree or disagree is important for increasing confidence in the explanations, few works have investigated this problem. In this work, we propose Visagreement, a visualization tool designed to assist practitioners in investigating the disagreement problem. Visagreement builds upon metrics that quantitatively compare and evaluate explanations, providing visual resources to uncover where and why methods mostly agree or disagree. The tool is tailored to tabular data with binary classification and focuses on local feature importance methods. In the provided use cases, Visagreement proved effective in revealing, among other phenomena, how disagreements relate to the quality of the explanations and to machine learning model accuracy, thus helping users decide where and when to trust explanations. To assess the effectiveness and practical utility of Visagreement, we conducted an evaluation with four experts, who assessed the tool's effectiveness, usability, and impact on decision-making. The experts confirmed the tool's effectiveness and user-friendliness, making it a valuable asset for analyzing and exploring (dis)agreements.
KW - Data models
KW - Machine learning
KW - Modeling
KW - Simulation
KW - System applications and experience
KW - Visualization
UR - http://www.scopus.com/inward/record.url?scp=105002127221&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=105002127221&partnerID=8YFLogxK
U2 - 10.1109/TVCG.2025.3558074
DO - 10.1109/TVCG.2025.3558074
M3 - Article
AN - SCOPUS:105002127221
SN - 1077-2626
JO - IEEE Transactions on Visualization and Computer Graphics
JF - IEEE Transactions on Visualization and Computer Graphics
ER -