Abstract
AI-driven algorithmic systems are increasingly adopted across various sectors, yet their lack of transparency raises accountability concerns about claimed privacy protection measures. While machine-based audits offer one avenue for addressing these issues, they are often costly and time-consuming. Herd audit, by contrast, offers a promising alternative by leveraging the collective intelligence of end-users. However, epistemic disparity among auditors, that is, varying levels of domain expertise and access to relevant knowledge, captured here by the rational inattention model, may weaken audit assurance. An effective herd audit must establish a credible accountability threat for algorithm developers, incentivizing them not to breach user trust. In this work, our objective is to develop a systematic framework that explores the impact of herd audits on algorithm developers through the lens of the Stackelberg game. Our analysis reveals the importance of easy access to information and the appropriate design of rewards, as they increase the auditors' assurance in the audit process. In this context, herd audit serves as a deterrent to negligent behavior. Therefore, by enhancing herd accountability, herd audit contributes to responsible algorithm development, fostering trust between users and algorithms.
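To make the Stackelberg framing concrete, the sketch below sets up a toy leader-follower deterrence game: a developer (leader) decides whether to comply or breach, and a herd auditor (follower) chooses an attention level under an information-acquisition cost standing in for rational inattention. All parameter names and values here are hypothetical illustrations, not the paper's actual model or results.

```python
# Toy Stackelberg deterrence sketch (illustrative only; not the paper's model).
# Leader: algorithm developer chooses "comply" or "breach".
# Follower: herd auditor chooses an attention level a in [0, 1], which sets the
# probability of detecting a breach, paying an information cost cost_coeff * a**2
# (a stand-in for a rational-inattention cost) and earning reward * a
# whenever a breach is actually present.

import numpy as np

# Hypothetical parameters (not taken from the paper).
reward = 1.0        # auditor's reward for detecting a breach
cost_coeff = 0.8    # steepness of the auditor's information cost
breach_gain = 0.5   # developer's gain from breaching user trust
penalty = 2.0       # developer's penalty when a breach is detected

def auditor_best_response(breach: bool) -> float:
    """Follower's optimal attention level given the developer's action."""
    if not breach:
        return 0.0  # nothing to detect, so attention is pure cost
    # maximize reward*a - cost_coeff*a^2  ->  a* = reward / (2 * cost_coeff)
    return float(np.clip(reward / (2 * cost_coeff), 0.0, 1.0))

def developer_payoff(breach: bool) -> float:
    """Leader's payoff, anticipating the auditor's best response."""
    a = auditor_best_response(breach)
    return breach_gain - penalty * a if breach else 0.0

if __name__ == "__main__":
    payoffs = {action: developer_payoff(action) for action in (False, True)}
    best = max(payoffs, key=payoffs.get)
    print(f"developer payoff if complying: {payoffs[False]:.2f}")
    print(f"developer payoff if breaching: {payoffs[True]:.2f}")
    print("equilibrium action:", "breach" if best else "comply")
```

In this toy setup, a lower information cost (easier access to knowledge) or a larger detection reward raises the auditor's attention level, which lowers the developer's payoff from breaching and pushes the equilibrium toward compliance, mirroring the deterrence intuition in the abstract.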
| Original language | English (US) |
| --- | --- |
| Pages (from-to) | 2237-2251 |
| Number of pages | 15 |
| Journal | IEEE Transactions on Information Forensics and Security |
| Volume | 20 |
| DOIs | |
| State | Published - 2025 |
Keywords
- Algorithm audit
- Stackelberg game
- accountability
- privacy
- rational inattention
ASJC Scopus subject areas
- Safety, Risk, Reliability and Quality
- Computer Networks and Communications