TY - GEN
T1 - The Algorithmic Transparency Playbook
T2 - 2023 CHI Conference on Human Factors in Computing Systems, CHI 2023
AU - Bell, Andrew
AU - Nov, Oded
AU - Stoyanovich, Julia
N1 - Funding Information:
This research was supported in part by NSF Awards No. 1934464, 1922658, 1916505, 1928614, and 2129076.
Author Biography:
Julia Stoyanovich is Institute Associate Professor of Computer Science and Engineering, Associate Professor of Data Science, and Director of the Center for Responsible AI at New York University. Julia’s goal is to make “Responsible AI” synonymous with “AI.” She works towards this goal by engaging in academic research, education and technology policy, and by speaking about the benefits and harms of AI to practitioners and members of the public. Julia’s research interests include AI ethics and legal compliance, data management and AI systems, and computational social choice. She developed and teaches technical courses on responsible data science, co-developed an online course AI Ethics: Global Perspectives, is a co-creator of two comic book series on responsible AI, and a co-designer of a public education course We are AI: Taking Control of Technology. Julia is engaged in technology policy in the US and internationally, having served on the New York City Automated Decision Systems Task Force, by mayoral appointment, among other roles. She received M.S. and Ph.D. degrees in Computer Science from Columbia University, and a B.S. in Computer Science and in Mathematics and Statistics from the University of Massachusetts at Amherst. Julia is a recipient of an NSF CAREER award and a Senior Member of the ACM.
Publisher Copyright:
© 2023 Owner/Author.
PY - 2023/4/19
Y1 - 2023/4/19
N2 - Welcome to 2033, the year when AI, while not yet sentient, can finally be considered responsible. Only systems that work well, improve efficiency, are fair, law-abiding, and transparent are in use today. It's AI nirvana. You ask yourself: "How did we get here?" You may have played a major role! As more organizations use algorithmic systems, there is a need for practitioners, industry leaders, managers, and executives to take part in making AI responsible. In this course, we provide a playbook for influencing positive change and implementing algorithmic transparency in your organization's algorithmic systems.
AB - Welcome to 2033, the year when AI, while not yet sentient, can finally be considered responsible. Only systems that work well, improve efficiency, are fair, law-abiding, and transparent are in use today. It's AI nirvana. You ask yourself: "How did we get here?" You may have played a major role! As more organizations use algorithmic systems, there is a need for practitioners, industry leaders, managers, and executives to take part in making AI responsible. In this course, we provide a playbook for influencing positive change and implementing algorithmic transparency in your organization's algorithmic systems.
KW - algorithmic transparency
KW - course
KW - fundamentals of HCI
KW - responsible AI
UR - http://www.scopus.com/inward/record.url?scp=85158077456&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85158077456&partnerID=8YFLogxK
U2 - 10.1145/3544549.3574169
DO - 10.1145/3544549.3574169
M3 - Conference contribution
AN - SCOPUS:85158077456
T3 - Conference on Human Factors in Computing Systems - Proceedings
BT - CHI 2023 - Extended Abstracts of the 2023 CHI Conference on Human Factors in Computing Systems
PB - Association for Computing Machinery
Y2 - 23 April 2023 through 28 April 2023
ER -