TY - GEN
T1 - Automatically identifying targets users interact with during real world tasks
AU - Hurst, Amy
AU - Hudson, Scott E.
AU - Mankoff, Jennifer
PY - 2010
Y1 - 2010
N2 - Information about the location and size of the targets that users interact with in real world settings can enable new innovations in human performance assessment and software usability analysis. Accessibility APIs provide some information about the size and location of targets. However this information is incomplete because it does not support all targets found in modern interfaces and the reported sizes can be inaccurate. These accessibility APIs access the size and location of targets through low-level hooks to the operating system or an application. We have developed an alternative solution for target identification that leverages visual affordances in the interface, and the visual cues produced as users interact with targets. We have used our novel target identification technique in a hybrid solution that combines machine learning, computer vision, and accessibility API data to find the size and location of targets users select with 89% accuracy. Our hybrid approach is superior to the performance of the accessibility API alone: in our dataset of 1355 targets covering 8 popular applications, only 74% of the targets were correctly identified by the API alone.
AB - Information about the location and size of the targets that users interact with in real world settings can enable new innovations in human performance assessment and software usability analysis. Accessibility APIs provide some information about the size and location of targets. However this information is incomplete because it does not support all targets found in modern interfaces and the reported sizes can be inaccurate. These accessibility APIs access the size and location of targets through low-level hooks to the operating system or an application. We have developed an alternative solution for target identification that leverages visual affordances in the interface, and the visual cues produced as users interact with targets. We have used our novel target identification technique in a hybrid solution that combines machine learning, computer vision, and accessibility API data to find the size and location of targets users select with 89% accuracy. Our hybrid approach is superior to the performance of the accessibility API alone: in our dataset of 1355 targets covering 8 popular applications, only 74% of the targets were correctly identified by the API alone.
KW - Computer accessibility
KW - Pointing input
KW - Target identification
KW - Usability analysis
UR - http://www.scopus.com/inward/record.url?scp=77951109801&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=77951109801&partnerID=8YFLogxK
U2 - 10.1145/1719970.1719973
DO - 10.1145/1719970.1719973
M3 - Conference contribution
AN - SCOPUS:77951109801
SN - 9781605585154
T3 - International Conference on Intelligent User Interfaces, Proceedings IUI
SP - 11
EP - 20
BT - IUI 2010 - Proceedings of the 14th ACM International Conference on Intelligent User Interfaces
T2 - 14th ACM International Conference on Intelligent User Interfaces, IUI 2010
Y2 - 7 February 2010 through 10 February 2010
ER -