T(ether): Spatially-aware handhelds, gestures and proprioception for multi-user 3D modeling and animation

Dávid Lakatos, Matthew Blackshaw, Alex Olwal, Zachary Barryte, Ken Perlin, Hiroshi Ishii

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

Abstract

T(ether) is a spatially-aware display system for multi-user, collaborative manipulation and animation of virtual 3D objects. The handheld display acts as a window into virtual reality, providing users with a perspective view of 3D data. T(ether) tracks users' heads, hands, fingers and pinching, in addition to a handheld touch screen, to enable rich interaction with the virtual scene. We introduce gestural interaction techniques that exploit proprioception to adapt the UI based on the hand's position above, behind or on the surface of the display. These spatial interactions use a tangible frame of reference to help users manipulate and animate the model in addition to controlling environment properties. We report on initial user observations from an experiment for 3D modeling, which indicate T(ether)'s potential for embodied viewport control and 3D modeling interactions.
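The abstract describes adapting the UI based on whether the tracked hand is above, behind, or on the surface of the handheld display. As a minimal illustrative sketch (not the paper's implementation), this zone classification can be modeled as a signed-distance test of the hand position against the display plane; the function names, threshold, and coordinate conventions below are assumptions for illustration only.

```python
# Hypothetical sketch of proprioceptive mode switching as described in the
# abstract: classify the tracked hand relative to the handheld display plane.
# All names and the touch threshold are illustrative, not from the paper.
from dataclasses import dataclass
from typing import Tuple

Vec3 = Tuple[float, float, float]

@dataclass
class DisplayPlane:
    origin: Vec3   # a tracked point on the display surface (meters)
    normal: Vec3   # unit normal pointing toward the user's side

def signed_distance(plane: DisplayPlane, point: Vec3) -> float:
    """Signed distance of a point from the display plane along its normal."""
    return sum(n * (p - o) for n, o, p in zip(plane.normal, plane.origin, point))

def ui_zone(plane: DisplayPlane, hand: Vec3, touch_eps: float = 0.02) -> str:
    """Map a tracked hand position to one of three interaction zones."""
    d = signed_distance(plane, hand)
    if abs(d) <= touch_eps:
        return "on-surface"   # touch-screen interaction on the display
    return "above" if d > 0 else "behind"  # gestural zones in front of/behind it
```

For a display at the origin facing +z, a hand at `(0, 0, 0.3)` would classify as `"above"`, `(0, 0, -0.3)` as `"behind"`, and `(0, 0, 0.01)` as `"on-surface"`.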

Original language: English (US)
Title of host publication: SUI 2014 - Proceedings of the 2nd ACM Symposium on Spatial User Interaction
Publisher: Association for Computing Machinery, Inc
Pages: 90-93
Number of pages: 4
ISBN (Electronic): 9781450328203
DOIs
State: Published - Oct 4 2014
Event: 2nd ACM Symposium on Spatial User Interaction, SUI 2014 - Honolulu, United States
Duration: Oct 4 2014 - Oct 5 2014

Publication series

Name: SUI 2014 - Proceedings of the 2nd ACM Symposium on Spatial User Interaction

Other

Other: 2nd ACM Symposium on Spatial User Interaction, SUI 2014
Country: United States
City: Honolulu
Period: 10/4/14 - 10/5/14

Keywords

  • 3D modeling
  • 3D user interfaces
  • Collaborative
  • Gestural interaction
  • Multi-user
  • Spatially-aware displays
  • VR

ASJC Scopus subject areas

  • Human-Computer Interaction

