Interactively articulating virtual 3D characters lies at the heart of computer animation and geometric modeling. Expressive articulation requires control over many degrees of freedom: most often the joint angles of an internal skeleton. We introduce a physical input device, assembled on the fly, to control any character's skeleton directly. With traditional mouse and keyboard input, animators must rely on indirect methods such as inverse kinematics, or decompose complex, integrated motions into smaller sequential manipulations: for example, iteratively positioning each bone of a skeleton hierarchy. While direct-manipulation mouse and touch interfaces are successful in 2D [Shneiderman 1997], 3D interactions with 2D input are ill-posed and thus more challenging. Successful commercial products with 2D interfaces, e.g. Autodesk's Maya, have notoriously steep learning curves and require interface-specific training.