FEETICHE: FEET Input for Contactless Hand gEsture Interaction

Abstract: Foot input has been proposed to support hand gestures in many interactive contexts; however, little attention has been given to contactless 3D object manipulation. This gap matters because many applications, such as sterile surgical theaters, require contactless operation. Relying solely on hand gestures makes it difficult to specify precise interactions, since hand movements are hard to segment into command and interaction modes; the unfortunate results range from unintended activations to noisy interactions and misrecognized commands. In this paper, we present FEETICHE, a novel set of multi-modal interactions combining hand and foot input to support contactless 3D manipulation tasks while standing in front of large displays, with mode switching driven by foot tapping and heel rotation. We use depth-sensing cameras to capture both hand and foot gestures, and we developed a simple yet robust motion-capture method to track dominant-foot input. Through two experiments, we assess how well foot gestures support mode switching and how this frees the hands to perform accurate manipulation tasks. Results indicate that users effectively rely on foot gestures to improve mode switching, and reveal improved accuracy on both rotation and translation tasks.
Document type: Conference papers
Contributor: Kathleen TORCK
Submitted on: Monday, May 23, 2022
Last modification on: Wednesday, September 7, 2022



Daniel Simões Lopes, Filipe Relvas, Soraia Figueiredo Paulo, Yosra Rekik, Laurent Grisoni, et al.. FEETICHE: FEET Input for Contactless Hand gEsture Interaction. 17th ACM SIGGRAPH International Conference on Virtual-Reality Continuum and its Applications in Industry, VRCAI 2019, Nov 2019, Brisbane QLD, Australia. pp.1-10, ⟨10.1145/3359997.3365704⟩. ⟨hal-03675358⟩


