The technology necessary for hands-free video gaming already exists, especially for head-gesture-centric controls. However, remapping controls to head gestures ranges from frustratingly tedious to impossible. I propose a common language of gestures and game actions that categorizes controls by how frequently they are used, with four categories: primary, secondary, tertiary, and quaternary. The most preferred gestures and most frequently used controls are classified as primary, while the least preferred gestures and least frequently used controls are classified as quaternary. I also propose building interface software and an API to gather data from game designers, hardware designers, and users in order to suggest optimized game controls for users requiring accessibility. For my technical project, I created a demo of one branch of this proposal: a game that helps players determine which gestures they can perform most accurately, so that those gestures may be paired with the controls most vital to successful gameplay.
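The pairing idea can be illustrated with a minimal sketch. All gesture names, accuracy scores, and control frequencies below are hypothetical placeholders, not data from the project; the sketch only shows the principle of ranking gestures by measured accuracy, ranking controls by use frequency, and pairing them off into the four proposed categories.

```python
# Sketch of the proposed pairing: a user's most accurate gestures are
# assigned to the most frequently used controls. All names and numbers
# here are illustrative assumptions.

CATEGORIES = ["primary", "secondary", "tertiary", "quaternary"]

def assign_categories(gesture_accuracy, control_frequency):
    """Rank gestures by accuracy and controls by use frequency,
    then pair them off category by category."""
    gestures = sorted(gesture_accuracy, key=gesture_accuracy.get, reverse=True)
    controls = sorted(control_frequency, key=control_frequency.get, reverse=True)
    mapping = {}
    for category, gesture, control in zip(CATEGORIES, gestures, controls):
        mapping[category] = {"gesture": gesture, "control": control}
    return mapping

# Hypothetical inputs: accuracy scores from a calibration game,
# control frequencies from a game designer's profile.
accuracy = {"nod": 0.95, "tilt_left": 0.80, "shake": 0.70, "tilt_right": 0.60}
frequency = {"move": 120, "jump": 45, "interact": 10, "open_menu": 2}

print(assign_categories(accuracy, frequency))
```

In this toy example, the user's most accurate gesture ("nod") lands on the most-used control ("move"), while the weakest gesture is reserved for the rarely used menu. The demo game described above would supply the accuracy side of this pairing.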
While experimenting with the eye-tracking software GazePointer, it occurred to me that both the tracker and my ability to cooperate with it were vital to using it successfully as a mouse. Some of my family members had an easier time working with the eye-tracker, while others only got a headache. Even though all of us have virtually the same eyesight, our abilities in conjunction with the eye-tracker varied greatly. Further, it was difficult for any of us to predict who would achieve greater accuracy with the eye-tracker.