Received: from apple.com by karazm.math.UH.EDU with SMTP id AA09475 (5.65c/IDA-1.4.4 for ); Tue, 15 Oct 1991 13:15:13 -0500
Received: by apple.com (5.61/1-Oct-1991-eef) id AA10508; Tue, 15 Oct 91 10:59:42 -0700 for
Received: by motcsd.csd.mot.com (/\=-/\ Smail3.1.18.1 #18.4) id ; Tue, 15 Oct 91 10:55 PDT
Received: by roi.ca41.csd.mot.com (smail2.5/CSDmail1.0, Motorola Inc.) id AA18005; 15 Oct 91 10:53:18 PDT (Tue)
To: dstamp@watserv1.uwaterloo.ca, galt%peruvian@cs.utah.edu
Subject: Re: simple code to get simple gestures
Cc: glove-list@karazm.math.uh.edu
Message-Id: <9110151053.AA18001@roi.ca41.csd.mot.com>
Date: 15 Oct 91 10:53:17 PDT (Tue)
From: Lance Norskog

Now the bad news. This type of filter will always have overshoots and delays at any change of velocity of the glove. This means that the image of the glove will play catch-up when motion starts, continue moving after motion ceases, and then go backwards to correct itself. This can be disconcerting to the user!

This follows the signal-processing paradigm. An extension of the mouse-cursor paradigm is another possibility. If you're willing to forgo the direct-mapping concept, prediction from the glove inputs can just "push" a 3D position around. Changes in direction and speed can be handled as special cases to avoid the overshoot/correction effect.

Being 31 (as of this weekend) instead of 12 years old, I don't plan to hold my hand up to the screen for several hours at a time anyway. I've been testing with the sensors arranged on the floor and letting my hand hang down. I don't see that I need direct mapping to use the glove in this mode, so the 3D-cursor-pushing scheme should work fine.

Lance Norskog
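[Editorial sketch: the filter code under discussion isn't shown in this message, but a minimal alpha-beta tracker (one common predictive filter; the gains and the step pattern below are arbitrary choices, not from the original post) reproduces the behavior described: the estimate plays catch-up when motion starts, keeps moving after the glove stops, and then backs up to correct itself.]

```python
# Minimal alpha-beta tracking filter: predict position from estimated
# velocity, then correct toward each new measurement. Gains are arbitrary.
def alpha_beta_track(samples, alpha=0.5, beta=0.2, dt=1.0):
    x, v = samples[0], 0.0   # initial position and velocity estimates
    out = []
    for z in samples:
        x += v * dt          # predict ahead using current velocity estimate
        r = z - x            # residual between measurement and prediction
        x += alpha * r       # correct position toward the measurement
        v += beta * r / dt   # correct velocity estimate
        out.append(x)
    return out

# Simulated glove: at rest, then moving at constant speed, then an
# abrupt stop -- the case where the overshoot shows up.
samples = [0.0] * 5 + [float(i) for i in range(1, 11)] + [10.0] * 10
est = alpha_beta_track(samples)
```

Running this, `est` lags behind `samples` while motion ramps up, climbs past 10.0 for a few samples after the glove stops, and then dips back below the target while settling: exactly the catch-up / keep-going / back-up sequence described above.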
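[Editorial sketch: one hypothetical reading of the "3D-cursor-pushing" scheme (the function, gain, and deadband below are illustrative assumptions, not the author's code): each glove sample pushes the cursor by the glove's per-axis motion, and the "glove stopped" case is special-cased with a deadband so the cursor halts instantly instead of overshooting and correcting.]

```python
# Push a 3D cursor by the glove's motion since the last sample.
# Deltas below the deadband are treated as "glove is still" and ignored,
# so a stop is handled as a special case with no overshoot to correct.
def push_cursor(cursor, prev_glove, glove, gain=1.0, deadband=0.05):
    pushed = []
    for c, p, g in zip(cursor, prev_glove, glove):
        d = g - p                # per-axis glove motion this sample
        if abs(d) < deadband:    # essentially still: no push, no drift
            d = 0.0
        pushed.append(c + gain * d)
    return pushed

cursor = [0.0, 0.0, 0.0]
prev = [0.0, 0.0, 0.0]
# Glove moves along x, then jitters within the deadband (i.e. stops).
for glove in ([0.1, 0.0, 0.0], [0.3, 0.0, 0.0], [0.31, 0.0, 0.0]):
    cursor = push_cursor(cursor, prev, glove)
    prev = glove
```

After the loop the cursor sits exactly where the pushes left it (x = 0.3); the final jitter sample moves it nowhere, so there is no trailing motion or backward correction to disconcert the user.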