Received: from apple.com by karazm.math.UH.EDU with SMTP id AA04334 (5.65c/IDA-1.4.4 for ); Mon, 14 Oct 1991 13:36:59 -0500
Received: by apple.com (5.61/1-Oct-1991-eef) id AA09655; Mon, 14 Oct 91 11:21:44 -0700 for
Received: by motcsd.csd.mot.com (/\=-/\ Smail3.1.18.1 #18.4) id ; Mon, 14 Oct 91 11:13 PDT
Received: by roi.ca41.csd.mot.com (smail2.5/CSDmail1.0, Motorola Inc.) id AA08572; 14 Oct 91 11:11:36 PDT (Mon)
To: galt%peruvian@cs.utah.edu
Subject: Re: simple code to get simple gestures
Cc: glove-list@karazm.math.uh.edu
Message-Id: <9110141111.AA08568@roi.ca41.csd.mot.com>
Date: 14 Oct 91 11:11:36 PDT (Mon)
From: Lance Norskog

Yes, Fred Brooks of the Pixel-Planes project said that predictive tracking should work best.

Graphics Gems II has a piece on predictive coding as a compression technique. Here's the idea: a state machine reads a sample stream, guesses the next sample, and outputs the difference between the guess and the actual sample. This should give you lots of samples with low values, suitable for Huffman or dictionary compression. I've been itching to try this on sound samples. The article does it on pictures and shows the error output as a separate picture. It's an excellent demonstration of the principle.

A first attempt would track the slope of the last few samples, thus extending the first derivative out. If you assume that the input stream is delayed by X milliseconds and that it samples at half your update rate, you can create a separate one-dimensional tracker for each of (X, Y, Z, rot) and do a better job than drawing from raw input. You can also reject weird inputs and average noisy ones.

If you move your hand fast and then stop, the predictive tracking will overshoot and come back to the resting place. C'est la vie.

I'd say it's time to sample movements, run the output through a statistics package, and figure out just what kind of error you need to deal with.

Lance Norskog
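
P.S. The predictive-coding idea above can be sketched in a few lines. This is a minimal illustration, not the Graphics Gems II code: it assumes the simplest possible predictor ("guess the previous sample"), and the function names (encode, decode) are my own.

```python
# Predictive (delta) coding sketch: the encoder guesses each sample
# from its history and emits only the residual (actual - guess).
# Predictor here is "repeat the last sample" -- an assumption, not
# the predictor from the Graphics Gems II article.

def encode(samples, predict=lambda history: history[-1] if history else 0):
    """Emit residuals: each actual sample minus the predictor's guess."""
    history, residuals = [], []
    for s in samples:
        residuals.append(s - predict(history))
        history.append(s)
    return residuals

def decode(residuals, predict=lambda history: history[-1] if history else 0):
    """Invert the coder: add each residual back onto the guess."""
    history = []
    for r in residuals:
        history.append(predict(history) + r)
    return history

smooth = [10, 12, 14, 15, 15, 14, 12]
res = encode(smooth)           # [10, 2, 2, 1, 0, -1, -2]
assert decode(res) == smooth   # lossless round trip
```

On a smooth signal the residuals cluster near zero, which is exactly what makes the stream compress well under Huffman or dictionary coding.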
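
P.P.S. And the "first attempt" tracker, extending the first derivative out, might look like this. The class name and the two-sample slope are my own choices; a real version would smooth over the last few samples and scale for the known delay.

```python
# One-dimensional predictive tracker: extrapolates along the slope of
# the last two samples (the first derivative). One instance per channel.

class Tracker1D:
    def __init__(self):
        self.prev = None   # sample before last
        self.curr = None   # most recent sample

    def update(self, sample):
        """Feed one raw sample from the delayed input stream."""
        self.prev, self.curr = self.curr, sample

    def predict(self, steps_ahead=1):
        """Extend the first derivative out by steps_ahead sample periods."""
        if self.curr is None:
            return 0.0
        if self.prev is None:
            return self.curr
        slope = self.curr - self.prev
        return self.curr + slope * steps_ahead

# One tracker per channel, as suggested for (X, Y, Z, rot):
trackers = {axis: Tracker1D() for axis in ("x", "y", "z", "rot")}
t = trackers["x"]
for s in (0.0, 1.0, 2.0):     # hand moving at constant speed
    t.update(s)
guess = t.predict(2)           # extrapolates two periods ahead: 4.0
```

Note that this sketch reproduces the overshoot behavior too: if the samples stop changing suddenly, predict() keeps running out along the old slope for one update before the history catches up.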