A BRAIN INSPIRED MATHEMATICAL APPROACH TO VOLUMETRIC HAND GESTURE TEXTUALIZATION
DOI: https://doi.org/10.5281/zenodo.19452018

Keywords: Hyperdimensional Computing, Spatio-Temporal Modelling, Gesture Recognition, Air Writing, Gesture-to-Text Conversion

Abstract
Air-writing lets people write in the air using hand gestures, with no keyboard or touchscreen needed. The idea sounds futuristic, but it has been around for a while; the problem is that most systems built around it rely on deep learning models such as CNNs and RNNs. Those models are hungry: they need large datasets to train on and serious computing power to run, which makes them impractical for real-time or low-resource use. This work takes a different approach. Instead of deep learning, it uses Spatio-Temporal Hyperdimensional Computing (ST-HDC), a method inspired by how the brain processes information. A camera tracks hand movements, and the system converts those movements into very high-dimensional vectors called hypervectors, which encode both the shape and the timing of each gesture. The math involved is surprisingly simple (circular shifts to preserve stroke order, superposition to combine information), yet it gets the job done. One thing that stands out is one-shot learning: the system can recognise a gesture after seeing it just once, rather than needing dozens of labelled examples. The output is available as both text and speech, which matters to people who rely on alternative communication methods. The result is a system that is light on resources, runs fast, and works in real time, without the infrastructure overhead that makes most gesture recognition research hard to deploy.
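To make the encoding scheme concrete, here is a minimal sketch of the spatio-temporal HDC idea the abstract describes: positions are mapped to random bipolar hypervectors, each time step's vector is circularly shifted by its index to preserve stroke order, and the shifted vectors are superposed into one gesture hypervector. The grid size, dimensionality, and trajectories below are illustrative assumptions, not the paper's actual parameters or pipeline.

```python
import numpy as np

D = 10_000  # hypervector dimensionality (assumed; typical for HDC)
rng = np.random.default_rng(0)

def random_hv():
    """Random bipolar hypervector with +1/-1 entries."""
    return rng.choice([-1, 1], size=D)

def encode_trajectory(points, grid_hvs):
    """Encode a sequence of quantized hand positions as one hypervector.

    Each step's position hypervector is circularly shifted by its time
    index (binding temporal order), then all steps are superposed and
    bipolarized.
    """
    acc = np.zeros(D)
    for t, p in enumerate(points):
        acc += np.roll(grid_hvs[p], t)  # shift encodes stroke order
    return np.sign(acc)

def similarity(a, b):
    """Normalized dot product in [-1, 1]; near 0 for unrelated vectors."""
    return float(a @ b) / D

# Hypothetical 3x3 grid of spatial cells, one random hypervector per cell.
grid_hvs = {cell: random_hv() for cell in range(9)}

# One-shot "training": a single example becomes the class prototype.
gesture_A = encode_trajectory([0, 1, 2, 5, 8], grid_hvs)
gesture_B = encode_trajectory([8, 7, 6, 3, 0], grid_hvs)

# A repeat of gesture A matches its prototype far better than B's.
query = encode_trajectory([0, 1, 2, 5, 8], grid_hvs)
print(similarity(query, gesture_A))  # identical trajectory → 1.0
print(similarity(query, gesture_B))  # quasi-orthogonal → near 0
```

Because random high-dimensional bipolar vectors are nearly orthogonal, recognition reduces to a nearest-prototype lookup by dot product, which is why a single stored example per gesture can suffice.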
License

This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.