During Christmas 1989, thousands of kids whose parents had forked out $75 or more for the glove were utterly devastated by the product's failure to live up to the hype. Putting the glove on was amazing. "Now you're playing with POWER," the tagline promised. But after that brief excitement, the experience fell short. Actually using it was terrible.
The Problem is in the Design
The Power Glove was the first, in my memory, of a long list of failed attempts to commercialise natural gestural interfaces, or what's now called Spatial Computing. If the promise and ultimate goal of these imaginative interfaces is to make us feel like wizards (or at least like Tony Stark), what makes most of them have the opposite effect? When we dream of liberating ourselves from the confines of our restrictive hardware, why do these dreams manifest in ways that ultimately make us feel a bit silly? It all comes down to human-centred design.
Many of these products are solutions in search of problems. And, for one reason or another (tech or budget limitations, or rushed software), they have isolated the senses they employ for interaction. The human body is a sensorial miracle. In addition to the five senses typically taught in grade school, we employ a host of other sensory modalities to understand and interact with the world around us. So, when these technologies ask us to ignore some of these senses and suspend disbelief, they create a kind of user-experience dissonance. A lack of touch, sound, or tactility makes using these products feel disjointed, and makes us feel silly. As interaction designers, we need to be aware of these limitations at all times, and design not around the technology, but around the human beings using it.
Despite these many challenges, we continue to push the technology forward and innovate. A testament to spatial computing’s hold on our collective imagination.
Looking Back to Find a Way Forward for Touchless Interfaces
As COVID pushes touchless interfaces out of the realm of well-meaning fantasy and into the land of business-critical, perhaps even lifesaving, experiences, Valtech is here. Our innovation programme has the relevant experience to guide you past the potential pitfalls: the gimmicky, ill-conceived and disappointing letdowns that have historically plagued attempts to commercialise these technologies.
Let’s take a quick look at a few recent attempts at touchless gestural UI and talk through some of the challenges we faced in applying them in retail or experiential settings.
A myoelectric band that, when worn, can accurately read gestures performed by the wearer. Everything from a hand wave to a finger pinch is picked up through electrical signals in the forearm. This product suffered from requiring the wearer to actually have a piece of hardware attached. The advantage of hardware is the option for rudimentary haptic feedback, adding a sense of touch. But it still felt disjointed and quite labour-intensive to use. In a retail setting, the barrier of having to place hardware on someone was too high.
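To make the idea concrete, here is a deliberately toy sketch of inferring a gesture from forearm muscle activity. Everything in it is hypothetical: real myoelectric bands use multi-channel sensors and trained classifiers, not a single hand-picked threshold on one signal.

```python
# Illustrative only: a toy version of reading a gesture from forearm EMG.
# The threshold and the two-gesture vocabulary are invented for this sketch.

def detect_gesture(samples: list[float], fist_threshold: float = 0.6) -> str:
    """Classify a window of normalised EMG amplitudes (0..1)."""
    if not samples:
        return "none"
    # Mean rectified amplitude: a crude stand-in for muscle-activation level.
    envelope = sum(abs(s) for s in samples) / len(samples)
    return "fist" if envelope >= fist_threshold else "rest"

print(detect_gesture([0.7, 0.8, 0.9, 0.75]))  # strong contraction
print(detect_gesture([0.05, 0.1, 0.08]))      # relaxed forearm
```

The point of the sketch is the pipeline shape (raw signal, envelope, classification), not the numbers: it's that classification step, multiplied across many channels and gestures, that the band has to get right for the interaction to feel natural.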
An industry-leading skeletal tracking system that projects an invisible infrared dot grid into the real world, allowing stereoscopic digital cameras to see and track objects in the environment in 3D. Kinect's genius is also its downfall: the infrared tracking field is so perfectly invisible that you don't know where you need to stand, or worse, when you've stepped out of the bounds of the controllable area. It's too easy for the machine to stop seeing you without you knowing. We've solved this challenge in the past by creating dedicated, confined spaces where a person and the UI are intrinsically linked by real physical boundaries, or by designing experiences that are triggered by natural movement through liminal spaces. Kinect is great for having a digital wall react to a passing crowd in a hallway.
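The boundary problem above is, at heart, a feedback-design problem. Here is a minimal, hypothetical sketch (this is not the Kinect API; the coordinates, ranges and margins are invented) of how an installation might warn users before the invisible tracking volume loses them:

```python
# Hypothetical sketch: giving users feedback before an invisible tracking
# volume loses them. All names, ranges and thresholds are illustrative.

def tracking_feedback(x: float, z: float,
                      x_range=(-1.5, 1.5), z_range=(0.8, 3.5),
                      warn_margin: float = 0.3) -> str:
    """Return a UI feedback state for a user at (x, z) metres from the sensor."""
    def zone(value, lo, hi):
        if value < lo or value > hi:
            return "lost"
        if value < lo + warn_margin or value > hi - warn_margin:
            return "warn"
        return "ok"

    states = {zone(x, *x_range), zone(z, *z_range)}
    if "lost" in states:
        return "lost"   # show a "step back into view" prompt
    if "warn" in states:
        return "warn"   # e.g. dim the cursor, glow the nearest edge
    return "ok"         # full interaction enabled

print(tracking_feedback(0.0, 2.0))   # comfortably in view
print(tracking_feedback(1.4, 2.0))   # drifting toward the lateral edge
print(tracking_feedback(0.0, 4.0))   # out of range
```

A physical enclosure makes this logic unnecessary, which is exactly why the confined-space approach works so well: the walls do the warning for you.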
A miniaturisation of the same tech as the Kinect, but intended for a desktop setting: a small puck that lived in front of your keyboard and allowed you to wave your hand around in the air to control a PC. The Leap Motion suffered from the same challenge as the Kinect: the invisibility of its field. But all is not lost for the Leap. After its recent acquisition by the UK-based start-up Ultrahaptics, we could see the sense of touch added to its interactions. Applied correctly, this has the potential to solve the disjointedness most touchless spatial experiences suffer from. Combined with clear visualisation and affordances for the user, this kind of technology could be leveraged to give a person the superhuman sense of wizardry we collectively crave.
In Minority Report, Tom Cruise’s character puts on gloves. These allow the machine to track his movement, but also, I like to imagine, allow him to feel the digital media he’s manipulating in 3D space. If you can’t replicate touch, you need to compensate by ramping up sight and sound.
Overall, our research has led us to believe that the best controller is still the one your customers already have on them: their mobile devices. But in circumstances where the interaction requires lower friction than pulling a handset out of one's pocket, technologies that track the environment for safe, touchless gestural control could and should be employed. It's critical to leverage as many of the audience's senses as possible. Think directional speakers for high-fidelity binaural sound cues, like the ones we helped create for Dolby's Soho experience store, or haptic feedback to exploit the sense of touch, or really clear graphical feedback to make an experience as intuitive as possible. No matter how you engineer the experiences, you must put humans at the centre.
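The principle of engaging as many senses as possible can be sketched as a tiny dispatcher that fans a single interaction event out to whichever feedback channels an installation actually has. The channel names and messages below are purely illustrative:

```python
# Illustrative sketch: one gesture event fans out to every available
# feedback channel, so the interaction lands on multiple senses at once.

class FeedbackHub:
    def __init__(self):
        self.channels = {}  # name -> handler for that sensory channel

    def register(self, name, handler):
        self.channels[name] = handler

    def dispatch(self, event):
        # Fire every sense we can reach; the more modalities respond,
        # the less "disjointed" the touchless interaction feels.
        return [handler(event) for handler in self.channels.values()]

hub = FeedbackHub()
hub.register("visual", lambda e: f"highlight widget for {e}")
hub.register("audio", lambda e: f"play binaural cue for {e}")
hub.register("haptic", lambda e: f"pulse haptics for {e}")

print(hub.dispatch("pinch-select"))
```

The design point is redundancy: if one modality (say, haptics) isn't available in a given installation, the event still registers through sight and sound, keeping the experience from feeling disjointed.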
We have a history of innovative experimentation making these technologies work, and we can make them work hard for your customers. Reach out to schedule a brainstorm to see how we can help you avoid the pitfalls of novelty in the future of touchless technologies.