Over the last couple of months we took our off-browser interest one step further and experimented again with gesture-based interaction. This self-initiated project and its challenges brought the meeting point of technology, experience design and product communication to a human-scale Full HD screen.
We look at gesture as articulated by Kurtenbach and Hulteen: "A motion of the body that contains information. Waving goodbye is a gesture." Products evoke a wide range of emotions; this was our starting point. We investigated human-product interaction, summing up the story of the user and the product in seven stages. Accordingly, we conceptualised, designed and built seven interactions: Discoverer, Approacher, Explorer, Customiser, Experiencer, Reacher and Player.
Each of the interactions embodies a stage of the story through its own gestures and the emotion it stimulates. Thus a dialogue takes place between the user and the screen, and hence the product.
Emotions are at the very core of the experiment's design process. In a nutshell, our human-product inspiration storyline begins with (1) surprise: something unexpected happens and grasps the attention of the user, who discovers the product for the first time. Following this initial astonishment, the user, (2) wondering, approaches the product, shortening the distance and getting closely acquainted with the object. Gaining (3) interest and wanting to know more, the user steps into the world of the product and learns more about it until the state of (4) desire is reached. Imagination plays a role and the user looks forward to making the product their own; they customise it. Building a connection with the product, the user tries it, foresees their own usage of the object and consequently (5) admires it. The process naturally continues with the user (6) pursuing the product and literally reaching for it. It only comes to an end when the user (7) acquires the product and takes pleasure in experiencing it, reaching a state of satisfaction.
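The storyline above can be summarised as an ordered mapping from interaction stage to the emotion it targets. This is purely an illustrative sketch of our reading of the seven stages, not code from the project:

```python
# Illustrative sketch: the seven Caseture stages paired with the emotion
# each one targets, in the order of the storyline. The pairing is our
# reading of the narrative above, not an artefact of the project itself.
STORYLINE = [
    ("Discoverer", "surprise"),
    ("Approacher", "wonder"),
    ("Explorer", "interest"),
    ("Customiser", "desire"),
    ("Experiencer", "admiration"),
    ("Reacher", "pursuit"),
    ("Player", "satisfaction"),
]

def stage_for_emotion(emotion):
    """Return the interaction stage that targets a given emotion."""
    for stage, feeling in STORYLINE:
        if feeling == emotion:
            return stage
    raise KeyError(emotion)
```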
A progressive framework
Caseture showcases the new potential of product communication and service experiences. It embodies the various ways in which products and experiences act as emotional stimuli, and the concerns that correspond with these stimuli. Caseture's gesture-based interactions are an example of how this approach provides solutions suited to the context and environment of both the user and the brand.
Caseture is designed to be contextualised based on the concept it encompasses, and is therefore unlimited in what it can communicate and to whom. The frame of the experiment can revolve around a brand, a product, a service or an independent experience. Moreover, it is flexible and open to one or more users: while some interactions are designed with a single user in mind (Customiser), others allow a collaborative experience for multiple users (Player).
We took a minimalist approach to the visual language of Caseture. This keeps the product and service as the main elements of the experience and puts the emphasis on the interaction and gestures. In other words, the art direction and branding of Caseture are supplementary to the overall experience; the simplicity of the icons and graphical elements reflects the simplicity of the gestures and interactions.
Caseture is developed in Unity, a development platform for creating games and interactive experiences. We worked at Full HD resolution and 60 FPS to achieve the high-end visual results we aimed for. The project integrated 3D objects, animations, timelines and films, each with its own learning curve and experience level. The execution used one 3D camera and is adaptable to multiple cameras.
Gesture detection is computed from 3D camera input received via the open-source OpenNI drivers. Caseture can include one or multiple 3D cameras; in our demo case, we used an Asus Xtion camera (640×480 at 30 fps). A Java application (Fusion) combines the sensory input into a single point cloud and computes the gestures.
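The core of the fusion step is transforming each camera's point cloud into a shared world frame and merging the results. A minimal sketch, assuming each camera's pose is known as a 4×4 extrinsic matrix (the real Fusion application is a Java program; this is purely illustrative):

```python
import numpy as np

def fuse_point_clouds(clouds, extrinsics):
    """Merge per-camera point clouds into one cloud in a shared world frame.

    clouds     -- list of (N_i, 3) arrays of points in each camera's coordinates
    extrinsics -- list of 4x4 camera-to-world transform matrices (assumed known,
                  e.g. from a prior calibration step)
    """
    fused = []
    for points, T in zip(clouds, extrinsics):
        # Homogeneous coordinates: append a column of ones, transform, drop w.
        homo = np.hstack([points, np.ones((len(points), 1))])
        world = (T @ homo.T).T[:, :3]
        fused.append(world)
    return np.vstack(fused)
```

With a single camera and an identity extrinsic, the fused cloud is simply the input; a second camera's points are shifted into place by its own transform before concatenation.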
Compared to other technologies, we can detect bodies at large distances as well as hands that are very close to the display (about 1 metre). The user can therefore stand quite close to the display, while other people in the area can still be tracked.
In Caseture we have so far implemented three gesture detections:
- Position and velocity of the body
- Position of the hand relative to the body
- Head position tracking
Using this information, we detect hand gestures such as swiping, dragging, pointing and pushing.
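As an illustration of how such gestures can be derived from the tracked points, a swipe or push can be classified from the velocity of the hand relative to the body between frames. The axes and thresholds below are hypothetical, not the project's actual values:

```python
# Hypothetical sketch of velocity-based gesture classification.
# Axis convention assumed here: x = sideways, y = up, z = towards the display.
# Positions in metres, velocities in metres per second.

def hand_velocity(prev_rel, curr_rel, dt):
    """Velocity of the hand relative to the body between two frames."""
    return tuple((c - p) / dt for p, c in zip(prev_rel, curr_rel))

def classify_gesture(velocity, swipe_thresh=1.0, push_thresh=0.8):
    """Label a fast sideways motion as a swipe, a fast forward motion as a push."""
    vx, vy, vz = velocity
    if abs(vx) > swipe_thresh and abs(vx) > abs(vz):
        return "swipe-right" if vx > 0 else "swipe-left"
    if vz > push_thresh:
        return "push"
    return None  # too slow or ambiguous: no gesture fired
```

Using the hand position relative to the body (rather than in world coordinates) makes the detection independent of where the user stands in front of the screen.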
The new Unity 5 Standard Shader and real-time global illumination allowed us to perfect our interaction design. In addition to Unity's standard feature set, we developed custom shaders for "Experiencer" and an efficient multicore cloth simulation for "Discoverer" and "Player". Fusion and Unity are connected via Google's Protocol Buffers on top of a binary WebSocket.
Through Caseture we push the boundaries of innovative digital experiences, and by sharing our process we join the discussion on gesture-based interaction. While we continue to experiment, our framework already allows for limitless adaptation to embody any specific goal.