Seamless man-machine interaction
the Immersive Content & Communication Platform
LETHO is a proprietary Immersive Content & Communication Platform (ICCP) that collates, organizes, analyzes and distributes the visual, aural, embodied, and spatial aspects of interactions and environments, and the relationships between them. LETHO is the underlying software platform that provides the infrastructure for every aspect of a digital, multimodal mixed-reality environment.
LETHO picks up sensory input and produces output tailored to the human senses.
The server houses and provides high-end services. These services can be accessed from a browser, or through the Letho clients when external hardware is involved.
The Letho server consists of two parts: a core system, developed in C++ and based on a three-tier pattern, and a complementary system that provides management, visualization and object-control services, based on the Model-View-Controller (MVC) pattern.
The main services provided by the Letho server are:
The Letho client has been developed for Windows, using C++ and C# and for Android using C++ and Java; it is based on an Event-driven pattern.
The Letho client allows you to manage a set of hardware and software “devices”:
The Letho architecture lets you create complex applications in a simple and flexible way, even when a variety of hardware and software devices have to be managed. The application logic is handled at the configuration level, and the peculiarities of the individual devices and interpreters are managed through the configuration file. The architecture also provides the libraries needed to create new features and/or extend existing ones.
The following are the top benefits of the Letho client:
At present, the following devices and interpreters have already been integrated into and are supported by the Letho client:
The Letho architecture results in code that is reusable, flexible and reliable.
For more information about Letho or the Q-portfolio, please call Jan Christian Berger on +47 995 85 462 or send him an e-mail.
Since the arrival of the personal computer, user input has for the most part been limited to the keyboard, the mouse and, later, touch screens. With Letho, there are virtually no limits to the inputs and outputs that can be used. Via Letho, the Q-products can recognize gender and speech (in multiple languages), and can even distinguish one person's voice from another. In addition, they can interpret non-verbal behavior involving parts of the body or even the whole body: gestures, postures, hand movements, age, pictures, colors, movement, music and sound effects. In short, Letho covers virtually any form of communication, making the interaction truly multimodal.