The machines just got a little bit smarter
LETHO is a proprietary, multimodal Interactive Communications Platform (ICP) that collates, organizes, analyses and distributes the visual, aural, embodied, and spatial aspects of interaction and environments, and the relationships between them. It is the infrastructure that makes the Q-product portfolio work, by creating a multimodal mixed reality environment.
Letho picks up sensory input and produces output modelled on the human senses:
The products in the Quintetto portfolio are all based on the Letho ICP.
The server houses and provides high-end services. These services can be accessed from a browser, or by Letho clients in deployments where external hardware is involved.
The Letho client has been developed for Windows, using C++ and C#, and for Android, using C++ and Java; it is based on an event-driven pattern.
The Letho client allows you to manage a set of hardware and software “devices”:
The Letho architecture allows you to create complex applications in a simple and flexible way where a variety of hardware and software devices need to be managed. The logic is handled at the configuration level: the peculiarities of the various devices and interpreters are managed through a configuration file. Letho also provides the libraries needed to create new features or extend existing ones.
The following are the top benefits of the Letho client:
At present, the following devices and interpreters have already been integrated into and are supported by the Letho client:
The Letho architecture results in code that is reusable, flexible and reliable.
For more information about Letho or the Q-portfolio, please call Jan Christian Berger on +47 995 85 462 or send him an e-mail.
Since the arrival of personal computers, user input has for the most part been limited to the keyboard, the mouse and, later, touch screens. With Letho, there are virtually no limits to the inputs and outputs that can be used. Via Letho, the Q-products can recognize gender and speech (in multiple languages), and even distinguish one person's voice from another's. In addition, Letho can understand non-verbal behavior involving parts of the body or the whole body: gestures, postures, hand movements, age, pictures, colors, movement, music and sound effects. It covers virtually any form of communication, which is what makes it multimodal.