Toshiba’s Clerk Robot Talks With Sign Language

CEATEC Japan 2014, like any major tech event, was filled with new ideas and refinements of existing technologies. That was especially true for Toshiba (TYO: 6502), whose booth and information desk were staffed by a robot that communicates with its hands.

Introduced as Chihara Aiko, this humanoid clerk robot is roughly the size of an average person, with lifelike physical features to match. Affectionately called Ms. Aiko, the robot is Toshiba's concept for a guidance robot that can communicate in sign language in social welfare and healthcare settings. In essence, it is designed to combine actions and gestures that sync with its vocal responses.

Built with modern robotics and AI technology, it can perform a considerable range of gestures and greetings depending on the setting and situation. While the fluidity of its movements is not as advanced as some previously designed humanoid robots, its 43 actuators still let it produce sign language gestures in a more or less standard, if slightly uncanny, manner.
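Toshiba has not detailed how Aiko coordinates its speech with its signing, but one simple way to picture the idea is as a small vocabulary that maps each spoken response to a timed sequence of actuator targets. The Python sketch below is purely illustrative: the joint names, angles, timings, and the perform helper are assumptions made for the sake of the example, not Toshiba's implementation.

```python
from dataclasses import dataclass
from typing import Dict, List

# Hypothetical illustration: pairing a spoken phrase with a timed sequence
# of actuator targets for the matching sign. Joint names, angles, and
# timings are invented for this example.

@dataclass
class MotionKeyframe:
    time_s: float                     # offset from the start of the utterance
    joint_targets: Dict[str, float]   # joint name -> target angle in degrees

@dataclass
class SignedPhrase:
    spoken_text: str
    keyframes: List[MotionKeyframe]

# A tiny "vocabulary": each entry maps a greeting to speech plus gesture keyframes.
VOCABULARY: Dict[str, SignedPhrase] = {
    "hello": SignedPhrase(
        spoken_text="Hello, welcome to the booth.",
        keyframes=[
            MotionKeyframe(0.0, {"right_shoulder": 40.0, "right_elbow": 90.0}),
            MotionKeyframe(0.6, {"right_wrist": 20.0}),
            MotionKeyframe(1.2, {"right_wrist": -20.0}),
        ],
    ),
}

def perform(phrase_key: str) -> None:
    """Print the speech and the synchronized motion plan for a phrase."""
    phrase = VOCABULARY[phrase_key]
    print(f"Speaking: {phrase.spoken_text}")
    for frame in phrase.keyframes:
        print(f"  t={frame.time_s:.1f}s -> move joints {frame.joint_targets}")

if __name__ == "__main__":
    perform("hello")
```

Expanding the robot's repertoire in a scheme like this would amount to adding more entries to the vocabulary, which is roughly what Toshiba describes when it talks about growing Aiko's sign language vocabulary.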

Chihara Aiko was a collaborative effort between multiple research and academic institutions. Its expressions and eye movements were designed by aLab Inc. and Osaka University, while the Shibaura Institute of Technology and the Shonan Institute of Technology were responsible for its sensors and motion systems.

As it is still a concept, the number of sign language expressions it can perform is quite limited. Toshiba, however, stated that it plans to significantly expand the robot's sign language vocabulary in the near future, allowing it to provide information and more varied responses via hand gestures.

Toshiba plans to use the sign language robot at the upcoming 2020 Olympics, where enhanced versions could be installed to assist the elderly and people with hearing impairments. Before that, however, the company wants to have a fully functioning model as early as next year.