A03: AI
Modality Crossing Based on Latent Structural Understanding in Multimodal Dialogue Translation

Can AI understand human gestures and sign languages? This remains far from a simple question. We are developing a deep learning platform for advanced recognition of the multimodal time-series data contained in human gestures and sign languages.

Hideki Nakayama
Principal Investigator
Research Area: Development of Latent Variable Models and Modality-Crossing Techniques
Associate Professor, The University of Tokyo

Shin'ichi Satoh
Co-Investigator
Research Area: Development of Deep Learning Techniques on Large-Scale Multimedia Data
Professor, National Institute of Informatics

Noriki Nishida
Co-Investigator
Research Area: Computational Linguistic Analysis and Model Design for Dialogue and Sign Language Translation
Postdoctoral Researcher, RIKEN

Yusuke Miyao
Research Collaborator
Research Area: Technical Cooperation on Natural Language Processing and Computational Linguistics
Professor, The University of Tokyo

Hideto Kazawa
Research Collaborator
Research Area: Technical Cooperation on Natural Language Processing and Machine Translation