A03: AI
Modality Crossing Based on Latent Structural Understanding in Multimodal Dialogue Translation
![](../img/A003_main.png)
Can AI understand human gestures and sign languages? This question is not yet easy to answer. We are developing a deep learning platform for the advanced recognition of the multimodal time-series data contained in human gestures and sign languages.
![](../img/member/A03-nakayama.png)
Hideki Nakayama
Principal Investigator
Research Area: Development of Latent Variable Models and Modality-Crossing Techniques
Associate Professor, The University of Tokyo
![](../img/member/A03-satou.png)
Shin'ichi Satoh
Co-Investigator
Research Area: Development of Deep Learning Techniques on Large-Scale Multimedia Data
Professor, National Institute of Informatics
![](../img/member/A03-nishida.png)
Noriki Nishida
Co-Investigator
Research Area: Computational Linguistic Analysis and Model Design for Dialogue and Sign Language Translation
Postdoctoral Researcher, RIKEN
![](../img/member/A03-miyao.png)
Yusuke Miyao
Research Collaborator
Research Area: Technical Cooperation on Natural Language Processing and Computational Linguistics
Professor, The University of Tokyo
![](../img/member/A03-kasawa.png)
Hideto Kazawa
Research Collaborator
Research Area: Technical Cooperation on Natural Language Processing and Machine Translation