Grants-in-Aid for Scientific Research (JSPS KAKENHI) for FY2022 (Grant-in-Aid for Transformative Research Areas (B))

Publications

2025

Lectures

Noriki Nishida, Koji Inoue, Hideki Nakayama, Mayumi Bono, Katsuya Takanashi. (2025). Do Multimodal Large Language Models Truly See What We Point At? Investigating Indexical, Iconic, and Symbolic Gesture Comprehension. The 63rd Annual Meeting of the Association for Computational Linguistics (ACL 2025). (July 27-August 1, Vienna, Austria)

Mayumi Bono, Noriki Nishida, Koji Inoue, Tomohiro Okada, Naoya Fujikawa, Yutaka Osugi, Hideki Nakayama, Katsuya Takanashi, and Shin'ichi Satoh. (2025). Relevant Annotation for Multimodal Interaction Analysis: AI Bridging Gestures and Signs. The 10th Conference of the International Society for Gesture Studies (ISGS10). (July 9-11, Netherlands)



2024

Papers

・Takato Hayashi, Ryusei Kimura, Ryo Ishii, Shogo Okada. (2025). Investigating Role of Big Five Personality Traits in Audio-Visual Rapport Estimation. The 19th IEEE International Conference on Automatic Face and Gesture Recognition 2025.

Keiko Sagara, Kang-Suk Byun, Yu Yi Chen and Nick Palfreyman. (2024). Grammaticalisation and semantic change in East Asian Sign Languages. 16th Biennial High Desert Linguistics Society Conference (HDLS16).

Bono, M., Okada, T., Skobov, V., Adam, R. (2024). Data Integration, Annotation, and Transcription Methods for Sign Language Dialogue with Latency in Videoconferencing. 11th Workshop on the Representation and Processing of Sign Languages: Evaluation of Sign Language Resources, LREC-COLING 2024. Proceedings of the 11th Workshop on the Representation and Processing of Sign Languages, 26-35.

・Jiaxuan Li, Duc Minh Vo, Akihiro Sugimoto, Hideki Nakayama. (2024). EVCap: Retrieval-Augmented Image Captioning with External Visual-Name Memory for Open-World Comprehension. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR).

・Yi-Pei Chen, Noriki Nishida, Hideki Nakayama, Yuji Matsumoto. (2024). Recent Trends in Personalized Dialogue Generation: A Review of Datasets, Methodologies, and Evaluations. Proceedings of the 2024 Joint International Conference on Computational Linguistics, Language Resources and Evaluation (LREC-COLING).

Lectures

Bono, M. (2025). Embodied Semiotics: Can AI technologies capture meanings of gestures and signs? 14th US-Japan Hi-Tech Industrialization Forum. (March 14, Silicon Valley, USA) (Hybrid format)

Noriki Nishida, Koji Inoue, Hideki Nakayama, Mayumi Bono, Katsuya Takanashi. (2025). (In Japanese) To what extent do MLLMs understand gestures? Questioning indexicality, iconicity, and symbolicity. NLP2025. (March 11, Dejima Messe Nagasaki, Japan)

Rui Sakaida, Koji Inoue, Yui Sakao, Yukiko Nakabayasi, Daisuke Yokomori, Katsuya Takanashi. (2025). (In Japanese) An attempt at annotating action sequences in movement scenes of a multimodal corpus: The case of the SC corpus. The 103rd SLUD. (March 22, Waseda University, Japan)

Mori, H. and Fujikawa, N. (2025). On the post-nominal pointing sign in JSL. Workshop on Sign Language Linguistics. (February 15, The University of Tokyo, Japan)

Ryosaku Makino. (2025). B01 Report on the results: 'Tracrin'. The Closing Conference of Embodied Semiotics. (February 11, National Institute of Informatics, Japan)

Hideki Nakayama, Minh-Duc Vo, Junwen Mo. (2025). A03 Report on the results: 'YouTube as a resource for LLM'. The Closing Conference of Embodied Semiotics. (February 11, National Institute of Informatics, Japan)

Mayumi Bono, Tomohiro Okada. (2025). A02 Report on the results: 'Japanese Sign Language Corpus of Daily Conversation'. The Closing Conference of Embodied Semiotics. (February 11, National Institute of Informatics, Japan)

Koji Inoue, Katsuya Takanashi. (2025). A01 Report on the results: 'Annotation design based on semiotics'. The Closing Conference of Embodied Semiotics. (February 11, National Institute of Informatics, Japan)

Takeshi Enomoto. (2025). Indexes and AI recognition: From the viewpoint of semiotics. The Closing Conference of Embodied Semiotics. (February 10, National Institute of Informatics, Japan)

Junwen Mo, Minh-Duc Vo, Hideki Nakayama. (2025). Overview of pointing detection using AI. The Closing Conference of Embodied Semiotics. (February 10, National Institute of Informatics, Japan)

Rui Sakaida. (2025). Overview and the case of pointing in the SC corpus. The Closing Conference of Embodied Semiotics. (February 10, National Institute of Informatics, Japan)

Tomohiro Okada. (2025). Overview and the case of pointing in the JSL Corpus. The Closing Conference of Embodied Semiotics. (February 10, National Institute of Informatics, Japan)

Kouhei Kikuchi. (2025). Multimodal Interaction Studies with AI on our side: Overview. The Closing Conference of Embodied Semiotics. (February 10, National Institute of Informatics, Japan)

Tomohiro Okada. (2025). Report on the research results of the UK and Japan Cross-Signing Project: Research report. The Closing Conference of Embodied Semiotics. (February 10, National Institute of Informatics, Japan)

Yutaka Osugi. (2025). Report on the research results of the UK and Japan Cross-Signing Project: Opening remarks. The Closing Conference of Embodied Semiotics. (February 10, National Institute of Informatics, Japan)

Keiko Sagara, Kang-Suk Byun, Yu Yi Chen and Nick Palfreyman. (2025). Semantic change and grammaticalization in Japanese Sign Language, South Korean Sign Language and Taiwan Sign Language. TISLR15 Pre-Conference Event. (January 14, Elilly International Hotel, Addis Ababa, Ethiopia)

Taira, E., Bono, M., Okada, T. (2025). (In Japanese) A study of Japanese Sign Language near-synonyms expressing 'to see', based on the Japanese Sign Language Corpus. TISLR15 Pre-Conference Event. (January 14, Elilly International Hotel, Addis Ababa, Ethiopia)

Okada, T., & Bono, M. (2025). Recording methods, data management and analysis. Workshop on Sign Languages in Africa and Sign Language Field Work, TISLR15 Pre-Conference Event. (January 12, Elilly International Hotel, Addis Ababa, Ethiopia)

Keiko Sagara. (2025). Semantic domains in sign languages: Typological data on number, color and kinship. TISLR15 Pre-Conference Event. (January 11, Elilly International Hotel, Addis Ababa, Ethiopia)

・Khanh-An C. Quan, Camille Guinaudeau, Shin'ichi Satoh. (2025). Evaluating VQA Models' Consistency in the Scientific Domain. The International Conference on MultiMedia Modeling (MMM2025). (January 8-10, Nara, Japan)

・Xiangyu Chen, Shin'ichi Satoh. (2025). Balancing Efficiency and Accuracy: An Analysis of Sampling for Video Copy Detection. The International Conference on MultiMedia Modeling (MMM2025). (January 8-10, Nara, Japan)

・Elise Lincker, Camille Guinaudeau, Shin'ichi Satoh. (2025). AD2AT: Audio Description to Alternative Text, a Dataset of Alternative Text from Movies. The International Conference on MultiMedia Modeling (MMM2025). (January 8-10, Nara, Japan)

・Saumya Yadav, Elise Lincker, Caroline Huron, Stephanie Martin, Camille Guinaudeau, Shin'ichi Satoh, and Jainendra Shukla. (2025). Towards Inclusive Education: Multimodal Classification of Textbook Images for Accessibility. The International Conference on MultiMedia Modeling (MMM2025). (January 8-10, Nara, Japan)

Keiko Sagara, Kang-Suk Byun, Yu Yi Chen and Nick Palfreyman. (2024). Grammaticalisation and semantic change in East Asian Sign Languages. 16th Biennial High Desert Linguistics Society Conference (HDLS16). (November 1, University of New Mexico, USA)

Keiko Sagara. (2024). (In Japanese) Regional variants of sign vocabulary in Hokkaido and their apparent change: Focusing on differences in vocabulary use between older and younger generations. The 50th Conference of the Japanese Association of Sign Linguistics. (December 15, Research Center for Advanced Science and Technology, the University of Tokyo, Japan)

Rui Sakaida. (2024). (In Japanese) Turn-taking embedded in activities. EMCA2024, Autumn. (November 24, Kwansei Gakuin University, Japan)

・Jingqiao Xiu, Mengze Li, Wei Ji, Jingyuan Chen, Hanbin Zhao, Shin'ichi Satoh, and Roger Zimmermann. (2024). Hierarchical Debiasing and Noisy Correction for Cross-domain Video Tube Retrieval. ACM Multimedia [Full Paper]. (October 28-November 1, Melbourne, Australia)

Bono, M. (2024). Sign Language Research and AI: How sign language linguists and AI researchers are collaborating fruitfully. Conference for Language, Culture, Teaching and Learning in the Digital Age. (October 27)

・Ziling Huang, Shin'ichi Satoh. (2024). LoA-Trans: Enhancing Visual Grounding by Location-Aware Transformers. Proc. of European Conference on Computer Vision (ECCV2024). (September 29-October 4, Milano, Italy)

・Naoto Naka, Shin'ichi Satoh. (2024). Query Refinement for Non-Existent Items in Image Retrieval. International Conference on Content-Based Multimedia Indexing (CBMI2024). (September 18-20, Reykjavik, Iceland)

Takeshi Enomoto. (2024). Body, Language, Metacommunication: Semiotic Reality of Human Multimodal Communication in Context. 16th World Congress of the International Association for Semiotic Studies (IASS/AIS). (September 2-6, Warsaw, Poland)

Ryosaku Makino, Takahito Horiuchi, Kouhei Kikuchi. (2024). (In Japanese) Tracrin 2.5: New possibilities for qualitative interaction-analysis research using information technology. HCS・VNV. (August 24, University of Hyogo, Japan)

Bono, M. (2024). How should linguists collaborate with AI researchers?: An effective way that sign language interaction research can lead. CTISS presentation at Centre for Translation & Interpreting Studies in Scotland (CTISS). (June 13)

Bono, M., Okada, T., Skobov, V., and Adam, R. (2024). Data Integration, Annotation, and Transcription Methods for Sign Language Dialogue with Latency in Videoconferencing. 11th Workshop on the Representation and Processing of Sign Languages: Evaluation of Sign Language Resources, LREC-COLING 2024. (May 25, Torino, Italy)

Bono, M. (2024). Research on Mutual Understanding in Sign Languages and Finger Braille Communication: A Report from Japan. Research meeting at Deafness Cognition and Language Research Centre (DCAL). (May 10, University College London)

Books

Naoya Fujikawa. (2025). (In Japanese) 'I am sorry if I caused any misunderstanding': The words of politics / the politics of words. Kodansha.

・Toyotaro Suzumura, Mayumi Bono (Eds.). (2024). New Frontiers in Artificial Intelligence: JSAI International Symposium on Artificial Intelligence, JSAI-isAI 2024, Hamamatsu, Japan, May 28-29, 2024, Proceedings (Lecture Notes in Computer Science, 14741). Springer.

Mayumi Bono, Yasufumi Takama, Ken Satoh, Le-Minh Nguyen and Setsuya Kurahashi (Eds.). (2024). New Frontiers in Artificial Intelligence: JSAI-isAI 2023 International Workshops, JURISIN, SCIDOCA, EmSemi and AI-Biz, Kumamoto, Japan, June 4-6, 2023, Revised Selected Papers (Lecture Notes in Computer Science, 14644). Springer.

Katsuya Takanashi. (2024). (In Japanese) Grounding and common ground, closed worlds and open worlds. In Midori Tanimura, Kouitirou Nakamoto, Etuko Yosida (Eds.), Interaction and Dialogue: Exploring research methods from multiple perspectives. Kaitakusha.

Mayumi Bono. (2024). (In Japanese) Textual cohesion as seen in sign language and finger braille. In Mitiaki Saitou, Dezyen Syuu (Eds.), Cohesion in discourse, writing, and text (pp. 191-206). Izumi Syoin.

MISC

Yutaka Osugi. (2025). Publication of Sign Language Database. Sign Language Dictionary (Osaka): Published in 1963 (Showa 38).

Yutaka Osugi. (2025). Publication of Sign Language Database. Sign language for university life.

・Fuminori Nagasawa, Shogo Okada. (2024). JSAI Incentive Award. (In Japanese) An interview robot dialogue system with an adaptive dialogue strategy based on speaking-willingness estimation.

・Fuminori Nagasawa, Shogo Okada. (2024). (In Japanese) An interview robot dialogue system with an adaptive dialogue strategy based on speaking-willingness estimation. SLUD, 100.

Rui Sakaida, Mayumi Bono. (2024). Excellent Field Research Report Award, Japanese Association of Qualitative Psychology, for the 2023 paper (In Japanese) The work of finger braille interpreters conveying multimodality to deafblind people.

Mayumi Bono. (2024). Selected as a recipient of the 53rd (2024) Mitsubishi Foundation Humanities Research Grant for Large Collaborative Research, and selected as the representative recipient at the Mitsubishi Foundation grant presentation ceremony. (September 11, Tokyo Kaikan)

・Column: Mayumi Bono. (2024). (In Japanese) Editors' postscript. Cognitive Studies, 31(2), 428.

Embodied Semiotics Awardees

Outstanding research award for doctoral thesis
・A02 Victor SKOBOV
・A03 Yi-Pei Chen
・A03 Kuanchao Chu
・A03 Jiaxuan Li

Outstanding research award for master's thesis
・A01 Yui Sakao

Distinguished Service Award
・A01 Seungkyoo Yang
・A02 Tomohiro Okada
・A03 Junwen Mo
・A03 Minh-Duc Vo
・B01 Takahito Horiuchi
・B01 Masako Ogata
・X00 Arisu Ishikawa


2023

Papers

・Kai Katsumata, Duc Minh Vo, Tatsuya Harada, Hideki Nakayama. (2024). Soft Curriculum for Learning Conditional GANs with Noisy-Labeled and Uncurated Unlabeled Data. Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision (WACV), 5323-5332.

・Kai Katsumata, Duc Minh Vo, Hideki Nakayama. (2024). Label Augmentation as Inter-class Data Augmentation for Conditional Image Synthesis with Imbalanced Data. Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision (WACV), 4944-4953.

・Fuminori Nagasawa, Shogo Okada. (2024). Investigating Effects of Multimodal Topic-continuance Recognition on Human-Robot Interviewing Interaction. Companion of the 2024 ACM/IEEE International Conference on Human-Robot Interaction.

・Tamotsu Miyama, Shogo Okada. (2024). A multimodal dialogue system for customer service based on user personality adaptation and dialogue strategies. Adv. Robotics 38(4), 195-210.

・Yuan Gao, Longbiao Wang, Jiaxing Liu, Jianwu Dang, Shogo Okada. (2024). Adversarial Domain Generalized Transformer for Cross-Corpus Speech Emotion Recognition. IEEE Transactions on Affective Computing.

・Zelong Zeng, Fan Yang, Hong Liu, Shin'ichi Satoh. (2024). Improving Deep Metric Learning via Self-Distillation and Online Batch Diffusion Process.

Victor Skobov, Mayumi Bono. (2023). Making Body Movement in Sign Language Corpus Accessible for Linguists and Machines with Three-Dimensional Normalization of MediaPipe. Findings of the 2023 Conference on Empirical Methods in Natural Language Processing (EMNLP) (Long paper).

Mayumi Bono, Rui Sakaida, Kanato Ochiai, Satoshi Fukushima. (2023). Intersubjective Understanding in Finger Braille Interpreter-mediated Interaction: Two Case Studies of Other-initiated Repair. Lingua, Elsevier. https://doi.org/10.1016/j.lingua.2023.103569

・Innhwa Park, Rachel S. Y. Chen, Jan Gorisch, Song Hee Park, Nadja Tadic, Eiko Yasui. (2023). The use of interlocking multi-unit turns in topic shifts.

・Yi-Pei Chen, An-Zi Yen, Hen-Hsen Huang, Hideki Nakayama, Hsin-Hsi Chen. (2023). LED: A Dataset for Life Event Extraction from Dialogs. Proceedings of the 17th Conference of the European Chapter of the Association for Computational Linguistics (EACL), Findings Volume.

Rui Sakaida, Mayumi Bono. (2023). (In Japanese) The work of finger braille interpreters conveying multimodality to deafblind people. Japanese Journal of Qualitative Psychology, No. 20, S118-S124.

Takeshi Enomoto. (2023). (In Japanese) On 'qualia': Basic theoretical considerations for application to communication analysis II. Joint Research Project on Language and Culture 2022, Language and Society (2), 1-10.

・Hung Le, Sixia Li, Candy Olivia Mawalim, Hung-Hsuan Huang, Chee Wee Leong, Shogo Okada. (2023). Investigating the Effect of Linguistic Features on Personality and Job Performance Predictions. HCI (15) 14025 LNCS, 370-383.

・Xiguang Li, Candy Olivia Mawalim, Shogo Okada. (2023). Inter-person Intra-modality Attention Based Model for Dyadic Interaction Engagement. HCI (15) 14025 LNCS, 91-105.

・Candy Olivia Mawalim, Shogo Okada, Yukiko I. Nakano, Masashi Unoki. (2023). Personality trait estimation in group discussions using multimodal analysis and speaker embedding. Journal on Multimodal User Interfaces 17(2), 47-63.

・Su Shwe Yi Tun, Shogo Okada, Hung-Hsuan Huang, Chee Wee Leong. (2023). Multimodal Transfer Learning for Oral Presentation Assessment. IEEE Access 11, 84013-84026.

・Takato Hayashi, Candy Olivia Mawalim, Ryo Ishii, Akira Morikawa, Atsushi Fukayama, Takao Nakamura, Shogo Okada. (2023). A Ranking Model for Evaluation of Conversation Partners Based on Rapport Levels. IEEE Access 11, 73024-73035.

・Fuminori Nagasawa, Shogo Okada, Takuya Ishihara, Katsumi Nitta. (2023). Adaptive Interview Strategy Based on Interviewees Speaking Willingness Recognition for Interview Robots. IEEE Transactions on Affective Computing, 1-17.

・Minh Duc Vo, An Quoc Luong, Akihiro Sugimoto, Hideki Nakayama. (2023). A-CAP: Anticipation Captioning with Commonsense Knowledge. Proceedings of the Conference on Computer Vision and Pattern Recognition (CVPR), 10824-10833.

Lectures

Katsuya Takanashi, Rui Sakaida, Eiko Yasui, Atsushi Yamamoto, Takeshi Enomoto. (2024). (In Japanese) From multimodal sequential analysis of body movements in interaction toward embodied semiotics. [Workshop] JASS48. (March 8, Fukuoka Women's University, Japan)

Ryosaku Makino, Kouhei Kikuchi, Takahito Horiuchi. (2024). (In Japanese) Tracrin 2.0: Examining methods to support shaping visual representations of conversation data. [Poster Presentation] SLUD100. (February 29, National Institute for Japanese Language and Linguistics, Japan)

Atsushi Yamamoto, Ryosaku Makino. (2024). (In Japanese) Can communication disorders be understood through interaction analysis? Organizing perspectives on capturing atypical individuals. [Poster Presentation] SLUD100. (February 29, National Institute for Japanese Language and Linguistics, Japan)

Yutaka Osugi. (2024). Japanese Sign Language Map and Japanese Sign Languages in Old Films. (February 21, Heriot-Watt University)

Rui Sakaida. (2024). (In Japanese) Multimodality as a resource for overcoming bodily asymmetry: From interactions between visually impaired or deafblind people and sighted people. Research on language issues in multilingual and multicultural societies symposium. (February 10, National Institute for Japanese Language and Linguistics, Japan)

Mayumi Bono. (2024). 'Embodied Semiotics': National Institute of Informatics Bono Lab, The University of Tokyo Nakayama Lab. The 22nd Sign Language Research Seminar. (February 4, Japan National Center of Sign Language Education, Japan)

Eiji Taira, Mayumi Bono, Tomohiro Okada. (2023). (In Japanese) A study of Japanese Sign Language near-synonyms expressing 'to see', based on the Japanese Sign Language Corpus. The 49th Conference of the Japanese Association of Sign Linguistics. (December 9, Research Center for Advanced Science and Technology, The University of Tokyo, Japan)

Takeshi Enomoto, Kaho Nakagawa. (2023). (In Japanese) On multimodal semiotics and its implementability. The 8th Osaka University Toyonaka Research Exchange Meeting. (December 8, The University of Osaka, Japan)

・Ziling Huang, Shin'ichi Satoh. (2023). Referring Image Segmentation via Joint Mask Contextual Embedding Learning and Progressive Alignment Network. The 2023 Conference on Empirical Methods in Natural Language Processing (EMNLP) [Main Conference]. (December 6-10, Singapore)

Keiko Sagara. (2023). (In Japanese) Historical change of vocabulary in Japanese Sign Language and Taiwan Sign Language, and the current situation of Taiwan Sign Language. Study meeting on issues concerning the relationship between social development, the state, and sign language in developing countries. (December 1, Online)

Takeshi Enomoto. (2023). (In Japanese) Multimodal semiotics from linguistic anthropology. Study group on the natural history of communication. (November 20)

・Lin Kejun, Zhixiang Wang, Zheng Wang, Yinqiang Zheng, Shin'ichi Satoh. (2023). Beyond Domain Gap: Exploiting Subjectivity in Sketch-Based Person Retrieval [Full Paper]. ACM Multimedia. (October 29-November 3, Canada)

Keiko Sagara. (2023). Diachronic change and variation within and between sign languages in Japan, Taiwan and South Korea: The impact of language contact. (October 24, National Chung Cheng University, Taiwan)

Eiko Yasui. (2023). (In Japanese) 'Yoisho' in instruction using the body: Presenting body movement and bodily sensation. [Invited Speech]. The 17th Meeting of the Dynamic Pragmatics Association. (October 7, The University of Osaka, Japan)

・Ruth Holmes, Ellen Rushe, Mathieu De Coster, Maxim Bonnaerens, Shin'ichi Satoh, Akihiro Sugimoto, and Anthony Ventresque. (2023). From Scarcity to Understanding: Transfer Learning for the Extremely Low Resource Irish Sign Language. The 11th Workshop on Assistive Computer Vision and Robotics, in conjunction with ICCV2023. (October 3, France)

Tomoko Ohsuga. (2023). (In Japanese) NII's efforts in sharing language resources. [Invited Speech]. LRW2023. (August 28, Online)

Rui Sakaida, Nam-Gyu Kang. (2023). Multimodal analysis of joint attention in remote co-creative design process. 25th International Conference on Human-Computer Interaction. (July 23-28, Denmark)

Takeshi Enomoto. (2023). Poetics of Cheering: Multimodal Achievement of Phaticity in a Japanese Junior-High School Pep Rally. The 18th International Pragmatics Conference.(July 14, Universite Libre de Bruxelles, Belgium)

Keiko Sagara. (2023). Toward the Development of Sign Language Studies in Asia. XIX World Congress of the World Federation of the Deaf.(July 14, International Convention Center, Korea)

Eiko Yasui. (2023). Vocalization of effort and force as a device for coordination: A Japanese interjection Yoisho in instructions of a physical activity . International Conference on Conversation Analysis (ICCA) 2023.(July 7, Australia)

・Cecilia Curreli, Andreu Girbau, Shin'ichi Satoh. (2023). WalkingDynamicsH36M: a Benchmarking Dataset for Long-term Motion and Trajectory Forecasting [Extended abstract paper]. The 5th IEEE/CVF CVPR Precognition Workshop. (June 18, Canada)

Mayumi Bono, Robert Adam. (2023). Online Cross-Signing Project Between the United Kingdom and Japan: First Phase of Data Collection [Paper presentation]. Online Proceedings of JSAI-isAI2023. (June 5-6, Kumamoto-Jo Hall, Japan)

Kanato Ochiai, Mayumi Bono. (2023). Finger Braille as a Personal Medium: Exploring Communication Methods for Deafblind People Based on Computer-Mediated Communication Studies [Paper presentation]. Online Proceedings of JSAI-isAI2023. (June 5-6, Kumamoto-Jo Hall, Japan)

Books

Keiko Sagara. (2024). A Historical Study of Japanese Sign Language: Comparison of Numerals and Kinship Terms with Taiwan Sign Language and South Korean Sign Language. Hitsuji Shobo.

Mayumi Bono, Tomohiro Okada, Kouhei Kikuchi, Rui Sakaida, Victor Skobov, Yusuke Miyao and Yutaka Osugi. (2023). Chapter 13. Utterance unit annotation for the Japanese Sign Language Dialogue Corpus: Towards a method for detecting interactional boundaries in spontaneous sign language dialogue. In Wehrmeyer, Ella (Ed.), Advances in Sign Language Corpora [SCL 108] (pp. 353-382). John Benjamins Publishing Company.

・Adam Kendon (author) / Mayumi Bono, Ryosaku Makino (trans.), Mio Cibulka (translation support). (2023). (In Japanese) Chapter 23: Spacing oneself and orienting oneself in co-present interaction. Ethnomethodology and Conversation Analysis Handbook (pp. 292-309). Sinyousya.

Keiko Sagara. (2023). (In Japanese) Words that adapt [Japanese Sign Language]. In Rituko Kikusawa, Noboru Yosioka (Eds.), Syaberu hito: Kotoba no fusigi wo kagaku suru (pp. 248-255). Bunrikaku.

Yutaka Osugi, Mayumi Bono. (2023). (In Japanese) Various names and words [Japanese Sign Language]. In Rituko Kikusawa, Noboru Yosioka (Eds.), Syaberu hito: Kotoba no fusigi wo kagaku suru (pp. 233-238). Bunrikaku.

Keiko Sagara. (2023). (In Japanese) Words for counting in various ways [Japanese Sign Language]. In Rituko Kikusawa, Noboru Yosioka (Eds.), Syaberu hito: Kotoba no fusigi wo kagaku suru (pp. 204-210). Bunrikaku.

Mayumi Bono. (2023). (In Japanese) Sequence organization in conversation. In Rituko Kikusawa, Noboru Yosioka (Eds.), Syaberu hito: Kotoba no fusigi wo kagaku suru (pp. 84-95). Bunrikaku.

MISC

・Kuanchao Chu, Yi-Pei Chen, Hideki Nakayama. (2024). LLM as a Scorer: The Impact of Output Order on Dialogue Evaluation. AAAI 2024 Spring Symposium on User-Aligned Assessment of Adaptive AI Systems, March 2024.

・Column: Mayumi Bono. (2024). (In Japanese) Did the Tower of Babel understand every language? Journal of the Japanese Society for Artificial Intelligence, 39(1), 1.

・Shun Katada, Shogo Okada, Kazunori Komatani. (2023). (In Japanese) Analysis of the roles of multimodal information and physiological signals in estimating self-reported and third-party impressions. IEICE (Web), 122(349, HCS2022 55-75).

・Takato Hayashi, Ryusei Kimura, Ryo Ishii, Fumio Nihei, Atushi Fukayama, Shogo Okada. (2023). (In Japanese) Ranking conversations based on rapport in first-meeting and friend conversations. SLUD, 98.

・Yunosuke Maeda, Naoya Inoue, Shogo Okada. (2023). (In Japanese) PREP annotation analysis of utterance content in job interviews. JSAI2023, 37th (Web).

・Tomoya Ooba, Haruki Kuroki, Candy Olivia Mawalim, Shogo Okada. (2023). (In Japanese) Analysis of the effects of a job-interview training feedback system based on multimodal machine learning models. JSAI2023, 37th (Web).

・Takato Hayashi, Candy Olivia Mawalim, Ryo Ishii, Hikaru Morikawa, Atushi Fukayama, Takao Nakamura, Shogo Okada. (2023). (In Japanese) Ranking conversation partners based on rapport formed in past conversations. JSAI2023, 37th (Web).


2022

Papers

・Michiko Asano, Mayumi Bono, Nobuyuki Kawai, Yasuaki Kobasi, Junya Morita, Kuninori Nakamura, Hajime Shirouzu, the 30th-anniversary commemorative special-issue editorial committee, Kazunori Terada, Sachiko Kiyokawa, Masasi Hattori, Miho Fuyama, Tomohiro Taira. (2023). (In Japanese) Roundtable: Discussing the past, present, and future of Cognitive Studies. Cognitive Studies, 30(1), 89-93.

Mayumi Bono, Michita Imai. (2023). (In Japanese) The development and currents of research on 'dialogue and interaction'. Cognitive Studies, 30(1), 37-45.

・The 30th-anniversary commemorative special-issue research-section committee, Syouhei Hidaka, Mayumi Bono, Harumi Kobayasi, Ryou Orita, Kohske Takahasi. (2023). (In Japanese) Part 1, Research: On editing '30 years of each field of cognitive science and prospects for the future'. Cognitive Studies, 30(1), 8-11.

・Rui Yang, Duc Minh Vo, Hideki Nakayama. (2023). Indirect Adversarial Losses via an Intermediate Distribution for Training GANs. Proceedings of the IEEE Winter Conference on Applications of Computer Vision (WACV).

・Hong Chen, Rujun Han, Te-Lin Wu, Hideki Nakayama, Nanyun Peng. (2022). Character-centric Story Visualization via Visual Planning and Token Alignment. Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing (EMNLP) , 8259-8272.

・Rui Yang, Duc Minh Vo, Hideki Nakayama. (2022). Stochastically Flipping Labels of Discriminator’s Outputs for Training Generative Adversarial Networks. IEEE Access 10, 103644-103654.

・Rui Yang, Raphael Shu, Hideki Nakayama. (2022). Improving Noised Gradient Penalty with Synchronized Activation Function for Generative Adversarial Networks. IEICE Transactions on Information and Systems E105-D(9), 1537-1545.

Eiko Yasui. (2023). Sequence-initial pointing: Spotlighting what just happened as a cause of a new sequence. Discourse Studies, Online First.

・Makoto Hayasi, Eiko Yasui. (2023). (In Japanese) Coach-player interaction seen in 'bodily demonstrations' in soccer coaching scenes. In Tomone Komiya, Satomi Kuroshima (Eds.), Depicting the ethics of practice: Knowledge, body, and mind in interaction (pp. 158-175). Keisosyobo.

Eiko Yasui. (2023). Japanese onomatopoeia in bodily demonstrations in a traditional dance instruction: A resource for synchronizing body movements. Journal of Pragmatics, 207, 45-61.

Takeshi Enomoto. (2022). (In Japanese) 'English class' viewed as an indexical icon of 'English education'. In New developments in poetics: Toward poetic interpretation of plurimodal practice (pp. 251-277). Hituzisyobo.

・Astha Verma, A. Venkata Subramanyam, Zheng Wang, Shin'ichi Satoh, Rajiv Ratn Shah. (2023). Unsupervised Domain Adaptation for Person Re-Identification Via Individual-Preserving and Environmental-Switching Cyclic Generation. IEEE Trans. Multim. 25, 364-377.

・Xiao Wang, Zheng Wang, Wu Liu, Xin Xu, Qijun Zhao, Shin'ichi Satoh. (2022). Towards Causality Inference for Very Important Person Localization. ACM Multimedia, 6618-6626.

・Taorong Liu, Liang Liao, Zheng Wang, Shin'ichi Satoh. (2022). Reference-Guided Texture and Structure Inference for Image Inpainting. ICIP, 1996-2000.

・Zhixiang Wang, Xiang Ji, Jia-Bin Huang, Shin'ichi Satoh, Xiao Zhou, Yinqiang Zheng. (2022). Neural Global Shutter: Learn to Restore Video from a Rolling Shutter Camera with Global Reset Feature. CVPR, 17773-17782.

・Andreu Girbau, Ferran Marqués, Shin'ichi Satoh. (2022). Multiple Object Tracking from appearance by hierarchically clustering tracklets. BMVC, 362-362.

・Qier Meng, Liang Liao, Shin'ichi Satoh. (2022). Weakly-Supervised Learning With Complementary Heatmap for Retinal Disease Detection. IEEE Transactions on Medical Imaging 41(8), 2067-2078.

・Liang Liao, Wenyi Chen, Jing Xiao, Zheng Wang, Chia-Wen Lin, Shin'ichi Satoh. (2022). Unsupervised Foggy Scene Understanding via Self Spatial-Temporal Label Diffusion. IEEE Transactions on Image Processing 31, 3525-3540.

・Quan Zhao, Jing Xiao, Zheng Wang, Xujie Ma, Mi Wang, Shin'ichi Satoh. (2022). Vehicle Counting in Very Low-Resolution Aerial Images via Cross-Resolution Spatial Consistency and Intraresolution Time Continuity. IEEE Transactions on Geoscience and Remote Sensing 60, 1-13.

・Mengshun Hu, Jing Xiao, Liang Liao, Zheng Wang, Chia-Wen Lin, Mi Wang, Shin'ichi Satoh. (2022). Capturing Small, Fast-Moving Objects: Frame Interpolation via Recurrent Motion Enhancement. IEEE Transactions on Circuits and Systems for Video Technology 32(6), 3390-3406.

・Yuting Liu, Zheng Wang, Miaojing Shi, Shin'ichi Satoh, Qijun Zhao, Hongyu Yang. (2022). Discovering regression-detection bi-knowledge transfer for unsupervised cross-domain crowd counting. Neurocomputing 494, 418-431.

・Zhaohui Zhu, Marc A. Kastner, Shin'ichi Satoh. (2022). On Assisting Diagnoses of Pareidolia by Emulating Patient Behavior. MMM (1), 299-310.

・Donnaphat Trakulwaranont, Marc A. Kastner, Shin'ichi Satoh. (2022). Personalized Fashion Recommendation Using Pairwise Attention. MMM (2), 218-229.

Yutaka Osugi, Aiko Akiyama, Yuenwa San. (2022, December 18). Sign language, what is it?: An ESCAP guide towards legal recognition of sign languages in Asia and the Pacific. United Nations ESCAP.

Keiko Sagara. (2022). Historical Relationships between Numeral Signs in Japanese Sign Language, South Korean Sign Language and Taiwan Sign Language, In Kazumi Matsuoka, Onno Crasborn and Marie Coppola (eds.), East Asian Sign Linguistics (pp. 7-36). Boston/Berlin: de Gruyter Mouton.

Lectures

Mayumi Bono. (2023). Online Cross-Sign: How do Deaf people communicate with each other without shared expressions and language? [Invited Speech]. The 47th Annual Meeting of the Japanese Society of Sociolinguistic Sciences. (March 16-18, Tokyo International University, Japan)

・Robert Adam, Mayumi Bono. (2023). Exploring Cross-Sign Phenomena in Video Telephony in the With-Corona and After-Corona Era. The 21st Sign Language Research Seminar. (February 5, ITO TRAINING CENTER, Japan)

・Ted Supalla, Yutaka Osugi. (2023). Integrating Historical Sign Language Database Design and Historical Linguistics for Tracing Etymological and Morphological Changes. 2023 Linguistic Society of America 97th Annual Meeting. (January 6)

Eiko Yasui. (2022). (In Japanese) Pointing and interaction: Diverse movements of pointing in conversation and the achievement of reference [Keynote Lecture]. The 48th Conference of the Japanese Association of Sign Linguistics. (December 10, Research Center for Advanced Science and Technology, The University of Tokyo, Japan)

Shin'ichi Satoh. (2022). Boosting Image Retrieval by Diffusion [Invited Speech]. SoICT2022. (December 1-3)

Keiko Sagara, Nick Palfreyman. (2022). Variation within and between sign languages in Japan, Taiwan and South Korea: The impact of language contact. 15th Biennial High Desert Linguistics Society Conference (HDLS15). (November 11-13, The University of New Mexico, USA)

・Yasusuke Minami, Hiroyuki Nisisawa, Mitsuhiro Okada, Rui Sakaida. (2022). "Does it feel spacious?": Negotiations between the visual and non-visual world. 7th Copenhagen Multimodality Day. (November 4)

Hideki Nakayama. (2022). Incorporating External Knowledge for Vision and Language Systems. 2nd Workshop on Trends and Advances in Machine Learning and Automated Reasoning for Intelligent Robots and Systems (IROS2022). (October 27, Kyoto, Japan)

Keiko Sagara. (2022). Diachronic change in Japanese Sign Language, Taiwan Sign Language and South Korean Sign Language: Focus on kinship terms. The 14th Theoretical Issues in Sign Language Research conference (TISLR14). (September 26-30, National Museum of Ethnology, Japan)

Shogo Okada. (2022). (In Japanese) Are you enjoying the conversation?: Challenges and prospects for estimating internal states in dialogue [Invited Lecture]. SLUD 95. (September 15-16, Online)

Kouhei Kikuchi. (2022). (In Japanese) Research on accessibility practices in sign language interpreting settings. The 39th Annual Meeting of the Japanese Cognitive Science Society. (September 8-10, Online)

Shin'ichi Satoh. (2022). (In Japanese) Media image processing by machine learning: The power and limits of machine-learning methods based on large-scale training datasets [Invited Speech]. The 41st Annual Meeting of the Japanese Society of Medical Imaging Technology (JAMIT2022). (July 29-31, Nagoya University, Japan)

Books

Rui Sakaida. (2023). (In Japanese) Chapter 6: How is 'knowing the same outcome' possible?: From walking training for visually impaired people. In Takanori Sato, Tokika Kurita (Eds.), Shogai rikai no rifurekushon: Koi to kotoba ga egaku 'tasha' to tomo ni aru yo. Chitosepress.

Tomohiro Okada. (2023). (In Japanese) Mouth Action. In Kazumi Matsuoka, Asako Uchibori (Eds.), Topics in sign language linguistics: From basics to the frontier (pp. 249-278). Kurosio Shuppan.

・Kuniyoshi Kataoka, Makiko Takekuro, Takeshi Enomoto. (2022). New Vistas in Poetics: Plurimodal Approaches to Poetic Practice.

Katsuya Takanashi, Rui Sakaida. (2022). 4 Interaction Analysis of Everyday Life Scenes. In Hiroaki Suzuki (Ed.), Cognitive Science 3: Mind and Society. University of Tokyo Press.

Hideki Nakayama, Atsushi Nitanda, Akihiro Tamura, Nakamasa Inoue, Yoshitaka Ushiku. (2022). From deep learning to multimodal information processing. Saiensusya.

MISC

・Column: Mayumi Bono. (2023). (In Japanese) Hello from Edinburgh. NII Today, No. 98, 22.

・Essay: Keiko Sagara. (2022). Change of physical environment in spoken and signed language. The Monthly Minpaku, September 2022, 5.