J-GLOBAL ID:201601011692584495   Update date: Apr. 08, 2024

Chu Chenhui

Affiliation and department:
Homepage URL  (2): https://nlp.ist.i.kyoto-u.ac.jp/ , https://nlp.ist.i.kyoto-u.ac.jp/EN/
Research field  (1): Intelligent informatics
Research keywords  (4): Natural Language Processing ,  Machine Translation ,  Vision and Language ,  Speech Processing
Research theme for competitive and other funds  (12):
  • 2023 - 2027 Creation of Fundamental Technology for Speech Dialogue Translation Conveying Intentions Accurately
  • 2023 - 2026 M3OLR: Towards Effective Multilingual, Multimodal and Multitask Speech Recognition for Low-resourced Languages
  • 2023 - 2024 Research on Post-Editing of Machine Translation Using Large Language Models
  • 2022 - 2023 Cross-Lingual Learning for Text Processing and Multimodal Machine Translation
  • 2022 - 2023 Visual Scene-Aware Machine Translation
Papers (113):
  • Zhengdong Yang, Shuichiro Shimizu, Chenhui Chu, Sheng Li, Sadao Kurohashi. End-to-end Japanese-English Speech-to-text Translation with Spoken-to-Written Style Conversion. Journal of Natural Language Processing. 2024. 31. 3
  • Hao Wang, Tang Li, Chenhui Chu, Rui Wang, Pinpin Zhu. Towards Human-Like Machine Comprehension: Few-Shot Relational Learning in Visually-Rich Documents. In Proceedings of the 2024 Joint International Conference on Computational Linguistics, Language Resources and Evaluation (LREC-COLING 2024). 2024
  • Yikun Sun, Zhen Wan, Nobuhiro Ueda, Sakiko Yahata, Fei Cheng, Chenhui Chu, Sadao Kurohashi. Rapidly Developing High-quality Instruction Data and Evaluation Benchmark for Large Language Models with Minimal Human Effort: A Case Study on Japanese. In Proceedings of the 2024 Joint International Conference on Computational Linguistics, Language Resources and Evaluation (LREC-COLING 2024). 2024
  • Norizo Sakaguchi, Yugo Murawaki, Chenhui Chu, Sadao Kurohashi. Identifying Source Language Expressions for Pre-editing in Machine Translation. In Proceedings of the 2024 Joint International Conference on Computational Linguistics, Language Resources and Evaluation (LREC-COLING 2024). 2024
  • Rikito Takahashi, Hirokazu Kiyomaru, Chenhui Chu, Sadao Kurohashi. Abstractive Multi-Video Captioning: Benchmark Dataset Construction and Extensive Evaluation. In Proceedings of the 2024 Joint International Conference on Computational Linguistics, Language Resources and Evaluation (LREC-COLING 2024). 2024
MISC (81):
  • Yahan Yu, Fei Cheng, Chenhui Chu. Evaluation of the Adversarial Robustness in LLM-based Visual Dialog System. 2024
  • Sheng Li, Zhengdong Yang, Wangjin Zhou, Chenhui Chu, Chen Chen, Eng Siong Chng, Hisashi Kawai. Combining Large Language Model with Speech Recognition System in Low-resource Settings. The 30th Annual Meeting of the Association for Natural Language Processing. 2024
  • Shuichiro Shimizu, Yin Jou Huang, Yugo Murawaki, Chenhui Chu. Misunderstandings in Dialogue and Their Resolution through Intervention: An Investigation Using LLMs. The 30th Annual Meeting of the Association for Natural Language Processing. 2024
  • Youyuan Lin, Masaaki Nagata, Chenhui Chu. Post-Editing with Error Annotation for Machine Translation: Dataset Construction using GPT-4. The 30th Annual Meeting of the Association for Natural Language Processing. 2024
  • Sheng Li, Zhengdong Yang, Wangjin Zhou, Chenhui Chu, Hisashi Kawai. Investigating Effective Methods for Combining Large Language Model with Speech Recognition System. The 151st (Spring 2024) Meeting of the Acoustical Society of Japan. 2024
Books (1):
  • Using Comparable Corpora for Under-Resourced Areas of Machine Translation
    Springer 2018
Lectures and oral presentations  (9):
  • Machine Translation
    (Northeast Petroleum University 2022)
  • Visually Grounded Paraphrase
    (Shanghai University 2021)
  • Multilingual Neural Machine Translation
    (Tutorial at the 28th International Conference on Computational Linguistics (COLING 2020) 2020)
  • From Multilingual to Multimodal Processing
    (Chinese Academy of Sciences; Beijing Jiaotong University; Tianjin University; Northeastern University 2019)
  • Cross-Lingual Visual Grounding and Multimodal Machine Translation
    (MSRA Academic Day 2019 2019)
Education (3):
  • 2012 - 2015 Kyoto University Graduate School of Informatics Ph.D., Intelligence Science and Technology
  • 2010 - 2012 Kyoto University Graduate School of Informatics M.S., Intelligence Science and Technology
  • 2004 - 2008 Chongqing University Department of Software Engineering B.E.
Professional career (1):
  • Ph.D. (Kyoto University)
Work history (4):
  • 2020/07 - Present Kyoto University Graduate School of Informatics Department of Intelligence Science and Technology Program-Specific Associate Professor
  • 2017/04 - 2020/06 Osaka University Institute for Datability Science Research Assistant Professor
  • 2015/04 - 2017/03 Japan Science and Technology Agency (JST) Researcher
  • 2014/04 - 2015/03 Japan Society for the Promotion of Science Research Fellowship for Young Scientists (DC2)
Committee career (5):
  • 2024/04 - Present The Association for Natural Language Processing Board Member
  • 2024/04 - Present Special Interest Group on Natural Language Processing (SIG-NL), Information Processing Society of Japan (IPSJ) Steering Committee Member
  • 2019/09 - 2023/09 Editorial Board of the Journal of Natural Language Processing
  • 2019/06 - 2023/06 Editorial Board of the Journal of Information Processing
  • 2018/04 - 2020/03 Young Researcher Association for NLP Studies Steering Committee Member
Awards (9):
  • 2023/03 - The Association for Natural Language Processing, 29th Annual Meeting, Young Researcher Encouragement Award: ARKitSceneRefer: Localizing Small Objects by Referring Expressions in 3D Indoor Scenes
  • 2022/04 - Google Google Research Scholar Award Visual Scene-Aware Machine Translation
  • 2022/03 - The Association for Natural Language Processing, 28th Annual Meeting, Committee Special Award: A Study on Construction Methods for Multimodal Machine Translation Datasets Focusing on Ambiguous Translations
  • 2022/03 - The Association for Natural Language Processing, 28th Annual Meeting, Committee Special Award: Caption Generation Requiring Abstraction over Multiple Videos
  • 2019/12 - IPSJ SIG Computers and the Humanities Jinmoncom 2019 Best Poster Award Public Meeting Corpus Construction and Content Delivery
Association Membership(s) (6):
THE JAPANESE SOCIETY FOR ARTIFICIAL INTELLIGENCE ,  Association for Computational Linguistics ,  THE ASSOCIATION FOR NATURAL LANGUAGE PROCESSING ,  Japanese Association for Medical Artificial Intelligence ,  INFORMATION PROCESSING SOCIETY OF JAPAN ,  Association for Computing Machinery
※ Researcher’s information displayed in J-GLOBAL is based on the information registered in researchmap.