Guan-Ting (Daniel) Lin
Taipei, Taiwan
Guan-Ting received his Ph.D. from the Speech Processing and Machine Learning Lab at National Taiwan University (NTU), advised by Prof. Hung-yi Lee. His research focuses on Speech LLMs, Full-Duplex Interaction, Spoken Language Understanding/Generation, and Test-Time Adaptation for ASR.
His research is grounded in extensive experience at leading industry AI labs, including Meta Superintelligence Lab, Google DeepMind, and Amazon AGI. As project lead of the Full-Duplex-Bench series, he pioneered the evaluation of real-time, full-duplex voice AI. He has published 15+ first- or co-first-author papers at top-tier conferences (ACL, EMNLP, ICASSP, Interspeech, ASRU, SLT) and received the Best Paper Award at IEEE SLT 2022. He serves as a reviewer for ICLR, NeurIPS, ACL, and EMNLP, and was recognized as an ICLR 2025 Notable Reviewer.
Research Experience:
- Meta Superintelligence Lab (2025 Fall): Research Scientist Intern on the Voice Modeling team in Menlo Park, USA, working with Naoyuki Kanda on full-duplex speech LLMs.
- Google DeepMind (2025 Spring): Student Researcher at Gemini Speech team (New York City), collaborating with Kartik Audhkhasi, Soheil Khorram, and Bhuvana Ramabhadran on Gemini low-resource speech.
- Amazon AGI (2024 Summer): Applied Scientist Intern at Speech team in Seattle, USA (under Ivan Bulyko's team), working with Prashanth Gurunath Shivakumar, Yile Gu, and Ankur Gandhe on Align-SLM (RL for end-to-end SLMs).
- Amazon Alexa AI (2023 Summer): Applied Scientist Intern at Speech Recognition and LM team in Seattle, USA (under Ivan Bulyko's team), working with Prashanth Gurunath Shivakumar and Andreas Stolcke on paralinguistics-enhanced LLM.
- Amazon Alexa AI (2022 Summer): Applied Scientist Intern in Cambridge, USA (under Chao Wang's team), working with Chieh-Chi Kao and Qingming Tang on neural architecture search for audio.
Open to discussing or collaborating on speech research—feel free to reach out at daniel094144[at]gmail[dot]com.
Beyond academia, he enjoys singing 🎤, photography 📷, and watching MLB games ⚾️.
Updates
- 2026/04 Full-Duplex-Bench-v2 accepted by ACL 2026!
- 2026/04 Released Full-Duplex-Bench-v3! Check out the demo.
- 2026/01 Started writing blog posts! Shared lessons from my Ph.D. journey in the post.
- 2026/01 Full-Duplex-Bench v1.5 accepted by ICASSP 2026 🇪🇸!
- 2025/12 Successfully defended my Ph.D. dissertation! 🎓
- 2025/08 Three papers accepted by ASRU 2025. See you in Hawaii 🏝
- 2025/05 Align-SLM accepted by ACL 2025. See you in Vienna!
- 2025/03 Released Full-Duplex-Bench — the first benchmark for full-duplex spoken dialogue models.
- 2024/09 Continual TTA & Emphasized-Talk accepted by EMNLP 2024 (main & findings).
- 2024/05 Spoken-LLM and StyleTalk accepted by ACL 2024.
- 2024/01 Received IEEE SPS Travel Grant for ICASSP 2024!
- 2023/12 Three papers accepted by ICASSP 2024. See you in Seoul!
- 2023/02 Internship work with Amazon Alexa accepted by ICASSP 2023.
- 2023/01 Paper with Prof. Nigel Ward won Best Paper Award at IEEE SLT 2022!
- 2022/07 Received ISCA Travel Grant for Interspeech 2022.
- 2022/06 Two first-author papers accepted at Interspeech 2022.
Selected Publications & Preprints
(For the full publication list, please see Google Scholar.)
Full-Duplex Interaction
Benchmarking and evaluation for full-duplex spoken dialogue
Full-Duplex-Bench-v3: Benchmarking Tool Use for Full-Duplex Voice Agents Under Real-World Disfluency
Guan-Ting Lin, Chen Chen, Zhehuai Chen, Hung-yi Lee
arXiv 2026
PAPER DEMO
Full-Duplex-Bench-v2: A Multi-Turn Evaluation Framework for Duplex Dialogue Systems with an Automated Examiner
Guan-Ting Lin (co-first), Shih-Yun Shan Kuan (co-first), Jiatong Shi, Kai-Wei Chang, Siddhant Arora, Shinji Watanabe, Hung-yi Lee
ACL 2026
PAPER CODE
Full-Duplex-Bench v1.5: Evaluating Overlap Handling for Full-Duplex Speech Models
Guan-Ting Lin, Shih-Yun Shan Kuan, Qirui Wang, Jiachen Lian, Tingle Li, Shinji Watanabe, Hung-yi Lee
ICASSP 2026
PAPER CODE
Full-Duplex-Bench: A Benchmark to Evaluate Full-duplex Spoken Dialogue Models on Turn-taking Capabilities
Guan-Ting Lin, Jiachen Lian (co-second), Tingle Li (co-second), Qirui Wang (co-second), Gopala Anumanchipalli, Alexander H. Liu, Hung-yi Lee
ASRU 2025
PAPER CODE
Multi-modal LLM
Speech understanding and generation toward human-like spoken dialogue
Align-SLM: Textless Spoken Language Models with Reinforcement Learning from AI Feedback
Guan-Ting Lin, Prashanth Gurunath Shivakumar, Aditya Gourav, Yile Gu, Ankur Gandhe, Hung-yi Lee, Ivan Bulyko
ACL 2025
PAPER
Can LLMs Understand the Implication of Emphasized Sentences in Dialogue?
Guan-Ting Lin, Hung-yi Lee
EMNLP 2024 Findings
PAPER DATA
Advancing Large Language Models to Capture Varied Speaking Styles and Respond Properly in Spoken Conversations
Guan-Ting Lin, Cheng-Han Chiang, Hung-yi Lee
ACL 2024
PAPER DATA
Paralinguistics-Enhanced Large Language Modeling of Spoken Dialogue
Guan-Ting Lin, Prashanth Gurunath Shivakumar, Ankur Gandhe, Chao-Han Huck Yang, Yile Gu, Shalini Ghosh, Andreas Stolcke, Hung-yi Lee, Ivan Bulyko
ICASSP 2024
PAPER
On the Utility of Self-supervised Models for Prosody-related Tasks
Guan-Ting Lin (co-first), Chi-Luen Feng (co-first), Wei-Ping Huang, Yuan Tseng, Tzu-Han Lin, Chen-An Li, Hung-yi Lee, Nigel G. Ward
SLT 2022 (Best Paper Award)
PAPER CODE
DUAL: Discrete Spoken Unit Adaptive Learning for Textless Spoken Question Answering
Guan-Ting Lin, Yung-Sung Chuang, Ho-Lam Chung, Shu-wen Yang, Hsuan-Jui Chen, Shuyan Dong, Shang-Wen Li, Abdelrahman Mohamed, Hung-yi Lee, Lin-shan Lee
Interspeech 2022
PAPER CODE
End-to-end ASR Test-time Adaptation
Sample-dependent test-time adaptation to improve ASR on out-of-domain speech
SUTA-LM: Bridging Test-Time Adaptation and Language Model Rescoring for Robust ASR
Wei-Ping Huang (co-first), Guan-Ting Lin (co-first), Hung-yi Lee
ASRU 2025
PAPER CODE
Continual Test-time Adaptation for End-to-end Speech Recognition on Noisy Speech
Guan-Ting Lin (co-first), Wei-Ping Huang (co-first), Hung-yi Lee
EMNLP 2024
PAPER CODE
Listen, Adapt, Better WER: Source-free Single-utterance Test-time Adaptation for Automatic Speech Recognition
Guan-Ting Lin, Shang-Wen Li, Hung-yi Lee
Interspeech 2022
PAPER CODE
Patents
- Inventor on a pending U.S. patent application in speech and language processing, filed by Google DeepMind (details confidential until publication)
Awards
- IEEE Signal Processing Society Travel Grant @ ICASSP 2024
- Best Paper Award @ IEEE SLT 2022
- NTU Elite Doctoral Scholarship
- GICE Elite Doctoral Scholarship with NVIDIA
- ISCA Travel Grant @ Interspeech 2022
- Appier top-tier conference scholarship
- Dean’s List (×3) @ NTHU
- Phi Tau Phi Award @ NTHU
- The Zhu Shun Yi He Qin Scholarship @ NTHU
Academic Services
- Official Reviewer: ICLR ’24, ’25; NeurIPS ’24, ’25; ACL ’24, ’25; EMNLP ’24; NAACL ’23, ’24; ICASSP ’23, ’24; ISCSLP ’22, ’23, ’24; COLING ’25; IEEE Transactions on Affective Computing