Research scientist, Reality Labs Research, Meta, Pittsburgh, PA, US

About


I'm currently a research scientist at Reality Labs Research Pittsburgh. I have worked at Nagoya University, Academia Sinica, ASUS, and Realtek for more than six years. I received my Ph.D. (2021) from the Graduate School of Informatics at Nagoya University, and my M.S. (2011) and B.S. (2009) degrees from the School of Communication Engineering at National Chiao Tung University. My research focuses on machine-learning-based speech generation applications, such as neural vocoders, voice conversion, speech enhancement, and speech bandwidth expansion.

[Resume]
[Publication list]
[Research overview]
[Ph.D. thesis] [Video]

Contact


E-mail: yichiao.wu@g.sp.m.is.nagoya-u.ac.jp
E-mail: yichiaowu@meta.com
Github
GoogleScholar
ResearchGate
ResearchMap
Web of Science
Linkedin
YouTube
Medium



Experience


Reality Labs Research, US, Jan. 2022 - present.
Research scientist

Academia Sinica, Taiwan, Oct. 2021 - Dec. 2021.
Postdoctoral researcher
Advisor: Hsin-Min Wang, Yu Tsao

Nagoya University, Japan, Oct. 2017 - Sep. 2021.
Researcher (Oct. 2020 - Sep. 2021)
Research assistant (Oct. 2017 - Sep. 2020)
Advisor: Tomoki Toda

National Institute of Information and Communications Technology, Japan, Summer 2019.
Summer intern

Academia Sinica, Taiwan, Oct. 2015 - Sep. 2017.
Research assistant
Advisor: Hsin-Min Wang, Yu Tsao

ASUS, Taiwan, Oct. 2013 - Mar. 2015.
Software R&D engineer
Da Vinci Innovation Lab

Realtek, Taiwan, Mar. 2012 - Oct. 2013.
System designer
Multimedia BU II

National Chiao Tung University, Taiwan, Sep. 2009 - Dec. 2011.
Research assistant
Advisor: Sin-Horng Chen, Yih-Ru Wang

Education


Nagoya University, Japan, Oct. 2017 - Mar. 2021.
Ph.D.
Advisor: Tomoki Toda

National Chiao Tung University, Taiwan, Sep. 2009 - Aug. 2011.
Master of Science
Advisor: Sin-Horng Chen, Yih-Ru Wang

National Chiao Tung University, Taiwan, Sep. 2005 - Jun. 2009.
Bachelor of Science

Publications (selected)


Y.-C. Wu et al., “AudioDec: An Open-Source Streaming High-Fidelity Neural Audio Codec,” in Proc. ICASSP, 2023. [Paper] [Demo] [Code]

Y.-C. Wu et al., “A Cyclical Approach to Synthetic and Natural Speech Mismatch Refinement of Neural Post-filter for Low-cost Text-to-speech System,” APSIPA Trans., 2022. [Paper]

Y.-C. Wu et al., “Relational Data Selection for Data Augmentation of Speaker-dependent Multi-band MelGAN Vocoder,” in Proc. Interspeech, 2021. [Paper] [Demo] [Video]

Y.-C. Wu et al., “Quasi-Periodic Parallel WaveGAN: A Non-Autoregressive Raw Waveform Generative Model With Pitch-Dependent Dilated Convolution Neural Network,” IEEE TASLP, 2021. [Paper] [Demo] [Code] [Video]

Y.-C. Wu et al., “Quasi-Periodic WaveNet: An Autoregressive Raw Waveform Generative Model With Pitch-Dependent Dilated Convolution Neural Network,” IEEE TASLP, 2021. [Paper] [Demo] [Code]

Y.-C. Wu et al., “Quasi-Periodic parallel WaveGAN vocoder: a non-autoregressive pitch-dependent dilated convolution model for parametric speech generation,” in Proc. Interspeech, 2020. [Paper] [Demo] [Code] [Video]

Y.-C. Wu et al., “A cyclical post-filtering approach to mismatch refinement of neural vocoder for text-to-speech systems,” in Proc. Interspeech, 2020. [Paper] [Demo] [Video]

Y.-C. Wu et al., “Non-parallel voice conversion system with WaveNet vocoder and collapsed speech suppression,” IEEE Access, 2020. [Paper] [Demo]

Y.-C. Wu et al., “Statistical voice conversion with Quasi-Periodic WaveNet vocoder,” in Proc. SSW10, 2019. [Paper] [Demo] [Poster]

Y.-C. Wu et al., “Quasi-periodic WaveNet vocoder: a pitch dependent dilated convolution model for parametric speech generation,” in Proc. Interspeech, 2019. [Paper] [Demo] [Code]

Y.-C. Wu et al., “Collapsed speech segment detection and suppression for WaveNet vocoder,” in Proc. Interspeech, 2018. [Paper] [Demo] [Poster]

Y.-C. Wu et al., “The NU non-parallel voice conversion system for the voice conversion challenge 2018,” in Proc. Odyssey, 2018. [Paper] [Demo] [Poster]

Y.-C. Wu et al., “A post-filtering approach based on locally linear embedding difference compensation for speech enhancement,” in Proc. Interspeech, 2017. [Paper] [Poster]

Y.-C. Wu et al., “A locally linear embedding based postfiltering approach for speech enhancement,” in Proc. ICASSP, 2017. [Paper] [Poster]

Y.-C. Wu et al., “Locally linear embedding for exemplar-based spectral conversion,” in Proc. Interspeech, 2016. [Paper]

Talks


2021.10.22 @BME, NYCU, Taipei, Taiwan, “Neural-based Speech Generation.” [ppt]

2020.1.13 @CITI, Academia Sinica, Taipei, Taiwan, “Quasi-Periodic WaveNet: An Autoregressive Raw Waveform Generative Model with Pitch-dependent Dilated Convolution Neural Network.” [ppt]

2019.9.16 @Interspeech, Graz, Austria, “Quasi-Periodic WaveNet Vocoder: A Pitch Dependent Dilated Convolution Model for Parametric Speech Generation.” [ppt]

2016.9.10 @Interspeech, San Francisco, USA, “Locally linear embedding for exemplar-based spectral conversion.” [ppt]



The page layout is modified from cayman-theme and cayman-blog. LICENSE