retrieved from https://interspeech2019.org/program/schedule/
[T6] Advanced methods for neural end-to-end speech processing – unification, integration, and implementation Sunday, 15 September, 1400–1730, Hall 1 Takaaki Hori (Mitsubishi Electric Research Laboratories), Tomoki Hayashi (Department of Information Science, Nagoya University), Shigeki Karita (NTT Communication Science Laboratories), Shinji Watanabe (Center for Language and Speech Processing, Johns Hopkins University)
[T8] Microphone array signal processing and deep learning for speech enhancement – strong together, Sunday, 15 September, 1400–1730, Hall 11 Reinhold Haeb-Umbach (Department of Communications Engineering, Paderborn University), Tomohiro Nakatani (NTT Communication Science Laboratories)
Simultaneous denoising and dereverberation for low-latency applications using frame-by-frame online unified convolutional beamformer Oral; 1240–1300 Tomohiro Nakatani (NTT Corporation), Keisuke Kinoshita (NTT Corporation)
End-to-end SpeakerBeam for single channel target speech recognition Poster; 1100–1300 Marc Delcroix (NTT Communication Science Laboratories), Shinji Watanabe (Johns Hopkins University), Tsubasa Ochiai (NTT Communication Science Laboratories), Keisuke Kinoshita (NTT), Shigeki Karita (NTT Communication Science Laboratories), Atsunori Ogawa (NTT Communication Science Laboratories), Tomohiro Nakatani (NTT Corporation)
Improving Conversation-Context Language Models with Multiple Spoken Language Understanding Models Poster; 1430–1630 Ryo Masumura (NTT Corporation), Tomohiro Tanaka (NTT Corporation), Atsushi Ando (NTT Corporation), Hosana Kamiyama (NTT Corporation), Takanobu Oba (NTT Media Intelligence Laboratories, NTT Corporation), Satoshi Kobashikawa (NTT Corporation), Yushi Aono (NTT Corporation)
Neural Techniques for Voice Conversion and Waveform Generation [Mon-P-2-C], Monday, 16 September, Gallery C
StarGAN-VC2: Rethinking Conditional Methods for StarGAN-Based Voice Conversion Poster; 1430–1630 Takuhiro Kaneko (NTT Communication Science Laboratories), Hirokazu Kameoka (NTT Communication Science Laboratories), Kou Tanaka (NTT Corporation), Nobukatsu Hojo (NTT)
Improving Transformer Based End-to-End Speech Recognition with Connectionist Temporal Classification and Language Model Integration Oral; 1700–1720 Shigeki Karita (NTT Communication Science Laboratories), Nelson Yalta (Waseda University), Shinji Watanabe (Johns Hopkins University), Marc Delcroix (NTT Communication Science Laboratories), Atsunori Ogawa (NTT Communication Science Laboratories), Tomohiro Nakatani (NTT Corporation)
Evaluating Intention Communication by TTS using Explicit Definitions of Illocutionary Act Performance Poster; 1000–1200 Nobukatsu Hojo (NTT), Noboru Miyazaki (NTT)
End-to-End Automatic Speech Recognition with a Reconstruction Criterion Using Speech-to-Text and Text-to-Speech Encoder-Decoders Poster; 1000–1200 Ryo Masumura (NTT Corporation), Hiroshi Sato (NTT Corporation), Tomohiro Tanaka (NTT Corporation), Takafumi Moriya (NTT Corporation), Yusuke Ijima (NTT Corporation), Takanobu Oba (NTT Media Intelligence Laboratories, NTT Corporation)
Spoken Term Detection, Confidence Measure, and End-to-End Speech Recognition [Tue-P-5-C], Tuesday, 17 September, Gallery C
A Joint End-to-End and DNN-HMM Hybrid Automatic Speech Recognition System with Transferring Shared Knowledge Poster; 1600–1800 Tomohiro Tanaka (NTT Corporation), Ryo Masumura (NTT Corporation), Takafumi Moriya (NTT Corporation), Takanobu Oba (NTT Media Intelligence Laboratories, NTT Corporation), Yushi Aono (NTT Media Intelligence Laboratories, NTT Corporation)
Speech and Audio Source Separation and Scene Analysis 2 [Wed-O-7-4], Wednesday, 18 September, Hall 11
Multimodal SpeakerBeam: Single channel target speech extraction with audio-visual speaker clues Oral; 1510–1530 Tsubasa Ochiai (NTT Communication Science Laboratories), Marc Delcroix (NTT Communication Science Laboratories), Keisuke Kinoshita (NTT), Atsunori Ogawa (NTT Communication Science Laboratories), Tomohiro Nakatani (NTT Corporation)
Speech Emotion Recognition based on Multi-Label Emotion Existence Model Oral; 1700–1720 Atsushi Ando (NTT Corporation), Ryo Masumura (NTT Corporation), Hosana Kamiyama (NTT Corporation), Satoshi Kobashikawa (NTT Corporation), Yushi Aono (NTT Corporation)
Does the Lombard Effect Improve Emotional Communication in Noise? – Analysis of Emotional Speech Acted in Noise – Poster; 1330–1530 Yi Zhao (National Institute of Informatics (NII)), Atsushi Ando (NTT Corporation), Shinji Takaki (National Institute of Informatics), Junichi Yamagishi (National Institute of Informatics), Satoshi Kobashikawa (NTT Corporation)
Neural Whispered Speech Detection with Imbalanced Learning Poster; 1330–1530 Takanori Ashihara (NTT Corporation), Yusuke Shinohara (NTT Corporation), Hiroshi Sato (NTT Corporation), Takafumi Moriya (NTT Corporation), Kiyoaki Matsui (NTT Media Intelligence Laboratories), Takaaki Fukutomi (NTT Corporation), Yoshikazu Yamaguchi (NTT Corporation), Yushi Aono (NTT Corporation)
Improved Deep Duel Model for Rescoring N-best Speech Recognition List Using Backward LSTMLM and Ensemble Encoders Oral; 1410–1430 Atsunori Ogawa (NTT Communication Science Laboratories), Marc Delcroix (NTT Communication Science Laboratories), Shigeki Karita (NTT Communication Science Laboratories), Tomohiro Nakatani (NTT Corporation)
Joint Maximization Decoder with Neural Converters for Fully Neural Network-based Japanese Speech Recognition Poster; 1330–1530 Takafumi Moriya (NTT Corporation), Jian Wang (The University of Tokyo), Tomohiro Tanaka (NTT Corporation), Ryo Masumura (NTT Corporation), Yusuke Shinohara (NTT Corporation), Yoshikazu Yamaguchi (NTT Corporation), Yushi Aono (NTT Corporation)
Speech and Audio Source Separation and Scene Analysis 3 [Thu-P-10-E], Thursday, 19 September, Hall 10/E
A Modified Algorithm for Multiple Input Spectrogram Inversion Poster; 1330–1530 Dongxiao Wang (Tokyo Institute of Technology), Hirokazu Kameoka (NTT Communication Science Laboratories), Koichi Shinoda (Tokyo Institute of Technology)
Predicting Speech Intelligibility of Enhanced Speech Using Phone Accuracy of DNN-based ASR Systems Poster; 1000–1200 Kenichi Arai (NTT Communication Science Laboratories), Shoko Araki (NTT Communication Science Laboratories), Atsunori Ogawa (NTT Communication Science Laboratories), Keisuke Kinoshita (NTT), Tomohiro Nakatani (NTT Corporation), Katsuhiko Yamamoto (Wakayama University), Toshio Irino (Wakayama University)