Incremental word learning: Efficient HMM initialization and large margin discriminative adaptation

Ayllon Clemente I, Heckmann M, Wrede B (2012)
Speech Communication 54(9): 1029-1048.

Journal article | Published | English
 
No full text has been uploaded; bibliographic record only.
Abstract
In this paper we present an incremental word learning system that can cope with few training data samples, enabling speech acquisition in on-line human-robot interaction. As in most automatic speech recognition (ASR) systems, our architecture relies on a Hidden Markov Model (HMM) framework in which the different word models are trained sequentially and the system has little prior knowledge. To achieve good performance, HMMs depend on the amount of training data, the initialization procedure, and the efficiency of the discriminative training algorithms. We therefore propose several approaches to improve the system. One major problem when using a small amount of training data is over-fitting. Hence, we present a novel estimation of the variance floor that depends on the number of available training samples. Next, we propose a bootstrapping approach to obtain a good initialization of the HMM parameters. This method is based on unsupervised training of the parameters and the subsequent construction of a new HMM by aligning and merging Viterbi-decoded sequences. Finally, we investigate large margin discriminative training techniques to improve the generalization performance of the models, using several strategies suitable for limited training data. In the evaluation, we examine the contribution of each proposed stage to the overall system performance. This includes a comparison of different state-of-the-art methods with the presented techniques and an investigation of how far the number of training samples can be reduced. We compare our algorithms on isolated and continuous digit recognition tasks. In summary, we show that the proposed algorithms yield significant improvements and are a step towards efficient learning with few examples. (c) 2012 Elsevier B.V. All rights reserved.
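The sample-size-dependent variance floor described in the abstract can be sketched as follows. This is an illustrative sketch only: the `alpha` weight and the `1/sqrt(n)` scaling are assumptions chosen for demonstration, not the estimator derived in the paper. The idea is that per-dimension Gaussian variances estimated from very few samples are unreliable and must be bounded from below, with a floor that grows as the number of training samples shrinks.

```python
import numpy as np

def floor_variances(variances, n_samples, global_var, alpha=0.05):
    """Clamp per-dimension variances to a sample-size-dependent floor.

    variances:  per-dimension variance estimates of an HMM state's Gaussian
    n_samples:  number of training samples used for the estimate
    global_var: per-dimension variance over all available data
    alpha:      floor weight (illustrative assumption, not from the paper)
    """
    # Floor grows as n_samples shrinks: fewer samples -> stronger flooring.
    floor = alpha * global_var / np.sqrt(n_samples)
    return np.maximum(variances, floor)

# Per-dimension variances estimated from only 4 samples; the first
# dimension has collapsed toward zero and gets lifted to the floor.
estimated = np.array([1e-6, 0.3, 0.02])
floored = floor_variances(estimated, n_samples=4,
                          global_var=np.array([0.5, 0.5, 0.5]))
# floor = 0.05 * 0.5 / 2 = 0.0125, so floored = [0.0125, 0.3, 0.02]
```

Only the near-zero variance is modified; well-estimated dimensions pass through unchanged, which is what makes flooring a mild regularizer rather than a smoothing of all parameters.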
Keywords
Discriminative training; Bootstrapping; Learning from few examples; Incremental word learning; Speech; Multiple sequence alignment; Recognition
Publication year
2012
Journal title
Speech Communication
Volume
54
Issue
9
Page(s)
1029-1048
ISSN
0167-6393
Page URI
https://pub.uni-bielefeld.de/record/2536055

Cite

Ayllon Clemente, I., Heckmann, M., & Wrede, B. (2012). Incremental word learning: Efficient HMM initialization and large margin discriminative adaptation. Speech Communication, 54(9), 1029-1048. doi:10.1016/j.specom.2012.04.005