On the Generalization Ability of Recurrent Networks

Hammer B (2001)
In: Artificial Neural Networks — ICANN 2001. Dorffner G, Bischof H, Hornik K (Eds); Lecture Notes in Computer Science, 2130. Berlin, Heidelberg: Springer Berlin Heidelberg: 731-736.

Conference Paper | Published | English
 
Download
No files have been uploaded. Publication record only!
Editor(s)
Dorffner, Georg; Bischof, Horst; Hornik, Kurt
Abstract / Note
The generalization ability of discrete-time partially recurrent networks is examined. It is well known that the VC dimension of recurrent networks is infinite in most interesting cases, and hence the standard VC analysis cannot be applied directly. We find guarantees for specific situations where the transition function forms a contraction or the probability of long inputs is restricted. For the general case, we derive posterior bounds which take the input data into account. They are obtained via a generalization of the luckiness framework to the agnostic setting. The general formalism makes it possible to focus on representative parts of the data as well as on more general situations such as long-term prediction.
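The contraction condition on the transition function mentioned above can be illustrated with a small sketch (a hypothetical example, not code from the paper): for a simple recurrent update h' = tanh(W h + U x), the map is a contraction in the state whenever the spectral norm of W is strictly below 1, since tanh is 1-Lipschitz. States started from different initial conditions and driven by the same inputs then converge geometrically, which bounds the effective memory of the network.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical contractive recurrent transition: h' = tanh(W h + U x).
# tanh is 1-Lipschitz, so the map contracts in h whenever ||W||_2 < 1.
dim, in_dim = 8, 4
W = rng.normal(size=(dim, dim))
W *= 0.9 / np.linalg.norm(W, ord=2)   # rescale so the spectral norm is 0.9 < 1
U = rng.normal(size=(dim, in_dim))

def step(h, x):
    return np.tanh(W @ h + U @ x)

# Two different initial states, driven by the same input sequence,
# converge: the network "forgets" its initial state at rate <= 0.9 per step.
h1 = rng.normal(size=dim)
h2 = rng.normal(size=dim)
gaps = []
for _ in range(50):
    x = rng.normal(size=in_dim)
    h1, h2 = step(h1, x), step(h2, x)
    gaps.append(np.linalg.norm(h1 - h2))
```

After 50 steps the gap is guaranteed to shrink by at least a factor of 0.9^49, so `gaps[-1]` is a small fraction of `gaps[0]` regardless of the inputs; this fading-memory effect is what makes a VC-style analysis feasible in the contractive case.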
Year of Publication
2001
Conference Proceedings Title
Artificial Neural Networks — ICANN 2001
Series or Journal Title
Lecture Notes in Computer Science
Volume
2130
Page(s)
731-736
Conference
Artificial Neural Networks (ICANN 2001)
Conference Location
Vienna, Austria
Conference Date
2001-08-21 – 2001-08-25
ISBN
978-3-540-42486-4
eISBN
978-3-540-44668-2
Page URI
https://pub.uni-bielefeld.de/record/2982130

Cite

Hammer, B. (2001). On the Generalization Ability of Recurrent Networks. In G. Dorffner, H. Bischof, & K. Hornik (Eds.), Lecture Notes in Computer Science: Vol. 2130. Artificial Neural Networks — ICANN 2001 (pp. 731-736). Berlin, Heidelberg: Springer Berlin Heidelberg. https://doi.org/10.1007/3-540-44668-0_102