Learning and Generalization in Cascade Network Architectures

Littmann E, Ritter H (1996)
Neural Computation 8(7): 1521-1539.

No full text has been uploaded; publication record only.
Journal article | Published | English
Authors
Littmann E; Ritter H
Abstract / Notes
Incrementally constructed cascade architectures are a promising alternative to networks of predefined size. This paper compares the direct cascade architecture (DCA) proposed in Littmann and Ritter (1992) to the cascade-correlation approach of Fahlman and Lebiere (1990) and to related approaches, and discusses their properties on the basis of various benchmark results. One important virtue of DCA is that it allows the cascading of entire subnetworks, even if these admit no error backpropagation. Exploiting this flexibility and using LLM (local linear map) networks as cascaded elements, we show that the performance of the resulting network cascades can be greatly enhanced compared to the performance of a single network. Our results for the Mackey-Glass time series prediction task indicate that such deeply cascaded network architectures achieve good generalization even on small data sets, where shallow, broad architectures of comparable size suffer from overfitting. We conclude that the DCA approach offers a powerful and flexible alternative to existing schemes, such as the mixtures-of-experts approach, for the construction of modular systems from a wide range of subnetwork types.
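The direct cascading idea described in the abstract can be illustrated with a minimal sketch. This is not the paper's actual algorithm or benchmark; it only shows, under assumed details, how stages that need no backpropagation (here: fixed random features with a least-squares readout) can be cascaded, with each stage receiving the raw input plus the previous stage's output and being trained directly on the original targets.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 1-D regression data (illustrative only; the paper uses benchmarks
# such as Mackey-Glass time series prediction).
X = rng.uniform(-1, 1, size=(200, 1))
y = np.sin(3 * X[:, 0]) + 0.1 * rng.normal(size=200)

def fit_stage(inputs, targets, n_hidden=20):
    """One cascaded 'subnetwork': a fixed random tanh feature layer with a
    linear readout solved by least squares -- trainable without any
    backpropagation, echoing the abstract's point that DCA can cascade
    modules that admit none."""
    W = rng.normal(size=(inputs.shape[1], n_hidden))
    b = rng.normal(size=n_hidden)
    H = np.tanh(inputs @ W + b)
    w_out, *_ = np.linalg.lstsq(H, targets, rcond=None)
    return lambda Z: np.tanh(Z @ W + b) @ w_out

# Direct cascade: stage k sees the raw input plus the previous stage's
# output; every stage is fit directly against the original targets.
stages, prev, errors = [], np.zeros_like(y), []
for k in range(4):
    inputs = np.column_stack([X, prev])
    stage = fit_stage(inputs, y)
    prev = stage(inputs)
    stages.append(stage)
    errors.append(np.mean((prev - y) ** 2))

print(errors)  # training MSE after each cascade stage
```

The direct input connections to every stage are the assumed detail that distinguishes this sketch from a plain stack of networks: each stage can correct its predecessor without losing access to the original signal.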
Year of publication
1996
Journal title
Neural Computation
Volume
8
Issue
7
Pages
1521-1539
eISSN
PUB-ID

Cite

Littmann, E., & Ritter, H. (1996). Learning and Generalization in Cascade Network Architectures. Neural Computation, 8(7), 1521-1539. doi:10.1162/neco.1996.8.7.1521

2 citations in Europe PMC

Data provided by Europe PubMed Central.

Neural network for graphs: a contextual constructive approach.
Micheli A., IEEE Trans Neural Netw 20(3), 2009
PMID: 19193509
Adaptive color segmentation-a comparison of neural and statistical methods.
Littmann E, Ritter H., IEEE Trans Neural Netw 8(1), 1997
PMID: 18255622

9 References



Friedman, Math. Software 3, 1977

LeCun, in Advances in Neural Information Processing Systems 2, D. S. Touretzky, ed., 1990

Stokbro, Complex Systems 4, 1990

Mézard, Journal of Physics A: Mathematical and General 22(12), 1989
The self-organizing map
Kohonen, Proceedings of the IEEE 78(9), 1990
Learning representations by back-propagating errors
Rumelhart, Nature 323(6088), 1986
What Size Net Gives Valid Generalization?
Baum, Neural Computation 1(1), 1989
Oscillation and chaos in physiological control systems.
Mackey MC, Glass L., Science 197(4300), 1977
PMID: 267326
Toward generating neural network structures for function approximation
Nabhan, Neural Networks 7(1), 1994


Sources

PMID: 8823945
PubMed | Europe PMC
