# Learning and Generalization in Cascade Network Architectures

Littmann E, Ritter H (1996)

Neural Computation 8(7): 1521-1539.

*Journal Article* | *Published* | *English*


Authors

Littmann, Enno; Ritter, Helge

Abstract

Incrementally constructed cascade architectures are a promising alternative to networks of predefined size. This paper compares the direct cascade architecture (DCA) proposed in Littmann and Ritter (1992) to the cascade-correlation approach of Fahlman and Lebiere (1990) and to related approaches, and discusses their properties on the basis of various benchmark results. One important virtue of DCA is that it allows the cascading of entire subnetworks, even if these admit no error-backpropagation. Exploiting this flexibility and using LLM networks as cascaded elements, we show that the performance of the resulting network cascades can be greatly enhanced compared to the performance of a single network. Our results for the Mackey-Glass time series prediction task indicate that such deeply cascaded network architectures achieve good generalization even on small data sets, when shallow, broad architectures of comparable size suffer from overfitting. We conclude that the DCA approach offers a powerful and flexible alternative to existing schemes, such as the mixtures-of-experts approach, for the construction of modular systems from a wide range of subnetwork types.
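The Mackey-Glass prediction task referred to in the abstract is a standard chaotic time-series benchmark generated from a delay differential equation. A minimal sketch of how such benchmark data is commonly produced (using the conventional parameter choices a=0.2, b=0.1, n=10, tau=17 and simple Euler integration, none of which are specified in this record) could look like:

```python
import numpy as np

def mackey_glass(n_samples=1000, tau=17, a=0.2, b=0.1, n=10, dt=1.0, x0=1.2):
    """Generate a Mackey-Glass series by Euler integration of
        dx/dt = a * x(t - tau) / (1 + x(t - tau)**n) - b * x(t)
    Parameters follow the common benchmark convention (an assumption here,
    not taken from this record)."""
    history = int(tau / dt)            # number of delay steps
    total = n_samples + history
    x = np.empty(total)
    x[:history + 1] = x0               # constant initial history
    for t in range(history, total - 1):
        x_tau = x[t - history]         # delayed state x(t - tau)
        x[t + 1] = x[t] + dt * (a * x_tau / (1.0 + x_tau ** n) - b * x[t])
    return x[history:]                 # drop the artificial warm-up history

series = mackey_glass()
```

Prediction benchmarks of the kind discussed in the paper then train a network to map a window of past samples to a future value of `series`.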


### Cite this

Littmann E, Ritter H. Learning and Generalization in Cascade Network Architectures. *Neural Computation*. 1996;8(7):1521-1539.

### 2 Citations in Europe PMC

Data provided by Europe PubMed Central.

Neural network for graphs: a contextual constructive approach. Micheli A. *IEEE Trans Neural Netw* 20(3), 2009. PMID: 19193509

Adaptive color segmentation - a comparison of neural and statistical methods. Littmann E, Ritter H. *IEEE Trans Neural Netw* 8(1), 1997. PMID: 18255622

### 9 References

Data provided by Europe PubMed Central.

Friedman. *ACM Transactions on Mathematical Software* 3, 1977.

LeCun. *Advances in Neural Information Processing Systems 2* (D. S. Touretzky, ed.), 1990.

Stokbro. *Complex Systems* 4, 1990.

Mézard. *Journal of Physics A: Mathematical and General* 22(12), 1989.

The self-organizing map. Kohonen. *Proceedings of the IEEE* 78(9), 1990.
Learning representations by back-propagating errors. Rumelhart. *Nature* 323(6088), 1986.
What Size Net Gives Valid Generalization? Baum. *Neural Computation* 1(1), 1989.
Oscillation and chaos in physiological control systems. Mackey MC, Glass L. *Science* 197(4300), 1977. PMID: 267326

Toward generating neural network structures for function approximation. Nabhan. *Neural Networks* 7(1), 1994.

### Sources

PMID: 8823945

PubMed | Europe PMC