Identification without randomization

Ahlswede R, Cai N (1999)
IEEE Transactions on Information Theory 45(7): 2636-2642.

Journal article | Published | English
 
Download
No files have been uploaded. Publication record only!
Author(s)
Rudolf Ahlswede, Ning Cai
Abstract / Note
In the theory of identification via noisy channels, randomization in the encoding has a dramatic effect on the optimal code size: it grows double-exponentially in the blocklength, whereas in the theory of transmission it shows the familiar exponential growth. Instead of the discrete memoryless channel (DMC), we now consider more robust channels such as the familiar compound channel (CC) and the arbitrarily varying channel (AVC). They can be viewed as models for jamming situations. We make the pessimistic assumption that the jammer knows the input sequence before he acts. This forces the communicators to use the maximal error concept (see [1]) and also makes randomization in the encoding superfluous. Now, for a DMC W, by a simple observation made in [2], in the absence of randomization the identification capacity, say C_NRI(W), equals the logarithm of the number of different row vectors of W. We generalize this to compound channels. A formidable problem arises if the DMC W is replaced by an AVC 𝒲. In fact, if 𝒲 contains only 0-1-matrices we are, exactly as for transmission, led to the equivalent zero-error capacity of Shannon (see [3]). But for general 𝒲 the identification capacity C_NRI(𝒲) is quite different from the transmission capacity C(𝒲). An observation is that the separation codes of [1] are also relevant here. We present a lower bound on C_NRI(𝒲). It implies, for instance for a particular 𝒲 (given by a matrix displayed in the paper and omitted in this record), that C_NRI(𝒲) = 1, which is obviously tight. It exceeds C(𝒲), which is known ([1]) to exceed 1 - h(δ), where h is the binary entropy function. We observe that a separation code with worst-case average list size L̄ (which we call an NRA-code) can be partitioned into L̄ · 2^(nε) transmission codes. This gives a non-single-letter characterization of the capacity of the AVC under maximal probability of error in terms of the capacity of codes with list decoding. We also prove that randomization in the decoding increases neither C_r(W) nor C_NRI(W). Finally, we draw attention to related work on source coding ([4], [5]).
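To make the quoted formula for the non-randomized identification capacity of a DMC concrete, the following sketch computes the logarithm of the number of distinct row vectors of a channel matrix. It only illustrates the formula stated in the abstract and is not code from the paper; the function name c_nri_dmc and the example matrices are hypothetical, and rows are compared exactly, with no tolerance for floating-point noise.

```python
# Illustrative sketch (not from the paper): for a DMC given by a row-stochastic
# matrix W, the abstract states that the identification capacity without
# randomization, C_NRI(W), equals the logarithm of the number of distinct
# row vectors of W.
import numpy as np

def c_nri_dmc(W: np.ndarray) -> float:
    """Hypothetical helper: log2 of the number of distinct rows of W."""
    distinct_rows = {tuple(row) for row in np.asarray(W)}
    return float(np.log2(len(distinct_rows)))

# Example: a binary symmetric channel with crossover 0.1 has two distinct rows,
# so the formula gives C_NRI = log2(2) = 1 bit; a channel whose rows coincide
# gives C_NRI = log2(1) = 0.
bsc = np.array([[0.9, 0.1],
                [0.1, 0.9]])
useless = np.array([[0.5, 0.5],
                    [0.5, 0.5]])
print(c_nri_dmc(bsc))      # 1.0
print(c_nri_dmc(useless))  # 0.0
```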
Keywords
separation codes
Year of publication
1999
Journal title
IEEE Transactions on Information Theory
Volume
45
Issue
7
Page(s)
2636-2642
ISSN
0018-9448
Page URI
https://pub.uni-bielefeld.de/record/1621563

Cite

Ahlswede, R., & Cai, N. (1999). Identification without randomization. IEEE Transactions on Information Theory, 45(7), 2636-2642. https://doi.org/10.1109/18.796419