Shannon's Random-Cipher Result and the Generalized R-Norm Entropy of Type β

DOI: 10.1155/2013/768384

Abstract: Using the Fano inequality for the generalized R-norm entropy and Bayes' probability of error, a generalized random-cipher result is proved by taking into account the generalized R-norm entropy of type β.

1. Introduction

It is known that a good cryptosystem can be built provided that the key rate is greater than the message redundancy [1]. Shannon obtained this result by considering the equivocation of the key over a random cipher. By counting the average number of spurious decipherments over a restricted class of random ciphers, Hellman [2] obtained the same result. A similar result was proved by Lu [3], who used the average probability of correct decryption of a message digit as the measure of performance, together with the Fano inequality, for a class of cryptosystems. Lu's analysis is exact, whereas the analysis in [1] relies on approximations. All of these results are obtained with respect to the Shannon entropy. Sahoo [4] generalized the results of Lu by considering Rényi's entropy and the Bayes probability of error. The literature of information theory, however, contains various other generalizations of Shannon's entropy. One of these is the R-norm information, which was introduced by Arimoto [5] and extensively studied by Boekee and Van der Lubbe [6]. The objective of this paper is to generalize the results of Lu by considering the generalized R-norm entropy of type β and Bayes' probability of error.

2. Generalization of Shannon's Random-Cipher Result

Consider a discrete random variable X, which takes values x_1, x_2, ..., x_n, having the complete probability distribution P = (p_1, p_2, ..., p_n), p_i ≥ 0, Σ_{i=1}^n p_i = 1. Also consider the set of positive real numbers not equal to 1; that is, R ∈ {R : R > 0, R ≠ 1}. Then the R-norm information [5] is defined as

\[ H_R(P) = \frac{R}{R-1}\left[1-\left(\sum_{i=1}^{n} p_i^{R}\right)^{1/R}\right]. \tag{1} \]

This measure is different from the entropies of Shannon [1], Rényi [7], Havrda and Charvát [8], and Daróczy [9]. The most interesting property of this measure is that as R → 1 it approaches Shannon's [1] entropy, and as R → ∞ it tends to 1 − max_i p_i. The measure (1) can be generalized in many ways; in particular, Hooda and Ram [10] studied the following parametric generalization:

\[ H_R^{\beta}(P) = \frac{R}{R-\beta}\left[1-\left(\sum_{i=1}^{n} p_i^{R/\beta}\right)^{\beta/R}\right], \tag{2} \]

where R > 0, β > 0, and R ≠ β. The measure (2) may be called the generalized R-norm entropy of type β, and it reduces to (1) when β = 1. In case R = 1, (2) reduces to

\[ H_1^{\beta}(P) = \frac{1}{1-\beta}\left[1-\left(\sum_{i=1}^{n} p_i^{1/\beta}\right)^{\beta}\right]. \tag{3} \]

Setting β = 1/R in (3), we get

\[ H_1^{1/R}(P) = \frac{R}{R-1}\left[1-\left(\sum_{i=1}^{n} p_i^{R}\right)^{1/R}\right]. \tag{4} \]

The information measure (4) has also been mentioned by Arimoto [5] as an example of a generalized class of information measures. Although (4) and (1) have the same form, they differ in the admissible ranges of R and β. However, (2) is a joint representation of (1) and (4), so it is of interest to study the applications of the generalized R-norm entropy of type β. Let us consider now …
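To make the measures above concrete, here is a minimal numerical sketch (in Python; not part of the original paper, with function names chosen purely for illustration) that evaluates (1) and (2) and checks the reductions stated in the text: β = 1 recovers (1), R → 1 approaches Shannon's entropy (in nats), and large R drives (1) toward 1 − max_i p_i.

```python
import math

def r_norm_entropy(p, R):
    """R-norm information, Eq. (1): (R/(R-1)) * (1 - (sum_i p_i^R)^(1/R))."""
    assert R > 0 and R != 1
    return (R / (R - 1)) * (1 - sum(pi ** R for pi in p) ** (1 / R))

def generalized_r_norm_entropy(p, R, beta):
    """Generalized R-norm entropy of type beta, Eq. (2):
    (R/(R-beta)) * (1 - (sum_i p_i^(R/beta))^(beta/R))."""
    assert R > 0 and beta > 0 and R != beta
    return (R / (R - beta)) * (1 - sum(pi ** (R / beta) for pi in p) ** (beta / R))

def shannon_entropy(p):
    """Shannon entropy in nats, for comparison with the R -> 1 limit of (1)."""
    return -sum(pi * math.log(pi) for pi in p if pi > 0)

P = [0.5, 0.3, 0.2]

# beta = 1 reduces (2) to (1): both lines print ~0.7671.
print(r_norm_entropy(P, 2.0))
print(generalized_r_norm_entropy(P, 2.0, 1.0))

# R -> 1: (1) approaches the Shannon entropy, ~1.0297 nats.
print(r_norm_entropy(P, 1.0001), shannon_entropy(P))

# R -> infinity: (1) tends to 1 - max(p_i) = 0.5 (here ~0.5025 at R = 200).
print(r_norm_entropy(P, 200.0))
```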