With the development of communications, the security problem for confidential information has emerged, along with theft of or damage to data repositories to which there is no free access. Therefore, several encryption procedures for information, in particular for images, have appeared.1 There are new methods using the Hilbert transform,2 chaos,3 hyper-chaos,4 and even the advanced encryption standard (AES) cryptosystem5 with the CBC encryption mode,6 although the latter is a sequential encryption. The first two are fast, but they have a robustness problem,6 in that the number of elements in the key set is not specifically mentioned. The hyper-chaos method4 reports its key size, but its authors only discuss brute-force attacks and no other type of analysis, for example, differential7 and linear8 attacks. The AES cryptosystem, by contrast, has no known problem with either of them yet.9 Moreover, the AES key set can reach $2^{256}$ elements. It is also important to note that the AES algorithm uses the substitution operation through a box. The substitution operation gives nonlinearity to the encryption process,10 and none of the aforementioned algorithms has it. In fact, the nonlinearity of the AES box is superior to that of the data encryption standard (DES) and triple-DES boxes.11 There are also encryptions for color images using the Fourier transform,12 and the gyrator and Arnold transforms.13 However, in these two latest investigations, the specific algorithm complexity is not shown. On the other hand, there are some color image encryption investigations14 that do not specify the key set size. Other optical papers15–17 demonstrate cipher images in an original way, but they do not clearly show the key set size. There is also an important paper on image encryption,18 although its method alters the original image when it is decrypted, which matters because in some countries this is not allowed.
There is an interesting investigation using a chaotic map to encrypt images,19 but that paper does not employ the tests of NIST Special Publication 800-22 to measure the randomness of the encrypted images.
We decided to use the AES algorithm for image encryption for the following reasons: it is a recent symmetric encryption system, and it is also the International Standard at this time.5 This makes the AES algorithm one of the most studied worldwide; however, an efficient method for breaking it has not yet been found.20 The encryption “quality” of a figure concerns the degree of randomness in the distribution of the image bits. Several methods have been utilized to measure this degree of randomness.21 In this work, the following are used: the correlation coefficients in the horizontal, vertical, and diagonal directions,22 the entropy, and the discrete Fourier transform (DFT). The latter measures the degree of periodicity of the string of zeroes and ones, i.e., whether or not a pattern is followed. Furthermore, a different way to measure the degree of randomness of the encrypted figure bits is proposed, using a “goodness-of-fit test.”22
This investigation does not use compression on images because there are some countries whose security areas do not allow compression in the encrypted image.23 In other words, it does not utilize the process compression–encryption → decryption–decompression; it only employs encryption → decryption. Five images are used as prototypes to be encrypted, of which four appear in most papers on figure encryption; these images are the following: baboon, Barbara, Lena, and peppers. A criterion to select the fifth figure to be encrypted is proposed. All the images have a different degree of difficulty to be encrypted when a symmetric cryptosystem is used. This difficulty depends on the degree of randomness of the bits of the figure to be encrypted, since an image with a high degree of randomness in its bit distribution is easier to encrypt than a figure that has low randomness, when the ECB encryption mode is used. This paper is organized as follows: this section presents a very synthetic state of the art, and Sec. 2 shows the basic concepts to be used in this investigation. The bijective function is addressed in Sec. 3, where it is also demonstrated. Section 4 approaches the AES algorithm with variable permutations and constructs a permutation over the whole image. Ways to measure the degree of randomness are presented in Sec. 5. Section 6 shows the analysis of encrypted figures with damage, and the outcome of the degree of randomness in the encrypted figures is shown in Sec. 7. Section 8 discusses the results. Finally, the conclusions are given in Sec. 9.
Even though the AES algorithm is the most studied in the world, an efficient way to break it is not known. Another important issue to clarify is that AES is a symmetric algorithm, which makes it a very fast method with which to encrypt data. However, if a figure with a low degree of randomness in its bit distribution is encrypted using the AES algorithm without any modification, the encrypted image could still provide information, i.e., the distribution of the different shades of the basic colors could follow a certain pattern. So, it is necessary to employ an additional element in the algorithm. Thus, this paper proposes applying a different permutation in each 128-bit block. This permutation is applied in the first round after the x-or operation. The reason why the permutation is not used at the first round entrance, as in triple-DES,11 is because some images have areas with the same color, such as black or white. In this situation, whatever permutation is applied to an all-zeroes or all-ones string would not make any change in the chain. If the permutation is used after the x-or operation, then it allows for changing some string bits. Another question is why in the first round? Well, it is understood that the information is mixed in each round, so any change carried out by the permutation at the first round will have more opportunity to mix the information. Therefore, at the end of the encryption process the string of zeroes and ones will be random.
As is known, the transcendental numbers are not the root of any polynomial of the form $a_n x^n + \cdots + a_1 x + a_0 = 0$ with integer coefficients and $a_n \neq 0$; besides, all the digits after the decimal point have the property of not following any periodicity,24 making them good candidates to be used as pseudorandom numbers.25 In fact, for this feature, irrational numbers are employed in SHA hash functions.26 The transcendental number used in this investigation is $\pi$, because it has been studied for a long time.27 On the other hand, the permutations generated depend on the AES key in accordance with the following procedure: let $k$ denote the integer that represents the 128-bit AES key chain. Then the product $\pi k$ is also a transcendental number. Thus, using this last number it is possible to get the constants to build the permutations.
The entropy is measured according to the formula $H = -\sum_{i=0}^{255} p_i \log_2 p_i$. When working with each basic color of the images (red, green, or blue), each one can be described with 1 byte, i.e., 256 levels are sufficient for each. So, if each basic color has a uniform distribution, all points are equally likely, and the entropy value is 8.28 This means the information is completely random. However, in practice, this is not so. Then values as close to 8 as possible are sought in the distribution of the basic colors red, green, and blue of the encrypted figure. If the image is mono-colored, the procedure is basically the same, since only 1 byte is used to describe the gray color, i.e., 256 different gray levels, following the same reasoning as for the color images.
A statistical test to evaluate the randomness of a bit chain is formulated by means of a null hypothesis $H_0$ versus an alternative $H_1$. The null hypothesis establishes that the bit sequence is random, and the alternative hypothesis is the opposite. To accept or reject the null hypothesis, a statistic and a threshold are used. If the statistic based on the data has a value bigger than the threshold, the null hypothesis is accepted; otherwise it is rejected. In any hypothesis test scheme there are two errors, namely, type I and type II errors. The type I error is committed when $H_0$ is rejected when this hypothesis is true, and the type II error is committed when $H_0$ is accepted and it is false. The type I error can be controlled, because it is supposed that $H_0$ is the more important of the two hypotheses. The amount used in this research for the type I error is $\alpha = 0.01$, although the value 0.001 can also be used.21 The $\alpha$ error is also called the significance level.
The probability distributions used in the randomness tests are the Chi-square and the complementary error function $\operatorname{erfc}(x) = \frac{2}{\sqrt{\pi}} \int_x^{\infty} e^{-t^2}\,dt$.29 It is possible to express the $\operatorname{erfc}$ function in terms of the standard normal cumulative distribution according to the following reasoning:
The standard normal cumulative distribution is $\Phi(z) = \frac{1}{\sqrt{2\pi}} \int_{-\infty}^{z} e^{-u^2/2}\,du$, so that $1 - \Phi(z) = \frac{1}{\sqrt{2\pi}} \int_{z}^{\infty} e^{-u^2/2}\,du$.
The variable change $u = t\sqrt{2}$, $du = \sqrt{2}\,dt$, is proposed for the integral in $\operatorname{erfc}(x)$; the lower limit $t = x$ then corresponds to $u = x\sqrt{2}$.
Therefore, $\operatorname{erfc}(x) = \frac{2}{\sqrt{\pi}} \int_{x\sqrt{2}}^{\infty} e^{-u^2/2}\,\frac{du}{\sqrt{2}} = \frac{2}{\sqrt{2\pi}} \int_{x\sqrt{2}}^{\infty} e^{-u^2/2}\,du$; thus $\operatorname{erfc}(x) = 2\left[1 - \Phi\!\left(x\sqrt{2}\right)\right]$.
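This identity can be verified numerically; the following sketch (Python standard library only; the helper name `phi` is ours) compares $\operatorname{erfc}(x)$ with $2[1-\Phi(x\sqrt{2})]$:

```python
import math

def phi(z):
    """Standard normal cumulative distribution, expressed via the error function."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

# erfc(x) should equal 2 * [1 - Phi(x * sqrt(2))] for every x.
for x in [0.0, 0.5, 1.0, 2.0, 3.5]:
    lhs = math.erfc(x)
    rhs = 2.0 * (1.0 - phi(x * math.sqrt(2.0)))
    assert abs(lhs - rhs) < 1e-12
```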
The concept of a real-time decision relates to the following: it is known that there are important decisions for which only a short time is available to make them; therefore, the procedure expressed here contributes to the process of making a timely decision.
Let us have the following considerations: given a natural number $n$, the sets $S = \{0, 1, \ldots, n! - 1\}$ and $A = \{0, 1, \ldots, n - 1\}$ can be defined, such that a permutation of the $A$ array is sought. According to the Euclid division algorithm,30 any $m \in S$ can be written in a unique way as follows:
$m = c_{n-1}(n-1)! + c_{n-2}(n-2)! + \cdots + c_1 1! + c_0 0!$, with $0 \leq c_i \leq i$.
Note that for a given $m$, the constants $c_{n-1}, \ldots, c_0$ are fixed. It will be seen in the algorithm description that the constant $c_0 = 0$ is not used, since the last element appears automatically. Also, it is easy to prove that $\sum_{i=1}^{n-1} i \cdot i! = n! - 1$, so every element of $S$ admits such a representation.
Once the constants $c_i$ are calculated, the following algorithm can be constructed:
Step 0. An array in ascending order can be defined as follows: $a_j = j$, $j = 0, 1, \ldots, n-1$.
Step 1. According to Eq. (5), $0 \leq c_{n-1} \leq n-1$; so $a_{c_{n-1}}$ is an element of the step 0 array. $a_{c_{n-1}}$ is removed from the step 0 arrangement, and in its place $a_{n-1}$, i.e., the last element of the array, is put. Note that only two operations are performed, removal and replacement; in fact, the other array elements remain unchanged.
Step 2. Again using Eq. (5), $0 \leq c_{n-2} \leq n-2$; thus $a_{c_{n-2}}$ is an element of the array from step 1. In the same way as in the previous step, $a_{c_{n-2}}$ is removed and replaced by the last element of the step 1 array.
Step $n-1$. If this process is repeated, at the end the result will be the sequence of selected elements $\pi_1, \pi_2, \ldots, \pi_{n-1}$, with $\pi_j \in A$. The last number, $\pi_n$, appears automatically as it is the last remaining element, i.e., the element in position zero.
The arrangement of nonnegative integers $\pi_1, \pi_2, \ldots, \pi_n$ is a permutation of the $A$ array. This procedure is made in $n-1$ steps. The complexity of implementing this algorithm is $O(n)$, because at every step only the removal and replacement of one item is made; the remainder of the array is unchanged.
Clearly, in the case of images with 250,000 pixels or more, an algorithm with $O(n)$ complexity represents an important advantage. The algorithm presented above defines a function that goes from $S$ to $P_n$, the set of permutations of the $A$ array; it is denoted as $f$. Next, $f: S \rightarrow P_n$ is demonstrated to be a bijective function.
Let us have the sets $S$ and $P_n$, such that $S = \{0, 1, \ldots, n! - 1\}$ and $P_n$ is the set of permutations of the $A$ array. Then the function $f: S \rightarrow P_n$ is bijective.
First, it is shown that $f$ is a one-to-one function, using reductio ad absurdum as the demonstration method. In this vein, suppose that $f(m_1) = f(m_2)$ with $m_1 \neq m_2$; according to Eq. (4), the positive integers $m_1$, $m_2$ can be written as follows:
$m_1 = c_{n-1}(n-1)! + \cdots + c_1 1!$ and $m_2 = c'_{n-1}(n-1)! + \cdots + c'_1 1!$.
But it is supposed that $f(m_1) = f(m_2)$, which means that the elements of both permutations were selected in the same way; therefore, it must be true that $c_i = c'_i$ for all $i$. If this is so, then $m_1 = m_2$. The latter contradicts the hypothesis, and it is concluded that $f(m_1) \neq f(m_2)$ if $m_1 \neq m_2$. This proves that $f$ is a one-to-one function.
The proof that the function $f$ is surjective is simple, since $f$ is injective and the number of elements in the sets $S$ and $P_n$ is equal, namely $n!$. ∎
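The whole construction, from the factorial-base constants to the final permutation, can be sketched in Python (the function name is illustrative, not from the paper); the last two lines check the bijectivity claim for $n = 4$:

```python
def permutation_from_index(m, n):
    """Build the permutation f(m) of {0, ..., n-1} from m in {0, ..., n!-1}."""
    # Factorial-base digits: m = c_1*1! + c_2*2! + ... + c_{n-1}*(n-1)!,
    # with 0 <= c_i <= i (c_0 = 0 is never needed).
    c = []
    q = m
    for i in range(1, n):
        c.append(q % (i + 1))
        q //= (i + 1)

    a = list(range(n))          # Step 0: array in ascending order
    perm = []
    for i in range(n - 1, 0, -1):
        j = c[i - 1]            # 0 <= c_i <= i
        perm.append(a[j])       # select the element a_{c_i}...
        a[j] = a[i]             # ...and replace it by the last element
        a.pop()                 # the array shrinks by one position
    perm.append(a[0])           # the last number appears automatically
    return perm

# Bijectivity check for n = 4: the 4! = 24 indices give 24 distinct permutations.
perms = {tuple(permutation_from_index(m, 4)) for m in range(24)}
assert len(perms) == 24
```

Each pass of the loop performs only the removal and replacement described in the steps above, so the cost is linear in $n$.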
Advanced Encryption Standard with Variable Permutations
At this point, the way to use the algorithm presented above to construct a pseudorandom permutation over a 128-position array, and then how to apply this tool in image encryption, is shown. In this vein, it proceeds as follows: if the algorithm developed in the previous section is observed, knowing the values $c_{n-1}, \ldots, c_1$, a permutation can be constructed. Note that it is not important to know the number $m$ with all its digits; fortunately so, because it is impossible to work with integers as huge as those of the order of $250{,}000!$ or higher.
In this sense, these quantities are used only as marks, so it is not important to write all their digits. Then, the next question to address is how to choose pseudorandom values for the constants $c_i$. First, the permutations are built on 128-position arrangements using the number $\pi k$ thus:
1. The AES symmetric cryptosystem key is a string of zeroes and ones, which in turn represents a positive integer; that is, if the key is 128 bits in length, then the integer associated with the bit string has the form $k = \sum_{j=0}^{127} b_j 2^j$, where $b_j \in \{0, 1\}$ for $j = 0, 1, \ldots, 127$, so that $k < 2^{128}$. This integer is denoted as $k$; then this paper proposes that $k$ multiplies $\pi$, such that the product $\pi k$ is itself a transcendental number.
Particularly, in this research, the AES-128 symmetric cryptosystem is used, although there is the possibility of using keys up to 256 bits.
2. After making the multiplication $\pi k$, the digits to the right of the decimal point are taken in 8-bit blocks. These blocks are also positive integers, and they are denoted as $B_1, B_2, \ldots$. The constants can be defined as $c_i = B_i \bmod (i+1)$, for $i = 1, \ldots, 127$, so that $0 \leq c_i \leq i$. If in the above procedure each 127-byte block is taken one after the other, the required number of bytes can be very large. For example, for an image of 7,372,800 bits, 57,600 128-bit blocks are required; then the necessary bytes are $57{,}600 \times 127 = 7{,}315{,}200$. This amount may be reduced if the procedure is as follows: the first permutation can be built from byte 0 to 126, the second permutation from byte 1 to 127, the third from byte 2 to 128, and so on until the required number of permutations is reached. If the procedure is made in this way, the number of bytes used for the above example is $127 + 57{,}599 = 57{,}726$; as can be seen, this is a significant reduction.
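The byte accounting of this overlapping scheme can be sketched as follows (Python; `stream` is a placeholder for the bytes taken from the expansion of $\pi k$, and the function names are ours):

```python
def constants_for_block(stream, start):
    """c_1, ..., c_127 for one 128-position permutation, from bytes start..start+126."""
    return [stream[start + i - 1] % (i + 1) for i in range(1, 128)]

def bytes_needed(blocks, window=127, overlapped=True):
    """Bytes of the pi*k expansion consumed to build `blocks` permutations."""
    # Overlapped: each new permutation slides the 127-byte window by one byte.
    return window + (blocks - 1) if overlapped else window * blocks

# Example from the text: 7,372,800 bits -> 57,600 blocks of 128 bits.
blocks = 7_372_800 // 128
assert blocks == 57_600
assert bytes_needed(blocks, overlapped=False) == 7_315_200  # one block after another
assert bytes_needed(blocks, overlapped=True) == 57_726      # sliding one byte at a time
```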
For the whole image, the number of items to permute, $n$, can be 250,000 or more. For a situation like this, the procedure is the same as when $n = 128$. There are some differences; namely, the size of the blocks in this case is 24 bits, because many current images do not exceed $2^{24}$ pixels in spatial resolution.
Clearly, these blocks also represent positive integers. So, the constants can be defined as $c_i = B_i \bmod (i+1)$, for $i = 1, \ldots, n-1$, with $0 \leq c_i \leq i$.
Sometimes, some bytes must be subtracted from the image to be encrypted according to the next criterion: if the image length in bytes is not congruent to 0 modulo 16 (the 128-bit block size), then the minimum number of bytes required, say $r$, is subtracted such that the remaining length is congruent to 0 modulo 16. It is important to point out that $r < 16$, and these bytes are not encrypted. Once the $c_i$ values are known, the permutations on the $n$-element array can be calculated according to the procedure described in Sec. 3. In the case of AES with variable permutations, the constant sets that are necessary according to the image size are calculated.
1. In a random way, the AES-128 cryptosystem key is chosen, that is, a chain of 128 bits.
2. The bit string is converted into a positive integer, which is denoted as $k$, and later the multiplication $\pi k$ is performed.
3. The constants $c_i$ are calculated as described above to get the permutations: one over 128 positions and another for the whole picture.
4. The permutation is applied to the entire image, and the permuted image is encrypted with the AES-128 system with variable permutations.
5. The sender encrypts the AES-128 key with the addressee’s public key using the asymmetric ElGamal cryptosystem. Subsequently, the receiver can recover the AES-128 key using its private key.
In this research, the signature and nonrepudiation, which are part of the structure of PKI secure communication,33 are not discussed since they are not within the scope of this work.
In Figs. 1 and 2, flowcharts are shown; the first illustrates how the permutation for the entire image is obtained, and the second exemplifies how the variable permutations of 128 positions are developed.
Regarding the security of this cryptosystem, the following can be mentioned: the worst that can happen is that the AES key becomes known, because if this encryption scheme is used, the permutation over the whole image and the variable permutations applied in each block can then be calculated. So, in this situation, the maximum security is that of the $2^{128}$-element key space. However, if the goal is to find the key taking the permuted image as plaintext using a brute-force attack, the problem has a larger complexity; in a previous work34 strong evidence was given, using the Monte Carlo method, that the DES algorithm with variable permutations has a complexity well above that of its key space alone. Clearly, if the problem is to find the key from the initial image, the solution would be more complex still. Chosen-plaintext attacks, such as the differential and linear attacks, are not applicable to the AES cryptosystem due to the way the substitution box is constructed.6
Correlation Coefficient, Entropy, and DFT
In this section, randomness will be analyzed using the following tests: the correlation coefficient in the horizontal, vertical, and diagonal directions, the entropy, and the DFT. It is also pointed out that the image encryption process is performed without compression, i.e., with no loss of information. In any image encryption process, it is important that the bit distribution be random in order to avoid bias that could lead to attacks for finding the key or the plaintext.
Adjacent pixels are considered in three directions, namely, horizontal, vertical, and diagonal. Furthermore, it is said that a picture is “well encrypted” if the correlation coefficient between adjacent pixels is a number close to zero.35 The process of calculating the correlation coefficient between two random variables is carried out as follows: a pixel of the encrypted image is chosen in a random way. This pixel has a level of red, green, and blue, denoted as $x_r$, $x_g$, and $x_b$, i.e., the analysis is performed for each primary color. After selecting a pixel in a random way, the next pixel in one of the adjacent directions, horizontal, vertical, or diagonal, is taken. The adjacent pixel also has a level of red, green, and blue; these levels are denoted as $y_r$, $y_g$, and $y_b$.
Now, suppose that $N$ pairs of pixels are chosen randomly. It is possible to calculate the correlation coefficients in the three directions for the three basic colors. The equation for calculating the correlation coefficient in the horizontal direction for a basic color is thus $r_{xy} = \sum_{i=1}^{N}(x_i - \bar{x})(y_i - \bar{y}) \big/ \left[\sqrt{\sum_{i=1}^{N}(x_i - \bar{x})^2}\,\sqrt{\sum_{i=1}^{N}(y_i - \bar{y})^2}\,\right]$; the vertical and diagonal cases are analogous.
In the case of a mono-color image, the method is applied only to the 256 gray levels.
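As an illustration, the horizontal case can be sketched with NumPy (the sampling scheme and names are ours; the vertical and diagonal directions only change the neighbor offset):

```python
import numpy as np

def horizontal_correlation(channel, pairs=3000, seed=1):
    """Pearson correlation of `pairs` randomly chosen horizontally adjacent pixels.

    `channel` is a 2-D array holding one basic color (red, green, or blue).
    """
    rng = np.random.default_rng(seed)
    h, w = channel.shape
    rows = rng.integers(0, h, size=pairs)
    cols = rng.integers(0, w - 1, size=pairs)     # leave room for the right neighbor
    x = channel[rows, cols].astype(float)
    y = channel[rows, cols + 1].astype(float)
    num = np.sum((x - x.mean()) * (y - y.mean()))
    den = np.sqrt(np.sum((x - x.mean()) ** 2)) * np.sqrt(np.sum((y - y.mean()) ** 2))
    return num / den

# A smooth gradient is strongly correlated; uniform noise is nearly uncorrelated.
gradient = np.tile(np.arange(256, dtype=np.uint8), (64, 1))
noise = np.random.default_rng(0).integers(0, 256, size=(256, 256), dtype=np.uint8)
assert horizontal_correlation(gradient) > 0.99
assert abs(horizontal_correlation(noise)) < 0.1
```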
The entropy analysis of the image pixels is performed for each basic color separately. In this sense, any basic color, red, green, or blue, requires only 1 byte, i.e., 256 levels. In this vein, it is said that if the distribution of the pixels is completely random, then the entropy of any of the basic colors is 8. Measuring the randomness of strings of zeroes and ones in practical cases follows the same reasoning: when the entropy is close to 8, the string of zeroes and ones is random; otherwise, it is not.
The pixels are separated into their primary colors to calculate the entropy. Assume that the bit string of the basic color $c$ is divided into blocks of 8 bits; it follows that there are 256 possible values. The frequencies are recorded in a table of 256 classes according to their order of appearance. Then, each class has a frequency $f_i$ for $i = 0, 1, \ldots, 255$, so that an estimate of the probability of each class is $\hat{p}_i = f_i/n$, where $n$ is the number of 8-bit blocks. Therefore, the entropy for the basic color $c$ is calculated as follows: $H_c = -\sum_{i=0}^{255} \hat{p}_i \log_2 \hat{p}_i$.
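A minimal sketch of this per-color entropy estimate in Python is:

```python
import collections
import math

def entropy_of_color(data):
    """Shannon entropy (bits per byte) of one basic-color byte string."""
    n = len(data)
    freq = collections.Counter(data)          # the 256 frequency classes f_i
    return -sum((f / n) * math.log2(f / n) for f in freq.values())

# A uniform distribution over the 256 levels reaches the maximum entropy of 8.
uniform = bytes(range(256)) * 100
assert abs(entropy_of_color(uniform) - 8.0) < 1e-9
# A constant color has entropy 0.
assert entropy_of_color(b"\x00" * 1000) == 0.0
```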
The DFT measures the degree of randomness of a string of zeroes and ones, i.e., whether there is periodicity (repetitive patterns) in the string. In addition, the following elements in the calculation of the test statistic are shown:
$N_0$ is the theoretical amount expected; $N_0 = 0.95\,n/2$, where $n$ is the chain length.
$N_1$ is the number of modulus values less than a threshold $T$, which depends on the length of the string.
The value $T = \sqrt{n \ln(1/0.05)}$, where the bits $\epsilon_j$ are mapped to $X_j = 2\epsilon_j - 1$ and the DFT is applied to the sequence $X_1, \ldots, X_n$.
If $n$ is odd, the last chain bit is suppressed. Clearly, the DFT output has real and imaginary parts. Then the modulus of each of the first $n/2$ components is calculated, which is real, and is compared with $T$. If the modulus is less than $T$, a one is added to the value of $N_1$; otherwise, $N_1$ stays at its previous value. With these data, the quantities $d = (N_1 - N_0)/\sqrt{n(0.95)(0.05)/4}$ and the $P$-value $= \operatorname{erfc}(|d|/\sqrt{2})$ are calculated. The decision rule is: if the $P$-value is less than 0.01, the null hypothesis is rejected; otherwise, it is accepted. The null and alternative hypotheses were defined in Sec. 2. The three tests are illustrated in Sec. 7 with the particular 128-bit key 2F9A68D501CB57F3A4E80B9A417AD254.
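A compact sketch of this spectral test, following the standard NIST SP 800-22 choices for $T$, $N_0$, and $d$ (NumPy assumed; the function name is ours), is:

```python
import math
import numpy as np

def dft_spectral_test(bits):
    """NIST SP 800-22 discrete Fourier transform (spectral) test. Returns the P-value."""
    n = len(bits) - (len(bits) % 2)                  # drop the last bit if n is odd
    x = 2 * np.asarray(bits[:n], dtype=float) - 1    # map 0/1 to -1/+1
    mods = np.abs(np.fft.fft(x))[: n // 2]           # modulus of the first n/2 components
    t = math.sqrt(n * math.log(1 / 0.05))            # threshold T
    n0 = 0.95 * n / 2                                # expected count below T
    n1 = np.sum(mods < t)                            # observed count below T
    d = (n1 - n0) / math.sqrt(n * 0.95 * 0.05 / 4)
    return math.erfc(abs(d) / math.sqrt(2))

# A strictly alternating string is periodic, so the test must reject it (P < 0.01).
alternating = [i % 2 for i in range(1024)]
assert dft_spectral_test(alternating) < 0.01
```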
Working with images, a test of randomness based on the way the bits are arranged in an encrypted figure is proposed, and the Chi-square statistic $\chi^2 = \sum_{i=0}^{255}(O_i - E_i)^2/E_i$ is used for each of the basic colors. The amounts $O_i$ and $E_i$ are the observed and expected numbers of the value $i$, respectively. Using this statistic, it is possible to quantify the randomness of the distribution of the primary colors red, green, and blue. None of the NIST 800-22 tests has this type of proof; that is, the randomness of the basic color distribution of an encrypted image is not measured. As in some of the NIST 800-22 standard tests, in this proposal the goodness-of-fit test is applied, using the statistic $\chi^2$, which has a Chi-square probability distribution with 255 degrees of freedom.22 The degrees of freedom are obtained in the following way: the shades of each color of an image can be displayed as a histogram whose abscissa has 256 divisions; then, the degrees of freedom are 255. Moreover, the random variable $\chi^2$ can be approximated by the normal distribution according to the central limit theorem.36 Thus, the mean and standard deviation of the statistic are $\mu = 255$ and $\sigma = \sqrt{510} \approx 22.58$. With this information, it is simple to calculate the thresholds for the significance levels 0.01 and 0.001, considering that both boundaries are on the right side of the normal distribution. The threshold for the 0.01 level of significance is 307.61, and for 0.001 it is 324.78. Therefore, the process for making the decision to accept or reject the null hypothesis according to a specific bit chain is as follows:
a. The statistic $\chi^2$ is calculated for the specific values, considering that $O_i$ and $E_i$ are the observed and expected numbers of the value $i$.
b. The probability beyond the value $\chi^2$ is computed, i.e., the area under the normal curve to the right of $\chi^2$ is calculated. If this probability is greater than or equal to 0.01, the null hypothesis is accepted; otherwise it is rejected. If the significance level is 0.001, the procedure is the same.
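The two thresholds can be reproduced from the normal approximation with $\mu = 255$ and $\sigma = \sqrt{510}$; the quantiles 2.33 and 3.09 used below are the usual standard normal table values for the 0.01 and 0.001 right tails:

```python
import math

mu = 255.0                      # degrees of freedom of the chi-square statistic
sigma = math.sqrt(2 * 255)      # standard deviation, sqrt(510) ~ 22.58

def threshold(z):
    """Right-tail rejection threshold under the normal approximation."""
    return mu + z * sigma

# z = 2.33 for alpha = 0.01 and z = 3.09 for alpha = 0.001.
assert abs(threshold(2.33) - 307.61) < 0.05
assert abs(threshold(3.09) - 324.78) < 0.05
```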
Of note, consider that an encrypted figure is rejected when some of the basic colors do not pass the proposed randomness test. Also, the size of the type I error used in this test is $\alpha = 0.01$. Then, taking into account that rejecting an encrypted image happens when at least one of the primary colors fails the randomness test, and considering that the probability of rejecting any one of them is 0.01, it follows that this situation can be described using the binomial model;36 that is, $X \sim B(3, 0.01)$, where $X$ is the number of primary colors that do not pass the hypothesis test. So, the rejection happens when $X \geq 1$. Therefore, the probability of acceptance is approximately $(0.99)^3 \approx 0.97$, and the probability of rejecting the encrypted image is about 0.03.
However, the probability of rejection can be reduced if the procedure is as follows: suppose that five keys are chosen in a pseudorandom way, independently of each other; then the setting is rejected for not fulfilling the randomness requirement, as mentioned above, only when none of the five encrypted figures fulfills the randomness test for all the primary colors. This type of problem can also be solved using the binomial model, taking into account that $p = 1 - (0.99)^3 \approx 0.03$. So $Y \sim B(5, p)$, where $Y$ is the number of encrypted images that are rejected because they do not fulfill the proposed randomness test criteria. Then, a setting rejection happens when $Y = 5$, i.e., none of the encrypted figures has randomness quality. In this case, the probability of not accepting any of them is $p^5 \approx 2.3 \times 10^{-8}$, which means that for every 1000 million times this latter process is carried out, about 23 are rejected. Other comments:
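The arithmetic of both binomial arguments can be checked directly:

```python
# Each basic color fails the test with probability alpha = 0.01, independently.
alpha = 0.01
p_accept_image = (1 - alpha) ** 3          # all three colors pass
p_reject_image = 1 - p_accept_image        # at least one color fails

assert abs(p_accept_image - 0.97) < 0.01   # approximately 0.97
assert abs(p_reject_image - 0.03) < 0.01   # approximately 0.03

# With five independent keys, the setting is rejected only if all five images fail.
p_reject_all_five = p_reject_image ** 5
assert round(p_reject_all_five * 1e9) == 23   # about 23 per 1000 million
```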
1. The probability of rejection can be decreased as much as desired by simply increasing the number of encrypted figures.
2. The encryption time is not incremented much if parallel programming is used.37
Now consider another question: assume that 100,000 different keys are taken and image (c) of Fig. 6 is encrypted with each one. Also assume that the figures encrypted with these keys pass the hypothesis test using the criterion given above. Then, in Sec. 7, the following results will be presented: the average entropy for each of the basic colors using these 100,000 encrypted figures. Regarding the correlation coefficient, the averages are also considered in the three directions, horizontal, vertical, and diagonal, and for each of the three primary colors, red, green, and blue. Moreover, the entropy value furthest from 8 for each of the basic colors is reported, and the largest absolute value of the correlation coefficient is also shown for the three directions and the three basic colors.
Criteria of Image Selection
In Sec. 1, it was mentioned that a criterion would be presented to choose a fifth figure to be encrypted. This criterion is based on a characteristic of the goodness-of-fit test, namely: if the tone distribution in each basic color were completely random, then $\chi^2$ would be small; in fact, this means that the color histogram follows a uniform distribution. However, if $\chi^2$ has a large value for a basic color, it means that the tones follow a defined order. So the authors propose choosing an image that has a $\chi^2$ as large as possible for each primary color. In this paper, a figure with very large values of $\chi^2$ for the red, green, and blue colors is proposed. This image is a simulation of Latin text of the kind usually used in graphic design typographic demonstrations or drafts.38 This image is shown in Fig. 3.
On the other hand, there is another reason for requiring a very large $\chi^2$: there are many images with relatively small values of this statistic for each basic color, say less than half a million; in these cases, the AES cryptosystem can be applied directly, i.e., it is not necessary to use variable permutations to encrypt the image, and the result passes the above tests and even the proposed one. For example, image (b) of Fig. 6 (Barbara) has equal $\chi^2$ quantities for the primary colors red, green, and blue, because it is a mono-colored image. In Fig. 4, Barbara and her encrypted image are presented without using variable permutations. By the way, Fig. 4(a) is ciphered considering the same gray level of each pixel in the three planes and is later encrypted in blocks of 128 bits.
For figures with a very large $\chi^2$ for each basic color, say more than 100 million, the variable permutations must be applied so that the encryption process is effective. Figure 5 shows in a simple way when the encryption process can be ineffective, since the $\chi^2$ values for the primary colors red, green, and blue are far above the rejection threshold. Remember, it was noted above that the rejection region threshold is at most 324.78, whereas the values mentioned are much higher, in fact, around 6 million.
Basically, this is the reason why an image with a $\chi^2$ as large as possible is proposed, in order to verify that the cryptosystem presented is efficient. This section is suitable to show the five figures to be encrypted: the first one is presented in Fig. 3, and the other four are shown in Fig. 6.
Damage in the Encrypted Images
In this section, encrypted figures with damage, whether accidental or deliberate, are treated. It is clear that damage to the encrypted images is an attack, because the message receiver cannot know what the message is; therefore, the receiver cannot make a decision or decisions that may be important. Time is a factor, too, because there are decisions that cannot wait. So, the decisions that concern us have the following characteristics: first, they are important for a state or corporation, and second, there is a very short time to make them, say, less time than is required to ask for the original message again. In this research, a way to solve such problems is presented, with some restrictions, of course. On the other hand, this investigation only analyzed this type of damage and did not take into account additive or multiplicative noise.
It starts by showing an original image, which is encrypted without applying a permutation over the whole figure beforehand. Later, the encrypted image is damaged, and at the end it is deciphered; see Fig. 7.
In Sec. 3, the way to generate a permutation for the whole image was presented, so it is possible to apply a permutation to an array of 250,000 or more positions, in this particular case, the number of pixels of the original figure. Using a permutation on the original image before it is encrypted has the objective of dispersing the information; thus, when the damaged encrypted figure is decrypted, the damage is dispersed and it is possible to perceive the original picture. Of course, this also depends on the size of the damage. This paper proposes that the damage size not be greater than 40% of the cipher image. It is easy to realize that this amount may be higher or lower depending on the degree of “sharpness” that is required in the decoded image. The Chi-square statistic for a basic color $c$ of an encrypted image with damage is denoted as $\chi^2_c(d)$, where $d$ is the size of the failure. In this sense, the Chi-square of the original image for the same color can be written as $\chi^2_c$. Then, the ratio $\chi^2_c(d)/\chi^2_c$ is the percentage of information that is known of color $c$ with respect to the same color of the original figure. It will be noted in Sec. 7 that a failure less than or equal to 40% gives a ratio of about 34% as a minimum; that holds for sufficiently large images. It is easy to see that this quantity depends on the size of the damage.
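The dispersion effect can be illustrated with a toy sketch, in which a pseudorandom permutation stands in for the one derived from $\pi k$ and a contiguous 40% block of positions simulates the damage:

```python
import random

n = 10_000                                 # toy "image" of n pixel positions
rng = random.Random(42)
perm = list(range(n))
rng.shuffle(perm)                          # stands in for the pi*k-derived permutation

# Damage a contiguous 40% block of the permuted (encrypted) arrangement.
damaged_permuted = set(range(int(0.4 * n)))

# After undoing the permutation on the decryption side, where does the damage land?
damaged_original = {perm[i] for i in damaged_permuted}

# The damage is still 40% of the pixels, but it is scattered over the whole image
# instead of forming one solid block.
assert len(damaged_original) == int(0.4 * n)
first_half_hits = sum(1 for p in damaged_original if p < n // 2)
assert 0.3 * len(damaged_original) < first_half_hits < 0.7 * len(damaged_original)
```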
With regard to the manner in which the failure of the encrypted figure is made, it is carried out by means of concentric rectangles; see Fig. 8. The 128-bit key for the AES cryptosystem was written in Sec. 5.1. Figure 8 also presents an encrypted image with 40% damage, after which the figure is decrypted. Image (b) of Fig. 6 is used to illustrate the point.
Randomness Results of the Encrypted Images Without Damage
In this section, the correlation coefficient, entropy, DFT, and the proposed test are applied to the five images corresponding to Figs. 3 and 6, and the results of this procedure are presented. The 128-bit key was written in Sec. 5.1; as noted earlier, the key is associated with a positive integer $k$. Then, if $k$ is multiplied by $\pi$, that is, the product $\pi k$, the result is also a transcendental number.
From the digits to the right of the decimal point, the number of bits needed to calculate all the constant sets used to encrypt the image, in addition to the permutation used for the whole image, is taken. Table 1 shows the results of the randomness test using the DFT for the encrypted images of Figs. 3 and 6. This encryption is performed in two steps: in the first, a permutation is applied over the whole original image; the second uses the AES cryptosystem with variable permutations.
The DFT test results applied to the encrypted images of Figs. 3 and 6 (√ accepted and χ rejected). Columns: test name, significance level, and P-value/decision for Fig. 3 and Fig. 6.
The results for the proposed test are presented in Table 2. Note that the analysis for each primary color is included; it is also important to mention that in all the cases, the null hypothesis is accepted. Regarding entropy, an analysis of randomness for the three basic colors of the images encrypted with the key $k$ is performed. Again, the images that are encrypted correspond to Figs. 3 and 6. Furthermore, it is considered that for a figure that has been “well encrypted,” the entropy must be very close to 8. The results are presented in Table 3.
The proposed test results applied to the encrypted images of Figs. 3 and 6 (√ accepted and χ rejected). Columns: test name, significance level, and P-value/decision for Fig. 3 and Fig. 6.
Entropy of the encrypted images using the $k$ key for Figs. 3 and 6. Columns: entropy for Fig. 3 and Fig. 6.
To calculate the correlation coefficient, a random sample of 3000 pixel pairs is taken in both the original and encrypted images. For each basic color, the correlation coefficient is computed in the horizontal direction, and the vertical and diagonal correlation coefficients for the same primary color are obtained in the same way.
The correlation coefficient results for the images of Figs. 3 and 6 are shown in Table 4, and the coefficients for the same images encrypted with the k key are shown in Table 5. The average entropy values and the value furthest from 8 are presented in Table 6, with 8 regarded as perfect randomness. Table 7 shows the average values of the correlation coefficient and the absolute value furthest from zero for the same coefficient, in the three directions and for each basic color. The correlation coefficient whose absolute value is furthest from zero represents the worst case.
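The coefficient in question is the standard Pearson correlation between the values of adjacent pixel pairs sampled in one direction (horizontal, vertical, or diagonal) from a single color channel; a minimal sketch:

```cpp
#include <cmath>
#include <cstddef>
#include <vector>

// Pearson correlation coefficient between paired samples x and y,
// e.g., the values of adjacent pixel pairs from one color channel.
// For a well-encrypted image the result should be close to zero.
double correlation(const std::vector<double>& x, const std::vector<double>& y) {
    const std::size_t n = x.size();
    double mx = 0.0, my = 0.0;
    for (std::size_t i = 0; i < n; ++i) { mx += x[i]; my += y[i]; }
    mx /= n; my /= n;
    double sxy = 0.0, sxx = 0.0, syy = 0.0;
    for (std::size_t i = 0; i < n; ++i) {
        sxy += (x[i] - mx) * (y[i] - my);
        sxx += (x[i] - mx) * (x[i] - mx);
        syy += (y[i] - my) * (y[i] - my);
    }
    return sxy / std::sqrt(sxx * syy);
}
```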
Correlation coefficients; horizontal, vertical, and diagonal of the three basic colors for images of Figs. 3 and 6.
|Color||Correlation coefficient||Figure 3||Figure 6|
Correlation coefficients for the horizontal, vertical, and diagonal directions of the three basic colors for the images of Figs. 3 and 6 encrypted using the k key.
|Color||Correlation coefficient||Figure 3||Figure 6|
Average and the furthest value from 8 for the entropy using image (c) of Fig. 6.
|Entropy||Average value||The furthest value from 8|
Average and the furthest value from 0 for the correlation coefficient using image (c) of Fig. 6.
|Color||Correlation coefficient||Average||The furthest value from 0|
Randomness Results for the Encrypted Images with Damage
It remains to analyze encrypted figures with damage, whether inflicted deliberately or not. In this sense, it is important to mention some aspects before proceeding. The first concerns the damage size: in this investigation, failures of 40% with respect to the whole encrypted image are handled. The second concerns how the damage is made: in this paper, concentric rectangles are used, as presented in Fig. 8.
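The exact geometry of Fig. 8 is not reproduced here; the following sketch only illustrates one plausible way to damage an image with concentric rectangles, zeroing alternating rectangular rings of a given band width working inward from the border. The band width and damaged fraction are assumptions for illustration.

```cpp
#include <algorithm>
#include <cstddef>
#include <cstdint>
#include <vector>

// Illustrative damage pattern: zero out alternating concentric rectangular
// rings of width 'band' (band >= 1), measured inward from the image border.
void damageConcentricRings(std::vector<std::uint8_t>& img,
                           std::size_t rows, std::size_t cols, std::size_t band) {
    for (std::size_t r = 0; r < rows; ++r)
        for (std::size_t c = 0; c < cols; ++c) {
            // Distance of pixel (r, c) from the nearest border.
            const std::size_t ring = std::min(std::min(r, c),
                                              std::min(rows - 1 - r, cols - 1 - c));
            if ((ring / band) % 2 == 0)  // damage every other band
                img[r * cols + c] = 0;
        }
}
```

On a 4x4 image with band width 1, the outer ring (12 pixels) is zeroed and the inner 2x2 block is kept.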
When an encrypted figure with damage is decrypted, such as that illustrated in Fig. 6, the decoded image with failure has a higher degree of disorder in its bits than the original image; that is, clause (c) of Fig. 8 has a higher randomness degree in its bits than clause (a). To measure the randomness degree, the ratio τ is used, whose value fulfills the inequality 0 ≤ τ ≤ 1. Thus, if the randomness of the pixels increases, then τ decreases; in fact, if the bit distribution were totally random, then τ = 0. So, the τ value for each basic color of the encrypted image with failure measures how much these colors are separated from the original image colors; in other words, how much information the decrypted image with damage has lost with respect to the original image.
Table 8 presents the results with 40% failure for the five images of Figs. 3 and 6; moreover, the τ value of each basic color is reported. The correlation graphs, or linear relationships, are shown in three directions (horizontal, vertical, and diagonal) for each primary color, for both the original image and the decoded image with damage. The latter is performed for clause (c) of Fig. 6. In Fig. 9, these linear relationships are illustrated for the original image and, in Fig. 10, for the deciphered image with 40% damage.
τ ratio for images in Figs. 3 and 6 with 40% damage.
|Color||Figure 3||Figure 6|
Now, it is convenient to show what happens if, as with the original image in Fig. 7, a permutation over the whole image is applied before it is encrypted. The image is then ciphered and later damaged as shown in Fig. 7; at the end, it is decrypted with the failure. The result is illustrated in Fig. 11.
This section carried out the analysis of the results, separated into two parts. The first part addressed the randomness of the images encrypted with the two-step procedure in question: first, a permutation was applied to the whole image; second, the permuted image was encrypted with the AES cryptosystem with variable permutations. The second part addressed the analysis of the results when the encrypted figures are damaged. In this vein, the discussion started with the randomness of the image encrypted with a particular proposed key, which passed all the tests suggested in this paper. In addition, the results for 100,000 keys that passed the proposed test were observed; picture (c) of Fig. 6 is used for this purpose.
Subsequently, for these keys, the average entropy for each basic color was calculated, and the value furthest from 8 for each primary color is also reported, i.e., the amount furthest from perfect randomness. The averages for the basic colors were presented in Table 6 and, as can be seen, these quantities are very close to 8. Likewise, the values furthest from 8 for the primary colors are still very close to 8. This means that the encrypted figures have a random bit distribution for each of the basic colors.
In regard to the analysis of the correlation coefficient between adjacent pixels in the horizontal, vertical, and diagonal directions, it is expected that in a well-encrypted image, adjacent pixels have a correlation coefficient close to zero.4 For the 100,000 encryptions of clause (c) of Fig. 6, the average values of the correlation coefficient in the three directions and for each of the primary colors were reported in Table 7. The amounts found were close to zero.
The correlation coefficients whose absolute values were largest were also sought among the 100,000 observations reported in Table 7, for the three directions and the three basic colors. These amounts are the furthest from zero, which means that even in the worst cases these images have correlation coefficients near zero.
With regard to figures decrypted with damage, the parameter τ was used, which gives an idea of the percentage of information that remains of the original image. Taking into account that a figure deciphered with damage has more noise than the original, the distribution of the bits of each primary color is more random than in the original image; therefore, τ < 1. Then, if τ is close to zero, all information of the original image is lost, whereas a value close to 1 means the opposite. The size of the damage is 40% in this investigation, but it may be higher or lower depending on the “sharpness” desired in the figure decrypted with failure. The τ value is around 35% when the damage is 40%.
This paper has built an algorithm that defines a bijective function between the set of nonnegative integers and the set of permutations. Using the transcendental number π and this algorithm, it is possible to construct a pseudorandom permutation over 250,000 array positions or more in less than 10 ms. The permutation applied to the whole original image before the encryption stage is intended to disperse the pixels; thus, when the encrypted file is damaged and later decrypted, the result does not present the failure in a localized way, as shown in Fig. 7. This is important because sometimes decisions have to be made quickly, that is, in real time, with no opportunity to wait for an answer.39
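The paper's own algorithm is not reproduced here, but one standard way to realize such a bijection between nonnegative integers and permutations is the factorial number system (Lehmer code): the integer is written in factorial base and each digit selects one of the remaining positions. A small sketch, valid for n small enough that the factorials fit in a 64-bit integer:

```cpp
#include <cstddef>
#include <cstdint>
#include <numeric>
#include <vector>

// Bijection between {0, 1, ..., n!-1} and the permutations of n positions,
// via the factorial number system (Lehmer code).
std::vector<std::size_t> unrankPermutation(std::uint64_t k, std::size_t n) {
    std::vector<std::size_t> pool(n);
    std::iota(pool.begin(), pool.end(), 0);   // positions not yet placed
    std::vector<std::uint64_t> fact(n, 1);    // fact[i] = i!
    for (std::size_t i = 1; i < n; ++i) fact[i] = fact[i - 1] * i;
    std::vector<std::size_t> perm;
    perm.reserve(n);
    for (std::size_t i = n; i-- > 0; ) {
        const std::size_t idx = static_cast<std::size_t>(k / fact[i]);
        k %= fact[i];
        perm.push_back(pool[idx]);            // pick the idx-th remaining position
        pool.erase(pool.begin() + idx);
    }
    return perm;
}
```

Distinct integers in {0, ..., n!-1} yield distinct permutations, so a key-derived integer determines the permutation uniquely.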
The encrypted images pass all the randomness tests applied for a particular proposed key, as reported in Sec. 7.1. These randomness tests are: the DFT, entropy, the correlation coefficient in the horizontal, vertical, and diagonal directions, and the proposed test. Indeed, for entropy, the results are better than those of other studies.4
One hundred thousand keys were chosen and applied to clause (c) of Fig. 6, the only requirement being that they pass the proposed test. The average entropy for each basic color for these keys was shown, as were the correlation coefficients in three directions for the three primary colors. The results confirmed the randomness of the encrypted images.
Section 7.2 carried out the analysis of encrypted images with damage, noting that the size of the failure is 40% of the encrypted image; see Fig. 8. This analysis uses the τ ratio for each of the primary colors. In fact, the τ value measures the amount of information lost with respect to the original image for each basic color.
Finally, it is reported that the encryption times for the images of Fig. 6 are about 85 ms. The software was developed in the C++ language and run on an Intel Core i7 processor.
The authors would like to thank the Instituto Politécnico Nacional (Secretaría Académica, COFAA, SIP, CIDETEC, ESCOM, and ESFM), CONACyT, and SNI for their financial support of this work.
J. Li and L. Gan, “Study on chaotic cryptosystem for digital image encryption,” in Third Int. Conf. on Measuring Technology and Mechatronics Automation, pp. 426–430, IEEE, Shanghai, China (2011).
L. Xuemei, X. Tong, and L. Dai, “A novel scheme reality preserving image encryption,” in Third Int. Conf. on Measuring Technology and Mechatronics Automation, pp. 218–221, IEEE, Shanghai, China (2011).
C. Fu et al., “A chaos-based digital image encryption scheme with an improved diffusion strategy,” Opt. Express 20(3), 2363–2378 (2012). http://dx.doi.org/10.1364/OE.20.002363
H. Zhu, C. Zhao, and X. Zhang, “A novel image encryption-compression scheme using hyper-chaos and Chinese remainder theorem,” Signal Process. Image Commun. 28, 670–680 (2013). http://dx.doi.org/10.1016/j.image.2013.02.004
FIPS PUB 197, Federal Information Processing Standards Publications, NIST (2001).
D. R. Stinson, Cryptography: Theory and Practice, Chapman & Hall/CRC Press, New York (2005).
J. Daemen and V. Rijmen, “AES proposal: Rijndael, AES algorithm submission,” in FIPS 197, NIST (1999).
C. Carlet, “On highly nonlinear S-boxes and their inability to thwart DPA attacks,” in 6th Int. Conf. on Cryptology, pp. 49–62, Springer-Verlag (2005).
FIPS PUB 46-3, Federal Information Processing Standards Publication, NIST (1999).
S. Keshari and S. Gopal-Modani, “Color image encryption scheme based on 4-weighted fractional Fourier transform,” J. Electron. Imaging 21, 033018 (2012). http://dx.doi.org/10.1117/1.JEI.21.3.033018
C. Wen, C. Xudong, and C. J. R. Sheppard, “Optical color-image encryption and synthesis using coherent diffractive imaging in the Fresnel domain,” Opt. Express 20(4), 3853–3865 (2012). http://dx.doi.org/10.1364/OE.20.003490
P. Refregier and B. Javidi, “Optical image encryption based on input plane and Fourier plane random encoding,” Opt. Lett. 20, 767–769 (1995). http://dx.doi.org/10.1364/OL.20.000767
A. Rukhin et al., “A statistical test suite for random and pseudorandom number generators for cryptographic applications,” NIST Special Publication 800-22, NIST (2010).
R. Walpole and R. Myers, Probability and Statistics for Engineers and Scientists, Prentice Hall, México (2007).
Nom-151, Norma Oficial Mexicana NOM-151-SCFI-2002, Prácticas comerciales—Requisitos que deben observarse para la conservación de mensajes de datos, SCFI (2002).
T. M. Apostol, Análisis Matemático, Editorial Reverté, Barcelona (1994).
M. Spivak, Calculus: Cálculo Infinitesimal, Editorial Reverté, Barcelona, España (1993).
FIPS PUB 180-3, Federal Information Processing Standards Publication, NIST (2008).
A. J. Yee and S. Kondo, “5 Trillion Digits of Pi,” 17 October 2011, http://www.numberworld.org/misc_runs/pi-5t/details.html (accessed 18 December 2014).
M. Abramowitz and I. Stegun, Handbook of Mathematical Functions, Applied Mathematics Series, Vol. 55, National Bureau of Standards, New York (1964).
J. Gallian, Contemporary Abstract Algebra, 7th ed., Brooks/Cole, California (2011).
A. Gómez, Enciclopedia de la Seguridad Informática, Alfaomega, México (2007).
T. ElGamal, “A public key cryptosystem and a signature scheme based on discrete logarithms,” IEEE Trans. Inf. Theory 31, 469–472 (1985). http://dx.doi.org/10.1109/TIT.1985.1057074
R. L. Rivest, A. Shamir, and L. Adleman, “A method for obtaining digital signatures and public key cryptosystems,” Commun. ACM 21, 120–126 (1978). http://dx.doi.org/10.1145/359340.359342
R. Flores-Carapia, V. M. Silva-García, and C. Rentería-Márquez, “Monte Carlo scheme: cryptography application,” Appl. Math. Sci. 6(136), 6761–6768 (2012).
C. Grinstead and L. Snell, Introduction to Probability, American Mathematical Society (1997).
J. Devore, Probabilidad y Estadística: para ingeniería y ciencias, 6th ed., International Thompson, México (2005).
R. Azarderakhsh and A. Reyhani-Masoleh, “Parallel and high-speed computations of elliptic curve cryptography using hybrid-double multipliers,” IEEE Trans. Parallel Distrib. Syst. PP(99), 1–11 (2014). http://dx.doi.org/10.1109/TPDS.2014.2323062
H. Jost, El detalle en la tipografía, Comgrafic, Barcelona (2008).
Victor Manuel Silva-García is a research fellow at the Innovation Center in Computer and Technologic Development (CIDETEC). He has a PhD degree in computer science. He belongs to the Researchers National System and is a member of the computer network at the National Polytechnic Institute. He was a director of CIDETEC from 2005 to 2012.
Rolando Flores-Carapia received his ScD degree from the National Polytechnic Institute in 2011. He is a professor in the CIDETEC-IPN. His research interests include image processing and cryptography.
Carlos Rentería-Márquez is a research fellow at the Faculty of Physics and Mathematics (ESFM) of the National Polytechnic Institute. He has a PhD degree in mathematics, belongs to the Researchers National System at level III, and is a member of the Mexican Mathematical Society. His research and teaching topics are modern algebra, commutative algebra, and coding theory.
Benjamín Luna-Benoso received his PhD degree in computer science in 2011 from the Computing Research Center, México. He is a professor in the School of Computing (ESCOM). He is a member of the computer network at the National Polytechnic Institute. His research interests include image processing, pattern recognition, and cellular automata.
Cesar Antonio Jiménez-Vázquez received his ScM degree in computer technology from CIDETEC-IPN. He is a project manager and technical leader in security systems at the Indra company. His research interests include efficient arithmetic for cryptographic algorithms, side-channel security, image encryption using ECC, and PKI applications.
Marlon David González-Ramírez received his ScM degree in computer technology from the Innovation Center in Computer and Technologic Development and is a computer network specialist. He is a project manager and Java developer for institutional projects, as well as a consultant, teacher, researcher, and software tester. He has been head of the Computer Network Research group since 2010.