y_{j+1} = \left( y_j + k \sin\frac{2\pi x_{j+1}}{N} \right) \bmod N

\begin{bmatrix} x_{j+1} \\ y_{j+1} \end{bmatrix} = \begin{bmatrix} 1 & u \\ v & uv+1 \end{bmatrix} \begin{bmatrix} x_j \\ y_j \end{bmatrix} \pmod{N} \quad (5)

\begin{cases} x_{j+1} = \dfrac{N}{k_i}(x_j - N_i) + y_j \bmod \dfrac{N}{k_i} \\ y_{j+1} = \dfrac{k_i}{N}\left( y_j - y_j \bmod \dfrac{N}{k_i} \right) + N_i \end{cases} \quad \text{with} \quad \begin{cases} k_1 + k_2 + k_3 + \dots = N \\ N_i = k_1 + k_2 + \dots + k_{i-1} \\ N_i \le x_j \le N_i + k_i \\ 0 \le y_j \le N \end{cases} \quad (6)

Key Generator: Many chaotic key generators exist, but the one used in this research is based on the 1-D cubic map [16]. It takes a 64-bit random key Key = [Key1, Key2, Key3, Key4], computes the initial condition of the cubic map from its 16-bit components Key_i as y(0) = \left( \sum_i \mathrm{Key}_i / 2^{16} \right) \bmod 1 [16], and then returns values of the map by iteration. The state of the cubic map evolves as [17]:

y(n+1) = \lambda\, y(n)\left( 1 - y(n)^2 \right) \quad (7)

where λ is the control parameter, typically set to 2.59, and the state satisfies 0 ≤ y(n) ≤ 1. To generate initial conditions for the neural network, equation (7) is first iterated 50 times and the resulting values are discarded; it is then iterated further to initialize w0, w1, w2, w3, A0, B0, C0, D0, K0, K1, K2, K3, n0, n1, n2, n3. For equation (7) to provide both randomness and reproducibility of the same initial conditions on every run, even across different computing machines, the key Key, the control parameter λ = 2.59, and the 50 discarded warm-up iterations must all be applied with the same numerical precision.
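The key-generation procedure described above can be sketched in Python. This is a minimal sketch, not the paper's code: the function name `cubic_map_keygen` and the choice to emit exactly 16 values (one for each of w0..w3, A0..D0, K0..K3, n0..n3) are assumptions.

```python
def cubic_map_keygen(key16, lam=2.59, warmup=50, count=16):
    """Sketch of the cubic-map key generator.

    key16  -- four 16-bit key components [Key1, Key2, Key3, Key4]
    lam    -- control parameter λ (the paper uses 2.59)
    warmup -- transient iterations whose values are discarded (paper: 50)
    count  -- how many chaotic values to emit afterwards (assumed 16,
              one per network parameter w0..n3)
    """
    assert len(key16) == 4 and all(0 <= k < 2**16 for k in key16)
    # y(0) = (sum of Key_i / 2^16) mod 1
    y = (sum(key16) / 2**16) % 1.0
    for _ in range(warmup):
        y = lam * y * (1.0 - y * y)   # y(n+1) = λ y(n)(1 − y(n)^2), discarded
    out = []
    for _ in range(count):
        y = lam * y * (1.0 - y * y)   # kept values seed the initial conditions
        out.append(y)
    return out
```

For λ = 2.59 the iterate stays in [0, 1] (the maximum of y(1 − y²) on [0, 1] is 2/(3√3) ≈ 0.385, and 2.59 × 0.385 < 1), and the same key always reproduces the same sequence, which is the reproducibility requirement stated above.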

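The generalized cat map of equation (5) can likewise be sketched. The function name and the parameter values in the usage line are illustrative; the point is that the transform matrix has determinant 1, so each step permutes the N×N pixel grid.

```python
def cat_map_step(x, y, u, v, N):
    """One iteration of the generalized cat map, equation (5):
    [x'; y'] = [[1, u], [v, u*v + 1]] [x; y]  (mod N).
    det = 1*(u*v + 1) - u*v = 1, so the map is a bijection mod N."""
    xn = (x + u * y) % N
    yn = (v * x + (u * v + 1) * y) % N
    return xn, yn

# Illustrative check on an 8x8 grid with assumed parameters u = 3, v = 5:
N, u, v = 8, 3, 5
pts = {cat_map_step(x, y, u, v, N) for x in range(N) for y in range(N)}
assert len(pts) == N * N   # every pixel lands on a distinct position
```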
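Equation (6), the discretized baker map, can be sketched under the usual additional assumption that each k_i divides N; the function name and the strip widths in the test are illustrative, not values from the paper.

```python
def baker_map_step(x, y, ks, N):
    """One step of the discretized baker map, equation (6).
    ks = [k_1, ..., k_t] with k_1 + ... + k_t = N and each k_i dividing N.
    The strip i containing x satisfies N_i <= x < N_i + k_i."""
    assert sum(ks) == N and all(N % k == 0 for k in ks)
    Ni = 0                                 # N_i = k_1 + ... + k_{i-1}
    for k in ks:
        if Ni <= x < Ni + k:
            q = N // k                     # q = N / k_i
            xn = q * (x - Ni) + y % q      # x' = (N/k_i)(x - N_i) + y mod (N/k_i)
            yn = (y - y % q) // q + Ni     # y' = (k_i/N)(y - y mod (N/k_i)) + N_i
            return xn, yn
        Ni += k
    raise ValueError("x out of range")
```

Each vertical strip of width k_i is stretched horizontally by N/k_i, compressed vertically by the same factor, and stacked, so the map permutes the N×N grid exactly as the continuous baker map folds the unit square.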