
Nt1310 Unit 1 Data Analysis


Based on Chapter 2, the Neural Network (NN) method is chosen for voice-based command recognition because it can handle larger databases. Using a neural network for pattern recognition is quite common, and backpropagation is a beneficial way to train it. Backpropagation is a form of supervised learning that starts by feeding the training data through the network: the input is propagated forward to generate the output activations, the error is then propagated backwards through the network, generating a delta value for every hidden and output neuron, and the weights of the network are updated from the calculated delta values, which improves the speed and quality of the learning process.
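To make this training cycle concrete before the formal derivation that follows, here is a minimal NumPy sketch of backpropagation for a single hidden layer. The layer sizes, sigmoid transfer function, learning rate and the single input/target pair are illustrative assumptions and are not taken from the source.

```python
import numpy as np

# Minimal sketch of the cycle described above: forward-propagate an input,
# propagate delta values backwards, and update the weights. One hidden layer
# with sigmoid activations; all sizes and data are illustrative placeholders.

def sigmoid(n):
    return 1.0 / (1.0 + np.exp(-n))

rng = np.random.default_rng(0)
W1, b1 = rng.standard_normal((4, 3)), np.zeros(4)   # hidden layer: 4 neurons, 3 inputs
W2, b2 = rng.standard_normal((2, 4)), np.zeros(2)   # output layer: 2 neurons
alpha = 0.5                                          # learning rate (assumed)

p = np.array([0.2, 0.7, 0.1])   # example input pattern
t = np.array([1.0, 0.0])        # example target

for epoch in range(1000):
    # forward pass: input -> hidden -> output activations
    a1 = sigmoid(W1 @ p + b1)
    a2 = sigmoid(W2 @ a1 + b2)

    # backward pass: delta for the output layer, then for the hidden layer
    e = t - a2
    delta2 = -a2 * (1 - a2) * e                  # output-layer delta
    delta1 = a1 * (1 - a1) * (W2.T @ delta2)     # hidden-layer delta

    # update weights and biases from the calculated deltas (steepest descent)
    W2 -= alpha * np.outer(delta2, a1); b2 -= alpha * delta2
    W1 -= alpha * np.outer(delta1, p);  b1 -= alpha * delta1

print(np.abs(t - a2))   # the error shrinks as training proceeds
```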

a^{k+1}(i) = f^{k+1}(n^{k+1}(i))        Equation 3.2

For an M-layer network, the system equations in matrix form are shown in equation 3.3 and equation 3.4.

a^0 = p        Equation 3.3

a^{k+1} = f^{k+1}(W^{k+1} a^k + b^{k+1}),   k = 0, 1, ..., M-1        Equation 3.4

The task of the network is to learn associations between a specified set of input-output pairs {(p_1, t_1), (p_2, t_2), ..., (p_Q, t_Q)}. The performance index for the network is given by equation 3.5.

V = \frac{1}{2} \sum_{q=1}^{Q} (t_q - a_q^M)^T (t_q - a_q^M) = \frac{1}{2} \sum_{q=1}^{Q} e_q^T e_q        Equation 3.5

where a_q^M is the output of the network when the qth input p_q is presented, and t_q - a_q^M = e_q is the error for the qth input. For the standard backpropagation algorithm an approximate steepest descent rule is used, and the performance index is approximated by equation 3.6.
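As an illustration of equations 3.3, 3.4 and 3.5, the following sketch propagates an input forward through an M-layer network stored as lists of weight matrices and bias vectors, and accumulates the sum-squared-error performance index over the training pairs. The sigmoid transfer function, layer sizes and example data are assumptions made only for this demonstration.

```python
import numpy as np

def f(n):
    """Transfer function f^{k+1}; a sigmoid is assumed here for illustration."""
    return 1.0 / (1.0 + np.exp(-n))

def forward(W, b, p):
    """a^0 = p;  a^{k+1} = f(W^{k+1} a^k + b^{k+1}), k = 0..M-1  (equations 3.3 and 3.4)."""
    a = p
    for Wk, bk in zip(W, b):
        a = f(Wk @ a + bk)
    return a   # a^M, the network output

def performance_index(W, b, pairs):
    """V = 1/2 * sum_q (t_q - a_q^M)^T (t_q - a_q^M)   (equation 3.5)."""
    V = 0.0
    for p_q, t_q in pairs:
        e_q = t_q - forward(W, b, p_q)
        V += 0.5 * e_q @ e_q
    return V

# Example: a 3-2-1 network evaluated on two illustrative input/target pairs.
rng = np.random.default_rng(0)
W = [rng.standard_normal((2, 3)), rng.standard_normal((1, 2))]
b = [np.zeros(2), np.zeros(1)]
pairs = [(np.array([0.1, 0.5, 0.9]), np.array([1.0])),
         (np.array([0.8, 0.2, 0.4]), np.array([0.0]))]
print(performance_index(W, b, pairs))
```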

\delta^k(i) \equiv \frac{\partial \hat{V}}{\partial n^k(i)}        Equation 3.9

Now, using equation 3.1, equation 3.6 and equation 3.9, it can be shown that the derivatives of the approximate performance index take the forms in equation 3.10 and equation 3.11.

\frac{\partial \hat{V}}{\partial w^k(i,j)} = \frac{\partial \hat{V}}{\partial n^k(i)} \frac{\partial n^k(i)}{\partial w^k(i,j)} = \delta^k(i)\, a^{k-1}(j)        Equation 3.10

\frac{\partial \hat{V}}{\partial b^k(i)} = \frac{\partial \hat{V}}{\partial n^k(i)} \frac{\partial n^k(i)}{\partial b^k(i)} = \delta^k(i)        Equation 3.11

It can also be shown that the sensitivities satisfy the recurrence relation in equation 3.12,

\delta^k = \dot{F}^k(n^k)\, (W^{k+1})^T \delta^{k+1}        Equation 3.12

where \dot{F}^k(n^k) and \dot{f}^k(n) are defined in equation 3.13 and equation 3.14.

\dot{F}^k(n^k) = \mathrm{diag}\big[\dot{f}^k(n^k(1)),\ \dot{f}^k(n^k(2)),\ \dots\big]        Equation 3.13

\dot{f}^k(n) = \frac{d f^k(n)}{dn}        Equation 3.14

This recurrence relation is initialized at the final layer, as shown in equation 3.15.

\delta^M = -\dot{F}^M(n^M)\,(t_q - a_q^M)        Equation 3.15

The overall learning algorithm now proceeds as follows: first, propagate the input forward using equation 3.3 and equation 3.4; next, propagate the sensitivities back using equation 3.15 and equation 3.12; finally, update the weights and offsets using equation 3.7, equation 3.8, equation 3.10 and equation 3.11, as sketched below (Murphy, …).
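The complete procedure can be sketched as a single training step that combines the forward pass (equations 3.3 and 3.4), the backward sensitivity recursion (equations 3.15 and 3.12) and the steepest-descent weight and bias updates built from equations 3.10 and 3.11 with a learning rate alpha. The sigmoid transfer function, the learning-rate value and the helper names below are assumptions made only for illustration; this is a sketch of the standard algorithm, not the source's own code.

```python
import numpy as np

def f(n):
    return 1.0 / (1.0 + np.exp(-n))           # assumed sigmoid transfer function

def f_prime(n):
    s = f(n)
    return s * (1.0 - s)                      # f'(n) = df(n)/dn        (equation 3.14)

def backprop_step(W, b, p_q, t_q, alpha=0.1):
    """One approximate steepest-descent step for a single input/target pair."""
    # forward propagation, keeping the net inputs n^k and activations a^k
    a = [p_q]                                 # a^0 = p                 (equation 3.3)
    n = []
    for Wk, bk in zip(W, b):
        n.append(Wk @ a[-1] + bk)             # n^{k+1} = W^{k+1} a^k + b^{k+1}
        a.append(f(n[-1]))                    # a^{k+1} = f(n^{k+1})    (equation 3.4)

    M = len(W)
    # final-layer sensitivity: delta^M = -F'(n^M)(t_q - a_q^M)          (equation 3.15)
    delta = -f_prime(n[-1]) * (t_q - a[-1])
    for k in reversed(range(M)):              # from the last layer back to the first
        grad_W = np.outer(delta, a[k])        # layer delta times previous activations (equation 3.10)
        grad_b = delta                        # layer delta                            (equation 3.11)
        if k > 0:
            # back-propagate the sensitivity to the layer below         (equation 3.12)
            delta = f_prime(n[k - 1]) * (W[k].T @ delta)
        W[k] -= alpha * grad_W                # steepest-descent updates (equations 3.7 and 3.8)
        b[k] -= alpha * grad_b

# Example usage: a 3-2-1 network trained on one illustrative pair.
rng = np.random.default_rng(0)
W = [rng.standard_normal((2, 3)), rng.standard_normal((1, 2))]
b = [np.zeros(2), np.zeros(1)]
for _ in range(100):
    backprop_step(W, b, np.array([0.1, 0.5, 0.9]), np.array([1.0]))
```

Repeating backprop_step over all Q training pairs in turn implements the approximate, pattern-by-pattern steepest descent rule referred to above; the exact rule would instead accumulate the gradients over all Q pairs before updating.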
