# Exercise: Gradient Descent Rules for BACKPROPAGATION and TANGENTPROP


This exercise asks you to derive a gradient descent rule analogous to that used by TANGENTPROP. Consider the instance space X consisting of the real numbers, and consider the hypothesis space H consisting of quadratic functions of x. That is, each hypothesis has the form

$$h(x) = w_0 + w_1 x + w_2 x^2$$

where $w_0$, $w_1$, and $w_2$ are real-valued weights.

(a) Derive a gradient descent rule that minimizes the same criterion as BACKPROPAGATION; that is, the sum of squared errors between the hypothesis and target values of the training data.
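For part (a), the criterion is $E = \sum_d (t_d - h(x_d))^2$, so the gradient with respect to each weight is $\partial E / \partial w_i = -2 \sum_d (t_d - h(x_d))\, \partial h(x_d) / \partial w_i$, with $\partial h/\partial w_0 = 1$, $\partial h/\partial w_1 = x$, and $\partial h/\partial w_2 = x^2$. A minimal numerical sketch of the resulting update rule (function names, the learning rate `eta`, and the batch-update form are illustrative choices, not part of the exercise statement):

```python
import numpy as np

def h(w, x):
    # Quadratic hypothesis h(x) = w0 + w1*x + w2*x^2
    return w[0] + w[1] * x + w[2] * x ** 2

def backprop_style_update(w, xs, ts, eta=0.01):
    """One batch gradient descent step minimizing E = sum_d (t_d - h(x_d))^2.

    dE/dw_i = -2 * sum_d (t_d - h(x_d)) * dh/dw_i, where
    dh/dw0 = 1, dh/dw1 = x, dh/dw2 = x^2.
    """
    grad = np.zeros(3)
    for x, t in zip(xs, ts):
        err = t - h(w, x)
        grad += -2.0 * err * np.array([1.0, x, x ** 2])
    # Move weights opposite the gradient of the squared error
    return w - eta * grad
```

Repeating this update drives the weights toward the least-squares quadratic fit of the training data.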

(b) Derive a second gradient descent rule that minimizes the same criterion as TANGENTPROP. Consider only the single transformation s(a, x) = x + a.
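For part (b), the TANGENTPROP-style criterion adds a penalty on the mismatch between the directional derivatives of the target f and the hypothesis h along the transformation. With s(a, x) = x + a, $\partial h(s(a,x))/\partial a \big|_{a=0} = h'(x) = w_1 + 2 w_2 x$, so the criterion becomes $E = \sum_d \left[(t_d - h(x_d))^2 + \mu \left(f'(x_d) - h'(x_d)\right)^2\right]$, where $f'(x_d)$ is the known derivative of the target function at $x_d$. The derivative term contributes $-2\mu (f'(x_d) - h'(x_d))$ times $[0, 1, 2x_d]$ to the gradient. A sketch of the resulting rule under the same illustrative conventions as above (`mu`, `eta`, and the names are assumptions):

```python
import numpy as np

def h(w, x):
    # Quadratic hypothesis h(x) = w0 + w1*x + w2*x^2
    return w[0] + w[1] * x + w[2] * x ** 2

def h_prime(w, x):
    # dh/dx = w1 + 2*w2*x: the derivative of h along the
    # transformation s(a, x) = x + a, evaluated at a = 0
    return w[1] + 2.0 * w[2] * x

def tangentprop_update(w, xs, ts, tps, mu=0.5, eta=0.01):
    """One gradient step on the TANGENTPROP-style criterion
    E = sum_d [(t_d - h(x_d))^2 + mu * (t'_d - h'(x_d))^2],
    where t'_d is the known derivative of the target at x_d.
    """
    grad = np.zeros(3)
    for x, t, tp in zip(xs, ts, tps):
        # Squared-error term, as in part (a)
        grad += -2.0 * (t - h(w, x)) * np.array([1.0, x, x ** 2])
        # Tangent term: gradient of h'(x) w.r.t. (w0, w1, w2) is [0, 1, 2x]
        grad += -2.0 * mu * (tp - h_prime(w, x)) * np.array([0.0, 1.0, 2.0 * x])
    return w - eta * grad
```

Setting mu = 0 recovers the rule from part (a); increasing mu weights agreement with the known tangent more heavily.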
