Nt1310 Unit 1 Map Equations


2) MAP Estimator:\\
MLE can yield parameter estimates of exactly zero when the training data contain no samples satisfying the condition in the numerator. To avoid this, it is common to use a “smoothed” estimate that effectively adds a number of “hallucinated” samples, assumed to be spread evenly over the possible values of $X^j$; equivalently, this is a MAP estimate based on a Dirichlet prior with equal weight on each parameter. In particular, using Laplace smoothing we obtain:
\begin{equation}\label{parameter1MAP}
\hat{\pi}_c=\frac{\sum_{j=1}^S\mathbbm{1}(Y^j=c)+1}{S+C}
\end{equation}
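The smoothed prior above can be sketched in a few lines of Python; this is a minimal illustration of the add-one counting in Eq.~\eqref{parameter1MAP}, and the function name is my own:

```python
from collections import Counter

def laplace_prior(labels, num_classes):
    """MAP estimate of the class priors with Laplace (add-one) smoothing:
    (count of class c + 1) / (S + C), where S is the sample count and
    C the number of classes. Classes absent from the data still get a
    nonzero prior."""
    counts = Counter(labels)
    S = len(labels)
    return [(counts.get(c, 0) + 1) / (S + num_classes)
            for c in range(num_classes)]
```

For example, `laplace_prior([0, 0, 1], 2)` gives `[0.6, 0.4]`, and `laplace_prior([0, 0, 0], 2)` gives `[0.8, 0.2]` rather than assigning probability zero to the unseen class.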
\begin{equation}\label{parameter2MAP}
\hat{\theta}_{x|c}=\frac{\sum_{j=1}^S\mathbbm{1}(X^j=x,\,Y^j=c)+1}{\sum_{j=1}^S\mathbbm{1}(Y^j=c)+K}
\end{equation}
where $K$ is the number of possible values of the feature.

\item Linear Regression \cite{c9}\cite{c10}\\
Linear regression models the relationship between a scalar dependent variable $Y$ and explanatory (independent) variables denoted $X$. A function $f(X,W)=Y$ (shown below) can be learned and used to predict future values:
\begin{equation}\label{eq:linear regression}
Y=f(X,W)=W_0+W_1X+W_2X^2+\dots+W_MX^M=\underline{W}^T\underline{X}
\end{equation}
where $M$ is the degree of the polynomial, $W_i$ are the polynomial coefficients, and $\underline{X}=(1,X,X^2,\dots,X^M)^T$ is the vector of powers of $X$.
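The weights $\underline{W}$ can be fit by least squares on a design matrix whose columns are the powers of $X$; a minimal NumPy sketch (function names are my own, not from the cited sources):

```python
import numpy as np

def fit_polynomial(x, y, M):
    # Design matrix with columns 1, x, x^2, ..., x^M (one row per sample).
    Phi = np.vander(x, M + 1, increasing=True)
    # Least-squares solution W minimizing ||Phi @ W - y||^2.
    W, *_ = np.linalg.lstsq(Phi, y, rcond=None)
    return W

def predict(W, x):
    # Evaluate W_0 + W_1 x + ... + W_M x^M at the given points.
    return np.vander(x, len(W), increasing=True) @ W
```

For data generated exactly by a degree-$M$ polynomial, the recovered $W$ matches the true coefficients; for noisy data it gives the least-squares fit.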
