# In What Predictive Modeling Situations Would the AIC Statistic Be the Most Appropriate Choice, and Why?

Akaike (1973) adopted the Kullback-Leibler definition of information, I(f;g), as a natural measure of discrepancy, or asymmetrical distance, between a "true" model, f(y), and a proposed model, g(y|β), where β is a vector of parameters. Based on large-sample theory, Akaike derived an estimator of I(f;g) with the general form

AIC_m = -2 ln(L_m) + 2 k_m

where L_m is the maximized sample likelihood for the mth of M alternative models and k_m is the number of independent parameters estimated for the mth model. The penalty term, 2 k_m, may be viewed as a charge for over-parameterization.

A min(AIC) strategy is used to select among two or more competing models. In a general sense, the model for which AIC_m is smallest represents the "best" approximation to the true model; that is, it is the model with the smallest expected loss of information when maximum-likelihood estimates replace the true parameter values in the model. In practice, the model satisfying the min(AIC) criterion may or may not be (and probably is not) the "true" model, since there is no way of knowing whether the true model is included among those being compared. Thus, for example, in comparing four hierarchic linear regression models, AIC is computed for each model and the min(AIC) criterion is applied to select the single "best" model.
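As a minimal sketch of the procedure just described (not from the source; the data, model names, and helper functions are hypothetical), the following computes AIC_m = -2 ln(L_m) + 2 k_m for a set of hierarchic polynomial regression models fit by least squares, assuming Gaussian errors, and applies the min(AIC) rule:

```python
import math
import numpy as np

def gaussian_log_likelihood(y, y_hat):
    """Maximized Gaussian log-likelihood, with sigma^2 set to its MLE, RSS/n."""
    n = len(y)
    rss = float(np.sum((y - y_hat) ** 2))
    sigma2 = rss / n
    return -0.5 * n * (math.log(2 * math.pi * sigma2) + 1)

def aic(y, y_hat, k):
    """AIC_m = -2 ln(L_m) + 2 k_m, where k_m counts all independently
    estimated parameters (regression coefficients plus the error variance)."""
    return -2.0 * gaussian_log_likelihood(y, y_hat) + 2.0 * k

# Hypothetical data: a linear trend plus noise.
rng = np.random.default_rng(0)
x = np.linspace(0.0, 1.0, 50)
y = 1.0 + 2.0 * x + rng.normal(scale=0.3, size=x.size)

# Four hierarchic (nested) polynomial models of increasing degree.
scores = {}
for degree in range(1, 5):
    coeffs = np.polyfit(x, y, degree)
    y_hat = np.polyval(coeffs, x)
    k = degree + 2  # degree+1 polynomial coefficients, plus the error variance
    scores[f"degree-{degree}"] = aic(y, y_hat, k)

best = min(scores, key=scores.get)  # the min(AIC) "best" model
print(best, scores[best])
```

Note that each higher-degree model necessarily fits at least as well (larger L_m), so the 2 k_m penalty is what keeps min(AIC) from always selecting the most heavily parameterized model.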
The choices for the selection criterion have several model fit statistics that are useful for model