
A Description of Bayesian Variable Selection Methods


2.5 Full Bayesian set-up

Using the residual information from the PDE as a prior for basis selection, a Bayesian variable selection method can be devised. Posterior estimates are computed sequentially at each time point from the estimates at the earlier time points. At each time point one or more subregions are selected by some ad hoc cutoff on $\hat{\alpha}_j$ (a minimal sketch of this cutoff step is given below). At each selected subregion the extra bases are selected from the posterior distribution that follows. For a schematic representation see Figure 4, right panel.

Prior and posterior. The following gives the joint prior based on the PDE model and the prior on the coefficients:

$$\pi_1(\Theta, (I, J, T)) \propto \pi\left(u^{n+1}(x, t) \mid \beta^{n+1}(I^{n+1}, J^{n+1}),\, u^n\right)\, c\left((I, J)^{n+1}\right) \tag{8}$$

for a model-dependent constant $c((I, J)^{n+1})$. On $\beta^{n+1}$ a flat normal prior is placed.
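As an illustration of the ad hoc cutoff step above, here is a minimal sketch assuming the inclusion weights $\hat{\alpha}_j$ for the candidate subregions at the current time point are already available. The array name alpha_hat and the threshold tau are hypothetical, since the text does not specify the cutoff.

import numpy as np

# Minimal sketch of the ad hoc cutoff: keep subregion j when its
# inclusion weight alpha_hat_j exceeds a threshold tau. The names
# alpha_hat and tau are illustrative, not from the paper.
def select_subregions(alpha_hat, tau=0.5):
    return np.flatnonzero(np.asarray(alpha_hat) > tau)

# Example: weights for four candidate subregions.
# select_subregions([0.1, 0.8, 0.4, 0.9])  ->  array([1, 3])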

Let $R(k), E(k)$ and $R(-k), E(-k)$ be the corresponding linear forms and residuals with and without the $k$-th index, and let $I - I_k$ be the set of the other indices. Consider the regularization problem

$$\min_{\beta_+} \; \frac{\|S\beta - g\|^2}{2\sigma_L^2} + \frac{\|K\beta - b\|^2}{2\sigma_1^2},$$

and let $R = S\hat{\beta} - g$ and $E = K\hat{\beta} - b$ be the values of the residuals at the minimizer $\hat{\beta}$, which solves

$$\left(\frac{1}{2\sigma_1^2} K^T K + \frac{1}{2\sigma_L^2} S^T S\right)\hat{\beta} = \frac{1}{2\sigma_1^2} K^T b + \frac{1}{2\sigma_L^2} S^T g.$$

For the index set $I$ let the minimizer be $\beta(I)_+$. Posterior sampling can be performed by a Gibbs sampling algorithm after marginalizing over the coefficients of the additional bases $\beta_+$ (see the sketch below).

MCMC Algorithm
• $P(I_k = 1 \mid I - I_k) = p$ with

$$\frac{p}{1-p} = \frac{\hat{\alpha}_k}{1 - \hat{\alpha}_k} \exp\left( \frac{\|R(-k)\|^2 - \|R(k)\|^2}{2\sigma_L^2} + \frac{\|E(-k)\|^2 - \|E(k)\|^2}{2\sigma_1^2} \right), \tag{10}$$

where $\hat{\alpha}_k$ is the prior probability of selecting that additional basis and $N_{mc}$ is the number of MCMC samples drawn from the posterior distribution given the index set.

If $d$ is a linear function, then $d(u^n)$ is a linear function of $\beta_+^n$, so the posterior distribution of $\beta_+^n$ given $I$ is multivariate normal. In the MCMC step $\beta_+^n$ is therefore marginalized out, and the step depends only on the least-squares error and the prior for the selected index set. In the nonlinear case this posterior normality of the coefficients given the index set does not hold, which leads to a prohibitively expensive acceptance-rejection-based Metropolis-Hastings algorithm, since each step requires solving a large linear system. To address this problem a Laplace approximation (Tierney and Kadane, 1986; Raudenbush et al.
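The marginalized Gibbs step can be made concrete with a short sketch for the linear case. This is an illustration of the technique, not the authors' implementation: the design blocks K (data term) and S (PDE term), the right-hand sides b and g, the variances sig1_sq and sigL_sq, and the prior weights alpha_hat are all hypothetical placeholders, and the indicator update implements the odds ratio in equation (10).

import numpy as np

# Sketch of the marginalized Gibbs sampler over inclusion indicators.
# All inputs (K, b, S, g, variances, alpha_hat) are hypothetical.

def ridge_solve(K, b, S, g, sig1_sq, sigL_sq):
    # Normal equations of the regularization problem:
    # (K^T K / (2 sig1^2) + S^T S / (2 sigL^2)) beta
    #     = K^T b / (2 sig1^2) + S^T g / (2 sigL^2)
    A = K.T @ K / (2 * sig1_sq) + S.T @ S / (2 * sigL_sq)
    rhs = K.T @ b / (2 * sig1_sq) + S.T @ g / (2 * sigL_sq)
    return np.linalg.solve(A, rhs)

def residual_norms(K, b, S, g, idx, sig1_sq, sigL_sq):
    # ||R||^2 (PDE residual) and ||E||^2 (data residual) at the
    # minimizer restricted to the active index set idx.
    if not idx.any():
        return np.sum(g ** 2), np.sum(b ** 2)
    beta = ridge_solve(K[:, idx], b, S[:, idx], g, sig1_sq, sigL_sq)
    R = g - S[:, idx] @ beta
    E = b - K[:, idx] @ beta
    return np.sum(R ** 2), np.sum(E ** 2)

def gibbs_select(K, b, S, g, alpha_hat, sig1_sq, sigL_sq, n_mc=500, seed=None):
    # Sweep over the indicators I_k, drawing each from its full
    # conditional: prior odds alpha_k / (1 - alpha_k) times the
    # exponentiated change in the two weighted residuals, eq. (10).
    rng = np.random.default_rng(seed)
    alpha_hat = np.asarray(alpha_hat)
    p_dim = K.shape[1]
    I = rng.random(p_dim) < alpha_hat  # random initial index set
    samples = np.zeros((n_mc, p_dim), dtype=bool)
    for it in range(n_mc):
        for k in range(p_dim):
            I_in, I_out = I.copy(), I.copy()
            I_in[k], I_out[k] = True, False
            Rk, Ek = residual_norms(K, b, S, g, I_in, sig1_sq, sigL_sq)
            Rmk, Emk = residual_norms(K, b, S, g, I_out, sig1_sq, sigL_sq)
            log_odds = (np.log(alpha_hat[k]) - np.log1p(-alpha_hat[k])
                        + (Rmk - Rk) / (2 * sigL_sq)
                        + (Emk - Ek) / (2 * sig1_sq))
            p = 1.0 / (1.0 + np.exp(-np.clip(log_odds, -700, 700)))
            I[k] = rng.random() < p
        samples[it] = I
    return samples

Because $\beta_+$ is marginalized, each indicator update costs only two regularized least-squares solves; this is what makes the linear case tractable, and it is exactly what fails in the nonlinear case discussed above, where the Laplace approximation is brought in.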
