A Comparison Of Likelihood And Bayesian-Based Methods For Fitting Random-Effect Models

Random-effect models are popular for analysing data with a nested structure. For statistical inference in random-effect models there are two broad approaches: likelihood-based and Bayesian-based methods (or, the Fisher and Bayesian approaches).
The likelihood (and approximate-likelihood) approaches cover the methods most widely used for fitting random-effect models: maximum likelihood (ML), restricted ML (REML), and marginal and penalized quasi-likelihood (MQL and PQL). For the Bayesian approaches, random-effect models are fitted by adaptive Markov chain Monte Carlo (MCMC), with several diffuse priors considered for the variance components.
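As a minimal illustration of the two approaches (not part of the talk itself), the sketch below fits a simple random-intercept model to simulated data, first by ML and REML and then by adaptive MCMC with diffuse priors on the variance components. The use of statsmodels and PyMC, the simulated data, and the particular diffuse priors are assumptions made for this example, not the setup used in the talk.

    # Illustrative sketch: ML/REML via statsmodels, adaptive MCMC via PyMC.
    # Data, software and priors are assumptions for the example only.
    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf
    import pymc as pm
    import arviz as az

    rng = np.random.default_rng(1)
    n_groups, n_per = 30, 10
    group = np.repeat(np.arange(n_groups), n_per)
    u_true = rng.normal(0.0, 1.0, n_groups)                  # true random intercepts
    y = 2.0 + u_true[group] + rng.normal(0.0, 0.5, n_groups * n_per)
    df = pd.DataFrame({"y": y, "group": group})

    # Likelihood-based fits: ML versus REML
    ml_fit = smf.mixedlm("y ~ 1", df, groups=df["group"]).fit(reml=False)
    reml_fit = smf.mixedlm("y ~ 1", df, groups=df["group"]).fit(reml=True)
    print(ml_fit.cov_re, reml_fit.cov_re)                    # variance-component estimates

    # Bayesian fit: adaptive MCMC (NUTS) with diffuse priors on the variances
    with pm.Model():
        beta0 = pm.Normal("beta0", 0.0, 100.0)               # diffuse prior, fixed effect
        sigma_u = pm.HalfCauchy("sigma_u", beta=5.0)         # diffuse prior, group sd
        sigma_e = pm.HalfCauchy("sigma_e", beta=5.0)         # diffuse prior, residual sd
        u = pm.Normal("u", 0.0, sigma_u, shape=n_groups)     # random intercepts
        pm.Normal("y_obs", mu=beta0 + u[group], sigma=sigma_e, observed=y)
        idata = pm.sample(1000, tune=1000, chains=2, random_seed=1)
    print(az.summary(idata, var_names=["beta0", "sigma_u", "sigma_e"]))

The REML variance estimates typically differ from the ML estimates (REML corrects for the loss of degrees of freedom from estimating the fixed effects), and the Bayesian posterior summaries depend on the choice of diffuse prior, which is one reason the comparison is of interest.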
In this talk we review these methods and use the hierarchical-likelihood (h-likelihood; Lee and Nelder, 1996) framework to investigate the relationships between the likelihood and Bayesian approaches. Using examples, we compare likelihood-based and Bayesian-based methods for fitting random-effect models.
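As a pointer to the framework (a sketch only; the scale on which the random effects enter matters in Lee and Nelder, 1996, and is not specified here), the h-likelihood for a response y with random effects v, mean parameters \beta and dispersion parameters (\phi, \lambda) can be written as the log joint density

    h(\beta, v; \phi, \lambda) = \log f(y \mid v; \beta, \phi) + \log f(v; \lambda),

so that, for fixed \beta, \phi and \lambda, maximizing h over v corresponds to the mode of the Bayesian posterior of v, which is one way the framework connects likelihood and Bayesian-based estimation.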