Learning from a lot: Empirical Bayes for high-dimensional model-based prediction

Mark A. van de Wiel, Dennis E. Te Beest, Magnus M. Münch

Research output: Contribution to journal › Article › Academic › peer-review

10 Citations (Scopus)


Empirical Bayes is a versatile approach to “learn from a lot” in two ways: first, from a large number of variables and, second, from a potentially large amount of prior information, for example, stored in public repositories. We review applications of a variety of empirical Bayes methods to several well-known model-based prediction methods, including penalized regression, linear discriminant analysis, and Bayesian models with sparse or dense priors. We discuss “formal” empirical Bayes methods that maximize the marginal likelihood but also more informal approaches based on other data summaries. We contrast empirical Bayes to cross-validation and full Bayes and discuss hybrid approaches. To study the relation between the quality of an empirical Bayes estimator and p, the number of variables, we consider a simple empirical Bayes estimator in a linear model setting. We argue that empirical Bayes is particularly useful when the prior contains multiple parameters, which model a priori information on variables termed “co-data”. In particular, we present two novel examples that allow for co-data: first, a Bayesian spike-and-slab setting that facilitates inclusion of multiple co-data sources and types and, second, a hybrid empirical Bayes–full Bayes ridge regression approach for estimation of the posterior predictive interval.
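The "formal" empirical Bayes idea mentioned in the abstract (maximizing the marginal likelihood) can be illustrated for ridge regression. The sketch below, a minimal illustration and not the authors' implementation, places the ridge prior beta_j ~ N(0, tau^2) on the coefficients of a linear model, so that marginally y ~ N(0, sigma^2 I + tau^2 X Xᵀ); the prior variance tau^2 is then estimated by maximizing this marginal likelihood over a grid, and the implied ridge penalty is lambda = sigma^2 / tau^2. All data and grid choices are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)
n, p = 50, 200                          # high-dimensional setting: p >> n
X = rng.normal(size=(n, p))
beta = rng.normal(scale=0.3, size=p)    # true prior sd tau = 0.3 (assumed)
sigma = 1.0                             # noise sd, assumed known here
y = X @ beta + rng.normal(scale=sigma, size=n)

# Marginally y ~ N(0, sigma^2 I + tau^2 X X^T), so the marginal
# log-likelihood depends on tau^2 only through the eigenvalues of X X^T.
evals, evecs = np.linalg.eigh(X @ X.T)
z = evecs.T @ y                         # rotate y into the eigenbasis

def marginal_loglik(tau2, sigma2=sigma**2):
    d = sigma2 + tau2 * evals           # eigenvalues of the marginal covariance
    return -0.5 * (np.sum(np.log(d)) + np.sum(z**2 / d) + n * np.log(2 * np.pi))

# "Formal" empirical Bayes: maximize the marginal likelihood over tau^2
grid = np.logspace(-3, 1, 200)
tau2_hat = grid[np.argmax([marginal_loglik(t) for t in grid])]
lam = sigma**2 / tau2_hat               # implied ridge penalty

# Plug-in ridge fit with the empirical Bayes penalty
beta_hat = np.linalg.solve(X.T @ X + lam * np.eye(p), X.T @ y)
```

Cross-validation would instead tune lambda by out-of-sample error; the marginal-likelihood route uses all n observations at once, which is one reason the paper contrasts the two approaches.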

Original language: English
Pages (from-to): 2-25
Number of pages: 24
Issue number: 1
Early online date: 1 Jun 2018
Publication status: Published - Mar 2019


Keywords

  • co-data
  • empirical Bayes
  • marginal likelihood
  • prediction
  • variable selection
