Stan was created by a development team of 52 members[3] that includes Andrew Gelman, Bob Carpenter, Daniel Lee, Ben Goodrich, and others.
Example
A simple linear regression model can be described as $y_n = \alpha + \beta x_n + \epsilon_n$, where $\epsilon_n \sim \operatorname{normal}(0, \sigma)$. This can also be expressed as $y_n \sim \operatorname{normal}(\alpha + \beta x_n, \sigma)$. The latter form can be written in Stan as the following:
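data {
  int<lower=0> N;      // number of observations
  vector[N] x;         // predictor
  vector[N] y;         // outcome
}
parameters {
  real alpha;          // intercept
  real beta;           // slope
  real<lower=0> sigma; // residual standard deviation
}
model {
  y ~ normal(alpha + beta * x, sigma);
}
Here N, x, and y are the observed data and alpha, beta, and sigma are the parameters to be estimated; the variable names follow the usual conventions of the Stan documentation.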
Algorithms
Stan implements gradient-based Markov chain Monte Carlo (MCMC) algorithms for Bayesian inference; stochastic, gradient-based variational Bayesian methods for approximate Bayesian inference; and gradient-based optimization for penalized maximum likelihood estimation. It also provides Laplace's approximation for classical standard error estimates and approximate Bayesian posteriors.
Automatic differentiation
Stan implements reverse-mode automatic differentiation to calculate gradients of the model's log density, which are required by HMC, NUTS, L-BFGS, BFGS, and variational inference.[2] The automatic differentiation within Stan, implemented in the Stan Math C++ library, can also be used outside of the probabilistic programming language.
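As an illustration of the latter point, a minimal stand-alone C++ sketch using the reverse-mode scalar type from the Stan Math library (assuming its headers and dependencies, such as Eigen, Boost, and TBB, are on the include path) might look like this:
#include <stan/math.hpp>
#include <iostream>

int main() {
  using stan::math::var;        // reverse-mode autodiff scalar type

  var x = 2.0;
  var y = x * x + 3.0 * x;      // y = x^2 + 3x; operations are recorded on the autodiff tape

  y.grad();                     // reverse sweep: propagate adjoints from y back to x

  std::cout << "y = " << y.val()          // 10
            << ", dy/dx = " << x.adj()    // 2 * 2 + 3 = 7
            << "\n";

  stan::math::recover_memory(); // release the autodiff tape
  return 0;
}
The samplers and optimizers listed above rely on the same mechanism: the log density defined by a Stan program is compiled to C++ functions over this type, and its gradient is obtained by a reverse sweep over the recorded expression graph.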
^ Goodrich, Benjamin King; Wawro, Gregory; Katznelson, Ira (2012). "Designing Quantitative Historical Social Inquiry: An Introduction to Stan". APSA 2012 Annual Meeting Paper. Available at SSRN 2105531.
^ Natanegara, Fanni; Neuenschwander, Beat; Seaman, John W.; Kinnersley, Nelson; Heilmann, Cory R.; Ohlssen, David; Rochester, George (2013). "The current state of Bayesian methods in medical product development: survey results and recommendations from the DIA Bayesian Scientific Working Group". Pharmaceutical Statistics. 13 (1): 3–12. doi:10.1002/pst.1595. ISSN 1539-1612. PMID 24027093. S2CID 19738522.