QPRC 2016
| Type | Registration by May 15, 2016 | Registration May 16 – June 14, 2016 | Short Course |
| --- | --- | --- | --- |
| Regular | $280 | $330 | $230 |
| Q&P Section Member | $260 | $310 | $210 |
| SPES Section Member | $260 | $310 | $210 |
| SP&QP Joint Member | $260 | $310 | $210 |
| Join Now! | $270 | $325 | $210 |
| Student | $60 | $60 | $115 |
| Senior (age ≥ 65) | $220 | $250 | $170 |
| Short Course Only | -- | -- | $260 |
| Guest (includes Banquet, Technical Tour, and Reception) | $50 | $50 | -- |
Short Course
Assessing Model Uncertainty in Applied Bayesian Data Analysis
Monday, June 13, 2016, 8:30 am - 4:30 pm
Will Guthrie
Statistical Engineering Division
National Institute of Standards and Technology
One of the high-level assumptions underlying any statistical result with an associated uncertainty statement is that the statistical model used to describe the data is correct. However, the true adequacy of the model is rarely known. In some cases the model may be based on theory, but the theory may rely on simplifications that omit physical details that could be important. For example, Newton's laws of motion are used to make predictions about the motion of real objects, yet the effects of the size and shape of those objects are not incorporated into the model, even though we know that size and shape can have significant effects in some cases. In other applications a model may be determined empirically, but the information contained in a single set of data can rarely, if ever, identify a completely correct model beyond all doubt.
One of the attractive features of the Bayesian paradigm for statistical analysis, in theory, is the ability to incorporate model uncertainty into analysis results. This is done through techniques like model averaging. However, model uncertainty is still left out of most analyses because the methods are difficult to implement routinely or automatically. Instead, weaker forms of model checking that treat the model as "innocent until proven guilty" are generally used to look for deviations between the true mechanism generating the data (i.e., the true model) and the model assumed for the analysis.
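As a hedged illustration of the model-averaging idea described above (this sketch is not from the course materials; the two candidate models and the data are invented for illustration), the following computes exact marginal likelihoods for two simple normal models with known variance and combines their predictions by posterior model probability:

```python
import math

# Invented example data; sigma = 1 is assumed known throughout.
y = [1.2, 0.8, 1.5, 1.0]
n, S, SS = len(y), sum(y), sum(v * v for v in y)

# Model 1: y_i ~ N(0, 1), no free parameters.
log_m1 = -0.5 * n * math.log(2 * math.pi) - 0.5 * SS

# Model 2: y_i ~ N(mu, 1) with prior mu ~ N(0, 1).
# Closed-form log marginal likelihood after integrating out mu.
log_m2 = (-0.5 * n * math.log(2 * math.pi)
          - 0.5 * math.log(n + 1)
          - 0.5 * (SS - S * S / (n + 1)))

# Posterior model probabilities under equal prior odds.
bf = math.exp(log_m2 - log_m1)   # Bayes factor for M2 vs M1
p2 = bf / (1 + bf)
p1 = 1 - p2

# Model-averaged predictive mean for a future observation:
# 0 under M1, posterior mean S / (n + 1) under M2.
bma_mean = p1 * 0.0 + p2 * S / (n + 1)
```

Because both marginal likelihoods are available in closed form here, the model probabilities are exact; in realistic problems they would typically be approximated, which is one reason such methods are hard to apply routinely.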
This course will review and compare the implementation of some practical methods for assessing model uncertainty in Bayesian data analysis using concrete examples from NIST work and other sources. Live computations will be demonstrated in class and can be carried out by participants in parallel using popular free software with instructor-provided data and script files.
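One widely used form of the "innocent until proven guilty" model checking mentioned above is the posterior predictive check. The sketch below (data, prior, and test statistic all invented for illustration, not taken from the course) simulates replicate datasets from the posterior of a normal-mean model and compares an observed discrepancy to its replicated distribution:

```python
import random

random.seed(1)

# Invented data; model under check: y_i ~ N(mu, 1) with prior mu ~ N(0, 1).
y = [1.2, 0.8, 1.5, 1.0, 0.6]
n = len(y)

# Conjugate posterior for mu (sigma = 1 known, N(0, 1) prior).
post_mean = sum(y) / (n + 1)
post_sd = (1.0 / (n + 1)) ** 0.5

def discrepancy(data):
    # Test statistic: sample range, sensitive to over/under-dispersion.
    return max(data) - min(data)

t_obs = discrepancy(y)
draws = 4000
exceed = 0
for _ in range(draws):
    mu = random.gauss(post_mean, post_sd)        # draw mu from its posterior
    y_rep = [random.gauss(mu, 1.0) for _ in y]   # simulate a replicate dataset
    if discrepancy(y_rep) >= t_obs:
        exceed += 1

p_value = exceed / draws  # values near 0 or 1 signal model misfit
```

A posterior predictive p-value near 0 or 1 indicates the assumed model reproduces the chosen feature of the data poorly; it flags deviations but, as the abstract notes, it cannot certify that the model is correct.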
Outline of Course Topics
1. Motivating Examples
2. Introduction/Review
a. Bayesian Modeling
b. Computational Methods
3. Outline of Model Uncertainty Assessment
a. Methods
b. Results