# Uncertainty in Engineering: Introduction to Methods and Applications

**Abstract** We present basic concepts of Bayesian statistical inference. We briefly introduce the Bayesian paradigm. We present conjugate priors, a computationally convenient way to quantify prior information for tractable Bayesian statistical analysis. We present tools for parametric and predictive inference, and particularly the design of point estimators, credible sets, and hypothesis tests. These concepts are presented in running examples. Supplementary material is available from GitHub.

## 1.1 Introduction

Statistics mainly aims at addressing two major tasks. First, we wish to learn or draw conclusions about an unknown quantity θ ∈ Θ, called 'the parameter', which cannot be directly measured or observed, by measuring or observing a sequence of other quantities called 'observations' (or data, or samples), x1:n := (x1, ..., xn) ∈ X^n, whose generating mechanism is (or can be considered as) stochastically dependent on the quantity of interest θ through a probabilistic model x1:n ∼ f(·|θ). This is an inverse problem, since we wish to study the cause θ by knowing its effect x1:n. We will refer to this as parametric inference. Second, we wish to learn the possible values of a future sequence of observations y1:m ∈ X^m given x1:n. This is a forward problem, and we will call it predictive inference. Here, we present how both inferences can be addressed in the Bayesian paradigm.¹
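The distinction between the two inference tasks can be sketched in code. The following is a minimal illustration, not taken from the chapter, assuming Bernoulli observations x1:n ∼ f(·|θ) with a conjugate Beta(a, b) prior on θ: parametric inference updates the prior to a posterior over θ, while predictive inference gives the probability of a future observation y given x1:n. The function names and the uniform Beta(1, 1) default prior are illustrative choices, not the authors' notation.

```python
def posterior_params(x, a=1.0, b=1.0):
    """Parametric inference (inverse problem): learn about theta.

    For 0/1 data with a Beta(a, b) prior, conjugacy gives the
    posterior Beta(a + sum(x), b + n - sum(x)) in closed form.
    """
    s = sum(x)
    return a + s, b + len(x) - s


def predictive_prob_one(x, a=1.0, b=1.0):
    """Predictive inference (forward problem): P(y = 1 | x_{1:n}).

    Under the Beta posterior this is simply the posterior mean of theta.
    """
    a_n, b_n = posterior_params(x, a, b)
    return a_n / (a_n + b_n)


data = [1, 0, 1, 1, 0, 1]           # n = 6 observations, 4 successes
print(posterior_params(data))        # (5.0, 3.0) with the Beta(1, 1) prior
print(predictive_prob_one(data))     # 5/8 = 0.625
```

Conjugacy is what keeps both steps in closed form here; the chapter develops this idea in general.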

Consider a sequence of observables x1:n := (x1, ..., xn) generated from a sampling distribution f(·|θ) labeled by the unknown parameter θ ∈ Θ. The statistical model m consists of the observations x1:n and their sampling distribution f(·|θ);