Sampling distribution

In statistics, a sampling distribution or finite-sample distribution is the probability distribution of a given random-sample-based statistic. If an arbitrarily large number of samples, each involving multiple observations (data points), were separately used to compute one value of a statistic (for example, the sample mean or sample variance) for each sample, then the sampling distribution is the probability distribution of the values that the statistic takes on. In many contexts, only one sample is observed, but the sampling distribution can be found theoretically.

Sampling distributions are important in statistics because they provide a major simplification en route to statistical inference. More specifically, they allow analytical considerations to be based on the probability distribution of a statistic, rather than on the joint probability distribution of all the individual sample values.

Introduction

The sampling distribution of a statistic is the distribution of that statistic, considered as a random variable, when derived from a random sample of size n. It may be considered as the distribution of the statistic for all possible samples of a given size from the same population. The sampling distribution depends on the underlying distribution of the population, the statistic being considered, the sampling procedure employed, and the sample size used. There is often considerable interest in whether the sampling distribution can be approximated by an asymptotic distribution, which corresponds to the limiting case either as the number of random samples of finite size taken from an infinite population tends to infinity, or as a single sample of infinite size is taken from that same population.

For example, consider a normal population with mean \( \mu \) and variance \( \sigma^{2} \). Assume we repeatedly take samples of a given size from this population and calculate the arithmetic mean \( \bar{x} \) for each sample – this statistic is called the sample mean. The distribution of these means is called the "sampling distribution of the sample mean". This distribution is normal, \( {\mathcal{N}}(\mu,\,\sigma^{2}/n) \) (where n is the sample size), since the underlying population is normal, although sampling distributions may also often be close to normal even when the population distribution is not (see the central limit theorem). An alternative to the sample mean is the sample median. When calculated from the same population, it has a different sampling distribution from that of the mean and is generally not normal (but it may be close for large sample sizes).
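This can be illustrated by direct simulation. The following Python sketch is a minimal illustration, not part of the original article: the parameter values are arbitrary assumptions, and NumPy is the only dependency. It draws many samples from a normal population and compares the empirical spread of the sample means with the theoretical value \( \sigma/\sqrt{n} \), tabulating the sample medians alongside for contrast.

```python
import numpy as np

# Illustrative parameters (assumptions, not taken from the text)
mu, sigma, n = 5.0, 2.0, 25        # population mean, population SD, sample size
n_samples = 100_000                # number of repeated samples

rng = np.random.default_rng(0)
samples = rng.normal(mu, sigma, size=(n_samples, n))

means = samples.mean(axis=1)       # one sample mean per sample
medians = np.median(samples, axis=1)

print(means.mean(), means.std())       # ~ mu and ~ sigma/sqrt(n) = 0.4
print(medians.mean(), medians.std())   # medians spread more widely than means
```

For normal data the sample median is also asymptotically normal, but with a standard error roughly 1.25 times that of the mean, which the simulated medians reflect.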

The mean of a sample from a population having a normal distribution is an example of a simple statistic taken from one of the simplest statistical populations. For other statistics and other populations the formulas are more complicated, and often they do not exist in closed form. In such cases the sampling distributions may be approximated through Monte Carlo simulations[1][p. 2], bootstrap methods, or asymptotic distribution theory.
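For instance, the bootstrap approximates the sampling distribution of a statistic by repeatedly resampling the single observed sample with replacement and recomputing the statistic. A minimal sketch along those lines (the data-generating step merely stands in for a real observed sample; all names and parameters are illustrative):

```python
import numpy as np

rng = np.random.default_rng(1)
observed = rng.exponential(scale=3.0, size=50)   # stand-in for the single observed sample

B = 10_000                                       # number of bootstrap resamples
idx = rng.integers(0, observed.size, size=(B, observed.size))
boot_medians = np.median(observed[idx], axis=1)  # statistic recomputed on each resample

# The spread of the bootstrap replicates estimates the standard error of the median
print(boot_medians.mean(), boot_medians.std())
```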

Standard error

The standard deviation of the sampling distribution of a statistic is referred to as the standard error of that quantity. For the case where the statistic is the sample mean, and the values within the sample are uncorrelated, the standard error is:

\( \sigma_{\bar{x}} = \frac{\sigma}{\sqrt{n}} \)

where \( \sigma \) is the standard deviation of the population distribution of that quantity and n is the sample size (number of items in the sample).

An important implication of this formula is that the sample size must be quadrupled (multiplied by 4) to halve the standard error. When designing statistical studies where cost is a factor, this may have a role in understanding cost–benefit tradeoffs.
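To see why, substitute 4n for n in the formula:

\( \sigma_{\bar{x}} = \frac{\sigma}{\sqrt{4n}} = \frac{1}{2} \cdot \frac{\sigma}{\sqrt{n}} \)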

Examples

Each example below lists a population, a statistic computed from samples of that population, and the statistic's sampling distribution.

Population: Normal, \( {\mathcal{N}}(\mu, \sigma^{2}) \)
Statistic: Sample mean \( \bar{X} \) from samples of size n
Sampling distribution: \( \bar{X} \sim {\mathcal{N}}\Big(\mu,\, \frac{\sigma^{2}}{n}\Big) \)

If the standard deviation \( \sigma \) is not known, one can consider \( T = \left(\bar{X} - \mu\right)\frac{\sqrt{n}}{S} \), which follows the Student's t-distribution with \( \nu = n - 1 \) degrees of freedom. Here \( S^{2} \) is the sample variance, and T is a pivotal quantity, whose distribution does not depend on \( \sigma \).

Population: Bernoulli, \( \operatorname{Bernoulli}(p) \)
Statistic: Sample proportion of "successful trials", \( \bar{X} \)
Sampling distribution: \( n\bar{X} \sim \operatorname{Binomial}(n, p) \)

Population: Two independent normal populations, \( {\mathcal{N}}(\mu_{1}, \sigma_{1}^{2}) \) and \( {\mathcal{N}}(\mu_{2}, \sigma_{2}^{2}) \)
Statistic: Difference between sample means, \( \bar{X}_{1} - \bar{X}_{2} \)
Sampling distribution: \( \bar{X}_{1} - \bar{X}_{2} \sim {\mathcal{N}}\!\left(\mu_{1} - \mu_{2},\, \frac{\sigma_{1}^{2}}{n_{1}} + \frac{\sigma_{2}^{2}}{n_{2}}\right) \)

Population: Any absolutely continuous distribution F with density f
Statistic: Median \( X_{(k)} \) from a sample of size n = 2k − 1, where the sample is ordered from \( X_{(1)} \) to \( X_{(n)} \)
Sampling distribution: \( f_{X_{(k)}}(x) = \frac{(2k-1)!}{(k-1)!^{2}}\, f(x) \big( F(x)(1 - F(x)) \big)^{k-1} \)

Population: Any distribution with distribution function F
Statistic: Maximum \( M = \max X_{k} \) from a random sample of size n
Sampling distribution: \( F_{M}(x) = P(M \leq x) = \prod P(X_{k} \leq x) = \left( F(x) \right)^{n} \)
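The last row lends itself to a quick numerical check: for a population uniform on [0, 1], F(x) = x, so the maximum of n draws has CDF \( x^{n} \). A brief simulation sketch (parameters are illustrative):

```python
import numpy as np

rng = np.random.default_rng(2)
n, n_samples = 10, 100_000
maxima = rng.uniform(0.0, 1.0, size=(n_samples, n)).max(axis=1)

# Empirical CDF of the maximum at x = 0.9 versus the theoretical (F(x))^n = 0.9**10
print((maxima <= 0.9).mean(), 0.9 ** 10)  # both close to 0.3487
```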

References

1. Mooney, Christopher Z. (1999). Monte Carlo Simulation. Thousand Oaks, Calif.: Sage. ISBN 9780803959439.

2. Merberg, A. and S. J. Miller (2008). "The Sample Distribution of the Median". Course Notes for Math 162: Mathematical Statistics, pp. 1–9. http://web.williams.edu/Mathematics/sjmiller/public_html/BrownClasses/162/Handouts/MedianThm04.pdf
