Statistical errors and residuals occur because measurement is never exact.
It is not possible to make an exact measurement, but it is possible to say how accurate a measurement is. One can measure the same thing again and again and collect the data together; this makes it possible to do statistics on the data. Both errors and residuals describe the difference between an observed or measured value and the true value, which is unknown.
If there is only one random variable, the difference between a statistical error and a residual comes down to whether the observation is compared with the mean of the population or with the mean of the (observed) sample. The statistical error is the difference between what the probability distribution says and what was actually measured; the residual replaces the unknown population mean with the sample mean.
Suppose there is an experiment to measure the height of 21-year-old men from a certain area. The mean of the population distribution is 1.75 m. If one man chosen at random is 1.80 m tall, the statistical error is 0.05 m (5 cm); if he is 1.70 m tall, the error is −0.05 m (−5 cm).
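Written out (the symbols x for the observed height and μ for the population mean follow common convention and are not used elsewhere in this article):

```latex
e = x - \mu
\qquad
e_1 = 1.80 - 1.75 = 0.05\ \text{m},
\qquad
e_2 = 1.70 - 1.75 = -0.05\ \text{m}
```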
A residual (or fitting error), on the other hand, is an observable estimate of the unobservable statistical error. The simplest case involves a random sample of n men whose heights are measured. The sample mean is used as an estimate of the population mean. Then we have:
- The difference between the height of each man in the sample and the unobservable population mean is a statistical error, and
- The difference between the height of each man in the sample and the observable sample mean is a residual.
The sum of the residuals within a random sample must be zero, because the sample mean is computed from the very observations that the residuals are measured against; the residuals are therefore not independent. The sum of the statistical errors within a random sample need not be zero; the statistical errors are independent random variables if the individuals are chosen from the population independently.
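A minimal sketch of both quantities in Python (the five sample heights and the assumed population mean of 1.75 m are invented for illustration):

```python
# Errors vs. residuals for a small sample of heights (in metres).
heights = [1.80, 1.70, 1.76, 1.82, 1.68]

population_mean = 1.75                     # assumed known here; unobservable in practice
sample_mean = sum(heights) / len(heights)  # observable estimate of the population mean

errors = [h - population_mean for h in heights]  # statistical errors (unobservable)
residuals = [h - sample_mean for h in heights]   # residuals (observable)

print(round(sum(residuals), 12))  # 0.0 -- residuals always sum to zero
print(round(sum(errors), 12))     # generally not zero
```

Rounding is only there to hide floating-point noise; algebraically the residuals sum to exactly zero because the sample mean is defined as the average of the same observations.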
In sum:
- Residuals are observable; statistical errors are not.
- Statistical errors are often independent of each other; residuals are not (at least in the simple situation described above, and in most others).