Statistical hypothesis testing
A statistical hypothesis test is a method used in statistics. It helps you interpret the results you get from an experiment. The hypothesis test tells you the likelihood that a specific result would happen by chance.
Statistical hypothesis tests answer the question: assuming that the null hypothesis is true, what is the probability of getting a value at least as extreme as the value that was actually observed?[1]
So, for example, if a result at least as extreme as the one observed would happen by chance only 5% of the time, the result is called statistically significant, and the experimental hypothesis is said to be supported at the 95% level.
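The same question can be worked through numerically. Below is a minimal sketch in Python (not part of the original article) that computes a two-sided p-value for a coin-flipping experiment; the specific numbers (16 heads in 20 flips) and the helper names binomial_pmf and two_sided_p_value are illustrative assumptions, not anything from the source.

```python
# A minimal sketch (not from the article) of the question a hypothesis test answers:
# assuming the null hypothesis is true, how likely is a result at least as extreme
# as the one actually observed?  Example: 16 heads in 20 flips of a supposedly fair coin.
from math import comb

def binomial_pmf(k, n, p):
    # Probability of exactly k heads in n flips when each flip lands heads with probability p.
    return comb(n, k) * p**k * (1 - p)**(n - k)

def two_sided_p_value(observed, n, p_null=0.5):
    # Add up the probability of every outcome that is at least as unlikely as the
    # observed one, assuming the null hypothesis (a fair coin, p = 0.5) is true.
    observed_prob = binomial_pmf(observed, n, p_null)
    return sum(binomial_pmf(k, n, p_null)
               for k in range(n + 1)
               if binomial_pmf(k, n, p_null) <= observed_prob)

print(two_sided_p_value(observed=16, n=20))  # about 0.012: below 0.05, so significant at the 5% level
```

Because the p-value here (about 0.012) is smaller than 0.05, a result this extreme would happen by chance less than 5% of the time if the coin were fair, which is the sense in which the article says the experimental hypothesis is "supported to the 95% level".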
References
↑ Cramer, Duncan & Howitt, Dennis 2004. (9 June 2004). The Sage dictionary of statistics . p. 76. ISBN 0-7619-4138-X . {cite book }
: CS1 maint: multiple names: authors list (link ) CS1 maint: numeric names: authors list (link )