Test Statistic Calculator

Calculate the test statistic from sample data using standard hypothesis-testing formulas: z, t, chi-square, or F. Enter your inputs and the calculator converts them into a standardized value that can be compared against the relevant distribution to obtain a p-value.

[Interactive calculator: select a t-test type and enter sample data; the tool reports the t-statistic, degrees of freedom, p-value, a statistical significance verdict, the critical value at the chosen α level, the standard error of the mean difference, Cohen's d effect size, and the 95% confidence interval for the difference, together with a t-distribution visualization and a step-by-step calculation.]
How to Interpret Your Results
T-Statistic
  • How many SE units the sample mean is from H₀
  • Larger |t| = stronger evidence against H₀
  • Sign indicates direction of difference
  • Compare to critical value for significance
P-Value
  • Probability of this result if H₀ were true
  • p ≤ α means reject H₀ (significant)
  • p > α means fail to reject H₀
  • Does NOT measure practical significance
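The one-sample t computation described above can be sketched using only the Python standard library; the helper name and sample values here are illustrative, not part of the calculator itself:

```python
from math import sqrt
from statistics import mean, stdev

def one_sample_t(sample, mu0):
    """t = (x̄ − μ₀) / (s / √n), with df = n − 1."""
    n = len(sample)
    xbar = mean(sample)
    s = stdev(sample)        # sample standard deviation (n − 1 denominator)
    se = s / sqrt(n)         # standard error of the mean
    t = (xbar - mu0) / se
    d = (xbar - mu0) / s     # Cohen's d effect size
    return t, n - 1, se, d

# Example: does this sample differ from a hypothesized mean of 5.0?
t, df, se, d = one_sample_t([5.1, 4.9, 5.3, 5.2, 4.8, 5.0], 5.0)
```

Computing the p-value from t requires the t-distribution CDF, which the standard library does not provide; a full implementation would use something like `scipy.stats`.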
Z-Test Calculator

Use a Z-test when the population standard deviation (σ) is known, or when n > 30. For unknown σ with small samples, use the T-Test tab.

[Interactive calculator: reports the z-statistic, p-value, critical z value, standard error (σ / √n), 95% confidence interval, Cohen's d effect size, and mean difference (x̄ − μ₀), with a step-by-step calculation.]
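A minimal z-test sketch, using `statistics.NormalDist` from the Python standard library for the normal CDF (the helper name and inputs are illustrative):

```python
from math import sqrt
from statistics import NormalDist

def one_sample_z(xbar, mu0, sigma, n, two_tailed=True):
    """z = (x̄ − μ₀) / (σ / √n); p-value from the standard normal."""
    se = sigma / sqrt(n)
    z = (xbar - mu0) / se
    tail = 1 - NormalDist().cdf(abs(z))   # upper-tail probability
    p = 2 * tail if two_tailed else tail
    return z, p

# Example: x̄ = 103 vs. μ₀ = 100 with known σ = 15 and n = 36
z, p = one_sample_z(xbar=103.0, mu0=100.0, sigma=15.0, n=36)
```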
Chi-Square (χ²) Goodness-of-Fit / Test of Independence

Enter observed frequencies. Leave expected blank to assume equal distribution across all categories.

[Interactive calculator: reports the χ² statistic, degrees of freedom (k − 1 for goodness-of-fit), and p-value, with an observed-vs-expected chart and a step-by-step calculation.]
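The goodness-of-fit formula χ² = Σ (O − E)² / E can be sketched as follows (hypothetical helper; the p-value is omitted because it requires the χ² CDF, which is not in the standard library):

```python
def chi_square_gof(observed, expected=None):
    """χ² = Σ (O − E)² / E; expected=None assumes equal distribution."""
    if expected is None:
        e = sum(observed) / len(observed)
        expected = [e] * len(observed)
    chi2 = sum((o - e) ** 2 / e for o, e in zip(observed, expected))
    return chi2, len(observed) - 1   # df = k − 1 for goodness-of-fit

# Example: 100 observations across 5 categories, expected 20 each
chi2, df = chi_square_gof([18, 22, 20, 30, 10])
```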
What Is a Test Statistic?

A test statistic is a numerical value calculated from sample data during a hypothesis test. It summarizes how far your observed data deviates from what you would expect under the null hypothesis (H₀), expressed in standardized units. The larger the absolute value of the test statistic, the more evidence against H₀.

Different tests use different statistics: the t-statistic for t-tests, z-statistic for z-tests, χ² for chi-square tests, and F-statistic for ANOVA. Each follows a known distribution under H₀, allowing you to compute a p-value.

T-Statistic
  • Used when population σ is unknown
  • Follows Student's t-distribution
  • t = (x̄ − μ₀) / (s / √n)
  • More conservative than z for small samples
Z-Statistic
  • Used when population σ is known or n > 30
  • Follows the standard normal distribution
  • z = (x̄ − μ₀) / (σ / √n)
  • Converges with t-test for large samples
Chi-Square (χ²)
  • Tests association between categorical variables
  • χ² = Σ[(O − E)² / E]
  • Always positive; df = k − 1 (goodness-of-fit) or (r − 1)(c − 1) (independence)
  • Best for contingency tables, goodness-of-fit
P-Value Explained
  • Probability of this result if H₀ were true
  • p ≤ α means reject H₀
  • p > α means fail to reject H₀
  • Does NOT prove H₀ is true or false
Formulas Reference
One-Sample T-Test
t = (x̄ − μ₀) / (s / √n)     df = n − 1
Two-Sample Independent T-Test (Welch's)
t = (x̄₁ − x̄₂) / √(s₁²/n₁ + s₂²/n₂)     df via Welch–Satterthwaite
Paired T-Test
t = d̄ / (s_d / √n)     df = n − 1
Z-Test (One-Sample)
z = (x̄ − μ₀) / (σ / √n)
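The Welch's t formula and the Welch–Satterthwaite df above can be sketched with the Python standard library (the helper name and example data are illustrative):

```python
from math import sqrt
from statistics import mean, variance

def welch_t(a, b):
    """Welch's t = (x̄₁ − x̄₂) / √(s₁²/n₁ + s₂²/n₂), df via Welch–Satterthwaite."""
    n1, n2 = len(a), len(b)
    v1, v2 = variance(a), variance(b)   # sample variances (n − 1 denominator)
    se2 = v1 / n1 + v2 / n2
    t = (mean(a) - mean(b)) / sqrt(se2)
    df = se2 ** 2 / ((v1 / n1) ** 2 / (n1 - 1) + (v2 / n2) ** 2 / (n2 - 1))
    return t, df

t, df = welch_t([1, 2, 3, 4, 5], [2, 4, 6, 8, 10])
```

Note that the Welch–Satterthwaite df is generally fractional, unlike the integer df of the equal-variance test.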
Frequently Asked Questions
When should I use a t-test vs. a z-test?
Use a t-test when the population standard deviation (σ) is unknown and estimated from the sample — which is almost always the case. Use a z-test only when σ is known or when n > 30. For large samples, both tests yield nearly identical results.
What is the difference between one-tailed and two-tailed tests?
A two-tailed test checks for a difference in either direction and is the standard default. A one-tailed test checks only one direction and should only be used when you have a directional hypothesis established before data collection. Choosing one-tailed after seeing the data is p-hacking.
What is the difference between a paired and independent t-test?
A paired t-test is for matched pairs or repeated measures (same subjects measured twice). An independent t-test compares two separate, unrelated groups. When data has a natural pairing structure, the paired test is more statistically powerful.
How do I interpret the t-value / test statistic?
The t-value measures how many standard errors the sample mean is from the null hypothesis value. Compare |t| to the critical value: if |t| exceeds it, the result is significant. The sign indicates direction — positive means the sample mean is higher, negative means lower.
What are the assumptions of a t-test?
Key assumptions: (1) Normality — approximately normally distributed data (less critical with n > 30 by CLT); (2) Independence — observations must be independent; (3) Equal variances — for the standard two-sample test (Welch's t-test relaxes this). T-tests are fairly robust to mild normality violations.
What does statistical significance actually mean?
It means the observed result is unlikely to have occurred by random chance given that H₀ is true. It does not mean the result is practically important. Always report effect sizes (Cohen's d) alongside p-values. Large samples can make tiny, irrelevant differences statistically significant.
What is Cohen's d and how do I interpret it?
Cohen's d is a standardized effect size measuring the difference in standard deviation units. General benchmarks: d ≈ 0.2 = small, d ≈ 0.5 = medium, d ≈ 0.8 = large. Context matters — these are rough guidelines, not absolute thresholds.
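For two independent groups, Cohen's d is commonly computed with a pooled standard deviation; a sketch with an illustrative helper and example data:

```python
from math import sqrt
from statistics import mean, variance

def cohens_d(a, b):
    """Cohen's d = (x̄₁ − x̄₂) / s_pooled for two independent samples."""
    n1, n2 = len(a), len(b)
    pooled_var = ((n1 - 1) * variance(a) + (n2 - 1) * variance(b)) / (n1 + n2 - 2)
    return (mean(a) - mean(b)) / sqrt(pooled_var)

d = cohens_d([1, 2, 3, 4, 5], [2, 4, 6, 8, 10])
```

The sign of d follows the direction of the mean difference; the benchmarks above apply to |d|.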