
F-Test: Not significant: all betas = 0. Significant: at least one beta ≠ 0. (Beta = coefficient.)

Do not include intercept in hypothesis


If ρ² = 0, the model is not significant; if ρ² > 0, the model is significant.
For the F test of the entire model, the null hypothesis (H0) is ρ² = 0 (where ρ² is the population regression fit).
R-sq = proportion of variation in y explained by x1, x2, x3. MSR = SSR/df. F ratio = MSR/MSE.
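The formulas above can be sketched numerically. This is a rough illustration with made-up sums of squares: SSR, SSE, n, and k are all assumed values, not from the notes.

```python
# Hypothetical regression with n = 20 observations and k = 3 independent variables.
SSR, SSE = 90.0, 30.0          # assumed regression and error sums of squares
n, k = 20, 3
SST = SSR + SSE                # total sum of squares
R_sq = SSR / SST               # share of variation in y explained by x1, x2, x3
MSR = SSR / k                  # mean square regression, df = k
MSE = SSE / (n - k - 1)        # mean square error, df = n - k - 1
F = MSR / MSE                  # F ratio for the overall model
print(R_sq, MSR, MSE, F)      # 0.75 30.0 1.875 16.0
```

A large F (compared against the F distribution with k and n − k − 1 df) leads to rejecting H0 that all betas are zero.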
p-value < the significance level (α) given: the independent variable is significant.
Use the t distribution when σ is not known. If σ were known, use the normal distribution.
Two-tail test: H0 uses = and Ha uses ≠. One-tail test: H0 uses = and Ha uses < or >.
Paired t test: match the paired t test with its confidence interval (both are built from the paired differences).
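A minimal sketch of the paired-t statistic, using made-up before/after measurements (the data values are assumptions for illustration):

```python
# Made-up paired observations on the same six subjects.
before = [10.1, 9.8, 11.2, 10.5, 9.9, 10.7]
after  = [10.9, 10.1, 11.8, 10.4, 10.6, 11.3]
d = [b - a for a, b in zip(before, after)]     # paired differences
n = len(d)
dbar = sum(d) / n                              # mean difference
sd = (sum((x - dbar) ** 2 for x in d) / (n - 1)) ** 0.5
t = dbar / (sd / n ** 0.5)                     # compare to t with n-1 df
# The matching CI for the mean difference is dbar ± t_crit * sd / sqrt(n),
# which is why the test and its CI give consistent conclusions.
print(round(dbar, 3), round(t, 3))
```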
Ex post forecast: uses recent data to evaluate the forecast performance of a model, or the relative performance of two models.
Conditional forecast occurs when we do not know with certainty the future value of one or more independent variables
3 scenarios (high, low, and unchanged) = contingency forecast. Error = predicted − actual. MAE = average of the absolute errors |predicted − actual|.
The minimum-RMSE model has the smallest forecast errors.
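The error measures above can be sketched directly; the forecast and actual values here are made up for illustration:

```python
import math

# Assumed forecasts and actual outcomes.
predicted = [10.0, 12.0, 11.0, 14.0]
actual    = [11.0, 12.0,  9.0, 15.0]
errors = [p - a for p, a in zip(predicted, actual)]   # error = predicted - actual
MAE  = sum(abs(e) for e in errors) / len(errors)      # average absolute error
RMSE = math.sqrt(sum(e * e for e in errors) / len(errors))
print(MAE, round(RMSE, 4))                            # 1.0 1.2247
```

Comparing RMSE across candidate models and picking the minimum favors the model with the smallest typical forecast errors.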
In simple regression models, the F test and t test have statistically equivalent hypotheses and yield identical p-values.
F ratio = (t ratio)². The F distribution is defined for nonnegative values of F; it's unimodal.
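The F = t² identity for simple regression can be checked numerically. This sketch fits a one-variable least-squares line on made-up data (the x and y values are assumptions):

```python
# Made-up data for a simple (one-x) regression.
x = [1, 2, 3, 4, 5]
y = [2.1, 3.9, 6.2, 8.1, 9.8]
n = len(x)
xbar, ybar = sum(x) / n, sum(y) / n
Sxx = sum((xi - xbar) ** 2 for xi in x)
Sxy = sum((xi - xbar) * (yi - ybar) for xi, yi in zip(x, y))
b1 = Sxy / Sxx                 # slope estimate
b0 = ybar - b1 * xbar          # intercept estimate
resid = [yi - (b0 + b1 * xi) for xi, yi in zip(x, y)]
SSE = sum(e * e for e in resid)
SSR = sum((b0 + b1 * xi - ybar) ** 2 for xi in x)
MSE = SSE / (n - 2)
t = b1 / (MSE / Sxx) ** 0.5    # t ratio for the slope
F = (SSR / 1) / MSE            # F ratio for the overall model
print(round(F, 4), round(t ** 2, 4))   # identical up to rounding
```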
CI for predicting a particular outcome of the dependent variable given the values for each independent variable= the prediction interval for Y
In Regression models, forecast intervals are a type of prediction interval
p-value < α: reject H0 and conclude the model is significant.
The prediction interval is wider when the values of the independent variables are not near their sample means.
Analysis of variance uses these to keep extraneous factors from confounding the test results: replication, balanced design, randomized design.
Characteristic of nonparametric stats= rely on ordinal measures
Difference between sign test and t-test is that sign test does not assume a normal distribution of the sampling statistic
The t test is justified when the sample size is large enough for the central limit theorem to apply.
Sign Test is always justified.
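The sign test's distribution-free nature can be sketched with the binomial calculation it rests on; the paired differences below are made up for illustration:

```python
from math import comb

# Made-up nonzero paired differences; only their signs matter.
diffs = [1.2, -0.4, 0.8, 2.1, -0.3, 0.9, 1.5, 0.7]
n = len(diffs)
plus = sum(d > 0 for d in diffs)               # count of positive signs
# Under H0 (median difference = 0) each sign is a fair coin flip,
# so plus ~ Binomial(n, 0.5) -- no normality assumption needed.
k = min(plus, n - plus)
p_two_sided = 2 * sum(comb(n, i) for i in range(k + 1)) / 2 ** n
print(plus, round(p_two_sided, 4))
```

Because the null distribution is exactly binomial for any continuous population, the sign test is always justified, while the t test leans on normality or a large-sample CLT argument.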
If all classical regression model assumptions are valid, the least-squares estimators are unbiased, have the minimum variance among all unbiased estimators, and are therefore efficient.
A regression model is more likely to test significant if n is large, there are few independent variables, R² is large, and the α used for the test is large.
If sample sizes are equal, moderate departures from normality do not invalidate ANOVA test results, and moderate differences among treatment standard deviations do not invalidate them either.
Regression models are similar to analysis of variance in that both report mean-square ratios in an analysis-of-variance table.
Parametric test = uses the mean. Assumptions of the classical regression model: the parameters are constant; E(ε) = 0; ε is uncorrelated with each of the independent variables in the model; and each ε is uncorrelated with every other ε.
The null hypothesis associated with the ANOVA F test is stated using μ (e.g., H0: μ1 = μ2 = … = μk).
Multicollinearity = can prevent individual variables from testing significant.
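One common way to detect it is the variance inflation factor. This is a rough two-predictor sketch with made-up data, using the two-variable shortcut VIF = 1 / (1 − r²):

```python
# Two made-up predictors that are nearly copies of each other.
x1 = [1.0, 2.0, 3.0, 4.0, 5.0]
x2 = [1.1, 2.0, 3.2, 3.9, 5.1]
n = len(x1)
m1, m2 = sum(x1) / n, sum(x2) / n
cov  = sum((a - m1) * (b - m2) for a, b in zip(x1, x2))
var1 = sum((a - m1) ** 2 for a in x1)
var2 = sum((b - m2) ** 2 for b in x2)
r = cov / (var1 * var2) ** 0.5    # correlation between the predictors
vif = 1 / (1 - r * r)             # variance inflation factor
print(round(r, 3), round(vif, 1)) # a large VIF flags multicollinearity
```

The inflated variance of the slope estimates is what keeps individually important variables from testing significant.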
