
# Phillips-Perron (PP) Unit Root Tests

The Dickey–Fuller test involves fitting the regression model

Δy_t = π y_{t-1} + (constant, time trend) + u_t    (1)

by ordinary least squares (OLS), but serial correlation in u_t presents a problem. To account for this, the augmented Dickey–Fuller (ADF) test regression includes lags of the first differences of y_t. The Phillips–Perron (PP) test takes a different route: it fits an unaugmented regression and the results are used to calculate adjusted test statistics. Strictly, the PP test estimates not (1) but the levels equation

y_t = ρ y_{t-1} + (constant, time trend) + u_t    (2)

In (1), u_t is I(0) and may be heteroskedastic. The PP tests correct for any serial correlation and heteroskedasticity in the errors u_t non-parametrically, by modifying the Dickey–Fuller test statistics themselves. Phillips and Perron's test statistics can be viewed as Dickey–Fuller statistics that have been made robust to serial correlation by using the Newey–West (1987) heteroskedasticity- and autocorrelation-consistent (HAC) covariance matrix estimator. Under the null hypothesis that π = 0, the PP Z_t and Z_ρ statistics have the same asymptotic distributions as the ADF t-statistic and the normalized-bias statistic, respectively.

One advantage of the PP tests over the ADF tests is that they are robust to general forms of heteroskedasticity in the error term u_t. Another is that the user does not have to specify a lag length for the test regression.

We have not dealt with it so far, but the Dickey–Fuller approach in fact produces two test statistics. The normalized bias T(ρ̂ − 1) has a well-defined limiting distribution that does not depend on nuisance parameters, so it too can be used as a test statistic for the null hypothesis H0: ρ = 1. This is the second Dickey–Fuller test, and it relates to Z_ρ in Phillips and Perron.
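To make the two Dickey–Fuller statistics concrete, here is a minimal NumPy sketch (the variable names and the simulated series are my own, not from the source): it fits the levels regression y_t = a + ρ y_{t-1} + u_t on a simulated random walk and computes both the t-type statistic (ρ̂ − 1)/se(ρ̂) and the normalized bias T(ρ̂ − 1):

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulate a pure random walk, i.e. data generated under the null rho = 1.
n = 500
y = np.cumsum(rng.standard_normal(n))

# Levels regression (2): y_t on a constant and y_{t-1}, estimated by OLS.
Y = y[1:]                                        # y_t
X = np.column_stack([np.ones(n - 1), y[:-1]])    # [1, y_{t-1}]
beta, *_ = np.linalg.lstsq(X, Y, rcond=None)
resid = Y - X @ beta
rho_hat = beta[1]

# Conventional OLS standard error of rho_hat.
s2 = resid @ resid / (len(Y) - X.shape[1])       # residual variance
se_rho = np.sqrt(s2 * np.linalg.inv(X.T @ X)[1, 1])

# The two Dickey-Fuller statistics for H0: rho = 1.
t_stat = (rho_hat - 1.0) / se_rho                # t-type statistic
norm_bias = len(Y) * (rho_hat - 1.0)             # normalized bias T(rho_hat - 1)

print(rho_hat, t_stat, norm_bias)
```

Note that t_stat must be compared with Dickey–Fuller critical values, not standard normal ones, since its null distribution is non-standard.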

## EXTRACT FROM STATA MANUAL

Note that the regression is y on lagged y, not differenced y on lagged y. Z_t is the adjusted t statistic, as in Dickey–Fuller.

t_ρ̂ = (ρ̂ − 1)/se(ρ̂) is just the equivalent of the t statistic in the DF test. s²_n is an unbiased (OLS) estimator of the variance of the error terms.

γ̂_j = (1/n) Σ_{i=j+1}^{n} û_i û_{i−j}: when j = 0 this is a (maximum-likelihood) estimate of the variance of the error terms; when j > 0 it is an estimator of the covariance between two error terms j periods apart.
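The autocovariance estimates γ̂_j can be sketched in a few lines of NumPy (illustrative code, not from the manual; a white-noise series stands in for the regression residuals):

```python
import numpy as np

def gamma_hat(u, j):
    """Estimate the j-th autocovariance of the residuals u:
    gamma_j = (1/n) * sum_{i=j+1}^{n} u_i * u_{i-j}."""
    n = len(u)
    return (u[j:] @ u[:n - j]) / n

rng = np.random.default_rng(1)
u = rng.standard_normal(1000)   # white-noise stand-in for OLS residuals

g0 = gamma_hat(u, 0)   # ML-type variance estimate (divisor n, not n - k)
g1 = gamma_hat(u, 1)   # covariance of residuals one period apart
print(g0, g1)
```

For white noise, g0 should be near the true variance (1 here) and g1 near zero.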

q is the number of lagged covariances included. Now suppose those covariances are zero, i.e. there is no autocorrelation between the error terms. In this case we can set γ̂_j = 0 for every j > 0. Hence the second term in

λ̂² = γ̂_0 + 2 Σ_{j=1}^{q} (1 − j/(q+1)) γ̂_j

disappears, and λ̂² = γ̂_0. Consequently sqrt(γ̂_0/λ̂²) = 1 and λ̂² − γ̂_0 = 0, so

Z_t = sqrt(γ̂_0/λ̂²) t_ρ̂ − (λ̂² − γ̂_0) n se(ρ̂) / (2 λ̂ s)

reduces to Z_t = t_ρ̂. This is just the t statistic in the standard Dickey–Fuller equation. Hence, when there is no autocorrelation between the error terms, this part of the Phillips–Perron test is equal to the Dickey–Fuller statistic, albeit one estimated on (2) rather than (1). This perspective helps us understand that the PP test corrects the DF test for autocorrelation among the error terms non-parametrically (i.e. outside of a regression framework). The critical values have the same distribution as the Dickey–Fuller statistic.
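This reduction can be checked numerically. The sketch below is my own code, with the Z_t formula written as I read it from the Stata manual's presentation: it computes Z_t with Bartlett weights and confirms that with q = 0 (an empty weighted sum, so λ̂² = γ̂_0) Z_t equals the plain DF t statistic:

```python
import numpy as np

rng = np.random.default_rng(2)
y = np.cumsum(rng.standard_normal(400))   # random walk under the null

# OLS of the levels regression: y_t = a + rho * y_{t-1} + u_t
n = len(y) - 1
X = np.column_stack([np.ones(n), y[:-1]])
beta, *_ = np.linalg.lstsq(X, y[1:], rcond=None)
u = y[1:] - X @ beta
rho_hat = beta[1]
s2 = u @ u / (n - 2)                                  # OLS residual variance
se_rho = np.sqrt(s2 * np.linalg.inv(X.T @ X)[1, 1])
t_stat = (rho_hat - 1.0) / se_rho                     # plain DF t statistic

def z_t(q):
    """PP Z_t with Bartlett weights; q = 0 gives lam2 = gam[0], so Z_t = t."""
    gam = [(u[j:] @ u[:n - j]) / n for j in range(q + 1)]
    lam2 = gam[0] + 2 * sum((1 - j / (q + 1)) * gam[j] for j in range(1, q + 1))
    return (np.sqrt(gam[0] / lam2) * t_stat
            - (lam2 - gam[0]) * n * se_rho / (2 * np.sqrt(lam2) * np.sqrt(s2)))

print(t_stat, z_t(0), z_t(4))
```

With q > 0 the two statistics differ only through the non-parametric correction, which is exactly the point of the PP construction.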

Although we have not worked through it, when there is no autocorrelation between the error terms, so that λ̂² and γ̂_0 are equal, the second term in the other PP statistic,

Z_ρ = n(ρ̂ − 1) − (1/2) (n² se(ρ̂)² / s²) (λ̂² − γ̂_0),

likewise collapses to zero, because λ̂² − γ̂_0 = 0. In this case Z_ρ reduces to n(ρ̂ − 1), the normalized-bias statistic from the Dickey–Fuller test.
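A matching sketch for Z_ρ (again my own illustrative code, with the formula written as I read it from the Stata manual): when λ̂² = γ̂_0 the correction term vanishes and Z_ρ reduces to the normalized bias n(ρ̂ − 1):

```python
import numpy as np

rng = np.random.default_rng(3)
y = np.cumsum(rng.standard_normal(400))   # random walk under the null

# OLS of the levels regression: y_t = a + rho * y_{t-1} + u_t
n = len(y) - 1
X = np.column_stack([np.ones(n), y[:-1]])
beta, *_ = np.linalg.lstsq(X, y[1:], rcond=None)
u = y[1:] - X @ beta
rho_hat = beta[1]
s2 = u @ u / (n - 2)                                  # OLS residual variance
var_rho = s2 * np.linalg.inv(X.T @ X)[1, 1]           # se(rho_hat)^2
norm_bias = n * (rho_hat - 1.0)                       # DF normalized bias

def z_rho(q):
    """PP Z_rho; the second term vanishes when lam2 = gam[0] (e.g. q = 0)."""
    gam = [(u[j:] @ u[:n - j]) / n for j in range(q + 1)]
    lam2 = gam[0] + 2 * sum((1 - j / (q + 1)) * gam[j] for j in range(1, q + 1))
    return norm_bias - (n**2 * var_rho / s2) * (lam2 - gam[0]) / 2

print(norm_bias, z_rho(0), z_rho(4))
```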