The HCCME= option in the MODEL statement selects the type of heteroscedasticity-consistent covariance matrix. In the presence of heteroscedasticity, the covariance matrix has a complicated structure that can result in inefficient OLS estimates and biased estimates of their covariance matrix. The variances for cross-sectional and time dummy variables and the covariances with or between the dummy variables are not corrected for heteroscedasticity in the one-way and two-way models. Whether or not the HCCME= option is specified, these variances are the same. For the two-way models, the variance and the covariances for the intercept are not corrected.[5]
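For example, the option is specified in the MODEL statement like this (the data set and variable names here are hypothetical):

   proc panel data=mydata;
      id firm year;                       /* cross section, then time index */
      model y = x1 x2 / fixone hccme=1;   /* HCCME=1 robust covariance      */
   run;

Adding the CLUSTER option to the MODEL statement requests the cluster-robust versions that are described later in this section.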
Consider the simple linear model:

$$\mathbf{y} = \mathbf{X}\boldsymbol{\beta} + \boldsymbol{\epsilon}$$
This discussion parallels the discussion in Davidson and MacKinnon (1993, pp. 548–562). For panel data models, heteroscedasticity-corrected covariance matrix estimation (HCCME) is applied to the transformed data ($\tilde{\mathbf{y}}$ and $\tilde{\mathbf{X}}$). In other words, first the random or fixed effects are removed by transforming the data,[6] and then the heteroscedasticity (and also the autocorrelation, if the HAC option is specified) is corrected in the residual. The assumptions that make the linear regression estimator the best linear unbiased estimator (BLUE) are $\mathrm{E}(\boldsymbol{\epsilon}) = \mathbf{0}$ and $\mathrm{E}(\boldsymbol{\epsilon}\boldsymbol{\epsilon}') = \boldsymbol{\Omega}$, where $\boldsymbol{\Omega}$ has the simple structure $\sigma^2\mathbf{I}$. Heteroscedasticity results in a general covariance structure, and it is not possible to simplify $\boldsymbol{\Omega}$. The result is the following:

$$\hat{\boldsymbol{\beta}} = (\mathbf{X}'\mathbf{X})^{-1}\mathbf{X}'\mathbf{y} = (\mathbf{X}'\mathbf{X})^{-1}\mathbf{X}'(\mathbf{X}\boldsymbol{\beta} + \boldsymbol{\epsilon}) = \boldsymbol{\beta} + (\mathbf{X}'\mathbf{X})^{-1}\mathbf{X}'\boldsymbol{\epsilon}$$
As long as the following is true, you are assured that the OLS estimator is consistent and unbiased:

$$\mathrm{E}(\mathbf{X}'\boldsymbol{\epsilon}) = \mathbf{0}$$
If the regressors are nonrandom, then it is possible to write the variance of the estimated $\boldsymbol{\beta}$ as

$$\mathrm{Var}(\hat{\boldsymbol{\beta}}) = (\mathbf{X}'\mathbf{X})^{-1}\mathbf{X}'\boldsymbol{\Omega}\mathbf{X}(\mathbf{X}'\mathbf{X})^{-1}$$
You can ameliorate the effect of structure in the covariance matrix by using generalized least squares (GLS), provided that $\boldsymbol{\Omega}$ can be calculated. Using $\boldsymbol{\Omega}$, you premultiply both sides of the regression equation,

$$\mathbf{L}^{-1}\mathbf{y} = \mathbf{L}^{-1}\mathbf{X}\boldsymbol{\beta} + \mathbf{L}^{-1}\boldsymbol{\epsilon}$$

where $\mathbf{L}$ denotes the Cholesky root of $\boldsymbol{\Omega}$ (that is, $\boldsymbol{\Omega} = \mathbf{L}\mathbf{L}'$ with $\mathbf{L}$ lower triangular).
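The point of this transformation is that the transformed errors satisfy the classical assumptions:

$$\mathrm{E}\left[\mathbf{L}^{-1}\boldsymbol{\epsilon}\left(\mathbf{L}^{-1}\boldsymbol{\epsilon}\right)'\right] = \mathbf{L}^{-1}\boldsymbol{\Omega}\mathbf{L}^{-1\prime} = \mathbf{L}^{-1}\mathbf{L}\mathbf{L}'\mathbf{L}^{-1\prime} = \mathbf{I}$$

so OLS applied to the transformed equation is again BLUE.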
The resulting variance expression for the GLS estimator is

$$\mathrm{Var}(\hat{\boldsymbol{\beta}}_{\mathrm{GLS}}) = (\mathbf{X}'\mathbf{L}^{-1\prime}\mathbf{L}^{-1}\mathbf{X})^{-1} = (\mathbf{X}'\boldsymbol{\Omega}^{-1}\mathbf{X})^{-1}$$
The difference in variance between the OLS estimator and the GLS estimator can be written as

$$(\mathbf{X}'\mathbf{X})^{-1}\mathbf{X}'\boldsymbol{\Omega}\mathbf{X}(\mathbf{X}'\mathbf{X})^{-1} - (\mathbf{X}'\boldsymbol{\Omega}^{-1}\mathbf{X})^{-1}$$
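This difference can also be verified directly: with $\mathbf{A} = (\mathbf{X}'\mathbf{X})^{-1}\mathbf{X}' - (\mathbf{X}'\boldsymbol{\Omega}^{-1}\mathbf{X})^{-1}\mathbf{X}'\boldsymbol{\Omega}^{-1}$, expanding the product and cancelling terms gives

$$\mathbf{A}\boldsymbol{\Omega}\mathbf{A}' = (\mathbf{X}'\mathbf{X})^{-1}\mathbf{X}'\boldsymbol{\Omega}\mathbf{X}(\mathbf{X}'\mathbf{X})^{-1} - (\mathbf{X}'\boldsymbol{\Omega}^{-1}\mathbf{X})^{-1}$$

which is positive semidefinite because $\boldsymbol{\Omega}$ is.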
By the Gauss-Markov theorem, the difference matrix must be positive definite under most circumstances (it is zero when OLS and GLS coincide, that is, when the usual classical regression assumptions are met). Thus, OLS is not efficient under a general error structure, but it is crucial to realize that OLS does not produce biased results. It would therefore suffice to estimate a consistent covariance matrix and use it with the OLS $\hat{\boldsymbol{\beta}}$. Estimation of the $\boldsymbol{\Omega}$ matrix is certainly not simple. The matrix is square and has $M(M+1)/2$ distinct elements; unless some sort of structure is assumed, it becomes an impossible problem to solve. However, the heteroscedasticity can have quite a general structure. White (1980) shows that it is not necessary to have a consistent estimate of $\boldsymbol{\Omega}$. On the contrary, it suffices to calculate an estimate of the middle expression in the variance of $\hat{\boldsymbol{\beta}}$. That is, you need an estimate of

$$\boldsymbol{\Lambda} = \mathbf{X}'\boldsymbol{\Omega}\mathbf{X}$$

This matrix, $\boldsymbol{\Lambda}$, is easier to estimate because its dimension is $K$, the number of regressors. PROC PANEL provides the following classical HCCME estimators for $\boldsymbol{\Lambda}$.
The $\boldsymbol{\Lambda}$ matrix is approximated as follows:
HCCME=N0:

$$\hat{\sigma}^2\mathbf{X}'\mathbf{X}$$
This is the simple OLS estimator. If you do not specify the HCCME= option, PROC PANEL defaults to this estimator.
HCCME=0:

$$\sum_{i=1}^{N}\sum_{t=1}^{T_i}\hat{\epsilon}_{it}^2\,\mathbf{x}_{it}'\mathbf{x}_{it}$$
Here $N$ is the number of cross sections and $T_i$ is the number of observations in the $i$th cross section. The residual $\hat{\epsilon}_{it}$ is from the $t$th observation in the $i$th cross section, and $\mathbf{x}_{it}$ is the corresponding row of the matrix $\mathbf{X}$. If the CLUSTER option is specified, one extra term is added to the preceding equation so that the estimator of the matrix $\boldsymbol{\Lambda}$ is

$$\hat{\boldsymbol{\Lambda}} = \sum_{i=1}^{N}\sum_{t=1}^{T_i}\hat{\epsilon}_{it}^2\,\mathbf{x}_{it}'\mathbf{x}_{it} + \sum_{i=1}^{N}\sum_{t=1}^{T_i}\sum_{s\neq t}\hat{\epsilon}_{it}\hat{\epsilon}_{is}\,\mathbf{x}_{it}'\mathbf{x}_{is}$$
The formula is the same as the robust variance matrix estimator in Wooldridge (2002, p. 152), and it is derived under the assumptions of section 7.3.2 of Wooldridge (2002).
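To make the algebra concrete, the following PROC IML sketch, entirely separate from PROC PANEL's implementation, computes the HCCME=0 estimator and its clustered version on simulated data (all names and the data-generating step are illustrative):

   proc iml;
      /* Simulate a balanced panel: N cross sections with T observations each */
      call randseed(1);
      N = 50;  T = 10;  M = N*T;
      X = j(M, 2, 1);                        /* column 1 is the intercept     */
      z = j(M, 1, .);   call randgen(z, "Normal");
      X[, 2] = z;
      e = j(M, 1, .);   call randgen(e, "Normal");
      e = e # (1 + abs(z));                  /* error variance grows with |z| */
      y = X * {1, 2} + e;

      XpXi = inv(X` * X);
      b    = XpXi * X` * y;                  /* OLS estimates                 */
      r    = y - X * b;                      /* OLS residuals                 */

      /* HCCME=0: Lambda-hat = sum of e_it^2 * x_it` * x_it over all rows    */
      Lam = X` * (X # (r # r));
      V0  = XpXi * Lam * XpXi;               /* sandwich covariance matrix    */

      /* CLUSTER version: the two sums collapse to sum_i (X_i`r_i)(X_i`r_i)` */
      LamC = j(2, 2, 0);
      do i = 1 to N;
         rows = ((i-1)*T + 1) : (i*T);       /* rows of cross section i       */
         g    = X[rows, ]` * r[rows];
         LamC = LamC + g * g`;
      end;
      Vc = XpXi * LamC * XpXi;

      print (sqrt(vecdiag(V0)))[label="HCCME=0 SE"],
            (sqrt(vecdiag(Vc)))[label="Clustered SE"];
   quit;

The loop exploits the fact that the two sums in the clustered formula collapse to $\sum_{i=1}^{N}(\mathbf{X}_i'\hat{\boldsymbol{\epsilon}}_i)(\mathbf{X}_i'\hat{\boldsymbol{\epsilon}}_i)'$.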
HCCME=1:

$$\frac{M}{M-K}\sum_{i=1}^{N}\sum_{t=1}^{T_i}\hat{\epsilon}_{it}^2\,\mathbf{x}_{it}'\mathbf{x}_{it}$$

Here $M$ is the total number of observations, $\sum_{i=1}^{N}T_i$, and $K$ is the number of parameters. If the CLUSTER option is specified, the estimator becomes

$$\hat{\boldsymbol{\Lambda}} = \frac{M}{M-K}\left[\sum_{i=1}^{N}\sum_{t=1}^{T_i}\hat{\epsilon}_{it}^2\,\mathbf{x}_{it}'\mathbf{x}_{it} + \sum_{i=1}^{N}\sum_{t=1}^{T_i}\sum_{s\neq t}\hat{\epsilon}_{it}\hat{\epsilon}_{is}\,\mathbf{x}_{it}'\mathbf{x}_{is}\right]$$

The formula is similar to the robust variance matrix estimator in Wooldridge (2002, p. 152) with the heteroscedasticity adjustment term $M/(M-K)$.
HCCME=2:

$$\sum_{i=1}^{N}\sum_{t=1}^{T_i}\frac{\hat{\epsilon}_{it}^2}{1-\hat{h}_{it}}\,\mathbf{x}_{it}'\mathbf{x}_{it}$$

The term $\hat{h}_{it}$ is the $it$th diagonal element of the hat matrix. The expression for $\hat{h}_{it}$ is $\mathbf{x}_{it}(\mathbf{X}'\mathbf{X})^{-1}\mathbf{x}_{it}'$. The hat matrix attempts to adjust the estimates for the presence of influence or leverage points. If the CLUSTER option is specified, the estimator becomes

$$\hat{\boldsymbol{\Lambda}} = \sum_{i=1}^{N}\sum_{t=1}^{T_i}\frac{\hat{\epsilon}_{it}^2}{1-\hat{h}_{it}}\,\mathbf{x}_{it}'\mathbf{x}_{it} + \sum_{i=1}^{N}\sum_{t=1}^{T_i}\sum_{s\neq t}\frac{\hat{\epsilon}_{it}\hat{\epsilon}_{is}}{\sqrt{(1-\hat{h}_{it})(1-\hat{h}_{is})}}\,\mathbf{x}_{it}'\mathbf{x}_{is}$$

The formula is similar to the robust variance matrix estimator in Wooldridge (2002, p. 152) with the heteroscedasticity adjustment.
HCCME=3:

$$\sum_{i=1}^{N}\sum_{t=1}^{T_i}\frac{\hat{\epsilon}_{it}^2}{(1-\hat{h}_{it})^2}\,\mathbf{x}_{it}'\mathbf{x}_{it}$$

If the CLUSTER option is specified, the estimator becomes

$$\hat{\boldsymbol{\Lambda}} = \sum_{i=1}^{N}\sum_{t=1}^{T_i}\frac{\hat{\epsilon}_{it}^2}{(1-\hat{h}_{it})^2}\,\mathbf{x}_{it}'\mathbf{x}_{it} + \sum_{i=1}^{N}\sum_{t=1}^{T_i}\sum_{s\neq t}\frac{\hat{\epsilon}_{it}\hat{\epsilon}_{is}}{(1-\hat{h}_{it})(1-\hat{h}_{is})}\,\mathbf{x}_{it}'\mathbf{x}_{is}$$

The formula is similar to the robust variance matrix estimator in Wooldridge (2002, p. 152) with the heteroscedasticity adjustment.
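The HCCME=1, 2, and 3 variants differ only in how each squared residual is weighted. Continuing the hypothetical PROC IML sketch above (these lines assume the X, r, XpXi, and M from that session and belong before its QUIT statement):

      K  = ncol(X);
      h  = ((X * XpXi) # X)[, +];            /* diagonal of the hat matrix    */
      w1 = j(M, 1, M/(M-K));                 /* HCCME=1 weight: M/(M-K)       */
      w2 = 1/(1 - h);                        /* HCCME=2 weight: 1/(1-h)       */
      w3 = 1/((1 - h)##2);                   /* HCCME=3 weight: 1/(1-h)^2     */
      /* sandwich for any weight w: V = XpXi * X`*(X # (w # r # r)) * XpXi    */
      V3 = XpXi * (X` * (X # (w3 # r # r))) * XpXi;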
HCCME=4: PROC PANEL includes this option for the calculation of the Arellano (1987) version of the White (1980) HCCME in the panel setting. Arellano's insight is that there are $N$ covariance matrices in a panel, and each matrix corresponds to a cross section. Forming the White HCCME for each cross section, you need to take only the average of those $N$ estimators. The details of the estimation follow. First, you arrange the data such that the first cross section occupies the first $T_1$ observations. Then, you treat the cross sections as separate regressions with the form

$$\mathbf{y}_i = \alpha_i\mathbf{j}_i + \mathbf{X}_{is}\boldsymbol{\beta}_s + \boldsymbol{\epsilon}_i$$

where the parameter estimates $\hat{\alpha}_i$ and $\hat{\boldsymbol{\beta}}_s$ are the result of least squares dummy variables (LSDV) or within estimator regressions, and $\mathbf{j}_i$ is a vector of ones of length $T_i$. The estimate of the $i$th cross section's $\mathbf{X}'\boldsymbol{\epsilon}\boldsymbol{\epsilon}'\mathbf{X}$ matrix (where the $s$ subscript indicates that no constant column has been suppressed, to avoid confusion) is

$$\hat{\boldsymbol{\Lambda}}_i = \mathbf{X}_{is}'\hat{\boldsymbol{\epsilon}}_i\hat{\boldsymbol{\epsilon}}_i'\mathbf{X}_{is}$$

The estimate for the whole sample is

$$\hat{\boldsymbol{\Lambda}} = \sum_{i=1}^{N}\hat{\boldsymbol{\Lambda}}_i$$

The Arellano standard error is in fact a White-Newey-West estimator with constant and equal weight on each component. In the between estimators, specifying HCCME=4 returns the HCCME=0 result because there is no "other" variable to group by.
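For instance, a request for the Arellano estimator in a one-way fixed-effects model might look like this (the data set and variable names are hypothetical):

   proc panel data=mydata;
      id firm year;                       /* cross section, then time index */
      model y = x1 x2 / fixone hccme=4;   /* Arellano (1987) estimator      */
   run;

Because each cross section contributes one $\hat{\boldsymbol{\Lambda}}_i$ term, this estimator is robust to arbitrary within-cross-section correlation as well as to heteroscedasticity, which is why it is a common choice for fixed-effects models with many cross sections and short time series.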
In their discussion, Davidson and MacKinnon (1993, p. 554) argue that HCCME=1 should always be preferred to HCCME=0. Although an HCCME= option value of 3 is generally preferred to 2, and 2 is preferred to 1, the calculation of HCCME=1 is as simple as the calculation of HCCME=0. Therefore, HCCME=1 is preferred when the calculation of the hat matrix is too tedious.
All HCCMEs have well-defined asymptotic properties. The small-sample properties are not well known, and care must be exercised when sample sizes are small.
The HCCME estimate of $\boldsymbol{\Lambda}$ is used to derive the covariance matrices for the fixed effects and the Lagrange multiplier standard errors. Robust estimates of the covariance matrix for $\hat{\boldsymbol{\beta}}$ imply robust covariance matrices for all other parameters.
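As footnote [5] notes, HCCME standard errors for the dummy variables and the intercept can still be obtained by estimating the pooled model with explicit dummy variables. A hypothetical sketch, where d1-d49 are pre-generated cross-section dummies and all names are illustrative:

   proc panel data=mydata_dummies;
      id firm year;
      /* 49 dummies plus the intercept reproduce the 50-firm LSDV fit */
      model y = d1-d49 x1 x2 / pooled hccme=3;
   run;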
[5] The dummy variables are removed by the within transformations, so their variances and covariances cannot be calculated in the same way as those of the other regressors. They are recovered by the formulas in the sections One-Way Fixed-Effects Model (FIXONE and FIXONETIME Options) and Two-Way Fixed-Effects Model (FIXTWO Option). Those formulas assume homoscedasticity, so they do not apply when the HCCME= option is used. Therefore, the standard errors, variances, and covariances that are reported for the dummy variables are the uncorrected ones; the HCCME= option is ignored for them. HCCME standard errors for the dummy variables and the intercept can be calculated by the dummy variable approach with the pooled model.
[6] For more information about transforming the data, see the sections One-Way Fixed-Effects Model (FIXONE and FIXONETIME Options), Two-Way Fixed-Effects Model (FIXTWO Option), One-Way Random-Effects Model (RANONE Option), and Two-Way Random-Effects Model (RANTWO Option).