Shared Concepts

Adaptive LASSO Selection

This section applies to the glm action in the regression action set.

Adaptive LASSO selection is a modification of LASSO selection. In adaptive LASSO selection, weights are applied to each of the parameters in forming the LASSO constraint (Zou 2006). More precisely, suppose that the response $\mathbf{y}$ has mean 0 and the regressors $\mathbf{x}$ are scaled to have mean 0 and a common standard deviation. Furthermore, suppose that you can find a suitable estimator $\hat{\boldsymbol{\beta}}$ of the parameters in the true model and you define a weight vector by $\mathbf{w} = 1/|\hat{\boldsymbol{\beta}}|^{\gamma}$, where $\gamma \ge 0$. Then the adaptive LASSO regression coefficients $\boldsymbol{\beta} = (\beta_1, \beta_2, \ldots, \beta_m)$ are the solution to the following constrained optimization problem:

$$\min_{\boldsymbol{\beta}} \; \left\| \mathbf{y} - \mathbf{X}\boldsymbol{\beta} \right\|^2 \quad \text{subject to} \quad \sum_{j=1}^{m} \left| w_j \beta_j \right| \le t$$

The solution to the unconstrained least squares problem is used as the estimator $\hat{\boldsymbol{\beta}}$. This choice is appropriate unless collinearity is a concern: if the regressors are collinear or nearly collinear, then Zou (2006) suggests using a ridge regression estimate to form the adaptive weights instead.
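The constrained problem above is commonly solved in its equivalent penalized (Lagrangian) form, where it reduces to an ordinary LASSO after rescaling each column of $\mathbf{X}$ by $1/w_j$ and unscaling the resulting coefficients. The following is a minimal sketch of that reduction using scikit-learn; the synthetic data, the choice $\gamma = 1$, and the penalty value `alpha=0.1` are illustrative assumptions, not part of the original text:

```python
import numpy as np
from sklearn.linear_model import LinearRegression, Lasso

rng = np.random.default_rng(0)
n, m = 200, 6
X = rng.standard_normal((n, m))
# Center the regressors and scale them to a common standard deviation
X = (X - X.mean(axis=0)) / X.std(axis=0)
beta_true = np.array([3.0, -2.0, 0.0, 0.0, 1.5, 0.0])
y = X @ beta_true + rng.standard_normal(n)
y = y - y.mean()  # response with mean 0

# Step 1: pilot estimator -- unconstrained least squares
beta_ols = LinearRegression(fit_intercept=False).fit(X, y).coef_

# Step 2: adaptive weights w_j = 1 / |beta_ols_j|^gamma  (gamma = 1 here)
gamma = 1.0
w = 1.0 / np.abs(beta_ols) ** gamma

# Step 3: rescale columns by 1/w_j, solve a plain LASSO in penalized
# form, then unscale the coefficients to recover the adaptive estimates
lasso = Lasso(alpha=0.1, fit_intercept=False).fit(X / w, y)
beta_adaptive = lasso.coef_ / w

print(beta_adaptive)
```

Because the pilot OLS estimates of the truly zero coefficients are small, their weights are large, and the adaptive penalty drives those coefficients to exactly zero while only mildly shrinking the strong ones.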

Adaptive LASSO enjoys the oracle properties; namely, asymptotically it performs as well as if the true underlying model were given in advance.

Last updated: March 05, 2026