ABSTRACT The abundance of available digital big data has created new challenges in identifying relevant variables for regression models. One statistical problem that has gained relevance in the era of big data is high-dimensional statistical inference, in which the number of variables greatly exceeds the number of observations. Typically, prediction errors in linear regression skyrocket as the number of included variables approaches the number of observations, and ordinary least squares (OLS) regression no longer works in a high-dimensional scenario. A feasible solution is regularized estimation, including the Least Absolute Shrinkage and Selection Operator (Lasso), which we introduce to communication scholars here. We present the statistical background of this technique, which combines estimation and variable selection in a single step and helps identify relevant variables for regression models in high-dimensional scenarios. We contrast the Lasso with two alternative strategies for selecting variables for regression models, namely, a theory-based "subset selection" of variables and a nonselective "all in" strategy. The simulation shows that the Lasso produces lower and relatively more stable prediction errors than the two alternative variable selection strategies; we therefore recommend it, especially in the high-dimensional settings typical of big data analysis.
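The abstract itself contains no code, so the following is a minimal Python sketch, not the authors' simulation, illustrating the scenario described: with far more candidate variables than observations (p >> n), a cross-validated Lasso performs estimation and variable selection simultaneously, while an "all in" OLS fit degenerates. All data, dimensions, and parameter choices here are illustrative assumptions.

```python
# Minimal sketch (assumed setup, not the article's simulation code):
# Lasso vs. "all in" OLS in a high-dimensional regression, p >> n.
import numpy as np
from sklearn.linear_model import LassoCV, LinearRegression
from sklearn.metrics import mean_squared_error
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n, p, k = 100, 500, 5          # observations, candidate variables, truly relevant variables
X = rng.standard_normal((n, p))
beta = np.zeros(p)
beta[:k] = 2.0                 # only the first k predictors actually matter
y = X @ beta + rng.standard_normal(n)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

# Lasso with cross-validated penalty: estimation and variable selection in one step
lasso = LassoCV(cv=5).fit(X_tr, y_tr)
selected = np.flatnonzero(lasso.coef_)     # indices of variables the Lasso kept
print("Lasso test MSE:", mean_squared_error(y_te, lasso.predict(X_te)))
print("Variables selected by Lasso:", selected)

# "All in" OLS: with p > n the least-squares fit is degenerate and overfits,
# so its out-of-sample prediction error is far higher than the Lasso's.
ols = LinearRegression().fit(X_tr, y_tr)
print("OLS ('all in') test MSE:", mean_squared_error(y_te, ols.predict(X_te)))
```

In this toy setup the Lasso typically recovers the handful of truly relevant predictors and yields a much lower test error than the unregularized fit, mirroring the abstract's conclusion about prediction error in high-dimensional settings.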