A current strand of research in high-dimensional statistics deals with robustifying the methodology with respect to deviations from the pervasive light-tail assumptions. In this article, we consider a linear mean regression model with random design and potentially heteroscedastic, heavy-tailed errors, and investigate support estimation and sign recovery. We use a strictly convex, smooth variant of the Huber loss function with tuning parameters depending on the parameters of the problem, as well as the adaptive LASSO penalty for computational efficiency. For the resulting estimator, we show sign-consistency and optimal rates of convergence in the $\ell_\infty$ norm as in the homoscedastic, light-tailed setting. In our simulations, we also connect to the recent literature on variable selection with the thresholded LASSO and false discovery rate control using knockoffs, and indicate the relevance of the Donoho-Tanner transition curve for variable selection. The simulations illustrate the favorable numerical performance of the proposed methodology.