In this paper we demonstrate how the $\ell_p$-regularized univariate quadratic loss function can be optimized effectively (for $0 \le p \le 1$) without approximating the penalty term, and we provide an analytical solution for $p = \frac{1}{2}$. We then adapt this approach to important multivariate cases such as linear and logistic regression, using the Coordinate Descent algorithm. Finally, we compare the sample complexity of $\ell_1$-regularized models with that of $\ell_p$-regularized models, $0 \le p < 1$, on artificial and real datasets.
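For concreteness, the univariate problem referred to above can be sketched in the following standard form; the symbols $z$ (the unregularized minimizer) and $\lambda > 0$ (the penalty weight) are our notation for illustration, not taken from the text:
\begin{equation*}
  \hat{\beta}(z;\lambda,p) \;=\; \operatorname*{arg\,min}_{\beta \in \mathbb{R}} \;
  \tfrac{1}{2}\,(\beta - z)^2 \;+\; \lambda\, |\beta|^{p},
  \qquad 0 \le p \le 1 .
\end{equation*}
For $p = 1$ this reduces to the familiar soft-thresholding operator; for $0 \le p < 1$ the penalty is non-convex, which is why a closed-form treatment of special cases such as $p = \frac{1}{2}$ is of interest.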