We study the complexity of learning and approximation of self-bounding functions over the uniform distribution on the Boolean hypercube {0,1}^n. Informally, a function f : {0,1}^n → R is self-bounding if for every x ∈ {0,1}^n, f(x) upper bounds the sum of all n marginal decreases in the value of the function at x. Self-bounding functions include such well-known classes of functions as submodular and fractionally subadditive (XOS) functions. They were introduced by Boucheron et al. (2010) in the context of concentration-of-measure inequalities. Our main result is a nearly tight ℓ_1-approximation of self-bounding functions by low-degree juntas. Specifically, every self-bounding function can be ε-approximated in ℓ_1 by a polynomial of degree Õ(1/ε) over 2^{Õ(1/ε)} variables. We show that both the degree and the junta size are optimal up to logarithmic factors. Previous techniques considered the stronger ℓ_2 approximation and proved nearly tight bounds of Θ(1/ε^2) on the degree and 2^{Θ(1/ε^2)} on the number of variables. Our bounds rely on an analysis of the noise stability of self-bounding functions together with a stronger connection between noise stability and ℓ_1 approximation by low-degree polynomials. This technique can also be used to obtain tighter bounds on ℓ_1 approximation by low-degree polynomials and a faster learning algorithm for halfspaces. These results lead to improved, and in several cases almost tight, bounds for PAC and agnostic learning of self-bounding functions relative to the uniform distribution. In particular, assuming hardness of learning juntas, we show that PAC and agnostic learning of self-bounding functions have complexity n^{Θ̃(1/ε)}.
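For concreteness, the informal condition above ("f(x) upper bounds the sum of the n marginal decreases at x") admits the following standard formalization on the hypercube; this is one common way to state it and the paper's exact normalization may differ:

\[
  \sum_{i=1}^{n} \Bigl( f(x) \;-\; \min_{x_i \in \{0,1\}} f(x_1,\dots,x_i,\dots,x_n) \Bigr) \;\le\; f(x)
  \qquad \text{for every } x \in \{0,1\}^n ,
\]

where the minimum is taken over the two settings of the i-th coordinate with the remaining coordinates fixed, so each summand is the (nonnegative) decrease in the value of f obtainable by changing coordinate i alone.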