Aerospace systems must perform complex tasks in diverse, uncertain environments. The controllers of these systems must safely and intelligently adapt to unforeseen internal and external disturbances and changes online, during operation. In this work, we provide one possible avenue for safe and intelligent control while the system learns online: incorporating nonlinear stabilizing elements into the online learning-based control law. The learning elements allow adaptation, while the nonlinear control elements stabilize the system during initial learning and reject disturbances after learning converges. We use sliding mode control (SMC) as the disturbance-rejecting nonlinear control term, treating the approximation error of the learning elements as an internal disturbance. A neural network, together with direct parameter learning, is used to approximate the system state function, with online adaptation rules derived from Lyapunov's direct method. We first present a general derivation of the proposed controller, along with a complete Lyapunov proof of asymptotic stability of the zero-error equilibrium point and boundedness of the learned parameters. To show the versatility of the proposed controller, we consider two aerospace control problems: rigid-body spacecraft attitude control and quadcopter control. In spacecraft attitude control, we show that careful selection of the sliding variable used for quaternion trajectory tracking leads to effective disturbance-rejecting adaptive control; a sky-scanning satellite tracking problem is simulated to verify the controller. In quadcopter control, we decompose the control problem into multiple second-order and fourth-order subsystems. We formulate the quadcopter dynamics to match the assumptions of the proposed controller, which effectively controls the position and yaw of the quadcopter in a 3D trajectory tracking problem under wind and other aerodynamic effects. Compared with traditional model-based approaches, both applications of the proposed controller require minimal tuning and modeling, since complicated state functions are replaced by simple neural network approximations, yielding a highly accurate and robust controller. This work further shows that function approximators such as neural networks can be used stably online within a controller, and that many tools and techniques from modern machine learning may benefit the adaptive control of aerospace systems.
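
To make the controller structure concrete, the following is a minimal illustrative sketch, not the exact formulation derived in this work, assuming a scalar second-order system of the form \(\ddot{x} = f(x,\dot{x}) + g(x)u + d\) with desired trajectory \(x_d\) and tracking error \(e = x - x_d\); the gains \(\lambda\), \(k\), \(\Gamma\) and the network terms \(\hat{W}\), \(\sigma\) are introduced here purely for illustration.

% Illustrative sketch only (assumed notation), not the exact control law of this work.
\begin{align}
  s &= \dot{e} + \lambda e, && \text{sliding variable on the tracking error,} \\
  \hat{f}(x,\dot{x}) &= \hat{W}^{\top}\sigma(x,\dot{x}), && \text{neural-network approximation of the state function,} \\
  u &= g(x)^{-1}\!\left[\ddot{x}_d - \lambda\dot{e} - \hat{f}(x,\dot{x}) - k\,\mathrm{sgn}(s)\right], && \text{learning term plus SMC disturbance-rejection term,} \\
  \dot{\hat{W}} &= \Gamma\,\sigma(x,\dot{x})\,s, && \text{Lyapunov-based online adaptation rule.}
\end{align}

With an adaptation rule of this form, the weight-estimation terms cancel in the Lyapunov derivative, so the switching gain \(k\) only needs to dominate the residual neural-network approximation error and the external disturbance, which is the sense in which the approximation error is treated as an internal disturbance.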