Radio path-loss prediction is an important but computationally expensive component of wireless communications simulation. Models may require significant computation to reach a solution, or may require that information about the environment between transceivers be collected as model inputs, which can itself be computationally expensive. Despite the complexity of the underlying model that generates a path-loss solution, the resulting function is not necessarily complex, and there may be ample opportunity for compression. We introduce a method for rapidly estimating radio path loss with Feed-Forward Neural Networks (FFNNs), in which not only the path-loss model but also the map topology is implicitly encoded in the network. Since FFNN evaluation is amenable to Single Instruction, Multiple Data (SIMD) architectures, additional performance can be gained by implementing a trained model in parallel on a Graphics Processing Unit (GPU), such as those found on modern video cards. We first describe the properties of the training data used, which is either taken from measurements of the continental United States or generated with random processes. Second, we discuss the model selection process and the training algorithm used on all candidate networks. Third, we show accuracy evaluations of a number of FFNNs trained to estimate both commercial and public-domain path-loss solution sets. Finally, we describe the approach used to implement trained networks on a GPU and provide performance evaluations against conventional path-loss models running on a Central Processing Unit (CPU).
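To illustrate why FFNN evaluation maps well onto SIMD and GPU hardware, the sketch below evaluates a small dense network over a large batch of path-loss queries with a few matrix multiplies. The layer sizes, the six-value input features, and the randomly initialized weights are illustrative assumptions only, not the networks or training data described in this work.

```python
# Minimal sketch: batched forward pass of a small feed-forward network.
# All weights here are random placeholders standing in for a trained model.
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical input features per query (e.g., transmitter/receiver
# coordinates, antenna heights, carrier frequency): 6 values per row.
X = rng.uniform(size=(10_000, 6)).astype(np.float32)

# Two hidden layers of 32 units and a single path-loss output.
W1, b1 = rng.standard_normal((6, 32), dtype=np.float32), np.zeros(32, dtype=np.float32)
W2, b2 = rng.standard_normal((32, 32), dtype=np.float32), np.zeros(32, dtype=np.float32)
W3, b3 = rng.standard_normal((32, 1), dtype=np.float32), np.zeros(1, dtype=np.float32)

def forward(x):
    """Every query in the batch passes through the same sequence of dense
    matrix multiplies, which is exactly the data-parallel pattern that
    SIMD units and GPUs accelerate."""
    h = np.tanh(x @ W1 + b1)
    h = np.tanh(h @ W2 + b2)
    return h @ W3 + b3  # one predicted path-loss value (dB) per query

path_loss_db = forward(X)  # shape (10000, 1): 10,000 estimates in one pass
```

The same forward pass, expressed as batched matrix products, can be moved onto a GPU essentially unchanged, which is the property exploited in the GPU implementation discussed later.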