This study proposes a wearable glove for sign language expression based on deep learning. Wearable technology has advanced in many fields, such as medicine and education, and research on recognizing the sign language of deaf people using wearable technology is actively underway. It is difficult for a deaf person learning sign language for the first time, or for someone who has recently become deaf, to express themselves in sign language. Therefore, we design a wearable glove and manufacture a prototype based on this design to confirm that it can control the fingers. The proposed wearable glove controls the movement of its exoskeleton with a DC motor. For sign language recognition and expression, a deep learning model is trained to express 20 Korean words. Because sign language conveys meaning through movements that change over time, the recognition model must be able to learn temporal patterns. Therefore, three deep learning models capable of sequence learning, the Simple Recurrent Neural Network (SimpleRNN), Long Short-Term Memory (LSTM), and Gated Recurrent Unit (GRU), are applied to sign language recognition for the wearable glove. We compare the training results of the three models and conduct training-performance experiments as a function of the sequence length of the training data. The experimental results show that the GRU is the most effective sign language recognition model for the proposed wearable glove.
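To make the recurrent-model comparison concrete, the sketch below shows the forward pass of a GRU, the model that performed best in this study, applied to a sequence of per-frame sensor features and classified into one of 20 word classes. All dimensions (30 time steps, 10 features, 32 hidden units) and the random stand-in data are hypothetical; the paper does not specify its architecture or sensor layout, so this is a minimal NumPy illustration of the GRU recurrence rather than the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class MinimalGRU:
    """Forward pass of a single GRU layer (NumPy, for illustration only)."""
    def __init__(self, input_dim, hidden_dim):
        self.hidden_dim = hidden_dim
        def mat(rows, cols):
            return rng.normal(0.0, 0.1, (rows, cols))
        # Update gate (z), reset gate (r), and candidate state parameters.
        self.Wz, self.Uz, self.bz = mat(hidden_dim, input_dim), mat(hidden_dim, hidden_dim), np.zeros(hidden_dim)
        self.Wr, self.Ur, self.br = mat(hidden_dim, input_dim), mat(hidden_dim, hidden_dim), np.zeros(hidden_dim)
        self.Wh, self.Uh, self.bh = mat(hidden_dim, input_dim), mat(hidden_dim, hidden_dim), np.zeros(hidden_dim)

    def forward(self, x_seq):
        h = np.zeros(self.hidden_dim)
        for x in x_seq:  # one recurrence step per sensor frame
            z = sigmoid(self.Wz @ x + self.Uz @ h + self.bz)        # update gate
            r = sigmoid(self.Wr @ x + self.Ur @ h + self.br)        # reset gate
            h_cand = np.tanh(self.Wh @ x + self.Uh @ (r * h) + self.bh)
            h = (1.0 - z) * h + z * h_cand  # blend old state and candidate
        return h

# Hypothetical dimensions: 30 time steps, 10 sensor features per frame,
# 32 hidden units, 20 output classes (one per Korean word).
seq_len, input_dim, hidden_dim, n_classes = 30, 10, 32, 20
gru = MinimalGRU(input_dim, hidden_dim)
x_seq = rng.normal(size=(seq_len, input_dim))  # stand-in for glove sensor data
h_last = gru.forward(x_seq)

# Linear read-out plus softmax over the 20 word classes.
W_out = rng.normal(0.0, 0.1, (n_classes, hidden_dim))
logits = W_out @ h_last
probs = np.exp(logits - logits.max())
probs /= probs.sum()
predicted_word = int(np.argmax(probs))
```

The gating structure is why the GRU handles the study's variable-length motion sequences well: the update gate decides how much of the previous hidden state to carry forward at each frame, which mitigates the vanishing-gradient problem that limits a SimpleRNN, with fewer parameters than an LSTM.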