Completely paralyzed and quadriplegic patients cannot communicate with others. However, the imagined speech of these patients can be used to drive assistive devices through brain-computer interfacing (BCI), whose success depends on high classification accuracy. In this paper, we present an experiment on the classification of imagined words, which can provide an alternative neural path of speech communication for such patients. A 32-channel, industry-standard physiological signal acquisition system is used to record imagined electroencephalogram (EEG) signals for five words (sos, stop, medicine, washroom, comehere) from 13 subjects. We use the Hilbert transform to compute time-domain and joint time-frequency features from the imagined EEG signals. These features are extracted separately from the electrodes corresponding to nine brain regions, and each region is further analyzed in seven EEG frequency bands. The imagined speech features from each of the 63 combinations of brain region and frequency band are classified by the proposed deep architectures, namely long short-term memory (LSTM), gated recurrent unit (GRU), and convolutional neural network (CNN) models. Selected combinations are also classified by six traditional machine learning classifiers for performance comparison. In a five-class classification framework, we achieved average and maximum accuracies of 71.75% and 94.29%, respectively. The CNN yielded the highest accuracy, whereas the LSTM required the least prediction time. Our results show that the alpha band discriminates imagined speech better than the other frequency bands. We implemented a subject-independent BCI, and the results surpass state-of-the-art methods reported in the literature.
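To make the feature-extraction step concrete, the following is a minimal sketch, not the authors' exact pipeline, of Hilbert-transform feature computation for one band-limited EEG channel. The 250 Hz sampling rate, the 8-13 Hz alpha band edges, and the specific amplitude/frequency statistics are illustrative assumptions; the paper's full feature set and the per-region electrode grouping are not reproduced here.

```python
# Minimal sketch of Hilbert-transform feature extraction for one EEG band.
# Assumptions (not from the paper): 250 Hz sampling rate, 8-13 Hz alpha band,
# and mean/std of instantaneous amplitude and frequency as the feature vector.
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

FS = 250.0  # assumed sampling rate in Hz


def bandpass(x, low, high, fs=FS, order=4):
    """Zero-phase Butterworth band-pass filter for one EEG channel."""
    b, a = butter(order, [low / (fs / 2), high / (fs / 2)], btype="band")
    return filtfilt(b, a, x)


def hilbert_features(x, low=8.0, high=13.0, fs=FS):
    """Statistics of instantaneous amplitude and frequency derived from
    the analytic signal of a band-limited EEG channel."""
    xb = bandpass(x, low, high, fs)
    analytic = hilbert(xb)                          # analytic signal via Hilbert transform
    amp = np.abs(analytic)                          # instantaneous amplitude (envelope)
    phase = np.unwrap(np.angle(analytic))           # instantaneous phase
    inst_freq = np.diff(phase) * fs / (2 * np.pi)   # instantaneous frequency in Hz
    return np.array([amp.mean(), amp.std(), inst_freq.mean(), inst_freq.std()])


# Example: features for one simulated 2-second channel; in the full pipeline
# this would be repeated per electrode group (9 regions) and band (7 bands).
rng = np.random.default_rng(0)
eeg = rng.standard_normal(int(2 * FS))
print(hilbert_features(eeg))  # 4-dimensional feature vector for the alpha band
```

In practice, such per-channel feature vectors would be stacked across the electrodes of a brain region and fed to the LSTM, GRU, or CNN classifiers described above.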