The key to automatic question generation (AQG) is the selection of the tail entity (the named entity appearing in the latter part of the question), a vital concern for generating coherent and relevant questions. To address this requirement, advanced techniques such as positional encoding and dependency parsing are widely adopted. Positional encoding captures the sequential representation of words by analyzing context and structure, while dependency parsing identifies grammatical relationships to ensure linguistic coherence. Dependency parsing techniques such as transition-based and graph-based methods are widely employed to model source text sequences and derive accurate dependency relationships. However, existing dependency models struggle to capture complex semantic relationships in source texts that contain subject- or domain-specific nouns. Recent advances in AQG have been driven by Transformer models and end-to-end neural network architectures. To address this challenge, a hybrid model combining a TreeRNN with the T5 Transformer is proposed to improve the AQG process. The model pursues two main objectives: (1) training the TreeRNN for dependency-aware tail entity extraction and (2) fine-tuning the T5 Transformer with multi-shot tail entity prompting. Experimental results show that the proposed TreeRNN significantly improves performance on metrics such as Labelled Attachment Score (LAS) and Unlabelled Attachment Score (UAS). Moreover, a performance analysis of the proposed model on domain-specific question generation highlights its effectiveness, yielding a substantial number of automatically generated questions.
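As a concrete illustration of objective (2), the sketch below shows what multi-shot tail entity prompting for question generation could look like with a T5 model from the Hugging Face transformers library. The prompt template, exemplars, contexts, and checkpoint (`t5-small`) are illustrative assumptions, not the paper's exact configuration; in the proposed pipeline the tail entities would be supplied by the TreeRNN and the T5 weights would be fine-tuned on prompts of this form.

```python
# Minimal sketch of multi-shot tail entity prompting with T5 (Hugging Face
# transformers). Prompt format and exemplars are hypothetical, for illustration.
from transformers import T5ForConditionalGeneration, T5Tokenizer

tokenizer = T5Tokenizer.from_pretrained("t5-small")
model = T5ForConditionalGeneration.from_pretrained("t5-small")

# Multi-shot exemplars: each pairs a context and a tail entity with a target question.
exemplars = [
    ("Marie Curie discovered radium in 1898.", "radium",
     "What did Marie Curie discover in 1898?"),
    ("The mitochondrion produces ATP through cellular respiration.", "ATP",
     "What does the mitochondrion produce through cellular respiration?"),
]

def build_prompt(context: str, tail_entity: str) -> str:
    """Concatenate the exemplar triples, then the new instance, into one text prompt."""
    shots = "".join(
        f"context: {c} tail entity: {t} question: {q} " for c, t, q in exemplars
    )
    return shots + f"context: {context} tail entity: {tail_entity} question:"

prompt = build_prompt(
    "Photosynthesis converts carbon dioxide and water into glucose.", "glucose"
)
inputs = tokenizer(prompt, return_tensors="pt", truncation=True, max_length=512)
outputs = model.generate(**inputs, max_new_tokens=40, num_beams=4)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

In a full implementation, the number of exemplars and the decoding parameters (beam width, length limits) would be tuned on the target domain, and the prompts would be drawn from the fine-tuning corpus rather than hard-coded.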