Automatic music generation is a challenging task in artificial intelligence that aims to harness machine learning techniques to compose music that human listeners find appealing. In this context, we introduce a text-based music data representation that enables large text-generation models to be applied to music creation. To address characteristics of music such as its small note vocabulary and long sequence length, we employ a deep generative adversarial network built around musical measures (MT-CHSE-GAN). The model integrates paragraph-level text generation methods and improves the quality and efficiency of melody generation through measure-wise processing and channel attention mechanisms. MT-CHSE-GAN thus provides a novel framework for music data processing and generation, offering an effective solution to the problem of long-sequence music generation. To evaluate the generated music comprehensively, we use accuracy, loss, and music-theory-based criteria as metrics and compare our model with other music generation models. Experimental results demonstrate our method's clear advantages in generation quality. Despite progress in automatic music generation, its application still faces challenges, particularly regarding quantitative evaluation metrics and the breadth of model applicability. Future research will explore expanding the model's application scope, enriching evaluation methods, and further improving the quality and expressiveness of the generated music. This study not only advances music generation technology but also provides useful experience and insights for research in related fields.
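To make the measure-wise channel attention idea mentioned above more concrete, the following Python sketch shows a squeeze-and-excitation-style channel attention block applied to per-measure note features. The module name, tensor shapes, and hyperparameters are illustrative assumptions for exposition, not the paper's actual implementation.

# Hypothetical sketch of measure-wise channel attention (squeeze-and-excitation style).
# Shapes, names, and hyperparameters are assumptions, not the authors' code.
import torch
import torch.nn as nn

class ChannelAttention(nn.Module):
    """Re-weights feature channels of one measure's note features."""
    def __init__(self, channels: int, reduction: int = 4):
        super().__init__()
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, channels, steps) -- note features within one measure
        squeeze = x.mean(dim=-1)           # pool over the measure's time steps
        weights = self.fc(squeeze)         # per-channel gating weights in (0, 1)
        return x * weights.unsqueeze(-1)   # emphasize informative channels

# Usage example: 8 measures in a batch, 64 feature channels, 16 steps per measure
attn = ChannelAttention(channels=64)
measure_features = torch.randn(8, 64, 16)
out = attn(measure_features)               # same shape, channel-reweighted

In such a design, each measure is processed as a unit, so the attention weights capture which feature channels matter within that measure rather than across the whole (much longer) piece, which is one plausible way to keep long-sequence generation tractable.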