Multilabel classification in remote sensing (RS) images aims to correctly predict the multiple object labels present in an RS image, and its primary challenge lies in mining the correlations among these labels. In this context, we argue that a scene can be treated as a high-level depiction of the interactions among the multiple interconnected objects in an image. However, such hierarchical relationships between the scene and local objects are often neglected by existing state-of-the-art approaches. In this article, we consider multilabel classification as a global-to-local prediction process, in which the scene of an image is first identified and the local objects in the image are then recognized. To this end, we propose a novel hierarchical knowledge graph (HKG)-based framework for multilabel classification in RS images (ML-HKG). Specifically, we first construct a hierarchical KG that depicts the label correlations between scenes and objects and represent this hierarchical knowledge as interrelated scene- and object-level label embeddings. Subsequently, we generate a scene-aware enhanced feature map by recognizing the scene categories of an image under the guidance of the scene-level knowledge embeddings. Afterward, the object-level embeddings are used to derive category-specific visual representations for the final multilabel prediction. Extensive experiments on the UCM and AID datasets demonstrate the effectiveness of our framework.
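To make the global-to-local flow concrete, the following is a minimal sketch in PyTorch, not the authors' implementation: the module names, dimensions, and the dot-product attention used for scene-aware enhancement and for the category-specific pooling are all illustrative assumptions; in the paper the label embeddings would come from the hierarchical KG rather than being learned from scratch.

```python
# Illustrative sketch of a global-to-local prediction head (assumed design, not ML-HKG itself).
import torch
import torch.nn as nn


class GlobalToLocalHead(nn.Module):
    """Scene-guided feature enhancement followed by object-level multilabel prediction."""

    def __init__(self, feat_dim=512, embed_dim=300, num_scenes=21, num_objects=17):
        super().__init__()
        # Scene- and object-level label embeddings; hypothetically initialized here,
        # whereas the paper derives them from the hierarchical knowledge graph.
        self.scene_embed = nn.Parameter(torch.randn(num_scenes, embed_dim))
        self.object_embed = nn.Parameter(torch.randn(num_objects, embed_dim))
        self.to_feat = nn.Linear(embed_dim, feat_dim)   # project label embeddings into feature space
        self.scene_cls = nn.Linear(feat_dim, num_scenes)
        self.object_cls = nn.Linear(feat_dim, 1)        # shared scorer over category-specific features

    def forward(self, fmap):
        # fmap: (B, C, H, W) backbone feature map
        b, c, h, w = fmap.shape
        tokens = fmap.flatten(2).transpose(1, 2)                  # (B, HW, C)

        # Global step: scene recognition and scene-aware enhancement.
        scene_keys = self.to_feat(self.scene_embed)               # (S, C)
        scene_attn = torch.softmax(tokens @ scene_keys.t(), -1)   # (B, HW, S)
        enhanced = tokens + scene_attn @ scene_keys                # scene-aware enhanced feature map
        scene_logits = self.scene_cls(enhanced.mean(1))           # (B, S)

        # Local step: category-specific representations and multilabel scores.
        obj_queries = self.to_feat(self.object_embed)              # (O, C)
        obj_attn = torch.softmax(obj_queries @ enhanced.transpose(1, 2), -1)  # (B, O, HW)
        obj_feats = obj_attn @ enhanced                            # (B, O, C) category-specific features
        object_probs = torch.sigmoid(self.object_cls(obj_feats).squeeze(-1))  # (B, O)
        return scene_logits, object_probs
```

The two-stage structure mirrors the abstract: scene-level embeddings first enhance the feature map, and only the enhanced map is queried by object-level embeddings to produce per-category scores.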