Existing change detection (CD) research focuses mostly on pixel-level dense prediction, whereas object detection (OD) of entire damaged/changed buildings has rarely been studied, and object-level detection datasets for damaged/changed buildings are also lacking. This article proposes an object-oriented damaged/changed-building CD model, OoCDNet, together with five global-scale OD datasets for damaged/changed buildings. Driven by the task of locating damaged/changed buildings, OoCDNet bridges and integrates the dual tasks of CD and OD: by modeling building changes between bitemporal images at the object level, it rapidly identifies the target buildings. OoCDNet consists of four parts: the two paths of the dual-path feature extraction module (DouBackbone) extract the base features of the bitemporal images; the bidirectional pyramid feature aggregation module (AggNeck) models change information from the features aggregated at its end; the cross-informative self-attentive short-circuit enhancement module enhances the DouBackbone features with highly efficient self-attentive information and supplies this enhancement to AggNeck; and the enhanced features, carrying both semantic and localization information, are fed into the detection module for OD. The proposed OoEWEBD is a global-scale OD dataset for damaged buildings containing 10377 image pairs, each of $256\times 256$ pixels. The remaining four datasets are built from the WHU_CD, LEVIR-CD+, S2Looking, and xBD datasets and target buildings with general changes or those affected by disasters. Compared with state-of-the-art (SOTA) OD and CD methods, OoCDNet detects the target buildings quickly and effectively, achieving the highest accuracy and showing strong application value. The code and datasets will be available at https://github.com/Haiming-Z/OD-based-Change-Detection-mode-OoCDNet.
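To make the four-part pipeline described above concrete, the following is a minimal PyTorch sketch of how bitemporal features could flow through a dual-path backbone, a cross-temporal attention enhancement, an aggregation neck, and a detection head. All class names, channel sizes, and the simple difference-based change modeling here are illustrative assumptions for exposition only and are not the paper's released implementation.

```python
import torch
import torch.nn as nn


class DouBackbone(nn.Module):
    """Dual-path (weight-shared) feature extractor for the bitemporal images (assumed design)."""

    def __init__(self, in_ch=3, ch=64):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(in_ch, ch, 3, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(ch, ch, 3, stride=2, padding=1), nn.ReLU(inplace=True),
        )

    def forward(self, img_t1, img_t2):
        # The two paths extract base features from each temporal image.
        return self.encoder(img_t1), self.encoder(img_t2)


class CrossAttentionEnhance(nn.Module):
    """Cross-temporal attention 'short-circuit' that enhances backbone features (assumed design)."""

    def __init__(self, ch=64, heads=4):
        super().__init__()
        self.attn = nn.MultiheadAttention(ch, heads, batch_first=True)

    def forward(self, f1, f2):
        b, c, h, w = f1.shape
        q = f1.flatten(2).transpose(1, 2)   # tokens from one temporal branch
        kv = f2.flatten(2).transpose(1, 2)  # tokens from the other branch
        enhanced, _ = self.attn(q, kv, kv)  # attend across the two time steps
        return enhanced.transpose(1, 2).reshape(b, c, h, w)


class AggNeck(nn.Module):
    """Aggregates bitemporal features and models object-level change information (assumed design)."""

    def __init__(self, ch=64):
        super().__init__()
        self.fuse = nn.Conv2d(ch * 3, ch, 1)

    def forward(self, f1, f2, enhanced):
        # Absolute difference encodes change; attention-enhanced features join as a shortcut.
        change = torch.abs(f1 - f2)
        return self.fuse(torch.cat([change, f1 + f2, enhanced], dim=1))


class DetectionHead(nn.Module):
    """Predicts class scores and boxes for damaged/changed building objects (assumed design)."""

    def __init__(self, ch=64, num_classes=1, num_anchors=3):
        super().__init__()
        self.cls = nn.Conv2d(ch, num_anchors * num_classes, 1)
        self.box = nn.Conv2d(ch, num_anchors * 4, 1)

    def forward(self, feat):
        return self.cls(feat), self.box(feat)


class OoCDNetSketch(nn.Module):
    """Illustrative composition of the four parts; not the authors' implementation."""

    def __init__(self):
        super().__init__()
        self.backbone = DouBackbone()
        self.enhance = CrossAttentionEnhance()
        self.neck = AggNeck()
        self.head = DetectionHead()

    def forward(self, img_t1, img_t2):
        f1, f2 = self.backbone(img_t1, img_t2)
        enhanced = self.enhance(f1, f2)
        fused = self.neck(f1, f2, enhanced)
        return self.head(fused)


if __name__ == "__main__":
    # 256x256 bitemporal image pair, matching the dataset tile size mentioned above.
    x1 = torch.randn(1, 3, 256, 256)
    x2 = torch.randn(1, 3, 256, 256)
    cls_out, box_out = OoCDNetSketch()(x1, x2)
    print(cls_out.shape, box_out.shape)
```

The sketch only illustrates the data flow (dual-path extraction, cross-temporal enhancement as a shortcut into the aggregation neck, then detection on the fused features); the actual OoCDNet uses a bidirectional pyramid over multiple scales rather than the single-scale fusion shown here.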