Traditional methods for recognizing ancient Chinese characters often fail on newly appearing character classes when working with continuously updated archaeological materials. To cope with this ever-changing data stream, continual learning becomes key. Retraining the model on both new and old categories is a straightforward idea, but it is constrained by memory limits and data-privacy concerns. Many existing approaches instead rely on incremental freezing, yet the high inter-class similarity and scarce samples of ancient Chinese character datasets pose tremendous challenges for such typical incremental learning methods. To this end, this paper proposes a forward-looking simulation network that pre-simulates unknown new categories through virtual sample generation. Specifically, we decouple the network into a feature extractor and a classifier and expand the feature extractor into a dual-branch structure. During the base training phase, techniques such as Mixup are used to create deep virtual features and virtual datasets, which effectively enhances the base model's capacity to represent new categories. Moreover, an L-Selective Loss is proposed to further optimize the boundaries between categories and to enhance the extraction of discriminative high-level features between new and original categories. Experimental results show that the proposed method can effectively recognize all existing categories of oracle bone script, Yi script, and Dongba script without storing old-category samples. Compared with the traditional incremental freezing framework FACT, the forgetting rate is improved by 2.021%, 0.949%, and 2.552%, respectively.
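
The Mixup-based creation of deep virtual features for not-yet-seen categories can be illustrated with a minimal sketch. This is not the authors' released implementation; the function name `generate_virtual_features`, the Beta parameter `alpha`, and the use of PyTorch are assumptions made for illustration only.

```python
import torch

def generate_virtual_features(features, labels, num_base_classes, alpha=0.5):
    """Mixup-style synthesis of deep virtual features (illustrative sketch).

    Pairs of real deep features from different base classes are linearly
    interpolated to act as placeholders for future, unseen categories.
    `alpha` is the Beta-distribution parameter (hypothetical default).
    """
    # Pair each sample in the batch with another sample via a random permutation.
    perm = torch.randperm(features.size(0))
    lam = torch.distributions.Beta(alpha, alpha).sample().item()

    mixed = lam * features + (1.0 - lam) * features[perm]

    # Keep only mixtures of two *different* classes; same-class mixtures
    # would not simulate a new category.
    keep = labels != labels[perm]
    virtual_features = mixed[keep]

    # Label virtual features beyond the base-class index range so the
    # classifier reserves capacity for categories that arrive later.
    virtual_labels = torch.full(
        (virtual_features.size(0),), num_base_classes, dtype=torch.long
    )
    return virtual_features, virtual_labels
```

In this sketch, the virtual features and their reserved labels would be mixed into the base-stage training batches so that the classifier already allocates decision boundaries for categories introduced in later incremental sessions.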