Owing to its distinctive distributed, privacy-preserving architecture, split learning has been widely adopted in scenarios where client-side computational resources are limited. Unlike federated learning, in which each client retains the whole model, split learning partitions the model into two segments held separately by the server and the client, so that neither party can directly access the complete model structure, strengthening its resilience against attacks. However, existing studies have shown that split learning remains susceptible to data reconstruction attacks even when access is restricted to partial model outputs, although prior work largely relies on strong assumptions and posits the server, with access to global information, as the attacker. Building on this understanding, we devise GAN-based data reconstruction attacks within the U-shaped split learning framework and carefully examine and confirm the feasibility of attacks launched from both the server and the client sides, together with their underlying assumptions. Specifically, for server-side attacks we propose the Model Approximation Estimation Reconstruction Attack (MAERA) to relax the required prior assumptions, and we introduce the Distillation-based Client-side Reconstruction Attack (DCRA) to execute data reconstruction from the client side for the first time. Experimental results demonstrate the effectiveness and robustness of the proposed frameworks across various datasets. In particular, MAERA requires only 1% of the test-set samples and 1% of the private data samples from the CIFAR100 dataset to launch effective attacks, while DCRA extracts models from clients and, when inferring data distribution characteristics, reconstructs target-class samples more faithfully than conventional Maximum A Posteriori (MAP) estimation algorithms.
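To make the U-shaped split concrete, the following is a minimal PyTorch-style sketch of the forward pass the abstract describes: the client holds the first and last segments of the model, the server holds the middle, and only intermediate activations cross the boundary. The layer sizes, the `u_shaped_forward` helper, and the 100-class output are hypothetical illustrations chosen for CIFAR100-shaped inputs; they are not the architecture used in the paper.

```python
import torch
import torch.nn as nn

# Hypothetical segment definitions; the paper's actual architectures
# are not specified in this abstract.
client_head = nn.Sequential(nn.Conv2d(3, 16, 3, padding=1), nn.ReLU())
server_body = nn.Sequential(nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
                            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
                            nn.Linear(32, 64), nn.ReLU())
client_tail = nn.Linear(64, 100)  # client keeps the output layer and the labels

def u_shaped_forward(x):
    smashed = client_head(x)         # client -> server: activations only
    features = server_body(smashed)  # server never sees raw inputs or labels
    return client_tail(features)     # server -> client: features for the tail

x = torch.randn(8, 3, 32, 32)                      # a CIFAR100-shaped batch
logits = u_shaped_forward(x)
loss = nn.CrossEntropyLoss()(logits, torch.randint(0, 100, (8,)))
loss.backward()  # gradients flow back across both cut layers during training
```

Because neither party holds the full model, the attacks studied in the paper must work from exactly these exchanged activations and gradients rather than from the complete network.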