A style-Pix2Pix GAN framework for data augmentation in landslide semantic segmentation
Citation
Ren, T., Gong, W., Agliardi, F., et al. (2026). A style-Pix2Pix GAN framework for data augmentation in landslide semantic segmentation. Landslides, 23: 263-273.
Abstract
This paper proposes a style-Pix2Pix GAN framework to address the scarcity of labeled data in landslide semantic segmentation, especially in emergency mapping contexts. A dual-network architecture is used: StyleGAN2 generates realistic landslide masks that capture the morphology and spatial structure of landslide areas, and Pix2Pix reconstructs the corresponding optical images through conditional image-to-image translation. Experiments on the Shaoguan Landslide Dataset show that the generated samples preserve the geometric complexity and spectral characteristics of real landslides. Training segmentation models on mixed real and synthetic datasets consistently improves identification performance compared with using real samples alone. The framework provides a practical augmentation strategy for improving the robustness and transferability of deep-learning-based landslide mapping.
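To make the two-stage pipeline concrete, below is a minimal PyTorch sketch of the inference-time augmentation flow described in the abstract: sample a latent vector, generate a landslide mask, translate the mask into an optical image, and collect the resulting (image, mask) pairs for mixed-data training. The classes `MaskGenerator` and `MaskToImageGenerator` and the helper `synthesize_pairs` are hypothetical stand-ins for trained StyleGAN2 and Pix2Pix generators, not the paper's actual code.

```python
# Hedged sketch of the style-Pix2Pix augmentation pipeline.
# The generator modules are lightweight stand-ins (assumptions),
# NOT the paper's StyleGAN2/Pix2Pix implementations.
import torch
import torch.nn as nn


class MaskGenerator(nn.Module):
    """Stand-in for a trained StyleGAN2 generator: latent z -> binary landslide mask."""

    def __init__(self, latent_dim=512, size=256):
        super().__init__()
        self.latent_dim = latent_dim
        self.net = nn.Sequential(
            nn.Linear(latent_dim, size * size),
            nn.Unflatten(1, (1, size, size)),
            nn.Sigmoid(),  # per-pixel landslide probability
        )

    def forward(self, z):
        # Threshold probabilities to a hard binary mask
        return (self.net(z) > 0.5).float()


class MaskToImageGenerator(nn.Module):
    """Stand-in for a trained Pix2Pix generator: mask -> 3-band optical image."""

    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, 3, 3, padding=1), nn.Tanh(),
        )

    def forward(self, mask):
        return self.net(mask)


def synthesize_pairs(mask_gen, img_gen, n, latent_dim=512):
    """Sample n synthetic (image, mask) pairs: mask generation
    followed by conditional mask-to-image translation."""
    z = torch.randn(n, latent_dim)
    with torch.no_grad():
        masks = mask_gen(z)
        images = img_gen(masks)
    return images, masks


# Synthetic pairs would then be mixed with real labeled samples
# before segmentation training, per the paper's augmentation strategy.
mask_gen, img_gen = MaskGenerator(), MaskToImageGenerator()
fake_imgs, fake_masks = synthesize_pairs(mask_gen, img_gen, n=8)
print(fake_imgs.shape, fake_masks.shape)
# torch.Size([8, 3, 256, 256]) torch.Size([8, 1, 256, 256])
```

The key design point the sketch illustrates is the decoupling: mask synthesis controls landslide geometry independently of appearance, while the conditional translator supplies spectral realism, so the two generators can be trained and validated separately.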