Hybrid Pipeline
Combines diffusion with GAN domain adaptation for robust adverse‑weather rendering.
A cost-effective pipeline combining simulation (CARLA, an open-source autonomous driving simulator providing synthetic data with perfect ground-truth labels), diffusion (Stable Diffusion), and GANs (DA-UNIT, Domain Adaptation with Unsupervised Image-to-Image Translation Networks, a GAN-based architecture for cross-domain image translation) for realistic adverse-condition data.
Autonomous driving systems perform poorly in adverse weather, yet collecting such data is costly and dangerous. We propose SDG-DA (Simulation-Diffusion-GAN Domain Adaptation), a novel pipeline that uses simulation and diffusion models to generate training data for a GAN, which then transforms real clear-weather images into photorealistic adverse conditions. While trained primarily on synthetic pairs, our GAN operates on real images at inference time, creating realistic fog, rain, snow, and nighttime scenes. Our approach achieves 78.57% mIoU on ACDC-Adverse (the Adverse Conditions Dataset for semantic segmentation, with paired clear vs. fog/rain/snow/night images) without using any real adverse data in training, demonstrating a cost-effective solution for robust perception.
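The pipeline described above can be sketched end-to-end in a few lines. This is a hypothetical illustration only: all function names (`render_carla_scene`, `diffusion_stylize`, `train_gan`) are stand-ins for the real components (CARLA rendering, Stable Diffusion img2img, DA-UNIT training), and the "GAN" here is a trivial least-squares mapping used solely to show the data flow — synthetic pairs in training, real images at inference.

```python
import numpy as np

def render_carla_scene(seed: int, size=(4, 4)) -> np.ndarray:
    """Stand-in for a CARLA render: a bright clear-weather RGB image in [0, 1]."""
    rng = np.random.default_rng(seed)
    return rng.uniform(0.5, 1.0, size=(*size, 3))

def diffusion_stylize(img: np.ndarray, condition: str) -> np.ndarray:
    """Stand-in for Stable Diffusion img2img: push the image toward a condition."""
    factor = {"fog": 0.7, "rain": 0.6, "snow": 0.8, "night": 0.2}[condition]
    return np.clip(img * factor + (1 - factor) * 0.5, 0.0, 1.0)

def train_gan(pairs):
    """Stand-in for DA-UNIT training: fit a global affine pixel map
    from the synthetic (clear, adverse) pairs by least squares."""
    xs = np.concatenate([clear.ravel() for clear, _ in pairs])
    ys = np.concatenate([adverse.ravel() for _, adverse in pairs])
    slope, intercept = np.polyfit(xs, ys, deg=1)
    return lambda img: np.clip(slope * img + intercept, 0.0, 1.0)

# 1) Simulation + diffusion produce synthetic training pairs.
pairs = [(img := render_carla_scene(s), diffusion_stylize(img, "fog"))
         for s in range(8)]

# 2) The translator is trained only on synthetic pairs...
fogify = train_gan(pairs)

# 3) ...but applied zero-shot to a real clear-weather frame at inference.
real_clear = np.full((4, 4, 3), 0.9)
real_fog = fogify(real_clear)
print(real_fog.mean() < real_clear.mean())  # foggy output is dimmer
```

The key property the sketch preserves is the domain split: training consumes only simulated pairs, while inference runs on real clear-weather frames.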
You’re viewing SDG-DA (Simulation-Diffusion-GAN Domain Adaptation) in action. SDG-DA was trained on clear-weather images from ACDC Clear plus synthetic scenes from CARLA, then applied zero-shot to frames from nuScenes (a large-scale autonomous driving dataset with 360° camera, lidar, and radar recordings across diverse urban scenarios) under rain, fog, and nighttime conditions.
Viewer controls:
- Click the image to pause/play the animation.
- Click the zoom icon (top-right) for fullscreen.
- Click the histogram icon (top-left) to toggle exposure correction (not available for clear-weather images).
- Use the arrows to navigate weather conditions.
Highlights:
- Hybrid pipeline: combines diffusion with GAN domain adaptation for robust adverse-weather rendering.
- Extended architecture: supports depth, semantics, and instances for better object preservation and label alignment.
- Artifact removal: a novel approach that removes diffusion artefacts while retaining photorealistic enhancements.
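The extended architecture in the second highlight conditions the translator on more than RGB. A minimal sketch of that idea, under the assumption (ours, not the paper's stated layout) that depth, semantic, and instance maps are stacked channel-wise into a single generator input:

```python
import numpy as np

# Illustrative multi-modal conditioning: stack RGB with depth, semantic,
# and instance maps so the generator can preserve object boundaries and
# keep labels aligned through translation. Shapes are assumptions.
H, W = 8, 8
rng = np.random.default_rng(0)
rgb = rng.uniform(0.0, 1.0, (H, W, 3))   # clear-weather image
depth = np.zeros((H, W, 1))              # per-pixel depth (e.g. from CARLA)
semantics = np.zeros((H, W, 1))          # semantic class IDs
instances = np.zeros((H, W, 1))          # per-object instance IDs

# Channel-wise concatenation yields a 6-channel generator input; the
# extra channels let auxiliary losses penalize geometry and label drift.
gen_input = np.concatenate([rgb, depth, semantics, instances], axis=-1)
print(gen_input.shape)  # (8, 8, 6)
```

In practice the auxiliary maps come for free from the simulator, which is one reason simulation is a natural source of training pairs here.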
This research is supported by Bosch Research and Technology Center.