Bridging Clear and Adverse Driving Conditions: Domain Adaptation with Simulation, Diffusion, and GANs

A cost-effective pipeline using Simulation (CARLA, an open-source autonomous driving simulator providing synthetic data with perfect ground-truth labels), Diffusion (Stable Diffusion), and GANs (DA-UNIT, a GAN-based architecture for unsupervised cross-domain image-to-image translation) for realistic adverse-condition data.


Abstract

Autonomous driving systems perform poorly in adverse weather, yet collecting such data is costly and dangerous. We propose SDG-DA (Simulation-Diffusion-GAN Domain Adaptation), a novel pipeline that uses simulation and diffusion models to generate training data for a GAN, which then transforms real clear-weather images into photorealistic adverse conditions. While trained primarily on synthetic pairs, our GAN operates on real images at inference time, creating realistic fog, rain, snow, and nighttime scenes. Our approach achieves 78.57% mIoU on ACDC-Adverse (the Adverse Conditions Dataset for semantic segmentation, with paired clear vs. fog/rain/snow/night images) without using any real adverse data in training, demonstrating a cost-effective solution for robust perception.

Demo Overview

You’re viewing SDG-DA (Simulation-Diffusion-GAN Domain Adaptation) in action. SDG-DA was trained on clear-weather images from ACDC Clear plus synthetic scenes from CARLA, then applied zero-shot to frames from nuScenes (a large-scale autonomous driving dataset with 360° camera, lidar, and radar recordings across diverse urban scenarios) under rain, fog, and nighttime conditions.

  • Original nuScenes input
  • SDG-DA enhanced output
  • Baseline comparison (hover to overlay)

Interactive Demonstrations

Click image to pause/play animation

Click zoom icon (top-right) for fullscreen

Click histogram icon (top-left) to toggle Exposure Correction (not available for clear-weather images)

Use arrows to navigate weather conditions

Key Features

Hybrid Pipeline

Combines diffusion with GAN domain adaptation for robust adverse‑weather rendering.
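The pipeline's data flow can be sketched with toy stand-ins. Every function below is a placeholder for the real component named in its docstring (CARLA rendering, Stable Diffusion re-styling, DA-UNIT training); none of this is the released code, it only illustrates that the GAN is trained on synthetic pairs and then applied to real frames:

```python
import numpy as np

rng = np.random.default_rng(0)

def render_carla(n, h=8, w=8):
    """Stand-in for CARLA: n synthetic clear-weather RGB frames."""
    return rng.random((n, h, w, 3))

def diffusion_restyle(clear):
    """Stand-in for Stable Diffusion: darken frames to mimic night."""
    return clear * 0.3

def train_gan(clear, adverse):
    """Stand-in for DA-UNIT training: here just a per-channel gain
    fitted on the synthetic (clear, adverse) pairs."""
    gain = adverse.mean(axis=(0, 1, 2)) / clear.mean(axis=(0, 1, 2))
    return lambda img: img * gain

clear = render_carla(4)
adverse = diffusion_restyle(clear)    # synthetic training pairs
gan = train_gan(clear, adverse)       # trained on synthetic data only
real_clear = rng.random((8, 8, 3))
stylised = gan(real_clear)            # inference on a *real* frame
```

The key structural point survives even in this toy version: no real adverse-weather frame is ever seen during training.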

Enhanced DA‑UNIT

Extended architecture supporting depth, semantics, and instances for better object preservation and label alignment.
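The exact input encoding is not spelled out on this page; as a hedged illustration, one common way to feed depth and semantic labels into an image-translation encoder is channel-wise concatenation. The function name and normalisation choice below are assumptions, not the paper's implementation:

```python
import numpy as np

def encoder_input(rgb, depth, semantics, num_classes):
    """Stack RGB, depth, and one-hot semantics along the channel axis.

    rgb:       (H, W, 3) float image in [0, 1]
    depth:     (H, W) metric depth map
    semantics: (H, W) integer class-label map
    """
    onehot = np.eye(num_classes, dtype=np.float32)[semantics]  # (H, W, C)
    depth = depth[..., None] / (depth.max() + 1e-8)            # normalise to [0, 1]
    return np.concatenate([rgb, depth, onehot], axis=-1)
```

Giving the translator explicit geometry and labels is what lets it preserve object boundaries and keep the output aligned with the segmentation ground truth.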

Blending Technique

Novel approach that removes diffusion artefacts while retaining photorealistic enhancements.
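The blending details are not given on this page. As a rough sketch only: assuming a hypothetical per-pixel confidence mask marking where the diffusion output is trustworthy, artefact removal could be a simple alpha blend back onto the source frame:

```python
import numpy as np

def blend(source, enhanced, mask):
    """Alpha-blend a diffusion-enhanced image into the source frame.

    mask: (H, W) per-pixel weights in [0, 1]; 1 keeps the enhanced
    pixel, 0 falls back to the source, suppressing artefacts there.
    """
    mask = np.clip(mask, 0.0, 1.0)[..., None]  # broadcast over channels
    return mask * enhanced + (1.0 - mask) * source
```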

Performance Gains

  • 78.57% mIoU on ACDC-Adverse test split (zero-shot transfer with no adverse training data)
  • +1.85% mIoU improvement overall on ACDC validation split (clear + adverse conditions)
  • Night conditions: +4.62% mIoU gain on validation set
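For reference, the mIoU metric quoted above is the per-class intersection-over-union averaged over classes. A minimal implementation from a confusion matrix:

```python
import numpy as np

def mean_iou(pred, gt, num_classes):
    """Mean Intersection-over-Union between integer label maps."""
    cm = np.zeros((num_classes, num_classes), dtype=np.int64)
    np.add.at(cm, (gt.ravel(), pred.ravel()), 1)   # confusion matrix
    tp = np.diag(cm).astype(float)                 # true positives
    union = cm.sum(axis=0) + cm.sum(axis=1) - tp   # pred + gt - overlap
    valid = union > 0                              # skip absent classes
    return float(np.mean(tp[valid] / union[valid]))
```

A perfect prediction scores 1.0; the 78.57% figure means that, averaged over classes, predicted and ground-truth masks overlap by roughly four fifths of their union.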

Our Team

Yoel Shapiro

Research Engineer

Yoel.Shapiro@il.bosch.com

Yahia Showgan

Computer Vision Researcher

YahiaShowgan@gmail.com

Koustav Mullick

Computer Vision Researcher

Koustav.Mullick@in.bosch.com

This research is supported by Bosch Research and Technology Center.