# CycleGANs

## Overview
From-scratch replication of CycleGAN for unpaired image-to-image translation. CycleGAN trains two generators (A→B and B→A) and two discriminators simultaneously, with a cycle consistency loss enforcing that translating an image to the other domain and back recovers the original. This enables translation without paired training data. Based on *Unpaired Image-to-Image Translation using Cycle-Consistent Adversarial Networks* (Zhu et al., 2017).
## Architecture
- Generator G: Domain A → Domain B (ResNet-based)
- Generator F: Domain B → Domain A (ResNet-based)
- Discriminators D_A, D_B: PatchGAN discriminators
- Losses: Adversarial + cycle consistency (L1) + identity loss
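The combined objective above can be sketched in PyTorch as follows. This is a minimal sketch, not this repo's actual code: the least-squares (LSGAN) form of the adversarial loss and the weights `lambda_cyc=10`, `lambda_id=5` follow the paper's defaults and may differ here.

```python
import torch
import torch.nn as nn

mse = nn.MSELoss()  # least-squares (LSGAN) adversarial loss
l1 = nn.L1Loss()

def generator_loss(G, F, D_A, D_B, real_A, real_B,
                   lambda_cyc=10.0, lambda_id=5.0):
    """Combined objective for both generators G (A->B) and F (B->A)."""
    fake_B = G(real_A)  # A -> B
    fake_A = F(real_B)  # B -> A

    # Adversarial term: fool each discriminator into predicting "real" (1)
    pred_B = D_B(fake_B)
    pred_A = D_A(fake_A)
    adv = (mse(pred_B, torch.ones_like(pred_B))
           + mse(pred_A, torch.ones_like(pred_A)))

    # Cycle consistency: F(G(a)) should reconstruct a, and G(F(b)) should
    # reconstruct b, both measured with L1
    cyc = l1(F(fake_B), real_A) + l1(G(fake_A), real_B)

    # Identity: feeding a generator an image already from its target
    # domain should return it (nearly) unchanged
    idt = l1(G(real_B), real_B) + l1(F(real_A), real_A)

    return adv + lambda_cyc * cyc + lambda_id * idt

def discriminator_loss(D, real, fake):
    """LSGAN loss for one PatchGAN discriminator; fakes are detached so
    no generator gradients flow through this update."""
    pred_real = D(real)
    pred_fake = D(fake.detach())
    return 0.5 * (mse(pred_real, torch.ones_like(pred_real))
                  + mse(pred_fake, torch.zeros_like(pred_fake)))
```

Because the discriminators are PatchGAN, `pred_real`/`pred_fake` are spatial maps of per-patch scores rather than single scalars; the MSE targets broadcast over them via `ones_like`/`zeros_like`.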
## Training
- Dataset: Cityscapes (semantic segmentation maps ↔ street photos)
- Framework: PyTorch
- Generated images are stored in `output_images_val/`
## Paper
*Unpaired Image-to-Image Translation using Cycle-Consistent Adversarial Networks*, Zhu et al., 2017 (arXiv:1703.10593)