Controllable Style Transfer for Pose-Guided Human Image Generation Using Diffusion (Seminar)
Abstract
This seminar surveys advances in pose-guided human image generation with diffusion models, highlighting their superior performance over traditional GAN-based methods. Diffusion models such as DALL-E 2 and Imagen produce high-fidelity, semantically accurate images by iteratively denoising pure noise into a sample. The seminar also discusses controllable style transfer techniques that integrate diverse styles while preserving pose accuracy. Recent studies demonstrate that these models handle challenges such as occlusions and complex deformations well, making them a robust choice for applications in digital art, fashion, and computer vision.
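The iterative denoising that the abstract refers to can be illustrated with a minimal DDPM-style sampling loop. This is a sketch only: the `predict_noise` function below is a hypothetical stand-in that returns zeros (a real pose-guided model would be a trained network conditioned on a pose map), and the schedule values are common defaults, not those of any specific paper.

```python
import numpy as np

T = 50  # number of diffusion steps
betas = np.linspace(1e-4, 0.02, T)   # linear noise schedule (assumed)
alphas = 1.0 - betas
alpha_bars = np.cumprod(alphas)

def predict_noise(x_t, t, pose=None):
    """Hypothetical noise predictor; in practice a trained U-Net
    conditioned on the target pose."""
    return np.zeros_like(x_t)

def sample(shape=(8, 8), rng=np.random.default_rng(0)):
    x_t = rng.standard_normal(shape)  # start from pure Gaussian noise
    for t in reversed(range(T)):
        eps = predict_noise(x_t, t)
        # DDPM posterior mean: subtract the predicted noise component
        coef = betas[t] / np.sqrt(1.0 - alpha_bars[t])
        x_t = (x_t - coef * eps) / np.sqrt(alphas[t])
        if t > 0:  # inject fresh noise at every step except the last
            x_t += np.sqrt(betas[t]) * rng.standard_normal(shape)
    return x_t

img = sample()
print(img.shape)
```

Each iteration removes a small amount of predicted noise and (except at the final step) re-injects a smaller amount, which is what "progressively refining" means in the diffusion setting.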
BibTeX
@inproceedings{usama2024controllable,
  title={Controllable Style Transfer for Pose-Guided Human Image Generation Using Diffusion},
  author={Usama, Muhammad and Khan, Muhammad Saif Ullah},
  booktitle={Proceedings of the Computer Vision and Deep Learning (CVDL) Course},
  month={October},
  year={2024},
  pages={42--49}
}