Image inpainting, the task of removing unwanted pixels and seamlessly replacing them with new ones, poses a significant challenge: algorithms must understand image context and generate realistic replacements. With applications ranging from content generation to image editing, inpainting has attracted considerable interest. Traditional approaches train deep neural networks from scratch, using binary masks to identify the regions to be filled. Recent work has shown the feasibility of leveraging well-trained image generation models, such as StyleGAN, for inpainting. However, effectively embedding images into StyleGAN's latent space and producing diverse inpainting results remain key obstacles. In this work, we propose a hierarchical encoder tailored to encode visible and missing features seamlessly, together with a single-stage architecture that encodes both the low-rate and high-rate latent features used by StyleGAN. While low-rate latent features offer a comprehensive understanding of the image, high-rate latent features excel at transmitting intricate details to the generator. Extensive experiments demonstrate substantial improvements over state-of-the-art image inpainting models, highlighting the efficacy of our approach.
Keywords: Image processing, Generative Adversarial Networks, Deep Learning
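To illustrate the two latent streams the abstract refers to, the minimal sketch below shows only the data flow and tensor shapes, not the paper's actual encoder: a single-stage encoder is assumed to emit a low-rate latent (one style vector per generator layer, as in StyleGAN2's W+ space at 1024x1024, i.e. 18 vectors of width 512) and a high-rate spatial feature map injected at an early generator layer. The function name, shapes, and random projections are all hypothetical placeholders.

```python
import numpy as np

def hierarchical_encode(image, rng):
    """Toy stand-in for a single-stage hierarchical encoder (illustrative only).

    Returns the two latent streams a StyleGAN-style generator could consume:
      - low-rate:  per-layer 512-d style vectors (W+ space) summarizing
        global image content;
      - high-rate: a spatial feature map carrying fine local detail from
        the visible pixels to the generator.
    """
    assert image.shape == (3, 256, 256), "assumed input resolution"
    # Placeholders for the encoder's outputs; a real model would compute
    # these from the masked image with convolutional heads.
    low_rate = rng.standard_normal((18, 512))       # W+ latent codes
    high_rate = rng.standard_normal((512, 64, 64))  # spatial features
    return low_rate, high_rate

rng = np.random.default_rng(0)
masked_image = rng.standard_normal((3, 256, 256))
w_plus, feats = hierarchical_encode(masked_image, rng)
print(w_plus.shape, feats.shape)  # (18, 512) (512, 64, 64)
```

The point of the split is that the coarse W+ codes alone tend to lose high-frequency detail during embedding, so the spatial stream gives the generator a higher-rate side channel for reconstructing the visible regions faithfully.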
| Primary Language | English |
|---|---|
| Subjects | Electrical Engineering (Other) |
| Section | Design and Technology |
| Authors | |
| Early View Date | December 26, 2024 |
| Publication Date | December 31, 2024 |
| Submission Date | October 11, 2024 |
| Acceptance Date | December 12, 2024 |
| Published in Issue | 2024, Volume 12, Issue 4 |