Image inpainting, the process of removing unwanted pixels and seamlessly replacing them with new ones, poses significant challenges: algorithms must understand image context and generate realistic replacements. With applications ranging from content generation to image editing, image inpainting has attracted considerable interest. Traditional approaches train deep neural network models from scratch, using binary masks to identify the regions to be inpainted. Recent work has shown the feasibility of leveraging well-trained image generation models, such as StyleGANs, for inpainting tasks. However, effectively embedding images into StyleGAN's latent space and supporting diverse inpainting remain key obstacles. In this work, we propose a hierarchical encoder tailored to encode visible and missing features seamlessly. Additionally, we introduce a single-stage architecture capable of encoding both the low-rate and high-rate latent features used by StyleGAN. While low-rate latent features offer a holistic understanding of the image, high-rate latent features excel at transmitting fine details to the generator. Through extensive experiments, we demonstrate significant improvements over state-of-the-art image inpainting models, highlighting the efficacy of our approach.
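To make the encoder idea concrete, the sketch below shows one plausible way a single-stage hierarchical encoder could emit both kinds of latents from a masked image: a set of per-layer style vectors (low-rate, global semantics) and a spatial feature map (high-rate, fine detail). This is a minimal PyTorch illustration under assumed sizes and module names (`HierarchicalEncoder`, `num_ws`, `feat_ch`, etc. are hypothetical), not the authors' actual architecture.

```python
import torch
import torch.nn as nn


class HierarchicalEncoder(nn.Module):
    """Illustrative single-stage encoder for StyleGAN-based inpainting.

    From a masked RGB image it produces:
      * low-rate latents  -- one style vector per StyleGAN layer (a W+ code),
        summarizing global image semantics, and
      * high-rate latents -- a spatial feature map meant to carry fine detail
        into the generator.
    All layer choices and dimensions are assumptions for illustration only.
    """

    def __init__(self, num_ws: int = 18, w_dim: int = 512, feat_ch: int = 64):
        super().__init__()
        # Shared convolutional trunk over the masked image plus mask (4 channels).
        self.trunk = nn.Sequential(
            nn.Conv2d(4, 64, 3, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Conv2d(64, 128, 3, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Conv2d(128, 256, 3, stride=2, padding=1), nn.LeakyReLU(0.2),
        )
        # Low-rate head: pool to a vector and map to num_ws style codes.
        self.to_ws = nn.Sequential(
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(256, num_ws * w_dim),
        )
        # High-rate head: keep spatial resolution for detail transfer.
        self.to_feat = nn.Conv2d(256, feat_ch, 3, padding=1)
        self.num_ws, self.w_dim = num_ws, w_dim

    def forward(self, image: torch.Tensor, mask: torch.Tensor):
        # image: (B, 3, H, W); mask: (B, 1, H, W) with 1 marking missing pixels.
        x = self.trunk(torch.cat([image * (1 - mask), mask], dim=1))
        ws = self.to_ws(x).view(-1, self.num_ws, self.w_dim)  # low-rate W+ code
        feats = self.to_feat(x)                                # high-rate features
        return ws, feats


if __name__ == "__main__":
    enc = HierarchicalEncoder()
    img = torch.randn(2, 3, 256, 256)
    msk = torch.zeros(2, 1, 256, 256)
    msk[:, :, 96:160, 96:160] = 1.0  # square hole to be inpainted
    ws, feats = enc(img, msk)
    print(ws.shape, feats.shape)     # (2, 18, 512) and (2, 64, 32, 32)
```

In such a setup, the W+ code would drive the StyleGAN generator's style inputs while the spatial features could be injected into its intermediate layers; how the paper actually fuses the two streams is described in the full text, not in this sketch.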
| Primary Language | English |
| --- | --- |
| Subjects | Electrical Engineering (Other) |
| Journal Section | Tasarım ve Teknoloji (Design and Technology) |
| Authors | |
| Early Pub Date | December 26, 2024 |
| Publication Date | December 31, 2024 |
| Submission Date | October 11, 2024 |
| Acceptance Date | December 12, 2024 |
| Published in Issue | Year 2024, Volume 12, Issue 4 |