In an era where images and visual content dominate our digital landscape, the ability to manipulate and personalize these images has become a necessity. Envision seamlessly substituting a tabby cat lounging on a sunlit window sill in a photograph with your own playful puppy, all while preserving the original charm and composition of the image. We present Photoswap, a novel approach that enables this immersive image editing experience through personalized subject swapping in existing images. Photoswap first learns the visual concept of the subject from reference images and then swaps it into the target image using pre-trained diffusion models in a training-free manner. We establish that a well-conceptualized visual subject can be seamlessly transferred to any image with appropriate self-attention and cross-attention manipulation, maintaining the pose of the swapped subject and the overall coherence of the image. Comprehensive experiments underscore the efficacy and controllability of Photoswap in personalized subject swapping. Furthermore, Photoswap significantly outperforms baseline methods in human ratings across subject swapping, background preservation, and overall quality, revealing its vast application potential, from entertainment to professional editing.
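The training-free attention manipulation described above can be sketched in a toy form: during the target image's denoising, attention maps cached from the source generation are injected for the early steps, preserving layout and pose while the new subject's identity emerges later. This is a minimal illustration with mocked attention maps; names such as `swap_attention` and `swap_steps` are our own assumptions, not identifiers from the paper's released code.

```python
import numpy as np

def swap_attention(source_maps, target_maps, swap_steps):
    """For the first `swap_steps` denoising steps, replace the target
    generation's attention maps with those cached from the source image
    (preserving the source's spatial layout); later steps keep the
    target's own maps so the new subject's appearance can take over."""
    out = []
    for t, (src, tgt) in enumerate(zip(source_maps, target_maps)):
        out.append(src if t < swap_steps else tgt)
    return out

# Toy example: 10 denoising steps, each with a mocked 4x4 attention map.
rng = np.random.default_rng(0)
src_maps = [rng.random((4, 4)) for _ in range(10)]
tgt_maps = [rng.random((4, 4)) for _ in range(10)]
swapped = swap_attention(src_maps, tgt_maps, swap_steps=3)
```

In the real pipeline this interception would happen inside the diffusion model's self-attention and cross-attention layers rather than on standalone arrays; the sketch only shows the scheduling logic.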
Figure 2. Photoswap pipeline.
Figure 3. Results at different swapping steps.
From everyday objects to cartoon characters, the diversity of subject swapping tasks showcases the versatility and robustness of our framework across different contexts.
Figure 4. More results at various domains.
We use P2P+DreamBooth as a baseline for Photoswap. The baseline struggles to preserve both the background and the reference subject accurately, whereas Photoswap robustly handles a wide variety of cases.
Figure 5. Comparison with P2P+DreamBooth.
With proper parameters, we can control the similarity between the generated image and the source image.
Figure 6. Control over similarity.
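The similarity control above can be illustrated with a toy metric: the more denoising steps during which source attention maps are injected, the closer the result stays to the source layout. The parameter name `swap_steps` and the exact-match metric are our own illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def inject(source, target, swap_steps):
    # Replace the target's per-step maps with the source's for the early steps.
    return [s if t < swap_steps else g
            for t, (s, g) in enumerate(zip(source, target))]

def similarity_to_source(source, result):
    # Toy metric: fraction of steps whose maps match the source exactly.
    return float(np.mean([np.array_equal(s, r)
                          for s, r in zip(source, result)]))

rng = np.random.default_rng(1)
source = [rng.random((4, 4)) for _ in range(10)]
target = [rng.random((4, 4)) for _ in range(10)]

for k in (0, 5, 10):
    print(k, similarity_to_source(source, inject(source, target, k)))
# More injection steps -> the output stays closer to the source layout.
```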
@misc{gu2023photoswap,
  title={Photoswap: Personalized Subject Swapping in Images},
  author={Jing Gu and Yilin Wang and Nanxuan Zhao and Tsu-Jui Fu and Wei Xiong and Qing Liu and Zhifei Zhang and He Zhang and Jianming Zhang and HyunJoon Jung and Xin Eric Wang},
  year={2023},
  eprint={2305.18286},
  archivePrefix={arXiv},
  primaryClass={cs.CV}
}