SeedEdit

We introduce SeedEdit, a diffusion model that can revise a given image with any text prompt. In our view, the key to such a task is to strike an optimal balance between maintaining the original image, i.e., image reconstruction, and generating a new image, i.e., image re-generation. To this end, we start from a text-to-image model, which can be regarded as a weak editing model focused on re-generation, and gradually align it into a strong image editor that balances the two tasks well. SeedEdit achieves more diverse and stable editing than prior image editing methods, enabling high-precision zero-shot control and stable sequential editing over images generated by diffusion models.