GazeFusion: Saliency-Guided Image Generation

Yunxiang Zhang, Nan Wu, Connor Z. Lin, Gordon Wetzstein, Qi Sun

    Research output: Contribution to journal › Article › peer-review

    Abstract

    Diffusion models offer unprecedented image generation power given just a text prompt. While emerging approaches for controlling diffusion models have enabled users to specify the desired spatial layouts of the generated content, they cannot predict or control where viewers will pay more attention due to the complexity of human vision. Recognizing the significance of attention-controllable image generation in practical applications, we present a saliency-guided framework to incorporate the data priors of human visual attention mechanisms into the generation process. Given a user-specified viewer attention distribution, our control module conditions a diffusion model to generate images that attract viewers' attention toward the desired regions. To assess the efficacy of our approach, we performed an eye-tracked user study and a large-scale model-based saliency analysis. The results show that both the cross-user eye gaze distributions and the saliency models' predictions align with the desired attention distributions. Lastly, we outline several applications, including interactive design of saliency guidance, attention suppression in unwanted regions, and adaptive generation for varied display/viewing conditions.
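    The abstract describes conditioning a diffusion model on a user-specified attention distribution. The sketch below is not the authors' released code; it assumes a ControlNet-style conditioning pathway and uses placeholder checkpoint names (the trained GazeFusion control module is not assumed to be published under these IDs) to illustrate how a 2D saliency map could be supplied as the control signal.

    ```python
    # Minimal sketch of saliency-map conditioning via Hugging Face `diffusers`.
    # Checkpoint IDs marked as placeholders are hypothetical.
    import numpy as np
    import torch
    from PIL import Image
    from diffusers import ControlNetModel, StableDiffusionControlNetPipeline

    # Hypothetical saliency-conditioned control module (placeholder path).
    controlnet = ControlNetModel.from_pretrained(
        "path/to/saliency-controlnet", torch_dtype=torch.float16
    )
    pipe = StableDiffusionControlNetPipeline.from_pretrained(
        "runwayml/stable-diffusion-v1-5",
        controlnet=controlnet,
        torch_dtype=torch.float16,
    ).to("cuda")

    # User-specified viewer attention distribution: a saliency map in [0, 1],
    # here a Gaussian blob centered on the right half of the frame.
    h = w = 512
    ys, xs = np.mgrid[0:h, 0:w]
    saliency = np.exp(
        -((xs - 0.7 * w) ** 2 + (ys - 0.5 * h) ** 2) / (2 * (0.15 * w) ** 2)
    )
    saliency_img = Image.fromarray((255 * saliency).astype(np.uint8)).convert("RGB")

    # The control image steers generation so that attention-drawing content
    # lands in the high-saliency region specified above.
    image = pipe(
        "a cozy reading nook with warm afternoon light",
        image=saliency_img,
        num_inference_steps=30,
    ).images[0]
    image.save("saliency_guided.png")
    ```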

    Original language: English (US)
    Article number: 14
    Journal: ACM Transactions on Applied Perception
    Volume: 21
    Issue number: 4
    DOIs
    State: Published - Nov 15, 2024

    Keywords

    • Human Visual Attention
    • Controllable Image Generation
    • Perceptual Computer Graphics

    ASJC Scopus subject areas

    • Theoretical Computer Science
    • General Computer Science
    • Experimental and Cognitive Psychology
