HIVE: Harnessing Human Feedback for Instructional Visual Editing.

2024 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)

Cited 56 | Views 226
Abstract
Incorporating human feedback has been shown to be crucial for aligning text generated by large language models with human preferences. We hypothesize that state-of-the-art instructional image editing models, whose outputs are generated from an input image and an editing instruction, could similarly benefit from human feedback, as their outputs may not adhere to users' instructions and preferences. In this paper, we present a novel framework to harness human feedback for instructional visual editing (HIVE). Specifically, we collect human feedback on edited images and learn a reward function to capture the underlying user preferences. We then introduce scalable diffusion model fine-tuning methods that incorporate human preferences based on the estimated reward. In addition, to mitigate the bias arising from data limitations, we contribute a new 1M training dataset, a 3.6K reward dataset for reward learning, and a 1K evaluation dataset to boost the performance of instructional image editing. We conduct extensive quantitative and qualitative experiments, showing that HIVE is favored over previous state-of-the-art instructional image editing approaches by a large margin.
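
The abstract describes a two-stage recipe: learn a reward model from human feedback on edited images, then fine-tune the editing diffusion model using the estimated reward. The sketch below illustrates one plausible reading of reward-weighted fine-tuning; it is not the authors' code, and the module names (RewardModel, EditUNet), tensor shapes, and the linear noising schedule are hypothetical placeholders.

```python
# Hedged sketch of reward-weighted fine-tuning for an instructional editing
# diffusion model. All module names and shapes are illustrative placeholders,
# not HIVE's actual implementation.
import torch
import torch.nn as nn
import torch.nn.functional as F

class RewardModel(nn.Module):
    """Scores an (input image, edited image, instruction) triple, given as embeddings."""
    def __init__(self, dim=64):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(3 * dim, dim), nn.ReLU(), nn.Linear(dim, 1))

    def forward(self, src, edit, instr):
        return self.net(torch.cat([src, edit, instr], dim=-1)).squeeze(-1)

class EditUNet(nn.Module):
    """Stand-in denoiser conditioned on the source image, instruction, and timestep."""
    def __init__(self, dim=64):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(3 * dim + 1, dim), nn.ReLU(), nn.Linear(dim, dim))

    def forward(self, noisy_edit, src, instr, t):
        return self.net(torch.cat([noisy_edit, src, instr, t], dim=-1))

def reward_weighted_step(unet, reward_model, src, edit, instr, optimizer):
    """One fine-tuning step: each sample's denoising loss is weighted by
    exp(reward), so edits preferred by the learned reward dominate the update."""
    b, _ = edit.shape
    t = torch.rand(b, 1)                      # diffusion time in [0, 1)
    noise = torch.randn_like(edit)
    noisy_edit = (1 - t) * edit + t * noise   # simple linear noising schedule (illustrative)
    pred_noise = unet(noisy_edit, src, instr, t)

    with torch.no_grad():
        w = torch.exp(reward_model(src, edit, instr))  # per-sample preference weight

    per_sample = F.mse_loss(pred_noise, noise, reduction="none").mean(dim=-1)
    loss = (w * per_sample).mean()
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

if __name__ == "__main__":
    dim, b = 64, 8
    unet, rm = EditUNet(dim), RewardModel(dim)
    opt = torch.optim.Adam(unet.parameters(), lr=1e-4)
    src, edit, instr = torch.randn(b, dim), torch.randn(b, dim), torch.randn(b, dim)
    print(reward_weighted_step(unet, rm, src, edit, instr, opt))
```

The exponential weighting keeps every sample in the loss but lets high-reward edits dominate the gradient, one common way to fold a learned reward into supervised fine-tuning.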
Keywords
Human Feedback, Limited Data, Training Dataset, Input Image, Diffusion Model, Evaluation Dataset, Reward Function, User Preferences, Human Preferences, Fine-tuned Model, Image Editing, Fine-tuning Method, Training Data, Image Quality, Invertible, Diffusion Process, User Study, Image Pairs, Reward Learning, Proximal Policy Optimization, Reward Model, Cycle Consistency, Highest Reward