Calibrated Self-Rewarding Vision Language Models

1UNC-Chapel Hill, 2University of Chicago, 3University of Maryland, 4Rutgers University, 5Independent Researcher
* Equal Contribution

The CSR framework operates through an iterative process of preference data generation and preference learning. During data generation, CSR uses sentence-level beam search to construct each response sentence by sentence, assigning a reward to every sentence. This reward, initially produced by the model itself, is then calibrated with image-relevance information, and preferences are determined by the cumulative reward of each response. In every iteration, CSR generates new preference data and performs preference learning on it, continuously improving the model's performance.
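The sketch below illustrates one way such a sentence-level beam search with calibrated rewards could look in Python. The scoring callables (`self_reward`, `image_relevance`), the mixing weight `alpha`, and the beam parameters are illustrative assumptions for exposition, not the paper's released interface.

```python
# Minimal sketch: sentence-level beam search with a calibrated reward.
# All callables and the mixing weight `alpha` are illustrative stand-ins.
from dataclasses import dataclass, field
from typing import Callable, List


@dataclass
class Beam:
    sentences: List[str] = field(default_factory=list)
    cumulative_reward: float = 0.0


def sentence_beam_search(
    propose_next: Callable[[List[str]], List[str]],   # candidate next sentences given the prefix
    self_reward: Callable[[str], float],              # model's own score for a candidate sentence
    image_relevance: Callable[[str], float],          # image-text relevance score (e.g., CLIP-style)
    beam_width: int = 3,
    max_sentences: int = 5,
    alpha: float = 0.7,                               # weight on the self-generated reward
) -> Beam:
    beams = [Beam()]
    for _ in range(max_sentences):
        expanded = []
        for beam in beams:
            for cand in propose_next(beam.sentences):
                # Calibrated reward: mix the self-reward with image relevance.
                r = alpha * self_reward(cand) + (1.0 - alpha) * image_relevance(cand)
                expanded.append(Beam(beam.sentences + [cand], beam.cumulative_reward + r))
        if not expanded:
            break
        # Keep the top-k partial responses by cumulative calibrated reward.
        beams = sorted(expanded, key=lambda b: b.cumulative_reward, reverse=True)[:beam_width]
    return beams[0]
```

The cumulative reward accumulated along each beam is what later determines which full responses are treated as chosen versus rejected.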

Introduction

Existing methods rely on additional models or human annotations to curate preference data and enhance modality alignment through preference optimization. These approaches are resource-intensive and may not accurately reflect the target LVLM's own preferences, making the curated preference data easily distinguishable from the model's outputs. To address these challenges, we propose Calibrated Self-Rewarding (CSR), which enables the model to self-improve by iteratively generating candidate responses, evaluating the reward for each response, and curating preference data for fine-tuning. For reward modeling, CSR adopts a step-wise strategy and incorporates visual constraints into the self-rewarding process to place greater emphasis on the visual input.
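The sketch below outlines what one such self-improvement iteration could look like, assuming user-supplied callables `generate_candidates`, `cumulative_reward`, and `preference_update` (e.g., a DPO step); these names are hypothetical placeholders, not functions from the released CSR codebase.

```python
# Minimal sketch of one CSR-style iteration under illustrative assumptions.
from typing import Any, Callable, Dict, Iterable, List, Tuple


def csr_iteration(
    model: Any,
    dataset: Iterable[Tuple[Any, str]],                              # (image, prompt) pairs
    generate_candidates: Callable[[Any, Any, str, int], List[str]],  # sample responses from the model
    cumulative_reward: Callable[[Any, Any, str], float],             # calibrated reward summed per sentence
    preference_update: Callable[[Any, List[Dict[str, Any]]], Any],   # preference learning step, e.g. DPO
    num_candidates: int = 4,
) -> Any:
    pairs = []
    for image, prompt in dataset:
        # Sample several candidate responses from the current model.
        candidates = generate_candidates(model, image, prompt, num_candidates)
        # Rank candidates by their cumulative calibrated reward.
        ranked = sorted(candidates, key=lambda resp: cumulative_reward(model, image, resp))
        # Highest-reward response becomes chosen, lowest-reward becomes rejected.
        pairs.append({"image": image, "prompt": prompt,
                      "chosen": ranked[-1], "rejected": ranked[0]})
    # Preference learning on the freshly curated pairs returns the updated model.
    return preference_update(model, pairs)
```

Because the preference pairs are regenerated by the current model at every iteration, the curated data keeps tracking the model's own output distribution rather than an external annotator's.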


Left: LLaVA 1.5 models of different parameter sizes continue to improve over successive CSR iterations. Right: The change in image-relevance scores before and after applying CSR.

Through the online CSR process, the model steadily improves its performance across benchmarks and increases the overall relevance of its responses to the visual input. It also narrows the gap between rejected and chosen responses, thereby raising the model's performance lower bound.

Results

Compared to the original model and self-rewarding approaches, CSR effectively improves performance across various benchmarks.


Compared to other data-driven preference learning methods and self-rewarding approaches, CSR demonstrates superior performance [1].

LLaVA 1.5, optimized through CSR, outperforms other open-source LVLMs across various benchmarks [2].

Additionally, as the model continues to learn iteratively online, CSR effectively reduces model hallucinations and enhances overall capabilities [3-4].

BibTeX

@article{zhou2024calibrated,
  title={Calibrated Self-Rewarding Vision Language Models},
  author={Zhou, Yiyang and Fan, Zhiyuan and Cheng, Dongjie and Yang, Sihan and Chen, Zhaorun and Cui, Chenhang and Wang, Xiyao and Li, Yun and Zhang, Linjun and Yao, Huaxiu},
  journal={arXiv preprint arXiv:2405.14622},
  year={2024}
}