RLVF: Learning from Verbal Feedback without Overgeneralization

Stanford University
Corresponding author: moritzst@stanford.edu


Abstract

Large language models (LLMs) are increasingly deployed across diverse industries and users, necessitating the ability to align them with specific use cases and user preferences. Standard methods for such adaptation, such as reinforcement learning from human feedback, require extensive manual annotation. In contrast, prompting-based approaches to incorporating verbal feedback are efficient, but they struggle to appropriately incorporate nuanced, context-dependent user preferences and often overgeneralize the feedback to contexts where it should not apply. We study whether it is possible to adapt language models using verbal feedback without such overgeneralization. To do so, we propose Contextualized Critiques with Constrained Preference Optimization (C3PO): we first introduce a scheme for synthetically generating preference data that is both relevant and irrelevant to the provided feedback, and we then fine-tune the language model on the synthetic preference data while minimizing divergence from the original model on out-of-scope prompts. Our experimental results indicate that our approach applies verbal feedback to relevant scenarios while preserving existing behaviors in irrelevant contexts. Across many examples of human- and GPT-4-generated feedback, C3PO adheres to the given feedback comparably to in-context baselines while reducing overgeneralization by 30%.
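The data-generation step can be sketched as follows. This is a minimal illustration, not the authors' actual prompts or code: the category descriptions, the prompt wording, and the generate_prompts helper are ours, and we assume the OpenAI Python client for querying GPT-4.

from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# Illustrative category descriptions; the actual C3PO prompt templates differ.
CATEGORIES = {
    "in-scope": "prompts to which the feedback clearly applies",
    "near-scope": "prompts that look superficially similar but where the feedback should NOT be applied",
    "out-of-scope": "unrelated prompts for which the feedback is irrelevant",
}

def generate_prompts(feedback: str, category: str, n: int = 5) -> list[str]:
    """Ask GPT-4 to synthesize n prompts of the given category for one piece of feedback."""
    instruction = (
        f'Feedback: "{feedback}"\n'
        f"Write {n} {CATEGORIES[category]}. Return one prompt per line."
    )
    response = client.chat.completions.create(
        model="gpt-4",
        messages=[{"role": "user", "content": instruction}],
    )
    lines = response.choices[0].message.content.splitlines()
    return [line.strip() for line in lines if line.strip()]

feedback = "Use a more formal tone when writing emails to clients."
synthetic_prompts = {c: generate_prompts(feedback, c) for c in CATEGORIES}

These synthetic prompts then seed the in-scope preference pairs and the near-scope and out-of-scope regularization data described in the method overview below.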

Method overview


[Figure: C3PO pipeline — dataset generation and model training]

C3PO encourages feedback adherence on relevant prompts by fine-tuning with DPO on the generated in-scope data, while limiting overgeneralization through SFT losses on the generated near-scope and out-of-scope data, which regularize the model's behavior towards the original model on feedback-irrelevant prompts.
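Schematically, the combined objective can be sketched as follows (the notation and the weighting symbols $\lambda_1$, $\lambda_2$ are ours, used only to summarize the combination described above):

$\mathcal{L}_{\text{C3PO}} = \mathcal{L}_{\text{DPO}}(\mathcal{D}_{\text{in-scope}}) + \lambda_1\,\mathcal{L}_{\text{SFT}}(\mathcal{D}_{\text{near-scope}}) + \lambda_2\,\mathcal{L}_{\text{SFT}}(\mathcal{D}_{\text{out-of-scope}})$

Here the DPO term pushes the model toward feedback-adherent responses on in-scope prompts, while the SFT terms fit the original model's own completions on near-scope and out-of-scope prompts, which is what anchors behavior on feedback-irrelevant inputs.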

Results

C3PO substantially reduces overgeneralization (applying the given feedback to prompts where it is not actually relevant) with only a minor reduction in feedback adherence on prompts where the feedback is relevant.

Acknowledgements

We thank Modal.com for sponsoring the compute for this project. We thank OpenAI for providing API credits through their Researcher Access Program.

BibTeX

@misc{stephan2024rlvf,
    title={RLVF: Learning from Verbal Feedback without Overgeneralization}, 
    author={Moritz Stephan and Alexander Khazatsky and Eric Mitchell and Annie S Chen and Sheryl Hsu and Archit Sharma and Chelsea Finn},
    year={2024},
    eprint={2402.10893},
    archivePrefix={arXiv},
    primaryClass={cs.LG}
}