Learning from Mistakes via Cooperative Study Assistant for Large Language Models

UC Santa Barbara, Carnegie Mellon University
EMNLP 2023

Cooperative interaction between two agents: the main LLM and the study assistant. The study assistant helps the LLM revise its response by analyzing its previous mistakes and providing guidelines based on the ground truth. It also maintains a mistake memory of all mistakes the LLM made on the training set. During inference, the study assistant provides guidance directly without the ground truth and retrieves similar mistakes from the collection.

Abstract

Large language models (LLMs) have demonstrated their potential to refine their generation based on their own feedback. However, the feedback from the LLM itself is often inaccurate, thereby limiting its benefits. In this paper, we propose Study Assistant for Large LAnguage Model (SALAM), a novel framework with an auxiliary agent to assist the main LLM in learning from mistakes through interactive cooperation. In the gathering phase, the study assistant agent probes the main LLM, analyzes its errors, and collects the interactions in a mistake memory. During the examination phase, the study assistant provides guidelines by retrieving relevant cases to help the main LLM anticipate and avoid similar errors. We first investigate the effectiveness of a general study assistant and then customize it to provide LLM-specific guidance through imitation learning from successful guidance experiences. Our experiments on three LLMs over two challenging benchmarks demonstrate that SALAM can significantly boost LLMs by an accuracy margin of up to 6.6 on BBH and 12.6 on BBQ.

Cooperation Makes LLM Better

SALAM

😨 LLM may self-reflect, but is the reflection always reliable and reusable?

  • Stopping or continuing the refinement loop at the wrong time
  • Feedback that is too vague to guide the revision
  • Repeated mistakes because previous reflections are not remembered

🧐 We need an expert to help LLMs reflect

  • analyze common misunderstandings and provide global guidelines
  • collect this experience for future use

Mistake Gathering & Examination

Mistake Memory

📚 The mistake memory utilizes previous mistakes

  • Collect mistakes from the training data, where ground truth is available
  • Let the main LLM interact with the study assistant until it reaches the correct answer
  • Store these experiences in the mistake memory (see the sketch below)
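
A minimal sketch of the gathering loop, assuming hypothetical main_llm.answer and study_assistant.give_feedback helpers (this is an illustration, not the official implementation):

# Gathering phase sketch; main_llm and study_assistant are placeholder objects.
def is_correct(response: str, ground_truth: str) -> bool:
    # naive exact match; a real setup needs task-specific answer extraction
    return response.strip().lower() == ground_truth.strip().lower()

def gather_mistakes(train_set, main_llm, study_assistant, max_turns=3):
    mistake_memory = []  # each entry: query, wrong response, assistant feedback
    for query, ground_truth in train_set:
        context = []
        for _ in range(max_turns):
            response = main_llm.answer(query, context)      # main LLM attempts the query
            if is_correct(response, ground_truth):          # stop once the answer is correct
                break
            # during gathering, the study assistant can see the ground truth
            feedback = study_assistant.give_feedback(query, response, ground_truth)
            mistake_memory.append({"query": query, "response": response, "feedback": feedback})
            context.append(feedback)                        # retry with the new guideline
    return mistake_memory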

📝 No ground truth is provided during Examination

  • The study assistant retrieves similar mistakes from the memory and provides guidelines without the ground truth (see the retrieval sketch below)
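
A minimal retrieval sketch for the examination phase; sentence-transformers is an assumed embedding backend here, not necessarily the paper's retriever:

# Examination phase: retrieve similar past mistakes without the ground truth.
from sentence_transformers import SentenceTransformer, util

encoder = SentenceTransformer("all-MiniLM-L6-v2")  # assumed embedding model

def retrieve_similar_mistakes(query, mistake_memory, top_k=3):
    query_emb = encoder.encode(query, convert_to_tensor=True)
    mem_embs = encoder.encode([m["query"] for m in mistake_memory], convert_to_tensor=True)
    scores = util.cos_sim(query_emb, mem_embs)[0]            # similarity to every stored mistake
    top_idx = scores.topk(min(top_k, len(mistake_memory))).indices
    # the feedback attached to these cases becomes the guideline for the new query
    return [mistake_memory[i] for i in top_idx]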

Train a model-agnostic Study Assistant

Focus only on the case, not the LLM!
  1. Dataset: Collect mistakes from different LLMs and get feedback data from GPT-4
    • Analysis (why the response is wrong)
    • Guideline (how to avoid the mistake)
  2. Training: finetune a LLaMA-based study assistant on the feedback dataset (an illustrative data format follows below).
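
An illustrative example of one feedback record and how it could be turned into a supervised finetuning pair; the field names, example contents, and prompt template are assumptions, not the paper's exact schema:

# Illustrative feedback record (schema and wording are assumptions).
record = {
    "query": "Which statement contains a logical fallacy? ...",
    "wrong_response": "(B)",
    "analysis": "The response confuses correlation between the premises with entailment.",
    "guideline": "Check whether each premise actually supports the conclusion before answering.",
}

def format_for_finetuning(record):
    # input: the mistake; target: GPT-4's analysis and guideline
    prompt = (
        f"Query: {record['query']}\n"
        f"Response: {record['wrong_response']}\n"
        "Why is this response wrong, and how can similar mistakes be avoided?"
    )
    target = f"Analysis: {record['analysis']}\nGuideline: {record['guideline']}"
    return {"prompt": prompt, "completion": target}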

Tailor a model-specific Study Assistant for each LLM

Each LLM has its own opinion!
  1. Formulation: Markov decision process
    • State S: (query, response, context)
    • Action A: feedback from the study assistant
    • Reward R: LLM performance
      • 1 if the LLM’s response is correct
      • 0 otherwise
    • Policy 𝜋(𝑎|𝑠): a language model (the study assistant) that provides feedback

  2. Offline Sampling: Collect a replay dataset and keep only successful trajectories (see the sampling sketch after this list).
  3. Training: finetune a LLaMA-based study assistant on the filtered replay dataset.
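
A minimal sketch of offline sampling under this MDP: roll out feedback on the target LLM, compute the reward, and keep only successful trajectories as imitation-learning data. main_llm.answer and sample_feedback are placeholders, and is_correct is the same check as in the gathering sketch above:

# Offline sampling sketch for the model-specific study assistant.
def collect_replay_dataset(train_set, main_llm, study_assistant):
    replay = []
    for query, ground_truth in train_set:
        response = main_llm.answer(query, [])                          # state s = (query, response, context)
        if is_correct(response, ground_truth):
            continue                                                   # no mistake, nothing to learn from
        feedback = study_assistant.sample_feedback(query, response)    # action a ~ pi(a|s)
        revised = main_llm.answer(query, [feedback])
        reward = 1 if is_correct(revised, ground_truth) else 0         # R: did the guidance help?
        if reward == 1:
            # successful trajectories become finetuning targets (imitation learning)
            replay.append({"state": {"query": query, "response": response}, "action": feedback})
    return replay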


BibTeX

@article{wang2023learn,
      title={Learn from Mistakes through Cooperative Interaction with Study Assistant},
      author={Wang, Danqing and Li, Lei},
      journal={The 2023 Conference on Empirical Methods in Natural Language Processing (EMNLP)},
      year={2023}
}