TypedThinker: Typed Thinking Improves Large Language Model Reasoning

ICLR 2025

Abstract

Large Language Models (LLMs) have demonstrated strong reasoning capabilities in solving complex problems. However, current approaches primarily enhance reasoning through the elaboration of thoughts while neglecting the diversity of reasoning types. LLMs typically employ deductive reasoning, proceeding step-by-step from given conditions, which limits their exploration during problem-solving. Our analysis reveals that certain problems are exclusively solvable through specific reasoning strategies such as inductive, abductive, or analogical reasoning. Incorporating diverse reasoning approaches, however, presents two key challenges: identifying the appropriate reasoning type for each problem and exploiting that type during problem-solving. We therefore propose TypedThinker, which predicts suitable reasoning types based on the problem and each type's previous effectiveness, and provides relevant demonstrations to guide LLMs in applying these strategies. Experimental results show significant improvements across multiple benchmarks, with performance gains of 3.4% for Mistral 7B, 6.5% for LLaMA3 8B, and 7% for Qwen 2 7B on logical and mathematical reasoning tasks. TypedThinker enhances LLM reasoning without requiring knowledge distillation from larger models, and it can be integrated into more advanced systems such as GPT-4o or specialized models such as MetaMath to diversify their reasoning approaches and improve their problem-solving capabilities.

Diverse Reasoning Types Matter

Motivation

🤔 Why do we need typed thinking?

  • Human reasoning is diverse and flexible
  • Current LLMs primarily use deductive reasoning
  • Many problems require specific reasoning types (inductive, abductive, analogical)
  • Limited exploration during problem-solving

🎯 TypedThinker's Solution

  • Predicts suitable reasoning types for each problem
  • Provides relevant demonstrations to guide LLMs
  • Enhances reasoning without requiring larger models

Key Challenges & Solutions

TypedThinker Framework

🔍 Two Key Challenges

  • Identifying appropriate reasoning types for each problem
  • Effectively applying these reasoning strategies

💡 Our Approach

  • Train a meta thinker to predict reasoning types based on their previous effectiveness
  • Collect demonstrations for each reasoning type
  • Finetune the reasoner to better apply each reasoning type
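The approach above can be sketched as a minimal pipeline: score candidate reasoning types for a problem, retrieve a demonstration of the chosen type, and prompt the reasoner with that strategy. This is an illustrative sketch only; the names, keyword heuristic, and effectiveness scores below are hypothetical stand-ins for TypedThinker's trained meta thinker and finetuned reasoner.

```python
# Toy sketch of the TypedThinker pipeline (hypothetical names and scores).
REASONING_TYPES = ["deductive", "inductive", "abductive", "analogical"]

# Stand-in for the trained meta thinker: combine each type's historical
# success rate with a simple keyword heuristic over the problem text.
EFFECTIVENESS = {"deductive": 0.62, "inductive": 0.48,
                 "abductive": 0.41, "analogical": 0.35}

KEYWORD_HINTS = {
    "inductive": ["pattern", "sequence", "generalize"],
    "abductive": ["explain", "most likely", "hypothesis"],
    "analogical": ["similar", "analogy", "is to"],
}

def select_reasoning_type(problem: str) -> str:
    """Pick the reasoning type with the highest combined score."""
    scores = dict(EFFECTIVENESS)
    text = problem.lower()
    for rtype, hints in KEYWORD_HINTS.items():
        if any(h in text for h in hints):
            scores[rtype] += 0.5  # boost types the problem hints at
    return max(scores, key=scores.get)

# One abbreviated demonstration per reasoning type.
DEMOS = {
    "deductive": "Q: All A are B; x is A. A: Step by step, x is B.",
    "inductive": "Q: 2, 4, 8, 16, ? A: Each term doubles, so 32.",
    "abductive": "Q: The grass is wet. A: The most likely cause is rain.",
    "analogical": "Q: Hand is to glove as foot is to? A: Sock.",
}

def build_prompt(problem: str) -> str:
    """Compose the reasoner prompt: strategy + demonstration + problem."""
    rtype = select_reasoning_type(problem)
    return (f"Use {rtype} reasoning.\n"
            f"Example: {DEMOS[rtype]}\n"
            f"Problem: {problem}")

print(build_prompt("Find the pattern in the sequence 3, 6, 12, 24."))
```

In the actual framework, the meta thinker is a trained model rather than a heuristic, and the reasoner is finetuned on type-specific demonstrations rather than prompted zero-shot.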

Poster

BibTeX

@inproceedings{wang2025typedthinker,
    title={TypedThinker: Typed Thinking Improves Large Language Model Reasoning},
    author={Wang, Danqing and Ma, Jianxin and Fang, Fei and Li, Lei},
    booktitle={International Conference on Learning Representations (ICLR)},
    year={2025}
}