Enhancing Formalization for LLM’s Mathematical Reasoning

Authors

  • Wanchen Jiang

DOI:

https://doi.org/10.61173/cp7xmq61

Keywords:

Large Language Models (LLMs), Mathematical Reasoning, Self-Verification, Formalization, Robustness

Abstract

Large Language Models (LLMs) perform very well on natural language processing tasks, but they still struggle with complex mathematical reasoning. One key difficulty is the formalization step: correctly and precisely translating natural language math problems into mathematical expressions. Current methods rely heavily on a single formalization, so they are prone to misunderstanding and inconsistency. In this paper, we propose an enhanced formalization framework that combines multi-round formalization with self-verification. Specifically, the LLM formalizes the same problem several times into different formal representations, and a verification module then selects and corrects the results to ensure consistency and correctness. We run experiments with advanced models such as GPT-4 and DeepSeek on benchmark datasets such as GSM8K and MATH. The results show that our method improves accuracy by about 0.05 compared with methods such as Chain-of-Thought and Self-Consistency, and it is also more stable when facing rephrased adversarial examples.
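The multi-round formalization and verification loop described in the abstract can be sketched roughly as follows. This is a minimal illustration, not the paper's implementation: `formalize` stands in for an LLM call (the paper uses GPT-4 and DeepSeek), and `verify` is a placeholder well-formedness check; both function names and the stubbed candidates are hypothetical.

```python
from collections import Counter

def formalize(problem: str, round_idx: int) -> str:
    # Hypothetical stand-in for an LLM call that translates a natural
    # language math problem into a formal expression. A real system
    # would sample from GPT-4/DeepSeek here; the stub cycles through
    # fixed candidates to mimic sampling variation across rounds.
    candidates = ["3*x + 5 = 20", "3*x + 5 = 20", "3*x - 5 = 20"]
    return candidates[round_idx % len(candidates)]

def verify(expression: str) -> bool:
    # Placeholder verification module: accept only strings that look
    # like a single well-formed equation (exactly one '=' sign).
    # A real verifier would parse and check the expression semantically.
    return expression.count("=") == 1

def enhanced_formalization(problem: str, rounds: int = 5) -> str:
    # Multi-round formalization: sample several formal representations,
    # filter out those the verifier rejects, then select the most
    # consistent (majority) candidate among the accepted ones.
    sampled = [formalize(problem, i) for i in range(rounds)]
    accepted = [expr for expr in sampled if verify(expr)]
    most_common, _count = Counter(accepted).most_common(1)[0]
    return most_common

result = enhanced_formalization("Five more than three times x is twenty.")
# → "3*x + 5 = 20"
```

The selection step here is a simple majority vote over verified candidates, in the spirit of Self-Consistency; the paper's verification module additionally corrects results, which this sketch omits.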


Published

2025-12-19

Issue

Section

Articles