
[arxiv] Internal Consistency and Self-Feedback in Large Language Models: A Survey

Source

Xun Liang, Shichao Song, Zifan Zheng, Hanyu Wang, Qingchen Yu, Xunkai Li, Rong-Hua Li, Feiyu Xiong, Zhiyu Li

Large language models (LLMs) are expected to respond accurately but often exhibit deficient reasoning or generate hallucinatory content. To address these, studies prefixed with “Self-” such as Self-Consistency, Self-Improve, and Self-Refine have been initiated. They share a commonality: involving LLMs evaluating and updating themselves to mitigate the issues. Nonetheless, these efforts lack a unified perspective on summarization, as existing surveys predominantly focus on categorization without examining the motivations behind these works. In this paper, we summarize a theoretical framework, termed Internal Consistency, which offers unified explanations for phenomena such as the lack of reasoning and the presence of hallucinations. Internal Consistency assesses the coherence among LLMs' latent layer, decoding layer, and response layer based on sampling methodologies. Expanding upon the Internal Consistency framework, we introduce a streamlined yet effective theoretical framework capable of mining Internal Consistency, named Self-Feedback. The Self-Feedback framework consists of two modules: Self-Evaluation and Self-Update. This framework has been employed in numerous studies. We systematically classify these studies by tasks and lines of work; summarize relevant evaluation methods and benchmarks; and delve into the concern, “Does Self-Feedback Really Work?” We propose several critical viewpoints, including the “Hourglass Evolution of Internal Consistency”, the “Consistency Is (Almost) Correctness” hypothesis, and “The Paradox of Latent and Explicit Reasoning”. Furthermore, we outline promising directions for future research. We have open-sourced the experimental code, reference list, and statistical data, available at https://github.com/IAAR-Shanghai/ICSFSurvey.

URL: https://huggingface.co/papers/2407.14507

Conversation

▲ Please explain the paper to reader with no relevant technical or research background, be clear and concise.

This paper explores the problem of large language models (LLMs) like ChatGPT sometimes making mistakes or generating inaccurate information. The authors argue that these issues stem from a lack of "internal consistency" within the model.

Imagine a model as a human trying to answer a question. If the human is confused about their own thoughts or reasoning, they are likely to give a wrong or inconsistent answer. Similarly, LLMs can struggle with internal consistency, leading to errors.

The paper proposes a framework called "Self-Feedback" to address this issue. This framework involves the LLM evaluating its own output and then using that feedback to improve its responses. It's like a human thinking about their answer, realizing a mistake, and then correcting it.

The paper then dives into different ways LLMs can self-evaluate and update themselves. It explores various techniques, including:

  • Reasoning Topologically: Helping the model find the best reasoning path to reach a correct answer.
  • Refining with Responses: Having the model generate multiple answers and then selecting the best one.
  • Multi-Agent Collaboration: Using multiple LLMs to work together and debate to reach a consensus.
  • Refining the Response Iteratively: Having the model refine its own answer step-by-step until it's accurate.
  • Mitigating Hallucination while Generating: Helping the model avoid generating false information as it creates text.
  • Decoding Truthfully: Improving the model's ability to choose the most accurate words when generating text.
  • Activating Truthfulness: Enhancing the model's internal representation of truthfulness to make it more likely to generate accurate information.

The paper concludes by discussing the challenges and future directions for this research. It emphasizes the need for more research on how to improve LLMs' internal consistency and make them more reliable and accurate.

# Internal Consistency and Self-Feedback in Large Language Models: A Survey

Xun Liang∗, Senior Member, IEEE, Shichao Song∗, Zifan Zheng∗, Hanyu Wang, Qingchen Yu, Xunkai Li, Rong-Hua Li, Feiyu Xiong, Zhiyu Li†

Abstract—Large language models (LLMs) are expected to respond accurately but often exhibit deficient reasoning or generate hallucinatory content. To address these, studies prefixed with “Self-” such as Self-Consistency, Self-Improve, and Self-Refine have been initiated. They share a commonality: involving LLMs evaluating and updating themselves to mitigate the issues. Nonetheless, these efforts lack a unified perspective on summarization, as existing surveys predominantly focus on categorization without examining the motivations behind these works. In this paper, we summarize a theoretical framework, termed Internal Consistency, which offers unified explanations for phenomena such as the lack of reasoning and the presence of hallucinations. Internal Consistency assesses the coherence among LLMs' latent layer, decoding layer, and response layer based on sampling methodologies. Expanding upon the Internal Consistency framework, we introduce a streamlined yet effective theoretical framework capable of mining Internal Consistency, named Self-Feedback. The Self-Feedback framework consists of two modules: Self-Evaluation and Self-Update. The former captures Internal Consistency signals, while the latter leverages the signals to enhance either the model's response or the model itself. This framework has been employed in numerous studies. We systematically classify these studies by tasks and lines of work; summarize relevant evaluation methods and benchmarks; and delve into the concern, “Does Self-Feedback Really Work?” We propose several critical viewpoints, including the “Hourglass Evolution of Internal Consistency”, the “Consistency Is (Almost) Correctness” hypothesis, and “The Paradox of Latent and Explicit Reasoning”. Furthermore, we outline promising directions for future research. We have open-sourced the experimental code, reference list, and statistical data, available at https://github.com/IAAR-Shanghai/ICSFSurvey.

Index Terms—Large Language Model (LLM), Internal Consistency, Self-Feedback, Reasoning, Hallucination.

# I. INTRODUCTION

Large language models (LLMs), including ChatGPT1, have made significant advancements in the field of natural language processing (NLP), demonstrating capabilities close to basic human intelligence. These include features like performing basic reasoning and learning from examples [1].

∗Equal contribution. †Corresponding author: Zhiyu Li (lizy@iaar.ac.cn). Xun Liang, Shichao Song and Hanyu Wang are with the School of Information, Renmin University of China, Beijing, China. Zifan Zheng, Qingchen Yu, Feiyu Xiong and Zhiyu Li are with the Large Language Model Center, Institute for Advanced Algorithms Research, Shanghai, China. Xunkai Li and Rong-Hua Li are with the School of Computer Science and Technology, Beijing Institute of Technology, Beijing, China.
1 https://openai.com/index/chatgpt/

# TABLE I

| |Lack Reasoning|Exhibit Hallucination|
|---|---|---|
|Definition|Reasoning is the process of thinking about something logically and systematically, using evidence and past experiences to reach a conclusion or make a decision [13].|Hallucination is the text generated by language models that does not conform to the facts, contradicts itself, or does not meet the requirements of the user [14].|
|Real-World Example|User: There is a family where mom and dad have 3 daughters and each daughter has 1 younger brother, so how many people are there in this family? GLM-3: This family consists of: (1) Mom and Dad: 2 people; (2) 3 daughters: 3 people; (3) Each daughter has 1 younger brother, and since there are 3 daughters, this means there are 3 younger brothers: 3 people. Adding these numbers together, there are a total of 2 (mom and dad) + 3 (daughters) + 3 (younger brothers) = 8 people in this family.| |
|Benchmark Example|[MMLU] Q: Order the following (risk, return) pairs from least to most favourable, assuming the perspective of a rational and risk-averse investor: (2,2), (2,3) and (4,2). Risk is measured in standard deviations and return in percentage. HINT: Imagine a scatter diagram with standard deviation on the x-axis and return on the y-axis. A: (4,2), (2,2), (2,3).| |

… (e.g., Self-Evaluation Decoding [11]), and latent state strategies (e.g., Inference-Time Intervention [5]).

# A. Lack Reasoning and Exhibit Hallucination

The issues of "lack reasoning" and "exhibit hallucinations" in models represent persistent concerns, with their prominence in the academic community demonstrably increasing, as illustrated by Google Trends2 data shown in Fig. 2. In this section, we compare these two pivotal issues to demonstrate the necessity of examining them from the perspective of Internal Consistency. The definitions of the two issues, real-world examples3, and benchmark examples are shown in Table I.

Fig. 2. Relative search interest for the keywords "LLM Hallucination" and "LLM Reasoning" (worldwide). The data retrieval date is June 14, 2024.

"Reasoning" and "hallucination" have different connotations; however, it is challenging to clearly distinguish between "lack reasoning" and "exhibit hallucinations." As illustrated in Table I, in the real-world example, an LLM assumes that each daughter has a corresponding brother, clearly demonstrating a misunderstanding of the question, a hallucination that contradicts the user's input. This can also be interpreted as a failure in reasoning, as the model doesn't deduce that "each having a brother means there is only one brother in total." Consequently, it is difficult to definitively determine whether this scenario is a hallucination or a lack of reasoning.
Similarly, MMLU [15] serves as a widely recognized reasoning evaluation benchmark, while TruthfulQA [16] is a hallucination evaluation benchmark. Yet, both benchmark examples in Table I, addressing financial topics in a question–answering format, make it even harder to find an essential difference between them. Consequently, we argue that “lack reasoning” and “exhibit hallucinations” share the same essence. However, due to the different keywords, many works claim their methods solely “elevate reasoning”, even though they compare against baselines claiming to “alleviate hallucinations”, and vice versa. For example, Zhang et al. [17] proposed a method named RATT to enhance the reasoning ability but used the hallucination evaluation benchmark TruthfulQA [16] in the experiments. Similarly, Zhang et al. [7] focused on exploring methods to reduce hallucinations but employed the TriviaQA benchmark [18], supposedly to “test the model’s reasoning ability”. These instances underscore the confusion in the development of this field caused by the misuse of terminology. Therefore, a unified perspective is required to analyze these two similar phenomena. We adopt “Internal Consistency Mining” as a term to encompass methods aimed at “reasoning elevation” and “hallucination alleviation”. Indeed, LLMs don’t comprehend reasoning or hallucinations; they only predict the next token based on probabilistic principles. # B. Self-Feedback to Promote Internal Consistency To enhance the model’s internal consistency, scaling the model’s parameters is the most direct approach [19]. However, this doesn’t address the fundamental problem of weak consistency, as shown in Fig. 1. Furthermore, small language models provide distinct advantages, including lower computational costs and edge-side availability [20]. This indicates that while studying scaling models, it is also crucial to explore strategies for maximizing the capabilities given a small language model. Numerous initiatives have been undertaken to improve the Internal Consistency of models. A pivotal approach involves mimicking human thought processes, which enables models to self-evaluate their own outputs and self-update their structure. --- # TABLE II |Survey|Target|Framework Modules|Feedback Form|Depth| |---|---|---|---|---| |Self-Evolution [22]|Instruction Following↑, Reasoning↑; Math↑; Code Generating↑; Role-Play↑; Planning↑; Tool Using↑|Experience Acquisition; Experience Refinement; Updating; Evaluation|Textual; Scalar; External|Response| |Self-Correction [23]|Hallucination↓; Unfaithful Reasoning↓; Toxic, Biased and Harmful Content↓|Language Model (Patient); Critic Model (Doctor); Refine Model (Treatment)|Textual; Scalar; External|Response, Decoding| |Self-Correction [24]|Reasoning↑; Knowledge↑; Context-based Generation↑; Open-ended Generation↑|Initial Response Generation; Feedback; Refinement|Textual; External|Response| |Self-Feedback (Ours)|Internal Consistency Mining (Reasoning Elevation; Hallucination Alleviation)↑|Self-Evaluate; Internal Consistency Signal; Self-Update|Textual; Scalar; External; Contrastive|Response, Decoding, Latent| Several other surveys have also focused extensively on works prefixed with "Self-". To our knowledge, surveys such as [22], [23], and [24] demonstrate substantial similarities to our work. We present a straightforward comparison in Table II. A Survey on Self-Evolution of Large Language Models [22] primarily focuses on two aspects of literature. 
It covers articles on LLMs generating their own training data and papers employing multi-agent approaches for iterative optimization. Compared to other studies, this study is the most comprehensive in content. As shown in Table II, the methods summarized in this survey encompass various tasks, including Instruction Following, Code Generation, and Planning. Given the wide range of tasks covered, the objectives of Self-Evolution proposed in this survey may lack a clear focus. Automatically Correcting Large Language Models: Surveying the Landscape of Diverse Automated Correction Strategies [23] specifically concentrates on Self-Correction, a process where models correct their own errors. Defined tasks encompass reasoning errors and harmful information. Owing to its explicit task definition, this survey offers a more detailed and comprehensive theoretical analysis. These tasks are classified based on the timing of correction, including training-time, generation-time, and post-hoc corrections. Although the classification approach is logical, it doesn't provide a comprehensive overview related to latent layers. Furthermore, the survey introduces a third task, Biased and Harmful Content Elimination, which is notably different from the first two tasks and leans towards subjective evaluation, indicating the potential for improvement in the task definition. When Can LLMs Actually Correct Their Own Mistakes? A Critical Survey of Self-Correction of LLMs [24] also focuses on Self-Correction, delving deeper into the issue by primarily questioning whether models can genuinely Self-Correct. However, the survey's approach to this question is limited as it only considers cases where Feedback Signals are textual and partially external, thereby directly constraining the scope of Self-Correction. Consequently, we find that the survey's summary is not sufficiently comprehensive and lacks adequate support for its conclusions. In Section IX, we further explore this issue and provide a more insightful analysis. Compared to these surveys, our advantages are as follows: 1. Internal Consistency theoretical framework. We offer an in-depth review of LLMs' Internal Consistency, examining its phenomena, formalization, status quo, etc. Furthermore, we introduce the task of Internal Consistency Mining, providing a unified perspective for reasoning elevation and hallucination alleviation tasks. 2. Self-Feedback theoretical framework. Our framework includes Self-Evaluation, Consistency Signal Acquisition, and Self-Update. Characterized by its simplicity and comprehensiveness, this framework is poised to inspire further research. We summarize a broad array of Self-Evaluation strategies that extend from model responses to latent space exploration. These strategies allow us to capture a diverse range of Feedback Signals, extending beyond the scalar, textual, and external signals discussed in other surveys, to include contrastive signals. 3. Taxonomy based on lines of work. Unlike other surveys that categorize methods based on theoretical frameworks alone, we organize similar methods into coherent lines of work. Subsequently, we summarize their Self-Evaluation and Self-Update strategies per line. Thus, our summarized lines are consistent with the baselines mentioned in related works, enabling scholars to quickly position their research within the field. 4. A better response to "Does Self-Feedback Really Work?" 
Many surveys discuss this question but often provide biased (using the success or failure of a specific method to represent the entire field) or overly complex analyses. Thanks to our proposed perspective on Internal Consistency, we provide a more insightful analysis (providing different answers for each type of work).

# Fig. 3. Core Concepts and Article Organization (Mainly Involving Sections II–VII)

We also incorporate insights from other weakly related surveys. Section IV-A draws inspiration from [25], which meticulously investigates the uncertainty issues in LLMs. Additionally, [26] offers in-depth analyses of strategies like Chain of Thought, Self-Consistency, Tree of Thought, and Graph of Thought, which have guided our discussions in Section V-A. Moreover, the methodologies and insights from surveys on knowledge distillation [27] and preference learning [28] have been valuable in shaping Section VII.

# Structure of Our Work

The logical structure and the organizational framework of this paper are depicted in Fig. 3. Our research begins with the existing problem of low Internal Consistency in LLMs (Section II-C). Specific manifestations of low Internal Consistency include poor reasoning capabilities in question-answering (QA) scenarios and hallucinations in free-form generation (Section I-A). From a causal perspective, elements contributing to low Internal Consistency include inadequate latent reasoning abilities, the snowball effect of hallucinations, and the stochastic parrot hypothesis (Section II-D). We formalize internal consistency as the sampling-based consistency of model expressions across different layers (Section II-A). This involves enhancing response consistency, decoding consistency, and latent consistency (Sections II-A & II-B). To improve Internal Consistency, we propose Internal Consistency Mining across these layers. While scaling up the model is an intuitive solution, it comes with various cost-related challenges. Thus, we focus on the Self-Feedback theoretical framework, which mainly includes Self-Evaluation, Consistency Signal Acquisition, and Self-Update. Models obtain different forms of Internal Consistency signals through Self-Evaluation, and subsequently use these signals to Self-Update either responses or the model itself (Section III).
We explore six lines of work in Consistency Signal Acquisition (Section IV) and seven lines of work utilizing the Self-Feedback framework, divided into three lines dedicated to reasoning elevation (Section V) and four lines aimed at hallucination alleviation (Section VI).

# II. INTERNAL CONSISTENCY

Internal Consistency is the core concept in our work. In this section, we define this concept and present an experimental analysis that vividly delineates three distinct types of internal consistency. We discuss the strengths and weaknesses of current language models in terms of internal consistency and analyze their underlying reasons. Ultimately, we offer a straightforward explanation of internal consistency.

# A. Formulation

Consistency is a critical term in logic, referring to a system where no two statements contradict each other [31]. However, systems like those of language models typically exhibit inconsistencies. For instance, as shown in Fig. 1, even GPT-4o can't guarantee consistent responses to the same question. To better define internal consistency, we utilize a sampling-based approach to model expressions in LLMs [25]. For a large language model M and a user query x, we can obtain expressions from the model for this query, defined across three different types as follows:

|Expression Type|Description|
|---|---|
|Expression from Response Layer (text)|Expressions consist of sentences that may show inconsistencies due to random sampling or subtle variations in input queries.|
|Expression from Decoding Layer (token)|Expression refers to the choice of different tokens influenced by various decoding strategies (e.g., beam search, top-p).|
|Expression from Latent Layer (vector)|Expression at this layer encompasses the different activations of attention heads and latent states across the model's architecture, contributing to diverse outputs.|

For the expression type e, the distribution of different expressions produced by model M in response to the query x can be defined as follows:

Oe(M, x), e ∈ {response, decoding, latent} (1)

By sampling from this distribution (e.g., repeatedly prompting the model with a query or its synonymous variants), we can obtain a sampling set with potentially repeated elements:

Y = {y1, y2, . . . , yn}, yi ∼ Oe(M, x) (2)

Here, yi represents the i-th sample obtained from Oe(M, x). With this sampling set, various methods can be employed to estimate the consistency of these expressions. For example, as shown in Fig. 1, we can obtain Y = {4, 3, 3, 3, 4}. Below are two relatively trivial estimation methods. From a statistical perspective, we can compute the negative variance as a measure of consistency, as shown in Eq. 3; from an information-theoretic perspective, we can use the negative entropy as a measure of consistency, as shown in Eq. 4, where the sum runs over the distinct values y in Y. However, simple variance and entropy may not provide useful guidance for better result updates, and their applicability is limited to tasks where expressions are numerical labels.

-D(Y) = -E[(Y - E[Y])^2] = -0.24 (3)

-H(Y) = Σy p(y) log2 p(y) ≈ -0.971 (4)

We will comprehensively discuss existing methods for acquiring consistency signals in Section IV. Additionally, the three different types of "expressions" mentioned above constitute the main focus of this paper's discussion on three types of consistency: Response Consistency, Decoding Consistency, and Latent Consistency. Fig. 4 visually illustrates the positions of these three types in an LLM.

# Fig. 4. Positions of the Three Types of Consistency
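As a concrete check of the two trivial estimators in Eqs. 3 and 4, the short snippet below recomputes both values for the example sampling set Y = {4, 3, 3, 3, 4}. It is only an illustrative recomputation, not code from the paper's released repository.

```python
# Recompute the negative variance (Eq. 3) and negative entropy (Eq. 4)
# for the example sampling set Y = {4, 3, 3, 3, 4}.
import math
from collections import Counter

Y = [4, 3, 3, 3, 4]
mean = sum(Y) / len(Y)

neg_variance = -sum((y - mean) ** 2 for y in Y) / len(Y)   # Eq. 3: -D(Y) = -0.24

counts = Counter(Y)                                        # p(3) = 0.6, p(4) = 0.4
neg_entropy = sum((c / len(Y)) * math.log2(c / len(Y))
                  for c in counts.values())                # Eq. 4: -H(Y) ≈ -0.971

print(round(neg_variance, 3), round(neg_entropy, 3))       # -0.24 -0.971
```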
# B. The Hourglass Evolution of Internal Consistency

In this section, we delve deeper into the three different types of Internal Consistency to uncover their relationships. We conducted a simple experiment where a model was asked to respond to a straightforward query to observe the consistency of different types of expressions. The given model M is Llama3-8B-Instruct6, and the given query x is: How many full stops (periods) are there: ".!..!..!". We then posed this question to the model to observe the preferred answers in the {response, decoding, latent} layers. Below are the methods for collecting sampling sets at different layers. Refer to Appendix C for detailed experimental settings and results.

# Response Layer

We used common Top-p sampling decoding with a temperature coefficient control to sample five times. To induce the model to output longer and more diverse content, we enabled CoT reasoning. We observed the model's final text choices during free generation. One example output is: "Let's think step by step. There is one period at the end of the first part, then another after the second part, and finally one more after the third part. So, there are 3 periods in total." The resulting simplified sampling set is Yresponse = {5, 3, 3, 3, 3}.

# Decoding Layer

We used five decoding strategies to sample and observe the tokens selected. These decoding strategies included Greedy Decoding, Beam Search Decoding, Sampling Decoding, Top-k Sampling Decoding, and Top-p Sampling Decoding. The sampling set is Ydecoding = {4, 4, 3, 4, 4}.

# Latent Layer

We hypothesized that different attention heads lead to different answers. To test this, we kept only the h-th attention head of the l-th Transformer block of model M active and set the attention output of other heads in that layer to zero, observing which token had the highest probability in the forward pass. We used six different combinations of l and h, i.e., (l, h) ∈ {0, 15, 30} × {0, 16}. The resulting ordered sampling set is Ylatent = ⟨0, 0, 5, 4, 4, 4⟩.

The experimental results are also shown in Fig. 5. We observed that from the shallow layers of the latent layer to the deeper layers, through the intermediate decoding layer, and finally to the response layer, the consistency of the model's answers exhibits an "hourglass evolution" pattern. We analyze this phenomenon as follows: In the latent state, since the forward propagation is not yet complete, the attention heads near the bottom layers may tend to choose answers randomly. In contrast, the attention heads near the top layers can continually accumulate knowledge from the previous layers due to the presence of residual connections, leading to a gradual convergence in judgment and increased certainty in answers. During the decoding phase, all decoding strategies tend to output the token with the highest probability, thus maintaining high certainty. However, at the response stage, greater variability appears. We believe this is because, when the LLM generates the first token, it has already conducted reasoning (latent reasoning) and made an initial judgment of the answer. However, during the response phase, output tokens such as "I'm willing to help." can interfere with the model's initial reasoning and preliminary judgment, leading to a collapse in the computation results during latent reasoning. From this figure, we can also see that our goal is to have the orange consistency boundary line move as close to the center as possible, which is the goal of internal consistency mining.

# Fig. 5. The Hourglass Evolution of Internal Consistency
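One rough way to quantify the per-layer pattern described above is a simple modal-agreement score over each sampling set. The snippet below uses the sets reported in this section; the agreement metric itself is an illustrative choice of ours, not the measure used in the paper's experiments.

```python
# Modal-agreement scores for the sampling sets reported in this section.
# The sets are copied from the text; the metric is illustrative only.
from collections import Counter

samples = {
    "latent (6 attention-head settings)": [0, 0, 5, 4, 4, 4],
    "decoding (5 decoding strategies)":   [4, 4, 3, 4, 4],
    "response (5 CoT generations)":       [5, 3, 3, 3, 3],
}

for layer, ys in samples.items():
    answer, freq = Counter(ys).most_common(1)[0]
    # Share of samples that agree with the most frequent (modal) answer.
    print(f"{layer}: modal answer = {answer}, agreement = {freq / len(ys):.2f}")
```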
# C. Status Quo of LLM Consistency

This section briefly discusses the current performance of LLMs in terms of consistency. As indicated at the beginning of the survey, GPT-4o's various responses to the same question (see Fig. 1) already demonstrate that even relatively powerful language models still exhibit low consistency. There are also many works that either actively or inadvertently explore the Internal Consistency of LLMs. The well-known Self-Consistency [2] explores the use of the majority voting strategy, where the LLM generates multiple responses and selects the most voted one as the final response. Their experiments showed that on the reasoning benchmark GSM8K, this method increased the answer accuracy from 56.5% to 74.4%. This implies that many responses may be random and do not represent a consistent response. In terms of hallucination alleviation, Mündler et al. [21] proposed the Self-Contradict strategy, which generates different samples to identify self-contradictory content. Other works introduce the concept of latent reasoning. This concept implies that when a Transformer model responds to a query, it has already conducted reasoning before generating the first token, similar to how humans think before speaking. We will also explain this concept further in Section II-D.

# D. Sources of Low Internal Consistency

Why do models exhibit low consistency? Many scholars have conducted in-depth research on this phenomenon, exploring the causes from various angles such as prompt engineering, the decoding process, and the attention mechanism. Some studies explore the impact of slight variations in prompts on response consistency. The structure of a prompt may cause low consistency. Xie et al. [36] designed different CoT prompts and discovered that under the guidance of different CoTs, the distance between the latent states of the intermediate and final layers of LLMs varied significantly. Simply put, some prompt designs result in low consistency between different latent layers, with significant differences in the probability distribution of the next token prediction. Liu et al. [37] tested the accuracy of LLMs' responses by placing text containing answers in different positions within the prompt, discovering a "lost-in-the-middle" phenomenon. Models tend to focus on the content at the beginning and end of prompts, leading to inconsistent responses to differently structured prompts. Similarly, Liu et al. [38] found that hallucinations emerge when LLMs deal with long contexts. They attributed this to the soft attention mechanism, where attention weights become overly dispersed as sequence length increases, leading to poor consistency in reasoning paths. In addition to prompt engineering explorations, some researchers have directly investigated drawbacks within the LLM architecture. Yang et al. [33] studied whether models perform intermediate reasoning within latent states when answering questions and whether enhancing this intermediate reasoning signal strength could improve answer accuracy. Taking the example of the model answering "In what year did Plato's teacher die?", this work specifically investigated whether the model infers the intermediate entity "Socrates" in the latent layer and whether increasing the weight of the latent state corresponding to the intermediate entity would make the model answer more accurately. The experimental results showed that models do possess latent reasoning capabilities, but these are weak.
On the one hand, the signal strength of the intermediate entity is weak. On the other hand, enhancing this signal strength did not significantly improve the LLM's response. This indicates deficiencies in current LLM architectures in performing and expressing latent reasoning. This means that when predicting the next token, the model may make near-random predictions due to failed intermediate entity reasoning. Additionally, Zhang et al. [39] argued that models may exhibit hallucinations due to the "snowball effect". The full attention mechanism makes LLMs overly confident in their outputs, leading to compounding errors if an initial reasoning mistake occurs. Consequently, the model's responses may become inconsistent with the knowledge it has learned. Furthermore, some hypotheses and viewpoints reveal potential reasons for the low internal consistency of current LLMs. Bender et al. [40] proposed that large language models might be "stochastic parrots", learning rules and patterns from training data rather than truly understanding the grammar and semantics of natural language. This inherent randomness in generation reflects a form of internal inconsistency in the model. Ma et al. [41] proposed the Principle of Self-Consistency for intelligent agents, aiming to find a coherent model that minimizes internal differences between observed and regenerated data. They found many factors that could affect internal consistency, such as mode collapse, neural collapse, and over-fitting or under-fitting caused by overly high or low dimensional feature spaces. In conclusion, both theoretical and experimental findings indicate that model architecture, training processes, and users' queries can all contribute to low internal consistency. Understanding these causes can help researchers better address this issue and improve model performance.

# How to Understand Internal Consistency?

This section adopts a broader perspective, an alignment perspective, to help readers understand Internal Consistency. If there is Internal Consistency, there must also be corresponding External Consistency. Internal Consistency focuses on whether the model can align with itself during expression. External Consistency includes the alignment between the pre-training dataset and the pre-training model parameters, the alignment between the pre-trained model and the chat model, and the alignment between the chat model and the model subjected to RLHF (Reinforcement Learning with Human Feedback), among others. These different alignments are illustrated in Fig. 6. Each stage of alignment plays a unique role. Internal Consistency is crucial for AI safety. Kadavath et al. [42] mention the significant value of Internal Consistency:

- Truthfulness. LLMs should provide factually accurate information, including finding, using, and evaluating source materials correctly.
- Calibration. LLMs' probabilistic predictions should correspond with frequencies of occurrence.
- Self-Knowledge. LLMs should know what they know and make accurate predictions about their own behavior.
- Explainability. LLMs should reveal their "thinking" completely and faithfully.
- Non-deceptiveness. LLMs should be ensured not to lie, even when human preference might encourage systematic mistakes or provide rewards for pleasant misconceptions.

# Fig. 6. Various Alignments Involved in LLM Development

In addition to these significant values in AI safety, Internal Consistency also affects the robustness and reliability of AI systems.
For instance, the current inability of models to reason consistently directly impacts an AI agent's understanding of goals, leading to incorrect operations by the agent [43].

# III. SELF-FEEDBACK FRAMEWORK

# A. Formulation

Self-Feedback is a theoretical framework we have summarized from numerous studies. It includes Self-Evaluation and Self-Update, as shown in the middle part of Fig. 3. Self-Feedback refers to the method of improving a model's own Internal Consistency through its feedback, where "own" refers to a specific model entity or a specific response. "Own" can be extended to other models. For example, multiple different models can improve their capabilities through feedback generated from debates among them, which is a more generalized interpretation of Self-Feedback. Based on the above descriptive definition, we can formalize the process of Self-Feedback. For a given model M, query x, and a sampling set Y obtained under a certain expression, Self-Evaluate is first performed to obtain feedback f:

f = SelfEvaluateM(Y) (5)

# B. Taxonomy

The Self-Feedback methods introduced in this paper can be categorized from two different perspectives, as shown in the middle and bottom parts of Fig. 3. The first perspective classifies based on the components of Self-Feedback, as depicted in the middle part of Fig. 3. In Self-Feedback, the initial step is Self-Evaluation. The model can evaluate its own response, token probability distribution, or latent states. Through Self-Evaluation, different consistency signals can be obtained: scalar signals (e.g., the confidence level of the output), textual signals (e.g., the model's critique of its own output), external signals (e.g., results from a Python interpreter), and contrastive signals (e.g., different token probability distributions induced by two different prompts). These signals are then used for downstream Self-Update, with three typical update strategies. The model can use the signals to prompt itself for direct result modification, identify better outputs for re-tuning itself through Self-Evaluation, or even re-tune a student model in the domain of knowledge distillation.

# IV. TASK: CONSISTENCY SIGNAL ACQUISITION

Consistency signal acquisition refers to evaluating the consistency of expressions after obtaining the sampling set Y from the language model M for the query x. The evaluated signal can help the model update its expressions or parameters, thereby improving the model's Internal Consistency. Therefore, consistency signal acquisition is a pivotal task within the Self-Feedback framework. These methods require access either to the model's output contents only, or additionally to the logits, or to the latent states of the model. Depending on the depth of access required by different methods, the approaches mentioned in this section are categorized as black-box (accessing only the model's output contents), gray-box (also accessing logits), and white-box (also accessing the model's latent states). Numerous explorations have been undertaken in this task. These include:

- Section IV-A: Uncertainty Estimation (Scalar)
- Section IV-B: Confidence Estimation (Scalar)
- Section IV-C: Hallucination Detection (Scalar)
- Section IV-D: Verbal Critiquing (Textual)
- Section IV-E: Contrastive Optimization (Contrastive)
- Section IV-F: External Feedback (External)

The first three lines of work listed above carry different names but address research topics that are actually quite similar.
They all provide scalar feedback for LLM responses, and some works even mix the keywords from these three lines, such as [58]–[60]. The differences among these three lines mainly lie in the slight distinctions in their downstream tasks. Estimating the uncertainty and confidence of model expressions is essentially two sides of the same coin, as both calculate the model’s certainty level to obtain a scalar within the range of [0, 1]. This approach is often used to optimize the model’s reasoning ability, selecting a better reasoning path based on the uncertainty rate or confidence rate. Moreover, hallucination detection determines the presence of hallucinations (choosing from {0, 1}) through various methods, which is evidently more used in the task of hallucination alleviation. In addition to the aforementioned works that obtain scalar signals, other types of signals have been explored. Verbal Critiquing refers to having the language model directly evaluate the quality of an output, providing suggestions for improvement. External Feedback leverages external sources, such as textual feedback from other robust models or error messages from a compiler in code generation tasks. Finally, there is a more implicit signal, contrastive optimization, which obtains consistency signals through the comparison between different expressions and optimizes towards consistency. In this section, we focus more on the first three lines of work, as they are often studied independently and are hotspots in academic research. The last three lines of work are only briefly mentioned here, as they tend to be relatively simple or implicit methods. They will be elaborated in Sections V, VI. # A. Uncertainty Estimation Due to the black-box nature of the deep learning, uncertainty estimation has always been an important topic. Uncertainty estimation refers to estimating the data uncertainty, model uncertainty, and distributional uncertainty involved in the neural networks [61]. For uncertainty estimation in the NLP field, Hu et al. [25] conducted a detailed survey. Interested readers can refer to this article for further understanding. Here we briefly introduce the uncertainty modeling proposed in their work. The purpose of uncertainty modeling is to identify the sources of uncertainty that cause the model to generate uncertain results and to systematically understand the uncertainties present in the model. Hu et al. [25] categorize and explain the sources and modeling methods of uncertainty from the perspectives of models, outputs, and distributions. 1. Calibration Confidence-based Methods: This approach aims to correct the reliability of the uncertainty estimates provided by the model. The basic idea is to compare the accuracy of predicted probabilities with actual probabilities. 2. Sampling-based Methods: This approach models the variability of multiple expressions provided by the model, allowing us to observe the arising uncertainties. This method is also the focus of our article. 3. Distribution-based Methods: This approach starts with the training dataset, pre-judging the distribution characteristics of the dataset and directly calculating the model’s inherent uncertainty by constructing specific distribution functions. We introduce an important method cluster within Sampling-based Methods: Monte Carlo Dropout (MCD) [62]. 
In traditional deep learning, model predictions are often deterministic, and multiple samples yield consistent answers, preventing us from understanding the model’s implicit certainty about the results. The MCD method uses dropout technique to construct an implicit binomial distribution. For example, a 50% dropout probability constructs a B(#activation, 0.5) binomial distribution, which implicitly creates multiple models with different parameters θi ∼ q(θ), i = 1, 2, . . . , n. At test time, MCD uses multiple models with different parameters to obtain multiple output results P (yi|x; θi) and estimates the model’s uncertainty by calculating the variance of results. --- # TABLE III |Section: Paradigm|Expression|Signal Type|#LLM|Train.|Self-Evaluation|Self-Update|Typical Works| |---|---|---|---|---|---|---|---| |V-A: Reasoning Topo-logically|Response, Decoding|Scalar, Textual, Contrastive|1|No|Majority Voting, Value Function|Best Selection|[2], ToT [44], GoT [45]| |V-B: Refining with Responses|Response|Textual|1 or 2|Half|Sampling|Best Selection, Model Tuning|Self-Improve [46], ConCoRD [47], LEMA [48]| |V-C: Multi-Agent Collaboration|Response|Textual, Scalar|≥ 2|Rare|Negotiation|Answer Aggregation|FORD [49], MACNet [50], RE-FINER [51]| |VI-A: Refining the Response Iteratively|Response|Textual, External|1|Few|Model Generate Critique|Model Generate Refinement|Self-Refine [8], Reflexion [52], Self-Correct [9]| |VI-B: Mitigating Hallu. while Generating|Response|Textual, Contrastive, External|1|Few|Inherent model evaluation|Model Delete Hallucination|Self-Contradict [21], EVER [53], FEVA [54]| |VI-C: Decoding Truthfully|Decoding|Contrastive|1 or 2|No|Evaluate Decoding Path|Select the Best Decoding Path|DoLa [55], CAD [56], DIVER [57], SED [11]| |VI-D: Activating Truthfulness|Latent|Contrastive|1|No|Evaluate Latent States|Activate Best States|ITI [5], TrFr [6], TruthX [7]| Note: This table summarizes the characteristics of representative methods. The first three lines are dedicated to “Reasoning Elevation”, while the latter four lines are focused on “Hallucination Alleviation.” #LLM indicates the number of LLMs needed. Train. denotes “How many works need training?” LLM, obtaining different expressions is much easier, such as using temperature coefficients to control generation. From the perspective of MCD, changing the probability of the Softmax layer implicitly constructs different models. Besides MCD, which offers more explanatory insights, there are simpler, Sampling-based Methods available. For example, the Active Prompting strategy proposed by [63] uses disagreement in answers as an estimate of uncertainty, SelfEvaluate(Y) ≜ |unique(Y)|. Here, unique(Y) represents the set after removing duplicate elements. # B. Confidence Estimation Literally, confidence is just the antonym of uncertainty, and their evaluation objectives are the opposite. Uncertainty tends to provide insights into model interpretability, while confidence focuses on providing reliability scores, thereby directly enhancing user trust and engagement with the responses. In this line of work, Self-Evaluation is the core method. The concept of Self-Evaluation was first proposed in [42], where the goal is for the model to express its level of confidence using its own knowledge and reasoning. As shown in Fig. 7, the Self-Evaluation method simply asks the AI: Is the proposed answer True or False? Then, the confidence score, P(True), is extracted from the model’s logits. 
Prompt for Self-Evaluation:
Question: Who was the first president of the United States?
Proposed Answer: George Washington was the first president.
Is the proposed answer: (A) True (B) False
The proposed answer is:

Fig. 7. Prompt for Self-Evaluation [42]

Besides naively asking the model whether it thinks the proposed answer is correct, some works have proposed other frameworks. For instance, BSDetector [64] is a confidence estimation framework suitable for both black-box and white-box models. It combines the consistency of multiple outputs sampled from the model with the model's own reflection on its output, weighting these scores to obtain the confidence scores. As another example, TrustScore [65] is a reference-free confidence estimation framework using behavior consistency. It generates distractors based on entity information rules from Wikipedia, asks the LLM multiple times, and checks if it consistently chooses its own generated answer.

# C. Hallucination Detection

Hallucination Detection aims to identify untruthful or unfaithful text within a response. SelfCheckGPT [66] provides a reference-free hallucination detection framework. Specifically, the goal of SelfCheckGPT is to determine the presence of hallucination in a given query x and response y0. The framework works in three steps. Firstly, the model samples several different responses, Y = {y1, y2, . . . , yn}. Secondly, it calculates whether y1:n support y0. Finally, it summarizes the support level to calculate the final score. Designing the support-level metric is where creativity can be applied, and the authors provide five different methods:

- Similarity-based: Compute the negation of the mean similarity between y1:n and y0;
- QA-based: Generate many questions from y0 and test consistencies in the answers derived from y0 and y1:n;
- N-gram model-based: Build an n-gram model from Y, then use it to compute the negation of the mean transition probability between tokens in y0;
- Natural language inference (NLI)-based: Compute the mean probability of contradiction between the responses;
- Prompt-based: Similar to Self-Evaluation [42], directly ask the language model whether y1:n support y0.

The Alibaba team proposed INSIDE [67] for deeper exploration. They sampled the latent expressions. They extracted the vectors from the intermediate layers corresponding to the last token of the sampled sentences and calculated the covariance matrix of these vectors. Since the eigenvalue of the covariance matrix represents data variability, they used this value as a measure of hallucination. Intuitively, if the variability is small, it indicates that the sentence vectors are not far apart from each other, and the likelihood of hallucination is low. The above works generally employ a single LLM. Additionally, some methods utilize multiple agents to detect hallucination. For example, Cross Examination [68] uses two LLMs, an Examinee and an Examiner, employing a cross-examination approach to determine factual errors. The Examinee generates responses, while the Examiner iteratively asks questions related to the initial response. The Examinee answers these questions for several rounds, then the Examiner determines whether there are factual errors.

# D. Verbal Critiquing

The aforementioned methods can only provide scalar signals for downstream Self-Update, lacking flexibility. Inspired by the idea that "all tasks are generation tasks" [69], [70], many works have proposed allowing LLMs to generate more semantically rich textual signals.
These include:

- Let LLMs offer critiques. Saunders et al. [71] use a fine-tuned Self-Critiquing model to generate insights on content. McAleese et al. [72] use RLHF based on the GPT-4 model to train the model to critique code generation, resulting in CriticGPT. Du et al. [73] propose the Multi-Agent Debate method, where two agents generate modifications to each other's content, gradually converging to an outcome.
- Let LLMs summarize. Xiong et al. [49] use a Judge LLM to aggregate the results produced by multiple agents, providing a final judgment. Graph-of-Thought [45] uses the aggregation of thoughts to perform subsequent reasoning.
- Let LLMs refine the text. These methods involve the LLM generating a refined response as a better result [8], [9], [52].

# E. Contrastive Optimization

Contrastive optimization is an implicit signal acquisition method, which often involves constructing a scoring function, score(yi), to evaluate all responses in the sampling set Y, {score(yi) | i = 1, 2, . . . , n}. Finally, the best candidate is selected as ybest = arg max_yi score(yi). Here are some examples of scoring functions at different layers. At the latent layer, in order to find attention heads with a stronger preference for truthfulness, Li et al. [5] trained a probe to evaluate the attention heads' ability to answer questions truthfully. At the decoding layer, Self-Evaluation [42] can be used to evaluate the reasoning paths during beam search, comparing scores to choose a better decoding direction [74]. At the response layer, the well-known Self-Consistency [2] strategy implicitly relies on comparisons between different responses. A variant, Soft Self-Consistency [75], calculates the joint probability of tokens for each response as the scoring function.

# F. External Feedback

Sometimes, feedback from the model itself is not sufficient, and external feedback is needed. For example, in code generation, if there are hallucinations (bugs) in the code, it is difficult for even humans to accurately identify some bugs without executing the code with an external executor. Self-Debug [76] proposes using the execution results from an external executor as feedback. Besides using external tools, some works use other models as external feedback sources, such as a more powerful teacher model [77] or a peer model [73]. The commonly used RAG method, which incorporates information retrieved from external sources as external feedback, is another example. However, as mentioned in Section I-E, using external information sources is not the focus of this paper as it does not pertain to Internal Consistency Mining. Nevertheless, many studies employing external signals in Self-Feedback are summarized in Section VII.

# V. TASK: REASONING ELEVATION

Reasoning Elevation refers to enhancing the logical reasoning capabilities of language models during response generation to improve their internal consistency. The primary feature of this line of work is the use of benchmarks in the form of QA tasks. We have identified three significant lines of work, as shown in the upper part of Table III.

# A. Reasoning Topologically

When answering a question, LLMs may employ different reasoning methods to solve the same problem. For instance, solving a system of linear equations can be done using either substitution elimination or addition-subtraction elimination. Similarly, LLMs may choose different reasoning paths, but not all reasoning paths lead to the correct answer.
Therefore, finding reasoning paths that are consistent with the learned knowledge becomes a key issue, leading to a series of works focusing on optimizing reasoning paths. Fig. 8 summarizes the similarities and differences of these works.

# Fig. 8. Different Reasoning Topologies. I / T / O indicate input / intermediate thought / output, respectively; #(·) and d(·) indicate the number and the degree of nodes, respectively.

A survey [26] comprehensively examines various X-of-Thought methods, including Chain-of-Thought, Self-Consistency, Tree-of-Thought, and Graph-of-Thought. Here, we briefly introduce these methods. Readers can refer to [26] for more details. Input-Output (IO) is the most straightforward way to make a model reason. That is, ask a question and get an answer directly. However, due to the lack of intermediate steps, it often fails to solve complex problems. Consequently, the efficient and robust Chain-of-Thought (CoT) [10] was proposed, which requires the model to provide intermediate reasoning steps to avoid failures in solving difficult problems. However, this method's limitation is that if there is an error in the reasoning, the final result will also be affected. Subsequently, Self-Consistency (SC) [2] was introduced, where a simple majority voting strategy can significantly improve the accuracy of the final answer. Yet, the majority voting strategy is inherently limited in its exploratory capabilities and can only address problems like QA, where there is a fixed answer set, allowing the selection of the most frequent answers from different responses. To address these limitations, Tree-of-Thought (ToT) [44] was proposed, which views reasoning as a path linking different thoughts, with each node having multiple successor nodes for thorough local exploration. This method also developed a general scoring system to let the language model evaluate the quality of nodes and decide whether to explore further. Graph-of-Thought (GoT) [45] further extended this line of work by providing aggregation among different thought nodes, enhancing the utilization of reasoning chains. Maieutic Prompting [78], similar to GoT, attempts to establish entailment relationships between thoughts through rules, then constructs a Max-SAT [79] problem to obtain the best choices. Most X-of-Thought methods require sampling and aggregation of thoughts, often limited to queries with fixed label sets during aggregation. To solve this problem, several works have emerged. Multi-Perspective Self-Consistency (MPSC) [80] targets code generation tasks, evaluating each solution from multiple perspectives (solution, specification, and test case) to select the best one. Universal Self-Consistency (Universal SC) [81] uses LLMs instead of simple answer matching to choose the most selected response, enhancing the stability of the majority voting. Soft Self-Consistency (Soft SC) [75] proposes a more adaptive scoring function, calculating the joint probability of tokens in a response as the scoring function, thus extending the problem scope to soft labels.
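To make the Self-Consistency idea above concrete (sample several chain-of-thought answers, then keep the majority), here is a minimal, hedged sketch. The helper ask_llm is a hypothetical stand-in for any sampling-based model call and not an interface from the paper; the toy stub only demonstrates the voting logic.

```python
# Minimal Self-Consistency sketch: sample N answers, return the majority vote.
from collections import Counter
from typing import Callable, List
import random

def self_consistency(question: str,
                     ask_llm: Callable[[str], str],
                     n_samples: int = 5) -> str:
    # Self-Evaluation: collect multiple sampled answers for the same question.
    answers: List[str] = [ask_llm(question) for _ in range(n_samples)]
    # Self-Update: select the most frequent (i.e., most self-consistent) answer.
    return Counter(answers).most_common(1)[0][0]

# Toy usage with a stubbed "model" that answers inconsistently.
stub = lambda q: random.choice(["3", "3", "3", "4", "5"])
print(self_consistency("How many periods are there in '.!..!..!'?", stub))
```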
Additionally, Quiet Self-Taught Reasoner (Quiet-STaR) [82] addresses the issue mentioned in Section II-B, where "although complex reasoning in responses is beneficial for solving intricate problems, it may disrupt the model's latent reasoning due to redundant reasoning text, thereby increasing response-level inconsistency." Quiet-STaR samples rationales from the model's responses and wraps each rationale between special markers, <|startofthought|> and <|endofthought|>, to assist next-token reasoning. These rationales are invisible to the user, making latent reasoning explicit and effectively reducing conflicts.

However, these lines of work mostly focus on how to choose the next thought from an input, overlooking the input stage. An input is a combination of a query and a prompt template. While the query remains relatively unchanged, the instructions and demonstrations in the prompt template can be optimized. Several works have explored this area: DIVERSE [83] pre-constructs various prompt templates to increase prompt diversity. Promptbreeder [84] uses genetic algorithms [85] to continuously optimize the original prompt template. DSPy [86] innovatively builds a prompt optimizer, similar to a gradient optimizer in PyTorch. These methods extend reasoning topology to the input stage, demonstrating significant creativity. Boldly, we could construct a reasoning-topology-oriented framework incorporating prompt optimization, which could potentially solve more complex problems.

Furthermore, we can extend this approach to the decoding stage. CoT Decoding [87] incorporates CoT's ideas into the decoding process, attempting to identify decoding paths that naturally contain CoT reasoning. ToT Decoding [74] integrates ToT concepts into decoding, replacing the beam search criterion with Self-Evaluation [42], where each token's selection depends on confidence scores C(·), achieving better reasoning, as shown in Eq. 9, where yt denotes the t-th token in string y.

Self-Evaluation Strategy. The methods discussed in this section typically require searching the thought graph, necessitating evaluators to determine the usefulness of thoughts and whether they merit further exploration. These works generally use three approaches: Majority Voting, selecting the most consistent response among multiple thoughts [2]; Rule-based methods, designing problem-specific scoring functions, such as error scoring functions in sorting tasks that count the number of inversions and frequency differences before and after sorting [45]; and LLM-based methods, like the scoring function in the Game of 24 task, where LLMs rate a solution's feasibility as "sure/maybe/impossible" [44].

Self-Update Strategy. For Self-Consistency prompting, the update uses the majority voting result. For ToT prompting, the update method uses BFS and DFS strategies to search and select suitable thoughts as output. For GoT prompting, the update method is similar to ToT but involves a larger search space, aggregating different thoughts.

Despite the innovations, these methods have several limitations [26]: 1. They often select extremely simple tasks like Game of 24, Sorting, and Keyword Counting for experiments. 2. They incur high reasoning costs. 3. They struggle to adapt to general tasks and deployment.
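To make the Self-Evaluation and Self-Update strategies above concrete, here is a minimal sketch of a ToT-style breadth-first search over thoughts. The `propose_thoughts` and `evaluate_thought` functions are hypothetical stand-ins for LLM calls (proposal prompting and "sure/maybe/impossible"-style scoring); only the search-and-select skeleton is the point, not any particular paper's implementation.

```python
from typing import List, Tuple

def propose_thoughts(state: str, k: int = 3) -> List[str]:
    """Hypothetical LLM call: propose k candidate next thoughts for a partial solution."""
    return [f"{state} -> step{i}" for i in range(k)]

def evaluate_thought(state: str) -> float:
    """Hypothetical LLM call: score a partial solution (e.g. sure=1.0, maybe=0.5, impossible=0.0)."""
    return 0.5  # placeholder score

def tot_bfs(query: str, depth: int = 3, beam: int = 2) -> str:
    """Self-Update via BFS: keep the `beam` best-scored thoughts at every level."""
    frontier: List[Tuple[float, str]] = [(1.0, query)]
    for _ in range(depth):
        candidates = []
        for _, state in frontier:
            for thought in propose_thoughts(state):
                candidates.append((evaluate_thought(thought), thought))  # Self-Evaluation
        frontier = sorted(candidates, key=lambda c: c[0], reverse=True)[:beam]
    return frontier[0][1]

print(tot_bfs("Make 24 with 4, 9, 10, 13"))
```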
# B. Refining with Responses

Refining with Responses refers to the process where an LLM first generates multiple responses, then identifies the better responses or self-evaluates its own generated content and corrects errors, and finally refines its output or fine-tunes the model itself to improve response consistency. The following are three common lines of work.

Fine-tuning from the collected responses. This line of work involves "using self-generated data to fine-tune itself." Specifically, these methods often use LLMs to produce multiple answers, select the better responses among them, and then use these better responses to fine-tune the model, enhancing its reasoning capabilities. For example, Self-Improve proposed by [46] uses a majority voting strategy to obtain better outputs, collecting such data to fine-tune the model itself. Similarly, Tian et al. [88] propose a framework called Self-Improvement, which uses Monte Carlo Tree Search for data synthesis while generating fine-tuning datasets, improving the model's reasoning capabilities. This concept is not only effective in the reasoning domain but also finds applications in other fields such as the web agent domain, where Self-Improved Agents [89] improved performance by 31% using this method. In the preference optimization field, SRPO (Self-Improving Robust Preference Optimization) [90] and Self-Alignment [91] both utilize model-generated preferences to align with human preferences.

Learning from mistakes. This line of work is similar to fine-tuning from the collected responses but focuses on learning from errors and optimizing by avoiding mistakes. This intuitive method naturally improves model performance by avoiding errors. For instance, the LEMA (LEarning from MistAkes) method proposed by [48] samples multiple reasoning rationales, has GPT-4 annotate and correct the errors among them, and uses the corrected rationales to form a new dataset for re-fine-tuning the model. Similarly, Tong et al. [92] propose the Mistake Tuning scheme: the model rethinks and corrects its own errors based on references, and large amounts of such self-corrected data are then used to fine-tune the model.

Getting better responses with NLI models. Besides fine-tuning methods, we also describe rule-based optimization techniques using NLI. NLI is a classic task in traditional NLP that determines the relationship between two statements as entailment / contradiction / neutral. With an NLI model, we can identify the relationships between multiple samples and find better responses. For instance, Agarwal et al. [93] use a pre-trained NLI model to identify and correct logically inconsistent statements generated by a pre-trained language model. They then convert the entailment and contradiction probabilities of the NLI model into a Max-SAT problem [79] and use a constraint solver [94] to optimize and obtain more accurate and consistent predictions. Similarly, Mitchell et al. [47] propose ConCoRD (Consistency Correction through Relation Detection), which further utilizes a pre-trained NLI model to estimate the logical relationships between model predictions, constructs a factor graph representing the prediction probability distribution, and finds the maximum-probability prediction on the factor graph.
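As a rough illustration of how an off-the-shelf NLI model can serve as the consistency signal, the sketch below picks, from a set of sampled answers, the one most entailed by the others. The `nli_entailment_prob` stub and the sampled answers are invented; in practice the stub would call a real pre-trained NLI classifier, and methods like [93] and ConCoRD [47] use more principled Max-SAT or factor-graph formulations rather than this simple vote.

```python
from itertools import permutations
from typing import List

def nli_entailment_prob(premise: str, hypothesis: str) -> float:
    """Stub for a pre-trained NLI model returning P(entailment) for a statement pair."""
    return 1.0 if premise.split()[-1] == hypothesis.split()[-1] else 0.1

def most_supported_answer(answers: List[str]) -> str:
    """Select the answer that the other sampled answers entail most strongly."""
    support = {a: 0.0 for a in answers}
    for premise, hypothesis in permutations(answers, 2):
        support[hypothesis] += nli_entailment_prob(premise, hypothesis)
    return max(support, key=support.get)

samples = [
    "The capital is Canberra.",
    "Australia's capital is Canberra.",
    "The capital is Sydney.",
]
print(most_supported_answer(samples))
```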
# C. Multi-Agent Collaboration

The methods in this category generally involve using more than one LLM to collaboratively solve problems, address contradictions, and promote consistency, essentially constituting a generalized form of Self-Feedback. There are numerous papers in the Multi-Agent field; here, we list some typical and novel works that employ Multi-Agent systems for Self-Feedback. For a more comprehensive understanding, refer to the extensive survey on LLM agents by Wang et al. [95].

Multi-Agent Debate [73] utilizes multiple peer models that engage in iterative debates, with a fixed number of rounds as the stopping condition. Their experiments show that debates of three or fewer rounds can generally lead to convergence among agents (i.e., the LLMs consistently agreeing on the same answer). Xiong et al. [49] further propose FORD (Formal Debate Framework), which introduces a Judge LLM to summarize the agents' statements at the end, also using a fixed number of rounds as the stopping condition. They expand the scope of LLM debates by exploring the effects of debates among models with mismatched capabilities in various scenarios. REFINER [51] trains two models with different roles: a generator for intermediate reasoning steps and a critic for feedback, continuing the iterative dialogue until the correct answer is obtained or the critic has no further feedback. Notably, using the correct answer as a stopping condition has been criticized as unrealistic [96].

The Consensus Game proposed by Jacob et al. [97] deviates from the above frameworks by avoiding direct dialogue between LLMs. Instead, different LLMs participate in a game, based on the hypothesis that "asking a model for answer A to question Q (generative)" and "asking a model whether A is the answer to Q (discriminative)" lack consistency [98]. To achieve consistency, they prompt the generator to produce both correct and incorrect answers, then use the discriminator to evaluate its own responses, aiming for the generator and discriminator to reach a consensus (Nash equilibrium). They select the best response based on the degree of consistency.

The significant drawback of this line of work is the high inference cost, as it often requires different LLM instances, potentially consuming multiple times the GPU memory and increasing the inference burden due to the extensive context generated by agents. Additionally, most methods need a stopping condition to end the dialogue, and stopping after a fixed number of rounds is inflexible and can reduce performance; there is currently no flexible and efficient stopping criterion. However, Multi-Agent systems remain a promising AI direction, and cost issues shouldn't deter exploration. For instance, MACNet (Multi-Agent Collaboration Network) [50] uses dozens of agents and various network topologies to collaboratively solve problems. Despite the high costs, such exploration is beneficial for collaborative optimization.

# VI. TASK: HALLUCINATION ALLEVIATION

Reasoning elevation typically targets QA tasks, while hallucination alleviation is generally aimed at open-ended generation tasks such as story writing and code generation, emphasizing goals like fact enhancement, error reduction, and faithfulness enhancement. This section surveys these directions. We have categorized four significant lines of work, as shown in the lower half of Table III.

# A. Refining the Response Iteratively

This line of work is similar to Refining with Responses in Section V-B. The difference lies in the focus: Refining with Responses primarily targets QA tasks, where various methods often only need to consider intermediate reasoning steps and the correctness of the final answer.
This can be achieved by sampling multiple responses and synthesizing or selecting the better ones. In contrast, hallucination alleviation primarily deals with open-ended tasks such as story generation and code generation. When sampling multiple responses, it is still necessary to meticulously check each response for errors. Therefore, to alleviate hallucinations, it is crucial to iteratively refine and polish a single response, eliminating errors. The comparison is shown in Table IV.

TABLE IV: REFINING WITH RESPONSES VS. REFINING THE RESPONSE ITERATIVELY

| |Section V-B|Section VI-A|
|---|---|---|
|Name|Refining with Responses|Refining the Response Iteratively|
|Train.|Half needed|Few needed|
|Target|Reasoning Elevation|Hallucination Alleviation|
|Task|QA: Math, NLI, etc.|Open-Ended: Story, Code, etc.|
|Steps|1. Sample responses; 2. Aggregate them; 3. Refine|1. Generate one response; 2. Refine it; 3. Iterate|

This line of work is relatively mature. The most famous examples include Self-Refine [8], Reflexion [52], and Self-Correct [9]. These three frameworks share the basic structure of having the LLM provide textual feedback, which is then used to update the response iteratively until a stopping criterion is met or the maximum number of iterations is reached, as shown in Algorithm 1.

Algorithm 1 REFINING THE RESPONSE ITERATIVELY
Require: Input query x, model M, consistency signal generator SelfEvaluate(·), Self-Update strategy SelfUpdate(·), stopping criterion stop(·), max iteration T
1: y0 = M(x)
2: i ← 0
3: while i < T and not stop(x, yi) do
4:   fi ← SelfEvaluate(x, yi)
5:   yi+1 ← SelfUpdate(x, yi, fi)
6:   i ← i + 1
7: end while
8: return yi

Those methodologies primarily apply to closed-book generation tasks. For open-book generation tasks, current research focuses on leveraging external references to guide decoding. CAD [56] and ECAD [104] (named ECAD in this review) incorporate contextually relevant or irrelevant knowledge snippets into model inputs, intervening in the decoding process through contrastive decoding strategies to bridge the information gap between useful and non-useful information. To further exploit valuable information from references, FastMem [105] fine-tunes the final layer of LLMs to quickly memorize reference texts during inference, highlighting differences in decoding with and without references.

# Activating Truthfulness

Activating Truthfulness focuses on enhancing consistency in latent layers. Its core methods involve boosting attention heads and states that represent "truthfulness" within latent layers, or refining latent layers through fine-tuning, aiming to improve the model's internal consistency.

The exploration of latent truthfulness began with CCS (Contrast-Consistent Search) proposed by Burns et al. [106]. CCS investigates methods for mining knowledge embedded in latent layers by training a small classification head on Transformer latent layers. This method effectively activates model truthfulness, surpassing conventional inference methods. Inspired by CCS, scholars from Harvard University developed the Inference-Time Intervention (ITI) technique [5]. This technique involves two main steps: 1. Probe analysis: using probing techniques to identify attention heads in the model related to truthfulness. 2. Inference-time intervention: adjusting the selected attention heads during the model's answer generation process by increasing their weights. This guides the model towards more truthful reasoning paths.
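As a rough, self-contained illustration of the two ITI steps (not the authors' code), the sketch below trains a linear probe on synthetic per-head activations and then shifts activations along the probe's direction at inference time. The dimensions, data, and intervention strength `alpha` are all made up.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Synthetic activations of one attention head (n_samples x head_dim),
# labeled by whether the paired statement was truthful.
head_dim = 64
acts = rng.normal(size=(1000, head_dim))
hidden_truth_dir = rng.normal(size=head_dim)
labels = (acts @ hidden_truth_dir + rng.normal(scale=0.5, size=1000) > 0).astype(int)

# Step 1 (probe analysis): probe accuracy indicates how strongly this head encodes truthfulness.
probe = LogisticRegression(max_iter=1000).fit(acts, labels)
print("probe accuracy:", probe.score(acts, labels))

# Step 2 (inference-time intervention): push the head's activation along the
# probe's "truthful" direction by a strength alpha during generation.
alpha = 5.0
direction = probe.coef_[0] / np.linalg.norm(probe.coef_[0])

def intervene(head_activation: np.ndarray) -> np.ndarray:
    """Shift one activation vector toward the truthful side of the probe."""
    return head_activation + alpha * direction

print(intervene(acts[0])[:5])
```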
However, ITI has a limitation: it trains probes using only the latent state of the last token at the end of a question-answer pair. This approach lacks features and turns the problem into discerning truthfulness (discriminative) rather than generating truthful content (generative). Addressing these issues, TrFr [6] proposed the use of multi-dimensional orthogonal probes, effectively extracting features from both truthful and non-truthful texts to better identify effective attention heads. TruthX [7] explored a more efficient intervention strategy. It targets not only attention heads but also latent states in the feed-forward layer. Mapping these states separately using truthful and semantic encoders significantly reduces the impact on the language model's overall performance while enhancing representations of truthfulness.

Besides improving truthfulness, what other characteristics do latent layers possess? Recently, more work has begun to explore various features within latent layers. This not only demystifies model black boxes but also offers potential methods to mitigate hallucination issues. For example, Wu et al. [107] discovered that some attention heads focus more on long-context retrieval capabilities (strong copy-paste, needle-finding abilities). In tests like Needle-in-a-Haystack, blocking these attention heads causes performance to drop from 94.7% to 63.6%. Can enhancing a model's retrieval heads reduce hallucinations in long contexts? This is a question worth exploring. As another example, teams from Anthropic used dictionary learning to discover monosemantic features within LLMs [108], such as a "Golden Gate Bridge" feature; activating the latent states corresponding to the "Golden Gate Bridge" leads the model to consider itself to be the bridge. OpenAI also proposed Sparse Autoencoders (SAE) [109] to automatically identify latent features within language models. If feature exploration were more accessible, could we identify "human-likeness" features that prevent models from mistakenly perceiving themselves as human, thus reducing hallucinations related to cognitive errors? Exploring latent features in language models to reduce hallucinations from a white-box perspective is a promising direction. However, the main challenge lies in the inefficiency of probing, dictionary learning, SAEs, and similar methods, which require training many classifiers to effectively extract latent features from models.

In addition to implicit feature mining methods, Activating Truthfulness also includes truthfulness-oriented fine-tuning methods that essentially explore consistency within latent states. For instance, Tian et al. [110] proposed sampling multiple responses from an LLM and using Self-Evaluation to identify the more factual responses. Collecting these QA pairs and retraining the original LLM aligns with the fine-tuning approach mentioned in Section V-B, though it is not the primary focus of this section.

# VII. TASK: OTHERS

Apart from reasoning elevation and hallucination alleviation, several lines of work follow the Self-Feedback framework, although their objectives are not to improve the model's internal consistency. Examples include knowledge distillation, preference learning, and data augmentation. For the sake of completeness in this survey, this section briefly summarizes these other works adhering to the Self-Feedback framework.

# A. Preference Learning

Currently, the responses of LLMs in specific domains sometimes fail to satisfy users, even violating human values, social ethics, and morals.
The goal of Preference Learning is to enable LLMs to better follow human instructions and output responses that meet human expectations. Most of the work around this task can be broadly covered by the Self-Feedback framework. Specifically, the model generates initial responses during the Self-Evaluation process and then performs Self-Update based on the Feedback Signal. Here, the Feedback Signal mainly refers to the reward information given by a reward model R, which is trained on preference feedback. Preference feedback involves comparing and ranking different responses to the same question in terms of helpfulness, harmlessness, and honesty. The Self-Update here primarily refers to broadly updating the model M, including methods like supervised fine-tuning and reinforcement learning (such as PPO [111] and DPO [112]).

There are three main ways to obtain preference feedback. The first is through human feedback, as seen in works like OASST [113] and BeaverTails [114], which include human-annotated data from multiple annotators. The second involves feedback generated by models [115]–[117], offering lower annotation costs and faster feedback iteration compared to human feedback. The third type of feedback is derived from inductive bias. For instance, the SHP dataset mentioned in [118] uses upvotes and downvotes to differentiate between good and bad responses. Another example is ALMoST [119], which proposes prior rules, such as larger-parameter models generally performing better and prompts with more in-context examples yielding higher-quality responses; these rules are used to rank response quality across different models as preference feedback.

Based on preference feedback, we can train a reward model to output Feedback Signals. There are two common types of reward models. One is the Reward Model proposed in InstructGPT [120], with the loss function shown in Eq. 14, where rθ(x, y) represents the output of the Reward Model and the response y_w is ranked higher than y_l. However, this method's downside is that the overall score distributions for high-quality and low-quality responses are similar, making it difficult to effectively distinguish between different responses to different questions. To address this, Xu et al. [121] proposed an evaluation model that directly scores QA pairs.

# B. LLM-Based Knowledge Distillation

Small-parameter LLMs have faster inference and training speeds, but their reasoning capabilities are inferior to those of large-parameter LLMs. LLM-based knowledge distillation methods aim to transfer advanced capabilities from proprietary LLMs (such as GPT-4) to small-parameter open-source models (such as LLaMA and Mistral) [27]. These two models can be referred to as the "teacher model" and the "student model" respectively, with the teacher model guiding the student model to enhance its capabilities, fitting the generalized Self-Feedback framework proposed in this paper. During Self-Evaluation, the student model generates answers, which are then assessed by the teacher model. During Self-Update, the student model uses the evaluation signal to update itself or its answers. This signal can take the form of statistical metrics, such as MiniLLM [122] calculating the reverse Kullback-Leibler (KL) divergence between the probability distributions output by the student and teacher models, or GKD [123] computing metrics like forward KL divergence, reverse KL divergence, and generalized JSD.
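To illustrate the statistical feedback signals just mentioned, here is a small numpy sketch computing forward KL, reverse KL, and a JSD-style interpolation between hypothetical teacher and student next-token distributions. The logits are invented placeholders; real distillation applies such losses over whole sequences and vocabularies.

```python
import numpy as np

def softmax(logits, axis=-1):
    z = logits - logits.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def kl(p, q, eps=1e-12):
    """KL(p || q) summed over the vocabulary axis."""
    return np.sum(p * (np.log(p + eps) - np.log(q + eps)), axis=-1)

# Hypothetical next-token logits for a 5-token vocabulary.
teacher_logits = np.array([2.0, 1.0, 0.5, -1.0, -2.0])
student_logits = np.array([1.5, 1.2, 0.1, -0.5, -1.0])

p_teacher = softmax(teacher_logits)
p_student = softmax(student_logits)

forward_kl = kl(p_teacher, p_student)   # mode-covering signal (one GKD option)
reverse_kl = kl(p_student, p_teacher)   # mode-seeking signal (as in MiniLLM)
m = 0.5 * (p_teacher + p_student)
jsd = 0.5 * kl(p_teacher, m) + 0.5 * kl(p_student, m)  # a generalized-JSD instance

print(forward_kl, reverse_kl, jsd)
```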
The signal can also be in the form of natural language feedback, such as Selfee [124] utilizing ChatGPT as the teacher to provide textual feedback on the outputs of the student model, or PERsD [77], where the teacher executes the code generated by the student model and provides specific suggestions based on the errors. When the teacher and student models are the same LLM, this leads to Self-Knowledge Distillation (Self-KD). In Self-KD, the model iteratively updates its capabilities using the knowledge it gradually accumulates during training, falling under the narrow Self-Feedback paradigm. For example, the goal of Impossible Distillation [125] is to obtain a stronger paraphraser: during self-knowledge distillation, it evaluates its paraphrase results from perspectives such as semantics, format, and diversity, and further refines the high-quality data to fine-tune itself accordingly.

# C. Data Augmentation

Data Augmentation aims to construct and filter high-quality datasets using LLMs. It is somewhat similar to the methods in Sections VII-A and VII-B that combine feedback information to create datasets, but there are slight differences in focus and specific form. The latter focus on the model's capabilities, using datasets during the Self-Update stage for model fine-tuning, with most methods falling under narrow Self-Feedback. In contrast, Data Augmentation focuses on the dataset itself, updating the model's responses during the Self-Update stage to further refine the dataset, with most methods falling under generalized Self-Feedback. Self-Instruct [126] is a typical example: the LLM generates new task instructions during the Self-Evaluation stage and generates input-output instances based on the new instructions. It calculates the ROUGE-L metric between the new instructions and existing instructions as the Feedback Signal. Finally, during the Self-Update stage, it filters and screens the newly generated set of instructions.

Currently, methods applying LLMs to Data Augmentation and Synthetic Data Generation mainly focus on the prompt engineering layer; in other words, Self-Evaluation only involves responses. Many studies have shown that LLM responses are highly sensitive to prompt variations [127], [128]. Therefore, the main bottleneck in this task is how to design better prompts and how to deeply explore the relationship between decoding, latent space, and data quality.

# VIII. EVALUATION

Evaluation helps identify the strengths and weaknesses of different models and methods. This section lists evaluation methods and benchmarks for Internal Consistency and Self-Feedback. These evaluations mainly cover two types of abilities: meta abilities, such as a model's uncertainty, consistency, and feedback ability; and common abilities, which pertain to solving real-world problems, such as reasoning QA tasks and code generation tasks. Meta evaluation helps us identify which LLMs are more promising for solving complex problems, while common evaluation helps us understand which Self-Feedback methods can better solve problems for a given LLM.

# A. Meta Evaluation

We summarize five meta evaluation methods. These methods can generally be categorized into metric-based and benchmark-based paradigms. The former constructs mathematical formulas or metrics to directly calculate performance on a particular aspect; the latter uses QA datasets to empirically measure performance. We summarize benchmark-based meta evaluations in Table V.
|Type|Benchmark|Organization|
|---|---|---|
|Uncertainty|LLM-Uncertainty-Bench [129]|Tencent|
|Uncertainty|UBench [130]|Nankai|
|Consistency|ConsisEval [131]|PKU|
|Consistency|PopQA-TP [132]|IBM|
|Consistency|ParaRel [133]|BIU|
|Consistency|BMLAMA [134]|RUG|
|Consistency|BECEL [135]|Oxford|
|Critique Ability|CriticBench [136]|THU|
|Self-Knowledge|SelfAware [4]|Fudan|
|Self-Knowledge|Idk (I don't know) [35]|Fudan|
|Self-Knowledge|Self-Knowledge Evaluation [137]|THU|

Metric-Based Uncertainty Evaluation. As mentioned in Section IV-A, uncertainty estimation involves assessing the uncertainty of a model's specific response. Uncertainty evaluation, on the other hand, measures the overall uncertainty of a model. Key metrics for evaluating model uncertainty include: Expected Calibration Error (ECE), which assesses the expected difference between model confidence and accuracy; Maximal Calibration Error (MCE), which indicates the maximum deviation between model accuracy and confidence; and the Brier Score (BS), which assesses how closely the model's predicted probabilities align with the true class probabilities [25].

Benchmark-Based Uncertainty Evaluation. LLM-Uncertainty-Bench [129] extracts five test tasks (question answering, reading comprehension, commonsense inference, dialogue response selection, and document summarization) from common benchmark datasets and uses conformal prediction techniques to construct benchmarks. UBench [130] also extracts data from other datasets, totaling 3,978 multiple-choice questions covering knowledge, language, understanding, and reasoning abilities. UBench evaluates individual data items by having models textually express uncertainty scores.

Benchmark-Based Consistency Evaluation. This line of work centers on assessing whether a model delivers consistent responses to queries that are semantically equivalent but phrased differently. The key focus is on developing a variety of synonymous queries to test the model's reliability. For instance, the ConsisEval benchmark [131] creates simpler synonymous queries for each question. PopQA-TP [132] and ParaRel [133] construct synonymous queries through rephrasing. BMLAMA [134] focuses on multilingual consistency, constructing a parallel corpus of queries. BECEL [135] draws inspiration from behavioral consistency, considering higher-order consistency in model responses by creating semantic consistency data, negational consistency data, symmetric consistency data, etc. Notably, most studies have found that models generally exhibit low consistency.

Benchmark-Based Critique Ability Evaluation. Lin et al. [136] collect a large number of QA pairs from 15 datasets across mathematical, commonsense, symbolic, coding, and algorithmic fields, creating CriticBench through model generation and human annotation. It can be used to evaluate the ability of LLMs to generate critiques, an important aspect of the Self-Feedback framework.

Benchmark-Based Self-Knowledge Evaluation. Self-Knowledge refers to an LLM's understanding and recognition of its own abilities, limitations, and the content it creates. Yin et al. [4] and Cheng et al. [35] construct sets of unanswerable questions to explore the question "Do large language models know what they do not know?"
Tan et al. [137] investigate "Does the model truly understand the questions and solutions it creates?" These studies generally yield negative empirical results, indicating that models have weak Self-Knowledge.

# B. Common Evaluation

Most works utilizing the Self-Feedback framework do not conduct meta evaluations in their experiments, but rather assess the model's ability to solve real-world problems, such as solving math problems, generating code, and summarizing articles. Here, we classify and summarize the popular benchmarks based on the focus of the evaluation tasks, as shown in Table VI.

|Type|Benchmark|Organization|
|---|---|---|
|Knowledge reasoning|C-Eval [138]|SJTU|
|Knowledge reasoning|MMLU [15], [139]|UCB|
|Logic reasoning|BBH [140]|Google|
|Logic reasoning|ARC [141]|AI2|
|Linguistic understanding|WiC [142]|Cambridge|
|Code generation|HumanEval [143]|N/A|
|Math solving|MATH [144]|UCB|
|Math solving|GSM8K [34]|OpenAI|

Currently, these benchmark evaluation formats mainly include multiple-choice questions and text generation, each with its own advantages and disadvantages. For multiple-choice questions, evaluators can accurately calculate metrics such as accuracy by extracting the answer text from the response or by comparing the probabilities of different option tokens during LLM decoding. However, many researchers have pointed out a drawback of this format: LLMs tend to choose the first option, leading to poor self-consistency and failing to reflect the LLM's true ability. Text generation tasks, such as math problems and code generation, mitigate the shortcomings of multiple-choice questions by allowing the LLM to respond freely, better reflecting the model's true ability. However, this evaluation method struggles to precisely quantify the gap between generated text and reference answers in dimensions such as syntax and semantics. It is evident that current benchmarks for common evaluation are still far from accurately measuring the true capabilities of LLMs and have many areas for improvement.

# IX. DOES SELF-FEEDBACK REALLY WORK?

# A. Conflicting Viewpoints

All methods in this survey assume that models can generate proper Feedback Signals to optimize themselves. However, with the emergence of a series of works prefixed with "Self-", questions about their feasibility have also arisen: can models really self-correct, self-refine, etc.?

Jiang et al. [147] propose the SELF-[IN]CORRECT hypothesis. It experimentally verifies, in the context of QA tasks, that the "accuracy of generating initial answers (generative)" is higher than the "accuracy of judging the correctness of its own generated answers (discriminative)". This indicates that models struggle to assess their own content accurately. They argue that models form a definitive belief about a particular choice during generation, which refutes works like Refining with Responses and Refining the Response Iteratively. However, this work can only refute studies in the domain of QA tasks. Moreover, this gap between generative and discriminative ability does not necessarily hold for every model and task, so it cannot be concluded that the various methods cannot achieve self-correction.

Stechly et al. [148] and Valmeekam et al. [149] used similar strategies to test whether GPT-4 possesses the Self-Feedback capability.
Stechly et al. [148] used the Graph Coloring task and found that, in work similar to Refining the Response Iteratively, GPT-4 almost always fails to verify the correctness of its own solutions; they nevertheless acknowledge methods like Self-Consistency, which suggests that the model can select appropriate solutions from multiple options. In contrast, Valmeekam et al. [149] tested similar strategies in the task planning domain, refuting the feasibility of both Refining the Response Iteratively and Multi-Agent Collaboration. Both Stechly et al. [148] and Valmeekam et al. [149] believe that models have difficulty making correct judgments about their own outputs.

The above criticisms often focus on narrow Feedback Signals. In comparison, the criticisms from Huang et al. [96] are more reasonable. Huang et al. [96] specifically refute the effectiveness of three works (Reflexion [52], Multi-Agent Debate [73], and Self-Refine [8]) through reasonable comparisons and comprehensive experiments. Reflexion relies on an external golden truth as the stopping condition for iterative refining during self-correction, which is an unreasonable setup: if there is already a golden truth, there is no need for predictions. For Multi-Agent Debate, the authors' experiments found that this method is significantly inferior to the Self-Consistency strategy and consumes substantial memory. For Self-Refine, the authors found that the prompts used for the initial results and for refining were unfair, and a fairer prompt produced better responses in one go.

# B. Characteristics of Various Works

Due to the different characteristics of various works, Kamoi et al. [24] provide a more comprehensive analysis by constructing a clear classification method and systematically comparing the strengths and weaknesses of each type of work. They suggest that the ability to self-correct should be discussed with respect to the specific task. For example, for tasks with decomposable responses or verifiable tasks, it is feasible for the model to optimize itself.

# C. Proposed Self-Feedback Framework

Benefiting from the perspective of Internal Consistency and the clear boundary discussion in Section I-E, a more meaningful discussion of the proposed Self-Feedback framework can be conducted around two questions: 1. Does Self-Feedback improve Internal Consistency? 2. Does Internal Consistency mean correctness?

# Does Self-Feedback Improve Internal Consistency?

For the first question, the different lines of work summarized in this paper provide affirmative answers from various perspectives:

- Self-Consistency [2] (Reasoning Topologically) improves response consistency by finding the majority vote from the sampling set.
- Self-Improve [46] (Refining with Responses) also uses majority voting to find better responses from the sampling set and then fine-tunes the model itself, improving consistency at the parameter level.
- Multi-Agent Debate [73] (Multi-Agent Collaboration) involves multiple models participating in answering questions simultaneously, essentially a generalized way to obtain consistent responses.
- Self-Refine [8] (Refining the Response Iteratively) iteratively optimizes its responses, which is an implicit way to achieve high-consistency responses.
- Self-Contradict [21] (Mitigating Hallucination while Generating) eliminates self-contradictory content in its responses, naturally improving consistency.
- DoLa [55] (Decoding Truthfully) compares the probability distributions across the model's different latent layers and reduces their discrepancies through contrast, thereby improving the model's consistency.
- ITI [5] (Activating Truthfulness) implicitly optimizes the model's Internal Consistency by identifying attention heads that prefer factual information.

# Does Internal Consistency Mean Correctness?

To answer this question, let us revisit the relationship between world knowledge, the corpus, and language models, as shown in Fig. 9. World knowledge is the consensual (correct) knowledge humans possess. The training corpus used for models is a proper subset of world knowledge, containing the vast majority of correct knowledge and a small portion of uncleanable erroneous knowledge. Language models, by fitting the corpus, acquire higher-order probabilistic representations of this knowledge, and the probabilistic nature makes the learned knowledge vague and non-deterministic, as illustrated by the shaded areas in Fig. 9. Vagueness (or hallucination) is an important characteristic of language models, enabling the generation of novel and creative expressions outside the training corpus distribution. However, from a reliability perspective, vagueness is a disaster, as it makes the model's expressions inconsistent.

Therefore, improving Internal Consistency and eliminating vagueness within the model enhances its confidence in correct knowledge. However, eliminating vagueness also means that the model will be equally confident in erroneous knowledge. This raises a question: does enhancing consistency yield overall benefits or drawbacks? The advantage is that when preprocessing and cleaning the pre-training corpus, the intention is to align it towards world knowledge.

# Consistency Is (Almost) Correctness

Enhancing a language model's internal consistency activates its cognitive certainty, reinforcing both correct and erroneous knowledge. However, because the pre-training corpus is predominantly aligned with correct world knowledge, improving consistency tends to amplify correct content more than incorrect content. Consequently, increased internal consistency generally results in improved overall correctness.

However, why do some opposing voices believe that improving consistency cannot enhance the model's correctness? We believe this is closely related to the testing tasks. Many works refuting Self-Feedback use testing tasks that lie in the shaded areas of Fig. 9 (e.g., unstated puzzles not in the training corpus or questions unsolvable without external knowledge). Models struggle to effectively Self-Evaluate and Self-Update on tasks beyond their generalization capability. In summary, within the model's in-distribution capabilities, the Self-Feedback framework can enhance model consistency by reinforcing the model's fit to corpus priors, thereby eliminating uncertainty and improving consistency. According to the "Consistency Is (Almost) Correctness" hypothesis, this leads to an overall improvement in the model's performance.

# D. Appeals

Currently, there is much confusion in this field: similarly expressed names, rare or unrealistic proposed tasks, different benchmarks used for the same task, and comparisons against different baselines. In short, without clear debate topics, many works contradict each other, resulting in confusing conflicts of views. To avoid these confusions, we propose several appeals.

Naming. When proposing new methods, ensure names are appropriate and avoid conflicts (e.g., Self-Improve [46] vs. Self-Improvement [88]).
Additionally, when classifying one's own work, consider the accuracy of the direction name. For instance, uncertainty estimation and confidence estimation have different directional indicators: the former leans towards mechanistic explanation, while the latter leans towards application.

Task Definition. Some work has begun to notice that reasoning elevation and hallucination alleviation are similar tasks. For example, [150], [151] propose methods that address both tasks. We advocate using Internal Consistency Mining to refer to these two tasks, ensuring terminological standardization.

Reasoning and Hallucination. Regarding the use of these two terms, we suggest that when dealing with tasks like QA, we can say that a method may "lack reasoning ability," whereas for open-ended generation tasks, we can say that it "exhibits hallucination."

Selection of Baselines. This paper summarizes some relatively fixed lines of work. When selecting comparison baselines, first determine the sub-direction of your work and choose important works within that direction as baselines. Comparing across different lines of work can lead to unfair comparisons due to significant methodological differences.

Experiment Settings. Unrealistic experimental task settings will not advance scientific research. For instance, requiring a pre-given golden label to make predictions does not fit real-world needs [96].

Prompt Engineering. As mentioned in Section IX-A, many works exhibit a prompt tuning phenomenon, where adjusting the prompt can reverse experimental results. Therefore, we propose that prompt templates must be disclosed in the paper or source code along with clear usage instructions; that the robustness of prompts be verified, for example by experimenting with various templates; and that the generality of prompts be verified, for example by experimenting with multiple different LLMs.

# X. FUTURE DIRECTIONS AND CHALLENGES

# A. Textual Self-Awareness

Human speech often lacks consistency and certainty in expressing viewpoints. However, we typically use phrases like "I'm not sure, but I think" or "I believe there's an 80% chance" to hedge, demonstrating good self-awareness. Yona et al. [152] showed that current models still cannot verbally and faithfully express their uncertainty. Kapoor et al. [153] found similar issues and showed through experiments that models can achieve good calibration only after fine-tuning. How to enable models to utilize the available Internal Consistency signal to textually express their self-awareness is a promising direction [154].

# B. The Reasoning Paradox

As mentioned in Section II-B, there is a paradox between the reasoning done during single-token prediction (latent reasoning [33]) and the reasoning done using multiple tokens in language (explicit reasoning, e.g., CoT).

# The Paradox of Latent and Explicit Reasoning

Language models excel at latent reasoning when decoding a single token, effectively utilizing attention mechanisms and deep feature interactions to achieve accurate reasoning. However, single tokens cannot answer complex questions. Explicit reasoning, which involves generating a sequence of tokens (e.g., CoT), enhances the model's problem-solving capabilities. Yet lengthy reasoning chains and the inherent noise in text disrupt the model's latent reasoning. Thus, there is a paradox between latent reasoning and explicit reasoning.
Therefore, we need to study the equilibrium point between latent and explicit reasoning, enabling efficient use of reasoning resources and improving the model's reasoning efficiency. Currently, there is little research on this issue.

# C. Dive Deeper

Among the seven lines of work we summarized, many works optimize only at the response layer. However, this approach relies on experience and is highly sensitive to prompt templates. Moreover, the low entry barrier and extensive participation in such work have led to an influx of low-quality papers. Therefore, we encourage researchers to delve into the decoding layer and latent layer, exploring more universal discoveries from an interpretability perspective.

# D. The Unified Perspective

At present, the focus of work in this field is relatively narrow, lacking a comprehensive understanding of the entire field, and consequently no more general framework works have emerged. We believe that using the perspective proposed in this paper, considering problems from the response, decoding, and latent layers in a unified manner, can better facilitate Internal Consistency Mining. There are emerging efforts that begin to integrate multiple layers. For example, Xie et al. [36] start from the response layer and reflect on how different CoT paths guide the consistency of the latent layer; Xie et al. [74] use Self-Evaluation strategies at the response layer to guide better decoding strategies.

# ACKNOWLEDGMENTS

This work was supported by the National Natural Science Foundation of China (Grants No. 62072463, 71531012), the National Social Science Foundation of China (Grant No. 18ZDA309), the Research Seed Funds of the School of Interdisciplinary Studies at Renmin University of China, and the Opening Project of the State Key Laboratory of Digital Publishing Technology of the Founder Group.

# APPENDIX A NOTATIONS

|Symbol|Description|
|---|---|
|x|Query|
|M, N|LLMs|
|e|Expression type, e ∈ {response, decoding, latent}|
|Oe(M, x)|Sampling distribution|
|Y|Sampling set|
|yi|The i-th element in the sampling set|
|y0:i|Elements from 0 to i in the sampling set|
|yt|The t-th token in text y|
|f|Consistency Signal of Self-Feedback|
|P(y\|x; θ)|Probability that the model parameterized by θ outputs y given input x|

# XI. LIMITATIONS

Given the terminological confusion and overlapping lines of work in this field, the sections of this paper are not necessarily orthogonal. Consequently, a single work may fall into multiple categories, indicating that the method employs ideas from different lines of work. The writing design is intended to be inspirational, helping researchers answer: What is the position of their work in a grand system? What are its advantages and disadvantages? What should be the next step in designing new and better methods?

# XII. CONCLUSION

This paper proposes using an Internal Consistency perspective to observe the most prominent phenomena in the field of LLMs: the lack of reasoning and the presence of hallucinations.
The article explains the modeling of Internal Consistency, the hourglass evolution pattern, and its current status, sources, and significance from multiple aspects, and proposes the Self-Feedback framework for Internal Consistency Mining. We summarize the various tasks and distinctive lines of work involved in the Self-Feedback framework, which can help researchers locate their work's position within a vast system and facilitate reasonable experimental comparisons. Finally, we include three critical topics: relevant evaluation methods and benchmarks, exploring whether Self-Feedback truly works, and future research directions. In summary, this paper attempts to use a deeper research perspective (Internal Consistency) and a more general framework (Self-Feedback) to summarize a series of important works on reasoning elevation and hallucination alleviation.

# APPENDIX B THE FIVE RESPONSES OF GPT-4O

Fig. 10, 11, 12, 13, and 14 show five different responses obtained when querying GPT-4o: How many full stops (periods) are there: ".!..!..!". The experiments were conducted on June 14, 2024, accessed via the official website (https://chat.openai.com/).

# GPT-4o Response 1

To determine the number of full stops (periods) in the string ".!..!..!", we need to count the occurrences of the character '.'. Let's break down the string: 1. ".!": 1 full stop 2. "..!": 2 full stops 3. ".!": 1 full stop Summing these up: 1 + 2 + 1 = 4. Therefore, there are 4 full stops in the string ".!..!..!".

Fig. 10. GPT-4o Response 1

# APPENDIX C EXPERIMENT DETAILS OF THREE TYPES OF CONSISTENCY

The setups and results of the comparative experiments on three different types of consistency are shown in Tables VIII, IX, and X. Here, Fix attn15,16 refers to keeping the 16th attention head in the 15th layer unchanged, while Zero out attn15,i≠16 denotes zeroing out all attention heads in the 15th layer except the 16th one. The source code for this experiment is available in our open-source GitHub repository.

TABLE VIII. Selected token under different attention-head intervention settings at the latent layer (fixing a given attention head while zeroing out the others in its layer).

TABLE IX. Selected token under different decoding strategies.

|Decoding Strategy|Selected Token|
|---|---|
|Greedy Decoding|4|
|Beam Search Decoding (beam = 5)|4|
|Sampling Decoding|3|
|Top-k Sampling Decoding (k = 50)|4|
|Top-p Sampling Decoding (p = 0.95)|4|

TABLE X. Five sampled responses at the response layer.

|Response|Content|
|---|---|
|1|There is a period after the first dot, then another period after the second dot, and so on. So, there are 5 periods in total.|
|2|There is one full stop after the first "!", then another after the second "!", and another after the third "!". So, there are 3 full stops in total.|
|3|There is one full stop after the first "!", then another after the second "!", and so on. So, there are 3 full stops.|
|4|There is one period at the end of the first part, then another after the second part, and finally one more after the third part. So, there are 3 periods in total.|
|5|There is 1 period, then another one, and another one... So, there are 3 full stops!|

# REFERENCES

[1] W. X. Zhao, K. Zhou, J. Li, T. Tang, X. Wang, Y. Hou et al., "A survey of large language models," arXiv preprint arXiv:2303.18223, 2023.
[2] X. Wang, J. Wei, D. Schuurmans, Q. V. Le, E. H. Chi, S. Narang et al., "Self-consistency improves chain of thought reasoning in language models," in The Eleventh International Conference on Learning Representations, 2023. [Online]. Available: https://openreview.net/forum?id=1PL1NIMMrw
[3] P. Mondorf and B. Plank, "Beyond accuracy: Evaluating the reasoning behavior of large language models – a survey," arXiv preprint arXiv:2404.01869, 2024.
[4] Z. Yin, Q. Sun, Q. Guo, J. Wu, X. Qiu, and X. Huang, "Do large language models know what they don't know?" in Findings of the Association for Computational Linguistics: ACL 2023, A. Rogers, J. Boyd-Graber, and N. Okazaki, Eds. Toronto, Canada: Association for Computational Linguistics, Jul. 2023, pp. 8653–8665. [Online]. Available: https://aclanthology.org/2023.findings-acl.551
[5] K. Li, O. Patel, F. Viégas, H. Pfister, and M. Wattenberg, "Inference-time intervention: Eliciting truthful answers from a language model," in Thirty-seventh Conference on Neural Information Processing Systems, 2023. [Online]. Available: https://openreview.net/forum?id=aLLuYpn83y
[6] Z. Chen, X. Sun, X. Jiao, F. Lian, Z. Kang, D.
Wang et al., "Truth forest: Toward multi-scale truthfulness in large language models through intervention without tuning," Proceedings of the AAAI Conference on Artificial Intelligence, vol. 38, no. 19, pp. 20967–20974, Mar. 2024. [Online]. Available: https://ojs.aaai.org/index.php/AAAI/article/view/30087
[7] S. Zhang, T. Yu, and Y. Feng, "Truthx: Alleviating hallucinations by editing large language models in truthful space," in Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers).
[8] A. Madaan, N. Tandon, P. Gupta, S. Hallinan, L. Gao, S. Wiegreffe et al., "Self-refine: Iterative refinement with self-feedback," in Thirty-seventh Conference on Neural Information Processing Systems, 2023. [Online]. Available: https://openreview.net/forum?id=S37hOerQLB
[9] S. Welleck, X. Lu, P. West, F. Brahman, T. Shen, D. Khashabi et al., "Generating sequences by learning to self-correct," in The Eleventh International Conference on Learning Representations, 2023. [Online]. Available: https://openreview.net/forum?id=hH36JeQZDaO
[10] J. Wei, X. Wang, D. Schuurmans, M. Bosma, brian ichter, F. Xia et al., "Chain of thought prompting elicits reasoning in large language models," in Advances in Neural Information Processing Systems, A. H. Oh, A. Agarwal, D. Belgrave, and K. Cho, Eds., 2022. [Online]. Available: https://openreview.net/forum?id=_VjQlMeSB_J
[11] Z. Luo, H. Han, H. Zhao, G. Jiang, C. Du, T. Li et al., "Sed: Self-evaluation decoding enhances large language models for better generation," arXiv preprint arXiv:2405.16552, 2024.
[12] M. Nezhurina, L. Cipolina-Kun, M. Cherti, and J. Jitsev, "Alice in wonderland: Simple tasks showing complete reasoning breakdown in state-of-the-art large language models," arXiv preprint arXiv:2406.02061, 2024.
[13] Y. Zhang, S. Mao, T. Ge, X. Wang, A. de Wynter, Y. Xia et al., "Llm as a mastermind: A survey of strategic reasoning with large language models," arXiv preprint arXiv:2404.01230, 2024.
[14] Y. Zhang, Y. Li, L. Cui, D. Cai, L. Liu, T. Fu et al., "Siren's song in the ai ocean: A survey on hallucination in large language models," arXiv preprint arXiv:2309.01219, 2023.
[15] D. Hendrycks, C. Burns, S. Basart, A. Zou, M. Mazeika, D. Song et al., "Measuring massive multitask language understanding," in International Conference on Learning Representations, 2021. [Online]. Available: https://openreview.net/forum?id=d7KBjmI3GmQ
[16] S. Lin, J. Hilton, and O. Evans, "TruthfulQA: Measuring how models mimic human falsehoods," in Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), S. Muresan, P. Nakov, and A. Villavicencio, Eds. Dublin, Ireland: Association for Computational Linguistics, May 2022, pp. 3214–3252. [Online]. Available: https://aclanthology.org/2022.acl-long.229
[17] J. Zhang, X. Wang, W. Ren, L. Jiang, D. Wang, and K. Liu, "RATT: A thought structure for coherent and correct LLM reasoning," arXiv preprint arXiv:2406.02746, 2024.
[18] M. Joshi, E. Choi, D.
Weld, and L. Zettlemoyer, "TriviaQA: A large scale distantly supervised challenge dataset for reading comprehension," in Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), R. Barzilay and M.-Y. Kan, Eds. Vancouver, Canada: Association for Computational Linguistics, Jul. 2017, pp. 1601–1611. [Online]. Available: https://aclanthology.org/P17-1147
[19] J. Kaplan, S. McCandlish, T. Henighan, T. B. Brown, B. Chess, R. Child et al., "Scaling laws for neural language models," arXiv preprint arXiv:2001.08361, 2020.
[20] D. Hillier, L. Guertler, C. Tan, P. Agrawal, C. Ruirui, and B. Cheng, "Super tiny language models," arXiv preprint arXiv:2405.14159, 2024.
[21] N. Mündler, J. He, S. Jenko, and M. Vechev, "Self-contradictory hallucinations of large language models: Evaluation, detection and mitigation," in The Twelfth International Conference on Learning Representations, 2024. [Online]. Available: https://openreview.net/forum?id=EmQSOi1X2f
[22] Z. Tao, T.-E. Lin, X. Chen, H. Li, Y. Wu, Y. Li et al., "A survey on self-evolution of large language models," arXiv preprint arXiv:2404.14387, 2024.
[23] L. Pan, M. Saxon, W. Xu, D. Nathani, X. Wang, and W. Y. Wang, "Automatically correcting large language models: Surveying the landscape of diverse automated correction strategies," Transactions of the Association for Computational Linguistics, vol. 11, pp. 484–506, 2024. [Online]. Available: https://aclanthology.org/2024.tacl-1.27
[24] R. Kamoi, Y. Zhang, N. Zhang, J. Han, and R. Zhang, "When can llms actually correct their own mistakes? A critical survey of self-correction of llms," arXiv preprint arXiv:2406.01297, 2024.
[25] M. Hu, Z. Zhang, S. Zhao, M. Huang, and B. Wu, "Uncertainty in natural language processing: Sources, quantification, and applications," arXiv preprint arXiv:2306.04459, 2023.
[26] M. Besta, F. Memedi, Z. Zhang, R. Gerstenberger, G. Piao, N. Blach et al., "Demystifying chains, trees, and graphs of thoughts," arXiv preprint arXiv:2401.14295, 2024.
[27] X. Xu, M. Li, C. Tao, T. Shen, R. Cheng, J. Li et al., "A survey on knowledge distillation of large language models," arXiv preprint arXiv:2402.13116, 2024.
[28] R. Jiang, K. Chen, X. Bai, Z. He, J. Li, M. Yang et al., "A survey on human preference learning for large language models," arXiv preprint arXiv:2406.11191, 2024.
[29] T. Schick and H. Schütze, "Generating datasets with pretrained language models," in Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, M.-F. Moens, X. Huang, L. Specia, and S. W.-t. Yih, Eds. Online and Punta Cana, Dominican Republic: Association for Computational Linguistics, Nov. 2021, pp. 6943–6951. [Online]. Available: https://aclanthology.org/2021.emnlp-main.555
[30] Y. Gao, Y. Xiong, X. Gao, K. Jia, J. Pan, Y. Bi et al., "Retrieval-augmented generation for large language models: A survey," arXiv preprint arXiv:2312.10997, 2023.
[31] A. Tarski, Introduction to Logic: And to the Methodology of Deductive Sciences. Oxford University Press, 1941.
[32] J. Sun, C. Shaib, and B. C. Wallace, "Evaluating the zero-shot robustness of instruction-tuned language models," in The Twelfth International Conference on Learning Representations, 2024. [Online]. Available: https://openreview.net/forum?id=g9diuvxN6D
[33] S. Yang, E. Gribovskaya, N. Kassner, M. Geva, and S. Riedel, "Do large language models latently perform multi-hop reasoning?" arXiv preprint arXiv:2402.16837, 2024.
[34] K. Cobbe, V. Kosaraju, M.
Bavarian, M. Chen, H. Jun, L. Kaiser et al., "Training verifiers to solve math word problems," arXiv preprint arXiv:2110.14168, 2021.
[35] Q. Cheng, T. Sun, X. Liu, W. Zhang, Z. Yin, S. Li et al., "Can ai assistants know what they don't know?" arXiv preprint arXiv:2401.13275, 2024.
[36] Z. Xie, J. Guo, T. Yu, and S. Li, "Calibrating reasoning in language models with internal consistency," arXiv preprint arXiv:2405.18711, 2024.
[37] N. F. Liu, K. Lin, J. Hewitt, A. Paranjape, M. Bevilacqua, F. Petroni et al., "Lost in the middle: How language models use long contexts," Transactions of the Association for Computational Linguistics, vol. 12, pp. 157–173, 2024.
[38] B. Liu, J. T. Ash, S. Goel, A. Krishnamurthy, and C. Zhang, "Exposing attention glitches with flip-flop language modeling," in Thirty-seventh Conference on Neural Information Processing Systems, 2023. [Online]. Available: https://openreview.net/forum?id=VzmpXQAn6E
[39] M. Zhang, O. Press, W. Merrill, A. Liu, and N. A. Smith, "How language model hallucinations can snowball," arXiv preprint arXiv:2305.13534, 2023.
[40] E. M. Bender, T. Gebru, A. McMillan-Major, and S. Shmitchell, "On the dangers of stochastic parrots: Can language models be too big?" in Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency, 2021, pp. 610–623.
[41] Y. Ma, D. Tsao, and H.-Y. Shum, "On the principles of parsimony and self-consistency for the emergence of intelligence," Frontiers of Information Technology & Electronic Engineering, vol. 23, no. 9, pp. 1298–1323, 2022.
[42] S. Kadavath, T. Conerly, A. Askell, T. Henighan, D. Drain, E. Perez et al., "Language models (mostly) know what they know," arXiv preprint arXiv:2207.05221, 2022.
[43] S. Han, Q. Zhang, Y. Yao, W. Jin, Z. Xu, and C. He, "Llm multi-agent systems: Challenges and open problems," arXiv preprint arXiv:2402.03578, 2024.
[44] S. Yao, D. Yu, J. Zhao, I. Shafran, T. L. Griffiths, Y. Cao et al., "Tree of thoughts: Deliberate problem solving with large language models," in Thirty-seventh Conference on Neural Information Processing Systems, 2023. [Online]. Available: https://openreview.net/forum?id=5Xc1ecxO1h
[45] M. Besta, N. Blach, A. Kubicek, R. Gerstenberger, M. Podstawski, L. Gianinazzi et al., "Graph of thoughts: Solving elaborate problems with large language models," Proceedings of the AAAI Conference on Artificial Intelligence, vol. 38, no. 16, pp. 17682–17690, Mar. 2024. [Online]. Available: https://ojs.aaai.org/index.php/AAAI/article/view/29720
[46] J. Huang, S. Gu, L. Hou, Y. Wu, X. Wang, H. Yu et al., "Large language models can self-improve," in Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing, H. Bouamor, J. Pino, and K. Bali, Eds. Singapore: Association for Computational Linguistics, Dec. 2023, pp. 1051–1068. [Online]. Available: https://aclanthology.org/2023.emnlp-main.67
[47] E. Mitchell, J. Noh, S. Li, W. Armstrong, A. Agarwal, P. Liu et al., "Enhancing self-consistency and performance of pre-trained language models through natural language inference," in Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing, Y. Goldberg, Z. Kozareva, and Y. Zhang, Eds. Abu Dhabi, United Arab Emirates: Association for Computational Linguistics, Dec. 2022, pp. 1754–1768. Link
[48] S. An, Z. Ma, Z. Lin, N. Zheng, J.-G. Lou, and W.
Chen, “Learning from mistakes makes llm better reasoner,” arXiv preprint arXiv:2310.20689, 2023.[49] K. Xiong, X. Ding, Y. Cao, T. Liu, and B. Qin, “Examining inter-consistency of large language models collaboration: An in-depth analysis via debate,” in Findings of the Association for Computational Linguistics: EMNLP 2023, H. Bouamor, J. Pino, and K. Bali, Eds. Singapore: Association for Computational Linguistics, Dec. 2023, pp. 7572–7590. Link[50] C. Qian, Z. Xie, Y. Wang, W. Liu, Y. Dang, Z. Du et al., “Scaling large-language-model-based multi-agent collaboration,” arXiv preprint arXiv:2406.07155, 2024.[51] D. Paul, M. Ismayilzada, M. Peyrard, B. Borges, A. Bosselut, R. West et al., “REFINER: Reasoning feedback on intermediate representations,” in Proceedings of the 18th Conference of the European Chapter of the Association for Computational Linguistics (Volume 1: Long Papers), Y. Graham and M. Purver, Eds. St. Julian’s, Malta: Association for Computational Linguistics, Mar. 2024, pp. 1100–1126. Link[52] N. Shinn, F. Cassano, A. Gopinath, K. Narasimhan, and S. Yao, “Reflexion: language agents with verbal reinforcement learning,” in Advances in Neural Information Processing Systems, A. Oh, T. Naumann, A. Globerson, K. Saenko, M. Hardt, and S. Levine, Eds., vol. 36. Curran Associates, Inc., 2023, pp. 8634–8652. Link[53] H. Kang, J. Ni, and H. Yao, “Ever: Mitigating hallucination in large language models through real-time verification and rectification,” arXiv preprint arXiv:2311.09114, 2023.[54] A. Mishra, A. Asai, V. Balachandran, Y. Wang, G. Neubig, Y. Tsvetkov et al., “Fine-grained hallucination detection and editing for language models,” arXiv preprint arXiv:2401.06855, 2024.[55] Y.-S. Chuang, Y. Xie, H. Luo, Y. Kim, J. R. Glass, and P. He, “Dola: Decoding by contrasting layers improves factuality in large language models,” in The Twelfth International Conference on Learning Representations, 2024. Link[56] W. Shi, X. Han, M. Lewis, Y. Tsvetkov, L. Zettlemoyer, and S. W.- t. Yih, “Trusting your evidence: Hallucinate less with context-aware decoding,” arXiv preprint arXiv:2305.14739, 2023.[57] J. Lu, C. Wang, and J. Zhang, “Diver: Large language model decoding with span-level mutual information verification,” arXiv preprint arXiv:2406.02120, 2024.[58] Y. Xiao and W. Y. Wang, “On hallucination and predictive uncertainty in conditional language generation,” in Proceedings of the 16th Conference of the European Chapter of the Association for Computational Linguistics: Main Volume, P. Merlo, J. Tiedemann, and R. Tsarfaty, Eds. Online: Association for Computational Linguistics, Apr. 2021, pp. 2734–2744. Link[59] Z. Lin, S. Trivedi, and J. Sun, “Generating with confidence: Uncertainty quantification for black-box large language models,” Transactions on Machine Learning Research, 2024. Link[60] Y. A. Yadkori, I. Kuzborskij, A. Gy¨orgy, and C. Szepesv´ari, “To believe or not to believe your llm,” arXiv preprint arXiv:2406.02543, 2024.[61] D. Deng, G. Chen, Y. Yu, F. Liu, and P.-A. Heng, “Uncertainty estimation by fisher information-based evidential deep learning,” in International Conference on Machine Learning. PMLR, 2023, pp. 7596–7616.[62] Y. Gal and Z. Ghahramani, “Dropout as a bayesian approximation: Representing model uncertainty in deep learning,” in Proceedings of The 33rd International Conference on Machine Learning, ser. Proceedings of Machine Learning Research, M. F. Balcan and K. Q. Weinberger, Eds., vol. 48. New York, New York, USA: PMLR, 20–22 Jun 2016, pp. 1050–1059. Link[63] S. 
Diao, P. Wang, Y. Lin, and T. Zhang, “Active prompting with chain-of-thought for large language models,” arXiv preprint arXiv:2302.12246, 2023.[64] J. Chen and J. Mueller, “Quantifying uncertainty in answers from any language model and enhancing their trustworthiness,” arXiv preprint arXiv:2308.16175, 2023.[65] D. Zheng, D. Liu, M. Lapata, and J. Z. Pan, “Trustscore: Reference-free evaluation of llm response trustworthiness,” arXiv preprint arXiv:2402.12545, 2024.[66] P. Manakul, A. Liusie, and M. Gales, “SelfCheckGPT: Zero-resource black-box hallucination detection for generative large language models,” in Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing, H. Bouamor, J. Pino, and K. Bali, Eds. Singapore: Association for Computational Linguistics, Dec. 2023, pp. 9004–9017. Link[67] C. Chen, K. Liu, Z. Chen, Y. Gu, Y. Wu, M. Tao et al., “INSIDE: LLMs’ internal states retain the power of hallucination detection,” in The Twelfth International Conference on Learning Representations, 2024. Link[68] R. Cohen, M. Hamri, M. Geva, and A. Globerson, “LM vs LM: Detecting factual errors via cross examination,” in Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing, H. Bouamor, J. Pino, and K. Bali, Eds. Singapore: Association for Computational Linguistics, Dec. 2023, pp. 12 621–12 640. Link[69] W. Yuan, G. Neubig, and P. Liu, “BARTScore: Evaluating generated text as text generation,” in Advances in Neural Information Processing Systems, A. Beygelzimer, Y. Dauphin, P. Liang, and J. W. Vaughan, Eds., 2021. Link[70] J. Fu, S.-K. Ng, Z. Jiang, and P. Liu, “Gptscore: Evaluate as you desire,” arXiv preprint arXiv:2302.04166, 2023.[71] W. Saunders, C. Yeh, J. Wu, S. Bills, L. Ouyang, J. Ward et al., “Self-critiquing models for assisting human evaluators,” arXiv preprint arXiv:2206.05802, 2022.[72] N. McAleese, R. M. Pokorny, J. F. C. Uribe, E. Nitishinskaya, M. Trebacz, and J. Leike, “Llm critics help catch llm bugs,” arXiv preprint arXiv:2407.00215, 2024.[73] Y. Du, S. Li, A. Torralba, J. B. Tenenbaum, and I. Mordatch, “Improving factuality and reasoning in language models through multiagent debate,” arXiv preprint arXiv:2305.14325, 2023.[74] Y. Xie, K. Kawaguchi, Y. Zhao, X. Zhao, M.-Y. Kan, J. He et al., “Self-evaluation guided beam search for reasoning,” in Thirty-seventh Conference on Neural Information Processing Systems, 2023. Link[75] H. Wang, A. Prasad, E. Stengel-Eskin, and M. Bansal, “Soft self-consistency improves language model agents,” arXiv preprint arXiv:2402.13212, 2024.[76] X. Chen, M. Lin, N. Sch¨arli, and D. Zhou, “Teaching large language models to self-debug,” in The Twelfth International Conference on Learning Representations, 2024. Link[77] H. Chen, A. Saha, S. Hoi, and S. Joty, “Personalized distillation: Empowering open-sourced LLMs with adaptive learning for code generation,” in The 2023 Conference on Empirical Methods in Natural Language Processing, 2023. Link[78] J. Jung, L. Qin, S. Welleck, F. Brahman, C. Bhagavatula, R. Le Bras et al., “Maieutic prompting: Logically consistent reasoning with recursive explanations,” in Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing, Y. Goldberg, Z. Kozareva, and Y. Zhang, Eds. Abu Dhabi, United Arab Emirates: Association for Computational Linguistics, Dec. 2022, pp. 1266–1279. Link[79] R. Battiti, Maximum satisfiability problemMaximum Satisfiability Problem. Boston, MA: Springer US, 2009, pp. 2035–2041. Link[80] B. Huang, S. Lu, W. 
Chen, X. Wan, and N. Duan, “Enhancing large language models in coding through multi-perspective self-consistency,” arXiv preprint arXiv:2309.17272, 2023.[81] X. Chen, R. Aksitov, U. Alon, J. Ren, K. Xiao, P. Yin et al., “Universal self-consistency for large language model generation,” arXiv preprint arXiv:2311.17311, 2023. --- # JOURNAL OF LATEX CLASS FILES, VOL. 14, NO. 8, AUGUST 2021 [82] E. Zelikman, G. Harik, Y. Shao, V. Jayasiri, N. Haber, and N. D. Goodman, “Quiet-star: Language models can teach themselves to think before speaking,” arXiv preprint arXiv:2403.09629, 2024.[83] Y. Li, Z. Lin, S. Zhang, Q. Fu, B. Chen, J.-G. Lou et al., “Making language models better reasoners with step-aware verifier,” in Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), A. Rogers, J. Boyd-Graber, and N. Okazaki, Eds. Toronto, Canada: Association for Computational Linguistics, Jul. 2023, pp. 5315–5333. [Online]. Available: https://aclanthology.org/2023.acl-long.291[84] C. Fernando, D. Banarse, H. Michalewski, S. Osindero, and T. Rockt¨aschel, “Promptbreeder: Self-referential self-improvement via prompt evolution,” arXiv preprint arXiv:2309.16797, 2023.[85] I. Harvey, “The microbial genetic algorithm,” in Advances in Artificial Life. Darwin Meets von Neumann, G. Kampis, I. Karsai, and E. Sza-thm´ary, Eds. Berlin, Heidelberg: Springer Berlin Heidelberg, 2011, pp. 126–133.[86] O. Khattab, A. Singhvi, P. Maheshwari, Z. Zhang, K. Santhanam, S. V. A et al., “DSPy: Compiling declarative language model calls into state-of-the-art pipelines,” in The Twelfth International Conference on Learning Representations, 2024. [Online]. Available: https://openreview.net/forum?id=sY5N0zY5Od[87] X. Wang and D. Zhou, “Chain-of-thought reasoning without prompting,” arXiv preprint arXiv:2402.10200, 2024.[88] Y. Tian, B. Peng, L. Song, L. Jin, D. Yu, H. Mi et al., “Toward self-improvement of llms via imagination, searching, and criticizing,” arXiv preprint arXiv:2404.12253, 2024.[89] A. Patel, M. Hofmarcher, C. Leoveanu-Condrei, M.-C. Dinu, C. Callison-Burch, and S. Hochreiter, “Large language models can self-improve at web agent tasks,” arXiv preprint arXiv:2405.20309, 2024.[90] E. Choi, A. Ahmadian, M. Geist, O. Pietquin, and M. G. Azar, “Self-improving robust preference optimization,” arXiv preprint arXiv:2406.01660, 2024.[91] Z. Sun, Y. Shen, Q. Zhou, H. Zhang, Z. Chen, D. Cox et al., “Principle-driven self-alignment of language models from scratch with minimal human supervision,” in Neural Information Processing Systems, A. Oh, T. Naumann, A. Globerson, K. Saenko, M. Hardt, and S. Levine, Eds., vol. 36. Curran Associates, Inc., 2023, pp. 2511–2565. [Online]. Available: https://proceedings.neurips.cc/paper files/paper/2023/file/0764db1151b936aca59249e2c1386101-Paper-Conference.pdf[92] Y. Tong, D. Li, S. Wang, Y. Wang, F. Teng, and J. Shang, “Can llms learn from previous mistakes? investigating llms’ errors to boost for reasoning,” arXiv preprint arXiv:2403.20046, 2024.[93] A. Agarwal, A. Tzen, and C. Tew, “Improving logical consistency in pre-trained language models using natural language inference,” Stanford CS224N Custom Project, 2022. [Online]. Available: https://web.stanford.edu/class/archive/cs/cs224n/cs224n.1224/reports/custom 116994635.pdf[94] A. Ignatiev, A. Morgado, and J. Marques-Silva, “RC2: An Efficient MaxSAT Solver,” Journal on Satisfiability, Boolean Modeling and Computation, vol. 11, pp. 53–64, January 2019, published: 1 Jan. 2019.[95] L. Wang, C. 
Ma, X. Feng, Z. Zhang, H. Yang, J. Zhang et al., “A survey on large language model based autonomous agents,” vol. 18, no. 6, p. 186345. [Online]. Available: https://doi.org/10.1007/s11704-024-40231-1[96] J. Huang, X. Chen, S. Mishra, H. S. Zheng, A. W. Yu, X. Song et al., “Large language models cannot self-correct reasoning yet,” in The Twelfth International Conference on Learning Representations, 2024. [Online]. Available: https://openreview.net/forum?id=IkmD3fKBPQ[97] A. P. Jacob, Y. Shen, G. Farina, and J. Andreas, “The consensus game: Language model generation via equilibrium search,” in The Twelfth International Conference on Learning Representations, 2024. [Online]. Available: https://openreview.net/forum?id=n9xeGcI4Yg[98] X. Liang, S. Song, S. Niu, Z. Li, F. Xiong, B. Tang et al., “Uhgeval: Benchmarking the hallucination of chinese large language models via unconstrained generation,” arXiv preprint arXiv:2311.15296, 2023.[99] K. Yang, Y. Tian, N. Peng, and D. Klein, “Re3: Generating longer stories with recursive reprompting and revision,” in Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing, Y. Goldberg, Z. Kozareva, and Y. Zhang, Eds. Abu Dhabi, United Arab Emirates: Association for Computational Linguistics, Dec. 2022, pp. 4393–4479. [Online]. Available: https://aclanthology.org/2022.emnlp-main.296[100] T. Schick, J. A. Yu, Z. Jiang, F. Petroni, P. Lewis, G. Izacard et al., “PEER: A collaborative language model,” in The Eleventh International Conference on Learning Representations, 2023. [Online]. Available: https://openreview.net/forum?id=ETKGuby0hcs [101] A. Chen, P. Pasupat, S. Singh, H. Lee, and K. Guu, “Purr: Efficiently editing language model hallucinations by denoising language model corruptions,” arXiv preprint arXiv:2305.14908, 2023.[102] L. Gao, Z. Dai, P. Pasupat, A. Chen, A. T. Chaganty, Y. Fan et al., “RARR: Researching and revising what language models say, using language models,” in Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), A. Rogers, J. Boyd-Graber, and N. Okazaki, Eds. Toronto, Canada: Association for Computational Linguistics, Jul. 2023, pp. 16 477–16 508. [Online]. Available: https://aclanthology.org/2023.acl-long.910[103] X. L. Li, A. Holtzman, D. Fried, P. Liang, J. Eisner, T. Hashimoto et al., “Contrastive decoding: Open-ended text generation as optimization,” arXiv preprint arXiv:2210.15097, 2022.[104] Z. Zhao, E. Monti, J. Lehmann, and H. Assem, “Enhancing contextual understanding in large language models through contrastive decoding,” arXiv preprint arXiv:2405.02750, 2024.[105] J. Zhu, S. Liu, Y. Yu, B. Tang, Y. Yan, Z. Li et al., “Fastmem: Fast memorization of prompt improves context awareness of large language models,” arXiv preprint arXiv:2406.16069, 2024.[106] C. Burns, H. Ye, D. Klein, and J. Steinhardt, “Discovering latent knowledge in language models without supervision,” in The Eleventh International Conference on Learning Representations, 2023. [Online]. Available: https://openreview.net/forum?id=ETKGuby0hcs[107] W. Wu, Y. Wang, G. Xiao, H. Peng, and Y. Fu, “Retrieval head mechanistically explains long-context factuality,” arXiv preprint arXiv:2404.15574, 2024.[108] A. Templeton, T. Conerly, J. Marcus, J. Lindsey, T. Bricken, B. Chen et al., “Scaling monosemanticity: Extracting interpretable features from claude 3 sonnet,” Transformer Circuits Thread, 2024. [Online]. 
Available: https://transformer-circuits.pub/2024/scaling-monosemanticity/index.html[109] L. Gao, T. D. la Tour, H. Tillman, G. Goh, R. Troll, A. Radford et al., “Scaling and evaluating sparse autoencoders,” arXiv preprint arXiv:2406.04093, 2024.[110] K. Tian, E. Mitchell, H. Yao, C. D. Manning, and C. Finn, “Fine-tuning language models for factuality,” arXiv preprint arXiv:2311.08401, 2023.[111] J. Schulman, F. Wolski, P. Dhariwal, A. Radford, and O. Klimov, “Proximal policy optimization algorithms,” arXiv preprint arXiv:1707.06347, 2017.[112] R. Ermon, and C. Finn, “Direct preference optimization: YourS.Rafailov, A. Sharma, E. Mitchell, C. D. Manning, language model is secretly a reward model,” in Advances in Neural Information Processing Systems, A. Oh, T. Naumann, A. Globerson, K. Saenko, M. Hardt, and S. Levine, Eds., vol. 36. Curran Associates, Inc., 2023, pp. 53 728–53 741. [Online]. Available: https://proceedings.neurips.cc/paper files/paper/2023/file/a85b405ed65c6477a4fe8302b5e06ce7-Paper-Conference.pdf[113] A. K¨opf, Y. Kilcher, D. von R¨utte, S. Anagnostidis, Z. R. Tam, K. Stevens et al., “Openassistant conversations - democratizing large language model alignment,” in Advances in Neural Information Processing Systems, A. Oh, T. Naumann, A. Globerson, K. Saenko, M. Hardt, and S. Levine, Eds., vol. 36. Curran Associates, Inc., 2023, pp. 47 669–47 681. [Online]. Available: https://proceedings.neurips.cc/paper files/paper/2023/file/4dbb61cb68671edc4ca3712d70083b9f-Paper-Datasets and Benchmarks.pdf[114] J. Ji, M. Liu, J. Dai, X. Pan, C. Zhang, C. Bian et al., “Beavertails: Towards improved safety alignment of llm via a human-preference dataset,” in Advances in Neural Information Processing Systems, A. Oh, T. Naumann, A. Globerson, K. Saenko, M. Hardt, and S. Levine, Eds., vol. 36. Curran Associates, Inc., 2023, pp. 24 678–24 704. [Online]. Available: https://proceedings.neurips.cc/paper files/paper/2023/file/949f0f8f32267d297c2d4e3ee10a2e7e-Paper-Datasets and Benchmarks.pdf[115] Y. Bai, S. Kadavath, S. Kundu, A. Askell, J. Kernion, A. Jones et al., “Constitutional ai: Harmlessness from ai feedback,” arXiv preprint arXiv:2212.08073, 2022.[116] Z. Sun, Y. Shen, H. Zhang, Q. Zhou, Z. Chen, D. D. Cox et al., “SALMON: Self-alignment with instructable reward models,” in The Twelfth International Conference on Learning Representations, 2024. [Online]. Available: https://openreview.net/forum?id=xJbsmB8UMx[117] Y. Bai, S. Kadavath, S. Kundu, A. Askell, J. Kernion, A. Jones et al., “Constitutional ai: Harmlessness from ai feedback,” arXiv preprint arXiv:2212.08073, 2022. --- # JOURNAL OF LATEX CLASS FILES, VOL. 14, NO. 8, AUGUST 2021 |Reference|Authors|Title|Publication Details| |---|---|---|---| |118|K. Ethayarajh, Y. Choi, and S. Swayamdipta|Understanding dataset difficulty with V-usable information|In Proceedings of the 39th International Conference on Machine Learning, ser. Proceedings of Machine Learning Research, K. Chaudhuri, S. Jegelka, L. Song, C. Szepesvari, G. Niu, and S. Sabato, Eds., vol. 162. PMLR, 17–23 Jul 2022, pp. 5988–6008. [Online]. Available: Link| |119|S. Kim, S. Bae, J. Shin, S. Kang, D. Kwak, K. M. Yoo et al.|Aligning large language models through synthetic feedback|arXiv preprint arXiv:2305.13735, 2023.| |120|L. Ouyang, J. Wu, X. Jiang, D. Almeida, C. Wainwright, P. Mishkin et al.|Training language models to follow instructions with human feedback|Advances in neural information processing systems, vol. 35, pp. 27 730–27 744, 2022.| |121|Y. Xu, X. Liu, X. Liu, Z. Hou, Y. 
Li, X. Zhang et al.|Chatglm-math: Improving math problem-solving in large language models with a self-critique pipeline|arXiv preprint arXiv:2404.02893, 2024.| |122|Y. Gu, L. Dong, F. Wei, and M. Huang|MiniLLM: Knowledge distillation of large language models|In The Twelfth International Conference on Learning Representations, 2024. [Online]. Available: Link| |123|R. Agarwal, N. Vieillard, Y. Zhou, P. Stanczyk, S. R. Garea, M. Geist et al.|On-policy distillation of language models: Learning from self-generated mistakes|In The Twelfth International Conference on Learning Representations, 2024. [Online]. Available: Link| |124|S. Ye, Y. Jo, D. Kim, S. Kim, H. Hwang, and M. Seo|Selfee: Iterative self-revising llm empowered by self-feedback generation|Blog post, 2023.| |125|J. Jung, P. West, L. Jiang, F. Brahman, X. Lu, J. Fisher et al.|Impossible distillation: from low-quality model to high-quality dataset & model for summarization and paraphrasing|arXiv preprint arXiv:2305.16635, 2023.| |126|Y. Wang, Y. Kordi, S. Mishra, A. Liu, N. A. Smith, D. Khashabi et al.|Self-instruct: Aligning language models with self-generated instructions|In Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), A. Rogers, J. Boyd-Graber, and N. Okazaki, Eds. Toronto, Canada: Association for Computational Linguistics, Jul. 2023, pp. 13 484–13 508. [Online]. Available: Link| |127|M. Sclar, Y. Choi, Y. Tsvetkov, and A. Suhr|Quantifying language models’ sensitivity to spurious features in prompt design or: How i learned to start worrying about prompt formatting|arXiv preprint arXiv:2310.11324, 2023.| |128|Q. Yu, Z. Zheng, S. Song, Z. Li, F. Xiong, B. Tang et al.|xfinder: Robust and pinpoint answer extraction for large language models|arXiv preprint arXiv:2405.11874, 2024.| |129|F. Ye, M. Yang, J. Pang, L. Wang, D. F. Wong, E. Yilmaz et al.|Benchmarking llms via uncertainty quantification|arXiv preprint arXiv:2401.12794, 2024.| |130|X. Wang, Z. Zhang, Q. Li, G. Chen, M. Hu, B. Luo et al.|Ubench: Benchmarking uncertainty in large language models with multiple choice questions|arXiv preprint arXiv:2406.12784, 2024.| |131|Z. Yang, Y. Zhang, T. Liu, J. Yang, J. Lin, C. Zhou et al.|Can large language models always solve easy problems if they can solve harder ones?|arXiv preprint arXiv:2406.12809, 2024.| |132|E. Rabinovich, S. Ackerman, O. Raz, E. Farchi, and A. Anaby Tavor|Predicting question-answering performance of large language models through semantic consistency|In Proceedings of the Third Workshop on Natural Language Generation, Evaluation, and Metrics (GEM), S. Gehrmann, A. Wang, J. Sedoc, E. Clark, K. Dhole, K. R. Chandu et al., Eds. Singapore: Association for Computational Linguistics, Dec. 2023, pp. 138–154. [Online]. Available: Link| |133|Y. Elazar, N. Kassner, S. Ravfogel, A. Ravichander, E. Hovy, H. Sch¨utze et al.|Measuring and improving consistency in pretrained language models|Transactions of the Association for Computational Linguistics, vol. 9, pp. 1012–1031, 2021. [Online]. Available: Link| |134|J. Qi, R. Fern´andez, and A. Bisazza|Cross-lingual consistency of factual knowledge in multilingual language models|In The 2023 Conference on Empirical Methods in Natural Language Processing, 2023. [Online]. Available: Link| |135|M. Jang, D. S. Kwon, and T. Lukasiewicz|BECEL: Benchmark for consistency evaluation of language models|In Proceedings of the 29th International Conference on Computational Linguistics, N. Calzolari, C.-R. Huang, H. Kim, J. 
Pustejovsky, L. Wanner, K.-S. Choi et al., Eds. Gyeongju, Republic of Korea: International Committee on Computational Linguistics, Oct. 2022, pp. 3680–3696. [Online]. Available: Link| |136|Z. Lin, Z. Gou, T. Liang, R. Luo, H. Liu, and Y. Yang|Criticbench: Benchmarking llms for critique-correct reasoning|arXiv preprint arXiv:2402.14809, 2024.| |137|Z. Tan, L. Wei, J. Wang, X. Xie, and W. Huang|Can i understand what i create? self-knowledge evaluation of large language models|arXiv preprint arXiv:2406.06140, 2024.| |138|Y. Huang, Y. Bai, Z. Zhu, J. Zhang, J. Zhang, T. Su et al.|C-eval: A multi-level multi-discipline chinese evaluation suite for foundation models|Advances in Neural Information Processing Systems, vol. 36, 2024.| |139|D. Hendrycks, C. Burns, S. Basart, A. Critch, J. Li, D. Song et al.|Aligning ai with shared human values|Proceedings of the International Conference on Learning Representations (ICLR), 2021.| |140|M. Suzgun, N. Scales, N. Sch¨arli, S. Gehrmann, Y. Tay, H. W. Chung et al.|Challenging big-bench tasks and whether chain-of-thought can solve them|arXiv preprint arXiv:2210.09261, 2022.| |141|P. Clark, I. Cowhey, O. Etzioni, T. Khot, A. Sabharwal, C. Schoenick et al.|Think you have solved question answering? try arc, the ai2 reasoning challenge|arXiv preprint arXiv:1803.05457, 2018.| |142|M. T. Pilehvar and J. Camacho-Collados|Wic: the word-in-context dataset for evaluating context-sensitive meaning representations|Proceedings of NAACL 2019 (short), 2019.| |143|M. Chen, J. Tworek, H. Jun, Q. Yuan, H. P. de Oliveira Pinto, J. Kaplan et al.|Evaluating large language models trained on code|2021.| |144|D. Hendrycks, C. Burns, S. Kadavath, A. Arora, S. Basart, E. Tang et al.|Measuring mathematical problem solving with the math dataset|NeurIPS, 2021.| |145|X. Wang, B. Ma, C. Hu, L. Weber-Genzel, P. R¨ottger, F. Kreuter et al.|" my answer is c": First-token probabilities do not match text answers in instruction-tuned language models|arXiv preprint arXiv:2402.14499, 2024.| |146|H. Wang, S. Zhao, Z. Qiang, B. Qin, and T. Liu|Beyond the answers: Reviewing the rationality of multiple choice question answering for the evaluation of large language models|arXiv preprint arXiv:2402.01349, 2024.| |147|D. Jiang, J. Zhang, O. Weller, N. Weir, B. Van Durme, and D. Khashabi|Self-[in] correct: Llms struggle with refining self-generated responses|arXiv preprint arXiv:2404.04298, 2024.| |148|K. Stechly, M. Marquez, and S. Kambhampati|GPT-4 doesn’t know it’s wrong: An analysis of iterative prompting for reasoning problems|In NeurIPS 2023 Foundation Models for Decision Making Workshop, 2023. [Online]. Available: Link| |149|K. Valmeekam, M. Marquez, and S. Kambhampati|Investigating the effectiveness of self-critiquing in LLMs solving planning tasks|In NeurIPS 2023 Foundation Models for Decision Making Workshop, 2023. [Online]. Available: Link| |150|D. Nathani, D. Wang, L. Pan, and W. Y. Wang|MAF: Multi-aspect feedback for improving reasoning in large language models|In The 2023 Conference on Empirical Methods in Natural Language Processing, 2023. [Online]. Available: Link| |151|W. Xu, G. Zhu, X. Zhao, L. Pan, L. Li, and W. Y. Wang|Pride and prejudice: Llm amplifies self-bias in self-refinement|arXiv preprint arXiv:2402.11436, 2024.| |152|G. Yona, R. Aharoni, and M. Geva|Can large language models faithfully express their intrinsic uncertainty in words?|arXiv preprint arXiv:2405.16908, 2024.| |153|S. Kapoor, N. Gruver, M. Roberts, K. Collins, A. Pal, U. 
---

# Xun Liang

Xun Liang (Senior Member, IEEE) received the B.Sc. and Ph.D. degrees in computer engineering from Tsinghua University, Beijing, China, in 1989 and 1993, respectively, and the M.Sc. degree in operations research from Stanford University, Palo Alto, CA, USA, in 1999. He worked as a Post-Doctoral Fellow with the Institute of Computer Science and Technology, Peking University, Beijing, from 1993 to 1995, and with the Department of Computer Engineering, University of New Brunswick, Fredericton, NB, Canada, from 1995 to 1997. He served as CTO at RixoInfo Ltd., CA, USA, from 2000 to 2007, leading over ten intelligent information products, and was the Director of the Data Mining Lab, Institute of Computer Science and Technology, Peking University, from 2005 to 2009. He is currently a Professor with the School of Information, Renmin University of China. His research interests include graph data management and mining, social network analysis, graph computation systems, and graph-based machine learning.

# Rong-Hua Li

Rong-Hua Li received the Ph.D. degree in computer science from The Chinese University of Hong Kong, Hong Kong, in 2013. He is currently a Professor with the Beijing Institute of Technology, Beijing, China. His research interests include graph data management and mining, social network analysis, graph computation systems, and graph-based machine learning.

# Feiyu Xiong

Feiyu Xiong is the Head of the Large Language Model Center of the Institute for Advanced Algorithms Research, Shanghai. He holds a Bachelor's degree from Huazhong University of Science and Technology and a Ph.D. from Drexel University. He previously served as the Head of Data Intelligence for Alibaba's Business Middle Platform and the Head of the Data Platform for Taobao and Tmall Group. During his tenure at Alibaba, he was primarily responsible for the intelligent construction of systems related to core e-commerce transactions.

# Shichao Song

Shichao Song is currently a Ph.D. student at the School of Information, Renmin University of China, under the supervision of Prof. Xun Liang. His research interests span a wide range of topics, including internal consistency mining of LLMs, LLM interpretability, and reliable evaluation methods for LLMs. For more information, visit his website at https://ki-seki.github.io/.

# Zhiyu Li

Zhiyu Li received his Ph.D. in Computer Science from the School of Information, Renmin University of China, in 2019. He is currently a Senior Researcher at the Large Language Model Center of the Institute for Advanced Algorithms Research, Shanghai. He has published over 30 papers in top-tier conferences and journals such as TKDE, KDD, and ACL. His current responsibilities include research and application work related to large language models. His research interests include model pre-training, model alignment, and hallucination optimization.

# Zifan Zheng

Zifan Zheng is currently a research intern at the Large Language Model Center of the Institute for Advanced Algorithms Research, Shanghai. He received the B.S. degree in Computer Science and Technology from Beijing Institute of Technology, China, in 2024.
His research interests include LLM interpretability, reliable evaluation, and social network analysis.

# Hanyu Wang

Hanyu Wang is a Ph.D. student at the School of Information, Renmin University of China, under the supervision of Professor Xun Liang. His research areas include large language models, controllable text generation, and controlled decoding.

# Qingchen Yu

Qingchen Yu is currently a research intern at the Large Language Model Center of the Institute for Advanced Algorithms Research, Shanghai. He is also a master's student at Shanghai University. His research interests include machine learning, LLM evaluation, and prompt engineering.

# Xunkai Li

Xunkai Li is currently working toward the Ph.D. degree with the School of Computer Science, Beijing Institute of Technology, advised by Prof. Rong-Hua Li. He received the B.S. degree in computer science from Shandong University in 2022. His research interests lie in data-centric machine learning and graph machine learning within complex relational data and new learning paradigms. He has published more than five first-author papers in top DB/DM/AI conferences such as VLDB, WWW, and AAAI.