Certified Self-Consistency: Statistical Guarantees and Test-Time Training for Reliable Reasoning in LLMs

Imperial College London
TLDR: We provide a unified statistical framework explaining when and why self-consistency yields certifiable reliability in reasoning models, and show how test-time adaptation can further reduce the computational cost of this certification.
Certified self-consistency workflow
Given a prompt, the model generates multiple reasoning rollouts from the reference distribution \(\pi_{\mathrm{ref}}(\cdot|{pr})\). The resulting terminal answers are aggregated via majority voting, viewed as mode estimation under sampling uncertainty. The Martingale Majority Certificate (MMC) monitors the empirical margin and provides an anytime-valid stopping rule for certification. Test-time training with signal-to-noise ratio (SNR) or entropy-based adaptation sharpens the terminal distribution, thereby increasing the SNR and reducing the number of samples required for certification.
MMC stopping rule in action

Abstract

Recent advances such as self-consistency and test-time reinforcement learning (TTRL) improve the reliability of large language models (LLMs) without additional supervision, yet their underlying mechanisms and statistical guarantees remain poorly understood.

We present a unified framework for certifiable inference in LLMs, showing that majority voting provides a statistical certificate of self-consistency: under mild assumptions, the aggregated answer coincides with the mode of the model’s terminal distribution with high probability. We derive finite-sample and anytime-valid concentration bounds that quantify this confidence, and introduce the Martingale Majority Certificate (MMC), a sequential stopping rule that adaptively determines when sufficient samples have been drawn.

We further prove that label-free post-training methods such as TTRL implicitly sharpen the answer distribution by exponentially tilting it toward its mode, thereby reducing the number of samples required for certification. Building on this insight, we propose new post-training objectives that explicitly optimise this trade-off between sharpness and bias. Together, these results explain and connect two central test-time scaling strategies, self-consistency and TTRL, within a single statistical framework for label-free, certifiable reliability in reasoning LLMs.

Setting

LLM rollouts can be formalised as a stochastic decoding process \[ (Y_t)_{t \ge 0}, \quad Y_t \in \mathcal{V}, \] where \( \mathcal{V} \) is the vocabulary and the process is initialised by a prompt \( pr \). At each step the model samples \[ Y_{t+1} \sim \pi_\phi(\cdot \mid Y_{\le t}, pr), \] from a conditional policy parametrised by weights \( \phi \). The thinking phase consists of the random evolution of this sequence until a termination token is produced at a random stopping time \( \tau \), at which point the model emits its response \( Y_{\tau:} \). We denote by \[ X := g(Y_{\tau:}) \in \mathcal{A} \] the canonicalised terminal answer, obtained by applying a deterministic extraction map \( g \). The induced terminal distribution \( \mathbf{p} = \mathrm{Law}(X) \) over the answer set \( \mathcal{A} \) captures the model’s epistemic uncertainty about its own final output. In an ideal reasoning model, we would like rollouts to exhibit rich variability in \( Y_{1:\tau-1} \) (the reasoning trajectories), yet concentrate mass on the final answer \( X \) (the outcome). That is, we seek diversity over reasoning paths, but consistency over terminal responses.

In supervised or verifier-equipped settings, correctness can be externally validated. In open-ended reasoning tasks, such supervision is unavailable. In the absence of external rewards, a model must act relative to its own uncertainty. Letting \( a \in \mathcal{A} \) denote the chosen output and \( X \sim \mathbf{p} \) the stochastic model response, the expected 0–1 loss is \( \mathbb{E}[\mathbf{1}\{a \neq X\}] \). The Bayes-optimal decision minimising this loss is the mode

\[ c^\star = \arg\max_j p_j, \]

which corresponds to the model’s most probable self-consistent answer. Hence, under symmetric loss, recovering the mode is the optimal model-relative prediction. When a verifier is absent, certifying that a model’s reported answer coincides with this mode provides a natural measure of reliability.
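To make the optimality claim explicit: for any fixed candidate \( a \in \mathcal{A} \), the expected loss is \( \mathbb{E}[\mathbf{1}\{a \neq X\}] = \Pr[X \neq a] = 1 - p_a \), which is smallest exactly when \( p_a \) is largest, i.e. when \( a = c^\star \).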

Statistical Certificates of Self-Consistency

In practice, the terminal probabilities \( \mathbf{p} \) are unknown and can be estimated only through multiple independent rollouts \( X_1,\ldots,X_n \). The simplest estimator of the mode is the majority vote

\[ \widehat{c}_n := \arg\max_j \hat{p}_{n,j}, \qquad \hat{p}_{n,j} = \frac{1}{n}\sum_{i=1}^{n}\mathbf{1}\{X_i=j\}. \]

This estimator forms the basis of self-consistency test-time scaling. From a statistical standpoint, majority voting is the Bayes-optimal estimator of \( c^\star \) under 0–1 loss, and an associated upper bound on \( \mathbb{P}[\widehat{c}_n \neq c^\star] \) provides a statistical certificate of self-consistency: a quantitative guarantee that the aggregated answer coincides with the mode of the terminal law \( \mathbf{p} \) with high probability.
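As a concrete illustration, the sketch below computes the empirical mode and runner-up from a batch of canonicalised answers. The names `generate_rollout` and `extract_answer` in the usage comment are hypothetical placeholders for one decoding pass \( Y \sim \pi_\phi(\cdot \mid pr) \) and the extraction map \( g \), respectively.

```python
from collections import Counter

def majority_vote(answers):
    """Return the empirical mode (leader), the runner-up, and the vote counts
    for a list of canonicalised terminal answers X_1, ..., X_n."""
    counts = Counter(answers)
    ranked = counts.most_common()                       # sorted by count, descending
    leader = ranked[0][0]
    runner_up = ranked[1][0] if len(ranked) > 1 else None
    return leader, runner_up, counts

# Hypothetical usage (placeholders, not a real API):
# answers = [extract_answer(generate_rollout(prompt)) for _ in range(n)]
# leader, runner_up, counts = majority_vote(answers)
```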

Under standard regularity conditions the majority-vote estimator is consistent, \( \Pr[\widehat{c}_n = c^\star] \to 1 \) as \( n \to \infty \). A more practical question concerns the finite-sample regime: how large must \( n \) be to guarantee, with confidence \( 1-\varepsilon \), that \( \widehat{c}_n \) already equals \( c^\star \)?

To address this, we derive finite-sample and asymptotic certificates, leveraging Hoeffding, Bernstein, Chernoff–Markov, and Sanov concentration bounds for the error probability \( \mathbb{P}[\widehat{c}_n \neq c^\star] \). These bounds clarify how reliability scales with the ensemble size and with the mode margin \( \delta = p_{c^\star} - p_{j^\star} \), i.e., the gap between the top two answer probabilities, where \( j^\star \) denotes the runner-up answer.
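To give a flavour of such a certificate (the constants in our bounds may differ), a Hoeffding argument combined with a union bound over the \( k-1 \) rivals gives \( \mathbb{P}[\widehat{c}_n \neq c^\star] \le (k-1)\exp(-n\delta^2/2) \), since each pairwise margin variable \( \mathbf{1}\{X=c^\star\}-\mathbf{1}\{X=j\} \) has mean at least \( \delta \) and range 2. Inverting this bound yields a conservative sample-size rule:

```python
import math

def hoeffding_sample_size(delta, eps, k):
    """Smallest n such that (k - 1) * exp(-n * delta**2 / 2) <= eps, i.e. a
    conservative number of rollouts guaranteeing P[majority != mode] <= eps.
    delta: mode margin p_{c*} - p_{j*};  eps: error tolerance;  k: number of distinct answers."""
    return math.ceil(2.0 / delta**2 * math.log((k - 1) / eps))

# Example with illustrative numbers: delta = 0.2, eps = 0.05, k = 10  ->  n = 260
```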

If the probabilities \( p_j \) were known, one could invert these bounds to determine the number of samples required to achieve a desired confidence \( 1-\varepsilon \). In reality, both \( p_j \) and \( \delta \) must be estimated on the fly. This motivates a sequential formulation: as rollouts arrive, can we determine adaptively when the current majority is statistically reliable? We introduce the Martingale Majority Certificate (MMC), a sequential procedure that adaptively tests whether the empirical leader remains significantly ahead of its nearest rival and of all others combined. This guarantees that at the (random) stopping sample size \( n_\tau \), the majority vote coincides with the true mode with high probability:

\[ \Pr[\widehat{c}_{n_\tau} \neq c^\star] \le \varepsilon, \]

thus providing an anytime-valid certificate of model self-consistency.

Martingale Majority Certificate Stopping Rule

Our proposed stopping rule adaptively decides when to stop sampling rollouts while controlling the probability that the returned empirical majority differs from the true mode.

The central challenge in the LLM setting is the potentially large number of possible outcomes. A naive stopping rule would require pairwise comparisons of the empirical probabilities across all classes \( i \neq j \), \( i,j \in \{1, \dots, k\} \), which becomes computationally prohibitive as \( k \) grows.

To address this, we exploit the observation that the mass of the terminal law is typically concentrated on a few classes \( m \ll k \). Thus, instead of considering all classes individually, we aggregate votes into three categories:

  • the current leader \( \widehat{c}_n \),
  • the runner-up \( j^\star_n \), and
  • all the others.

Accordingly, we perform two tests: leader vs runner-up and leader vs others.

MMC algorithm
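The sketch below illustrates the control flow of such a rule under simplifying assumptions. Anytime validity is obtained here by the crude device of splitting the error budget \( \varepsilon \) across sample sizes with a union bound, and each of the two tests uses a Hoeffding-style threshold; the actual MMC relies on sharper martingale confidence sequences and handles the fact that the identities of the leader and runner-up are data-dependent. The function `sample_answer` is a hypothetical placeholder for one rollout followed by answer extraction.

```python
import math
from collections import Counter

def mmc_sketch(sample_answer, eps=0.1, n_max=200):
    """Illustrative two-test sequential stopping rule in the spirit of the MMC.

    sample_answer: callable drawing one rollout and returning its canonical answer
                   (placeholder for sampling from pi_ref(. | prompt) and applying g).
    eps:           target error probability for returning a non-modal answer.
    n_max:         hard sample budget.

    NOTE: this is a conservative sketch, not the MMC itself; it glosses over the
    data-dependence of the leader/runner-up identities.
    """
    counts = Counter()
    for n in range(1, n_max + 1):
        counts[sample_answer()] += 1
        if len(counts) < 2:
            continue
        (leader, n_lead), (_, n_run) = counts.most_common(2)
        n_rest = n - n_lead - n_run                     # everything except leader and runner-up
        # Hoeffding threshold for a mean of [-1, 1] variables, with the error
        # budget eps split across the two tests and all sample sizes n.
        thr = math.sqrt(2.0 * math.log(2.0 * n * (n + 1) / eps) / n)
        lead_vs_runner = (n_lead - n_run) / n           # empirical margin, test 1
        lead_vs_rest = (n_lead - n_rest) / n            # empirical margin, test 2
        if lead_vs_runner > thr and lead_vs_rest > thr:
            return leader, n                            # certified at sample size n
    return counts.most_common(1)[0][0], n_max           # budget exhausted, no certificate
```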

Optimising Sample Efficiency with Test-Time Training

Our ultimate goal is to minimise the number of samples required from the LLM for the majority vote to return the correct answer with high confidence \(1-\varepsilon\). The expected stopping time of the MMC scales approximately as \[ N \;\approx\; \frac{2(p_{\hat c}+p_{j^\star})}{(p_{\hat c}-p_{j^\star})^{2}} \,\log \frac{1}{\varepsilon}, \] so that small mode margins \( \delta = p_{\hat c}-p_{j^\star} \) lead to rapidly increasing sample requirements.
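Plugging in representative numbers makes this sensitivity concrete: at \( \varepsilon = 0.1 \), terminal probabilities \( p_{\hat c} = 0.7 \) and \( p_{j^\star} = 0.2 \) give \( N \approx 7.2 \log 10 \approx 17 \), whereas shrinking the margin to \( p_{\hat c} = 0.6 \) and \( p_{j^\star} = 0.4 \) already gives \( N \approx 50 \log 10 \approx 115 \).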

The key question is whether test-time adaptation can reshape the terminal distribution to enlarge this margin, thereby improving sample efficiency.

We show that the optimal policy corresponding to the KL-regularised objective proposed in TTRL is an exponentially tilted version of the base model. Decreasing the regularisation parameter consistently increases the margin and reduces the number of samples required for certification.
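As a toy numerical illustration of this mechanism, the snippet below tilts a terminal law \( \mathbf{p} \) as \( p_\beta(j) \propto p_j \exp(r(j)/\beta) \) and tracks the mode margin as \( \beta \) decreases. The mode-indicator reward \( r(j) = \mathbf{1}\{j = c^\star\} \) and the four-answer distribution are illustrative assumptions, not the TTRL reward itself.

```python
import math

def tilt(p, reward, beta):
    """Exponentially tilt a terminal distribution p (dict answer -> prob):
    p_beta(j) proportional to p(j) * exp(reward(j) / beta)."""
    w = {j: pj * math.exp(reward(j) / beta) for j, pj in p.items()}
    z = sum(w.values())
    return {j: wj / z for j, wj in w.items()}

# Toy terminal law with a modest mode margin, and an (assumed) mode-indicator reward.
p = {"42": 0.4, "41": 0.3, "40": 0.2, "39": 0.1}
reward = lambda j: 1.0 if j == "42" else 0.0

for beta in (10.0, 1.0, 0.5, 0.2):
    q = tilt(p, reward, beta)
    margin = q["42"] - max(v for j, v in q.items() if j != "42")
    print(f"beta = {beta:4.1f}   margin = {margin:.3f}")
```

On this toy distribution the margin grows from roughly 0.14 at \( \beta = 10 \) to about 0.98 at \( \beta = 0.2 \), matching the qualitative statement above that weaker regularisation sharpens the terminal law around its mode.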

Two new test-time RL objectives

We introduce two label-free group-level rewards designed to optimise the trade-off between sharpness and bias. Let \( \mathbf{X} = (X_1, \dots, X_n) \) be a set of answers arising from rollouts \( \mathbf{Y} =(Y_1, \ldots, Y_n) \) for a given prompt, with \( \widehat{c}_n \) denoting the majority vote and \( j_n^\star \) the runner-up. Define \( N_j = \sum_i \mathbf{1}\{X_i=j\} \).

  1. SNR-based reward.

    Directly leveraging the SNR as the driving factor in the efficiency of the MMC scheme, we introduce the first reward (a code sketch of both rewards follows this list)

    \[ r^{(1)}_n(\mathbf{Y}) = \widehat{\mathrm{SNR}}(\Delta_{j^\star_n})(\mathbf{X}) = \frac{(N_{\widehat c_n}-N_{j^\star_n})^{2}} {n \left(N_{\widehat c_n}+N_{j^\star_n}\right) -(N_{\widehat c_n}-N_{j^\star_n})^{2}} \;\xrightarrow[n\to\infty]{}\; \mathrm{SNR}(\Delta_{j^\star}). \]

    This objective aims to directly maximise \( \mathrm{SNR}(\Delta_{j^\star}) \), which is equivalent to minimising the expected number of samples required to obtain statistical certificates for the majority vote.

  2. Entropy-based reward.

    As we want to encourage a more peaked terminal distribution, another natural option is negative entropy, i.e.

    \[ r^{(2)}_n(\mathbf{Y}) = \widehat H_n(\mathbf{X}) = \sum_{j:N_j>0}\frac{N_j}{n} \log \frac{N_j}{n} \;\xrightarrow[n\to\infty]{}\; \sum_j p_j \log p_j = -H(p). \]

    Maximising \( \widehat H_n \) minimises the Shannon entropy of the answer distribution, encouraging a sharper, lower-entropy terminal distribution. 🚨Important: The tempering sharpens only the distribution of final answers, not the full sequence distribution. This gives us the best of both worlds: promoting certainty when providing a final answer, but permitting exploration of diverse pathways during the chain-of-thought reasoning process.
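A minimal sketch of both group-level rewards, assuming the rollout answers have already been canonicalised (the grouping of rollouts per prompt and the policy-gradient update that consumes these rewards are omitted):

```python
import math
from collections import Counter

def snr_reward(answers):
    """Empirical SNR reward r^(1)_n computed from the vote counts:
    (N_lead - N_run)^2 / (n * (N_lead + N_run) - (N_lead - N_run)^2)."""
    n = len(answers)
    ranked = Counter(answers).most_common(2)
    n_lead = ranked[0][1]
    n_run = ranked[1][1] if len(ranked) > 1 else 0
    diff = n_lead - n_run
    denom = n * (n_lead + n_run) - diff ** 2
    # A unanimous group makes the denominator zero; in practice one would cap
    # or smooth this case (how to do so is a design choice, not specified here).
    return float("inf") if denom == 0 else diff ** 2 / denom

def neg_entropy_reward(answers):
    """Empirical negative-entropy reward r^(2)_n = sum_j (N_j / n) * log(N_j / n)."""
    n = len(answers)
    return sum((c / n) * math.log(c / n) for c in Counter(answers).values())
```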

Performance TTT
Pass@1 performance after test-time training with SNR and entropy-based rewards relative to the base models.

We observe in the table below that the number of samples required under the MMC stopping rule decreases after applying test-time training, relative to the pre-trained model. That is, test-time training sharpens the terminal answer distribution, increasing the mode margin and thus reducing the number of samples required for certification.

Performance TTT
Majority vote accuracy and required number of samples under the MMC stopping rule (✅) at confidence levels 0.1 and 0.4 for the pre-trained model and after test-time training with SNR-based rewards. Performance is compared to that obtained using the full sample budget (❌).

SNR as a label-free estimator of task difficulty

Our experiments reveal a notable empirical regularity: the signal-to-noise ratio (SNR) of the margin variable \(\Delta_{j^\star} = \mathbf 1\{X = c^\star\} - \mathbf 1\{X = j^\star\}\), which quantifies the sharpness of the model’s terminal answer distribution, correlates strongly with external measures of problem difficulty. Across the MATH-500 benchmark, harder problems exhibit systematically lower and more variable SNR values, while easier problems yield sharply peaked distributions concentrated around a single answer.

This behaviour is non-trivial: the model has no access to ground-truth difficulty labels, yet its own epistemic uncertainty, reflected in the variability of its rollouts, aligns closely with these labels. This suggests an emergent form of calibration in reasoning LLMs: without explicit supervision or external verification, models appear to “know when they do not know.” In statistical terms, the SNR acts as a label-free proxy for epistemic uncertainty and, consequently, for task difficulty.

SNR distribution qwen-math-1.5B. SNR distribution qwen-math-7B.
Distribution of the estimated SNR when using the MMC stopping rule with \(\varepsilon = 0.1\) and \(N_{\text{budget}}=100\). Results are obtained after applying test-time training with SNR-based rewards.

Conclusion

Our results unify several strands of recent work on reliable inference in LLMs, namely self-consistency, adaptive compute allocation, and test-time reinforcement learning (TTRL), under a common statistical perspective. Through this lens, majority voting emerges naturally as a means of estimating the mode of the terminal distribution. The validity of the majority vote as an estimate of the mode can be certified by finite-sample and asymptotic bounds. The Martingale Majority Certificate (MMC) extends this view by providing an operational test-time algorithm that determines, from model rollouts alone, when a response is statistically self-consistent.

Furthermore, we shed light on the underlying mechanism by which TTRL and related post-training approaches improve reasoning reliability: KL-regularised optimisation corresponds to an exponential tilting of the terminal law, sharpening it around its mode and increasing the signal-to-noise ratio (SNR) of the margin variable. This insight explains empirical observations of enhanced consistency after test-time adaptation, and motivates new label-free objectives such as our SNR- and entropy-based rewards, which explicitly target this trade-off between sharpness and bias. Unlike prior work that tunes temperature or per-token distributions, our formulation operates on the terminal marginal, preserving exploration during reasoning while promoting confidence in the final answer.

BibTeX

@article{corderoencinar2025certified,
  author    = {Paula Cordero-Encinar and Andrew B. Duncan},
  title     = {Certified Self-Consistency: Statistical Guarantees and Test-Time Training for Reliable Reasoning in LLMs},
  journal   = {arXiv:2510.17472},
  year      = {2025},
}