In this section, we investigate bounds on the improvement in the variance of the balance heuristic estimator with an equal count of samples.

This problem was attacked by Veach, who established an inequality (Theorem 9.5 of his thesis [2]) for the variance of the balance heuristic estimator with an equal count of samples, \({\hat F}_{{\text {eq}}}\):

$$\begin{array}{@{}rcl@{}} V[\!{\hat F}_{{\text{eq}}}] \le n V[\!{F}] + \frac{n-1}{N} \mu^{2}, \end{array} $$

(23)

where *F* is any multiple importance sampling estimator using the same total number of samples *N*. Veach interpreted Theorem 9.5 as a proof of quasi-optimality of the balance heuristic with equal count of samples, saying “According to this result, changing the *N*_{i} can improve the variance by at most a factor of *n*, plus a small additive term. In contrast, a poor choice of the *w*_{i} can increase variance by an arbitrary amount. Thus, the sample allocation is not as important as choosing a good combination strategy.”

The proof of Eq. 23 is based on the following inequality, which compares a general multiple importance sampling estimator *F* with an arbitrary number of samples {*N*_{i}} to the same estimator (i.e., using the same weights *w*_{i}) but with an equal count of samples, *F*_{eq}:

$$\begin{array}{@{}rcl@{}} V[\!F] \ge \frac{1}{n} V[\!F_{{\text{eq}}}]. \end{array} $$

(24)

However, this inequality is not valid when the weights *w*_{i}(*x*) depend on the numbers of samples *N*_{i}; see Appendix A for a proof. As a single counterexample, consider the case where a zero-variance estimator is possible by properly setting the numbers of samples, making *V*[*F*] zero, while the equal-count estimator does not have zero variance. We show, however, that Theorem 9.5 can be generalized to such cases as well, although this requires a full reconsideration of the original proof.
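To make the counterexample idea concrete, here is a small numerical sketch using the one-sample mixture analogue: the densities *p*_{1} ∝ *f* and *p*_{2} uniform on [0, 1] are illustrative choices, not taken from the paper. When all samples are allocated to the technique proportional to *f*, the mixture estimator has zero variance, while the equal-count mixture does not:

```python
import numpy as np

# Midpoint-rule quadrature on [0, 1] (avoids the endpoint x = 0 where p1 vanishes).
M = 10_000
x = (np.arange(M) + 0.5) / M
dx = 1.0 / M

f  = 2.0 * x          # integrand, mu = \int f dx = 1
p1 = 2.0 * x          # technique 1: density proportional to f
p2 = np.ones_like(x)  # technique 2: uniform density

mu = np.sum(f) * dx   # = 1 (midpoint rule is exact for linear integrands)

def variance(a1):
    """Variance of the mixture (one-sample balance heuristic) estimator:
    \int f^2 / (a1*p1 + (1-a1)*p2) dx - mu^2."""
    mix = a1 * p1 + (1.0 - a1) * p2
    return np.sum(f**2 / mix) * dx - mu**2

v_opt = variance(1.0)   # all samples from technique 1 -> zero variance
v_eq  = variance(0.5)   # equal allocation -> strictly positive variance
print(v_opt, v_eq)
```

With the allocation concentrated on the matching technique, `f/p1` is constant and the variance vanishes, while the equal allocation keeps a positive variance, so no factor-of-*n* bound relative to *V*[*F*] = 0 can hold.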

The interpretation by Veach of Theorem 9.5 is based on the assumption that the additive term *μ*^{2}(*n*−1)/*N* is small when the total number of samples *N* is large. However, the denominator *N* is implicitly present in the other terms of Eq. 23 as well, so the considered additive term is, in fact, not negligible. As a result, the selection of the sample numbers *N*_{i} or the weights *α*_{i} can make a significant difference in the variance, which is worth examining and opens possibilities to find better estimators.
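One way to see this (under the assumption, not stated explicitly in Eq. 23, that the sample counts scale with the total, *N*_{i}=*c*_{i}*N* for fixed fractions *c*_{i}): the variance of the averaged secondary estimator behaves as *V*[*F*]=*σ*^{2}(*c*)/*N* for some constant *σ*^{2}(*c*) independent of *N*, so substituting into Eq. 23 gives

$$ V[\!{\hat F}_{{\text{eq}}}] \le \frac{n\, \sigma^{2}(c)}{N} + \frac{n-1}{N}\, \mu^{2}, $$

and both terms on the right-hand side decay as 1/*N*; the additive *μ*^{2} term is therefore of the same order as the variance term, not asymptotically negligible.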

### 3.1 One-sample balance heuristic

The general MIS one-sample primary estimator is

$$\begin{array}{@{}rcl@{}} {\mathcal{F}}^{1}= \frac{w_{i}(x) f(x)}{\alpha_{i} p_{i} (x)}, \end{array} $$

(25)

where technique *i* is selected with probability *α*_{i}. It can easily be shown that this estimator is unbiased, i.e., its expected value is *μ*. Using the balance heuristic weights,

$$ w_{i}(x) = \frac{\alpha_{i} p_{i}(x)}{\sum_{k=1}^{n} \alpha_{k} p_{k}(x)}, $$

(26)

the estimator becomes the *one-sample balance heuristic estimator*,

$$\begin{array}{@{}rcl@{}} \hat{\mathcal{F}}^{1} = \frac{f(x)}{\sum_{k} \alpha_{k} p_{k}(x)}. \end{array} $$

(27)

The one-sample balance heuristic is the same as the Monte Carlo estimator that uses the mixture of densities \(p(x) = \sum_{k=1}^{n} \alpha_{k} p_{k}(x)\), with \(\sum_{k=1}^{n} \alpha_{k} = 1\). The *α*_{i} values are called the mixture coefficients and represent the average fraction of samples drawn with each technique. The variance of this estimator is obtained by applying the definition of variance,

$$\begin{array}{@{}rcl@{}} V\left[\!\hat{\mathcal{F}}^{1}\right] &=& \int \frac{f^{2}(x)}{ \sum_{k=1}^{n} \alpha_{k} p_{k}(x)} {\mathrm{d}}x - \mu^{2}. \end{array} $$

(28)
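The sampling procedure behind Eqs. 25–27 can be sketched numerically. The integrand, the two densities, and the mixture coefficients below are illustrative assumptions, not taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: estimate mu = \int_0^1 f(x) dx = 1/3 with two techniques.
f  = lambda x: x**2
p1 = lambda x: 2.0 * x           # pdf on [0,1]; sampled by inverse CDF sqrt(U)
p2 = lambda x: np.ones_like(x)   # uniform pdf on [0,1]
alphas = np.array([0.7, 0.3])    # mixture coefficients, sum to 1

N = 200_000
# Select a technique per sample with probability alpha_i (Eq. 25),
# then draw x from the chosen p_i.
tech = rng.choice(2, size=N, p=alphas)
u = rng.random(N)
x = np.where(tech == 0, np.sqrt(u), u)

# One-sample balance heuristic (Eq. 27): divide f by the mixture density.
mix = alphas[0] * p1(x) + alphas[1] * p2(x)
estimate = np.mean(f(x) / mix)
print(estimate)   # close to mu = 1/3
```

Note that the balance heuristic weights of Eq. 26 cancel against the per-technique divisor in Eq. 25, which is why only the mixture density appears in the final estimator.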

### Theorem 1

If \(V\left [\!\hat {\mathcal {F}}_{{\text {eq}}}^{1}\right ]\) is the variance of the one-sample balance heuristic estimator with equal weights, and \(V\left [\!\hat {\mathcal {F}}^{1}\right ]\) is the variance of the one-sample balance heuristic estimator with an arbitrary distribution of weights {*α*_{k}}, then the following inequality holds:

$$ V\left[\hat{\mathcal{F}}_{{\text{eq}}}^{1}\right] \le n \alpha_{\max} V\left[\!\hat{\mathcal{F}}^{1}\right] + (n \alpha_{\max} -1) \mu^{2}. $$

(29)

### Proof

The variance of the one-sample balance heuristic with equal weights is

$$ V\left[\hat{\mathcal{F}}_{{\text{eq}}}^{1}\right] = \int{\frac{f^{2}(x)}{\frac{1}{n}\sum_{k} p_{k}(x)} {\mathrm{d}}x} - \mu^{2}. $$

(30)

Since \(\alpha _{\max } \sum _{k=1}^{n} p_{k}(x) \ge \sum _{k=1}^{n} \alpha _{k} p_{k}(x)\), where *α*_{max}>0 is the maximum of the *α*_{k} values and each *p*_{k} is nonnegative, we have \(\frac {1}{n} \alpha _{\max } \sum _{k=1}^{n} p_{k}(x) \ge \frac {1}{n} \sum _{k=1}^{n} \alpha _{k} p_{k}(x)\), and thus:

$$\begin{array}{@{}rcl@{}} V\left[\hat{\mathcal{F}}_{{\text{eq}}}^{1}\right] &\le& n \alpha_{\max} \int{\frac{f^{2}(x)}{\sum_{k=1}^{n} \alpha_{k} p_{k}(x)} {\mathrm d}x} - \mu^{2} \\ &=& n \alpha_{\max} \left(\int{\frac{f^{2}(x)}{\sum_{k=1}^{n} \alpha_{k} p_{k}(x)} {\mathrm{d}}x} - \mu^{2}\right) \\ &&+ (n \alpha_{\max} -1)\mu^{2}. \end{array} $$

(31)

□

When *α*_{i}=1/*n* for all *i*, Eq. 29 becomes an equality. Observing that *α*_{max}≤1, the following corollary is immediate.
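The bound of Theorem 1 can be checked numerically. The integrand and the three sampling densities below are arbitrary illustrative choices; the script evaluates the variances of Eqs. 28 and 30 by midpoint quadrature and verifies Eq. 29 for random mixture weights:

```python
import numpy as np

# Midpoint quadrature on [0, 1].
M = 10_000
x = (np.arange(M) + 0.5) / M
dx = 1.0 / M

f  = 1.0 + np.sin(2.0 * np.pi * x)                        # integrand, mu = 1
ps = np.stack([2.0 * x, 2.0 - 2.0 * x, np.ones_like(x)])  # n = 3 pdfs on [0,1]
mu = np.sum(f) * dx

def variance(alphas):
    """Variance of the one-sample balance heuristic, Eq. (28)."""
    mix = np.tensordot(alphas, ps, axes=1)  # mixture density sum_k alpha_k p_k
    return np.sum(f**2 / mix) * dx - mu**2

n = 3
v_eq = variance(np.full(n, 1.0 / n))  # equal weights, Eq. (30)

# Verify V[eq] <= n*alpha_max*V[alpha] + (n*alpha_max - 1)*mu^2 (Eq. 29)
# for random weight vectors on the simplex.
rng = np.random.default_rng(1)
for _ in range(100):
    a = rng.dirichlet(np.ones(n))
    bound = n * a.max() * variance(a) + (n * a.max() - 1.0) * mu**2
    assert v_eq <= bound + 1e-9
```

Because the pointwise inequality used in the proof holds at every quadrature node, the discrete check satisfies Eq. 29 up to floating-point rounding; at *α*_{i}=1/*n* the bound reduces exactly to the left-hand side.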

### Corollary 1

$$ V\left[\hat{\mathcal{F}}_{{\text{eq}}}^{1}\right] \le n V\left[\!\hat{\mathcal{F}}^{1}\right] + (n-1) \mu^{2}. $$

(32)

Equations 29 and 32 do not imply that the improvement with respect to the equal count of samples is limited to a factor of *n*, since *μ* can be large in comparison with the variances. It is therefore worth trying to obtain *α*_{i} values that reduce the variance.

Equations 29 and 32 can be extended to general one-sample MIS using Theorem 9.4 of Veach’s thesis, which states that the variance of the one-sample MIS estimator is minimal when the balance heuristic is used, i.e., \(V[\hat {\mathcal {F}}] \le V[{\mathcal {F}}]\). For instance, Eq. 32 becomes

$$ V\left[\hat{\mathcal{F}}_{{\text{eq}}}^{1}\right] \le n V\left[\!{\mathcal{F}}^{1}\right] + (n-1) \mu^{2}. $$

(33)

For *N* samples,

$$\begin{array}{@{}rcl@{}} V\left[\!\hat{\mathcal{F}}_{{\text{eq}}}\right] \le n V[\!{\mathcal{F}}] + \frac{(n-1)}{N} \mu^{2}. \end{array} $$

(34)

Observe that Eq. 34 is similar to Eq. 23, except that it holds for the one-sample estimator. Using \(V[\!\hat {F}] \le V[\!\hat {\mathcal {F}}]\) (as shown in [11]), we also have:

$$\begin{array}{@{}rcl@{}} V\left[\!\hat{F}_{{\text{eq}}}\right] \le n V[\!{\mathcal{F}}] + \frac{(n-1)}{N} \mu^{2}. \end{array} $$

(35)

Finally, note that we do not claim that Eq. 23 is invalid, but that its proof in Veach’s thesis [2] is wrong and that his interpretation is incorrect. On the other hand, we have proved a formally identical relationship, Eq. 34, for the one-sample MIS estimator instead of the multi-sample estimator.