Sampling

Exact sampling

For most distributions, if we know the exact parametric form, we can sample from it exactly. Most programming languages achieve this by generating random samples from a uniform distribution, then mapping them through the inverse of the cumulative distribution function (CDF) to produce samples from the target distribution.

Therefore, for any continuous CDF \(F\) with PDF \(f\),

\[U \sim \operatorname{Unif}(0,1), \quad X = F^{-1}(U) \implies X \sim f\]
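One line, using the monotonicity of \(F\), confirms this:

\[P(X \le x) = P(F^{-1}(U) \le x) = P(U \le F(x)) = F(x),\]

so \(X\) has CDF \(F\) and therefore density \(f\).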

Programming languages ship pseudo-random number generators whose outputs have the same statistical properties as samples from a uniform distribution on \([0,1]\). We can understand the above more intuitively by imagining a CDF curve. A random number \(U\) sampled from \(\operatorname{Unif}(0,1)\) lands on the y-axis of the CDF, and \(X = F^{-1}(U)\) maps it back to the x-axis. More samples of \(X\) therefore fall where the CDF is steeper, i.e., in the region where the PDF peaks; e.g., the CDF of a narrow Gaussian rises more steeply than that of a broad one, so, assuming both are centered at the same point, the narrow one yields more samples around the center.
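As a minimal sketch in Python (assuming NumPy; the exponential target and the value of \(\lambda\) are illustrative choices, not from the original): the exponential CDF \(F(x) = 1 - e^{-\lambda x}\) inverts in closed form to \(F^{-1}(u) = -\ln(1-u)/\lambda\).

```python
import numpy as np

rng = np.random.default_rng(seed=0)

def sample_exponential(lam: float, n: int) -> np.ndarray:
    """Inverse-transform sampling: X = F^{-1}(U) for F(x) = 1 - exp(-lam * x)."""
    u = rng.uniform(0.0, 1.0, size=n)  # U ~ Unif(0, 1)
    return -np.log(1.0 - u) / lam      # F^{-1}(u) = -ln(1 - u) / lam

samples = sample_exponential(lam=2.0, n=100_000)
print(samples.mean())  # should be close to the true mean 1 / lam = 0.5
```

The same recipe works for any distribution whose inverse CDF is available in closed form or numerically; where it is not, methods like the one below apply.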

Rejection sampling

Often, sampling from \(f(x)\) directly is not feasible, but there may be a bounding distribution \(g(x)\) that is easy to sample from, such that

\[f(x) \le c\, g(x) \quad \forall x, \; \text{for some constant } c \ge 1\]

In such cases, where we can bound \(f(x)\) up to a multiplicative constant, rejection sampling is employed. We sample from \(f(x)\) as follows:

  • Sample \(X \sim g(x)\)
  • Sample \(U \sim \operatorname{Unif}[0,1]\)
  • Accept \(X\) as a draw from \(f(x)\) when \(U \le \frac{f(X)}{e(X)}\), where \(e(x) = c\, g(x)\) (a code sketch follows at the end of this section)

Rigorously, this works because (taking the support of \(f\) and \(g\) to be \([0, \infty)\); for a general support, replace the lower limit \(0\) by \(-\infty\)):

\[\begin{aligned} P\left(X \le x \,\middle|\, U \le \frac{f(X)}{e(X)}\right) &= \frac{P\left(X \le x ,\, U \le \frac{f(X)}{e(X)}\right)}{P\left(U \le \frac{f(X)}{e(X)}\right)} \\ &= \frac{P\left(U \le \frac{f(X)}{e(X)},\, X \le x\right)}{P\left(U \le \frac{f(X)}{e(X)},\, X \le \infty \right)} \\ \text{In the above, } X \le \infty & \text{ imposes no constraint; it only makes the two steps symmetric.} \\ \text{Conditioning on } X = y, & \text{ which has density } g(y) \text{ because we are sampling from } g\text{:} \\ &= \frac{\int_0^{x} P\left(U \le \frac{f(y)}{e(y)}\right) g(y)\, dy}{\int_0^{\infty} P\left(U \le \frac{f(y)}{e(y)}\right) g(y)\, dy} \\ &= \frac{\int_0^x \frac{f(y)}{e(y)}\, g(y)\, dy}{\int_0^{\infty} \frac{f(y)}{e(y)}\, g(y)\, dy} \\ \because P(U \le k) = k \text{ for } & U \sim \operatorname{Unif}(0,1),\ 0 \le k \le 1 \\ &= \frac{\int_0^x \frac{f(y)}{c\, g(y)}\, g(y)\, dy}{\int_0^{\infty} \frac{f(y)}{c\, g(y)}\, g(y)\, dy} \\ &= \frac{\frac{1}{c} \int_0^x f(y)\, dy}{\frac{1}{c} \int_0^{\infty} f(y)\, dy} \\ &= \int_0^x f(y)\, dy \\ &= P(X \le x) \end{aligned}\]

We therefore sample *exactly* from \(f(x)\).

\begin{align} P(Y\le y | \frac{f(y)}{\alpha g(y)} \ge U) & = \frac{P(Y\le y, \frac{f(y)}{\alpha g(y)} \ge U)}{P(\frac{f(y)}{\alpha g(y)} \ge U)}
& = \frac{\int_{- \infty}^{Y} P(\frac{f(y)}{\alpha g(y)} \ge U | y =v) g(y) dy}{\int_{- \infty}^{\infty} P(\frac{f(y)}{\alpha g(y)} \ge U | y = v) g(y) dy}
& = \frac{\int_{- \infty}^{Y} \frac{f(y)}{\alpha g(y)} g(y) dy}{\int_{- \infty}^{\infty} \frac{f(y)}{\alpha g(y)} g(y) dy}
& = \frac{1/\alpha \int_{- \infty}^{Y} f(y) dy}{1/\alpha \int_{- \infty}^ {\infty} f(y) dy}
& = \int_{-\infty}^Y f(y) dy = P(Y \le y) \end{align
} $$

It should be noted that the more closely \(c\, g(x)\) bounds \(f(x)\), the more points will be accepted: the overall acceptance probability is \(\int_0^{\infty} \frac{f(y)}{c\, g(y)} g(y)\, dy = \frac{1}{c}\), so a tighter bound (smaller \(c\)) wastes fewer proposals.
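As a minimal sketch (again Python with NumPy; the Beta(2, 2) target \(f(x) = 6x(1-x)\), the uniform proposal, and \(c = 1.5 = \max_x f(x)\) are illustrative choices, not from the original):

```python
import numpy as np

rng = np.random.default_rng(seed=0)

def target_pdf(x: float) -> float:
    """Target density f(x) = 6 x (1 - x), i.e. Beta(2, 2) on [0, 1]."""
    return 6.0 * x * (1.0 - x)

def rejection_sample(n: int, c: float = 1.5) -> np.ndarray:
    """Rejection sampling with proposal g = Unif(0, 1), so e(x) = c * g(x) = c."""
    accepted = []
    while len(accepted) < n:
        x = rng.uniform()           # X ~ g
        u = rng.uniform()           # U ~ Unif(0, 1)
        if u <= target_pdf(x) / c:  # accept when U <= f(X) / e(X)
            accepted.append(x)
    return np.array(accepted)

samples = rejection_sample(50_000)
print(samples.mean(), samples.var())  # Beta(2, 2): mean 0.5, variance 0.05
```

With this \(c\), roughly two thirds of proposals are accepted; a proposal that hugs \(f\) more closely would push \(c\) toward \(1\) and discard fewer draws.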
