Separated representations of integral kernels#
Background#
The Poisson and BSH (bound-state Helmholtz) kernels are singular, 6-dimensional integral kernels that cannot be projected directly into the MRA representation.
A low-rank (separated) representation is a sum of outer products of low-dimensional objects; for these kernels it takes the form of a sum of Gaussians.
The objective is to represent the Poisson and BSH kernels as a sum of Gaussians with guaranteed precision.
Use the following integral identity

\[
\frac{e^{-\mu r}}{r} = \frac{2}{\sqrt{\pi}} \int_{-\infty}^{\infty} e^{-r^2 e^{2s} - \frac{\mu^2}{4} e^{-2s}}\, e^{s}\, \mathrm{d}s ,
\]

where we discretize the integral using the trapezoidal rule. For the Poisson kernel (\(\mu=0\)) the quantities in the sum

\[
\frac{1}{r} \approx \sum_{i=0}^{n-1} \alpha_i\, e^{-\beta_i r^2}
\]

are

\[
\alpha_i = \frac{2h}{\sqrt{\pi}}\, e^{s_i}, \qquad \beta_i = e^{2 s_i}, \qquad s_i = s_\text{min} + i\, h .
\]

\(r_\text{max}\) and \(r_\text{min}\) are the upper and lower limits of the representation (the range of \(r\) over which the expansion is accurate); together with the requested precision they determine the integration limits \(s_\text{min}\) and \(s_\text{max}\) (called t_min and t_max in the code below), and \(h\) is the step size.
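Each Gaussian term factorizes exactly along the Cartesian directions, \(e^{-\beta_i \lVert \mathbf{r}-\mathbf{r}'\rVert^2} = e^{-\beta_i (x-x')^2}\, e^{-\beta_i (y-y')^2}\, e^{-\beta_i (z-z')^2}\), which is what makes the sum of Gaussians a separated (low-rank) representation of the 6-dimensional kernel.

As a quick sanity check (not part of the original material), the identity can be verified numerically for \(\mu=0\) by applying the trapezoidal rule on a generous grid in \(s\) and comparing against \(1/r\); the grid limits and resolution below are illustrative choices:

import numpy as np

def integrand(s, r):
    # integrand of the identity for mu = 0
    return 2 / np.sqrt(np.pi) * np.exp(-r**2 * np.exp(2 * s) + s)

s = np.linspace(-20.0, 20.0, 4001)
h = s[1] - s[0]
for r in (0.01, 1.0, 100.0):
    f = integrand(s, r)
    trapz = h * (np.sum(f) - 0.5 * (f[0] + f[-1]))  # trapezoidal rule
    print(f"r = {r:>6}: quadrature = {trapz:.6e}, exact 1/r = {1/r:.6e}")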
Exercise 1#
Plot the integrand of the identity representing the Poisson kernel (\(\mu=0\)) for \(r=0.01\), \(r=1\), \(r=100\). Use a logarithmic plot.
import numpy as np
import matplotlib.pyplot as plt

def kernel(r, s):
    val = 1  # fill in the equation for the integrand (mu = 0)
    return val
s_array = np.linspace(-10.0, 13.0, 101)
kernel_array_001 = kernel(0.01, s_array)   # repeat for r = 1 and r = 100
plt.xscale("linear")
plt.yscale("log")
plt.ylim(1.e-10, 1.e5)
plt.plot(s_array, kernel_array_001)
plt.show()
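One possible way to fill in the integrand (a sketch of a solution, assuming the \(\mu=0\) form of the identity above; on the logarithmic plot the three curves have the same shape, shifted towards more negative \(s\) and scaled down by \(1/r\) as \(r\) grows):

import numpy as np
import matplotlib.pyplot as plt

def kernel(r, s):
    # integrand of the Poisson (mu = 0) identity: (2/sqrt(pi)) * exp(-r^2 * e^(2s) + s)
    return 2 / np.sqrt(np.pi) * np.exp(-r**2 * np.exp(2 * s) + s)

s_array = np.linspace(-10.0, 13.0, 101)
for r in (0.01, 1.0, 100.0):
    plt.plot(s_array, kernel(r, s_array), label=f"r = {r}")
plt.yscale("log")
plt.ylim(1.e-10, 1.e5)
plt.legend()
plt.show()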
import numpy as np
import matplotlib.pyplot as plt
def compute_expansion(s, stepsize):
    # trapezoidal-rule term at quadrature point s: coefficient alpha and Gaussian exponent beta
    coeff = stepsize * 2 / np.sqrt(np.pi) * np.exp(s)
    exponent = np.exp(2 * s)
    return coeff, exponent
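# Illustrative check (not part of the original notebook): at s = 0 with step h = 0.1
# the term should have coefficient alpha = 2*0.1/sqrt(pi) and exponent beta = exp(0) = 1,
# matching the trapezoidal-rule formulas above.
alpha0, beta0 = compute_expansion(0.0, 0.1)
print("check:", alpha0, "vs", 2 * 0.1 / np.sqrt(np.pi), "; beta:", beta0)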
def fill_separated_expansion(r_min, r_max, prec):
    # compute left and right integration boundaries (heuristics based on the requested range and precision)
    t_min = np.log(prec / r_max) - 1
    t_max = 0.5 * np.log(20 / (r_min * r_min))
    print("integration boundaries", t_min, t_max)
    # compute step size (semi-heuristically)
    h = 1 / (0.2 - 0.47 * np.log10(prec))
    # the number of terms in the sum
    n_exp = int(np.ceil((t_max - t_min) / h) + 1)
    # ab holds the coefficient (row 0) and exponent (row 1) of each Gaussian
    ab = np.zeros(shape=(2, n_exp))
    for i in range(n_exp):
        t = t_min + h * i
        ab[0][i], ab[1][i] = compute_expansion(t, h)
    print("number of terms in the expansion:", n_exp)
    return ab
# reconstruct the Poisson/BSH kernel from the sum of Gaussians
def poisson_separated(r, ab):
    val = 0
    for i in range(len(ab[0])):
        alpha = ab[0][i]
        beta = ab[1][i]
        val += alpha * np.exp(-beta * r**2)
    return val
# exact Poisson kernel for comparison
def poisson_exact(r):
    return 1 / r
r_min = 1.0e-5
r_max = 5
prec = 1.0e-4
ab = fill_separated_expansion(r_min,r_max,prec)
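The exponents in the expansion span many orders of magnitude, from very wide to very narrow Gaussians; this multi-scale character is what allows a smooth sum of separable Gaussians to mimic the singular \(1/r\) kernel over the whole range \([r_\text{min}, r_\text{max}]\). A small illustrative printout (not part of the original notebook):

print("widest Gaussian exponent:", ab[1].min(), " narrowest:", ab[1].max())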
# Show 1/r behavior of the separated representation
linear_x_array = np.linspace(r_min, r_max, 1001)
sep_array = poisson_separated(linear_x_array, ab)
exact_array = poisson_exact(linear_x_array)
plt.xscale("linear")
plt.yscale("linear")
plt.xlim(r_min, r_max)
plt.ylim(r_min, r_max)
plt.plot(linear_x_array, sep_array, label="separated representation")
plt.plot(linear_x_array, exact_array, label="exact 1/r")
plt.legend()
plt.show()
# Show relative error of the separated representation
log_x_array = np.arange(np.log(r_min), np.log(r_max), (np.log(r_max) - np.log(r_min)) / 40000)
x_array = np.exp(log_x_array)
sep_array = poisson_separated(x_array, ab)
dif_array = np.abs(1 - sep_array * x_array)   # relative error |1/r - sep(r)| / (1/r)
plt.xscale("log")
plt.yscale("log")
plt.title("relative error of the separated representation")
plt.xlim(r_min, r_max)
plt.plot(x_array, dif_array)
plt.show()
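To put a single number on the faithfulness of the expansion (convenient for Exercise 2 below), one can also print the maximum relative error over the sampled range; a small addition, not part of the original notebook:

max_rel_err = np.max(np.abs(1 - poisson_separated(x_array, ab) * x_array))
print("requested precision:", prec, " max relative error:", max_rel_err)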
Exercise 2#
Repeat the construction and plotting of the Poisson kernel for different precisions prec. Compare the number of terms and the faithfulness of the representation.
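A possible skeleton for this exercise (one way to organize the loop; the precision values are arbitrary choices, and the error is measured on the logarithmic grid defined above):

for prec in (1.0e-2, 1.0e-4, 1.0e-6, 1.0e-8):
    ab = fill_separated_expansion(r_min, r_max, prec)
    rel_err = np.abs(1 - poisson_separated(x_array, ab) * x_array)
    plt.plot(x_array, rel_err, label=f"prec = {prec:.0e}, {len(ab[0])} terms")
plt.xscale("log")
plt.yscale("log")
plt.xlim(r_min, r_max)
plt.title("relative error for different precisions")
plt.legend()
plt.show()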