Extensions to compute specific intensities in a local comoving frame.

Overview¶

We require the surface radiation field model to be implemented in C extensions to Python if signal computation is to be sufficiently fast. Signal computation includes:

• integrating over the image of a star on a distant observer’s sky, yielding a phase-energy resolved signal for likelihood function evaluation;
• resolving the image of a star on a distant observer’s sky as a function of phase and energy, for visualisation and more general simulation purposes.

The first type of signal computation is enabled by surface discretization, whilst the latter is enabled via image-plane discretization. For surface discretization there are two extension modules for users and developers to work with: hot.pyx and elsewhere.pyx. These extension modules contain functions whose bodies evaluate specific intensities with respect to local comoving frames at the stellar surface. The default function bodies implement a simple isotropic blackbody radiation field, and must generally be customized for more advanced physics models. The hot.pyx module defines the radiation field inside surface hot regions represented by HotRegion instances; the same module is also used by an instance of the Everywhere class. The elsewhere.pyx extension module defines the radiation field exterior to the HotRegion instances on the surface.
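To orient readers, the default isotropic blackbody field can be sketched in pure Python. This is a hedged illustration of the physics only, not the Cython implementation: the constant prefactor is omitted because the extension applies a separate normalisation (see eval_hot_norm() in the module below).

```python
import numpy as np

def blackbody_photon_intensity(E_keV, kT_keV):
    """Unnormalised photon specific intensity of an isotropic blackbody.

    Proportional to E^2 / (exp(E/kT) - 1); angle-independent (isotropic),
    so the ray zenith angle cosine would be ignored by such a model.
    """
    E = np.asarray(E_keV, dtype=float)
    return E**2 / np.expm1(E / kT_keV)

# Wien-side decay: intensity falls with energy well above the temperature
I = blackbody_photon_intensity([1.0, 5.0, 10.0], kT_keV=0.2)
```

An anisotropic (beamed) model would simply introduce an explicit dependence on the zenith angle cosine in the function body.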

For image-plane discretization, the hot.pyx extension defines the local surface radiation field properties with respect to a comoving frame. However, to compute the variables required by this module to evaluate specific intensity, another extension module is required which transforms a set of global variables into local variables given the spacetime coordinates of an event belonging to a null geodesic connecting the surface to a radiation-detecting distant instrument. Users and developers must customize the function bodies in local_variables.pyx in order to implement more advanced physics models.
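The role of such a transform can be illustrated schematically in Python. Suppose the global variables are a hot-region centre colatitude, an angular radius, and two temperatures; the transform then assigns a local temperature to each surface event coordinate. All names here are hypothetical illustrations and do not match the prototypes in local_variables.pyx:

```python
import numpy as np

def global_to_local(colatitude, azimuth, phase, global_variables):
    """Map global variables to a local temperature at each surface point.

    global_variables = (centre_colatitude, angular_radius,
                        temperature_inside, temperature_outside)
    The hot region rotates with the star, so its instantaneous azimuth
    is offset by the rotational phase.
    """
    theta_c, radius, T_in, T_out = global_variables
    # angular separation between each point and the (rotated) region centre
    cos_sep = (np.cos(colatitude) * np.cos(theta_c)
               + np.sin(colatitude) * np.sin(theta_c)
                 * np.cos(azimuth - phase))
    separation = np.arccos(np.clip(cos_sep, -1.0, 1.0))
    return np.where(separation <= radius, T_in, T_out)

# a point at the region centre is hot; its antipode is not
T = global_to_local(np.array([1.0, np.pi - 1.0]),
                    np.array([0.0, np.pi]),
                    np.array([0.0, 0.0]),
                    (1.0, 0.3, 6.2, 5.5))
```

The real extension performs this mapping in compiled C for every image-plane ray, which is why the function bodies must be thread-safe.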

For all of these modules, numerical data can be read from disk to define the functions for specific intensity evaluation given local variables, and for transformation of global variables to local variables. A common usage pattern is to statistically model data assuming some numerical physics model for the radiation emergent from an atmosphere. Statistical modeling requires many likelihood function evaluations, so to avoid unnecessarily reading numerical data from disk every time the likelihood function is called, users can preload the numerical atmosphere data in the encapsulating Python process and point to the data structures in memory when computing signals; developers can develop extension modules that work with preloaded numerical atmosphere data. When preloading an atmosphere, the number of grid points in each dimension is dynamically inferred, but the existing source code for multi-dimensional interpolation is hard-coded for four dimensions. With development, this can be made more sophisticated.
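A minimal sketch of the expected preloaded-tuple layout, using a tiny fabricated grid rather than a real atmosphere table (the four-dimensional case that matches the hard-coded interpolation; all grid values here are invented for illustration):

```python
import numpy as np

# ordered grids for each interpolation dimension (fabricated values)
logT = np.linspace(5.1, 6.8, 5)    # log10 effective temperature [K]
logg = np.linspace(13.7, 14.7, 4)  # log10 effective gravity [cm/s^2]
mu = np.linspace(0.01, 1.0, 7)     # cosine of ray zenith angle
logE = np.linspace(-1.3, 2.0, 10)  # log10 photon energy [keV]

# 4D intensity table flattened into a 1D buffer, C-ordered so that the
# index arithmetic runs fastest over the last grid (logE)
buf = np.random.rand(logT.size, logg.size, mu.size, logE.size).ravel()

atmosphere = (logT, logg, mu, logE, buf)
```

The buffer size must equal the product of the grid sizes, and each grid must be sorted in ascending order for the interpolation index arithmetic to be valid.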

Developers can contribute extension modules to the surface_radiation_field/archive. To use an extension from this archive, its contents must replace those of the hot.pyx, elsewhere.pyx, or local_variables.pyx modules, adhering to the function prototypes defined in the associated header files; existing examples in the archive provide guidance.

Use the Python-exposed functions below to test the C implementations of surface radiation field models.

Wrappers¶

xpsi.surface_radiation_field.intensity(__Pyx_memviewslice energies, __Pyx_memviewslice mu, __Pyx_memviewslice local_variables, atmosphere=None, extension='hot', size_t numTHREADS=1)

Evaluate the photon specific intensity using an extension module.

The intensities are computed directly from local variable values.

Parameters:

• energies (double[::1]) – A 1D numpy.ndarray of energies in keV at which to evaluate photon specific intensity, in the local comoving frame.
• mu (double[::1]) – A 1D numpy.ndarray of angle cosines at which to evaluate the photon specific intensity. Specifically, each element is the cosine of the angle between the ray direction and the local surface normal, in the local comoving frame.
• local_variables (double[:,::1]) – A 2D numpy.ndarray of local variables such as temperature and effective gravity. Rows correspond to the sequence of points in the space of energy and angle specified above, and columns contain the required variable values, one variable per column, in the expected order.
• atmosphere (tuple) – A numerical atmosphere preloaded into buffers. The buffers must be supplied in an n-tuple whose nth element is an (n-1)-dimensional array flattened into a one-dimensional numpy.ndarray. The first n-1 elements of the n-tuple must each be an ordered one-dimensional numpy.ndarray of parameter values for the purpose of multi-dimensional interpolation in the nth buffer, ordered to match the index arithmetic applied to the nth buffer. An example would be (logT, logg, mu, logE, buf), where: logT is a logarithm of local comoving effective temperature; logg is a logarithm of effective surface gravity; mu is the cosine of the angle from the local surface normal; logE is a logarithm of the photon energy; and buf is a one-dimensional buffer of intensities of size given by the product of sizes of the first n-1 tuple elements. It is highly recommended that buffer preloading is used, instead of loading from disk in the customisable radiation field extension module, to avoid reading from disk for every signal (likelihood) evaluation, which can be a non-negligible waste of compute resources. By preloading in Python, the memory is allocated and references to it are in general not deleted until the sampling script exits and the kernel stops; the likelihood callback then accesses the same memory upon each call without I/O.
• extension (str) – Specify the extension module to invoke. Options are 'hot' and 'elsewhere'.
• numTHREADS (int) – Number of OpenMP threads to launch.

Returns: A 1D numpy.ndarray of the photon specific intensities in units of photons/s/keV/cm^2.
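The row correspondence between local_variables and the (energy, angle) points can be set up as follows. The call to xpsi.surface_radiation_field.intensity itself is shown commented out since it requires the compiled extension; this is a sketch assuming a hypothetical two-variable model (e.g. logT and logg columns):

```python
import numpy as np

energies = np.array([0.5, 1.0, 2.0, 4.0])  # keV, comoving frame
mu = np.array([1.0, 0.8, 0.5, 0.2])        # one angle cosine per point

# one row of local variables per (energy, angle) point:
local_variables = np.column_stack([
    np.full(energies.size, 6.0),    # e.g. log10(T [K]); illustrative value
    np.full(energies.size, 14.0),   # e.g. log10(g [cm/s^2]); illustrative
])

# from xpsi.surface_radiation_field import intensity
# I = intensity(energies, mu, local_variables,
#               atmosphere=atmosphere, extension='hot', numTHREADS=1)
```

The number of rows must equal the number of (energy, angle) points, and the column order must match what the compiled extension expects.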
xpsi.surface_radiation_field.intensity_from_globals(__Pyx_memviewslice energies, __Pyx_memviewslice mu, __Pyx_memviewslice colatitude, __Pyx_memviewslice azimuth, __Pyx_memviewslice phase, __Pyx_memviewslice global_variables, double R_eq, double zeta, double epsilon, atmosphere=None, size_t numTHREADS=1)

Evaluate the photon specific intensity using extension modules.

The local variables are computed from global variables and spacetime coordinates of events at the surface.

Parameters:

• energies (double[::1]) – A 1D numpy.ndarray of energies in keV at which to evaluate photon specific intensity, in the local comoving frame.
• mu (double[::1]) – A 1D numpy.ndarray of angle cosines at which to evaluate the photon specific intensity. Specifically, each element is the cosine of the angle between the ray direction and the local surface normal, in the local comoving frame.
• colatitude (double[::1]) – A 1D numpy.ndarray of colatitudes in radians at which to evaluate photon specific intensity. This is the coordinate of a fixed point relative to a distant static observer.
• azimuth (double[::1]) – A 1D numpy.ndarray of azimuths in radians at which to evaluate photon specific intensity. This is the coordinate of a fixed point relative to a distant static observer.
• phase (double[::1]) – A 1D numpy.ndarray of rotational phases in radians at which to evaluate photon specific intensity. The phase describes the rotation of the surface radiation field through the fixed points alluded to above. A localised hot region, for instance, rotates so that its azimuth changes relative to a point with fixed azimuth as described above. Specifying zero phase means that the intensity is evaluated at the fixed points for the initial configuration of the hot region (or more general radiation field).
• global_variables (double[::1]) – A 1D numpy.ndarray of global variables controlling the surface radiation field.
• R_eq (double) – The equatorial radius in metres. Needed for the effective gravity universal relation.
• zeta (double) – See the Spacetime class property. Needed for the effective gravity universal relation. It is recommended to supply this dimensionless variable using an instance of Spacetime.
• epsilon (double) – See the Spacetime class property. Needed for the effective gravity universal relation. It is recommended to supply this dimensionless variable using an instance of Spacetime.
• atmosphere (tuple) – A numerical atmosphere preloaded into buffers. The required layout is identical to that documented for the intensity wrapper above; preloading is likewise highly recommended to avoid disk I/O on every likelihood evaluation.
• numTHREADS (int) – Number of OpenMP threads to launch.

Returns: A 1D numpy.ndarray of the photon specific intensities in units of photons/s/keV/cm^2.
xpsi.surface_radiation_field.effective_gravity(__Pyx_memviewslice cos_colatitude, __Pyx_memviewslice R_eq, __Pyx_memviewslice zeta, __Pyx_memviewslice epsilon)

Approximate local effective gravity using a universal relation.

See Morsink et al. (2007), AlGendy & Morsink (2014), and Bogdanov et al. (2019, ApJL, 887, L26).

Parameters:

• cos_colatitude (double[::1]) – A 1D numpy.ndarray of cosines of colatitudes, with respect to a rotational pole, at which to evaluate the local effective gravity on the surface.
• R_eq (double[::1]) – A 1D numpy.ndarray of surface coordinate equatorial radii in metres.
• zeta (double[::1]) – A 1D numpy.ndarray of a dimensionless function of stellar properties. See zeta.
• epsilon (double[::1]) – A 1D numpy.ndarray of a dimensionless function of stellar properties. See epsilon.

Returns: A 1D numpy.ndarray of the logarithms (base 10) of the effective gravities in units of cm/s^2.
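For orientation only, the zeroth-order (spherical, slowly rotating) effective gravity that the universal relation corrects for rotation can be evaluated directly; the fitted coefficients of the AlGendy & Morsink relation itself are not reproduced here:

```python
import math

G = 6.674e-8          # cm^3 g^-1 s^-2
c = 2.998e10          # cm/s
M_sun = 1.989e33      # g

def log10_spherical_gravity(mass_g, radius_cm):
    """log10 of g = GM / (R^2 sqrt(1 - 2GM/(R c^2))) in cm/s^2.

    The redshift factor is the standard Schwarzschild correction to the
    Newtonian surface gravity of a non-rotating star.
    """
    redshift_factor = math.sqrt(1.0 - 2.0 * G * mass_g / (radius_cm * c * c))
    return math.log10(G * mass_g / (radius_cm**2 * redshift_factor))

# a canonical neutron star: 1.4 solar masses, 12 km radius
logg = log10_spherical_gravity(1.4 * M_sun, 1.2e6)
```

Values near log10(g) ~ 14.2 are typical for neutron stars, which is why numerical atmosphere grids in logg span this neighbourhood.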

A numerical atmosphere¶

The following is a customized version of the template extension module hot.pyx for surface radiation field evaluation, which may be found at surface_radiation_field/archive/hot/numerical.pyx. This extension works with a preloaded numerical atmosphere to perform cubic polynomial interpolation in four dimensions.

#cython: cdivision=True
#cython: boundscheck=False
#cython: nonecheck=False
#cython: wraparound=False

from libc.math cimport M_PI, sqrt, sin, cos, acos, log10, pow, exp, fabs
from libc.stdio cimport printf, fopen, fclose, fread, FILE
from GSL cimport gsl_isnan, gsl_isinf
from libc.stdlib cimport malloc, free

from xpsi.global_imports import _keV, _k_B

cdef int SUCCESS = 0
cdef int ERROR = 1

cdef double erg = 1.0e-7
cdef double k_B = _k_B
cdef double keV = _keV
cdef double k_B_over_keV = k_B / keV
cdef int VERBOSE = 0

ctypedef struct ACCELERATE:
    size_t **BN                # base node for interpolation
    double **node_vals
    double **SPACE
    double **DIFF
    double **INTENSITY_CACHE
    double **VEC_CACHE

# Modify this struct if useful for the user-defined source radiation field.
# Note that the members of DATA will be shared by all threads and are
# statically allocated, whereas the members of ACCELERATE will point to
# dynamically allocated memory, not shared by threads.

ctypedef struct DATA:
    # pointer to the preloaded atmosphere (grids, strides, intensity buffer),
    # referenced below as D.p; the _preloaded type is cimported from the
    # package's preload extension
    const _preloaded *p
    ACCELERATE acc

#----------------------------------------------------------------------->>>
# >>> User modifiable functions.
# >>> Note that the user is entirely free to wrap thread-safe and
# ... non-parallel external C routines from an external library.
# >>> Thus the bodies of the following need not be written explicitly in
# ... the Cython language.
#----------------------------------------------------------------------->>>
# This function must match the free management routine free_hot()
# in terms of freeing dynamically allocated memory. This is entirely
# the user's responsibility to manage.
# Return NULL if dynamic memory is not required for the model.

cdef void* init_hot(size_t numThreads, const _preloaded *const preloaded) nogil:

    cdef DATA *D = <DATA*> malloc(sizeof(DATA))
    D.p = preloaded

    D.p.BLOCKS[0] = 64
    D.p.BLOCKS[1] = 16
    D.p.BLOCKS[2] = 4
    cdef size_t T, i, j, k, l
    cdef const double *address

    D.acc.BN = <size_t**> malloc(numThreads * sizeof(size_t*))
    D.acc.node_vals = <double**> malloc(numThreads * sizeof(double*))
    D.acc.SPACE = <double**> malloc(numThreads * sizeof(double*))
    D.acc.DIFF = <double**> malloc(numThreads * sizeof(double*))
    D.acc.INTENSITY_CACHE = <double**> malloc(numThreads * sizeof(double*))
    D.acc.VEC_CACHE = <double**> malloc(numThreads * sizeof(double*))

    for T in range(numThreads):
        D.acc.BN[T] = <size_t*> malloc(D.p.ndims * sizeof(size_t))
        D.acc.node_vals[T] = <double*> malloc(2 * D.p.ndims * sizeof(double))
        D.acc.SPACE[T] = <double*> malloc(4 * D.p.ndims * sizeof(double))
        D.acc.DIFF[T] = <double*> malloc(4 * D.p.ndims * sizeof(double))
        D.acc.INTENSITY_CACHE[T] = <double*> malloc(256 * sizeof(double))
        D.acc.VEC_CACHE[T] = <double*> malloc(D.p.ndims * sizeof(double))
        for i in range(D.p.ndims):
            D.acc.BN[T][i] = 0
            D.acc.VEC_CACHE[T][i] = D.p.params[i][1]
            D.acc.node_vals[T][2*i] = D.p.params[i][1]
            D.acc.node_vals[T][2*i + 1] = D.p.params[i][2]

            j = 4*i

            D.acc.SPACE[T][j] = 1.0 / (D.p.params[i][0] - D.p.params[i][1])
            D.acc.SPACE[T][j] /= D.p.params[i][0] - D.p.params[i][2]
            D.acc.SPACE[T][j] /= D.p.params[i][0] - D.p.params[i][3]

            D.acc.SPACE[T][j + 1] = 1.0 / (D.p.params[i][1] - D.p.params[i][0])
            D.acc.SPACE[T][j + 1] /= D.p.params[i][1] - D.p.params[i][2]
            D.acc.SPACE[T][j + 1] /= D.p.params[i][1] - D.p.params[i][3]

            D.acc.SPACE[T][j + 2] = 1.0 / (D.p.params[i][2] - D.p.params[i][0])
            D.acc.SPACE[T][j + 2] /= D.p.params[i][2] - D.p.params[i][1]
            D.acc.SPACE[T][j + 2] /= D.p.params[i][2] - D.p.params[i][3]

            D.acc.SPACE[T][j + 3] = 1.0 / (D.p.params[i][3] - D.p.params[i][0])
            D.acc.SPACE[T][j + 3] /= D.p.params[i][3] - D.p.params[i][1]
            D.acc.SPACE[T][j + 3] /= D.p.params[i][3] - D.p.params[i][2]

            D.acc.DIFF[T][j] = D.acc.VEC_CACHE[T][i] - D.p.params[i][1]
            D.acc.DIFF[T][j] *= D.acc.VEC_CACHE[T][i] - D.p.params[i][2]
            D.acc.DIFF[T][j] *= D.acc.VEC_CACHE[T][i] - D.p.params[i][3]

            D.acc.DIFF[T][j + 1] = D.acc.VEC_CACHE[T][i] - D.p.params[i][0]
            D.acc.DIFF[T][j + 1] *= D.acc.VEC_CACHE[T][i] - D.p.params[i][2]
            D.acc.DIFF[T][j + 1] *= D.acc.VEC_CACHE[T][i] - D.p.params[i][3]

            D.acc.DIFF[T][j + 2] = D.acc.VEC_CACHE[T][i] - D.p.params[i][0]
            D.acc.DIFF[T][j + 2] *= D.acc.VEC_CACHE[T][i] - D.p.params[i][1]
            D.acc.DIFF[T][j + 2] *= D.acc.VEC_CACHE[T][i] - D.p.params[i][3]

            D.acc.DIFF[T][j + 3] = D.acc.VEC_CACHE[T][i] - D.p.params[i][0]
            D.acc.DIFF[T][j + 3] *= D.acc.VEC_CACHE[T][i] - D.p.params[i][1]
            D.acc.DIFF[T][j + 3] *= D.acc.VEC_CACHE[T][i] - D.p.params[i][2]

        # Cache intensity
        for i in range(4):
            for j in range(4):
                for k in range(4):
                    for l in range(4):
                        address = D.p.I + (D.acc.BN[T][0] + i) * D.p.S[0]
                        address += (D.acc.BN[T][1] + j) * D.p.S[1]
                        address += (D.acc.BN[T][2] + k) * D.p.S[2]
                        address += D.acc.BN[T][3] + l
                        D.acc.INTENSITY_CACHE[T][i * D.p.BLOCKS[0] + j * D.p.BLOCKS[1] + k * D.p.BLOCKS[2] + l] = address[0]

    # Cast for generalised usage in integration routines
    return <void*> D

cdef int free_hot(size_t numThreads, void *const data) nogil:
    # This function must match the initialisation routine init_hot()
    # in terms of freeing dynamically allocated memory. This is entirely
    # the user's responsibility to manage.
    # The void pointer must be appropriately cast before memory is freed --
    # only the user can know this at compile time.
    # Just use free(<void*> data) iff no memory was dynamically
    # allocated in the function:
    #   init_hot()
    # because data is expected to be NULL in this case

    cdef DATA *D = <DATA*> data

    cdef size_t T

    for T in range(numThreads):
        free(D.acc.BN[T])
        free(D.acc.node_vals[T])
        free(D.acc.SPACE[T])
        free(D.acc.DIFF[T])
        free(D.acc.INTENSITY_CACHE[T])
        free(D.acc.VEC_CACHE[T])

    free(D.acc.BN)
    free(D.acc.node_vals)
    free(D.acc.SPACE)
    free(D.acc.DIFF)
    free(D.acc.INTENSITY_CACHE)
    free(D.acc.VEC_CACHE)

    free(D)

    return SUCCESS

#----------------------------------------------------------------------->>>
# >>> Cubic polynomial interpolation.
# >>> Improve acceleration properties... i.e. do not recompute numerical
# ... weights or re-read intensities if not necessary.
#----------------------------------------------------------------------->>>
cdef double eval_hot(size_t THREAD,
                     double E,
                     double mu,
                     const double *const VEC,
                     void *const data) nogil:
    # Arguments:
    # E = photon energy in keV
    # mu = cosine of ray zenith angle (i.e., angle to surface normal)
    # VEC = variables such as temperature, effective gravity, ...
    # data = numerical model data required for intensity evaluation

    # This function must cast the void pointer appropriately for use.
    cdef DATA *D = <DATA*> data

    # Thread-local aliases into the acceleration structure
    cdef size_t *BN = D.acc.BN[THREAD]
    cdef double *node_vals = D.acc.node_vals[THREAD]
    cdef double *SPACE = D.acc.SPACE[THREAD]
    cdef double *DIFF = D.acc.DIFF[THREAD]
    cdef double *I_CACHE = D.acc.INTENSITY_CACHE[THREAD]
    cdef double *V_CACHE = D.acc.VEC_CACHE[THREAD]

    cdef:
        size_t i = 0, ii
        double I = 0.0, temp
        const double *address
        double vec[4]
        double E_eff = k_B_over_keV * pow(10.0, VEC[0])
        int update_baseNode[4]
        int CACHE = 0

    vec[0] = VEC[0]
    vec[1] = VEC[1]
    vec[2] = mu
    vec[3] = log10(E / E_eff)

    while i < D.p.ndims:
        update_baseNode[i] = 0

        # If the point has moved below the current nodal interval in this
        # dimension, decrement the base node.
        if vec[i] < node_vals[2*i] and BN[i] != 0:
            update_baseNode[i] = 1
            while vec[i] < D.p.params[i][BN[i] + 1]:
                if BN[i] > 0:
                    BN[i] -= 1
                elif vec[i] <= D.p.params[i][0]:
                    vec[i] = D.p.params[i][0]
                    break
                elif BN[i] == 0:
                    break

            node_vals[2*i] = D.p.params[i][BN[i] + 1]
            node_vals[2*i + 1] = D.p.params[i][BN[i] + 2]

        # If the point has moved above the current nodal interval in this
        # dimension, increment the base node.
        elif vec[i] > node_vals[2*i + 1] and BN[i] != D.p.N[i] - 4:
            update_baseNode[i] = 1
            while vec[i] > D.p.params[i][BN[i] + 2]:
                if BN[i] < D.p.N[i] - 4:
                    BN[i] += 1
                elif vec[i] >= D.p.params[i][D.p.N[i] - 1]:
                    vec[i] = D.p.params[i][D.p.N[i] - 1]
                    break
                elif BN[i] == D.p.N[i] - 4:
                    break

            node_vals[2*i] = D.p.params[i][BN[i] + 1]
            node_vals[2*i + 1] = D.p.params[i][BN[i] + 2]

        # Recompute the nodal differences if the point or base node changed.
        if V_CACHE[i] != vec[i] or update_baseNode[i] == 1:
            ii = 4*i
            DIFF[ii] = vec[i] - D.p.params[i][BN[i] + 1]
            DIFF[ii] *= vec[i] - D.p.params[i][BN[i] + 2]
            DIFF[ii] *= vec[i] - D.p.params[i][BN[i] + 3]

            DIFF[ii + 1] = vec[i] - D.p.params[i][BN[i]]
            DIFF[ii + 1] *= vec[i] - D.p.params[i][BN[i] + 2]
            DIFF[ii + 1] *= vec[i] - D.p.params[i][BN[i] + 3]

            DIFF[ii + 2] = vec[i] - D.p.params[i][BN[i]]
            DIFF[ii + 2] *= vec[i] - D.p.params[i][BN[i] + 1]
            DIFF[ii + 2] *= vec[i] - D.p.params[i][BN[i] + 3]

            DIFF[ii + 3] = vec[i] - D.p.params[i][BN[i]]
            DIFF[ii + 3] *= vec[i] - D.p.params[i][BN[i] + 1]
            DIFF[ii + 3] *= vec[i] - D.p.params[i][BN[i] + 2]

            V_CACHE[i] = vec[i]

        # Recompute the nodal-spacing weights only if the base node changed.
        if update_baseNode[i] == 1:
            CACHE = 1
            SPACE[ii] = 1.0 / (D.p.params[i][BN[i]] - D.p.params[i][BN[i] + 1])
            SPACE[ii] /= D.p.params[i][BN[i]] - D.p.params[i][BN[i] + 2]
            SPACE[ii] /= D.p.params[i][BN[i]] - D.p.params[i][BN[i] + 3]

            SPACE[ii + 1] = 1.0 / (D.p.params[i][BN[i] + 1] - D.p.params[i][BN[i]])
            SPACE[ii + 1] /= D.p.params[i][BN[i] + 1] - D.p.params[i][BN[i] + 2]
            SPACE[ii + 1] /= D.p.params[i][BN[i] + 1] - D.p.params[i][BN[i] + 3]

            SPACE[ii + 2] = 1.0 / (D.p.params[i][BN[i] + 2] - D.p.params[i][BN[i]])
            SPACE[ii + 2] /= D.p.params[i][BN[i] + 2] - D.p.params[i][BN[i] + 1]
            SPACE[ii + 2] /= D.p.params[i][BN[i] + 2] - D.p.params[i][BN[i] + 3]

            SPACE[ii + 3] = 1.0 / (D.p.params[i][BN[i] + 3] - D.p.params[i][BN[i]])
            SPACE[ii + 3] /= D.p.params[i][BN[i] + 3] - D.p.params[i][BN[i] + 1]
            SPACE[ii + 3] /= D.p.params[i][BN[i] + 3] - D.p.params[i][BN[i] + 2]

        i += 1

    cdef size_t j, k, l, INDEX, II, JJ, KK

    # Combinatorics over nodes of hypercube; weight cgs intensities
    for i in range(4):
        II = i * D.p.BLOCKS[0]
        for j in range(4):
            JJ = j * D.p.BLOCKS[1]
            for k in range(4):
                KK = k * D.p.BLOCKS[2]
                for l in range(4):
                    address = D.p.I + (BN[0] + i) * D.p.S[0]
                    address += (BN[1] + j) * D.p.S[1]
                    address += (BN[2] + k) * D.p.S[2]
                    address += BN[3] + l

                    temp = DIFF[i] * DIFF[4 + j] * DIFF[8 + k] * DIFF[12 + l]
                    temp *= SPACE[i] * SPACE[4 + j] * SPACE[8 + k] * SPACE[12 + l]
                    INDEX = II + JJ + KK + l
                    # Re-read from the table only when the base nodes moved;
                    # otherwise reuse the cached intensities.
                    if CACHE == 1:
                        I_CACHE[INDEX] = address[0]

                    I += temp * I_CACHE[INDEX]

    if I < 0.0:
        return 0.0

    return I * pow(10.0, 3.0 * vec[0])

cdef double eval_hot_norm() nogil:
    # Source radiation field normalisation which is independent of the
    # parameters of the parametrised model -- i.e. cell properties, energy,
    # and angle.
    # Writing the normalisation here reduces the number of operations required
    # during integration.
    # The units of the specific intensity need to be J/cm^2/s/keV/steradian.

    return erg / 4.135667662e-18
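The SPACE and DIFF factorisation in the module above can be checked in one dimension: for four nodes x0..x3, SPACE[m] holds 1/∏(x_m - x_n) over n ≠ m, and DIFF[m] holds ∏(x - x_n) over n ≠ m, so each Lagrange weight is their product and any cubic polynomial is reproduced exactly. A pure-Python sketch of the same scheme (not the extension code itself):

```python
import numpy as np

def cubic_lagrange(x, nodes, values):
    """Interpolate with 4-node Lagrange weights split as SPACE * DIFF."""
    SPACE = np.empty(4)
    DIFF = np.empty(4)
    for m in range(4):
        others = [n for n in range(4) if n != m]
        # reciprocal of the nodal-spacing product (precomputable per node set)
        SPACE[m] = 1.0 / np.prod([nodes[m] - nodes[n] for n in others])
        # product of differences from the evaluation point (per query)
        DIFF[m] = np.prod([x - nodes[n] for n in others])
    return float(np.sum(SPACE * DIFF * values))

nodes = np.array([0.0, 1.0, 2.0, 3.0])
f = lambda t: 2.0 * t**3 - t + 5.0   # any cubic is interpolated exactly
y = cubic_lagrange(1.5, nodes, f(nodes))
```

The four-dimensional version in eval_hot() simply takes the outer product of four such weight sets over the 4^4 nodes of a hypercube, which is why SPACE and DIFF are cached per thread and only recomputed when the base node moves.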