Likelihoods¶
Likelihood function implementations.
Overview¶
Users and developers can implement the final phase of likelihood function evaluation as an extension module.
These functions must operate on a channel-phase resolved set of signals (one per hot region). Each component signal has an associated phase shift to account for. An exposure time must be specified to scale the signals to yield the expected count numbers.
A phase-resolved signal means that the signal is evaluated at a sequence of phase points, and thus the signal is to be integrated over a sequence of phase intervals if the data (event) space in each channel is discrete. If one desires an unbinned likelihood function, this can be approximated with a high number of phase bins using existing extensions, or a new extension module can be straightforwardly developed to loop over events at greater computational expense.
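To make the binned evaluation concrete, the following standalone sketch (hypothetical signal and bin edges, not X-PSI API) integrates a phase-resolved count-rate signal over a set of phase intervals with simple trapezoidal quadrature; the production extensions instead integrate a GSL interpolant of the signal:

```python
import numpy as np

# Hypothetical phase-resolved signal for one channel: count rate (counts/s)
# evaluated at a sequence of phase points on [0,1] (cycles).
signal_phases = np.linspace(0.0, 1.0, 101)
signal = 10.0 + 2.0 * np.sin(2.0 * np.pi * signal_phases)

# Phase interval edges defining a discrete (binned) event space.
edges = np.linspace(0.0, 1.0, 33)

# Integrate the signal over each phase interval (trapezoidal quadrature on
# a fine sub-grid of the linear interpolant).
expected_rate = np.empty(edges.size - 1)
for j in range(edges.size - 1):
    grid = np.linspace(edges[j], edges[j + 1], 201)
    vals = np.interp(grid, signal_phases, signal)
    expected_rate[j] = np.sum(0.5 * (vals[1:] + vals[:-1]) * np.diff(grid))

# Summed over a full cycle, the sinusoidal term integrates away and the
# phase-averaged rate (10 counts/s) is recovered.
```

Refining the phase grid in this manner is how a high number of phase bins can approximate an unbinned likelihood.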
Lastly, a background model is required. If a physical background model is conditioned upon, an extension can operate with the expected background count (rate or number) signal. In lieu of such a model, one can invoke the default background marginalisation protocol: channel-by-channel background count-rate variables will be numerically and rapidly marginalised over. See the discussion in Riley et al. 2019 (ApJL, 887, L21) and the X-PSI technical notes to evaluate whether this is appropriate for coupling with a given model for the target source signal.
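To illustrate the core idea of the marginalisation protocol in isolation, here is a pure-Python sketch for a single channel (all names and numbers are hypothetical; the extension uses a Newton scheme and GSL CQUAD internally): the conditional-ML background rate is located, a Gaussian expansion of the conditional likelihood supplies the integration bounds, and a flat-prior marginalisation is performed numerically:

```python
import numpy as np
from scipy.special import gammaln
from scipy.optimize import minimize_scalar
from scipy.integrate import quad

rng = np.random.default_rng(0)

# Hypothetical single-channel data: Poisson counts in phase intervals from a
# known source expectation plus an unknown homogeneous background rate.
exposure = 1000.0   # seconds
n_bins = 16
source = rng.uniform(0.5, 1.5, n_bins) * exposure / n_bins  # source counts
true_bkg = 0.8      # background count rate (counts/s)
counts = rng.poisson(source + true_bkg * exposure / n_bins)

def log_cond_like(b):
    """ Poisson log-likelihood conditional on background rate b (counts/s). """
    expec = source + b * exposure / n_bins
    return np.sum(counts * np.log(expec) - expec - gammaln(counts + 1.0))

# Approximate the conditional-ML background estimate (the extension instead
# runs a Newton iteration from a fiducial estimate).
b_hat = minimize_scalar(lambda b: -log_cond_like(b),
                        bounds=(0.0, 10.0), method='bounded').x

# Gaussian expansion about the maximum: sigma from the local curvature.
h = 1e-3
d2 = (log_cond_like(b_hat + h) - 2.0 * log_cond_like(b_hat)
      + log_cond_like(b_hat - h)) / h**2
sigma = 1.0 / np.sqrt(-d2)

# Marginalise over a flat prior within +/- 5 sigma of the ML estimate
# (cf. the sigmas argument of eval_marginal_likelihood()).
peak = log_cond_like(b_hat)
lo, hi = max(0.0, b_hat - 5.0 * sigma), b_hat + 5.0 * sigma
integral, _ = quad(lambda b: np.exp(log_cond_like(b) - peak), lo, hi)
log_marginal = peak + np.log(integral)
```

The peak value is factored out of the integrand for numerical stability; the production extension performs this channel-by-channel.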
Extensions¶

xpsi.likelihoods.precomputation(__Pyx_memviewslice data)¶
Compute the negative of the sum of the log-factorials of the data count numbers.
Use this function to perform precomputation before repeatedly calling eval_marginal_likelihood().
Parameters: data (int[:,::1]) – Phase-channel resolved count numbers (integers). Phase increases with column number.
Returns: A 1D numpy.ndarray of precomputed information, one element per channel. Each element is the negative of the sum (over phase intervals) of the log-factorials of the data count numbers.
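The precomputed quantity is the data-dependent, parameter-independent term of the Poisson log-likelihood. A pure-Python equivalent (a sketch, assuming a counts matrix with phase increasing along columns) can be written with scipy.special.gammaln, since ln(d!) = gammaln(d + 1):

```python
import numpy as np
from scipy.special import gammaln

def neg_sum_ln_data_factorial(counts):
    """ Negative sum over phase intervals of ln(d!), one element per channel. """
    return -np.sum(gammaln(np.asarray(counts, dtype=float) + 1.0), axis=1)

counts = np.array([[0, 1, 2],
                   [3, 4, 5]])
result = neg_sum_ln_data_factorial(counts)
# First channel: -(ln 0! + ln 1! + ln 2!) = -ln 2
# Second channel: -(ln 3! + ln 4! + ln 5!) = -ln(6 * 24 * 120)
```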

xpsi.likelihoods.eval_marginal_likelihood(double exposure_time, __Pyx_memviewslice phases, __Pyx_memviewslice counts, components, component_phases, phase_shifts, __Pyx_memviewslice neg_sum_ln_data_factorial, __Pyx_memviewslice support, size_t workspace_intervals, double epsabs, double epsrel, double epsilon, double sigmas, double llzero, allow_negative=False)¶
Evaluate the Poisson likelihood.
The count rate is integrated over phase intervals.
A Newton iteration procedure is implemented channel-by-channel to approximate the conditional-ML background count-rate variable given a fiducial estimate. Marginalisation over each background variable (in each channel) is then executed numerically between bounds centred on the ML background estimate. The bounds are based on a Gaussian expansion of the conditional likelihood function, and are sigmas standard deviations above and below the ML estimate. The marginalisation is with respect to a flat bounded or unbounded prior density function of each background variable.
Parameters:
- exposure_time (double) – Exposure time in seconds by which to scale the expected count rate in each phase interval.
- phases (double[::1]) – A numpy.ndarray of phase interval edges in cycles.
- components (tuple) – Component signals, each a C-contiguous numpy.ndarray of signal count rates where phase increases with column number.
- component_phases (tuple) – For each component, a C-contiguous numpy.ndarray of phases in cycles at which the model signal is evaluated on the interval [0,1].
- phase_shifts (array-like) – Phase shifts in cycles, such as on the interval [-0.5,0.5], for the component signals.
- neg_sum_ln_data_factorial (double[::1]) – The precomputed output of precomputation() given the data count numbers.
- support (double[:,::1]) – The prior support of the background count-rate variables. The first column contains lower bounds as a function of channel number; lower bounds must be greater than or equal to zero. The second column contains the upper bounds as a function of channel number; upper bounds for a proper prior must be greater than zero and greater than the corresponding lower bound, but finite. If the upper bound is less than zero, the prior is semi-unbounded and thus improper. Must be a C-contiguous numpy.ndarray.
- workspace_intervals (size_t) – The size of the workspace to allocate for marginalisation via numerical quadrature using the GSL cquad integration routine.
- epsabs (double) – The absolute tolerance for marginalisation via numerical quadrature using the GSL cquad integration routine.
- epsrel (double) – The relative tolerance for marginalisation via numerical quadrature using the GSL cquad integration routine.
- epsilon (double) – The fraction of the standard deviation of the approximating Gaussian function to tolerate when a new step of the quadratic maximisation scheme is proposed. This fraction should be adequately small. The standard deviation is recalculated with each iteration as the position (in each background variable) evolves.
- sigmas (double) – The number of approximating Gaussian standard deviations by which to expand the integration domain about the point estimated to maximise the conditional likelihood function in each channel. This number should probably be at least five but no more than ten.
- llzero (double) – The log-likelihood that a MultiNest process treats as zero, such that points with smaller log-likelihood are ignored. This number will be very negative. This is useful because if the likelihood is predicted, in advance of full execution of this function, to be incredibly small, computation can be avoided and a number slightly above this zero-threshold returned.
- allow_negative (obj) – A boolean or an array of booleans, one per component, declaring whether to allow negative phase interpolant integrals. If the interpolant is not a Steffen spline, then the interpolant of a non-negative function can be negative due to oscillations. For the default Akima Periodic spline from GSL, such oscillations should be small relative to those present in cubic splines, for instance, because it is designed to handle a rapidly changing second-order derivative.
Returns: A tuple (double, 2D ndarray, 1D ndarray). The first element is the logarithm of the marginal likelihood. The second element is the expected count numbers in joint phase-channel intervals from the star (the target source). The last element is the vector of background count numbers that are estimated to maximise the conditional likelihood function, one per channel.
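The channel-by-channel Newton procedure can be sketched for a single channel as follows (a hypothetical simplification, not the extension's implementation: the background count number B is spread evenly over the n phase intervals, and the support bounds are reduced to a non-negativity clip):

```python
import numpy as np

def conditional_ml_background(counts, star, fiducial, tol=1e-8, max_iter=50):
    """ Newton iteration for the background count number B that maximises
    the conditional Poisson log-likelihood in one channel.

    counts : observed counts per phase interval
    star   : expected source counts per phase interval
    """
    n = counts.size
    B = fiducial
    for _ in range(max_iter):
        lam = star + B / n                     # expected counts per interval
        grad = np.sum(counts / lam) / n - 1.0  # d/dB of sum(d ln lam - lam)
        hess = -np.sum(counts / lam**2) / n**2
        B_new = max(B - grad / hess, 0.0)      # Newton step, clipped at zero
        if abs(B_new - B) < tol * max(B, 1.0):
            return B_new
        B = B_new
    return B

rng = np.random.default_rng(1)
star = rng.uniform(20.0, 40.0, 32)             # source counts per interval
counts = rng.poisson(star + 100.0 / 32)        # true background B = 100
B_hat = conditional_ml_background(counts, star, fiducial=50.0)
```

Because the conditional log-likelihood is concave in B, the iteration converges rapidly from any reasonable fiducial estimate.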

xpsi.likelihoods.poisson_likelihood_given_background(double exposure_time, __Pyx_memviewslice phases, __Pyx_memviewslice counts, components, component_phases, phase_shifts, __Pyx_memviewslice background, allow_negative=False)¶
Evaluate the Poisson likelihood.
The count rate is integrated over phase intervals.
Parameters:
- exposure_time (double) – Exposure time in seconds by which to scale the expected count rate in each phase interval.
- phases (double[::1]) – A numpy.ndarray of phase interval edges in cycles.
- components (tuple) – Component signals, each a C-contiguous numpy.ndarray of signal count rates where phase increases with column number.
- component_phases (tuple) – For each component, a C-contiguous numpy.ndarray of phases in cycles at which the model signal is evaluated on the interval [0,1].
- phase_shifts (array-like) – Phase shifts in cycles, such as on the interval [-0.5,0.5], for the component signals.
- background (double[:,::1]) – A C-contiguous numpy.ndarray of background expected counts, whose shape matches the number of channels in each element of components and the number of phase intervals constructed from phases.
- allow_negative (obj) – A boolean or an array of booleans, one per component, declaring whether to allow negative phase interpolant integrals. If the interpolant is not a Steffen spline, then the interpolant of a non-negative function can be negative due to oscillations. For the default Akima Periodic spline from GSL, such oscillations should be small relative to those present in cubic splines, for instance, because it is designed to handle a rapidly changing second-order derivative.
Returns: A tuple (double, 2D ndarray). The first element is the logarithm of the likelihood. The second element is the expected count numbers in joint phase-channel intervals from the star (the target source).
Example¶
The following is an example of a custom extension module that evaluates a simple likelihood function based on the Poisson sampling distribution of the count data.
#cython: cdivision=True
#cython: boundscheck=False
#cython: nonecheck=False
#cython: wraparound=False
#cython: embedsignature=True

from __future__ import division

import numpy as np

from libc.math cimport pow, log, floor
from libc.stdlib cimport malloc, free

from GSL cimport (gsl_interp,
                  gsl_interp_alloc,
                  gsl_interp_init,
                  gsl_interp_free,
                  gsl_interp_eval,
                  gsl_interp_eval_integ,
                  gsl_interp_steffen,
                  gsl_interp_accel,
                  gsl_interp_accel_alloc,
                  gsl_interp_accel_free,
                  gsl_interp_accel_reset)

ctypedef gsl_interp_accel accel

def poisson_likelihood_given_background(double exposure_time,
                                        double[::1] phases,
                                        double[:,::1] counts,
                                        components,
                                        component_phases,
                                        phase_shifts,
                                        double[:,::1] background):
    """ Evaluate the Poisson likelihood.

    The count rate is integrated over phase intervals.

    :param double exposure_time:
        Exposure time in seconds by which to scale the expected count rate
        in each phase interval.

    :param phases:
        A C-contiguous :class:`numpy.ndarray` of phase interval edges in cycles.

    :param tuple components:
        Component signals, each a C-contiguous :class:`numpy.ndarray` of
        signal count rates where phase increases with column number.

    :param tuple component_phases:
        For each component, a C-contiguous :class:`numpy.ndarray` of phases
        in cycles at which the model :obj:`signal` is evaluated on
        the interval ``[0,1]``.

    :param array-like phase_shifts:
        Phase shifts in cycles, such as on the interval ``[-0.5,0.5]``, for
        the component signals.

    :param background:
        A C-contiguous :class:`numpy.ndarray` of background expected
        *counts*, whose shape matches the number of channels in each element
        of :obj:`components` and the number of phase intervals constructed
        from :obj:`phases`.

    """
    cdef:
        size_t i, j, p, num_components = len(components)
        double a, b

        double[:,::1] STAR = np.zeros((components[0].shape[0],
                                       phases.shape[0] - 1),
                                      dtype = np.double)

    cdef double *phases_ptr = NULL
    cdef double *signal_ptr = NULL

    cdef double[:,::1] signal
    cdef double[::1] signal_phase_set
    cdef double phase_shift

    cdef gsl_interp **interp = <gsl_interp**> malloc(num_components * sizeof(gsl_interp*))
    cdef accel **acc = <accel**> malloc(num_components * sizeof(accel*))

    for p in range(num_components):
        signal_phase_set = component_phases[p]
        interp[p] = gsl_interp_alloc(gsl_interp_steffen,
                                     signal_phase_set.shape[0])
        acc[p] = gsl_interp_accel_alloc()
        gsl_interp_accel_reset(acc[p])

    cdef gsl_interp *interp_ptr = NULL
    cdef accel *acc_ptr = NULL

    for i in range(STAR.shape[0]):
        for p in range(num_components):
            signal = components[p]
            signal_phase_set = component_phases[p]
            phase_shift = phase_shifts[p]

            interp_ptr = interp[p]
            acc_ptr = acc[p]
            phases_ptr = &(signal_phase_set[0])
            signal_ptr = &(signal[i,0])

            gsl_interp_init(interp_ptr, phases_ptr, signal_ptr,
                            signal_phase_set.shape[0])

            for j in range(phases.shape[0] - 1):
                a = phases[j] + phase_shift
                b = phases[j+1] + phase_shift

                # wrap the phase interval edges onto the unit interval
                a -= floor(a)
                b -= floor(b)

                if a < b:
                    STAR[i,j] += gsl_interp_eval_integ(interp_ptr,
                                                       phases_ptr,
                                                       signal_ptr,
                                                       a, b,
                                                       acc_ptr)
                else: # the interval wraps around the phase origin
                    STAR[i,j] += gsl_interp_eval_integ(interp_ptr,
                                                       phases_ptr,
                                                       signal_ptr,
                                                       a, 1.0,
                                                       acc_ptr)

                    STAR[i,j] += gsl_interp_eval_integ(interp_ptr,
                                                       phases_ptr,
                                                       signal_ptr,
                                                       0.0, b,
                                                       acc_ptr)

    for p in range(num_components):
        gsl_interp_accel_free(acc[p])
        gsl_interp_free(interp[p])

    free(acc)
    free(interp)

    cdef:
        double LOGLIKE = 0.0, EXPEC = 0.0
        double n = <double>(phases.shape[0] - 1)

    for i in range(STAR.shape[0]):
        for j in range(STAR.shape[1]):
            EXPEC = (STAR[i,j] + background[i,j]/n) * exposure_time

            LOGLIKE -= EXPEC
            LOGLIKE += counts[i,j] * log(EXPEC)

            STAR[i,j] += background[i,j]/n
            STAR[i,j] *= exposure_time

    return (LOGLIKE, np.asarray(STAR, order='C', dtype=np.double))
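When developing an extension like the one above, a slow pure-Python counterpart is useful for validating the compiled implementation on small inputs. The sketch below (a hypothetical helper, not part of X-PSI) mirrors the logic of the example, but replaces the GSL Steffen spline with linear interpolation and trapezoidal quadrature, so agreement with the extension is only approximate:

```python
import numpy as np

def poisson_loglike_reference(exposure_time, phases, counts, components,
                              component_phases, phase_shifts, background):
    """ Pure-Python analogue of the Cython example above. """
    n = phases.size - 1
    star = np.zeros((components[0].shape[0], n))
    for signal, sig_phases, shift in zip(components, component_phases,
                                         phase_shifts):
        for i in range(star.shape[0]):
            def integ(lo, hi):
                # trapezoidal quadrature of the linear interpolant
                grid = np.linspace(lo, hi, 64)
                vals = np.interp(grid, sig_phases, signal[i])
                return np.sum(0.5 * (vals[1:] + vals[:-1]) * np.diff(grid))
            for j in range(n):
                a = (phases[j] + shift) % 1.0       # wrap onto [0,1)
                b = (phases[j + 1] + shift) % 1.0
                if a < b:
                    star[i, j] += integ(a, b)
                else:                               # wraps the phase origin
                    star[i, j] += integ(a, 1.0) + integ(0.0, b)
    expec = (star + background / n) * exposure_time
    loglike = np.sum(counts * np.log(expec) - expec)
    return loglike, expec

# Minimal usage: one flat two-channel component signal of 5 counts/s.
phases = np.linspace(0.0, 1.0, 9)
sig_phases = np.linspace(0.0, 1.0, 101)
signal = np.full((2, 101), 5.0)
background = np.full((2, 8), 1.0)
counts = np.full((2, 8), 60)
ll, expec = poisson_loglike_reference(100.0, phases, counts, (signal,),
                                      (sig_phases,), (0.1,), background)
# Each interval: (5 / 8 + 1 / 8) * 100 = 75 expected counts.
```

For a flat signal the quadrature is exact, so the expected counts are recovered exactly; for structured signals, agreement with the spline-based extension degrades as the phase sampling coarsens.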