
The meas_modelfit package contains code for fitting PSF-convolved models of astronomical objects to data. This includes both traditional fitting via "greedy" derivative-based optimizers and Monte Carlo exploration using adaptive importance sampling.
The galaxy modeling code can be roughly divided into three levels:

- high-level command-line Tasks that drive the fitting over catalogs of objects;
- medium-level classes (models, priors, likelihoods, objectives, and fitters) that define and carry out the fit for a single object;
- low-level model evaluation code that maps models to shapelet expansions, most of which lives in the lsst::shapelet package.
The high- and medium-level code in meas_modelfit is discussed in the next few sections, with links there to the API documentation for individual classes. For more information on the low-level model evaluation code, see the lsst::shapelet package, and in particular the lsst::shapelet::MultiShapeletBasis and lsst::shapelet::MultiShapeletMatrixBuilder classes.
The Model and Interpreter classes define a number of different types of parameters, and understanding the relationships between these parameters is crucial in understanding how the package works.
The Model class represents a particular parametrized form and defines how it is mapped to shapelet expansions for convolution and evaluation. An instance of Model does not itself contain parameters: we have one Model instance for a particular fitting configuration, not one Model instance per fitted object. Model splits its parameters up into three separate arrays:
'nonlinear'
are the nonlinear parameters of the model that will be fit for. These usually define one or more ellipses. In the mathematical notation used throughout the meas_modelfit documentation, the symbol \(\theta\) is used for the nonlinear parameters.

'amplitudes'
are linear parameters of the model, in the sense that an evaluation of the model on a pixel grid can be represented by a matrix that is multiplied by the amplitude vector (see Likelihood). The symbol \(\alpha\) is used to represent the amplitudes in mathematical notation.

'fixed'
are nonlinear parameters that are not fit for, but are rather held fixed at their initial values. It is particularly common to fix the position parameters of the model. Note that we cannot simply subsume the fixed parameters into the definition of the Model instance itself, as the fixed parameters may change from source to source while the Model instance is defined purely by the configuration.

Like Model, an Interpreter instance is defined only by configuration, but it contains information not only about the model (via a Model object attribute) but also about the fitter and the Prior probabilities used. A single Interpreter instance is attached to the ModelFitCatalog used to store fitting results, and it contains methods that can be used to interpret the records of that catalog. We have different Interpreter classes for Monte Carlo sampling (SamplingInterpreter and its subclasses) and greedy optimization (OptimizerInterpreter), as well as different classes to account for the different ways the fitters can handle the amplitude parameters: because the model is linear in the amplitudes, and we assume Gaussian noise on the data, we know the likelihood is Gaussian in the amplitude parameters. This can be used in different ways by different fitters to reduce the dimensionality of the fit. In order to do this without specializing the fitters themselves, we introduce one more set of parameters, called simply 'parameters'. These are some combination of the amplitude and nonlinear parameters defined by the model, and represent the actual space explored by the fitter. There are generally two options here:

- The 'parameters' vector is simply the concatenation of the 'nonlinear' vector with the 'amplitudes' vector; we explore the full space at once and do not take advantage of the special nature of the amplitudes. This is the approach used by the DirectSamplingInterpreter and the OptimizerInterpreter.
- The 'parameters' vector is just the 'nonlinear' vector. At each point in the nonlinear parameter space, we analytically marginalize over or otherwise project out the amplitude dimensions, and store the information necessary to reconstruct them with each sample. This approach is currently only used by the MarginalSamplingInterpreter, but a variant for greedy optimization may be added in the future.

By using a polymorphic Interpreter class whose instances are shared by all output records, we can avoid having polymorphic output objects that must be subclassed for every kind of fitter and objective. We use a single ModelFitRecord class to store the results of any fitter. ModelFitRecord contains a number of normal, Schema-defined fields, as well as three "blob" objects:
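As a concrete illustration of the two strategies, the sketch below splits a hypothetical model's parameters into nonlinear and amplitude arrays and shows how a direct-style interpreter concatenates them while a marginal-style interpreter exposes only the nonlinear part. The names and parameter layout here are purely illustrative, not the actual meas_modelfit API.

```python
import numpy as np

# Hypothetical layout for a single-component galaxy model:
# nonlinear = ellipse parameters (e.g. e1, e2, log-radius),
# amplitudes = component fluxes, fixed = centroid held at its initial value.
nonlinear = np.array([0.1, -0.05, 0.3])   # theta: fit for
amplitudes = np.array([1.2])              # alpha: linear in the model
fixed = np.array([0.0, 0.0])              # position, not fit for

def direct_parameters(nonlinear, amplitudes):
    """DirectSamplingInterpreter-style: explore the full space at once."""
    return np.concatenate([nonlinear, amplitudes])

def marginal_parameters(nonlinear):
    """MarginalSamplingInterpreter-style: amplitudes handled analytically."""
    return nonlinear

assert direct_parameters(nonlinear, amplitudes).shape == (4,)
assert marginal_parameters(nonlinear).shape == (3,)
```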
The Likelihood class represents the comparison between data and the evaluation of a Model in the coordinate and photometric system of that data. The data can correspond to a single image or multiple images to be fit simultaneously. Likelihood also handles the inverse-variance weighting of both model and data, so the two can be compared directly. It does not actually compute the likelihood or log-likelihood values, however; it simply stores the weighted data and computes a pixelized model that can be compared with it.
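Because the model is linear in the amplitudes, its evaluation on a pixel grid can be written as a matrix (depending on the nonlinear parameters) multiplied by the amplitude vector. The numpy sketch below shows the inverse-variance weighting idea with made-up data; it is a minimal stand-in for what the Likelihood class stores, not its actual interface.

```python
import numpy as np

rng = np.random.default_rng(0)
n_pix, n_amp = 100, 2

# Unweighted model matrix M(theta): one column per amplitude component.
model_matrix = rng.normal(size=(n_pix, n_amp))
true_alpha = np.array([2.0, 0.5])
sigma = 0.1 * np.ones(n_pix)                  # per-pixel noise std. dev.
data = model_matrix @ true_alpha + rng.normal(scale=sigma)

# Inverse-variance weighting: both model and data are divided by sigma,
# so the residual (weighted_data - weighted_matrix @ alpha) has unit variance
# and the two sides can be compared directly.
w = 1.0 / sigma
weighted_data = w * data
weighted_matrix = w[:, None] * model_matrix

# With the weighting applied, the (unnormalized) log likelihood is just
# -0.5 * ||weighted_data - weighted_matrix @ alpha||^2, which is Gaussian
# in alpha; e.g. its maximum is an ordinary linear least-squares solution.
alpha_hat, *_ = np.linalg.lstsq(weighted_matrix, weighted_data, rcond=None)
```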
It is the responsibility of the Objective classes to take the Likelihood and actually compare the data and model in a way that is useful for a particular fitter. We have different Objective class hierarchies with different APIs for sampling (SamplingObjective) and optimization (OptimizerObjective), reflecting the fact that we need to compute slightly different quantities in these cases. Each Objective class has a corresponding Interpreter class that can be used to interpret the outputs of that particular Objective.
In addition to the Likelihood, the Objective classes also include a Bayesian prior, represented by the Prior class. There is currently only one concrete Prior class, MixturePrior, which uses the Mixture class to represent the nonlinear parameters along with a flat prior for the amplitudes (though the amplitudes are required to be non-negative). Using a flat prior for the amplitudes means that the posterior probability, as well as the likelihood, will be Gaussian in the amplitudes, but the non-negativity constraint means it will be truncated. As a result, MixturePrior offloads many of its more complex numerical calculations to the TruncatedGaussian class.
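To see why a truncated Gaussian arises, note that the Gaussian likelihood in the amplitudes, restricted to the non-negative orthant by the flat-but-bounded prior, is the same Gaussian with its mode possibly pushed onto the boundary. The sketch below finds the constrained mode with a crude projected-gradient loop; the Hessian and gradient values are made up for illustration, and the real TruncatedGaussian class does this (and much more, e.g. normalization) properly.

```python
import numpy as np

# Gaussian likelihood in alpha, written via its Hessian H and gradient g,
# i.e. log L(alpha) = const - 0.5 * alpha^T H alpha + g^T alpha.
# With a flat prior subject to alpha >= 0, the posterior is this Gaussian
# truncated to the non-negative orthant.
H = np.array([[4.0, 1.0], [1.0, 3.0]])
g = np.array([2.0, -1.5])

# Unconstrained mode: H alpha = g. Here its second component is negative,
# so the truncated posterior's mode lies on the alpha_2 = 0 boundary.
unconstrained_mode = np.linalg.solve(H, g)

# Projected gradient ascent: step toward the mode, clip to alpha >= 0.
alpha = np.maximum(unconstrained_mode, 0.0)
for _ in range(1000):
    alpha = np.maximum(alpha - 0.1 * (H @ alpha - g), 0.0)

# The constrained mode sets alpha_2 = 0 and maximizes over alpha_1 alone.
assert np.all(alpha >= 0.0)
```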
All parameters in meas_modelfit are defined in coordinate and photometric systems that are unique to each object (or perhaps a small group of neighboring objects).
The coordinate system is a local TAN projection centered on the object, with arcsecond-size pixels; this allows the model to be defined in a coordinate system that can be trivially converted to celestial coordinates with no changes in ellipticity, while keeping radius parameters around unity. It also allows us to directly evaluate the prior probability of a set of parameters without having to transform the prior to different coordinate systems.
We also define a custom photometric system for each source, using a crudely estimated nominal magnitude (provided as an input to the fitting code; usually this will be a simple aperture magnitude). We set the photometric system such that this nominal magnitude corresponds to unit flux, ensuring that amplitude parameters also remain of order unity. This helps avoid numerical problems, and (along with the choice of arcseconds for radius units and a sensible definition of ellipticity) allows us to avoid rescaling parameters within our fitters when using algorithms that must define sensible norms over the parameter space.
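A minimal sketch of the photometric rescaling idea: pick the flux unit so that the nominal magnitude maps to a flux of exactly 1. The zeropoint value and function name below are illustrative assumptions; the real code derives the conversion from the image's calibration via the UnitSystem machinery.

```python
import math

def flux_scaling(nominal_mag, mag_zeropoint=27.0):
    """Return the factor converting instrumental flux into the per-source
    photometric system in which the nominal magnitude has unit flux.

    mag_zeropoint is an illustrative stand-in for the image calibration.
    """
    nominal_flux = 10.0 ** (-0.4 * (nominal_mag - mag_zeropoint))
    return 1.0 / nominal_flux

# A source with a crude aperture magnitude of 22 in a zeropoint-27 system
# has instrumental flux 100; the scaling maps that to unit flux, so the
# fitted amplitude parameters stay of order unity.
scale = flux_scaling(22.0)
nominal_flux = 10.0 ** (-0.4 * (22.0 - 27.0))
assert abs(scale * nominal_flux - 1.0) < 1e-12
```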
See the UnitSystem and ProjectedLikelihood classes for more information on the unit systems and transforms.
There are three concrete command-line Tasks in meas_modelfit: MeasureCcdTask, MeasureCoaddTask, and MeasureMultiTask.
The first two run on CCD images and coadd patch images, respectively, and inherit from MeasureImageTask, which provides much of the implementation (this is analogous to lsst.pipe.tasks.ProcessImageTask and its subclasses).
MeasureMultiTask fits to multiple CCD images simultaneously, using the output from MeasureCoaddTask as an input (the CCD images included in the fit correspond to those that overlap a particular coadd patch).
All of these tasks ultimately inherit from BaseMeasureTask, which contains most of the implementation: subclasses of BaseMeasureTask only have to define how to load inputs, write outputs, and create a Likelihood object.
BaseMeasureTask contains two registries that act as factories for Model and Prior objects (see models.py and priors.py) and a "fitter" subclass that must be one of AdaptiveImportanceSamplerTask or OptimizerTask. Determination of the pixel region to fit is handled by the code in fitRegion.py, and the shapelet PSF approximation needed to convolve the galaxy models is currently delegated to the meas_extensions_multiShapelet package.
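The Model and Prior registries act as name-to-factory maps driven by configuration. The following is a minimal stand-in for that pattern in plain Python, to show the shape of the idea only; the actual package uses lsst.pex.config registries populated in models.py and priors.py, with a different API.

```python
# Minimal stand-in for the registry pattern used by BaseMeasureTask.
# All names here are illustrative, not the real meas_modelfit API.
MODEL_REGISTRY = {}

def register_model(name):
    """Decorator registering a zero-argument Model factory under a name."""
    def decorate(factory):
        MODEL_REGISTRY[name] = factory
        return factory
    return decorate

@register_model("gaussian")
def make_gaussian_model():
    # Stand-in for building a single-Gaussian model from configuration.
    return {"kind": "gaussian", "nonlinear": 3, "amplitudes": 1}

@register_model("bulge+disk")
def make_bulge_disk_model():
    # Stand-in for a two-component model sharing one ellipse.
    return {"kind": "bulge+disk", "nonlinear": 3, "amplitudes": 2}

# Configuration then selects the factory by name, much as BaseMeasureTask's
# config selects among the registered models and priors.
model = MODEL_REGISTRY["bulge+disk"]()
assert model["amplitudes"] == 2
```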
All of these tasks have four distinct stages:
We currently provide two very different kinds of fitters:

- AdaptiveImportanceSamplerTask, which explores the posterior via Monte Carlo adaptive importance sampling;
- OptimizerTask, which fits using a greedy derivative-based optimizer.
In both cases, we provide a Task that may be used as the "fitter" subtask of BaseMeasureTask. These tasks initialize the sample schema and create an Interpreter upon construction, which will be used by BaseMeasureTask when creating the output catalogs. They also provide two methods (initialize and adaptPrevious) that update fitter-specific quantities during the catalog-prep stage. Their run() methods do the actual fitting, using a Likelihood object and an output ModelFitRecord provided by BaseMeasureTask.
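The fitter-subtask contract described above can be sketched as a minimal class. The method names (initialize, adaptPrevious, run) come from the text; the bodies and the dict-based "record" are illustrative placeholders, not the actual AdaptiveImportanceSamplerTask or OptimizerTask implementations.

```python
class MinimalFitter:
    """Illustrative sketch of the fitter-subtask contract; not the real
    meas_modelfit fitter Tasks."""

    def __init__(self):
        # Real fitters set up the sample schema and an Interpreter here,
        # which BaseMeasureTask uses when creating output catalogs.
        self.interpreter = "interpreter-placeholder"

    def initialize(self, record):
        # Set fitter-specific quantities during the catalog-prep stage.
        record["state"] = "initialized"

    def adaptPrevious(self, previous_record, record):
        # Seed this record from a previous fit (e.g. a coadd fit feeding
        # a multi-epoch fit in MeasureMultiTask).
        record["state"] = previous_record.get("state", "initialized")

    def run(self, likelihood, record):
        # Do the actual fit against the Likelihood; here we just mark it.
        record["state"] = "fitted"
        return record

fitter = MinimalFitter()
rec = {}
fitter.initialize(rec)
result = fitter.run(likelihood=None, record=rec)
assert result["state"] == "fitted"
```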
They use a standard src catalog as an input, so the Tasks can be run on arbitrary data, not just the S13 sims.