lsst.meas.base.forcedPhotCcd.ForcedPhotCcdTask Class Reference
Inheritance diagram for lsst.meas.base.forcedPhotCcd.ForcedPhotCcdTask:
lsst.meas.base.forcedPhotCcd.ForcedPhotCcdTask → lsst.meas.base.forcedPhotImage.ForcedPhotImageTask → lsst.pipe.base.pipelineTask.PipelineTask, lsst.pipe.base.cmdLineTask.CmdLineTask → lsst.pipe.base.task.Task

Public Member Functions

def adaptArgsAndRun (self, inputData, inputDataIds, outputDataIds, butler)
 
def filterReferences (self, exposure, refCat, refWcs)
 
def makeIdFactory (self, dataRef)
 
def getExposureId (self, dataRef)
 
def fetchReferences (self, dataRef, exposure)
 
def getExposure (self, dataRef)
 
def getInitOutputDatasets (self)
 
def generateMeasCat (self, exposureDataId, exposure, refCat, refWcs, idPackerName, butler)
 
def runDataRef (self, dataRef, psfCache=None)
 
def run (self, measCat, exposure, refCat, refWcs, exposureId=None)
 
def run (self, **kwargs)
 
def attachFootprints (self, sources, refCat, exposure, refWcs, dataRef)
 
def writeOutput (self, dataRef, sources)
 
def getSchemaCatalogs (self)
 
def getInputDatasetTypes (cls, config)
 
def getOutputDatasetTypes (cls, config)
 
def getPrerequisiteDatasetTypes (cls, config)
 
def getInitInputDatasetTypes (cls, config)
 
def getInitOutputDatasetTypes (cls, config)
 
def getDatasetTypes (cls, config, configClass)
 
def getPerDatasetTypeDimensions (cls, config)
 
def runQuantum (self, quantum, butler)
 
def saveStruct (self, struct, outputDataRefs, butler)
 
def getResourceConfig (self)
 
def emptyMetadata (self)
 
def getAllSchemaCatalogs (self)
 
def getFullMetadata (self)
 
def getFullName (self)
 
def getName (self)
 
def getTaskDict (self)
 
def makeSubtask (self, name, **keyArgs)
 
def timer (self, name, logLevel=Log.DEBUG)
 
def makeField (cls, doc)
 
def __reduce__ (self)
 
def applyOverrides (cls, config)
 
def parseAndRun (cls, args=None, config=None, log=None, doReturnResults=False)
 
def writeConfig (self, butler, clobber=False, doBackup=True)
 
def writeSchemas (self, butler, clobber=False, doBackup=True)
 
def writeMetadata (self, dataRef)
 
def writePackageVersions (self, butler, clobber=False, doBackup=True, dataset="packages")
 

Public Attributes

 metadata
 
 log
 
 config
 

Static Public Attributes

 ConfigClass = ForcedPhotCcdConfig
 
 RunnerClass = lsst.pipe.base.ButlerInitializedTaskRunner
 
string dataPrefix = ""
 
bool canMultiprocess = True
 

Detailed Description

A command-line driver for performing forced measurement on CCD images.

Notes
-----
This task is a subclass of
:lsst-task:`lsst.meas.base.forcedPhotImage.ForcedPhotImageTask`,
specialized for forced measurement on a single CCD exposure, using the
detections made on overlapping coadds as the reference catalog.

The `run` method (inherited from `ForcedPhotImageTask`) takes a
`~lsst.daf.persistence.ButlerDataRef` argument that corresponds to a single
CCD. This should contain the data ID keys that correspond to the
``forced_src`` dataset (the output dataset for this task), which are
typically all those used to specify the ``calexp`` dataset (``visit``,
``raft``, ``sensor`` for LSST data) as well as a coadd tract. The tract is
used to look up the appropriate coadd measurement catalogs to use as
references (e.g. ``deepCoadd_src``; see
:lsst-task:`lsst.meas.base.references.CoaddSrcReferencesTask` for more
information). While the tract must be given as part of the dataRef, the
patches are determined automatically from the bounding box and WCS of the
calexp to be measured, and the filter used to fetch references is set via
the ``filter`` option in the configuration of
:lsst-task:`lsst.meas.base.references.BaseReferencesTask`.

In addition to the `run` method, `ForcedPhotCcdTask` overrides several
methods of `ForcedPhotImageTask` to specialize it for single-CCD
processing, including `~ForcedPhotImageTask.makeIdFactory`,
`~ForcedPhotImageTask.fetchReferences`, and
`~ForcedPhotImageTask.getExposure`. None of these should be called
directly by the user, though it may be useful to override them further in
subclasses.

Definition at line 168 of file forcedPhotCcd.py.
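
For example, a typical Gen2 invocation from Python mirrors the command-line
driver. This is a minimal sketch only; the repository path, output path, and
data ID values are placeholders::

    from lsst.meas.base.forcedPhotCcd import ForcedPhotCcdTask

    # The data ID carries the calexp keys (visit/raft/sensor for LSST
    # data) plus the coadd tract used to look up the references.
    ForcedPhotCcdTask.parseAndRun(args=[
        "/path/to/repo",
        "--output", "/path/to/output",
        "--id", "visit=1234", "raft=2,2", "sensor=1,1", "tract=0",
    ])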

Member Function Documentation

◆ __reduce__()

def lsst.pipe.base.task.Task.__reduce__ (   self)
inherited
Pickler.

Definition at line 373 of file task.py.

def __reduce__(self):
    """Pickler.
    """
    return self.__class__, (self.config, self._name, self._parentTask, None)

◆ adaptArgsAndRun()

def lsst.meas.base.forcedPhotCcd.ForcedPhotCcdTask.adaptArgsAndRun (   self,
  inputData,
  inputDataIds,
  outputDataIds,
  butler 
)

Definition at line 208 of file forcedPhotCcd.py.

def adaptArgsAndRun(self, inputData, inputDataIds, outputDataIds, butler):
    inputData['refWcs'] = butler.get(f"{self.config.refWcs.name}.wcs", inputDataIds["refWcs"])
    inputData['refCat'] = self.filterReferences(inputData['exposure'],
                                                inputData['refCat'], inputData['refWcs'])
    inputData['measCat'] = self.generateMeasCat(inputDataIds['exposure'],
                                                inputData['exposure'],
                                                inputData['refCat'], inputData['refWcs'],
                                                "visit_detector", butler)

    return self.run(**inputData)

◆ applyOverrides()

def lsst.pipe.base.cmdLineTask.CmdLineTask.applyOverrides (   cls,
  config 
)
inherited
A hook to allow a task to change the values of its config *after* the camera-specific
overrides are loaded but before any command-line overrides are applied.

Parameters
----------
config : instance of task's ``ConfigClass``
    Task configuration.

Notes
-----
This is necessary in some cases because the camera-specific overrides may retarget subtasks,
wiping out changes made in ConfigClass.setDefaults. See LSST Trac ticket #2282 for more discussion.

.. warning::

   This is called by CmdLineTask.parseAndRun; other ways of constructing a config will not apply
   these overrides.

Definition at line 527 of file cmdLineTask.py.

def applyOverrides(cls, config):
    """A hook to allow a task to change the values of its config *after* the camera-specific
    overrides are loaded but before any command-line overrides are applied.

    Parameters
    ----------
    config : instance of task's ``ConfigClass``
        Task configuration.

    Notes
    -----
    This is necessary in some cases because the camera-specific overrides may retarget subtasks,
    wiping out changes made in ConfigClass.setDefaults. See LSST Trac ticket #2282 for more discussion.

    .. warning::

       This is called by CmdLineTask.parseAndRun; other ways of constructing a config will not apply
       these overrides.
    """
    pass
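
For example, a subclass might use this hook to re-assert a default that
camera-specific overrides could otherwise clobber. A minimal sketch, using
the ``doApplyUberCal`` field of this task's config::

    class MyForcedPhotCcdTask(ForcedPhotCcdTask):

        @classmethod
        def applyOverrides(cls, config):
            # Runs after camera overrides are loaded, but before any
            # command-line overrides are applied.
            config.doApplyUberCal = False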

◆ attachFootprints()

def lsst.meas.base.forcedPhotImage.ForcedPhotImageTask.attachFootprints (   self,
  sources,
  refCat,
  exposure,
  refWcs,
  dataRef 
)
inherited
Attach footprints to blank sources prior to measurements.

Notes
-----
`~lsst.afw.detection.Footprint`\ s for forced photometry must be in the
pixel coordinate system of the image being measured, while the actual
detections may start out in a different coordinate system.

Subclasses of this class must implement this method to define how
those `~lsst.afw.detection.Footprint`\ s should be generated.

This default implementation transforms the
`~lsst.afw.detection.Footprint`\ s from the reference catalog from the
reference WCS to the exposure's WCS, which downgrades
`lsst.afw.detection.heavyFootprint.HeavyFootprint`\ s into regular
`~lsst.afw.detection.Footprint`\ s, destroying deblend information.

Definition at line 327 of file forcedPhotImage.py.

def attachFootprints(self, sources, refCat, exposure, refWcs, dataRef):
    r"""Attach footprints to blank sources prior to measurements.

    Notes
    -----
    `~lsst.afw.detection.Footprint`\ s for forced photometry must be in the
    pixel coordinate system of the image being measured, while the actual
    detections may start out in a different coordinate system.

    Subclasses of this class must implement this method to define how
    those `~lsst.afw.detection.Footprint`\ s should be generated.

    This default implementation transforms the
    `~lsst.afw.detection.Footprint`\ s from the reference catalog from the
    reference WCS to the exposure's WCS, which downgrades
    `lsst.afw.detection.heavyFootprint.HeavyFootprint`\ s into regular
    `~lsst.afw.detection.Footprint`\ s, destroying deblend information.
    """
    return self.measurement.attachTransformedFootprints(sources, refCat, exposure, refWcs)

◆ emptyMetadata()

def lsst.pipe.base.task.Task.emptyMetadata (   self)
inherited
Empty (clear) the metadata for this Task and all sub-Tasks.

Definition at line 153 of file task.py.

def emptyMetadata(self):
    """Empty (clear) the metadata for this Task and all sub-Tasks.
    """
    for subtask in self._taskDict.values():
        subtask.metadata = dafBase.PropertyList()

◆ fetchReferences()

def lsst.meas.base.forcedPhotCcd.ForcedPhotCcdTask.fetchReferences (   self,
  dataRef,
  exposure 
)
Get sources that overlap the exposure.

Parameters
----------
dataRef : `lsst.daf.persistence.ButlerDataRef`
    Butler data reference corresponding to the image to be measured;
    should have ``tract``, ``patch``, and ``filter`` keys.
exposure : `lsst.afw.image.Exposure`
    The image to be measured (used only to obtain a WCS and bounding
    box).

Returns
-------
references : `lsst.afw.table.SourceCatalog`
    Catalog of sources that overlap the exposure

Notes
-----
The returned catalog is sorted by ID and guarantees that all included
children have their parent included and that all Footprints are valid.

All work is delegated to the references subtask; see
:lsst-task:`lsst.meas.base.references.CoaddSrcReferencesTask`
for information about the default behavior.

Definition at line 311 of file forcedPhotCcd.py.

def fetchReferences(self, dataRef, exposure):
    """Get sources that overlap the exposure.

    Parameters
    ----------
    dataRef : `lsst.daf.persistence.ButlerDataRef`
        Butler data reference corresponding to the image to be measured;
        should have ``tract``, ``patch``, and ``filter`` keys.
    exposure : `lsst.afw.image.Exposure`
        The image to be measured (used only to obtain a WCS and bounding
        box).

    Returns
    -------
    references : `lsst.afw.table.SourceCatalog`
        Catalog of sources that overlap the exposure

    Notes
    -----
    The returned catalog is sorted by ID and guarantees that all included
    children have their parent included and that all Footprints are valid.

    All work is delegated to the references subtask; see
    :lsst-task:`lsst.meas.base.references.CoaddSrcReferencesTask`
    for information about the default behavior.
    """
    references = lsst.afw.table.SourceCatalog(self.references.schema)
    badParents = set()
    unfiltered = self.references.fetchInBox(dataRef, exposure.getBBox(), exposure.getWcs())
    for record in unfiltered:
        if record.getFootprint() is None or record.getFootprint().getArea() == 0:
            if record.getParent() != 0:
                self.log.warn("Skipping reference %s (child of %s) with bad Footprint",
                              record.getId(), record.getParent())
            else:
                self.log.warn("Skipping reference parent %s with bad Footprint", record.getId())
                badParents.add(record.getId())
        elif record.getParent() not in badParents:
            references.append(record)
    # catalog must be sorted by parent ID for lsst.afw.table.getChildren to work
    references.sort(lsst.afw.table.SourceTable.getParentKey())
    return references

◆ filterReferences()

def lsst.meas.base.forcedPhotCcd.ForcedPhotCcdTask.filterReferences (   self,
  exposure,
  refCat,
  refWcs 
)
Filter reference catalog so that all sources are within the
boundaries of the exposure.

Parameters
----------
exposure : `lsst.afw.image.exposure.Exposure`
    Exposure to generate the catalog for.
refCat : `lsst.afw.table.SourceCatalog`
    Catalog of shapes and positions at which to force photometry.
refWcs : `lsst.afw.image.SkyWcs`
    Reference world coordinate system.

Returns
-------
refSources : `lsst.afw.table.SourceCatalog`
    Filtered catalog of forced sources to measure.

Notes
-----
Filtering the reference catalog is currently handled by Gen2
specific methods.  To function for Gen3, this method copies
code segments to do the filtering and transformation.  The
majority of this code is based on the methods of
lsst.meas.algorithms.loadReferenceObjects.ReferenceObjectLoader

Definition at line 219 of file forcedPhotCcd.py.

def filterReferences(self, exposure, refCat, refWcs):
    """Filter reference catalog so that all sources are within the
    boundaries of the exposure.

    Parameters
    ----------
    exposure : `lsst.afw.image.exposure.Exposure`
        Exposure to generate the catalog for.
    refCat : `lsst.afw.table.SourceCatalog`
        Catalog of shapes and positions at which to force photometry.
    refWcs : `lsst.afw.image.SkyWcs`
        Reference world coordinate system.

    Returns
    -------
    refSources : `lsst.afw.table.SourceCatalog`
        Filtered catalog of forced sources to measure.

    Notes
    -----
    Filtering the reference catalog is currently handled by Gen2
    specific methods. To function for Gen3, this method copies
    code segments to do the filtering and transformation. The
    majority of this code is based on the methods of
    lsst.meas.algorithms.loadReferenceObjects.ReferenceObjectLoader
    """

    # Step 1: Determine bounds of the exposure photometry will
    # be performed on.
    expWcs = exposure.getWcs()
    expRegion = exposure.getBBox(lsst.afw.image.PARENT)
    expBBox = lsst.geom.Box2D(expRegion)
    expBoxCorners = expBBox.getCorners()
    expSkyCorners = [expWcs.pixelToSky(corner).getVector() for
                     corner in expBoxCorners]
    expPolygon = lsst.sphgeom.ConvexPolygon(expSkyCorners)

    # Step 2: Filter out reference catalog sources that are
    # not contained within the exposure boundaries.
    sources = type(refCat)(refCat.table)
    for record in refCat:
        if expPolygon.contains(record.getCoord().getVector()):
            sources.append(record)
    refCatIdDict = {ref.getId(): ref.getParent() for ref in sources}

    # Step 3: Cull sources that do not have their parent
    # source in the filtered catalog. Save two copies of each
    # source.
    refSources = type(refCat)(refCat.table)
    for record in refCat:
        if expPolygon.contains(record.getCoord().getVector()):
            recordId = record.getId()
            topId = recordId
            while (topId > 0):
                if topId in refCatIdDict:
                    topId = refCatIdDict[topId]
                else:
                    break
            if topId == 0:
                refSources.append(record)

    # Step 4: Transform source footprints from the reference
    # coordinates to the exposure coordinates.
    for refRecord in refSources:
        refRecord.setFootprint(refRecord.getFootprint().transform(refWcs,
                                                                  expWcs, expRegion))
    # Step 5: Replace reference catalog with filtered source list.
    return refSources

◆ generateMeasCat()

def lsst.meas.base.forcedPhotImage.ForcedPhotImageTask.generateMeasCat (   self,
  exposureDataId,
  exposure,
  refCat,
  refWcs,
  idPackerName,
  butler 
)
inherited
Generate a measurement catalog for Gen3.

Parameters
----------
exposureDataId : `DataId`
    Butler dataId for this exposure.
exposure : `lsst.afw.image.exposure.Exposure`
    Exposure to generate the catalog for.
refCat : `lsst.afw.table.SourceCatalog`
    Catalog of shapes and positions at which to force photometry.
refWcs : `lsst.afw.image.SkyWcs`
    Reference world coordinate system.
idPackerName : `str`
    Type of ID packer to construct from the registry.
butler : `lsst.daf.persistence.butler.Butler`
    Butler to use to construct id packer.

Returns
-------
measCat : `lsst.afw.table.SourceCatalog`
    Catalog of forced sources to measure.

Definition at line 189 of file forcedPhotImage.py.

def generateMeasCat(self, exposureDataId, exposure, refCat, refWcs, idPackerName, butler):
    """Generate a measurement catalog for Gen3.

    Parameters
    ----------
    exposureDataId : `DataId`
        Butler dataId for this exposure.
    exposure : `lsst.afw.image.exposure.Exposure`
        Exposure to generate the catalog for.
    refCat : `lsst.afw.table.SourceCatalog`
        Catalog of shapes and positions at which to force photometry.
    refWcs : `lsst.afw.image.SkyWcs`
        Reference world coordinate system.
    idPackerName : `str`
        Type of ID packer to construct from the registry.
    butler : `lsst.daf.persistence.butler.Butler`
        Butler to use to construct id packer.

    Returns
    -------
    measCat : `lsst.afw.table.SourceCatalog`
        Catalog of forced sources to measure.
    """
    packer = butler.registry.makeDataIdPacker(idPackerName, exposureDataId)
    expId = packer.pack(exposureDataId)
    expBits = packer.maxBits
    idFactory = lsst.afw.table.IdFactory.makeSource(expId, 64 - expBits)

    measCat = self.measurement.generateMeasCat(exposure, refCat, refWcs,
                                               idFactory=idFactory)
    return measCat

◆ getAllSchemaCatalogs()

def lsst.pipe.base.task.Task.getAllSchemaCatalogs (   self)
inherited
Get schema catalogs for all tasks in the hierarchy, combining the results into a single dict.

Returns
-------
schemacatalogs : `dict`
    Keys are butler dataset type, values are an empty catalog (an instance of the appropriate
    lsst.afw.table Catalog type) for all tasks in the hierarchy, from the top-level task down
    through all subtasks.

Notes
-----
This method may be called on any task in the hierarchy; it will return the same answer, regardless.

The default implementation should always suffice. If your subtask uses schemas then override
`Task.getSchemaCatalogs`, not this method.

Definition at line 188 of file task.py.

def getAllSchemaCatalogs(self):
    """Get schema catalogs for all tasks in the hierarchy, combining the results into a single dict.

    Returns
    -------
    schemacatalogs : `dict`
        Keys are butler dataset type, values are an empty catalog (an instance of the appropriate
        lsst.afw.table Catalog type) for all tasks in the hierarchy, from the top-level task down
        through all subtasks.

    Notes
    -----
    This method may be called on any task in the hierarchy; it will return the same answer, regardless.

    The default implementation should always suffice. If your subtask uses schemas then override
    `Task.getSchemaCatalogs`, not this method.
    """
    schemaDict = self.getSchemaCatalogs()
    for subtask in self._taskDict.values():
        schemaDict.update(subtask.getSchemaCatalogs())
    return schemaDict

◆ getDatasetTypes()

def lsst.pipe.base.pipelineTask.PipelineTask.getDatasetTypes (   cls,
  config,
  configClass 
)
inherited
Return dataset type descriptors defined in task configuration.

This method can be used by other methods that need to extract dataset
types from task configuration (e.g. `getInputDatasetTypes` or
sub-class methods).

Parameters
----------
config : `Config`
    Configuration for this task. Typically datasets are defined in
    a task configuration.
configClass : `type`
    Class of the configuration object which defines dataset type.

Returns
-------
Dictionary where key is the name (arbitrary) of the output dataset
and value is the `DatasetTypeDescriptor` instance. Default
implementation uses configuration field name as dictionary key.
Returns empty dict if configuration has no fields with the specified
``configClass``.

Definition at line 353 of file pipelineTask.py.

def getDatasetTypes(cls, config, configClass):
    """Return dataset type descriptors defined in task configuration.

    This method can be used by other methods that need to extract dataset
    types from task configuration (e.g. `getInputDatasetTypes` or
    sub-class methods).

    Parameters
    ----------
    config : `Config`
        Configuration for this task. Typically datasets are defined in
        a task configuration.
    configClass : `type`
        Class of the configuration object which defines dataset type.

    Returns
    -------
    Dictionary where key is the name (arbitrary) of the output dataset
    and value is the `DatasetTypeDescriptor` instance. Default
    implementation uses configuration field name as dictionary key.
    Returns empty dict if configuration has no fields with the specified
    ``configClass``.
    """
    dsTypes = {}
    for key, value in config.items():
        if isinstance(value, configClass):
            dsTypes[key] = DatasetTypeDescriptor.fromConfig(value)
    return dsTypes

◆ getExposure()

def lsst.meas.base.forcedPhotCcd.ForcedPhotCcdTask.getExposure (   self,
  dataRef 
)
Read input exposure for measurement.

Parameters
----------
dataRef : `lsst.daf.persistence.ButlerDataRef`
    Butler data reference. Only the ``calexp`` dataset is used, unless
    ``config.doApplyUberCal`` is `True`, in which case the
    corresponding meas_mosaic outputs are used as well.

Definition at line 354 of file forcedPhotCcd.py.

def getExposure(self, dataRef):
    """Read input exposure for measurement.

    Parameters
    ----------
    dataRef : `lsst.daf.persistence.ButlerDataRef`
        Butler data reference. Only the ``calexp`` dataset is used, unless
        ``config.doApplyUberCal`` is `True`, in which case the
        corresponding meas_mosaic outputs are used as well.
    """
    exposure = ForcedPhotImageTask.getExposure(self, dataRef)
    if not self.config.doApplyUberCal:
        return exposure
    if applyMosaicResults is None:
        raise RuntimeError(
            "Cannot use improved calibrations for %s because meas_mosaic could not be imported."
            % (dataRef.dataId,))
    else:
        applyMosaicResults(dataRef, calexp=exposure)
    return exposure

◆ getExposureId()

def lsst.meas.base.forcedPhotCcd.ForcedPhotCcdTask.getExposureId (   self,
  dataRef 
)

Definition at line 308 of file forcedPhotCcd.py.

def getExposureId(self, dataRef):
    return int(dataRef.get("ccdExposureId", immediate=True))

◆ getFullMetadata()

def lsst.pipe.base.task.Task.getFullMetadata (   self)
inherited
Get metadata for all tasks.

Returns
-------
metadata : `lsst.daf.base.PropertySet`
    The `~lsst.daf.base.PropertySet` keys are the full task name. Values are metadata
    for the top-level task and all subtasks, sub-subtasks, etc..

Notes
-----
The returned metadata includes timing information (if ``@timer.timeMethod`` is used)
and any metadata set by the task. The name of each item consists of the full task name
with ``.`` replaced by ``:``, followed by ``.`` and the name of the item, e.g.::

    topLevelTaskName:subtaskName:subsubtaskName.itemName

using ``:`` in the full task name disambiguates the rare situation that a task has a subtask
and a metadata item with the same name.

Definition at line 210 of file task.py.

def getFullMetadata(self):
    """Get metadata for all tasks.

    Returns
    -------
    metadata : `lsst.daf.base.PropertySet`
        The `~lsst.daf.base.PropertySet` keys are the full task name. Values are metadata
        for the top-level task and all subtasks, sub-subtasks, etc..

    Notes
    -----
    The returned metadata includes timing information (if ``@timer.timeMethod`` is used)
    and any metadata set by the task. The name of each item consists of the full task name
    with ``.`` replaced by ``:``, followed by ``.`` and the name of the item, e.g.::

        topLevelTaskName:subtaskName:subsubtaskName.itemName

    using ``:`` in the full task name disambiguates the rare situation that a task has a subtask
    and a metadata item with the same name.
    """
    fullMetadata = dafBase.PropertySet()
    for fullName, task in self.getTaskDict().items():
        fullMetadata.set(fullName.replace(".", ":"), task.metadata)
    return fullMetadata
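
A minimal usage sketch (the ``task`` variable and the printed name are
illustrative): after a run, the combined metadata can be inspected by key,
where each key is a full task name with ``.`` replaced by ``:``::

    fullMd = task.getFullMetadata()
    for name in fullMd.names():
        print(name)  # e.g. "forcedPhotCcd:measurement" (illustrative)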

◆ getFullName()

def lsst.pipe.base.task.Task.getFullName (   self)
inherited
Get the task name as a hierarchical name including parent task names.

Returns
-------
fullName : `str`
    The full name consists of the name of the parent task and each subtask separated by periods.
    For example:

    - The full name of top-level task "top" is simply "top".
    - The full name of subtask "sub" of top-level task "top" is "top.sub".
    - The full name of subtask "sub2" of subtask "sub" of top-level task "top" is "top.sub.sub2".

Definition at line 235 of file task.py.

def getFullName(self):
    """Get the task name as a hierarchical name including parent task names.

    Returns
    -------
    fullName : `str`
        The full name consists of the name of the parent task and each subtask separated by periods.
        For example:

        - The full name of top-level task "top" is simply "top".
        - The full name of subtask "sub" of top-level task "top" is "top.sub".
        - The full name of subtask "sub2" of subtask "sub" of top-level task "top" is "top.sub.sub2".
    """
    return self._fullName

◆ getInitInputDatasetTypes()

def lsst.pipe.base.pipelineTask.PipelineTask.getInitInputDatasetTypes (   cls,
  config 
)
inherited
Return dataset type descriptors that can be used to retrieve the
``initInputs`` constructor argument.

Datasets used in initialization may not be associated with any
Dimension (i.e. their data IDs must be empty dictionaries).

Default implementation finds all fields of type
`InitInputDatasetConfig` in configuration (non-recursively) and
uses them for constructing `DatasetTypeDescriptor` instances. The
names of these fields are used as keys in returned dictionary.
Subclasses can override this behavior.

Parameters
----------
config : `Config`
    Configuration for this task. Typically datasets are defined in
    a task configuration.

Returns
-------
Dictionary where key is the name (arbitrary) of the input dataset
and value is the `DatasetTypeDescriptor` instance. Default
implementation uses configuration field name as dictionary key.

When the task requires no initialization inputs, should return an
empty dict.

Definition at line 291 of file pipelineTask.py.

def getInitInputDatasetTypes(cls, config):
    """Return dataset type descriptors that can be used to retrieve the
    ``initInputs`` constructor argument.

    Datasets used in initialization may not be associated with any
    Dimension (i.e. their data IDs must be empty dictionaries).

    Default implementation finds all fields of type
    `InitInputDatasetConfig` in configuration (non-recursively) and
    uses them for constructing `DatasetTypeDescriptor` instances. The
    names of these fields are used as keys in returned dictionary.
    Subclasses can override this behavior.

    Parameters
    ----------
    config : `Config`
        Configuration for this task. Typically datasets are defined in
        a task configuration.

    Returns
    -------
    Dictionary where key is the name (arbitrary) of the input dataset
    and value is the `DatasetTypeDescriptor` instance. Default
    implementation uses configuration field name as dictionary key.

    When the task requires no initialization inputs, should return an
    empty dict.
    """
    return cls.getDatasetTypes(config, InitInputDatasetConfig)

◆ getInitOutputDatasets()

def lsst.meas.base.forcedPhotImage.ForcedPhotImageTask.getInitOutputDatasets (   self)
inherited

Definition at line 177 of file forcedPhotImage.py.

def getInitOutputDatasets(self):
    return {"outputSchema": lsst.afw.table.SourceCatalog(self.measurement.schema)}

◆ getInitOutputDatasetTypes()

def lsst.pipe.base.pipelineTask.PipelineTask.getInitOutputDatasetTypes (   cls,
  config 
)
inherited
Return dataset type descriptors that can be used to write the
objects returned by `getOutputDatasets`.

Datasets used in initialization may not be associated with any
Dimension (i.e. their data IDs must be empty dictionaries).

Default implementation finds all fields of type
`InitOutputDatasetConfig` in configuration (non-recursively) and uses
them for constructing `DatasetTypeDescriptor` instances. The names of
these fields are used as keys in returned dictionary. Subclasses can
override this behavior.

Parameters
----------
config : `Config`
    Configuration for this task. Typically datasets are defined in
    a task configuration.

Returns
-------
Dictionary where key is the name (arbitrary) of the output dataset
and value is the `DatasetTypeDescriptor` instance. Default
implementation uses configuration field name as dictionary key.

When the task produces no initialization outputs, should return an
empty dict.

Definition at line 322 of file pipelineTask.py.

def getInitOutputDatasetTypes(cls, config):
    """Return dataset type descriptors that can be used to write the
    objects returned by `getOutputDatasets`.

    Datasets used in initialization may not be associated with any
    Dimension (i.e. their data IDs must be empty dictionaries).

    Default implementation finds all fields of type
    `InitOutputDatasetConfig` in configuration (non-recursively) and uses
    them for constructing `DatasetTypeDescriptor` instances. The names of
    these fields are used as keys in returned dictionary. Subclasses can
    override this behavior.

    Parameters
    ----------
    config : `Config`
        Configuration for this task. Typically datasets are defined in
        a task configuration.

    Returns
    -------
    Dictionary where key is the name (arbitrary) of the output dataset
    and value is the `DatasetTypeDescriptor` instance. Default
    implementation uses configuration field name as dictionary key.

    When the task produces no initialization outputs, should return an
    empty dict.
    """
    return cls.getDatasetTypes(config, InitOutputDatasetConfig)

◆ getInputDatasetTypes()

def lsst.pipe.base.pipelineTask.PipelineTask.getInputDatasetTypes (   cls,
  config 
)
inherited
Return input dataset type descriptors for this task.

Default implementation finds all fields of type `InputDatasetConfig`
in configuration (non-recursively) and uses them for constructing
`DatasetTypeDescriptor` instances. The names of these fields are used
as keys in returned dictionary. Subclasses can override this behavior.

Parameters
----------
config : `Config`
    Configuration for this task. Typically datasets are defined in
    a task configuration.

Returns
-------
Dictionary where key is the name (arbitrary) of the input dataset
and value is the `DatasetTypeDescriptor` instance. Default
implementation uses configuration field name as dictionary key.

Definition at line 214 of file pipelineTask.py.

def getInputDatasetTypes(cls, config):
    """Return input dataset type descriptors for this task.

    Default implementation finds all fields of type `InputDatasetConfig`
    in configuration (non-recursively) and uses them for constructing
    `DatasetTypeDescriptor` instances. The names of these fields are used
    as keys in returned dictionary. Subclasses can override this behavior.

    Parameters
    ----------
    config : `Config`
        Configuration for this task. Typically datasets are defined in
        a task configuration.

    Returns
    -------
    Dictionary where key is the name (arbitrary) of the input dataset
    and value is the `DatasetTypeDescriptor` instance. Default
    implementation uses configuration field name as dictionary key.
    """
    return cls.getDatasetTypes(config, InputDatasetConfig)

◆ getName()

def lsst.pipe.base.task.Task.getName (   self)
inherited
Get the name of the task.

Returns
-------
taskName : `str`
    Name of the task.

See also
--------
getFullName

Definition at line 250 of file task.py.

def getName(self):
    """Get the name of the task.

    Returns
    -------
    taskName : `str`
        Name of the task.

    See also
    --------
    getFullName
    """
    return self._name

◆ getOutputDatasetTypes()

def lsst.pipe.base.pipelineTask.PipelineTask.getOutputDatasetTypes (   cls,
  config 
)
inherited
Return output dataset type descriptors for this task.

Default implementation finds all fields of type `OutputDatasetConfig`
in configuration (non-recursively) and uses them for constructing
`DatasetTypeDescriptor` instances. The keys of these fields are used
as keys in returned dictionary. Subclasses can override this behavior.

Parameters
----------
config : `Config`
    Configuration for this task. Typically datasets are defined in
    a task configuration.

Returns
-------
Dictionary where key is the name (arbitrary) of the output dataset
and value is the `DatasetTypeDescriptor` instance. Default
implementation uses configuration field name as dictionary key.

Definition at line 237 of file pipelineTask.py.

def getOutputDatasetTypes(cls, config):
    """Return output dataset type descriptors for this task.

    Default implementation finds all fields of type `OutputDatasetConfig`
    in configuration (non-recursively) and uses them for constructing
    `DatasetTypeDescriptor` instances. The keys of these fields are used
    as keys in returned dictionary. Subclasses can override this behavior.

    Parameters
    ----------
    config : `Config`
        Configuration for this task. Typically datasets are defined in
        a task configuration.

    Returns
    -------
    Dictionary where key is the name (arbitrary) of the output dataset
    and value is the `DatasetTypeDescriptor` instance. Default
    implementation uses configuration field name as dictionary key.
    """
    return cls.getDatasetTypes(config, OutputDatasetConfig)

◆ getPerDatasetTypeDimensions()

def lsst.pipe.base.pipelineTask.PipelineTask.getPerDatasetTypeDimensions (   cls,
  config 
)
inherited
Return any Dimensions that are permitted to have different values
for different DatasetTypes within the same quantum.

Parameters
----------
config : `Config`
    Configuration for this task.

Returns
-------
dimensions : `~collections.abc.Set` of `Dimension` or `str`
    The dimensions or names thereof that should be considered
    per-DatasetType.

Notes
-----
Any Dimension declared to be per-DatasetType by a PipelineTask must
also be declared to be per-DatasetType by other PipelineTasks in the
same Pipeline.

The classic example of a per-DatasetType dimension is the
``CalibrationLabel`` dimension that maps to a validity range for
master calibrations.  When running Instrument Signature Removal, one
does not care that different dataset types like flat, bias, and dark
have different validity ranges, as long as those validity ranges all
overlap the relevant observation.

Definition at line 383 of file pipelineTask.py.

def getPerDatasetTypeDimensions(cls, config):
    """Return any Dimensions that are permitted to have different values
    for different DatasetTypes within the same quantum.

    Parameters
    ----------
    config : `Config`
        Configuration for this task.

    Returns
    -------
    dimensions : `~collections.abc.Set` of `Dimension` or `str`
        The dimensions or names thereof that should be considered
        per-DatasetType.

    Notes
    -----
    Any Dimension declared to be per-DatasetType by a PipelineTask must
    also be declared to be per-DatasetType by other PipelineTasks in the
    same Pipeline.

    The classic example of a per-DatasetType dimension is the
    ``CalibrationLabel`` dimension that maps to a validity range for
    master calibrations. When running Instrument Signature Removal, one
    does not care that different dataset types like flat, bias, and dark
    have different validity ranges, as long as those validity ranges all
    overlap the relevant observation.
    """
    return frozenset()
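
As an illustration only (not part of this task), a hypothetical
PipelineTask consuming master calibrations could declare that dimension
per-DatasetType::

    class HypotheticalIsrTask(PipelineTask):

        @classmethod
        def getPerDatasetTypeDimensions(cls, config):
            # Lets flat, bias, and dark carry different validity
            # ranges within a single quantum.
            return frozenset({"CalibrationLabel"})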

◆ getPrerequisiteDatasetTypes()

def lsst.pipe.base.pipelineTask.PipelineTask.getPrerequisiteDatasetTypes (   cls,
  config 
)
inherited
Return the local names of input dataset types that should be
assumed to exist instead of constraining what data to process with
this task.

Usually, when running a `PipelineTask`, the presence of input datasets
constrains the processing to be done (as defined by the `QuantumGraph`
generated during "preflight").  "Prerequisites" are special input
datasets that do not constrain that graph, but instead cause a hard
failure when missing.  Calibration products and reference catalogs
are examples of dataset types that should usually be marked as
prerequisites.

Parameters
----------
config : `Config`
    Configuration for this task. Typically datasets are defined in
    a task configuration.

Returns
-------
prerequisite : `~collections.abc.Set` of `str`
    The keys in the dictionary returned by `getInputDatasetTypes` that
    represent dataset types that should be considered prerequisites.
    Names returned here that are not keys in that dictionary are
    ignored; that way, if a config option removes an input dataset type,
    only `getInputDatasetTypes` needs to be updated.

Definition at line 260 of file pipelineTask.py.

def getPrerequisiteDatasetTypes(cls, config):
    """Return the local names of input dataset types that should be
    assumed to exist instead of constraining what data to process with
    this task.

    Usually, when running a `PipelineTask`, the presence of input datasets
    constrains the processing to be done (as defined by the `QuantumGraph`
    generated during "preflight"). "Prerequisites" are special input
    datasets that do not constrain that graph, but instead cause a hard
    failure when missing. Calibration products and reference catalogs
    are examples of dataset types that should usually be marked as
    prerequisites.

    Parameters
    ----------
    config : `Config`
        Configuration for this task. Typically datasets are defined in
        a task configuration.

    Returns
    -------
    prerequisite : `~collections.abc.Set` of `str`
        The keys in the dictionary returned by `getInputDatasetTypes` that
        represent dataset types that should be considered prerequisites.
        Names returned here that are not keys in that dictionary are
        ignored; that way, if a config option removes an input dataset type,
        only `getInputDatasetTypes` needs to be updated.
    """
    return frozenset()
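
A hypothetical override sketch; ``refCat`` is assumed to be one of the keys
returned by `getInputDatasetTypes`. Marking it as a prerequisite makes a
missing reference catalog a hard failure rather than a constraint on the
graph::

    @classmethod
    def getPrerequisiteDatasetTypes(cls, config):
        # "refCat" must match an input dataset type key of this task.
        return frozenset({"refCat"})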

◆ getResourceConfig()

def lsst.pipe.base.pipelineTask.PipelineTask.getResourceConfig (   self)
inherited
Return resource configuration for this task.

Returns
-------
Object of type `~config.ResourceConfig` or ``None`` if resource
configuration is not defined for this task.

Definition at line 641 of file pipelineTask.py.

def getResourceConfig(self):
    """Return resource configuration for this task.

    Returns
    -------
    Object of type `~config.ResourceConfig` or ``None`` if resource
    configuration is not defined for this task.
    """
    return getattr(self.config, "resources", None)

◆ getSchemaCatalogs()

def lsst.meas.base.forcedPhotImage.ForcedPhotImageTask.getSchemaCatalogs (   self)
inherited
The schema catalogs that will be used by this task.

Returns
-------
schemaCatalogs : `dict`
    Dictionary mapping dataset type to schema catalog.

Notes
-----
There is only one schema for each type of forced measurement. The
dataset type for this measurement is defined in the mapper.

Definition at line 370 of file forcedPhotImage.py.

def getSchemaCatalogs(self):
    """The schema catalogs that will be used by this task.

    Returns
    -------
    schemaCatalogs : `dict`
        Dictionary mapping dataset type to schema catalog.

    Notes
    -----
    There is only one schema for each type of forced measurement. The
    dataset type for this measurement is defined in the mapper.
    """
    catalog = lsst.afw.table.SourceCatalog(self.measurement.schema)
    catalog.getTable().setMetadata(self.measurement.algMetadata)
    datasetType = self.dataPrefix + "forced_src"
    return {datasetType: catalog}

◆ getTaskDict()

def lsst.pipe.base.task.Task.getTaskDict (   self)
inherited
Get a dictionary of all tasks as a shallow copy.

Returns
-------
taskDict : `dict`
    Dictionary containing full task name: task object for the top-level task and all subtasks,
    sub-subtasks, etc..

Definition at line 264 of file task.py.

def getTaskDict(self):
    """Get a dictionary of all tasks as a shallow copy.

    Returns
    -------
    taskDict : `dict`
        Dictionary containing full task name: task object for the top-level task and all subtasks,
        sub-subtasks, etc..
    """
    return self._taskDict.copy()

◆ makeField()

def lsst.pipe.base.task.Task.makeField (   cls,
  doc 
)
inherited
Make a `lsst.pex.config.ConfigurableField` for this task.

Parameters
----------
doc : `str`
    Help text for the field.

Returns
-------
configurableField : `lsst.pex.config.ConfigurableField`
    A `~ConfigurableField` for this task.

Examples
--------
Provides a convenient way to specify this task is a subtask of another task.

Here is an example of use::

    class OtherTaskConfig(lsst.pex.config.Config):
        aSubtask = ATaskClass.makeField("a brief description of what this task does")

Definition at line 329 of file task.py.

def makeField(cls, doc):
    """Make a `lsst.pex.config.ConfigurableField` for this task.

    Parameters
    ----------
    doc : `str`
        Help text for the field.

    Returns
    -------
    configurableField : `lsst.pex.config.ConfigurableField`
        A `~ConfigurableField` for this task.

    Examples
    --------
    Provides a convenient way to specify this task is a subtask of another task.

    Here is an example of use::

        class OtherTaskConfig(lsst.pex.config.Config):
            aSubtask = ATaskClass.makeField("a brief description of what this task does")
    """
    return ConfigurableField(doc=doc, target=cls)

◆ makeIdFactory()

def lsst.meas.base.forcedPhotCcd.ForcedPhotCcdTask.makeIdFactory (   self,
  dataRef 
)
Create an object that generates globally unique source IDs.

Source IDs are created based on a per-CCD ID and the ID of the CCD
itself.

Parameters
----------
dataRef : `lsst.daf.persistence.ButlerDataRef`
    Butler data reference. The ``ccdExposureId_bits`` and
    ``ccdExposureId`` datasets are accessed. The data ID must have the
    keys that correspond to ``ccdExposureId``, which are generally the
    same as those that correspond to ``calexp`` (``visit``, ``raft``,
    ``sensor`` for LSST data).

Definition at line 289 of file forcedPhotCcd.py.

def makeIdFactory(self, dataRef):
    """Create an object that generates globally unique source IDs.

    Source IDs are created based on a per-CCD ID and the ID of the CCD
    itself.

    Parameters
    ----------
    dataRef : `lsst.daf.persistence.ButlerDataRef`
        Butler data reference. The ``ccdExposureId_bits`` and
        ``ccdExposureId`` datasets are accessed. The data ID must have the
        keys that correspond to ``ccdExposureId``, which are generally the
        same as those that correspond to ``calexp`` (``visit``, ``raft``,
        ``sensor`` for LSST data).
    """
    expBits = dataRef.get("ccdExposureId_bits")
    expId = int(dataRef.get("ccdExposureId"))
    return lsst.afw.table.IdFactory.makeSource(expId, 64 - expBits)

◆ makeSubtask()

def lsst.pipe.base.task.Task.makeSubtask (   self,
  name,
  **keyArgs 
)
inherited
Create a subtask as a new instance as the ``name`` attribute of this task.

Parameters
----------
name : `str`
    Brief name of the subtask.
keyArgs
    Extra keyword arguments used to construct the task. The following arguments are automatically
    provided and cannot be overridden:

    - "config".
    - "parentTask".

Notes
-----
The subtask must be defined by ``Task.config.name``, an instance of pex_config ConfigurableField
or RegistryField.

Definition at line 275 of file task.py.

def makeSubtask(self, name, **keyArgs):
    """Create a subtask as a new instance as the ``name`` attribute of this task.

    Parameters
    ----------
    name : `str`
        Brief name of the subtask.
    keyArgs
        Extra keyword arguments used to construct the task. The following arguments are automatically
        provided and cannot be overridden:

        - "config".
        - "parentTask".

    Notes
    -----
    The subtask must be defined by ``Task.config.name``, an instance of pex_config ConfigurableField
    or RegistryField.
    """
    taskField = getattr(self.config, name, None)
    if taskField is None:
        raise KeyError("%s's config does not have field %r" % (self.getFullName(), name))
    subtask = taskField.apply(name=name, parentTask=self, **keyArgs)
    setattr(self, name, subtask)
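
A minimal usage sketch (the config class and field name are assumed): a
task constructor typically calls `makeSubtask` once per
`~lsst.pex.config.ConfigurableField` in its ``ConfigClass``::

    class ExampleTask(lsst.pipe.base.Task):
        ConfigClass = ExampleConfig  # assumed to define a "references" field

        def __init__(self, **kwargs):
            super().__init__(**kwargs)
            self.makeSubtask("references")  # instance becomes self.references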

◆ parseAndRun()

def lsst.pipe.base.cmdLineTask.CmdLineTask.parseAndRun (   cls,
  args = None,
  config = None,
  log = None,
  doReturnResults = False 
)
inherited
Parse an argument list and run the command.

Parameters
----------
args : `list`, optional
    List of command-line arguments; if `None` use `sys.argv`.
config : `lsst.pex.config.Config`-type, optional
    Config for task. If `None` use `Task.ConfigClass`.
log : `lsst.log.Log`-type, optional
    Log. If `None` use the default log.
doReturnResults : `bool`, optional
    If `True`, return the results of this task. Default is `False`. This is only intended for
    unit tests and similar use. It can easily exhaust memory (if the task returns enough data and you
    call it enough times) and it will fail when using multiprocessing if the returned data cannot be
    pickled.

Returns
-------
struct : `lsst.pipe.base.Struct`
    Fields are:

    ``argumentParser``
        the argument parser (`lsst.pipe.base.ArgumentParser`).
    ``parsedCmd``
        the parsed command returned by the argument parser's
        `~lsst.pipe.base.ArgumentParser.parse_args` method
        (`argparse.Namespace`).
    ``taskRunner``
        the task runner used to run the task (an instance of
        `Task.RunnerClass`).
    ``resultList``
        results returned by the task runner's ``run`` method, one entry
        per invocation (`list`). This will typically be a list of
        `Struct`, each containing at least an ``exitStatus`` integer
        (0 or 1); see `Task.RunnerClass` (`TaskRunner` by default) for
        more details.

Notes
-----
Calling this method with no arguments specified is the standard way to run a command-line task
from the command-line. For an example see ``pipe_tasks`` ``bin/makeSkyMap.py`` or almost any other
file in that directory.

If one or more of the dataIds fails then this routine will exit (with a
status giving the number of failed dataIds) rather than returning this
struct; this behaviour can be overridden by specifying the ``--noExit``
command-line option.

Definition at line 549 of file cmdLineTask.py.

def parseAndRun(cls, args=None, config=None, log=None, doReturnResults=False):
    """Parse an argument list and run the command.

    Parameters
    ----------
    args : `list`, optional
        List of command-line arguments; if `None` use `sys.argv`.
    config : `lsst.pex.config.Config`-type, optional
        Config for task. If `None` use `Task.ConfigClass`.
    log : `lsst.log.Log`-type, optional
        Log. If `None` use the default log.
    doReturnResults : `bool`, optional
        If `True`, return the results of this task. Default is `False`.
        This is only intended for unit tests and similar use. It can
        easily exhaust memory (if the task returns enough data and you
        call it enough times) and it will fail when using multiprocessing
        if the returned data cannot be pickled.

    Returns
    -------
    struct : `lsst.pipe.base.Struct`
        Fields are:

        ``argumentParser``
            the argument parser (`lsst.pipe.base.ArgumentParser`).
        ``parsedCmd``
            the parsed command returned by the argument parser's
            `~lsst.pipe.base.ArgumentParser.parse_args` method
            (`argparse.Namespace`).
        ``taskRunner``
            the task runner used to run the task (an instance of
            `Task.RunnerClass`).
        ``resultList``
            results returned by the task runner's ``run`` method, one
            entry per invocation (`list`). This will typically be a list
            of `Struct`, each containing at least an ``exitStatus``
            integer (0 or 1); see `Task.RunnerClass` (`TaskRunner` by
            default) for more details.

    Notes
    -----
    Calling this method with no arguments specified is the standard way
    to run a command-line task from the command-line. For an example see
    ``pipe_tasks`` ``bin/makeSkyMap.py`` or almost any other file in that
    directory.

    If one or more of the dataIds fails then this routine will exit (with
    a status giving the number of failed dataIds) rather than returning
    this struct; this behaviour can be overridden by specifying the
    ``--noExit`` command-line option.
    """
    if args is None:
        commandAsStr = " ".join(sys.argv)
        args = sys.argv[1:]
    else:
        commandAsStr = "{}{}".format(lsst.utils.get_caller_name(skip=1), tuple(args))

    argumentParser = cls._makeArgumentParser()
    if config is None:
        config = cls.ConfigClass()
    parsedCmd = argumentParser.parse_args(config=config, args=args, log=log, override=cls.applyOverrides)
    # print this message after parsing the command so the log is fully configured
    parsedCmd.log.info("Running: %s", commandAsStr)

    taskRunner = cls.RunnerClass(TaskClass=cls, parsedCmd=parsedCmd, doReturnResults=doReturnResults)
    resultList = taskRunner.run(parsedCmd)

    try:
        nFailed = sum(((res.exitStatus != 0) for res in resultList))
    except (TypeError, AttributeError) as e:
        # NOTE: TypeError if resultList is None, AttributeError if it doesn't have exitStatus.
        parsedCmd.log.warn("Unable to retrieve exit status (%s); assuming success", e)
        nFailed = 0

    if nFailed > 0:
        if parsedCmd.noExit:
            parsedCmd.log.error("%d dataRefs failed; not exiting as --noExit was set", nFailed)
        else:
            sys.exit(nFailed)

    return Struct(
        argumentParser=argumentParser,
        parsedCmd=parsedCmd,
        taskRunner=taskRunner,
        resultList=resultList,
    )
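
A typical executable wrapper is only a few lines; the sketch below
mirrors the ``pipe_tasks`` bin-script pattern for this class (the script
itself is illustrative)::

    #!/usr/bin/env python
    # Hypothetical bin script, following the bin/makeSkyMap.py pattern.
    from lsst.meas.base.forcedPhotCcd import ForcedPhotCcdTask

    # Parses sys.argv, builds the task via its RunnerClass, and runs it
    # once per data reference selected by the command-line --id options.
    ForcedPhotCcdTask.parseAndRun()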

◆ run() [1/2]

def lsst.meas.base.forcedPhotImage.ForcedPhotImageTask.run (   self,
  measCat,
  exposure,
  refCat,
  refWcs,
  exposureId = None 
)
inherited
Perform forced measurement on a single exposure.

Parameters
----------
measCat : `lsst.afw.table.SourceCatalog`
    The measurement catalog, based on the sources listed in the
    reference catalog.
exposure : `lsst.afw.image.Exposure`
    The measurement image upon which to perform forced detection.
refCat : `lsst.afw.table.SourceCatalog`
    The reference catalog of sources to measure.
refWcs : `lsst.afw.image.SkyWcs`
    The WCS for the references.
exposureId : `int`
    Optional unique exposureId used for random seed in measurement
    task.

Returns
-------
result : `lsst.pipe.base.Struct`
    Structure with fields:

    ``measCat``
        Catalog of forced measurement results
        (`lsst.afw.table.SourceCatalog`).

Definition at line 263 of file forcedPhotImage.py.

def run(self, measCat, exposure, refCat, refWcs, exposureId=None):
    """Perform forced measurement on a single exposure.

    Parameters
    ----------
    measCat : `lsst.afw.table.SourceCatalog`
        The measurement catalog, based on the sources listed in the
        reference catalog.
    exposure : `lsst.afw.image.Exposure`
        The measurement image upon which to perform forced detection.
    refCat : `lsst.afw.table.SourceCatalog`
        The reference catalog of sources to measure.
    refWcs : `lsst.afw.image.SkyWcs`
        The WCS for the references.
    exposureId : `int`
        Optional unique exposureId used for random seed in measurement
        task.

    Returns
    -------
    result : `lsst.pipe.base.Struct`
        Structure with fields:

        ``measCat``
            Catalog of forced measurement results
            (`lsst.afw.table.SourceCatalog`).
    """
    self.measurement.run(measCat, exposure, refCat, refWcs, exposureId=exposureId)
    if self.config.doApCorr:
        self.applyApCorr.run(
            catalog=measCat,
            apCorrMap=exposure.getInfo().getApCorrMap()
        )
    self.catalogCalculation.run(measCat)

    return lsst.pipe.base.Struct(measCat=measCat)
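
With inputs already in memory (as prepared by ``runDataRef`` below), the
call and result unpacking are straightforward; here ``task``,
``measCat``, ``exposure``, ``refCat``, ``refWcs``, and ``expId`` are
assumed to be in hand::

    # Sketch: forced measurement on pre-loaded inputs.
    result = task.run(measCat, exposure, refCat, refWcs, exposureId=expId)
    forcedSources = result.measCat  # lsst.afw.table.SourceCatalog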

◆ run() [2/2]

def lsst.pipe.base.pipelineTask.PipelineTask.run (   self,
  kwargs 
)
inherited
Run task algorithm on in-memory data.

This method should be implemented in a subclass unless the task overrides
`adaptArgsAndRun` to do something different from its default
implementation. With the default implementation of `adaptArgsAndRun` this
method will receive keyword arguments whose names will be the same as the
names of the configuration fields describing input dataset types. Argument
values will be data objects retrieved from the data butler. If a dataset
type is configured with its ``scalar`` field set to ``True`` then the
argument value will be a single object, otherwise it will be a list of
objects.

If the task needs to know its input or output DataIds then it has to
override `adaptArgsAndRun` method instead.

Returns
-------
struct : `Struct`
    See description of `adaptArgsAndRun` method.

Examples
--------
Typical implementation of this method may look like::

    def run(self, input, calib):
        # "input", "calib", and "output" are the names of the config fields

        # Assuming that input/calib datasets are `scalar` they are simple objects,
        # do something with inputs and calibs, produce output image.
        image = self.makeImage(input, calib)

        # If output dataset is `scalar` then return object, not list
        return Struct(output=image)

Definition at line 469 of file pipelineTask.py.

def run(self, **kwargs):
    """Run task algorithm on in-memory data.

    This method should be implemented in a subclass unless the task
    overrides `adaptArgsAndRun` to do something different from its
    default implementation. With the default implementation of
    `adaptArgsAndRun` this method will receive keyword arguments whose
    names will be the same as the names of the configuration fields
    describing input dataset types. Argument values will be data objects
    retrieved from the data butler. If a dataset type is configured with
    its ``scalar`` field set to ``True`` then the argument value will be
    a single object, otherwise it will be a list of objects.

    If the task needs to know its input or output DataIds then it has to
    override `adaptArgsAndRun` method instead.

    Returns
    -------
    struct : `Struct`
        See description of `adaptArgsAndRun` method.

    Examples
    --------
    Typical implementation of this method may look like::

        def run(self, input, calib):
            # "input", "calib", and "output" are the names of the config fields

            # Assuming that input/calib datasets are `scalar` they are simple objects,
            # do something with inputs and calibs, produce output image.
            image = self.makeImage(input, calib)

            # If output dataset is `scalar` then return object, not list
            return Struct(output=image)
    """
    raise NotImplementedError("run() is not implemented")

◆ runDataRef()

def lsst.meas.base.forcedPhotImage.ForcedPhotImageTask.runDataRef (   self,
  dataRef,
  psfCache = None 
)
inherited
Perform forced measurement on a single exposure.

Parameters
----------
dataRef : `lsst.daf.persistence.ButlerDataRef`
    Passed to the ``references`` subtask to obtain the reference WCS,
    the ``getExposure`` method (implemented by derived classes) to
    read the measurement image, and the ``fetchReferences`` method to
    get the exposure and load the reference catalog (see
    :lsst-task:`lsst.meas.base.references.CoaddSrcReferencesTask`).
    Refer to derived class documentation for details of the datasets
    and data ID keys which are used.
psfCache : `int`, optional
    Size of PSF cache, or `None`. The size of the PSF cache can have
    a significant effect upon the runtime for complicated PSF models.

Notes
-----
Sources are generated with ``generateMeasCat`` in the ``measurement``
subtask. These are passed to ``measurement``'s ``run`` method, which
fills the source catalog with the forced measurement results. The
sources are then passed to the ``writeOutput`` method (implemented by
derived classes) which writes the outputs.

Definition at line 221 of file forcedPhotImage.py.

def runDataRef(self, dataRef, psfCache=None):
    """Perform forced measurement on a single exposure.

    Parameters
    ----------
    dataRef : `lsst.daf.persistence.ButlerDataRef`
        Passed to the ``references`` subtask to obtain the reference WCS,
        the ``getExposure`` method (implemented by derived classes) to
        read the measurement image, and the ``fetchReferences`` method to
        get the exposure and load the reference catalog (see
        :lsst-task:`lsst.meas.base.references.CoaddSrcReferencesTask`).
        Refer to derived class documentation for details of the datasets
        and data ID keys which are used.
    psfCache : `int`, optional
        Size of PSF cache, or `None`. The size of the PSF cache can have
        a significant effect upon the runtime for complicated PSF models.

    Notes
    -----
    Sources are generated with ``generateMeasCat`` in the ``measurement``
    subtask. These are passed to ``measurement``'s ``run`` method, which
    fills the source catalog with the forced measurement results. The
    sources are then passed to the ``writeOutput`` method (implemented by
    derived classes) which writes the outputs.
    """
    refWcs = self.references.getWcs(dataRef)
    exposure = self.getExposure(dataRef)
    if psfCache is not None:
        exposure.getPsf().setCacheSize(psfCache)
    refCat = self.fetchReferences(dataRef, exposure)

    measCat = self.measurement.generateMeasCat(exposure, refCat, refWcs,
                                               idFactory=self.makeIdFactory(dataRef))
    self.log.info("Performing forced measurement on %s" % (dataRef.dataId,))
    self.attachFootprints(measCat, refCat, exposure, refWcs, dataRef)

    exposureId = self.getExposureId(dataRef)

    forcedPhotResult = self.run(measCat, exposure, refCat, refWcs, exposureId=exposureId)

    self.writeOutput(dataRef, forcedPhotResult.measCat)
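
Driving this method directly from Python, rather than through
``parseAndRun``, might look like the following sketch (the repository
path and data ID are placeholders)::

    import lsst.daf.persistence as dafPersist
    from lsst.meas.base.forcedPhotCcd import ForcedPhotCcdTask

    # Hypothetical Gen2 repository and data ID; substitute your own.
    butler = dafPersist.Butler("/path/to/repo")
    dataRef = butler.dataRef("calexp", dataId={"visit": 903334, "ccd": 23})

    task = ForcedPhotCcdTask(butler=butler)
    task.runDataRef(dataRef, psfCache=100)  # writes the forced_src dataset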

◆ runQuantum()

def lsst.pipe.base.pipelineTask.PipelineTask.runQuantum (   self,
  quantum,
  butler 
)
inherited
Execute PipelineTask algorithm on single quantum of data.

Typical implementation of this method will use inputs from quantum
to retrieve Python-domain objects from data butler and call
`adaptArgsAndRun` method on that data. On return from
`adaptArgsAndRun` this method will extract data from returned
`Struct` instance and save that data to butler.

The `Struct` returned from `adaptArgsAndRun` is expected to contain
data attributes with the names equal to the names of the
configuration fields defining output dataset types. The values of
the data attributes must be data objects corresponding to
the DataIds of output dataset types. All data objects will be
saved in butler using DataRefs from Quantum's output dictionary.

This method does not return anything to the caller; on error a
corresponding exception is raised.

Parameters
----------
quantum : `Quantum`
    Object describing input and output corresponding to this
    invocation of PipelineTask instance.
butler : object
    Data butler instance.

Raises
------
ScalarError
    Raised if a dataset type is configured as scalar but receives
    multiple DataIds in `quantum`. Any exceptions raised by the data
    butler or by the `adaptArgsAndRun` method are propagated.

Definition at line 506 of file pipelineTask.py.

def runQuantum(self, quantum, butler):
    """Execute PipelineTask algorithm on single quantum of data.

    Typical implementation of this method will use inputs from quantum
    to retrieve Python-domain objects from data butler and call
    `adaptArgsAndRun` method on that data. On return from
    `adaptArgsAndRun` this method will extract data from returned
    `Struct` instance and save that data to butler.

    The `Struct` returned from `adaptArgsAndRun` is expected to contain
    data attributes with the names equal to the names of the
    configuration fields defining output dataset types. The values of
    the data attributes must be data objects corresponding to
    the DataIds of output dataset types. All data objects will be
    saved in butler using DataRefs from Quantum's output dictionary.

    This method does not return anything to the caller; on error a
    corresponding exception is raised.

    Parameters
    ----------
    quantum : `Quantum`
        Object describing input and output corresponding to this
        invocation of PipelineTask instance.
    butler : object
        Data butler instance.

    Raises
    ------
    ScalarError
        Raised if a dataset type is configured as scalar but receives
        multiple DataIds in ``quantum``. Any exceptions raised by the
        data butler or by the `adaptArgsAndRun` method are propagated.
    """

    def makeDataRefs(descriptors, refMap):
        """Generate map of DatasetRefs and DataIds.

        Given a map of DatasetTypeDescriptor and a map of Quantum
        DatasetRefs makes maps of DataIds and DatasetRefs.
        For scalar dataset types unpacks DatasetRefs and DataIds.

        Parameters
        ----------
        descriptors : `dict`
            Map of (dataset key, DatasetTypeDescriptor).
        refMap : `dict`
            Map of (dataset type name, DatasetRefs).

        Returns
        -------
        dataIds : `dict`
            Map of (dataset key, DataIds)
        dataRefs : `dict`
            Map of (dataset key, DatasetRefs)

        Raises
        ------
        ScalarError
            Raised if dataset type is configured as scalar but more than
            one DatasetRef exists for it.
        """
        dataIds = {}
        dataRefs = {}
        for key, descriptor in descriptors.items():
            datasetType = descriptor.makeDatasetType(butler.registry.dimensions)
            keyDataRefs = refMap[datasetType.name]
            keyDataIds = [dataRef.dataId for dataRef in keyDataRefs]
            if descriptor.scalar:
                # unpack single-item lists
                if len(keyDataRefs) != 1:
                    raise ScalarError(key, len(keyDataRefs))
                keyDataRefs = keyDataRefs[0]
                keyDataIds = keyDataIds[0]
            dataIds[key] = keyDataIds
            if not descriptor.manualLoad:
                dataRefs[key] = keyDataRefs
        return dataIds, dataRefs

    # lists of DataRefs/DataIds for input datasets
    descriptors = self.getInputDatasetTypes(self.config)
    inputDataIds, inputDataRefs = makeDataRefs(descriptors, quantum.predictedInputs)

    # get all data from butler
    inputs = {}
    for key, dataRefs in inputDataRefs.items():
        if isinstance(dataRefs, list):
            inputs[key] = [butler.get(dataRef) for dataRef in dataRefs]
        else:
            inputs[key] = butler.get(dataRefs)
    del inputDataRefs

    # lists of DataRefs/DataIds for output datasets
    descriptors = self.getOutputDatasetTypes(self.config)
    outputDataIds, outputDataRefs = makeDataRefs(descriptors, quantum.outputs)

    # call run method with keyword arguments
    struct = self.adaptArgsAndRun(inputs, inputDataIds, outputDataIds, butler)

    # store produced output data
    self.saveStruct(struct, outputDataRefs, butler)
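
The scalar-unpacking convention enforced by ``makeDataRefs`` can be shown
in isolation; the sketch below uses a plain function and list as
stand-ins for the real DatasetRef machinery::

    # Sketch only: mimic the scalar-vs-list convention described above.
    def unpackIfScalar(scalar, refs):
        """Return refs[0] for scalar dataset types, refs otherwise."""
        if scalar:
            if len(refs) != 1:
                # Mirrors ScalarError: a scalar type needs exactly one ref.
                raise ValueError("scalar dataset type has %d refs" % len(refs))
            return refs[0]
        return refs

    assert unpackIfScalar(True, ["onlyRef"]) == "onlyRef"
    assert unpackIfScalar(False, ["onlyRef"]) == ["onlyRef"]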

◆ saveStruct()

def lsst.pipe.base.pipelineTask.PipelineTask.saveStruct (   self,
  struct,
  outputDataRefs,
  butler 
)
inherited
Save data in butler.

Convention is that the struct returned from the ``run()`` method has data
field(s) with the same names as the config fields defining output
DatasetTypes. Subclasses may override this method to implement a
different convention for `Struct` content or in case any post-processing
of the data is needed.

Parameters
----------
struct : `Struct`
    Data produced by the task packed into `Struct` instance
outputDataRefs : `dict`
    Dictionary whose keys are the names of the configuration fields
    describing output dataset types and values are lists of DataRefs.
    DataRefs must match corresponding data objects in ``struct`` in
    number and order.
butler : object
    Data butler instance.

Definition at line 607 of file pipelineTask.py.

def saveStruct(self, struct, outputDataRefs, butler):
    """Save data in butler.

    Convention is that the struct returned from the ``run()`` method has
    data field(s) with the same names as the config fields defining
    output DatasetTypes. Subclasses may override this method to implement
    a different convention for `Struct` content or in case any
    post-processing of the data is needed.

    Parameters
    ----------
    struct : `Struct`
        Data produced by the task packed into `Struct` instance
    outputDataRefs : `dict`
        Dictionary whose keys are the names of the configuration fields
        describing output dataset types and values are lists of DataRefs.
        DataRefs must match corresponding data objects in ``struct`` in
        number and order.
    butler : object
        Data butler instance.
    """
    structDict = struct.getDict()
    descriptors = self.getOutputDatasetTypes(self.config)
    for key in descriptors.keys():
        dataList = structDict[key]
        dataRefs = outputDataRefs[key]
        if not isinstance(dataRefs, list):
            # scalar outputs, make them lists again
            dataRefs = [dataRefs]
            dataList = [dataList]
        # TODO: check that data objects and data refs are aligned
        for dataRef, data in zip(dataRefs, dataList):
            butler.put(data, dataRef.datasetType.name, dataRef.dataId)
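
The naming convention is easy to check in isolation: an output dataset
field called ``measCat`` (an assumed name) must appear as an identically
named attribute of the returned `Struct`::

    from lsst.pipe.base import Struct

    measCat = ["src1", "src2"]  # placeholder for a SourceCatalog
    # Pack outputs under the same names as the output dataset config fields.
    outputs = Struct(measCat=measCat)
    assert outputs.getDict()["measCat"] is measCat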

◆ timer()

def lsst.pipe.base.task.Task.timer (   self,
  name,
  logLevel = Log.DEBUG 
)
inherited
Context manager to log performance data for an arbitrary block of code.

Parameters
----------
name : `str`
    Name of code being timed; data will be logged using item name: ``Start`` and ``End``.
logLevel
    A `lsst.log` level constant.

Examples
--------
Creating a timer context::

    with self.timer("someCodeToTime"):
        pass  # code to time

See also
--------
timer.logInfo

Definition at line 301 of file task.py.

def timer(self, name, logLevel=Log.DEBUG):
    """Context manager to log performance data for an arbitrary block of code.

    Parameters
    ----------
    name : `str`
        Name of code being timed; data will be logged using item name: ``Start`` and ``End``.
    logLevel
        A `lsst.log` level constant.

    Examples
    --------
    Creating a timer context::

        with self.timer("someCodeToTime"):
            pass  # code to time

    See also
    --------
    timer.logInfo
    """
    logInfo(obj=self, prefix=name + "Start", logLevel=logLevel)
    try:
        yield
    finally:
        logInfo(obj=self, prefix=name + "End", logLevel=logLevel)
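
The log level can be raised so the timings appear in normal logs; a
short sketch (``fitBackground`` is a hypothetical timed step)::

    from lsst.log import Log

    # Time a block at INFO level instead of the default DEBUG.
    with self.timer("fitBackground", logLevel=Log.INFO):
        background = self.fitBackground(exposure)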

◆ writeConfig()

def lsst.pipe.base.cmdLineTask.CmdLineTask.writeConfig (   self,
  butler,
  clobber = False,
  doBackup = True 
)
inherited
Write the configuration used for processing the data, or check that an existing
one is equal to the new one if present.

Parameters
----------
butler : `lsst.daf.persistence.Butler`
    Data butler used to write the config. The config is written to dataset type
    `CmdLineTask._getConfigName`.
clobber : `bool`, optional
    A boolean flag that controls what happens if a config already has been saved:
    - `True`: overwrite or rename the existing config, depending on ``doBackup``.
    - `False`: raise `TaskError` if this config does not match the existing config.
doBackup : `bool`, optional
    Set to `True` to backup the config files if clobbering.

Definition at line 656 of file cmdLineTask.py.

def writeConfig(self, butler, clobber=False, doBackup=True):
    """Write the configuration used for processing the data, or check that an existing
    one is equal to the new one if present.

    Parameters
    ----------
    butler : `lsst.daf.persistence.Butler`
        Data butler used to write the config. The config is written to dataset type
        `CmdLineTask._getConfigName`.
    clobber : `bool`, optional
        A boolean flag that controls what happens if a config already has been saved:
        - `True`: overwrite or rename the existing config, depending on ``doBackup``.
        - `False`: raise `TaskError` if this config does not match the existing config.
    doBackup : `bool`, optional
        Set to `True` to backup the config files if clobbering.
    """
    configName = self._getConfigName()
    if configName is None:
        return
    if clobber:
        butler.put(self.config, configName, doBackup=doBackup)
    elif butler.datasetExists(configName, write=True):
        # this may be subject to a race condition; see #2789
        try:
            oldConfig = butler.get(configName, immediate=True)
        except Exception as exc:
            raise type(exc)("Unable to read stored config file %s (%s); consider using --clobber-config" %
                            (configName, exc))

        def logConfigMismatch(msg):
            self.log.fatal("Comparing configuration: %s", msg)

        if not self.config.compare(oldConfig, shortcut=False, output=logConfigMismatch):
            raise TaskError(
                ("Config does not match existing task config %r on disk; task configurations " +
                 "must be consistent within the same output repo (override with --clobber-config)") %
                (configName,))
    else:
        butler.put(self.config, configName)
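
The consistency check relies on `lsst.pex.config.Config.compare`; the
sketch below exercises the comparison in isolation (``DemoConfig`` is a
placeholder config class)::

    import lsst.pex.config as pexConfig

    class DemoConfig(pexConfig.Config):
        threshold = pexConfig.Field(dtype=float, default=5.0, doc="demo field")

    new, old = DemoConfig(), DemoConfig()
    old.threshold = 10.0

    # compare returns False on mismatch and reports details via `output`.
    matches = new.compare(old, shortcut=False, output=print)
    print("configs match:", matches)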

◆ writeMetadata()

def lsst.pipe.base.cmdLineTask.CmdLineTask.writeMetadata (   self,
  dataRef 
)
inherited
Write the metadata produced from processing the data.

Parameters
----------
dataRef
    Butler data reference used to write the metadata.
    The metadata is written to dataset type `CmdLineTask._getMetadataName`.

Definition at line 731 of file cmdLineTask.py.

def writeMetadata(self, dataRef):
    """Write the metadata produced from processing the data.

    Parameters
    ----------
    dataRef
        Butler data reference used to write the metadata.
        The metadata is written to dataset type `CmdLineTask._getMetadataName`.
    """
    try:
        metadataName = self._getMetadataName()
        if metadataName is not None:
            dataRef.put(self.getFullMetadata(), metadataName)
    except Exception as e:
        self.log.warn("Could not persist metadata for dataId=%s: %s", dataRef.dataId, e)

◆ writeOutput()

def lsst.meas.base.forcedPhotImage.ForcedPhotImageTask.writeOutput (   self,
  dataRef,
  sources 
)
inherited
Write forced source table.

Parameters
----------
dataRef : `lsst.daf.persistence.ButlerDataRef`
    Butler data reference. The forced_src dataset (with
    self.dataPrefix prepended) is all that will be modified.
sources : `lsst.afw.table.SourceCatalog`
    Catalog of sources to save.

Definition at line 357 of file forcedPhotImage.py.

def writeOutput(self, dataRef, sources):
    """Write forced source table.

    Parameters
    ----------
    dataRef : `lsst.daf.persistence.ButlerDataRef`
        Butler data reference. The forced_src dataset (with
        self.dataPrefix prepended) is all that will be modified.
    sources : `lsst.afw.table.SourceCatalog`
        Catalog of sources to save.
    """
    dataRef.put(sources, self.dataPrefix + "forced_src", flags=lsst.afw.table.SOURCE_IO_NO_FOOTPRINTS)
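
The persisted catalog can be read back through the same butler; a sketch
with placeholder repository path and data ID::

    import lsst.daf.persistence as dafPersist

    # For ForcedPhotCcdTask, dataPrefix is "", so the dataset type is
    # simply "forced_src".
    butler = dafPersist.Butler("/path/to/repo")
    catalog = butler.get("forced_src", dataId={"visit": 903334, "ccd": 23})
    print("read %d forced sources" % len(catalog))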

◆ writePackageVersions()

def lsst.pipe.base.cmdLineTask.CmdLineTask.writePackageVersions (   self,
  butler,
  clobber = False,
  doBackup = True,
  dataset = "packages" 
)
inherited
Compare and write package versions.

Parameters
----------
butler : `lsst.daf.persistence.Butler`
    Data butler used to read/write the package versions.
clobber : `bool`, optional
    A boolean flag that controls what happens if versions already have been saved:
    - `True`: overwrite or rename the existing version info, depending on ``doBackup``.
    - `False`: raise `TaskError` if this version info does not match the existing.
doBackup : `bool`, optional
    If `True` and clobbering, old package version files are backed up.
dataset : `str`, optional
    Name of dataset to read/write.

Raises
------
TaskError
    Raised if there is a version mismatch with current and persisted lists of package versions.

Notes
-----
Note that this operation is subject to a race condition.

Definition at line 747 of file cmdLineTask.py.

def writePackageVersions(self, butler, clobber=False, doBackup=True, dataset="packages"):
    """Compare and write package versions.

    Parameters
    ----------
    butler : `lsst.daf.persistence.Butler`
        Data butler used to read/write the package versions.
    clobber : `bool`, optional
        A boolean flag that controls what happens if versions already have been saved:
        - `True`: overwrite or rename the existing version info, depending on ``doBackup``.
        - `False`: raise `TaskError` if this version info does not match the existing.
    doBackup : `bool`, optional
        If `True` and clobbering, old package version files are backed up.
    dataset : `str`, optional
        Name of dataset to read/write.

    Raises
    ------
    TaskError
        Raised if there is a version mismatch with current and persisted lists of package versions.

    Notes
    -----
    Note that this operation is subject to a race condition.
    """
    packages = Packages.fromSystem()

    if clobber:
        return butler.put(packages, dataset, doBackup=doBackup)
    if not butler.datasetExists(dataset, write=True):
        return butler.put(packages, dataset)

    try:
        old = butler.get(dataset, immediate=True)
    except Exception as exc:
        raise type(exc)("Unable to read stored version dataset %s (%s); "
                        "consider using --clobber-versions or --no-versions" %
                        (dataset, exc))
    # Note that because we can only detect python modules that have been imported, the stored
    # list of products may be more or less complete than what we have now. What's important is
    # that the products that are in common have the same version.
    diff = packages.difference(old)
    if diff:
        raise TaskError(
            "Version mismatch (" +
            "; ".join("%s: %s vs %s" % (pkg, diff[pkg][1], diff[pkg][0]) for pkg in diff) +
            "); consider using --clobber-versions or --no-versions")
    # Update the old set of packages in case we have more packages that haven't been persisted.
    extra = packages.extra(old)
    if extra:
        old.update(packages)
        butler.put(old, dataset, doBackup=doBackup)
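
`lsst.base.Packages` can be exercised directly to see what would be
persisted; a short sketch::

    from lsst.base import Packages

    # Snapshot the versions of all currently imported products.
    packages = Packages.fromSystem()

    # difference() against another snapshot reports products whose
    # versions disagree; two back-to-back snapshots should match.
    old = Packages.fromSystem()
    print("mismatches:", packages.difference(old))  # expect {}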

◆ writeSchemas()

def lsst.pipe.base.cmdLineTask.CmdLineTask.writeSchemas (   self,
  butler,
  clobber = False,
  doBackup = True 
)
inherited
Write the schemas returned by `lsst.pipe.base.Task.getAllSchemaCatalogs`.

Parameters
----------
butler : `lsst.daf.persistence.Butler`
    Data butler used to write the schema. Each schema is written to the dataset type specified as the
    key in the dict returned by `~lsst.pipe.base.Task.getAllSchemaCatalogs`.
clobber : `bool`, optional
    A boolean flag that controls what happens if a schema already has been saved:
    - `True`: overwrite or rename the existing schema, depending on ``doBackup``.
    - `False`: raise `TaskError` if this schema does not match the existing schema.
doBackup : `bool`, optional
    Set to `True` to backup the schema files if clobbering.

Notes
-----
If ``clobber`` is `False` and an existing schema does not match a current schema,
then some schemas may have been saved successfully and others may not, and there is no easy way to
tell which is which.

Definition at line 696 of file cmdLineTask.py.

def writeSchemas(self, butler, clobber=False, doBackup=True):
    """Write the schemas returned by `lsst.pipe.base.Task.getAllSchemaCatalogs`.

    Parameters
    ----------
    butler : `lsst.daf.persistence.Butler`
        Data butler used to write the schema. Each schema is written to the dataset type specified as the
        key in the dict returned by `~lsst.pipe.base.Task.getAllSchemaCatalogs`.
    clobber : `bool`, optional
        A boolean flag that controls what happens if a schema already has been saved:
        - `True`: overwrite or rename the existing schema, depending on ``doBackup``.
        - `False`: raise `TaskError` if this schema does not match the existing schema.
    doBackup : `bool`, optional
        Set to `True` to backup the schema files if clobbering.

    Notes
    -----
    If ``clobber`` is `False` and an existing schema does not match a current schema,
    then some schemas may have been saved successfully and others may not, and there is no easy way to
    tell which is which.
    """
    for dataset, catalog in self.getAllSchemaCatalogs().items():
        schemaDataset = dataset + "_schema"
        if clobber:
            butler.put(catalog, schemaDataset, doBackup=doBackup)
        elif butler.datasetExists(schemaDataset, write=True):
            oldSchema = butler.get(schemaDataset, immediate=True).getSchema()
            if not oldSchema.compare(catalog.getSchema(), afwTable.Schema.IDENTICAL):
                raise TaskError(
                    ("New schema does not match schema %r on disk; schemas must be " +
                     "consistent within the same output repo (override with --clobber-config)") %
                    (dataset,))
        else:
            butler.put(catalog, schemaDataset)

Member Data Documentation

◆ canMultiprocess [1/2]

bool lsst.pipe.base.pipelineTask.PipelineTask.canMultiprocess = True
staticinherited

Definition at line 186 of file pipelineTask.py.

◆ canMultiprocess [2/2]

bool lsst.pipe.base.cmdLineTask.CmdLineTask.canMultiprocess = True
staticinherited

Definition at line 524 of file cmdLineTask.py.

◆ config

lsst.pipe.base.task.Task.config
inherited

Definition at line 149 of file task.py.

◆ ConfigClass

lsst.meas.base.forcedPhotCcd.ForcedPhotCcdTask.ConfigClass = ForcedPhotCcdConfig
static

Definition at line 203 of file forcedPhotCcd.py.

◆ dataPrefix

string lsst.meas.base.forcedPhotCcd.ForcedPhotCcdTask.dataPrefix = ""
static

Definition at line 206 of file forcedPhotCcd.py.

◆ log

lsst.pipe.base.task.Task.log
inherited

Definition at line 148 of file task.py.

◆ metadata

lsst.pipe.base.task.Task.metadata
inherited

Definition at line 121 of file task.py.

◆ RunnerClass

lsst.meas.base.forcedPhotCcd.ForcedPhotCcdTask.RunnerClass = lsst.pipe.base.ButlerInitializedTaskRunner
static

Definition at line 204 of file forcedPhotCcd.py.


The documentation for this class was generated from the following file:
  • /j/snowflake/release/lsstsw/stack/Linux64/meas_base/18.1.0-2-g9c63283+13/python/lsst/meas/base/forcedPhotCcd.py