LSSTApplications  18.1.0
LSSTDataManagementBasePackage
lsst.pipe.tasks.processFakes.ProcessCcdWithFakesTask Class Reference
Inheritance diagram for lsst.pipe.tasks.processFakes.ProcessCcdWithFakesTask:
lsst.pipe.base.pipelineTask.PipelineTask
lsst.pipe.base.cmdLineTask.CmdLineTask
lsst.pipe.base.task.Task

Public Member Functions

def __init__ (self, schema=None, **kwargs)
 
def runDataRef (self, dataRef)
 
def adaptArgsAndRun (self, inputData, inputDataIds, outputDataIds, butler)
 
def run (self, fakeCat, exposure, wcs=None, photoCalib=None, exposureIdInfo=None)
 
def getInitOutputDatasets (self)
 
def getInputDatasetTypes (cls, config)
 
def getOutputDatasetTypes (cls, config)
 
def getPrerequisiteDatasetTypes (cls, config)
 
def getInitInputDatasetTypes (cls, config)
 
def getInitOutputDatasetTypes (cls, config)
 
def getDatasetTypes (cls, config, configClass)
 
def getPerDatasetTypeDimensions (cls, config)
 
def run (self, **kwargs)
 
def runQuantum (self, quantum, butler)
 
def saveStruct (self, struct, outputDataRefs, butler)
 
def getResourceConfig (self)
 
def emptyMetadata (self)
 
def getSchemaCatalogs (self)
 
def getAllSchemaCatalogs (self)
 
def getFullMetadata (self)
 
def getFullName (self)
 
def getName (self)
 
def getTaskDict (self)
 
def makeSubtask (self, name, **keyArgs)
 
def timer (self, name, logLevel=Log.DEBUG)
 
def makeField (cls, doc)
 
def __reduce__ (self)
 
def applyOverrides (cls, config)
 
def parseAndRun (cls, args=None, config=None, log=None, doReturnResults=False)
 
def writeConfig (self, butler, clobber=False, doBackup=True)
 
def writeSchemas (self, butler, clobber=False, doBackup=True)
 
def writeMetadata (self, dataRef)
 
def writePackageVersions (self, butler, clobber=False, doBackup=True, dataset="packages")
 

Public Attributes

 schema
 
 algMetadata
 
 metadata
 
 log
 
 config
 

Static Public Attributes

 ConfigClass = ProcessCcdWithFakesConfig
 
bool canMultiprocess = True
 
 RunnerClass = TaskRunner
 

Detailed Description

Insert fake objects into calexps.

Add fake stars and galaxies to the given calexp, as specified in the dataRef. Galaxy parameters are read in
from the specified file and then modelled using galsim. Re-runs image characterization and calibration to
give a new background estimation and measurement of the calexp.

`ProcessCcdWithFakesTask` uses six functions from `InsertFakesTask` (run as the ``insertFakes`` subtask) that
make images of the fake sources and then add them to the calexp.

`addPixCoords`
    Use the WCS information to add the pixel coordinates of each source.
    Adds an ``x`` and ``y`` column to the catalog of fake sources.
`trimFakeCat`
    Trim the fake catalog to about the size of the input image.
`mkFakeGalsimGalaxies`
    Use galsim to make fake double Sérsic galaxies for each set of galaxy parameters in the input file.
`mkFakeStars`
    Use the PSF information from the calexp to make a fake star using the magnitude information from the
    input file.
`cleanCat`
    Remove rows of the input fake catalog whose bulge or disk half-light radius is 0.
`addFakeSources`
    Add the fake sources to the calexp.

Notes
-----
The ``calexp`` with fake sources added to it is written out as the dataset type ``calexp_fakes``.

Definition at line 137 of file processFakes.py.
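A minimal invocation sketch (the repository path, rerun name, and dataId keys below are placeholders,
not values taken from this page)::

    from lsst.pipe.tasks.processFakes import ProcessCcdWithFakesTask

    ProcessCcdWithFakesTask.parseAndRun(args=[
        "/path/to/repo", "--rerun", "fakes",
        "--id", "visit=1234", "ccd=42",
    ])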

Constructor & Destructor Documentation

◆ __init__()

def lsst.pipe.tasks.processFakes.ProcessCcdWithFakesTask.__init__ (   self,
  schema = None,
  **kwargs 
)
Initialize the task: set up the source schema and create the ``insertFakes``, ``detection``, ``deblend``,
``measurement``, ``applyApCorr``, and ``catalogCalculation`` subtasks.

Definition at line 171 of file processFakes.py.

171  def __init__(self, schema=None, **kwargs):
172  """Initalize tings! This should go above in the class docstring
173  """
174 
175  super().__init__(**kwargs)
176 
177  if schema is None:
178  schema = SourceTable.makeMinimalSchema()
179  self.schema = schema
180  self.makeSubtask("insertFakes")
181  self.algMetadata = dafBase.PropertyList()
182  self.makeSubtask("detection", schema=self.schema)
183  self.makeSubtask("deblend", schema=self.schema)
184  self.makeSubtask("measurement", schema=self.schema, algMetadata=self.algMetadata)
185  self.makeSubtask("applyApCorr", schema=self.schema)
186  self.makeSubtask("catalogCalculation", schema=self.schema)
187 
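A construction sketch: the ``schema`` argument is optional and defaults to a minimal source schema, and the
subtasks are created as attributes of the task::

    from lsst.afw.table import SourceTable
    from lsst.pipe.tasks.processFakes import ProcessCcdWithFakesTask

    schema = SourceTable.makeMinimalSchema()
    task = ProcessCcdWithFakesTask(schema=schema)
    # e.g. task.insertFakes, task.detection, task.measurement are now available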

Member Function Documentation

◆ __reduce__()

def lsst.pipe.base.task.Task.__reduce__ (   self)
inherited
Pickler.

Definition at line 373 of file task.py.

373  def __reduce__(self):
374  """Pickler.
375  """
376  return self.__class__, (self.config, self._name, self._parentTask, None)
377 

◆ adaptArgsAndRun()

def lsst.pipe.tasks.processFakes.ProcessCcdWithFakesTask.adaptArgsAndRun (   self,
  inputData,
  inputDataIds,
  outputDataIds,
  butler 
)

Definition at line 231 of file processFakes.py.

231  def adaptArgsAndRun(self, inputData, inputDataIds, outputDataIds, butler):
232  if 'exposureIdInfo' not in inputData.keys():
233  packer = butler.registry.makeDataIdPacker("VisitDetector", inputDataIds['exposure'])
234  exposureIdInfo = ExposureIdInfo()
235  exposureIdInfo.expId = packer.pack(inputDataIds['exposure'])
236  exposureIdInfo.expBits = packer.maxBits
237  inputData['exposureIdInfo'] = exposureIdInfo
238 
239  if inputData["wcs"] is None:
240  inputData["wcs"] = inputData["image"].getWcs()
241  if inputData["photoCalib"] is None:
242  inputData["photoCalib"] = inputData["image"].getCalib()
243 
244  return self.run(**inputData)
245 

◆ applyOverrides()

def lsst.pipe.base.cmdLineTask.CmdLineTask.applyOverrides (   cls,
  config 
)
inherited
A hook to allow a task to change the values of its config *after* the camera-specific
overrides are loaded but before any command-line overrides are applied.

Parameters
----------
config : instance of task's ``ConfigClass``
    Task configuration.

Notes
-----
This is necessary in some cases because the camera-specific overrides may retarget subtasks,
wiping out changes made in ConfigClass.setDefaults. See LSST Trac ticket #2282 for more discussion.

.. warning::

   This is called by CmdLineTask.parseAndRun; other ways of constructing a config will not apply
   these overrides.

Definition at line 527 of file cmdLineTask.py.

527  def applyOverrides(cls, config):
528  """A hook to allow a task to change the values of its config *after* the camera-specific
529  overrides are loaded but before any command-line overrides are applied.
530 
531  Parameters
532  ----------
533  config : instance of task's ``ConfigClass``
534  Task configuration.
535 
536  Notes
537  -----
538  This is necessary in some cases because the camera-specific overrides may retarget subtasks,
539  wiping out changes made in ConfigClass.setDefaults. See LSST Trac ticket #2282 for more discussion.
540 
541  .. warning::
542 
543  This is called by CmdLineTask.parseAndRun; other ways of constructing a config will not apply
544  these overrides.
545  """
546  pass
547 
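A sketch of overriding this hook in a subclass (``MyConfig`` and its ``doWriteExtras`` field are
hypothetical and used only for illustration)::

    import lsst.pex.config as pexConfig
    from lsst.pipe.base import CmdLineTask

    class MyConfig(pexConfig.Config):
        doWriteExtras = pexConfig.Field(dtype=bool, default=True, doc="hypothetical option")

    class MyCmdLineTask(CmdLineTask):
        ConfigClass = MyConfig
        _DefaultName = "myTask"

        @classmethod
        def applyOverrides(cls, config):
            # Runs after camera-specific overrides but before command-line overrides.
            config.doWriteExtras = False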

◆ emptyMetadata()

def lsst.pipe.base.task.Task.emptyMetadata (   self)
inherited
Empty (clear) the metadata for this Task and all sub-Tasks.

Definition at line 153 of file task.py.

153  def emptyMetadata(self):
154  """Empty (clear) the metadata for this Task and all sub-Tasks.
155  """
156  for subtask in self._taskDict.values():
157  subtask.metadata = dafBase.PropertyList()
158 

◆ getAllSchemaCatalogs()

def lsst.pipe.base.task.Task.getAllSchemaCatalogs (   self)
inherited
Get schema catalogs for all tasks in the hierarchy, combining the results into a single dict.

Returns
-------
schemacatalogs : `dict`
    Keys are butler dataset type, values are an empty catalog (an instance of the appropriate
    lsst.afw.table Catalog type) for all tasks in the hierarchy, from the top-level task down
    through all subtasks.

Notes
-----
This method may be called on any task in the hierarchy; it will return the same answer, regardless.

The default implementation should always suffice. If your subtask uses schemas then override
`Task.getSchemaCatalogs`, not this method.

Definition at line 188 of file task.py.

188  def getAllSchemaCatalogs(self):
189  """Get schema catalogs for all tasks in the hierarchy, combining the results into a single dict.
190 
191  Returns
192  -------
193  schemacatalogs : `dict`
194  Keys are butler dataset type, values are a empty catalog (an instance of the appropriate
195  lsst.afw.table Catalog type) for all tasks in the hierarchy, from the top-level task down
196  through all subtasks.
197 
198  Notes
199  -----
200  This method may be called on any task in the hierarchy; it will return the same answer, regardless.
201 
202  The default implementation should always suffice. If your subtask uses schemas the override
203  `Task.getSchemaCatalogs`, not this method.
204  """
205  schemaDict = self.getSchemaCatalogs()
206  for subtask in self._taskDict.values():
207  schemaDict.update(subtask.getSchemaCatalogs())
208  return schemaDict
209 

◆ getDatasetTypes()

def lsst.pipe.base.pipelineTask.PipelineTask.getDatasetTypes (   cls,
  config,
  configClass 
)
inherited
Return dataset type descriptors defined in task configuration.

This method can be used by other methods that need to extract dataset
types from task configuration (e.g. `getInputDatasetTypes` or
sub-class methods).

Parameters
----------
config : `Config`
    Configuration for this task. Typically datasets are defined in
    a task configuration.
configClass : `type`
    Class of the configuration object which defines dataset type.

Returns
-------
Dictionary where key is the name (arbitrary) of the output dataset
and value is the `DatasetTypeDescriptor` instance. Default
implementation uses configuration field name as dictionary key.
Returns empty dict if configuration has no fields with the specified
``configClass``.

Definition at line 328 of file pipelineTask.py.

328  def getDatasetTypes(cls, config, configClass):
329  """Return dataset type descriptors defined in task configuration.
330 
331  This method can be used by other methods that need to extract dataset
332  types from task configuration (e.g. `getInputDatasetTypes` or
333  sub-class methods).
334 
335  Parameters
336  ----------
337  config : `Config`
338  Configuration for this task. Typically datasets are defined in
339  a task configuration.
340  configClass : `type`
341  Class of the configuration object which defines dataset type.
342 
343  Returns
344  -------
345  Dictionary where key is the name (arbitrary) of the output dataset
346  and value is the `DatasetTypeDescriptor` instance. Default
347  implementation uses configuration field name as dictionary key.
348  Returns empty dict if configuration has no fields with the specified
349  ``configClass``.
350  """
351  dsTypes = {}
352  for key, value in config.items():
353  if isinstance(value, configClass):
354  dsTypes[key] = DatasetTypeDescriptor.fromConfig(value)
355  return dsTypes
356 

◆ getFullMetadata()

def lsst.pipe.base.task.Task.getFullMetadata (   self)
inherited
Get metadata for all tasks.

Returns
-------
metadata : `lsst.daf.base.PropertySet`
    The `~lsst.daf.base.PropertySet` keys are the full task name. Values are metadata
    for the top-level task and all subtasks, sub-subtasks, etc..

Notes
-----
The returned metadata includes timing information (if ``@timer.timeMethod`` is used)
and any metadata set by the task. The name of each item consists of the full task name
with ``.`` replaced by ``:``, followed by ``.`` and the name of the item, e.g.::

    topLevelTaskName:subtaskName:subsubtaskName.itemName

using ``:`` in the full task name disambiguates the rare situation that a task has a subtask
and a metadata item with the same name.

Definition at line 210 of file task.py.

210  def getFullMetadata(self):
211  """Get metadata for all tasks.
212 
213  Returns
214  -------
215  metadata : `lsst.daf.base.PropertySet`
216  The `~lsst.daf.base.PropertySet` keys are the full task name. Values are metadata
217  for the top-level task and all subtasks, sub-subtasks, etc..
218 
219  Notes
220  -----
221  The returned metadata includes timing information (if ``@timer.timeMethod`` is used)
222  and any metadata set by the task. The name of each item consists of the full task name
223  with ``.`` replaced by ``:``, followed by ``.`` and the name of the item, e.g.::
224 
225  topLevelTaskName:subtaskName:subsubtaskName.itemName
226 
227  using ``:`` in the full task name disambiguates the rare situation that a task has a subtask
228  and a metadata item with the same name.
229  """
230  fullMetadata = dafBase.PropertySet()
231  for fullName, task in self.getTaskDict().items():
232  fullMetadata.set(fullName.replace(".", ":"), task.metadata)
233  return fullMetadata
234 
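A small usage sketch, assuming ``task`` has already been constructed and run::

    fullMetadata = task.getFullMetadata()
    # Keys are full task names with "." replaced by ":"; values are each task's metadata.
    print(fullMetadata.names())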

◆ getFullName()

def lsst.pipe.base.task.Task.getFullName (   self)
inherited
Get the task name as a hierarchical name including parent task names.

Returns
-------
fullName : `str`
    The full name consists of the name of the parent task and each subtask separated by periods.
    For example:

    - The full name of top-level task "top" is simply "top".
    - The full name of subtask "sub" of top-level task "top" is "top.sub".
    - The full name of subtask "sub2" of subtask "sub" of top-level task "top" is "top.sub.sub2".

Definition at line 235 of file task.py.

235  def getFullName(self):
236  """Get the task name as a hierarchical name including parent task names.
237 
238  Returns
239  -------
240  fullName : `str`
241  The full name consists of the name of the parent task and each subtask separated by periods.
242  For example:
243 
244  - The full name of top-level task "top" is simply "top".
245  - The full name of subtask "sub" of top-level task "top" is "top.sub".
246  - The full name of subtask "sub2" of subtask "sub" of top-level task "top" is "top.sub.sub2".
247  """
248  return self._fullName
249 

◆ getInitInputDatasetTypes()

def lsst.pipe.base.pipelineTask.PipelineTask.getInitInputDatasetTypes (   cls,
  config 
)
inherited
Return dataset type descriptors that can be used to retrieve the
``initInputs`` constructor argument.

Datasets used in initialization may not be associated with any
Dimension (i.e. their data IDs must be empty dictionaries).

Default implementation finds all fields of type
`InitInputDatasetConfig` in configuration (non-recursively) and
uses them for constructing `DatasetTypeDescriptor` instances. The
names of these fields are used as keys in returned dictionary.
Subclasses can override this behavior.

Parameters
----------
config : `Config`
    Configuration for this task. Typically datasets are defined in
    a task configuration.

Returns
-------
Dictionary where key is the name (arbitrary) of the input dataset
and value is the `DatasetTypeDescriptor` instance. Default
implementation uses configuration field name as dictionary key.

When the task requires no initialization inputs, should return an
empty dict.

Definition at line 266 of file pipelineTask.py.

266  def getInitInputDatasetTypes(cls, config):
267  """Return dataset type descriptors that can be used to retrieve the
268  ``initInputs`` constructor argument.
269 
270  Datasets used in initialization may not be associated with any
271  Dimension (i.e. their data IDs must be empty dictionaries).
272 
273  Default implementation finds all fields of type
274  `InitInputInputDatasetConfig` in configuration (non-recursively) and
275  uses them for constructing `DatasetTypeDescriptor` instances. The
276  names of these fields are used as keys in returned dictionary.
277  Subclasses can override this behavior.
278 
279  Parameters
280  ----------
281  config : `Config`
282  Configuration for this task. Typically datasets are defined in
283  a task configuration.
284 
285  Returns
286  -------
287  Dictionary where key is the name (arbitrary) of the input dataset
288  and value is the `DatasetTypeDescriptor` instance. Default
289  implementation uses configuration field name as dictionary key.
290 
291  When the task requires no initialization inputs, should return an
292  empty dict.
293  """
294  return cls.getDatasetTypes(config, InitInputDatasetConfig)
295 

◆ getInitOutputDatasets()

def lsst.pipe.base.pipelineTask.PipelineTask.getInitOutputDatasets (   self)
inherited
Return persistable outputs that are available immediately after
the task has been constructed.

Subclasses that operate on catalogs should override this method to
return the schema(s) of the catalog(s) they produce.

It is not necessary to return the PipelineTask's configuration or
other provenance information in order for it to be persisted; that is
the responsibility of the execution system.

Returns
-------
datasets : `dict`
    Dictionary with keys that match those of the dict returned by
    `getInitOutputDatasetTypes` and values that can be written by calling
    `Butler.put` with those DatasetTypes and no data IDs. An empty
    `dict` should be returned by tasks that produce no initialization
    outputs.

Definition at line 166 of file pipelineTask.py.

166  def getInitOutputDatasets(self):
167  """Return persistable outputs that are available immediately after
168  the task has been constructed.
169 
170  Subclasses that operate on catalogs should override this method to
171  return the schema(s) of the catalog(s) they produce.
172 
173  It is not necessary to return the PipelineTask's configuration or
174  other provenance information in order for it to be persisted; that is
175  the responsibility of the execution system.
176 
177  Returns
178  -------
179  datasets : `dict`
180  Dictionary with keys that match those of the dict returned by
181  `getInitOutputDatasetTypes` values that can be written by calling
182  `Butler.put` with those DatasetTypes and no data IDs. An empty
183  `dict` should be returned by tasks that produce no initialization
184  outputs.
185  """
186  return {}
187 
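A sketch of the override described above for a catalog-producing task (the ``outputSchema`` key is
illustrative and would have to match an init-output field defined in the task's config)::

    import lsst.afw.table as afwTable

    def getInitOutputDatasets(self):
        # Expose an empty catalog carrying this task's output schema.
        return {"outputSchema": afwTable.SourceCatalog(self.schema)}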

◆ getInitOutputDatasetTypes()

def lsst.pipe.base.pipelineTask.PipelineTask.getInitOutputDatasetTypes (   cls,
  config 
)
inherited
Return dataset type descriptors that can be used to write the
objects returned by `getOutputDatasets`.

Datasets used in initialization may not be associated with any
Dimension (i.e. their data IDs must be empty dictionaries).

Default implementation finds all fields of type
`InitOutputDatasetConfig` in configuration (non-recursively) and uses
them for constructing `DatasetTypeDescriptor` instances. The names of
these fields are used as keys in returned dictionary. Subclasses can
override this behavior.

Parameters
----------
config : `Config`
    Configuration for this task. Typically datasets are defined in
    a task configuration.

Returns
-------
Dictionary where key is the name (arbitrary) of the output dataset
and value is the `DatasetTypeDescriptor` instance. Default
implementation uses configuration field name as dictionary key.

When the task produces no initialization outputs, should return an
empty dict.

Definition at line 297 of file pipelineTask.py.

297  def getInitOutputDatasetTypes(cls, config):
298  """Return dataset type descriptors that can be used to write the
299  objects returned by `getOutputDatasets`.
300 
301  Datasets used in initialization may not be associated with any
302  Dimension (i.e. their data IDs must be empty dictionaries).
303 
304  Default implementation finds all fields of type
305  `InitOutputDatasetConfig` in configuration (non-recursively) and uses
306  them for constructing `DatasetTypeDescriptor` instances. The names of
307  these fields are used as keys in returned dictionary. Subclasses can
308  override this behavior.
309 
310  Parameters
311  ----------
312  config : `Config`
313  Configuration for this task. Typically datasets are defined in
314  a task configuration.
315 
316  Returns
317  -------
318  Dictionary where key is the name (arbitrary) of the output dataset
319  and value is the `DatasetTypeDescriptor` instance. Default
320  implementation uses configuration field name as dictionary key.
321 
322  When the task produces no initialization outputs, should return an
323  empty dict.
324  """
325  return cls.getDatasetTypes(config, InitOutputDatasetConfig)
326 

◆ getInputDatasetTypes()

def lsst.pipe.base.pipelineTask.PipelineTask.getInputDatasetTypes (   cls,
  config 
)
inherited
Return input dataset type descriptors for this task.

Default implementation finds all fields of type `InputDatasetConfig`
in configuration (non-recursively) and uses them for constructing
`DatasetTypeDescriptor` instances. The names of these fields are used
as keys in returned dictionary. Subclasses can override this behavior.

Parameters
----------
config : `Config`
    Configuration for this task. Typically datasets are defined in
    a task configuration.

Returns
-------
Dictionary where key is the name (arbitrary) of the input dataset
and value is the `DatasetTypeDescriptor` instance. Default
implementation uses configuration field name as dictionary key.

Definition at line 189 of file pipelineTask.py.

189  def getInputDatasetTypes(cls, config):
190  """Return input dataset type descriptors for this task.
191 
192  Default implementation finds all fields of type `InputDatasetConfig`
193  in configuration (non-recursively) and uses them for constructing
194  `DatasetTypeDescriptor` instances. The names of these fields are used
195  as keys in returned dictionary. Subclasses can override this behavior.
196 
197  Parameters
198  ----------
199  config : `Config`
200  Configuration for this task. Typically datasets are defined in
201  a task configuration.
202 
203  Returns
204  -------
205  Dictionary where key is the name (arbitrary) of the input dataset
206  and value is the `DatasetTypeDescriptor` instance. Default
207  implementation uses configuration field name as dictionary key.
208  """
209  return cls.getDatasetTypes(config, InputDatasetConfig)
210 

◆ getName()

def lsst.pipe.base.task.Task.getName (   self)
inherited
Get the name of the task.

Returns
-------
taskName : `str`
    Name of the task.

See also
--------
getFullName

Definition at line 250 of file task.py.

250  def getName(self):
251  """Get the name of the task.
252 
253  Returns
254  -------
255  taskName : `str`
256  Name of the task.
257 
258  See also
259  --------
260  getFullName
261  """
262  return self._name
263 

◆ getOutputDatasetTypes()

def lsst.pipe.base.pipelineTask.PipelineTask.getOutputDatasetTypes (   cls,
  config 
)
inherited
Return output dataset type descriptors for this task.

Default implementation finds all fields of type `OutputDatasetConfig`
in configuration (non-recursively) and uses them for constructing
`DatasetTypeDescriptor` instances. The keys of these fields are used
as keys in returned dictionary. Subclasses can override this behavior.

Parameters
----------
config : `Config`
    Configuration for this task. Typically datasets are defined in
    a task configuration.

Returns
-------
Dictionary where key is the name (arbitrary) of the output dataset
and value is the `DatasetTypeDescriptor` instance. Default
implementation uses configuration field name as dictionary key.

Definition at line 212 of file pipelineTask.py.

212  def getOutputDatasetTypes(cls, config):
213  """Return output dataset type descriptors for this task.
214 
215  Default implementation finds all fields of type `OutputDatasetConfig`
216  in configuration (non-recursively) and uses them for constructing
217  `DatasetTypeDescriptor` instances. The keys of these fields are used
218  as keys in returned dictionary. Subclasses can override this behavior.
219 
220  Parameters
221  ----------
222  config : `Config`
223  Configuration for this task. Typically datasets are defined in
224  a task configuration.
225 
226  Returns
227  -------
228  Dictionary where key is the name (arbitrary) of the output dataset
229  and value is the `DatasetTypeDescriptor` instance. Default
230  implementation uses configuration field name as dictionary key.
231  """
232  return cls.getDatasetTypes(config, OutputDatasetConfig)
233 

◆ getPerDatasetTypeDimensions()

def lsst.pipe.base.pipelineTask.PipelineTask.getPerDatasetTypeDimensions (   cls,
  config 
)
inherited
Return any Dimensions that are permitted to have different values
for different DatasetTypes within the same quantum.

Parameters
----------
config : `Config`
    Configuration for this task.

Returns
-------
dimensions : `~collections.abc.Set` of `Dimension` or `str`
    The dimensions or names thereof that should be considered
    per-DatasetType.

Notes
-----
Any Dimension declared to be per-DatasetType by a PipelineTask must
also be declared to be per-DatasetType by other PipelineTasks in the
same Pipeline.

The classic example of a per-DatasetType dimension is the
``CalibrationLabel`` dimension that maps to a validity range for
master calibrations.  When running Instrument Signature Removal, one
does not care that different dataset types like flat, bias, and dark
have different validity ranges, as long as those validity ranges all
overlap the relevant observation.

Definition at line 358 of file pipelineTask.py.

358  def getPerDatasetTypeDimensions(cls, config):
359  """Return any Dimensions that are permitted to have different values
360  for different DatasetTypes within the same quantum.
361 
362  Parameters
363  ----------
364  config : `Config`
365  Configuration for this task.
366 
367  Returns
368  -------
369  dimensions : `~collections.abc.Set` of `Dimension` or `str`
370  The dimensions or names thereof that should be considered
371  per-DatasetType.
372 
373  Notes
374  -----
375  Any Dimension declared to be per-DatasetType by a PipelineTask must
376  also be declared to be per-DatasetType by other PipelineTasks in the
377  same Pipeline.
378 
379  The classic example of a per-DatasetType dimension is the
380  ``CalibrationLabel`` dimension that maps to a validity range for
381  master calibrations. When running Instrument Signature Removal, one
382  does not care that different dataset types like flat, bias, and dark
383  have different validity ranges, as long as those validity ranges all
384  overlap the relevant observation.
385  """
386  return frozenset()
387 
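A sketch of the ISR-like override suggested by the note above::

    @classmethod
    def getPerDatasetTypeDimensions(cls, config):
        # Allow flat/bias/dark inputs with different validity ranges within one quantum.
        return frozenset(["CalibrationLabel"])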

◆ getPrerequisiteDatasetTypes()

def lsst.pipe.base.pipelineTask.PipelineTask.getPrerequisiteDatasetTypes (   cls,
  config 
)
inherited
Return the local names of input dataset types that should be
assumed to exist instead of constraining what data to process with
this task.

Usually, when running a `PipelineTask`, the presence of input datasets
constrains the processing to be done (as defined by the `QuantumGraph`
generated during "preflight").  "Prerequisites" are special input
datasets that do not constrain that graph, but instead cause a hard
failure when missing.  Calibration products and reference catalogs
are examples of dataset types that should usually be marked as
prerequisites.

Parameters
----------
config : `Config`
    Configuration for this task. Typically datasets are defined in
    a task configuration.

Returns
-------
prerequisite : `~collections.abc.Set` of `str`
    The keys in the dictionary returned by `getInputDatasetTypes` that
    represent dataset types that should be considered prerequisites.
    Names returned here that are not keys in that dictionary are
    ignored; that way, if a config option removes an input dataset type
    only `getInputDatasetTypes` needs to be updated.

Definition at line 235 of file pipelineTask.py.

235  def getPrerequisiteDatasetTypes(cls, config):
236  """Return the local names of input dataset types that should be
237  assumed to exist instead of constraining what data to process with
238  this task.
239 
240  Usually, when running a `PipelineTask`, the presence of input datasets
241  constrains the processing to be done (as defined by the `QuantumGraph`
242  generated during "preflight"). "Prerequisites" are special input
243  datasets that do not constrain that graph, but instead cause a hard
244  failure when missing. Calibration products and reference catalogs
245  are examples of dataset types that should usually be marked as
246  prerequisites.
247 
248  Parameters
249  ----------
250  config : `Config`
251  Configuration for this task. Typically datasets are defined in
252  a task configuration.
253 
254  Returns
255  -------
256  prerequisite : `~collections.abc.Set` of `str`
257  The keys in the dictionary returned by `getInputDatasetTypes` that
258  represent dataset types that should be considered prerequisites.
259  Names returned here that are not keys in that dictionary are
260  ignored; that way, if a config option removes an input dataset type
261  only `getInputDatasetTypes` needs to be updated.
262  """
263  return frozenset()
264 
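A sketch of marking an input as a prerequisite (``refCat`` is a hypothetical key that would have to match
a name returned by `getInputDatasetTypes`)::

    @classmethod
    def getPrerequisiteDatasetTypes(cls, config):
        # Require the reference catalog to exist rather than letting it constrain the graph.
        return frozenset(["refCat"])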

◆ getResourceConfig()

def lsst.pipe.base.pipelineTask.PipelineTask.getResourceConfig (   self)
inherited
Return resource configuration for this task.

Returns
-------
Object of type `~config.ResourceConfig` or ``None`` if resource
configuration is not defined for this task.

Definition at line 615 of file pipelineTask.py.

615  def getResourceConfig(self):
616  """Return resource configuration for this task.
617 
618  Returns
619  -------
620  Object of type `~config.ResourceConfig` or ``None`` if resource
621  configuration is not defined for this task.
622  """
623  return getattr(self.config, "resources", None)
624 

◆ getSchemaCatalogs()

def lsst.pipe.base.task.Task.getSchemaCatalogs (   self)
inherited
Get the schemas generated by this task.

Returns
-------
schemaCatalogs : `dict`
    Keys are butler dataset type, values are an empty catalog (an instance of the appropriate
    `lsst.afw.table` Catalog type) for this task.

Notes
-----

.. warning::

   Subclasses that use schemas must override this method. The default implementation returns
   an empty dict.

This method may be called at any time after the Task is constructed, which means that all task
schemas should be computed at construction time, *not* when data is actually processed. This
reflects the philosophy that the schema should not depend on the data.

Returning catalogs rather than just schemas allows us to save e.g. slots for SourceCatalog as well.

See also
--------
Task.getAllSchemaCatalogs

Definition at line 159 of file task.py.

159  def getSchemaCatalogs(self):
160  """Get the schemas generated by this task.
161 
162  Returns
163  -------
164  schemaCatalogs : `dict`
165  Keys are butler dataset type, values are an empty catalog (an instance of the appropriate
166  `lsst.afw.table` Catalog type) for this task.
167 
168  Notes
169  -----
170 
171  .. warning::
172 
173  Subclasses that use schemas must override this method. The default implemenation returns
174  an empty dict.
175 
176  This method may be called at any time after the Task is constructed, which means that all task
177  schemas should be computed at construction time, *not* when data is actually processed. This
178  reflects the philosophy that the schema should not depend on the data.
179 
180  Returning catalogs rather than just schemas allows us to save e.g. slots for SourceCatalog as well.
181 
182  See also
183  --------
184  Task.getAllSchemaCatalogs
185  """
186  return {}
187 
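A sketch of the override required of schema-using tasks (the ``src`` dataset type name is illustrative)::

    import lsst.afw.table as afwTable

    def getSchemaCatalogs(self):
        # Return an empty catalog carrying this task's schema, keyed by butler dataset type.
        return {"src": afwTable.SourceCatalog(self.schema)}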

◆ getTaskDict()

def lsst.pipe.base.task.Task.getTaskDict (   self)
inherited
Get a dictionary of all tasks as a shallow copy.

Returns
-------
taskDict : `dict`
    Dictionary containing full task name: task object for the top-level task and all subtasks,
    sub-subtasks, etc..

Definition at line 264 of file task.py.

264  def getTaskDict(self):
265  """Get a dictionary of all tasks as a shallow copy.
266 
267  Returns
268  -------
269  taskDict : `dict`
270  Dictionary containing full task name: task object for the top-level task and all subtasks,
271  sub-subtasks, etc..
272  """
273  return self._taskDict.copy()
274 

◆ makeField()

def lsst.pipe.base.task.Task.makeField (   cls,
  doc 
)
inherited
Make a `lsst.pex.config.ConfigurableField` for this task.

Parameters
----------
doc : `str`
    Help text for the field.

Returns
-------
configurableField : `lsst.pex.config.ConfigurableField`
    A `~ConfigurableField` for this task.

Examples
--------
Provides a convenient way to specify this task is a subtask of another task.

Here is an example of use::

    class OtherTaskConfig(lsst.pex.config.Config):
        aSubtask = ATaskClass.makeField("a brief description of what this task does")

Definition at line 329 of file task.py.

329  def makeField(cls, doc):
330  """Make a `lsst.pex.config.ConfigurableField` for this task.
331 
332  Parameters
333  ----------
334  doc : `str`
335  Help text for the field.
336 
337  Returns
338  -------
339  configurableField : `lsst.pex.config.ConfigurableField`
340  A `~ConfigurableField` for this task.
341 
342  Examples
343  --------
344  Provides a convenient way to specify this task is a subtask of another task.
345 
346  Here is an example of use::
347 
348  class OtherTaskConfig(lsst.pex.config.Config)
349  aSubtask = ATaskClass.makeField("a brief description of what this task does")
350  """
351  return ConfigurableField(doc=doc, target=cls)
352 

◆ makeSubtask()

def lsst.pipe.base.task.Task.makeSubtask (   self,
  name,
  **keyArgs 
)
inherited
Create a subtask as a new instance as the ``name`` attribute of this task.

Parameters
----------
name : `str`
    Brief name of the subtask.
keyArgs
    Extra keyword arguments used to construct the task. The following arguments are automatically
    provided and cannot be overridden:

    - "config".
    - "parentTask".

Notes
-----
The subtask must be defined by ``Task.config.name``, an instance of pex_config ConfigurableField
or RegistryField.

Definition at line 275 of file task.py.

275  def makeSubtask(self, name, **keyArgs):
276  """Create a subtask as a new instance as the ``name`` attribute of this task.
277 
278  Parameters
279  ----------
280  name : `str`
281  Brief name of the subtask.
282  keyArgs
283  Extra keyword arguments used to construct the task. The following arguments are automatically
284  provided and cannot be overridden:
285 
286  - "config".
287  - "parentTask".
288 
289  Notes
290  -----
291  The subtask must be defined by ``Task.config.name``, an instance of pex_config ConfigurableField
292  or RegistryField.
293  """
294  taskField = getattr(self.config, name, None)
295  if taskField is None:
296  raise KeyError("%s's config does not have field %r" % (self.getFullName(), name))
297  subtask = taskField.apply(name=name, parentTask=self, **keyArgs)
298  setattr(self, name, subtask)
299 
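A sketch tying `makeField` and `makeSubtask` together (all class and field names here are illustrative)::

    import lsst.pex.config as pexConfig
    from lsst.pipe.base import Task

    class HelloTask(Task):
        ConfigClass = pexConfig.Config
        _DefaultName = "hello"

        def run(self):
            self.log.info("hello")

    class ParentConfig(pexConfig.Config):
        hello = HelloTask.makeField("subtask that logs a greeting")

    class ParentTask(Task):
        ConfigClass = ParentConfig
        _DefaultName = "parent"

        def __init__(self, **kwargs):
            super().__init__(**kwargs)
            self.makeSubtask("hello")   # creates self.hello from config.hello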

◆ parseAndRun()

def lsst.pipe.base.cmdLineTask.CmdLineTask.parseAndRun (   cls,
  args = None,
  config = None,
  log = None,
  doReturnResults = False 
)
inherited
Parse an argument list and run the command.

Parameters
----------
args : `list`, optional
    List of command-line arguments; if `None` use `sys.argv`.
config : `lsst.pex.config.Config`-type, optional
    Config for task. If `None` use `Task.ConfigClass`.
log : `lsst.log.Log`-type, optional
    Log. If `None` use the default log.
doReturnResults : `bool`, optional
    If `True`, return the results of this task. Default is `False`. This is only intended for
    unit tests and similar use. It can easily exhaust memory (if the task returns enough data and you
    call it enough times) and it will fail when using multiprocessing if the returned data cannot be
    pickled.

Returns
-------
struct : `lsst.pipe.base.Struct`
    Fields are:

    - ``argumentParser``: the argument parser.
    - ``parsedCmd``: the parsed command returned by the argument parser's
      `lsst.pipe.base.ArgumentParser.parse_args` method.
    - ``taskRunner``: the task runner used to run the task (an instance of `Task.RunnerClass`).
    - ``resultList``: results returned by the task runner's ``run`` method, one entry per invocation.
      This will typically be a list of `None` unless ``doReturnResults`` is `True`;
      see `Task.RunnerClass` (`TaskRunner` by default) for more information.

Notes
-----
Calling this method with no arguments specified is the standard way to run a command-line task
from the command-line. For an example see ``pipe_tasks`` ``bin/makeSkyMap.py`` or almost any other
file in that directory.

If one or more of the dataIds fails then this routine will exit (with a status giving the
number of failed dataIds) rather than returning this struct;  this behaviour can be
overridden by specifying the ``--noExit`` command-line option.

Definition at line 549 of file cmdLineTask.py.

549  def parseAndRun(cls, args=None, config=None, log=None, doReturnResults=False):
550  """Parse an argument list and run the command.
551 
552  Parameters
553  ----------
554  args : `list`, optional
555  List of command-line arguments; if `None` use `sys.argv`.
556  config : `lsst.pex.config.Config`-type, optional
557  Config for task. If `None` use `Task.ConfigClass`.
558  log : `lsst.log.Log`-type, optional
559  Log. If `None` use the default log.
560  doReturnResults : `bool`, optional
561  If `True`, return the results of this task. Default is `False`. This is only intended for
562  unit tests and similar use. It can easily exhaust memory (if the task returns enough data and you
563  call it enough times) and it will fail when using multiprocessing if the returned data cannot be
564  pickled.
565 
566  Returns
567  -------
568  struct : `lsst.pipe.base.Struct`
569  Fields are:
570 
571  - ``argumentParser``: the argument parser.
572  - ``parsedCmd``: the parsed command returned by the argument parser's
573  `lsst.pipe.base.ArgumentParser.parse_args` method.
574  - ``taskRunner``: the task runner used to run the task (an instance of `Task.RunnerClass`).
575  - ``resultList``: results returned by the task runner's ``run`` method, one entry per invocation.
576  This will typically be a list of `None` unless ``doReturnResults`` is `True`;
577  see `Task.RunnerClass` (`TaskRunner` by default) for more information.
578 
579  Notes
580  -----
581  Calling this method with no arguments specified is the standard way to run a command-line task
582  from the command-line. For an example see ``pipe_tasks`` ``bin/makeSkyMap.py`` or almost any other
583  file in that directory.
584 
585  If one or more of the dataIds fails then this routine will exit (with a status giving the
586  number of failed dataIds) rather than returning this struct; this behaviour can be
587  overridden by specifying the ``--noExit`` command-line option.
588  """
589  if args is None:
590  commandAsStr = " ".join(sys.argv)
591  args = sys.argv[1:]
592  else:
593  commandAsStr = "{}{}".format(lsst.utils.get_caller_name(skip=1), tuple(args))
594 
595  argumentParser = cls._makeArgumentParser()
596  if config is None:
597  config = cls.ConfigClass()
598  parsedCmd = argumentParser.parse_args(config=config, args=args, log=log, override=cls.applyOverrides)
599  # print this message after parsing the command so the log is fully configured
600  parsedCmd.log.info("Running: %s", commandAsStr)
601 
602  taskRunner = cls.RunnerClass(TaskClass=cls, parsedCmd=parsedCmd, doReturnResults=doReturnResults)
603  resultList = taskRunner.run(parsedCmd)
604 
605  try:
606  nFailed = sum(((res.exitStatus != 0) for res in resultList))
607  except (TypeError, AttributeError) as e:
608  # NOTE: TypeError if resultList is None, AttributeError if it doesn't have exitStatus.
609  parsedCmd.log.warn("Unable to retrieve exit status (%s); assuming success", e)
610  nFailed = 0
611 
612  if nFailed > 0:
613  if parsedCmd.noExit:
614  parsedCmd.log.error("%d dataRefs failed; not exiting as --noExit was set", nFailed)
615  else:
616  sys.exit(nFailed)
617 
618  return Struct(
619  argumentParser=argumentParser,
620  parsedCmd=parsedCmd,
621  taskRunner=taskRunner,
622  resultList=resultList,
623  )
624 
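The conventional command-line driver is a short wrapper around this method, e.g. (a sketch; the file would
live in a package's ``bin.src`` directory)::

    #!/usr/bin/env python
    from lsst.pipe.tasks.processFakes import ProcessCcdWithFakesTask

    ProcessCcdWithFakesTask.parseAndRun()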

◆ run() [1/2]

def lsst.pipe.tasks.processFakes.ProcessCcdWithFakesTask.run (   self,
  fakeCat,
  exposure,
  wcs = None,
  photoCalib = None,
  exposureIdInfo = None 
)
Add fake sources to a calexp and then run detection, deblending and measurement.

Parameters
----------
fakeCat : `pandas.core.frame.DataFrame`
    The catalog of fake sources to add to the exposure
exposure : `lsst.afw.image.exposure.exposure.ExposureF`
    The exposure to add the fake sources to
wcs : `lsst.afw.geom.skyWcs.skyWcs.SkyWcs`
    WCS to use to add fake sources
photoCalib : `lsst.afw.image.photoCalib.PhotoCalib`
    Photometric calibration to be used to calibrate the fake sources
exposureIdInfo : `lsst.obs.base.ExposureIdInfo`
    Exposure ID information used to create globally unique source IDs; if not supplied, a default is
    used and the IDs will not be globally unique.

Returns
-------
resultStruct : `lsst.pipe.base.struct.Struct`
    Result struct containing:

    - outputExposure : `lsst.afw.image.exposure.exposure.ExposureF`
    - outputCat : `lsst.afw.table.source.source.SourceCatalog`

Notes
-----
Adds pixel coordinates for each source to the fakeCat and removes objects with a bulge or disk half-light
radius of 0 (if ``config.cleanCat = True``). These columns are called ``x`` and ``y`` and are in pixels.

Adds the ``Fake`` mask plane to the exposure, which is then set by `addFakeSources` to mark where fake
sources have been added. Uses the information in ``fakeCat`` to make fake galaxies (using galsim) and fake
stars, using the PSF model from the calexp. These are then added to the calexp, and the calexp with fakes
included is returned.

The galsim galaxies are made using a double Sérsic profile, one for the bulge and one for the disk; this is
then convolved with the PSF at that point.

If exposureIdInfo is not provided then the SourceCatalog IDs will not be globally unique.

Definition at line 254 of file processFakes.py.

254  def run(self, fakeCat, exposure, wcs=None, photoCalib=None, exposureIdInfo=None):
255  """Add fake sources to a calexp and then run detection, deblending and measurement.
256 
257  Parameters
258  ----------
259  fakeCat : `pandas.core.frame.DataFrame`
260  The catalog of fake sources to add to the exposure
261  exposure : `lsst.afw.image.exposure.exposure.ExposureF`
262  The exposure to add the fake sources to
263  wcs : `lsst.afw.geom.skyWcs.skyWcs.SkyWcs`
264  WCS to use to add fake sources
265  photoCalib : `lsst.afw.image.photoCalib.PhotoCalib`
266  Photometric calibration to be used to calibrate the fake sources
267  exposureIdInfo : `lsst.obs.base.ExposureIdInfo`
268 
269  Returns
270  -------
271  resultStruct : `lsst.pipe.base.struct.Struct`
272  contains : outputExposure : `lsst.afw.image.exposure.exposure.ExposureF`
273  outputCat : `lsst.afw.table.source.source.SourceCatalog`
274 
275  Notes
276  -----
277  Adds pixel coordinates for each source to the fakeCat and removes objects with bulge or disk half
278  light radius = 0 (if ``config.cleanCat = True``). These columns are called ``x`` and ``y`` and are in
279  pixels.
280 
281  Adds the ``Fake`` mask plane to the exposure which is then set by `addFakeSources` to mark where fake
282  sources have been added. Uses the information in the ``fakeCat`` to make fake galaxies (using galsim)
283  and fake stars, using the PSF models from the PSF information for the calexp. These are then added to
284  the calexp and the calexp with fakes included returned.
285 
286  The galsim galaxies are made using a double sersic profile, one for the bulge and one for the disk,
287  this is then convolved with the PSF at that point.
288 
289  If exposureIdInfo is not provided then the SourceCatalog IDs will not be globally unique.
290  """
291 
292  if wcs is None:
293  wcs = exposure.getWcs()
294 
295  if photoCalib is None:
296  photoCalib = exposure.getCalib()
297 
298  self.insertFakes.run(fakeCat, exposure, wcs, photoCalib)
299 
300  # detect, deblend and measure sources
301  if exposureIdInfo is None:
302  exposureIdInfo = ExposureIdInfo()
303 
304  sourceIdFactory = IdFactory.makeSource(exposureIdInfo.expId, exposureIdInfo.unusedBits)
305  table = SourceTable.make(self.schema, sourceIdFactory)
306  table.setMetadata(self.algMetadata)
307 
308  detRes = self.detection.run(table=table, exposure=exposure, doSmooth=True)
309  sourceCat = detRes.sources
310  self.deblend.run(exposure=exposure, sources=sourceCat)
311  self.measurement.run(measCat=sourceCat, exposure=exposure, exposureId=exposureIdInfo.expId)
312  self.applyApCorr.run(catalog=sourceCat, apCorrMap=exposure.getInfo().getApCorrMap())
313  self.catalogCalculation.run(sourceCat)
314 
315  resultStruct = pipeBase.Struct(outputExposure=exposure, outputCat=sourceCat)
316  return resultStruct
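A sketch of calling ``run`` directly on in-memory data (the catalog file name and the ``butler``/``dataId``
objects are assumed to exist and are not defined on this page)::

    import pandas as pd
    from lsst.pipe.tasks.processFakes import ProcessCcdWithFakesTask

    task = ProcessCcdWithFakesTask()
    fakeCat = pd.read_csv("fakeSources.csv")        # catalog of fakes to insert
    exposure = butler.get("calexp", dataId=dataId)  # calexp to add the fakes to

    result = task.run(fakeCat, exposure)
    exposureWithFakes = result.outputExposure
    sourceCat = result.outputCat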

◆ run() [2/2]

def lsst.pipe.base.pipelineTask.PipelineTask.run (   self,
  **kwargs 
)
inherited
Run task algorithm on in-memory data.

This method should be implemented in a subclass unless the task overrides
`adaptArgsAndRun` to do something different from its default
implementation. With default implementation of `adaptArgsAndRun` this
method will receive keyword arguments whose names will be the same as
names of configuration fields describing input dataset types. Argument
values will be data objects retrieved from data butler. If a dataset
type is configured with ``scalar`` field set to ``True`` then argument
value will be a single object, otherwise it will be a list of objects.

If the task needs to know its input or output DataIds then it has to
override `adaptArgsAndRun` method instead.

Returns
-------
struct : `Struct`
    See description of `adaptArgsAndRun` method.

Examples
--------
Typical implementation of this method may look like::

    def run(self, input, calib):
        # "input", "calib", and "output" are the names of the config fields

        # Assuming that input/calib datasets are `scalar` they are simple objects,
        # do something with inputs and calibs, produce output image.
        image = self.makeImage(input, calib)

        # If output dataset is `scalar` then return object, not list
        return Struct(output=image)

Definition at line 444 of file pipelineTask.py.

444  def run(self, **kwargs):
445  """Run task algorithm on in-memory data.
446 
447  This method should be implemented in a subclass unless the task overrides
448  `adaptArgsAndRun` to do something different from its default
449  implementation. With the default implementation of `adaptArgsAndRun`, this
450  method receives keyword arguments whose names are the same as the
451  names of the configuration fields describing the input dataset types. Argument
452  values are the data objects retrieved from the data butler. If a dataset
453  type is configured with its ``scalar`` field set to ``True``, the argument
454  value is a single object; otherwise it is a list of objects.
455 
456  If the task needs to know its input or output DataIds then it has to
457  override the `adaptArgsAndRun` method instead.
458 
459  Returns
460  -------
461  struct : `Struct`
462  See description of `adaptArgsAndRun` method.
463 
464  Examples
465  --------
466  Typical implementation of this method may look like::
467 
468  def run(self, input, calib):
469  # "input", "calib", and "output" are the names of the config fields
470 
471  # Assuming that input/calib datasets are `scalar` they are simple objects,
472  # do something with inputs and calibs, produce output image.
473  image = self.makeImage(input, calib)
474 
475  # If output dataset is `scalar` then return object, not list
476  return Struct(output=image)
477 
478  """
479  raise NotImplementedError("run() is not implemented")
480 

◆ runDataRef()

def lsst.pipe.tasks.processFakes.ProcessCcdWithFakesTask.runDataRef (   self,
  dataRef 
)
Read in/write out the required data products and add fake sources to the calexp.

Parameters
----------
dataRef : `lsst.daf.persistence.butlerSubset.ButlerDataRef`
    Data reference defining the ccd to have fakes added to it.
    Used to access the following data products:
        calexp
        jointcal_wcs
        jointcal_photoCalib

Notes
-----
Uses the calibration and WCS information attached to the calexp for the positioning and calibration
of the sources, unless the config option config.useUpdatedCalibs is set, in which case it uses the
meas_mosaic/jointCal outputs. The config defaults for the column names in the catalog of fakes are
taken from the University of Washington simulations database. Operates on one ccd at a time.

Definition at line 188 of file processFakes.py.

188  def runDataRef(self, dataRef):
189  """Read in/write out the required data products and add fake sources to the calexp.
190 
191  Parameters
192  ----------
193  dataRef : `lsst.daf.persistence.butlerSubset.ButlerDataRef`
194  Data reference defining the ccd to have fakes added to it.
195  Used to access the following data products:
196  calexp
197  jointcal_wcs
198  jointcal_photoCalib
199 
200  Notes
201  -----
202  Uses the calibration and WCS information attached to the calexp for the positioning and calibration
203  of the sources, unless the config option config.useUpdatedCalibs is set, in which case it uses the
204  meas_mosaic/jointCal outputs. The config defaults for the column names in the catalog of fakes are
205  taken from the University of Washington simulations database. Operates on one ccd at a time.
206  """
207  exposureIdInfo = dataRef.get("expIdInfo")
208 
209  if self.config.insertFakes.fakeType == "snapshot":
210  fakeCat = dataRef.get("fakeSourceCat").toDataFrame()
211  elif self.config.insertFakes.fakeType == "static":
212  fakeCat = dataRef.get("deepCoadd_fakeSourceCat").toDataFrame()
213  else:
214  fakeCat = Table.read(self.config.insertFakes.fakeType).to_pandas()
215 
216  calexp = dataRef.get("calexp")
217  if self.config.useUpdatedCalibs:
218  self.log.info("Using updated calibs from meas_mosaic/jointCal")
219  wcs = dataRef.get("jointcal_wcs")
220  photoCalib = dataRef.get("jointcal_photoCalib")
221  else:
222  wcs = calexp.getWcs()
223  photoCalib = calexp.getCalib()
224 
225  resultStruct = self.run(fakeCat, calexp, wcs=wcs, photoCalib=photoCalib,
226  exposureIdInfo=exposureIdInfo)
227 
228  dataRef.put(resultStruct.outputExposure, "fakes_calexp")
229  dataRef.put(resultStruct.outputCat, "fakes_src")
230 
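
A sketch of driving the task through ``runDataRef`` (again with a hypothetical repository path and
placeholder data id); it assumes ``config.insertFakes.fakeType`` points at an existing fake-source
catalog::

    from lsst.daf.persistence import Butler
    from lsst.pipe.tasks.processFakes import ProcessCcdWithFakesTask

    butler = Butler("REPO")                                                # hypothetical repository path
    dataRef = butler.dataRef("calexp", dataId={"visit": 1234, "ccd": 42})  # placeholder data id

    task = ProcessCcdWithFakesTask()
    task.runDataRef(dataRef)   # reads the calexp (plus jointcal outputs if useUpdatedCalibs is set)
                               # and writes fakes_calexp and fakes_src back to the repository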

◆ runQuantum()

def lsst.pipe.base.pipelineTask.PipelineTask.runQuantum (   self,
  quantum,
  butler 
)
inherited
Execute PipelineTask algorithm on single quantum of data.

A typical implementation of this method uses the inputs from the quantum
to retrieve Python-domain objects from the data butler and calls the
`adaptArgsAndRun` method on that data. On return from
`adaptArgsAndRun`, this method extracts the data from the returned
`Struct` instance and saves that data to the butler.

The `Struct` returned from `adaptArgsAndRun` is expected to contain
data attributes with names equal to the names of the
configuration fields defining the output dataset types. The values of
the data attributes must be data objects corresponding to
the DataIds of the output dataset types. All data objects are
saved to the butler using the DataRefs from the Quantum's output dictionary.

This method does not return anything to the caller; on error a
corresponding exception is raised.

Parameters
----------
quantum : `Quantum`
    Object describing input and output corresponding to this
    invocation of PipelineTask instance.
butler : object
    Data butler instance.

Raises
------
`ScalarError` if a dataset type is configured as scalar but receives
multiple DataIds in `quantum`. Any exceptions raised by the data
butler or by the `adaptArgsAndRun` method are propagated.

Definition at line 481 of file pipelineTask.py.

481  def runQuantum(self, quantum, butler):
482  """Execute PipelineTask algorithm on single quantum of data.
483 
484  A typical implementation of this method uses the inputs from the quantum
485  to retrieve Python-domain objects from the data butler and calls the
486  `adaptArgsAndRun` method on that data. On return from
487  `adaptArgsAndRun`, this method extracts the data from the returned
488  `Struct` instance and saves that data to the butler.
489 
490  The `Struct` returned from `adaptArgsAndRun` is expected to contain
491  data attributes with names equal to the names of the
492  configuration fields defining the output dataset types. The values of
493  the data attributes must be data objects corresponding to
494  the DataIds of the output dataset types. All data objects are
495  saved to the butler using the DataRefs from the Quantum's output dictionary.
496 
497  This method does not return anything to the caller; on error a
498  corresponding exception is raised.
499 
500  Parameters
501  ----------
502  quantum : `Quantum`
503  Object describing input and output corresponding to this
504  invocation of PipelineTask instance.
505  butler : object
506  Data butler instance.
507 
508  Raises
509  ------
510  `ScalarError` if a dataset type is configured as scalar but receives
511  multiple DataIds in `quantum`. Any exceptions raised by the data
512  butler or by the `adaptArgsAndRun` method are propagated.
513  """
514 
515  def makeDataRefs(descriptors, refMap):
516  """Generate map of DatasetRefs and DataIds.
517 
518  Given a map of DatasetTypeDescriptors and a map of Quantum
519  DatasetRefs, makes maps of DataIds and DatasetRefs.
520  For scalar dataset types unpacks DatasetRefs and DataIds.
521 
522  Parameters
523  ----------
524  descriptors : `dict`
525  Map of (dataset key, DatasetTypeDescriptor).
526  refMap : `dict`
527  Map of (dataset type name, DatasetRefs).
528 
529  Returns
530  -------
531  dataIds : `dict`
532  Map of (dataset key, DataIds)
533  dataRefs : `dict`
534  Map of (dataset key, DatasetRefs)
535 
536  Raises
537  ------
538  ScalarError
539  Raised if dataset type is configured as scalar but more than
540  one DatasetRef exists for it.
541  """
542  dataIds = {}
543  dataRefs = {}
544  for key, descriptor in descriptors.items():
545  keyDataRefs = refMap[descriptor.datasetType.name]
546  keyDataIds = [dataRef.dataId for dataRef in keyDataRefs]
547  if descriptor.scalar:
548  # unpack single-item lists
549  if len(keyDataRefs) != 1:
550  raise ScalarError(key, len(keyDataRefs))
551  keyDataRefs = keyDataRefs[0]
552  keyDataIds = keyDataIds[0]
553  dataIds[key] = keyDataIds
554  if not descriptor.manualLoad:
555  dataRefs[key] = keyDataRefs
556  return dataIds, dataRefs
557 
558  # lists of DataRefs/DataIds for input datasets
559  descriptors = self.getInputDatasetTypes(self.config)
560  inputDataIds, inputDataRefs = makeDataRefs(descriptors, quantum.predictedInputs)
561 
562  # get all data from butler
563  inputs = {}
564  for key, dataRefs in inputDataRefs.items():
565  if isinstance(dataRefs, list):
566  inputs[key] = [butler.get(dataRef) for dataRef in dataRefs]
567  else:
568  inputs[key] = butler.get(dataRefs)
569  del inputDataRefs
570 
571  # lists of DataRefs/DataIds for output datasets
572  descriptors = self.getOutputDatasetTypes(self.config)
573  outputDataIds, outputDataRefs = makeDataRefs(descriptors, quantum.outputs)
574 
575  # call run method with keyword arguments
576  struct = self.adaptArgsAndRun(inputs, inputDataIds, outputDataIds, butler)
577 
578  # store produced output data
579  self.saveStruct(struct, outputDataRefs, butler)
580 
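
The scalar-unpacking convention enforced by ``makeDataRefs`` above can be illustrated with a small
stand-alone toy (plain Python, not the LSST API)::

    def unpack(refs, scalar):
        # For scalar dataset types exactly one DataRef is allowed; unpack it.
        if scalar:
            if len(refs) != 1:
                raise ValueError("scalar dataset received %d refs" % len(refs))
            return refs[0]
        return refs

    print(unpack(["ref-A"], scalar=True))             # -> 'ref-A'
    print(unpack(["ref-A", "ref-B"], scalar=False))   # -> ['ref-A', 'ref-B']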

◆ saveStruct()

def lsst.pipe.base.pipelineTask.PipelineTask.saveStruct (   self,
  struct,
  outputDataRefs,
  butler 
)
inherited
Save data in butler.

The convention is that the struct returned from the ``run()`` method has data
field(s) with the same names as the config fields defining the
output DatasetTypes. Subclasses may override this method to implement a
different convention for the `Struct` content, or if any
post-processing of the data is needed.

Parameters
----------
struct : `Struct`
    Data produced by the task packed into `Struct` instance
outputDataRefs : `dict`
    Dictionary whose keys are the names of the configuration fields
    describing output dataset types and values are lists of DataRefs.
    DataRefs must match corresponding data objects in ``struct`` in
    number and order.
butler : object
    Data butler instance.

Definition at line 581 of file pipelineTask.py.

581  def saveStruct(self, struct, outputDataRefs, butler):
582  """Save data in butler.
583 
584  The convention is that the struct returned from the ``run()`` method has data
585  field(s) with the same names as the config fields defining the
586  output DatasetTypes. Subclasses may override this method to implement a
587  different convention for the `Struct` content, or if any
588  post-processing of the data is needed.
589 
590  Parameters
591  ----------
592  struct : `Struct`
593  Data produced by the task packed into `Struct` instance
594  outputDataRefs : `dict`
595  Dictionary whose keys are the names of the configuration fields
596  describing output dataset types and values are lists of DataRefs.
597  DataRefs must match corresponding data objects in ``struct`` in
598  number and order.
599  butler : object
600  Data butler instance.
601  """
602  structDict = struct.getDict()
603  descriptors = self.getOutputDatasetTypes(self.config)
604  for key in descriptors.keys():
605  dataList = structDict[key]
606  dataRefs = outputDataRefs[key]
607  if not isinstance(dataRefs, list):
608  # scalar outputs, make them lists again
609  dataRefs = [dataRefs]
610  dataList = [dataList]
611  # TODO: check that data objects and data refs are aligned
612  for dataRef, data in zip(dataRefs, dataList):
613  butler.put(data, dataRef.datasetType.name, dataRef.dataId)
614 
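
The ``Struct`` convention described above can be sketched as follows; the field names
``outputExposure`` and ``outputCat`` stand in for whatever output dataset config fields a task
defines::

    from lsst.pipe.base import Struct

    # Any Python objects will do for this sketch; real tasks return exposures and catalogs.
    struct = Struct(outputExposure="exposure-object", outputCat="catalog-object")
    print(struct.getDict())   # {'outputExposure': 'exposure-object', 'outputCat': 'catalog-object'}

``saveStruct`` then looks up each output key in this dict and calls ``butler.put`` with the matching
DataRef.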

◆ timer()

def lsst.pipe.base.task.Task.timer (   self,
  name,
  logLevel = Log.DEBUG 
)
inherited
Context manager to log performance data for an arbitrary block of code.

Parameters
----------
name : `str`
    Name of code being timed; data will be logged using item name: ``Start`` and ``End``.
logLevel
    A `lsst.log` level constant.

Examples
--------
Creating a timer context::

    with self.timer("someCodeToTime"):
        pass  # code to time

See also
--------
timer.logInfo

Definition at line 301 of file task.py.

301  def timer(self, name, logLevel=Log.DEBUG):
302  """Context manager to log performance data for an arbitrary block of code.
303 
304  Parameters
305  ----------
306  name : `str`
307  Name of code being timed; data will be logged using item name: ``Start`` and ``End``.
308  logLevel
309  A `lsst.log` level constant.
310 
311  Examples
312  --------
313  Creating a timer context::
314 
315  with self.timer("someCodeToTime"):
316  pass # code to time
317 
318  See also
319  --------
320  timer.logInfo
321  """
322  logInfo(obj=self, prefix=name + "Start", logLevel=logLevel)
323  try:
324  yield
325  finally:
326  logInfo(obj=self, prefix=name + "End", logLevel=logLevel)
327 
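
A more complete usage sketch, assuming a minimal ``Task`` subclass (the class, field, and block names
below are illustrative only)::

    import lsst.pex.config as pexConfig
    from lsst.pipe.base import Task

    class DemoConfig(pexConfig.Config):
        pass

    class DemoTask(Task):
        ConfigClass = DemoConfig
        _DefaultName = "demo"

        def run(self):
            with self.timer("squares"):
                return sum(i * i for i in range(100000))

    task = DemoTask()
    task.run()
    # Timing entries with the "squaresStart"/"squaresEnd" prefixes are recorded in task.metadata
    print(task.metadata.names())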


◆ writeConfig()

def lsst.pipe.base.cmdLineTask.CmdLineTask.writeConfig (   self,
  butler,
  clobber = False,
  doBackup = True 
)
inherited
Write the configuration used for processing the data, or check that an existing
one is equal to the new one if present.

Parameters
----------
butler : `lsst.daf.persistence.Butler`
    Data butler used to write the config. The config is written to dataset type
    `CmdLineTask._getConfigName`.
clobber : `bool`, optional
    A boolean flag that controls what happens if a config already has been saved:
    - `True`: overwrite or rename the existing config, depending on ``doBackup``.
    - `False`: raise `TaskError` if this config does not match the existing config.
doBackup : bool, optional
    Set to `True` to backup the config files if clobbering.

Definition at line 649 of file cmdLineTask.py.

649  def writeConfig(self, butler, clobber=False, doBackup=True):
650  """Write the configuration used for processing the data, or check that an existing
651  one is equal to the new one if present.
652 
653  Parameters
654  ----------
655  butler : `lsst.daf.persistence.Butler`
656  Data butler used to write the config. The config is written to dataset type
657  `CmdLineTask._getConfigName`.
658  clobber : `bool`, optional
659  A boolean flag that controls what happens if a config already has been saved:
660  - `True`: overwrite or rename the existing config, depending on ``doBackup``.
661  - `False`: raise `TaskError` if this config does not match the existing config.
662  doBackup : bool, optional
663  Set to `True` to backup the config files if clobbering.
664  """
665  configName = self._getConfigName()
666  if configName is None:
667  return
668  if clobber:
669  butler.put(self.config, configName, doBackup=doBackup)
670  elif butler.datasetExists(configName, write=True):
671  # this may be subject to a race condition; see #2789
672  try:
673  oldConfig = butler.get(configName, immediate=True)
674  except Exception as exc:
675  raise type(exc)("Unable to read stored config file %s (%s); consider using --clobber-config" %
676  (configName, exc))
677 
678  def logConfigMismatch(msg):
679  self.log.fatal("Comparing configuration: %s", msg)
680 
681  if not self.config.compare(oldConfig, shortcut=False, output=logConfigMismatch):
682  raise TaskError(
683  ("Config does not match existing task config %r on disk; tasks configurations " +
684  "must be consistent within the same output repo (override with --clobber-config)") %
685  (configName,))
686  else:
687  butler.put(self.config, configName)
688 
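
The ``config.compare`` check used above can be sketched in isolation with a throw-away config class
(the field name and values are illustrative only)::

    from lsst.pex.config import Config, Field

    class DemoConfig(Config):
        threshold = Field(dtype=float, default=5.0, doc="demo threshold")

    stored = DemoConfig()
    current = DemoConfig()
    current.threshold = 10.0

    mismatches = []
    equal = current.compare(stored, shortcut=False, output=mismatches.append)
    print(equal)        # False
    print(mismatches)   # messages describing the differing field(s)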

◆ writeMetadata()

def lsst.pipe.base.cmdLineTask.CmdLineTask.writeMetadata (   self,
  dataRef 
)
inherited
Write the metadata produced from processing the data.

Parameters
----------
dataRef
    Butler data reference used to write the metadata.
    The metadata is written to dataset type `CmdLineTask._getMetadataName`.

Definition at line 724 of file cmdLineTask.py.

724  def writeMetadata(self, dataRef):
725  """Write the metadata produced from processing the data.
726 
727  Parameters
728  ----------
729  dataRef
730  Butler data reference used to write the metadata.
731  The metadata is written to dataset type `CmdLineTask._getMetadataName`.
732  """
733  try:
734  metadataName = self._getMetadataName()
735  if metadataName is not None:
736  dataRef.put(self.getFullMetadata(), metadataName)
737  except Exception as e:
738  self.log.warn("Could not persist metadata for dataId=%s: %s", dataRef.dataId, e)
739 

◆ writePackageVersions()

def lsst.pipe.base.cmdLineTask.CmdLineTask.writePackageVersions (   self,
  butler,
  clobber = False,
  doBackup = True,
  dataset = "packages" 
)
inherited
Compare and write package versions.

Parameters
----------
butler : `lsst.daf.persistence.Butler`
    Data butler used to read/write the package versions.
clobber : `bool`, optional
    A boolean flag that controls what happens if versions already have been saved:
    - `True`: overwrite or rename the existing version info, depending on ``doBackup``.
    - `False`: raise `TaskError` if this version info does not match the existing.
doBackup : `bool`, optional
    If `True` and clobbering, old package version files are backed up.
dataset : `str`, optional
    Name of dataset to read/write.

Raises
------
TaskError
    Raised if there is a version mismatch with current and persisted lists of package versions.

Notes
-----
Note that this operation is subject to a race condition.

Definition at line 740 of file cmdLineTask.py.

740  def writePackageVersions(self, butler, clobber=False, doBackup=True, dataset="packages"):
741  """Compare and write package versions.
742 
743  Parameters
744  ----------
745  butler : `lsst.daf.persistence.Butler`
746  Data butler used to read/write the package versions.
747  clobber : `bool`, optional
748  A boolean flag that controls what happens if versions already have been saved:
749  - `True`: overwrite or rename the existing version info, depending on ``doBackup``.
750  - `False`: raise `TaskError` if this version info does not match the existing.
751  doBackup : `bool`, optional
752  If `True` and clobbering, old package version files are backed up.
753  dataset : `str`, optional
754  Name of dataset to read/write.
755 
756  Raises
757  ------
758  TaskError
759  Raised if there is a version mismatch with current and persisted lists of package versions.
760 
761  Notes
762  -----
763  Note that this operation is subject to a race condition.
764  """
765  packages = Packages.fromSystem()
766 
767  if clobber:
768  return butler.put(packages, dataset, doBackup=doBackup)
769  if not butler.datasetExists(dataset, write=True):
770  return butler.put(packages, dataset)
771 
772  try:
773  old = butler.get(dataset, immediate=True)
774  except Exception as exc:
775  raise type(exc)("Unable to read stored version dataset %s (%s); "
776  "consider using --clobber-versions or --no-versions" %
777  (dataset, exc))
778  # Note that because we can only detect python modules that have been imported, the stored
779  # list of products may be more or less complete than what we have now. What's important is
780  # that the products that are in common have the same version.
781  diff = packages.difference(old)
782  if diff:
783  raise TaskError(
784  "Version mismatch (" +
785  "; ".join("%s: %s vs %s" % (pkg, diff[pkg][1], diff[pkg][0]) for pkg in diff) +
786  "); consider using --clobber-versions or --no-versions")
787  # Update the old set of packages in case we have more packages that haven't been persisted.
788  extra = packages.extra(old)
789  if extra:
790  old.update(packages)
791  butler.put(old, dataset, doBackup=doBackup)
792 
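
The version bookkeeping above relies on ``lsst.base.Packages``; a stand-alone sketch of the comparison,
assuming the stack is set up and using the current session as a stand-in for the persisted dataset::

    from lsst.base import Packages

    current = Packages.fromSystem()   # versions of the products imported in this session
    stored = Packages.fromSystem()    # stand-in for butler.get("packages")

    print(current.difference(stored))   # products in both but with different versions ({} here)
    print(current.extra(stored))        # products known only to the current session ({} here)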

◆ writeSchemas()

def lsst.pipe.base.cmdLineTask.CmdLineTask.writeSchemas (   self,
  butler,
  clobber = False,
  doBackup = True 
)
inherited
Write the schemas returned by `lsst.pipe.base.Task.getAllSchemaCatalogs`.

Parameters
----------
butler : `lsst.daf.persistence.Butler`
    Data butler used to write the schema. Each schema is written to the dataset type specified as the
    key in the dict returned by `~lsst.pipe.base.Task.getAllSchemaCatalogs`.
clobber : `bool`, optional
    A boolean flag that controls what happens if a schema already has been saved:
    - `True`: overwrite or rename the existing schema, depending on ``doBackup``.
    - `False`: raise `TaskError` if this schema does not match the existing schema.
doBackup : `bool`, optional
    Set to `True` to backup the schema files if clobbering.

Notes
-----
If ``clobber`` is `False` and an existing schema does not match a current schema,
then some schemas may have been saved successfully and others may not, and there is no easy way to
tell which is which.

Definition at line 689 of file cmdLineTask.py.

689  def writeSchemas(self, butler, clobber=False, doBackup=True):
690  """Write the schemas returned by `lsst.pipe.base.Task.getAllSchemaCatalogs`.
691 
692  Parameters
693  ----------
694  butler : `lsst.daf.persistence.Butler`
695  Data butler used to write the schema. Each schema is written to the dataset type specified as the
696  key in the dict returned by `~lsst.pipe.base.Task.getAllSchemaCatalogs`.
697  clobber : `bool`, optional
698  A boolean flag that controls what happens if a schema already has been saved:
699  - `True`: overwrite or rename the existing schema, depending on ``doBackup``.
700  - `False`: raise `TaskError` if this schema does not match the existing schema.
701  doBackup : `bool`, optional
702  Set to `True` to backup the schema files if clobbering.
703 
704  Notes
705  -----
706  If ``clobber`` is `False` and an existing schema does not match a current schema,
707  then some schemas may have been saved successfully and others may not, and there is no easy way to
708  tell which is which.
709  """
710  for dataset, catalog in self.getAllSchemaCatalogs().items():
711  schemaDataset = dataset + "_schema"
712  if clobber:
713  butler.put(catalog, schemaDataset, doBackup=doBackup)
714  elif butler.datasetExists(schemaDataset, write=True):
715  oldSchema = butler.get(schemaDataset, immediate=True).getSchema()
716  if not oldSchema.compare(catalog.getSchema(), afwTable.Schema.IDENTICAL):
717  raise TaskError(
718  ("New schema does not match schema %r on disk; schemas must be " +
719  " consistent within the same output repo (override with --clobber-config)") %
720  (dataset,))
721  else:
722  butler.put(catalog, schemaDataset)
723 
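
The schema equality test used above can be sketched with two minimal source schemas (a toy check, not
taken from the source)::

    import lsst.afw.table as afwTable

    schema1 = afwTable.SourceTable.makeMinimalSchema()
    schema2 = afwTable.SourceTable.makeMinimalSchema()

    # compare() returns a truthy set of flags when the schemas match at the requested level;
    # writeSchemas above raises TaskError when this test fails for a persisted schema.
    if not schema1.compare(schema2, afwTable.Schema.IDENTICAL):
        raise RuntimeError("schemas differ")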

Member Data Documentation

◆ algMetadata

lsst.pipe.tasks.processFakes.ProcessCcdWithFakesTask.algMetadata

Definition at line 181 of file processFakes.py.

◆ canMultiprocess [1/2]

bool lsst.pipe.base.pipelineTask.PipelineTask.canMultiprocess = True
staticinherited

Definition at line 161 of file pipelineTask.py.

◆ canMultiprocess [2/2]

bool lsst.pipe.base.cmdLineTask.CmdLineTask.canMultiprocess = True
staticinherited

Definition at line 524 of file cmdLineTask.py.

◆ config

lsst.pipe.base.task.Task.config
inherited

Definition at line 149 of file task.py.

◆ ConfigClass

lsst.pipe.tasks.processFakes.ProcessCcdWithFakesTask.ConfigClass = ProcessCcdWithFakesConfig
static

Definition at line 169 of file processFakes.py.

◆ log

lsst.pipe.base.task.Task.log
inherited

Definition at line 148 of file task.py.

◆ metadata

lsst.pipe.base.task.Task.metadata
inherited

Definition at line 121 of file task.py.

◆ RunnerClass

lsst.pipe.base.cmdLineTask.CmdLineTask.RunnerClass = TaskRunner
staticinherited

Definition at line 523 of file cmdLineTask.py.

◆ schema

lsst.pipe.tasks.processFakes.ProcessCcdWithFakesTask.schema

Definition at line 179 of file processFakes.py.


The documentation for this class was generated from the following file:
  • /j/snowflake/release/lsstsw/stack/Linux64/pipe_tasks/18.1.0/python/lsst/pipe/tasks/processFakes.py