lsst.pipe.drivers.coaddDriver.CoaddDriverTask Class Reference
Inheritance diagram for lsst.pipe.drivers.coaddDriver.CoaddDriverTask:

lsst.pipe.drivers.coaddDriver.CoaddDriverTask → lsst.ctrl.pool.parallel.BatchPoolTask → lsst.ctrl.pool.parallel.BatchCmdLineTask → lsst.pipe.base.cmdLineTask.CmdLineTask → lsst.pipe.base.task.Task

Public Member Functions

def __init__ (self, reuse=tuple(), **kwargs)
 
def __reduce__ (self)
 
def batchWallTime (cls, time, parsedCmd, numCores)
 Return walltime request for batch job. More...
 
def runDataRef (self, tractPatchRefList, butler, selectIdList=[])
 Determine which tracts are non-empty before processing. More...
 
def run (self, patchRefList, butler, selectDataList=[])
 Run stacking on a tract. More...
 
def readSelection (self, cache, selectId)
 Read Wcs of selected inputs. More...
 
def checkTract (self, cache, tractId, selectIdList)
 Check whether a tract has any overlapping inputs. More...
 
def warp (self, cache, patchId, selectDataList)
 Warp all images for a patch. More...
 
def coadd (self, cache, data)
 Construct coadd for a patch and measure. More...
 
def selectExposures (self, patchRef, selectDataList)
 Select exposures to operate upon, via the SelectImagesTask. More...
 
def writeMetadata (self, dataRef)
 
def parseAndRun (cls, *args, **kwargs)
 
def parseAndRun (cls, args=None, config=None, log=None, doReturnResults=False)
 
def parseAndSubmit (cls, args=None, **kwargs)
 
def batchCommand (cls, args)
 Return command to run CmdLineTask. More...
 
def logOperation (self, operation, catch=False, trace=True)
 Provide a context manager for logging an operation. More...
 
def applyOverrides (cls, config)
 
def writeConfig (self, butler, clobber=False, doBackup=True)
 
def writeSchemas (self, butler, clobber=False, doBackup=True)
 
def writePackageVersions (self, butler, clobber=False, doBackup=True, dataset="packages")
 
def emptyMetadata (self)
 
def getSchemaCatalogs (self)
 
def getAllSchemaCatalogs (self)
 
def getFullMetadata (self)
 
def getFullName (self)
 
def getName (self)
 
def getTaskDict (self)
 
def makeSubtask (self, name, **keyArgs)
 
def timer (self, name, logLevel=Log.DEBUG)
 
def makeField (cls, doc)
 

Public Attributes

 reuse
 
 calexpType
 
 metadata
 
 log
 
 config
 

Static Public Attributes

 ConfigClass = CoaddDriverConfig
 
 RunnerClass = CoaddDriverTaskRunner
 
bool canMultiprocess = True
 

Detailed Description

Driver for coaddition: on the master node it determines which tracts have overlapping inputs, then distributes the per-patch work (selecting and warping input exposures, assembling the coadd, and optionally detecting sources on it) over an MPI process pool.

Definition at line 84 of file coaddDriver.py.

Constructor & Destructor Documentation

◆ __init__()

def lsst.pipe.drivers.coaddDriver.CoaddDriverTask.__init__ (   self,
  reuse = tuple(),
**  kwargs 
)

Definition at line 89 of file coaddDriver.py.

89  def __init__(self, reuse=tuple(), **kwargs):
90  BatchPoolTask.__init__(self, **kwargs)
91  self.reuse = reuse
92  self.makeSubtask("select")
93  self.makeSubtask("makeCoaddTempExp", reuse=("makeCoaddTempExp" in self.reuse))
94  self.makeSubtask("backgroundReference")
95  self.makeSubtask("assembleCoadd")
96  self.makeSubtask("detectCoaddSources")
97  if self.config.hasFakes:
98  self.calexpType = "fakes_calexp"
99  else:
100  self.calexpType = "calexp"
101 
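
The ``reuse`` tuple names subtasks whose existing outputs may be reused rather than regenerated; ``__init__`` consults it for "makeCoaddTempExp", and ``coadd`` below consults it for "assembleCoadd" and "detectCoaddSources". A minimal construction sketch (normally the task runner builds the task from parsed command-line arguments, so direct construction here is illustrative only):

.. code-block:: python

    from lsst.pipe.drivers.coaddDriver import CoaddDriverTask

    # Illustrative: reuse warps and coadds that already exist on disk.
    task = CoaddDriverTask(reuse=("makeCoaddTempExp", "assembleCoadd"))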

Member Function Documentation

◆ __reduce__()

def lsst.pipe.drivers.coaddDriver.CoaddDriverTask.__reduce__ (   self)
Pickler

Reimplemented from lsst.pipe.base.task.Task.

Definition at line 102 of file coaddDriver.py.

102  def __reduce__(self):
103  """Pickler"""
104  return unpickle, (self.__class__, [], dict(config=self.config, name=self._name,
105  parentTask=self._parentTask, log=self.log,
106  reuse=self.reuse))
107 
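
Pickling matters here because the MPI process pool ships the task to slave nodes; this override preserves ``reuse`` across the round trip. A minimal sketch, assuming ``task`` is an already-constructed CoaddDriverTask:

.. code-block:: python

    import pickle

    clone = pickle.loads(pickle.dumps(task))  # round-trips via __reduce__
    assert clone.reuse == task.reuse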

◆ applyOverrides()

def lsst.pipe.base.cmdLineTask.CmdLineTask.applyOverrides (   cls,
  config 
)
inherited
A hook to allow a task to change the values of its config *after*
the camera-specific overrides are loaded but before any command-line
overrides are applied.

Parameters
----------
config : instance of task's ``ConfigClass``
    Task configuration.

Notes
-----
This is necessary in some cases because the camera-specific overrides
may retarget subtasks, wiping out changes made in
ConfigClass.setDefaults. See LSST Trac ticket #2282 for more
discussion.

.. warning::

   This is called by CmdLineTask.parseAndRun; other ways of
   constructing a config will not apply these overrides.

Reimplemented in lsst.pipe.drivers.constructCalibs.FringeTask, lsst.pipe.drivers.constructCalibs.FlatTask, lsst.pipe.drivers.constructCalibs.DarkTask, and lsst.pipe.drivers.constructCalibs.BiasTask.

Definition at line 587 of file cmdLineTask.py.

587  def applyOverrides(cls, config):
588  """A hook to allow a task to change the values of its config *after*
589  the camera-specific overrides are loaded but before any command-line
590  overrides are applied.
591 
592  Parameters
593  ----------
594  config : instance of task's ``ConfigClass``
595  Task configuration.
596 
597  Notes
598  -----
599  This is necessary in some cases because the camera-specific overrides
600  may retarget subtasks, wiping out changes made in
601  ConfigClass.setDefaults. See LSST Trac ticket #2282 for more
602  discussion.
603 
604  .. warning::
605 
606  This is called by CmdLineTask.parseAndRun; other ways of
607  constructing a config will not apply these overrides.
608  """
609  pass
610 
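
A sketch of how a subclass might use this hook; the subclass name is illustrative, while ``doDetection`` is a CoaddDriverConfig field used by ``coadd`` below:

.. code-block:: python

    class MyCoaddDriverTask(CoaddDriverTask):
        @classmethod
        def applyOverrides(cls, config):
            # Re-assert a setting that a camera-specific override may have
            # wiped out by retargeting a subtask.
            config.doDetection = True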

◆ batchCommand()

def lsst.ctrl.pool.parallel.BatchCmdLineTask.batchCommand (   cls,
  args 
)
inherited

Return command to run CmdLineTask.

    @param cls: Class
    @param args: Parsed batch job arguments (from BatchArgumentParser)

Definition at line 476 of file parallel.py.

476  def batchCommand(cls, args):
477  """!Return command to run CmdLineTask
478 
479  @param cls: Class
480  @param args: Parsed batch job arguments (from BatchArgumentParser)
481  """
482  job = args.job if args.job is not None else "job"
483  module = cls.__module__
484  script = ("import os; os.umask(%#05o); " +
485  "import lsst.base; lsst.base.disableImplicitThreading(); " +
486  "import lsst.ctrl.pool.log; lsst.ctrl.pool.log.jobLog(\"%s\"); ") % (UMASK, job)
487 
488  if args.batchStats:
489  script += ("import lsst.ctrl.pool.parallel; import atexit; " +
490  "atexit.register(lsst.ctrl.pool.parallel.printProcessStats); ")
491 
492  script += "import %s; %s.%s.parseAndRun();" % (module, module, cls.__name__)
493 
494  profilePre = "import cProfile; import os; cProfile.run(\"\"\""
495  profilePost = "\"\"\", filename=\"profile-" + job + "-%s-%d.dat\" % (os.uname()[1], os.getpid()))"
496 
497  return ("python -c '" + (profilePre if args.batchProfile else "") + script +
498  (profilePost if args.batchProfile else "") + "' " + shCommandFromArgs(args.leftover) +
499  " --noExit")
500 
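
Roughly, the returned string has the following shape; the umask value, job name, and trailing task arguments are hypothetical:

.. code-block:: python

    command = (
        "python -c 'import os; os.umask(0o002); "
        "import lsst.base; lsst.base.disableImplicitThreading(); "
        'import lsst.ctrl.pool.log; lsst.ctrl.pool.log.jobLog("job"); '
        "import lsst.pipe.drivers.coaddDriver; "
        "lsst.pipe.drivers.coaddDriver.CoaddDriverTask.parseAndRun();' "
        "--id tract=0 --noExit"
    )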

◆ batchWallTime()

def lsst.pipe.drivers.coaddDriver.CoaddDriverTask.batchWallTime (   cls,
  time,
  parsedCmd,
  numCores 
)

Return walltime request for batch job.

Parameters
----------
time
    Requested time per iteration.
parsedCmd
    Results of argument parsing.
numCores
    Number of cores.

Returns
-------
`float`
    Walltime request length.

Reimplemented from lsst.ctrl.pool.parallel.BatchCmdLineTask.

Definition at line 125 of file coaddDriver.py.

125  def batchWallTime(cls, time, parsedCmd, numCores):
126  """!
127  Return walltime request for batch job
128 
129  @param time: Requested time per iteration
130  @param parsedCmd: Results of argument parsing
131  @param numCores: Number of cores
132  @return float walltime request length
133  """
134  numTargets = len(parsedCmd.selectId.refList)
135  return time*numTargets/float(numCores)
136 
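
The request scales linearly with the number of selected inputs and inversely with the core count. A worked sketch with hypothetical numbers:

.. code-block:: python

    time = 600.0      # requested seconds per iteration
    numTargets = 120  # len(parsedCmd.selectId.refList)
    numCores = 24
    walltime = time*numTargets/float(numCores)  # 600.0*120/24 == 3000.0 s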

◆ checkTract()

def lsst.pipe.drivers.coaddDriver.CoaddDriverTask.checkTract (   self,
  cache,
  tractId,
  selectIdList 
)

Check whether a tract has any overlapping inputs.

    This method only runs on slave nodes.

    @param cache: Pool cache
    @param tractId: Data identifier for tract
    @param selectIdList: List of selection data
    @return whether tract has any overlapping inputs

Definition at line 223 of file coaddDriver.py.

223  def checkTract(self, cache, tractId, selectIdList):
224  """!Check whether a tract has any overlapping inputs
225 
226  This method only runs on slave nodes.
227 
228  @param cache: Pool cache
229  @param tractId: Data identifier for tract
230  @param selectIdList: List of selection data
231  @return whether tract has any overlapping inputs
232  """
233  def makePolygon(wcs, bbox):
234  """Return a polygon for the image, given Wcs and bounding box"""
235  boxPixelCorners = geom.Box2D(bbox).getCorners()
236  boxSkyCorners = wcs.pixelToSky(boxPixelCorners)
237  return lsst.sphgeom.ConvexPolygon.convexHull([coord.getVector() for coord in boxSkyCorners])
238 
239  skymap = cache.skymap
240  tract = skymap[tractId]
241  tractWcs = tract.getWcs()
242  tractPoly = makePolygon(tractWcs, tract.getBBox())
243 
244  for selectData in selectIdList:
245  if not hasattr(selectData, "poly"):
246  selectData.poly = makePolygon(selectData.wcs, selectData.bbox)
247  if tractPoly.intersects(selectData.poly):
248  return True
249  return False
250 

◆ coadd()

def lsst.pipe.drivers.coaddDriver.CoaddDriverTask.coadd (   self,
  cache,
  data 
)

Construct coadd for a patch and measure.

    Only slave nodes execute this method.

    Because only one argument may be passed, it is expected to
    contain multiple elements, which are:

    @param patchId: Data identifier for patch
    @param selectDataList: List of SelectStruct for inputs

Definition at line 269 of file coaddDriver.py.

269  def coadd(self, cache, data):
270  """!Construct coadd for a patch and measure
271 
272  Only slave nodes execute this method.
273 
274  Because only one argument may be passed, it is expected to
275  contain multiple elements, which are:
276 
277  @param patchId: Data identifier for patch
278  @param selectDataList: List of SelectStruct for inputs
279  """
280  patchRef = getDataRef(cache.butler, data.patchId, cache.coaddType)
281  selectDataList = data.selectDataList
282  coadd = None
283 
284  # We skip the assembleCoadd step if either the *Coadd dataset exists
285  # or we aren't configured to write it, we're supposed to reuse
286  # detectCoaddSources outputs too, and those outputs already exist.
287  canSkipDetection = (
288  "detectCoaddSources" in self.reuse and
289  patchRef.datasetExists(self.detectCoaddSources.config.coaddName+"Coadd_det", write=True)
290  )
291  if "assembleCoadd" in self.reuse:
292  if patchRef.datasetExists(cache.coaddType, write=True):
293  self.log.info("%s: Skipping assembleCoadd for %s; outputs already exist." %
294  (NODE, patchRef.dataId))
295  coadd = patchRef.get(cache.coaddType, immediate=True)
296  elif not self.config.assembleCoadd.doWrite and self.config.doDetection and canSkipDetection:
297  self.log.info(
298  "%s: Skipping assembleCoadd and detectCoaddSources for %s; outputs already exist." %
299  (NODE, patchRef.dataId)
300  )
301  return
302  if coadd is None:
303  with self.logOperation("coadding %s" % (patchRef.dataId,), catch=True):
304  coaddResults = self.assembleCoadd.runDataRef(patchRef, selectDataList)
305  if coaddResults is not None:
306  coadd = coaddResults.coaddExposure
307  canSkipDetection = False # can't skip it because coadd may have changed
308  if coadd is None:
309  return
310 
311  # The section of code below determines if the detection task should be
312  # run. If detection is run, then the products are written out as
313  # deepCoadd_calexp. If detection is not run, then the outputs of the
314  # assemble task are written out as deepCoadd.
315  if self.config.doDetection:
316  if canSkipDetection:
317  self.log.info("%s: Skipping detectCoaddSources for %s; outputs already exist." %
318  (NODE, patchRef.dataId))
319  return
320  with self.logOperation("detection on {}".format(patchRef.dataId),
321  catch=True):
322  idFactory = self.detectCoaddSources.makeIdFactory(patchRef)
323  expId = int(patchRef.get(self.config.coaddName + "CoaddId"))
324  # This includes background subtraction, so do it before writing
325  # the coadd
326  detResults = self.detectCoaddSources.run(coadd, idFactory, expId=expId)
327  self.detectCoaddSources.write(detResults, patchRef)
328  else:
329  if self.config.hasFakes:
330  patchRef.put(coadd, "fakes_" + self.assembleCoadd.config.coaddName + "Coadd")
331  else:
332  patchRef.put(coadd, self.assembleCoadd.config.coaddName + "Coadd")
333 

◆ emptyMetadata()

def lsst.pipe.base.task.Task.emptyMetadata (   self)
inherited
Empty (clear) the metadata for this Task and all sub-Tasks.

Definition at line 166 of file task.py.

166  def emptyMetadata(self):
167  """Empty (clear) the metadata for this Task and all sub-Tasks.
168  """
169  for subtask in self._taskDict.values():
170  subtask.metadata = dafBase.PropertyList()
171 

◆ getAllSchemaCatalogs()

def lsst.pipe.base.task.Task.getAllSchemaCatalogs (   self)
inherited
Get schema catalogs for all tasks in the hierarchy, combining the
results into a single dict.

Returns
-------
schemacatalogs : `dict`
    Keys are butler dataset type, values are an empty catalog (an
    instance of the appropriate `lsst.afw.table` Catalog type) for all
    tasks in the hierarchy, from the top-level task down
    through all subtasks.

Notes
-----
This method may be called on any task in the hierarchy; it will return
the same answer, regardless.

The default implementation should always suffice. If your subtask uses
schemas, then override `Task.getSchemaCatalogs`, not this method.

Definition at line 204 of file task.py.

204  def getAllSchemaCatalogs(self):
205  """Get schema catalogs for all tasks in the hierarchy, combining the
206  results into a single dict.
207 
208  Returns
209  -------
210  schemacatalogs : `dict`
211  Keys are butler dataset type, values are an empty catalog (an
212  instance of the appropriate `lsst.afw.table` Catalog type) for all
213  tasks in the hierarchy, from the top-level task down
214  through all subtasks.
215 
216  Notes
217  -----
218  This method may be called on any task in the hierarchy; it will return
219  the same answer, regardless.
220 
221  The default implementation should always suffice. If your subtask uses
222  schemas, then override `Task.getSchemaCatalogs`, not this method.
223  """
224  schemaDict = self.getSchemaCatalogs()
225  for subtask in self._taskDict.values():
226  schemaDict.update(subtask.getSchemaCatalogs())
227  return schemaDict
228 

◆ getFullMetadata()

def lsst.pipe.base.task.Task.getFullMetadata (   self)
inherited
Get metadata for all tasks.

Returns
-------
metadata : `lsst.daf.base.PropertySet`
    The `~lsst.daf.base.PropertySet` keys are the full task name.
    Values are metadata for the top-level task and all subtasks,
    sub-subtasks, etc.

Notes
-----
The returned metadata includes timing information (if
``@timer.timeMethod`` is used) and any metadata set by the task. The
name of each item consists of the full task name with ``.`` replaced
by ``:``, followed by ``.`` and the name of the item, e.g.::

    topLevelTaskName:subtaskName:subsubtaskName.itemName

Using ``:`` in the full task name disambiguates the rare situation
that a task has a subtask and a metadata item with the same name.

Definition at line 229 of file task.py.

229  def getFullMetadata(self):
230  """Get metadata for all tasks.
231 
232  Returns
233  -------
234  metadata : `lsst.daf.base.PropertySet`
235  The `~lsst.daf.base.PropertySet` keys are the full task name.
236  Values are metadata for the top-level task and all subtasks,
237  sub-subtasks, etc.
238 
239  Notes
240  -----
241  The returned metadata includes timing information (if
242  ``@timer.timeMethod`` is used) and any metadata set by the task. The
243  name of each item consists of the full task name with ``.`` replaced
244  by ``:``, followed by ``.`` and the name of the item, e.g.::
245 
246  topLevelTaskName:subtaskName:subsubtaskName.itemName
247 
248  Using ``:`` in the full task name disambiguates the rare situation
249  that a task has a subtask and a metadata item with the same name.
250  """
251  fullMetadata = dafBase.PropertySet()
252  for fullName, task in self.getTaskDict().items():
253  fullMetadata.set(fullName.replace(".", ":"), task.metadata)
254  return fullMetadata
255 
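
A sketch of inspecting the combined metadata after a run; ``task`` and the printed item name are illustrative:

.. code-block:: python

    fullMetadata = task.getFullMetadata()
    for name in fullMetadata.names():
        print(name)  # e.g. "coaddDriver:assembleCoadd.runEndCpuTime"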

◆ getFullName()

def lsst.pipe.base.task.Task.getFullName (   self)
inherited
Get the task name as a hierarchical name including parent task
names.

Returns
-------
fullName : `str`
    The full name consists of the name of the parent task and each
    subtask separated by periods. For example:

    - The full name of top-level task "top" is simply "top".
    - The full name of subtask "sub" of top-level task "top" is
      "top.sub".
    - The full name of subtask "sub2" of subtask "sub" of top-level
      task "top" is "top.sub.sub2".

Definition at line 256 of file task.py.

256  def getFullName(self):
257  """Get the task name as a hierarchical name including parent task
258  names.
259 
260  Returns
261  -------
262  fullName : `str`
263  The full name consists of the name of the parent task and each
264  subtask separated by periods. For example:
265 
266  - The full name of top-level task "top" is simply "top".
267  - The full name of subtask "sub" of top-level task "top" is
268  "top.sub".
269  - The full name of subtask "sub2" of subtask "sub" of top-level
270  task "top" is "top.sub.sub2".
271  """
272  return self._fullName
273 

◆ getName()

def lsst.pipe.base.task.Task.getName (   self)
inherited
Get the name of the task.

Returns
-------
taskName : `str`
    Name of the task.

See also
--------
getFullName

Definition at line 274 of file task.py.

274  def getName(self):
275  """Get the name of the task.
276 
277  Returns
278  -------
279  taskName : `str`
280  Name of the task.
281 
282  See also
283  --------
284  getFullName
285  """
286  return self._name
287 

◆ getSchemaCatalogs()

def lsst.pipe.base.task.Task.getSchemaCatalogs (   self)
inherited
Get the schemas generated by this task.

Returns
-------
schemaCatalogs : `dict`
    Keys are butler dataset type, values are an empty catalog (an
    instance of the appropriate `lsst.afw.table` Catalog type) for
    this task.

Notes
-----

.. warning::

   Subclasses that use schemas must override this method. The default
   implementation returns an empty dict.

This method may be called at any time after the Task is constructed,
which means that all task schemas should be computed at construction
time, *not* when data is actually processed. This reflects the
philosophy that the schema should not depend on the data.

Returning catalogs rather than just schemas allows us to save e.g.
slots for SourceCatalog as well.

See also
--------
Task.getAllSchemaCatalogs

Definition at line 172 of file task.py.

172  def getSchemaCatalogs(self):
173  """Get the schemas generated by this task.
174 
175  Returns
176  -------
177  schemaCatalogs : `dict`
178  Keys are butler dataset type, values are an empty catalog (an
179  instance of the appropriate `lsst.afw.table` Catalog type) for
180  this task.
181 
182  Notes
183  -----
184 
185  .. warning::
186 
187  Subclasses that use schemas must override this method. The default
188  implementation returns an empty dict.
189 
190  This method may be called at any time after the Task is constructed,
191  which means that all task schemas should be computed at construction
192  time, *not* when data is actually processed. This reflects the
193  philosophy that the schema should not depend on the data.
194 
195  Returning catalogs rather than just schemas allows us to save e.g.
196  slots for SourceCatalog as well.
197 
198  See also
199  --------
200  Task.getAllSchemaCatalogs
201  """
202  return {}
203 
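
A sketch of the override a schema-producing task might provide; the dataset name follows the ``coaddName + "Coadd_det"`` pattern used by ``coadd`` above, and ``self.schema`` is an assumed attribute built at construction time:

.. code-block:: python

    import lsst.afw.table as afwTable

    def getSchemaCatalogs(self):
        catalog = afwTable.SourceCatalog(self.schema)
        return {self.config.coaddName + "Coadd_det": catalog}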

◆ getTaskDict()

def lsst.pipe.base.task.Task.getTaskDict (   self)
inherited
Get a dictionary of all tasks as a shallow copy.

Returns
-------
taskDict : `dict`
    Dictionary containing full task name: task object for the top-level
    task and all subtasks, sub-subtasks, etc.

Definition at line 288 of file task.py.

288  def getTaskDict(self):
289  """Get a dictionary of all tasks as a shallow copy.
290 
291  Returns
292  -------
293  taskDict : `dict`
294  Dictionary containing full task name: task object for the top-level
295  task and all subtasks, sub-subtasks, etc.
296  """
297  return self._taskDict.copy()
298 

◆ logOperation()

def lsst.ctrl.pool.parallel.BatchCmdLineTask.logOperation (   self,
  operation,
  catch = False,
  trace = True 
)
inherited

Provide a context manager for logging an operation.

    @param operation: description of operation (string)
    @param catch: Catch all exceptions?
    @param trace: Log a traceback of caught exception?

    Note that if 'catch' is True, all exceptions are swallowed, but there may
    be other side-effects such as undefined variables.

Definition at line 502 of file parallel.py.

502  def logOperation(self, operation, catch=False, trace=True):
503  """!Provide a context manager for logging an operation
504 
505  @param operation: description of operation (string)
506  @param catch: Catch all exceptions?
507  @param trace: Log a traceback of caught exception?
508 
509  Note that if 'catch' is True, all exceptions are swallowed, but there may
510  be other side-effects such as undefined variables.
511  """
512  self.log.info("%s: Start %s" % (NODE, operation))
513  try:
514  yield
515  except Exception:
516  if catch:
517  cls, e, _ = sys.exc_info()
518  self.log.warn("%s: Caught %s while %s: %s" % (NODE, cls.__name__, operation, e))
519  if trace:
520  self.log.info("%s: Traceback:\n%s" % (NODE, traceback.format_exc()))
521  return
522  raise
523  finally:
524  self.log.info("%s: Finished %s" % (NODE, operation))
525 
526 
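
Typical use mirrors the calls in ``warp`` and ``coadd`` above:

.. code-block:: python

    # Inside a BatchCmdLineTask method; the label is free-form.
    with self.logOperation("warping %s" % (patchRef.dataId,), catch=True):
        self.makeCoaddTempExp.runDataRef(patchRef, selectDataList)
    # With catch=True a failure is logged (with a traceback if trace=True)
    # and execution resumes after the block instead of propagating.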

◆ makeField()

def lsst.pipe.base.task.Task.makeField (   cls,
  doc 
)
inherited
Make a `lsst.pex.config.ConfigurableField` for this task.

Parameters
----------
doc : `str`
    Help text for the field.

Returns
-------
configurableField : `lsst.pex.config.ConfigurableField`
    A `~ConfigurableField` for this task.

Examples
--------
Provides a convenient way to specify this task is a subtask of another
task.

Here is an example of use:

.. code-block:: python

    class OtherTaskConfig(lsst.pex.config.Config):
        aSubtask = ATaskClass.makeField("brief description of task")

Definition at line 359 of file task.py.

359  def makeField(cls, doc):
360  """Make a `lsst.pex.config.ConfigurableField` for this task.
361 
362  Parameters
363  ----------
364  doc : `str`
365  Help text for the field.
366 
367  Returns
368  -------
369  configurableField : `lsst.pex.config.ConfigurableField`
370  A `~ConfigurableField` for this task.
371 
372  Examples
373  --------
374  Provides a convenient way to specify this task is a subtask of another
375  task.
376 
377  Here is an example of use:
378 
379  .. code-block:: python
380 
381  class OtherTaskConfig(lsst.pex.config.Config):
382  aSubtask = ATaskClass.makeField("brief description of task")
383  """
384  return ConfigurableField(doc=doc, target=cls)
385 

◆ makeSubtask()

def lsst.pipe.base.task.Task.makeSubtask (   self,
  name,
**  keyArgs 
)
inherited
Create a subtask as a new instance, assigned as the ``name`` attribute of
this task.

Parameters
----------
name : `str`
    Brief name of the subtask.
keyArgs
    Extra keyword arguments used to construct the task. The following
    arguments are automatically provided and cannot be overridden:

    - "config".
    - "parentTask".

Notes
-----
The subtask must be defined by ``Task.config.name``, an instance of
`~lsst.pex.config.ConfigurableField` or
`~lsst.pex.config.RegistryField`.

Definition at line 299 of file task.py.

299  def makeSubtask(self, name, **keyArgs):
300  """Create a subtask as a new instance as the ``name`` attribute of this
301  task.
302 
303  Parameters
304  ----------
305  name : `str`
306  Brief name of the subtask.
307  keyArgs
308  Extra keyword arguments used to construct the task. The following
309  arguments are automatically provided and cannot be overridden:
310 
311  - "config".
312  - "parentTask".
313 
314  Notes
315  -----
316  The subtask must be defined by ``Task.config.name``, an instance of
317  `~lsst.pex.config.ConfigurableField` or
318  `~lsst.pex.config.RegistryField`.
319  """
320  taskField = getattr(self.config, name, None)
321  if taskField is None:
322  raise KeyError(f"{self.getFullName()}'s config does not have field {name!r}")
323  subtask = taskField.apply(name=name, parentTask=self, **keyArgs)
324  setattr(self, name, subtask)
325 
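
The CoaddDriverTask constructor above uses exactly this pattern; for example:

.. code-block:: python

    self.makeSubtask("select")                        # becomes self.select
    self.makeSubtask("makeCoaddTempExp", reuse=True)  # extra kwargs forwarded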

◆ parseAndRun() [1/2]

def lsst.ctrl.pool.parallel.BatchPoolTask.parseAndRun (   cls,
args,
**  kwargs 
)
inherited
Run with an MPI process pool

Definition at line 534 of file parallel.py.

534  def parseAndRun(cls, *args, **kwargs):
535  """Run with an MPI process pool"""
536  pool = startPool()
537  super(BatchPoolTask, cls).parseAndRun(*args, **kwargs)
538  pool.exit()
539 
540 

◆ parseAndRun() [2/2]

def lsst.pipe.base.cmdLineTask.CmdLineTask.parseAndRun (   cls,
  args = None,
  config = None,
  log = None,
  doReturnResults = False 
)
inherited
Parse an argument list and run the command.

Parameters
----------
args : `list`, optional
    List of command-line arguments; if `None` use `sys.argv`.
config : `lsst.pex.config.Config`-type, optional
    Config for task. If `None` use `Task.ConfigClass`.
log : `lsst.log.Log`-type, optional
    Log. If `None` use the default log.
doReturnResults : `bool`, optional
    If `True`, return the results of this task. Default is `False`.
    This is only intended for unit tests and similar use. It can
    easily exhaust memory (if the task returns enough data and you
    call it enough times) and it will fail when using multiprocessing
    if the returned data cannot be pickled.

Returns
-------
struct : `lsst.pipe.base.Struct`
    Fields are:

    ``argumentParser``
        the argument parser (`lsst.pipe.base.ArgumentParser`).
    ``parsedCmd``
        the parsed command returned by the argument parser's
        `~lsst.pipe.base.ArgumentParser.parse_args` method
        (`argparse.Namespace`).
    ``taskRunner``
        the task runner used to run the task (an instance of
        `Task.RunnerClass`).
    ``resultList``
        results returned by the task runner's ``run`` method, one entry
        per invocation (`list`). This will typically be a list of
        `Struct`, each containing at least an ``exitStatus`` integer
        (0 or 1); see `Task.RunnerClass` (`TaskRunner` by default) for
        more details.

Notes
-----
Calling this method with no arguments specified is the standard way to
run a command-line task from the command-line. For an example see
``pipe_tasks`` ``bin/makeSkyMap.py`` or almost any other file in that
directory.

If one or more of the dataIds fails then this routine will exit (with
a status giving the number of failed dataIds) rather than returning
this struct;  this behaviour can be overridden by specifying the
``--noExit`` command-line option.

Definition at line 612 of file cmdLineTask.py.

612  def parseAndRun(cls, args=None, config=None, log=None, doReturnResults=False):
613  """Parse an argument list and run the command.
614 
615  Parameters
616  ----------
617  args : `list`, optional
618  List of command-line arguments; if `None` use `sys.argv`.
619  config : `lsst.pex.config.Config`-type, optional
620  Config for task. If `None` use `Task.ConfigClass`.
621  log : `lsst.log.Log`-type, optional
622  Log. If `None` use the default log.
623  doReturnResults : `bool`, optional
624  If `True`, return the results of this task. Default is `False`.
625  This is only intended for unit tests and similar use. It can
626  easily exhaust memory (if the task returns enough data and you
627  call it enough times) and it will fail when using multiprocessing
628  if the returned data cannot be pickled.
629 
630  Returns
631  -------
632  struct : `lsst.pipe.base.Struct`
633  Fields are:
634 
635  ``argumentParser``
636  the argument parser (`lsst.pipe.base.ArgumentParser`).
637  ``parsedCmd``
638  the parsed command returned by the argument parser's
639  `~lsst.pipe.base.ArgumentParser.parse_args` method
640  (`argparse.Namespace`).
641  ``taskRunner``
642  the task runner used to run the task (an instance of
643  `Task.RunnerClass`).
644  ``resultList``
645  results returned by the task runner's ``run`` method, one entry
646  per invocation (`list`). This will typically be a list of
647  `Struct`, each containing at least an ``exitStatus`` integer
648  (0 or 1); see `Task.RunnerClass` (`TaskRunner` by default) for
649  more details.
650 
651  Notes
652  -----
653  Calling this method with no arguments specified is the standard way to
654  run a command-line task from the command-line. For an example see
655  ``pipe_tasks`` ``bin/makeSkyMap.py`` or almost any other file in that
656  directory.
657 
658  If one or more of the dataIds fails then this routine will exit (with
659  a status giving the number of failed dataIds) rather than returning
660  this struct; this behaviour can be overridden by specifying the
661  ``--noExit`` command-line option.
662  """
663  if args is None:
664  commandAsStr = " ".join(sys.argv)
665  args = sys.argv[1:]
666  else:
667  commandAsStr = "{}{}".format(lsst.utils.get_caller_name(skip=1), tuple(args))
668 
669  argumentParser = cls._makeArgumentParser()
670  if config is None:
671  config = cls.ConfigClass()
672  parsedCmd = argumentParser.parse_args(config=config, args=args, log=log, override=cls.applyOverrides)
673  # print this message after parsing the command so the log is fully
674  # configured
675  parsedCmd.log.info("Running: %s", commandAsStr)
676 
677  taskRunner = cls.RunnerClass(TaskClass=cls, parsedCmd=parsedCmd, doReturnResults=doReturnResults)
678  resultList = taskRunner.run(parsedCmd)
679 
680  try:
681  nFailed = sum(((res.exitStatus != 0) for res in resultList))
682  except (TypeError, AttributeError) as e:
683  # NOTE: TypeError if resultList is None, AttributeError if it
684  # doesn't have exitStatus.
685  parsedCmd.log.warn("Unable to retrieve exit status (%s); assuming success", e)
686  nFailed = 0
687 
688  if nFailed > 0:
689  if parsedCmd.noExit:
690  parsedCmd.log.error("%d dataRefs failed; not exiting as --noExit was set", nFailed)
691  else:
692  sys.exit(nFailed)
693 
694  return Struct(
695  argumentParser=argumentParser,
696  parsedCmd=parsedCmd,
697  taskRunner=taskRunner,
698  resultList=resultList,
699  )
700 
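
A sketch of programmatic invocation; ``MyTask`` stands in for any CmdLineTask subclass, and the repository path and --id keys are hypothetical:

.. code-block:: python

    result = MyTask.parseAndRun(
        args=["/path/to/repo", "--id", "tract=0"],
        doReturnResults=True,
    )
    nFailed = sum(res.exitStatus != 0 for res in result.resultList)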

◆ parseAndSubmit()

def lsst.ctrl.pool.parallel.BatchCmdLineTask.parseAndSubmit (   cls,
  args = None,
**  kwargs 
)
inherited

Definition at line 435 of file parallel.py.

435  def parseAndSubmit(cls, args=None, **kwargs):
436  taskParser = cls._makeArgumentParser(doBatch=True, add_help=False)
437  batchParser = BatchArgumentParser(parent=taskParser)
438  batchArgs = batchParser.parse_args(config=cls.ConfigClass(), args=args, override=cls.applyOverrides,
439  **kwargs)
440 
441  if not cls.RunnerClass(cls, batchArgs.parent).precall(batchArgs.parent): # Write config, schema
442  taskParser.error("Error in task preparation")
443 
444  setBatchType(batchArgs.batch)
445 
446  if batchArgs.batch is None: # don't use a batch system
447  sys.argv = [sys.argv[0]] + batchArgs.leftover # Remove all batch arguments
448 
449  return cls.parseAndRun()
450  else:
451  if batchArgs.walltime > 0:
452  walltime = batchArgs.walltime
453  else:
454  numCores = batchArgs.cores if batchArgs.cores > 0 else batchArgs.nodes*batchArgs.procs
455  walltime = cls.batchWallTime(batchArgs.time, batchArgs.parent, numCores)
456 
457  command = cls.batchCommand(batchArgs)
458  batchArgs.batch.run(command, walltime=walltime)
459 
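
This is the usual entry point for a driver's executable script; a minimal sketch (the script name is illustrative):

.. code-block:: python

    #!/usr/bin/env python
    # e.g. a bin/coaddDriver.py wrapper
    from lsst.pipe.drivers.coaddDriver import CoaddDriverTask

    CoaddDriverTask.parseAndSubmit()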

◆ readSelection()

def lsst.pipe.drivers.coaddDriver.CoaddDriverTask.readSelection (   self,
  cache,
  selectId 
)

Read Wcs of selected inputs.

    This method only runs on slave nodes.
    This method is similar to SelectDataIdContainer.makeDataRefList,
    creating a Struct like a SelectStruct, except with a dataId instead
    of a dataRef (to ease MPI).

    @param cache: Pool cache
    @param selectId: Data identifier for selected input
    @return a SelectStruct with a dataId instead of dataRef

Definition at line 200 of file coaddDriver.py.

200  def readSelection(self, cache, selectId):
201  """!Read Wcs of selected inputs
202 
203  This method only runs on slave nodes.
204  This method is similar to SelectDataIdContainer.makeDataRefList,
205  creating a Struct like a SelectStruct, except with a dataId instead
206  of a dataRef (to ease MPI).
207 
208  @param cache: Pool cache
209  @param selectId: Data identifier for selected input
210  @return a SelectStruct with a dataId instead of dataRef
211  """
212  try:
213  ref = getDataRef(cache.butler, selectId, self.calexpType)
214  self.log.info("Reading Wcs from %s" % (selectId,))
215  md = ref.get("calexp_md", immediate=True)
216  wcs = afwGeom.makeSkyWcs(md)
217  data = Struct(dataId=selectId, wcs=wcs, bbox=afwImage.bboxFromMetadata(md))
218  except FitsError:
219  self.log.warn("Unable to construct Wcs from %s" % (selectId,))
220  return None
221  return data
222 

◆ run()

def lsst.pipe.drivers.coaddDriver.CoaddDriverTask.run (   self,
  patchRefList,
  butler,
  selectDataList = [] 
)

Run stacking on a tract.

    This method only runs on the master node.

    @param patchRefList: List of patch data references for tract
    @param butler: Data butler
    @param selectDataList: List of SelectStruct for inputs

Definition at line 173 of file coaddDriver.py.

173  def run(self, patchRefList, butler, selectDataList=[]):
174  """!Run stacking on a tract
175 
176  This method only runs on the master node.
177 
178  @param patchRefList: List of patch data references for tract
179  @param butler: Data butler
180  @param selectDataList: List of SelectStruct for inputs
181  """
182  pool = Pool("stacker")
183  pool.cacheClear()
184  pool.storeSet(butler=butler, warpType=self.config.coaddName + "Coadd_directWarp",
185  coaddType=self.config.coaddName + "Coadd")
186  patchIdList = [patchRef.dataId for patchRef in patchRefList]
187 
188  selectedData = pool.map(self.warp, patchIdList, selectDataList)
189  if self.config.doBackgroundReference:
190  self.backgroundReference.runDataRef(patchRefList, selectDataList)
191 
192  def refNamer(patchRef):
193  return tuple(map(int, patchRef.dataId["patch"].split(",")))
194 
195  lookup = dict(zip(map(refNamer, patchRefList), selectedData))
196  coaddData = [Struct(patchId=patchRef.dataId, selectDataList=lookup[refNamer(patchRef)]) for
197  patchRef in patchRefList]
198  pool.map(self.coadd, coaddData)
199 

◆ runDataRef()

def lsst.pipe.drivers.coaddDriver.CoaddDriverTask.runDataRef (   self,
  tractPatchRefList,
  butler,
  selectIdList = [] 
)

Determine which tracts are non-empty before processing.

    @param tractPatchRefList: List of tracts and patches to include in the coaddition
    @param butler: butler reference object
    @param selectIdList: List of data Ids (i.e. visit, ccd) to consider when making the coadd
    @return list of self.run results, one for each tractPatchRefList member

Definition at line 138 of file coaddDriver.py.

138  def runDataRef(self, tractPatchRefList, butler, selectIdList=[]):
139  """!Determine which tracts are non-empty before processing
140 
141  @param tractPatchRefList: List of tracts and patches to include in the coaddition
142  @param butler: butler reference object
143  @param selectIdList: List of data Ids (i.e. visit, ccd) to consider when making the coadd
144  @return list of self.run results, one for each tractPatchRefList member
145  """
146  pool = Pool("tracts")
147  pool.storeSet(butler=butler, skymap=butler.get(
148  self.config.coaddName + "Coadd_skyMap"))
149  tractIdList = []
150  for patchRefList in tractPatchRefList:
151  tractSet = set([patchRef.dataId["tract"]
152  for patchRef in patchRefList])
153  assert len(tractSet) == 1
154  tractIdList.append(tractSet.pop())
155 
156  selectDataList = [data for data in pool.mapNoBalance(self.readSelection, selectIdList) if
157  data is not None]
158  nonEmptyList = pool.mapNoBalance(
159  self.checkTract, tractIdList, selectDataList)
160  tractPatchRefList = [patchRefList for patchRefList, nonEmpty in
161  zip(tractPatchRefList, nonEmptyList) if nonEmpty]
162  self.log.info("Non-empty tracts (%d): %s" % (len(tractPatchRefList),
163  [patchRefList[0].dataId["tract"] for patchRefList in
164  tractPatchRefList]))
165  # Install the dataRef in the selectDataList
166  for data in selectDataList:
167  data.dataRef = getDataRef(butler, data.dataId, self.calexpType)
168 
169  # Process the non-empty tracts
170  return [self.run(patchRefList, butler, selectDataList) for patchRefList in tractPatchRefList]
171 

◆ selectExposures()

def lsst.pipe.drivers.coaddDriver.CoaddDriverTask.selectExposures (   self,
  patchRef,
  selectDataList 
)

Select exposures to operate upon, via the SelectImagesTask.

    This is very similar to CoaddBaseTask.selectExposures, except we return
    a list of SelectStruct (same as the input), so we can plug the results into
    future uses of SelectImagesTask.

    @param patchRef data reference to a particular patch
    @param selectDataList list of references to specific data products (i.e. visit, ccd)
    @return filtered list of SelectStruct

Definition at line 334 of file coaddDriver.py.

334  def selectExposures(self, patchRef, selectDataList):
335  """!Select exposures to operate upon, via the SelectImagesTask
336 
337  This is very similar to CoaddBaseTask.selectExposures, except we return
338  a list of SelectStruct (same as the input), so we can plug the results into
339  future uses of SelectImagesTask.
340 
341  @param patchRef data reference to a particular patch
342  @param selectDataList list of references to specific data products (i.e. visit, ccd)
343  @return filtered list of SelectStruct
344  """
345  def key(dataRef):
346  return tuple(dataRef.dataId[k] for k in sorted(dataRef.dataId))
347  inputs = dict((key(select.dataRef), select)
348  for select in selectDataList)
349  skyMap = patchRef.get(self.config.coaddName + "Coadd_skyMap")
350  tract = skyMap[patchRef.dataId["tract"]]
351  patch = tract[(tuple(int(i)
352  for i in patchRef.dataId["patch"].split(",")))]
353  bbox = patch.getOuterBBox()
354  wcs = tract.getWcs()
355  cornerPosList = geom.Box2D(bbox).getCorners()
356  coordList = [wcs.pixelToSky(pos) for pos in cornerPosList]
357  dataRefList = self.select.runDataRef(
358  patchRef, coordList, selectDataList=selectDataList).dataRefList
359  return [inputs[key(dataRef)] for dataRef in dataRefList]
360 

◆ timer()

def lsst.pipe.base.task.Task.timer (   self,
  name,
  logLevel = Log.DEBUG 
)
inherited
Context manager to log performance data for an arbitrary block of
code.

Parameters
----------
name : `str`
    Name of code being timed; data will be logged using item name:
    ``Start`` and ``End``.
logLevel
    A `lsst.log` level constant.

Examples
--------
Creating a timer context:

.. code-block:: python

    with self.timer("someCodeToTime"):
        pass  # code to time

See also
--------
timer.logInfo

Definition at line 327 of file task.py.

327  def timer(self, name, logLevel=Log.DEBUG):
328  """Context manager to log performance data for an arbitrary block of
329  code.
330 
331  Parameters
332  ----------
333  name : `str`
334  Name of code being timed; data will be logged using item name:
335  ``Start`` and ``End``.
336  logLevel
337  A `lsst.log` level constant.
338 
339  Examples
340  --------
341  Creating a timer context:
342 
343  .. code-block:: python
344 
345  with self.timer("someCodeToTime"):
346  pass # code to time
347 
348  See also
349  --------
350  timer.logInfo
351  """
352  logInfo(obj=self, prefix=name + "Start", logLevel=logLevel)
353  try:
354  yield
355  finally:
356  logInfo(obj=self, prefix=name + "End", logLevel=logLevel)
357 

◆ warp()

def lsst.pipe.drivers.coaddDriver.CoaddDriverTask.warp (   self,
  cache,
  patchId,
  selectDataList 
)

Warp all images for a patch.

    Only slave nodes execute this method.

    The pool passes the patch identifier and the selection list as
    separate arguments:

    @param cache: Pool cache
    @param patchId: Data identifier for patch
    @param selectDataList: List of SelectStruct for inputs
    @return selectDataList with non-overlapping elements removed

Definition at line 251 of file coaddDriver.py.

251  def warp(self, cache, patchId, selectDataList):
252  """!Warp all images for a patch
253 
254  Only slave nodes execute this method.
255 
256  The pool passes the patch identifier and the selection list as
257  separate arguments:
258 
259  @param patchId: Data identifier for patch
260  @param selectDataList: List of SelectStruct for inputs
261  @return selectDataList with non-overlapping elements removed
262  """
263  patchRef = getDataRef(cache.butler, patchId, cache.coaddType)
264  selectDataList = self.selectExposures(patchRef, selectDataList)
265  with self.logOperation("warping %s" % (patchRef.dataId,), catch=True):
266  self.makeCoaddTempExp.runDataRef(patchRef, selectDataList)
267  return selectDataList
268 

◆ writeConfig()

def lsst.pipe.base.cmdLineTask.CmdLineTask.writeConfig (   self,
  butler,
  clobber = False,
  doBackup = True 
)
inherited
Write the configuration used for processing the data, or check that
an existing one is equal to the new one if present.

Parameters
----------
butler : `lsst.daf.persistence.Butler`
    Data butler used to write the config. The config is written to
    dataset type `CmdLineTask._getConfigName`.
clobber : `bool`, optional
    A boolean flag that controls what happens if a config already has
    been saved:

    - `True`: overwrite or rename the existing config, depending on
      ``doBackup``.
    - `False`: raise `TaskError` if this config does not match the
      existing config.
doBackup : `bool`, optional
    Set to `True` to backup the config files if clobbering.

Definition at line 727 of file cmdLineTask.py.

727  def writeConfig(self, butler, clobber=False, doBackup=True):
728  """Write the configuration used for processing the data, or check that
729  an existing one is equal to the new one if present.
730 
731  Parameters
732  ----------
733  butler : `lsst.daf.persistence.Butler`
734  Data butler used to write the config. The config is written to
735  dataset type `CmdLineTask._getConfigName`.
736  clobber : `bool`, optional
737  A boolean flag that controls what happens if a config already has
738  been saved:
739 
740  - `True`: overwrite or rename the existing config, depending on
741  ``doBackup``.
742  - `False`: raise `TaskError` if this config does not match the
743  existing config.
744  doBackup : `bool`, optional
745  Set to `True` to backup the config files if clobbering.
746  """
747  configName = self._getConfigName()
748  if configName is None:
749  return
750  if clobber:
751  butler.put(self.config, configName, doBackup=doBackup)
752  elif butler.datasetExists(configName, write=True):
753  # this may be subject to a race condition; see #2789
754  try:
755  oldConfig = butler.get(configName, immediate=True)
756  except Exception as exc:
757  raise type(exc)(f"Unable to read stored config file {configName} ({exc}); "
758  "consider using --clobber-config")
759 
760  def logConfigMismatch(msg):
761  self.log.fatal("Comparing configuration: %s", msg)
762 
763  if not self.config.compare(oldConfig, shortcut=False, output=logConfigMismatch):
764  raise TaskError(
765  f"Config does not match existing task config {configName!r} on disk; "
766  "tasks configurations must be consistent within the same output repo "
767  "(override with --clobber-config)")
768  else:
769  butler.put(self.config, configName)
770 

◆ writeMetadata()

def lsst.pipe.drivers.coaddDriver.CoaddDriverTask.writeMetadata (   self,
  dataRef 
)
Write the metadata produced from processing the data.

Parameters
----------
dataRef
    Butler data reference used to write the metadata.
    The metadata is written to dataset type
    `CmdLineTask._getMetadataName`.

Reimplemented from lsst.pipe.base.cmdLineTask.CmdLineTask.

Definition at line 361 of file coaddDriver.py.

361  def writeMetadata(self, dataRef):
362  pass
This override is deliberately a no-op: there is no metadata to write, and it is not clear how metadata would be written for a list of dataRefs.

◆ writePackageVersions()

def lsst.pipe.base.cmdLineTask.CmdLineTask.writePackageVersions (   self,
  butler,
  clobber = False,
  doBackup = True,
  dataset = "packages" 
)
inherited
Compare and write package versions.

Parameters
----------
butler : `lsst.daf.persistence.Butler`
    Data butler used to read/write the package versions.
clobber : `bool`, optional
    A boolean flag that controls what happens if versions already have
    been saved:

    - `True`: overwrite or rename the existing version info, depending
      on ``doBackup``.
    - `False`: raise `TaskError` if this version info does not match
      the existing.
doBackup : `bool`, optional
    If `True` and clobbering, old package version files are backed up.
dataset : `str`, optional
    Name of dataset to read/write.

Raises
------
TaskError
    Raised if there is a version mismatch with current and persisted
    lists of package versions.

Notes
-----
Note that this operation is subject to a race condition.

Definition at line 829 of file cmdLineTask.py.

829  def writePackageVersions(self, butler, clobber=False, doBackup=True, dataset="packages"):
830  """Compare and write package versions.
831 
832  Parameters
833  ----------
834  butler : `lsst.daf.persistence.Butler`
835  Data butler used to read/write the package versions.
836  clobber : `bool`, optional
837  A boolean flag that controls what happens if versions already have
838  been saved:
839 
840  - `True`: overwrite or rename the existing version info, depending
841  on ``doBackup``.
842  - `False`: raise `TaskError` if this version info does not match
843  the existing.
844  doBackup : `bool`, optional
845  If `True` and clobbering, old package version files are backed up.
846  dataset : `str`, optional
847  Name of dataset to read/write.
848 
849  Raises
850  ------
851  TaskError
852  Raised if there is a version mismatch with current and persisted
853  lists of package versions.
854 
855  Notes
856  -----
857  Note that this operation is subject to a race condition.
858  """
859  packages = Packages.fromSystem()
860 
861  if clobber:
862  return butler.put(packages, dataset, doBackup=doBackup)
863  if not butler.datasetExists(dataset, write=True):
864  return butler.put(packages, dataset)
865 
866  try:
867  old = butler.get(dataset, immediate=True)
868  except Exception as exc:
869  raise type(exc)(f"Unable to read stored version dataset {dataset} ({exc}); "
870  "consider using --clobber-versions or --no-versions")
871  # Note that because we can only detect python modules that have been
872  # imported, the stored list of products may be more or less complete
873  # than what we have now. What's important is that the products that
874  # are in common have the same version.
875  diff = packages.difference(old)
876  if diff:
877  versions_str = "; ".join(f"{pkg}: {diff[pkg][1]} vs {diff[pkg][0]}" for pkg in diff)
878  raise TaskError(
879  f"Version mismatch ({versions_str}); consider using --clobber-versions or --no-versions")
880  # Update the old set of packages in case we have more packages that
881  # haven't been persisted.
882  extra = packages.extra(old)
883  if extra:
884  old.update(packages)
885  butler.put(old, dataset, doBackup=doBackup)
886 

◆ writeSchemas()

def lsst.pipe.base.cmdLineTask.CmdLineTask.writeSchemas (   self,
  butler,
  clobber = False,
  doBackup = True 
)
inherited
Write the schemas returned by
`lsst.pipe.base.Task.getAllSchemaCatalogs`.

Parameters
----------
butler : `lsst.daf.persistence.Butler`
    Data butler used to write the schema. Each schema is written to the
    dataset type specified as the key in the dict returned by
    `~lsst.pipe.base.Task.getAllSchemaCatalogs`.
clobber : `bool`, optional
    A boolean flag that controls what happens if a schema already has
    been saved:

    - `True`: overwrite or rename the existing schema, depending on
      ``doBackup``.
    - `False`: raise `TaskError` if this schema does not match the
      existing schema.
doBackup : `bool`, optional
    Set to `True` to backup the schema files if clobbering.

Notes
-----
If ``clobber`` is `False` and an existing schema does not match a
current schema, then some schemas may have been saved successfully
and others may not, and there is no easy way to tell which is which.

Definition at line 771 of file cmdLineTask.py.

771  def writeSchemas(self, butler, clobber=False, doBackup=True):
772  """Write the schemas returned by
773  `lsst.pipe.base.Task.getAllSchemaCatalogs`.
774 
775  Parameters
776  ----------
777  butler : `lsst.daf.persistence.Butler`
778  Data butler used to write the schema. Each schema is written to the
779  dataset type specified as the key in the dict returned by
780  `~lsst.pipe.base.Task.getAllSchemaCatalogs`.
781  clobber : `bool`, optional
782  A boolean flag that controls what happens if a schema already has
783  been saved:
784 
785  - `True`: overwrite or rename the existing schema, depending on
786  ``doBackup``.
787  - `False`: raise `TaskError` if this schema does not match the
788  existing schema.
789  doBackup : `bool`, optional
790  Set to `True` to backup the schema files if clobbering.
791 
792  Notes
793  -----
794  If ``clobber`` is `False` and an existing schema does not match a
795  current schema, then some schemas may have been saved successfully
796  and others may not, and there is no easy way to tell which is which.
797  """
798  for dataset, catalog in self.getAllSchemaCatalogs().items():
799  schemaDataset = dataset + "_schema"
800  if clobber:
801  butler.put(catalog, schemaDataset, doBackup=doBackup)
802  elif butler.datasetExists(schemaDataset, write=True):
803  oldSchema = butler.get(schemaDataset, immediate=True).getSchema()
804  if not oldSchema.compare(catalog.getSchema(), afwTable.Schema.IDENTICAL):
805  raise TaskError(
806  f"New schema does not match schema {dataset!r} on disk; "
807  "schemas must be consistent within the same output repo "
808  "(override with --clobber-config)")
809  else:
810  butler.put(catalog, schemaDataset)
811 

Member Data Documentation

◆ calexpType

lsst.pipe.drivers.coaddDriver.CoaddDriverTask.calexpType

Definition at line 98 of file coaddDriver.py.

◆ canMultiprocess

bool lsst.pipe.base.cmdLineTask.CmdLineTask.canMultiprocess = True
staticinherited

Definition at line 584 of file cmdLineTask.py.

◆ config

lsst.pipe.base.task.Task.config
inherited

Definition at line 162 of file task.py.

◆ ConfigClass

lsst.pipe.drivers.coaddDriver.CoaddDriverTask.ConfigClass = CoaddDriverConfig
static

Definition at line 85 of file coaddDriver.py.

◆ log

lsst.pipe.base.task.Task.log
inherited

Definition at line 161 of file task.py.

◆ metadata

lsst.pipe.base.task.Task.metadata
inherited

Definition at line 134 of file task.py.

◆ reuse

lsst.pipe.drivers.coaddDriver.CoaddDriverTask.reuse

Definition at line 91 of file coaddDriver.py.

◆ RunnerClass

lsst.pipe.drivers.coaddDriver.CoaddDriverTask.RunnerClass = CoaddDriverTaskRunner
static

Definition at line 87 of file coaddDriver.py.


The documentation for this class was generated from the following file:

coaddDriver.py