def applyOverrides(cls, config)
def parseAndRun(cls, args=None, config=None, log=None, doReturnResults=False)
def writeConfig(self, butler, clobber=False, doBackup=True)
def writeSchemas(self, butler, clobber=False, doBackup=True)
def writeMetadata(self, dataRef)
def writePackageVersions(self, butler, clobber=False, doBackup=True, dataset="packages")
def emptyMetadata(self)
def getSchemaCatalogs(self)
def getAllSchemaCatalogs(self)
def getFullMetadata(self)
def getFullName(self)
def getName(self)
def getTaskDict(self)
def makeSubtask(self, name, **keyArgs)
def timer(self, name, logLevel=Log.DEBUG)
def makeField(cls, doc)
def __reduce__(self)
|
Base class for command-line tasks: tasks that may be executed from the command-line.
Notes
-----
See :ref:`task-framework-overview` to learn what tasks are and :ref:`creating-a-command-line-task` for
more information about writing command-line tasks.
Subclasses must specify the following class variables:
- ``ConfigClass``: configuration class for your task (a subclass of `lsst.pex.config.Config`, or if your
task needs no configuration, then `lsst.pex.config.Config` itself).
- ``_DefaultName``: default name used for this task (a str).
Subclasses may also specify the following class variables:
- ``RunnerClass``: a task runner class. The default is ``TaskRunner``, which works for any task
with a runDataRef method that takes exactly one argument: a data reference. If your task does
not meet this requirement then you must supply a variant of ``TaskRunner``; see ``TaskRunner``
for more information.
- ``canMultiprocess``: the default is `True`; set `False` if your task does not support multiprocessing.
Subclasses must specify a method named ``runDataRef``:
- By default ``runDataRef`` accepts a single butler data reference, but you can specify an alternate
task runner (subclass of ``TaskRunner``) as the value of class variable ``RunnerClass`` if your run
method needs something else.
- ``runDataRef`` is expected to return its data in a `lsst.pipe.base.Struct`. This provides safety for
evolution of the task since new values may be added without harming existing code.
- The data returned by ``runDataRef`` must be picklable if your task is to support multiprocessing.
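The required class variables and the ``runDataRef`` shape described above can be sketched with stand-in base classes, so the snippet runs outside the LSST stack; everything here other than the documented attribute names is hypothetical:

```python
class Config:
    """Stand-in for lsst.pex.config.Config (illustration only)."""


class Struct:
    """Stand-in for lsst.pipe.base.Struct: a simple namespace of results."""
    def __init__(self, **kwargs):
        self.__dict__.update(kwargs)


class ExampleTask:
    """Shape of a hypothetical command-line task subclass."""
    ConfigClass = Config      # required: the task's configuration class
    _DefaultName = "example"  # required: default name for this task
    canMultiprocess = True    # optional: default is True

    def runDataRef(self, dataRef):
        # Results go in a Struct so new fields can be added later
        # without breaking existing callers.
        return Struct(exitStatus=0, dataRef=dataRef)
```

Returning a `Struct` rather than a tuple is what lets the task evolve: callers access fields by name, so adding a field cannot shift positional results.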
Definition at line 492 of file cmdLineTask.py.
def lsst.pipe.base.cmdLineTask.CmdLineTask.applyOverrides(cls, config)
A hook to allow a task to change the values of its config *after* the camera-specific
overrides are loaded but before any command-line overrides are applied.
Parameters
----------
config : instance of task's ``ConfigClass``
Task configuration.
Notes
-----
This is necessary in some cases because the camera-specific overrides may retarget subtasks,
wiping out changes made in ConfigClass.setDefaults. See LSST Trac ticket #2282 for more discussion.
.. warning::
This is called by CmdLineTask.parseAndRun; other ways of constructing a config will not apply
these overrides.
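A minimal override sketch (the task class and the config field are hypothetical), showing the hook re-asserting a value after the camera-specific overrides have run but before command-line overrides are applied:

```python
class ExampleConfig:
    """Stand-in config with one hypothetical field."""
    doWriteSources = True


class ExampleTask:
    """Hypothetical CmdLineTask subclass; only the hook is shown."""

    @classmethod
    def applyOverrides(cls, config):
        # Re-apply a setting that camera-specific overrides may have
        # retargeted or reset.
        config.doWriteSources = False
```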
Reimplemented in lsst.pipe.drivers.constructCalibs.FringeTask, lsst.pipe.drivers.constructCalibs.FlatTask, lsst.pipe.drivers.constructCalibs.DarkTask, and lsst.pipe.drivers.constructCalibs.BiasTask.
Definition at line 527 of file cmdLineTask.py.
def applyOverrides(cls, config):
    """A hook to allow a task to change the values of its config *after* the camera-specific
    overrides are loaded but before any command-line overrides are applied.
    """
    pass
def lsst.pipe.base.task.Task.getAllSchemaCatalogs(self)  [inherited]
Get schema catalogs for all tasks in the hierarchy, combining the results into a single dict.
Returns
-------
schemacatalogs : `dict`
Keys are butler dataset type, values are an empty catalog (an instance of the appropriate
lsst.afw.table Catalog type) for all tasks in the hierarchy, from the top-level task down
through all subtasks.
Notes
-----
This method may be called on any task in the hierarchy; it will return the same answer, regardless.
The default implementation should always suffice. If your subtask uses schemas then override
`Task.getSchemaCatalogs`, not this method.
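The combining behaviour can be sketched with plain dicts standing in for the catalogs (the dataset-type names here are hypothetical): the top-level task's dict is updated with each subtask's dict.

```python
# Each task contributes {butler dataset type: empty catalog};
# stand-in strings are used in place of real afw.table catalogs.
topLevelSchemas = {"src": "<src catalog>"}
subtaskSchemas = [{"icSrc": "<icSrc catalog>"},
                  {"deepCoadd_meas": "<meas catalog>"}]

schemaDict = dict(topLevelSchemas)
for sub in subtaskSchemas:
    schemaDict.update(sub)
```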
Definition at line 188 of file task.py.
def getAllSchemaCatalogs(self):
    """Get schema catalogs for all tasks in the hierarchy, combining the results into a single dict."""
    schemaDict = self.getSchemaCatalogs()
    for subtask in self._taskDict.values():
        schemaDict.update(subtask.getSchemaCatalogs())
    return schemaDict
def lsst.pipe.base.task.Task.getFullMetadata(self)  [inherited]
Get metadata for all tasks.
Returns
-------
metadata : `lsst.daf.base.PropertySet`
The `~lsst.daf.base.PropertySet` keys are the full task name. Values are metadata
for the top-level task and all subtasks, sub-subtasks, etc.
Notes
-----
The returned metadata includes timing information (if ``@timer.timeMethod`` is used)
and any metadata set by the task. The name of each item consists of the full task name
with ``.`` replaced by ``:``, followed by ``.`` and the name of the item, e.g.::
topLevelTaskName:subtaskName:subsubtaskName.itemName
Using ``:`` in the full task name disambiguates the rare situation that a task has a subtask
and a metadata item with the same name.
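The key-naming scheme can be sketched directly (the task path and item name here are hypothetical):

```python
# Full task name uses "." between levels; in metadata keys those become ":",
# and a final "." separates the item name.
fullTaskName = "processCcd.calibrate.detection"
itemName = "runTime"

metadataKey = fullTaskName.replace(".", ":") + "." + itemName
```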
Definition at line 210 of file task.py.
def getFullMetadata(self):
    """Get metadata for all tasks."""
    fullMetadata = dafBase.PropertySet()
    for fullName, task in self.getTaskDict().items():
        fullMetadata.set(fullName.replace(".", ":"), task.metadata)
    return fullMetadata
def lsst.pipe.base.task.Task.getSchemaCatalogs(self)  [inherited]
Get the schemas generated by this task.
Returns
-------
schemaCatalogs : `dict`
Keys are butler dataset type, values are an empty catalog (an instance of the appropriate
`lsst.afw.table` Catalog type) for this task.
Notes
-----
.. warning::
Subclasses that use schemas must override this method. The default implementation returns
an empty dict.
This method may be called at any time after the Task is constructed, which means that all task
schemas should be computed at construction time, *not* when data is actually processed. This
reflects the philosophy that the schema should not depend on the data.
Returning catalogs rather than just schemas allows us to save e.g. slots for SourceCatalog as well.
See also
--------
Task.getAllSchemaCatalogs
Definition at line 159 of file task.py.
def getSchemaCatalogs(self):
    """Get the schemas generated by this task."""
    return {}
def lsst.pipe.base.cmdLineTask.CmdLineTask.parseAndRun(cls, args=None, config=None, log=None, doReturnResults=False)
Parse an argument list and run the command.
Parameters
----------
args : `list`, optional
List of command-line arguments; if `None` use `sys.argv`.
config : `lsst.pex.config.Config`-type, optional
Config for task. If `None` use `Task.ConfigClass`.
log : `lsst.log.Log`-type, optional
Log. If `None` use the default log.
doReturnResults : `bool`, optional
If `True`, return the results of this task. Default is `False`. This is only intended for
unit tests and similar use. It can easily exhaust memory (if the task returns enough data and you
call it enough times) and it will fail when using multiprocessing if the returned data cannot be
pickled.
Returns
-------
struct : `lsst.pipe.base.Struct`
Fields are:
``argumentParser``
the argument parser (`lsst.pipe.base.ArgumentParser`).
``parsedCmd``
the parsed command returned by the argument parser's
`~lsst.pipe.base.ArgumentParser.parse_args` method
(`argparse.Namespace`).
``taskRunner``
the task runner used to run the task (an instance of `Task.RunnerClass`).
``resultList``
results returned by the task runner's ``run`` method, one entry
per invocation (`list`). This will typically be a list of
`Struct`, each containing at least an ``exitStatus`` integer
(0 or 1); see `Task.RunnerClass` (`TaskRunner` by default) for
more details.
Notes
-----
Calling this method with no arguments specified is the standard way to run a command-line task
from the command-line. For an example see ``pipe_tasks`` ``bin/makeSkyMap.py`` or almost any other
file in that directory.
If one or more of the dataIds fails then this routine will exit (with a status giving the
number of failed dataIds) rather than returning this struct; this behaviour can be
overridden by specifying the ``--noExit`` command-line option.
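The exit-status accounting described above can be sketched over a stand-in result list (the real entries are `Struct` instances returned by the task runner; `SimpleNamespace` is used here only for illustration):

```python
from types import SimpleNamespace

# Three hypothetical per-dataId results, one of which failed.
resultList = [SimpleNamespace(exitStatus=0),
              SimpleNamespace(exitStatus=1),
              SimpleNamespace(exitStatus=0)]

nFailed = sum(res.exitStatus != 0 for res in resultList)
# Without --noExit, parseAndRun exits with this count as the status.
```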
Definition at line 549 of file cmdLineTask.py.
def parseAndRun(cls, args=None, config=None, log=None, doReturnResults=False):
    """Parse an argument list and run the command."""
    commandAsStr = " ".join(sys.argv)
    argumentParser = cls._makeArgumentParser()
    if config is None:
        config = cls.ConfigClass()
    parsedCmd = argumentParser.parse_args(config=config, args=args, log=log, override=cls.applyOverrides)
    parsedCmd.log.info("Running: %s", commandAsStr)
    taskRunner = cls.RunnerClass(TaskClass=cls, parsedCmd=parsedCmd, doReturnResults=doReturnResults)
    resultList = taskRunner.run(parsedCmd)
    try:
        nFailed = sum((res.exitStatus != 0) for res in resultList)
    except (TypeError, AttributeError) as e:
        parsedCmd.log.warn("Unable to retrieve exit status (%s); assuming success", e)
        nFailed = 0
    if nFailed > 0:
        if parsedCmd.noExit:
            parsedCmd.log.error("%d dataRefs failed; not exiting as --noExit was set", nFailed)
        else:
            sys.exit(nFailed)
    return Struct(
        argumentParser=argumentParser,
        parsedCmd=parsedCmd,
        taskRunner=taskRunner,
        resultList=resultList,
    )
def lsst.pipe.base.cmdLineTask.CmdLineTask.writeConfig(self, butler, clobber=False, doBackup=True)
Write the configuration used for processing the data, or check that an existing
one is equal to the new one if present.
Parameters
----------
butler : `lsst.daf.persistence.Butler`
Data butler used to write the config. The config is written to dataset type
`CmdLineTask._getConfigName`.
clobber : `bool`, optional
A boolean flag that controls what happens if a config already has been saved:
- `True`: overwrite or rename the existing config, depending on ``doBackup``.
- `False`: raise `TaskError` if this config does not match the existing config.
doBackup : bool, optional
Set to `True` to backup the config files if clobbering.
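The clobber/compare decision can be sketched as a small pure function (illustration only; no butler involved, and the action strings are hypothetical labels):

```python
def configWriteAction(clobber, exists, matchesExisting):
    """Which action writeConfig takes, per the parameter docs above (sketch)."""
    if clobber:
        return "overwrite (or back up first, per doBackup)"
    if exists:
        return "accept" if matchesExisting else "raise TaskError"
    return "write new config"
```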
Reimplemented in lsst.pipe.tasks.postprocess.ConsolidateSourceTableTask.
Definition at line 656 of file cmdLineTask.py.
def writeConfig(self, butler, clobber=False, doBackup=True):
    """Write the configuration used for processing the data, or check that an existing
    one is equal to the new one if present.
    """
    configName = self._getConfigName()
    if configName is None:
        return
    if clobber:
        butler.put(self.config, configName, doBackup=doBackup)
    elif butler.datasetExists(configName, write=True):
        try:
            oldConfig = butler.get(configName, immediate=True)
        except Exception as exc:
            raise type(exc)(f"Unable to read stored config file {configName} ({exc}); "
                            "consider using --clobber-config")

        def logConfigMismatch(msg):
            self.log.fatal("Comparing configuration: %s", msg)

        if not self.config.compare(oldConfig, shortcut=False, output=logConfigMismatch):
            raise TaskError(
                f"Config does not match existing task config {configName!r} on disk; "
                "tasks configurations must be consistent within the same output repo "
                "(override with --clobber-config)")
    else:
        butler.put(self.config, configName)
def lsst.pipe.base.cmdLineTask.CmdLineTask.writePackageVersions(self, butler, clobber=False, doBackup=True, dataset="packages")
Compare and write package versions.
Parameters
----------
butler : `lsst.daf.persistence.Butler`
Data butler used to read/write the package versions.
clobber : `bool`, optional
A boolean flag that controls what happens if versions already have been saved:
- `True`: overwrite or rename the existing version info, depending on ``doBackup``.
- `False`: raise `TaskError` if this version info does not match the existing.
doBackup : `bool`, optional
If `True` and clobbering, old package version files are backed up.
dataset : `str`, optional
Name of dataset to read/write.
Raises
------
TaskError
Raised if there is a version mismatch with current and persisted lists of package versions.
Notes
-----
Note that this operation is subject to a race condition.
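The mismatch message can be sketched from a hypothetical difference mapping, assuming (as the message format in the code suggests) each entry maps a package name to a pair of versions, with the current version at index 1 and the persisted one at index 0:

```python
# Hypothetical difference: package -> (persisted version, current version).
diff = {"numpy": ("1.21.0", "1.22.1")}

versions_str = "; ".join(f"{pkg}: {diff[pkg][1]} vs {diff[pkg][0]}" for pkg in diff)
```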
Definition at line 747 of file cmdLineTask.py.
def writePackageVersions(self, butler, clobber=False, doBackup=True, dataset="packages"):
    """Compare and write package versions."""
    packages = Packages.fromSystem()
    if clobber:
        return butler.put(packages, dataset, doBackup=doBackup)
    if not butler.datasetExists(dataset, write=True):
        return butler.put(packages, dataset)
    try:
        old = butler.get(dataset, immediate=True)
    except Exception as exc:
        raise type(exc)(f"Unable to read stored version dataset {dataset} ({exc}); "
                        "consider using --clobber-versions or --no-versions")
    diff = packages.difference(old)
    if diff:
        versions_str = "; ".join(f"{pkg}: {diff[pkg][1]} vs {diff[pkg][0]}" for pkg in diff)
        raise TaskError(
            f"Version mismatch ({versions_str}); consider using --clobber-versions or --no-versions")
    extra = packages.extra(old)
    if extra:
        old.update(packages)
        butler.put(old, dataset, doBackup=doBackup)
def lsst.pipe.base.cmdLineTask.CmdLineTask.writeSchemas(self, butler, clobber=False, doBackup=True)
Write the schemas returned by `lsst.pipe.base.Task.getAllSchemaCatalogs`.
Parameters
----------
butler : `lsst.daf.persistence.Butler`
Data butler used to write the schema. Each schema is written to the dataset type specified as the
key in the dict returned by `~lsst.pipe.base.Task.getAllSchemaCatalogs`.
clobber : `bool`, optional
A boolean flag that controls what happens if a schema already has been saved:
- `True`: overwrite or rename the existing schema, depending on ``doBackup``.
- `False`: raise `TaskError` if this schema does not match the existing schema.
doBackup : `bool`, optional
Set to `True` to backup the schema files if clobbering.
Notes
-----
If ``clobber`` is `False` and an existing schema does not match a current schema,
then some schemas may have been saved successfully and others may not, and there is no easy way to
tell which is which.
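Each schema is written under its dataset type plus a ``_schema`` suffix; the mapping can be sketched with hypothetical dataset types standing in for real catalogs:

```python
# Stand-in for the dict returned by getAllSchemaCatalogs().
schemaCatalogs = {"src": "<src catalog>", "deepCoadd_meas": "<meas catalog>"}

# Each schema catalog is written to "<dataset type>_schema".
schemaDatasets = {dataset + "_schema": catalog
                  for dataset, catalog in schemaCatalogs.items()}
```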
Definition at line 696 of file cmdLineTask.py.
def writeSchemas(self, butler, clobber=False, doBackup=True):
    """Write the schemas returned by `lsst.pipe.base.Task.getAllSchemaCatalogs`."""
    for dataset, catalog in self.getAllSchemaCatalogs().items():
        schemaDataset = dataset + "_schema"
        if clobber:
            butler.put(catalog, schemaDataset, doBackup=doBackup)
        elif butler.datasetExists(schemaDataset, write=True):
            oldSchema = butler.get(schemaDataset, immediate=True).getSchema()
            if not oldSchema.compare(catalog.getSchema(), afwTable.Schema.IDENTICAL):
                raise TaskError(
                    f"New schema does not match schema {dataset!r} on disk; "
                    "schemas must be consistent within the same output repo "
                    "(override with --clobber-config)")
        else:
            butler.put(catalog, schemaDataset)