LSSTApplications 18.1.0
LSSTDataManagementBasePackage

Public Member Functions
def __init__(self, schema=None, **kwargs)
def runDataRef(self, dataRef)
def adaptArgsAndRun(self, inputData, inputDataIds, outputDataIds, butler)
def run(self, fakeCat, exposure, wcs=None, photoCalib=None, exposureIdInfo=None)
def getInitOutputDatasets(self)
def getInputDatasetTypes(cls, config)
def getOutputDatasetTypes(cls, config)
def getPrerequisiteDatasetTypes(cls, config)
def getInitInputDatasetTypes(cls, config)
def getInitOutputDatasetTypes(cls, config)
def getDatasetTypes(cls, config, configClass)
def getPerDatasetTypeDimensions(cls, config)
def run(self, **kwargs)
def runQuantum(self, quantum, butler)
def saveStruct(self, struct, outputDataRefs, butler)
def getResourceConfig(self)
def emptyMetadata(self)
def getSchemaCatalogs(self)
def getAllSchemaCatalogs(self)
def getFullMetadata(self)
def getFullName(self)
def getName(self)
def getTaskDict(self)
def makeSubtask(self, name, **keyArgs)
def timer(self, name, logLevel=Log.DEBUG)
def makeField(cls, doc)
def __reduce__(self)
def applyOverrides(cls, config)
def parseAndRun(cls, args=None, config=None, log=None, doReturnResults=False)
def writeConfig(self, butler, clobber=False, doBackup=True)
def writeSchemas(self, butler, clobber=False, doBackup=True)
def writeMetadata(self, dataRef)
def writePackageVersions(self, butler, clobber=False, doBackup=True, dataset="packages")
Public Attributes

schema
algMetadata
metadata
log
config
Static Public Attributes

ConfigClass = ProcessCcdWithFakesConfig
RunnerClass = TaskRunner
bool canMultiprocess = True
Insert fake objects into calexps.

Add fake stars and galaxies to the given calexp, specified in the dataRef. Galaxy parameters are read in from the specified file and then modelled using galsim. Re-runs characterize image and calibrate image to give a new background estimation and measurement of the calexp.

`ProcessFakeSourcesTask` inherits six functions from insertFakesTask that make images of the fake sources and then add them to the calexp:

`addPixCoords`
    Use the WCS information to add the pixel coordinates of each source. Adds an ``x`` and ``y`` column to the catalog of fake sources.
`trimFakeCat`
    Trim the fake catalog to about the size of the input image.
`mkFakeGalsimGalaxies`
    Use Galsim to make fake double sersic galaxies for each set of galaxy parameters in the input file.
`mkFakeStars`
    Use the PSF information from the calexp to make a fake star using the magnitude information from the input file.
`cleanCat`
    Remove rows of the input fake catalog which have a half light radius, of either the bulge or the disk, that is 0.
`addFakeSources`
    Add the fake sources to the calexp.

Notes
-----
The ``calexp`` with fake sources added to it is written out as the datatype ``calexp_fakes``.
Definition at line 137 of file processFakes.py.
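The `cleanCat` step described above can be sketched in a few lines (a minimal illustration, not the real insertFakesTask code; the column names here are hypothetical, and the real task operates on a pandas DataFrame with configurable column names):

```python
# Sketch of cleanCat: drop catalog rows where either the bulge or the
# disk half-light radius is 0, since those cannot be modelled by galsim.
fake_cat = [
    {"bulge_hlr": 1.2, "disk_hlr": 2.0, "mag": 22.1},
    {"bulge_hlr": 0.0, "disk_hlr": 1.5, "mag": 23.4},  # dropped: bulge hlr is 0
    {"bulge_hlr": 0.8, "disk_hlr": 0.0, "mag": 21.9},  # dropped: disk hlr is 0
]

cleaned = [row for row in fake_cat
           if row["bulge_hlr"] != 0 and row["disk_hlr"] != 0]
```

Only the first row survives; the other two are removed before any galaxy images are made.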
def lsst.pipe.tasks.processFakes.ProcessCcdWithFakesTask.__init__(self, schema=None, **kwargs)
Initialize things! This should go above in the class docstring.
Definition at line 171 of file processFakes.py.
inherited

Pickler.
def lsst.pipe.tasks.processFakes.ProcessCcdWithFakesTask.adaptArgsAndRun(self, inputData, inputDataIds, outputDataIds, butler)
Definition at line 231 of file processFakes.py.
inherited

A hook to allow a task to change the values of its config *after* the camera-specific overrides are loaded but before any command-line overrides are applied.

Parameters
----------
config : instance of task's ``ConfigClass``
    Task configuration.

Notes
-----
This is necessary in some cases because the camera-specific overrides may retarget subtasks, wiping out changes made in ConfigClass.setDefaults. See LSST Trac ticket #2282 for more discussion.

.. warning:: This is called by CmdLineTask.parseAndRun; other ways of constructing a config will not apply these overrides.
Definition at line 527 of file cmdLineTask.py.
inherited |
Empty (clear) the metadata for this Task and all sub-Tasks.
Definition at line 153 of file task.py.
inherited

Get schema catalogs for all tasks in the hierarchy, combining the results into a single dict.

Returns
-------
schemacatalogs : `dict`
    Keys are butler dataset type, values are an empty catalog (an instance of the appropriate lsst.afw.table Catalog type) for all tasks in the hierarchy, from the top-level task down through all subtasks.

Notes
-----
This method may be called on any task in the hierarchy; it will return the same answer, regardless. The default implementation should always suffice. If your subtask uses schemas then override `Task.getSchemaCatalogs`, not this method.
inherited

Return dataset type descriptors defined in task configuration.

This method can be used by other methods that need to extract dataset types from task configuration (e.g. `getInputDatasetTypes` or sub-class methods).

Parameters
----------
config : `Config`
    Configuration for this task. Typically datasets are defined in a task configuration.
configClass : `type`
    Class of the configuration object which defines dataset type.

Returns
-------
Dictionary where key is the name (arbitrary) of the output dataset and value is the `DatasetTypeDescriptor` instance. Default implementation uses configuration field name as dictionary key. Returns empty dict if configuration has no fields with the specified ``configClass``.
Definition at line 328 of file pipelineTask.py.
inherited

Get metadata for all tasks.

Returns
-------
metadata : `lsst.daf.base.PropertySet`
    The `~lsst.daf.base.PropertySet` keys are the full task name. Values are metadata for the top-level task and all subtasks, sub-subtasks, etc.

Notes
-----
The returned metadata includes timing information (if ``@timer.timeMethod`` is used) and any metadata set by the task. The name of each item consists of the full task name with ``.`` replaced by ``:``, followed by ``.`` and the name of the item, e.g.::

    topLevelTaskName:subtaskName:subsubtaskName.itemName

Using ``:`` in the full task name disambiguates the rare situation that a task has a subtask and a metadata item with the same name.
Definition at line 210 of file task.py.
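The item-naming rule described in the notes can be sketched as follows (an illustrative helper under stated assumptions, not the actual Task implementation):

```python
def metadata_item_name(full_task_name, item_name):
    # Full task names use "." between hierarchy levels; per the rule
    # above, replace those with ":" and append "." plus the item name.
    return full_task_name.replace(".", ":") + "." + item_name

name = metadata_item_name("topLevelTaskName.subtaskName.subsubtaskName", "itemName")
# name == "topLevelTaskName:subtaskName:subsubtaskName.itemName"
```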
inherited

Get the task name as a hierarchical name including parent task names.

Returns
-------
fullName : `str`
    The full name consists of the name of the parent task and each subtask separated by periods. For example:

    - The full name of top-level task "top" is simply "top".
    - The full name of subtask "sub" of top-level task "top" is "top.sub".
    - The full name of subtask "sub2" of subtask "sub" of top-level task "top" is "top.sub.sub2".
Definition at line 235 of file task.py.
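The naming scheme above can be sketched as (assumption: a simplified stand-in, not the real Task code):

```python
def full_name(parent_full_name, task_name):
    # A top-level task's full name is its own name; a subtask's full
    # name is its parent's full name plus "." plus its own name.
    if parent_full_name is None:
        return task_name
    return parent_full_name + "." + task_name

top = full_name(None, "top")
sub = full_name(top, "sub")
sub2 = full_name(sub, "sub2")
```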
inherited

Return dataset type descriptors that can be used to retrieve the ``initInputs`` constructor argument.

Datasets used in initialization may not be associated with any Dimension (i.e. their data IDs must be empty dictionaries).

Default implementation finds all fields of type `InitInputInputDatasetConfig` in configuration (non-recursively) and uses them for constructing `DatasetTypeDescriptor` instances. The names of these fields are used as keys in the returned dictionary. Subclasses can override this behavior.

Parameters
----------
config : `Config`
    Configuration for this task. Typically datasets are defined in a task configuration.

Returns
-------
Dictionary where key is the name (arbitrary) of the input dataset and value is the `DatasetTypeDescriptor` instance. Default implementation uses configuration field name as dictionary key. When the task requires no initialization inputs, should return an empty dict.
Definition at line 266 of file pipelineTask.py.
inherited

Return persistable outputs that are available immediately after the task has been constructed.

Subclasses that operate on catalogs should override this method to return the schema(s) of the catalog(s) they produce. It is not necessary to return the PipelineTask's configuration or other provenance information in order for it to be persisted; that is the responsibility of the execution system.

Returns
-------
datasets : `dict`
    Dictionary with keys that match those of the dict returned by `getInitOutputDatasetTypes` and values that can be written by calling `Butler.put` with those DatasetTypes and no data IDs. An empty `dict` should be returned by tasks that produce no initialization outputs.
Definition at line 166 of file pipelineTask.py.
inherited

Return dataset type descriptors that can be used to write the objects returned by `getInitOutputDatasets`.

Datasets used in initialization may not be associated with any Dimension (i.e. their data IDs must be empty dictionaries).

Default implementation finds all fields of type `InitOutputDatasetConfig` in configuration (non-recursively) and uses them for constructing `DatasetTypeDescriptor` instances. The names of these fields are used as keys in the returned dictionary. Subclasses can override this behavior.

Parameters
----------
config : `Config`
    Configuration for this task. Typically datasets are defined in a task configuration.

Returns
-------
Dictionary where key is the name (arbitrary) of the output dataset and value is the `DatasetTypeDescriptor` instance. Default implementation uses configuration field name as dictionary key. When the task produces no initialization outputs, should return an empty dict.
Definition at line 297 of file pipelineTask.py.
inherited

Return input dataset type descriptors for this task.

Default implementation finds all fields of type `InputDatasetConfig` in configuration (non-recursively) and uses them for constructing `DatasetTypeDescriptor` instances. The names of these fields are used as keys in the returned dictionary. Subclasses can override this behavior.

Parameters
----------
config : `Config`
    Configuration for this task. Typically datasets are defined in a task configuration.

Returns
-------
Dictionary where key is the name (arbitrary) of the input dataset and value is the `DatasetTypeDescriptor` instance. Default implementation uses configuration field name as dictionary key.
Definition at line 189 of file pipelineTask.py.
inherited

Get the name of the task.

Returns
-------
taskName : `str`
    Name of the task.

See also
--------
getFullName
inherited

Return output dataset type descriptors for this task.

Default implementation finds all fields of type `OutputDatasetConfig` in configuration (non-recursively) and uses them for constructing `DatasetTypeDescriptor` instances. The keys of these fields are used as keys in the returned dictionary. Subclasses can override this behavior.

Parameters
----------
config : `Config`
    Configuration for this task. Typically datasets are defined in a task configuration.

Returns
-------
Dictionary where key is the name (arbitrary) of the output dataset and value is the `DatasetTypeDescriptor` instance. Default implementation uses configuration field name as dictionary key.
Definition at line 212 of file pipelineTask.py.
inherited

Return any Dimensions that are permitted to have different values for different DatasetTypes within the same quantum.

Parameters
----------
config : `Config`
    Configuration for this task.

Returns
-------
dimensions : `~collections.abc.Set` of `Dimension` or `str`
    The dimensions or names thereof that should be considered per-DatasetType.

Notes
-----
Any Dimension declared to be per-DatasetType by a PipelineTask must also be declared to be per-DatasetType by other PipelineTasks in the same Pipeline. The classic example of a per-DatasetType dimension is the ``CalibrationLabel`` dimension that maps to a validity range for master calibrations. When running Instrument Signature Removal, one does not care that different dataset types like flat, bias, and dark have different validity ranges, as long as those validity ranges all overlap the relevant observation.
Definition at line 358 of file pipelineTask.py.
inherited

Return the local names of input dataset types that should be assumed to exist instead of constraining what data to process with this task.

Usually, when running a `PipelineTask`, the presence of input datasets constrains the processing to be done (as defined by the `QuantumGraph` generated during "preflight"). "Prerequisites" are special input datasets that do not constrain that graph, but instead cause a hard failure when missing. Calibration products and reference catalogs are examples of dataset types that should usually be marked as prerequisites.

Parameters
----------
config : `Config`
    Configuration for this task. Typically datasets are defined in a task configuration.

Returns
-------
prerequisite : `~collections.abc.Set` of `str`
    The keys in the dictionary returned by `getInputDatasetTypes` that represent dataset types that should be considered prerequisites. Names returned here that are not keys in that dictionary are ignored; that way, if a config option removes an input dataset type, only `getInputDatasetTypes` needs to be updated.
Definition at line 235 of file pipelineTask.py.
inherited

Return resource configuration for this task.

Returns
-------
Object of type `~config.ResourceConfig` or ``None`` if resource configuration is not defined for this task.
Definition at line 615 of file pipelineTask.py.
inherited

Get the schemas generated by this task.

Returns
-------
schemaCatalogs : `dict`
    Keys are butler dataset type, values are an empty catalog (an instance of the appropriate `lsst.afw.table` Catalog type) for this task.

Notes
-----
.. warning:: Subclasses that use schemas must override this method. The default implementation returns an empty dict.

This method may be called at any time after the Task is constructed, which means that all task schemas should be computed at construction time, *not* when data is actually processed. This reflects the philosophy that the schema should not depend on the data.

Returning catalogs rather than just schemas allows us to save e.g. slots for SourceCatalog as well.

See also
--------
Task.getAllSchemaCatalogs
inherited

Get a dictionary of all tasks as a shallow copy.

Returns
-------
taskDict : `dict`
    Dictionary containing full task name: task object for the top-level task and all subtasks, sub-subtasks, etc.
Definition at line 264 of file task.py.
inherited

Make a `lsst.pex.config.ConfigurableField` for this task.

Parameters
----------
doc : `str`
    Help text for the field.

Returns
-------
configurableField : `lsst.pex.config.ConfigurableField`
    A `~ConfigurableField` for this task.

Examples
--------
Provides a convenient way to specify this task is a subtask of another task. Here is an example of use::

    class OtherTaskConfig(lsst.pex.config.Config):
        aSubtask = ATaskClass.makeField("a brief description of what this task does")
Definition at line 329 of file task.py.
inherited

Create a subtask as a new instance as the ``name`` attribute of this task.

Parameters
----------
name : `str`
    Brief name of the subtask.
keyArgs
    Extra keyword arguments used to construct the task. The following arguments are automatically provided and cannot be overridden: "config" and "parentTask".

Notes
-----
The subtask must be defined by ``Task.config.name``, an instance of pex_config ConfigurableField or RegistryField.
inherited

Parse an argument list and run the command.

Parameters
----------
args : `list`, optional
    List of command-line arguments; if `None` use `sys.argv`.
config : `lsst.pex.config.Config`-type, optional
    Config for task. If `None` use `Task.ConfigClass`.
log : `lsst.log.Log`-type, optional
    Log. If `None` use the default log.
doReturnResults : `bool`, optional
    If `True`, return the results of this task. Default is `False`. This is only intended for unit tests and similar use. It can easily exhaust memory (if the task returns enough data and you call it enough times) and it will fail when using multiprocessing if the returned data cannot be pickled.

Returns
-------
struct : `lsst.pipe.base.Struct`
    Fields are:

    - ``argumentParser``: the argument parser.
    - ``parsedCmd``: the parsed command returned by the argument parser's `lsst.pipe.base.ArgumentParser.parse_args` method.
    - ``taskRunner``: the task runner used to run the task (an instance of `Task.RunnerClass`).
    - ``resultList``: results returned by the task runner's ``run`` method, one entry per invocation. This will typically be a list of `None` unless ``doReturnResults`` is `True`; see `Task.RunnerClass` (`TaskRunner` by default) for more information.

Notes
-----
Calling this method with no arguments specified is the standard way to run a command-line task from the command-line. For an example see ``pipe_tasks`` ``bin/makeSkyMap.py`` or almost any other file in that directory.

If one or more of the dataIds fails then this routine will exit (with a status giving the number of failed dataIds) rather than returning this struct; this behaviour can be overridden by specifying the ``--noExit`` command-line option.
Definition at line 549 of file cmdLineTask.py.
def lsst.pipe.tasks.processFakes.ProcessCcdWithFakesTask.run(self, fakeCat, exposure, wcs=None, photoCalib=None, exposureIdInfo=None)
Add fake sources to a calexp and then run detection, deblending and measurement.

Parameters
----------
fakeCat : `pandas.core.frame.DataFrame`
    The catalog of fake sources to add to the exposure.
exposure : `lsst.afw.image.exposure.exposure.ExposureF`
    The exposure to add the fake sources to.
wcs : `lsst.afw.geom.skyWcs.skyWcs.SkyWcs`
    WCS to use to add fake sources.
photoCalib : `lsst.afw.image.photoCalib.PhotoCalib`
    Photometric calibration to be used to calibrate the fake sources.
exposureIdInfo : `lsst.obs.base.ExposureIdInfo`

Returns
-------
resultStruct : `lsst.pipe.base.struct.Struct`
    Contains:

    outputExposure : `lsst.afw.image.exposure.exposure.ExposureF`
    outputCat : `lsst.afw.table.source.source.SourceCatalog`

Notes
-----
Adds pixel coordinates for each source to the fakeCat and removes objects with bulge or disk half light radius = 0 (if ``config.cleanCat = True``). These columns are called ``x`` and ``y`` and are in pixels.

Adds the ``Fake`` mask plane to the exposure, which is then set by `addFakeSources` to mark where fake sources have been added.

Uses the information in the ``fakeCat`` to make fake galaxies (using galsim) and fake stars, using the PSF models from the PSF information for the calexp. These are then added to the calexp and the calexp with fakes included is returned. The galsim galaxies are made using a double sersic profile, one for the bulge and one for the disk; this is then convolved with the PSF at that point.

If exposureIdInfo is not provided then the SourceCatalog IDs will not be globally unique.
Definition at line 254 of file processFakes.py.
inherited

Run task algorithm on in-memory data.

This method should be implemented in a subclass unless the task overrides `adaptArgsAndRun` to do something different from its default implementation. With the default implementation of `adaptArgsAndRun` this method will receive keyword arguments whose names will be the same as the names of configuration fields describing input dataset types. Argument values will be data objects retrieved from the data butler. If a dataset type is configured with the ``scalar`` field set to ``True`` then the argument value will be a single object, otherwise it will be a list of objects.

If the task needs to know its input or output DataIds then it has to override the `adaptArgsAndRun` method instead.

Returns
-------
struct : `Struct`
    See description of `adaptArgsAndRun` method.

Examples
--------
Typical implementation of this method may look like::

    def run(self, input, calib):
        # "input", "calib", and "output" are the names of the config fields

        # Assuming that input/calib datasets are `scalar` they are simple objects,
        # do something with inputs and calibs, produce output image.
        image = self.makeImage(input, calib)

        # If output dataset is `scalar` then return object, not list
        return Struct(output=image)
Definition at line 444 of file pipelineTask.py.
def lsst.pipe.tasks.processFakes.ProcessCcdWithFakesTask.runDataRef(self, dataRef)
Read in/write out the required data products and add fake sources to the calexp.

Parameters
----------
dataRef : `lsst.daf.persistence.butlerSubset.ButlerDataRef`
    Data reference defining the ccd to have fakes added to it. Used to access the following data products:

    - calexp
    - jointcal_wcs
    - jointcal_photoCalib

Notes
-----
Uses the calibration and WCS information attached to the calexp for the positioning and calibration of the sources, unless the config option config.useUpdatedCalibs is set, in which case it uses the meas_mosaic/jointCal outputs. The config defaults for the column names in the catalog of fakes are taken from the University of Washington simulations database. Operates on one ccd at a time.
Definition at line 188 of file processFakes.py.
inherited

Execute PipelineTask algorithm on a single quantum of data.

A typical implementation of this method will use inputs from the quantum to retrieve Python-domain objects from the data butler and call the `adaptArgsAndRun` method on that data. On return from `adaptArgsAndRun` this method will extract data from the returned `Struct` instance and save that data to the butler.

The `Struct` returned from `adaptArgsAndRun` is expected to contain data attributes with names equal to the names of the configuration fields defining output dataset types. The values of the data attributes must be data objects corresponding to the DataIds of the output dataset types. All data objects will be saved in the butler using DataRefs from the Quantum's output dictionary.

This method does not return anything to the caller; on errors a corresponding exception is raised.

Parameters
----------
quantum : `Quantum`
    Object describing input and output corresponding to this invocation of PipelineTask instance.
butler : object
    Data butler instance.

Raises
------
`ScalarError` if a dataset type is configured as scalar but receives multiple DataIds in `quantum`. Any exceptions that happen in the data butler or in the `adaptArgsAndRun` method.
Definition at line 481 of file pipelineTask.py.
inherited

Save data in butler.

The convention is that the struct returned from the ``run()`` method has data field(s) with the same names as the config fields defining output DatasetTypes. Subclasses may override this method to implement a different convention for `Struct` content, or in case any post-processing of the data is needed.

Parameters
----------
struct : `Struct`
    Data produced by the task packed into a `Struct` instance.
outputDataRefs : `dict`
    Dictionary whose keys are the names of the configuration fields describing output dataset types and values are lists of DataRefs. DataRefs must match corresponding data objects in ``struct`` in number and order.
butler : object
    Data butler instance.
Definition at line 581 of file pipelineTask.py.
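The field-name convention can be sketched with a stand-in for `lsst.pipe.base.Struct` (a simplified illustration under stated assumptions, not the real class or the real saveStruct code; the output names are this task's own):

```python
class Struct:
    # Minimal stand-in: stores keyword arguments as attributes.
    def __init__(self, **kwargs):
        self.__dict__.update(kwargs)

# run() returns a Struct whose field names match the config fields that
# define the output dataset types, e.g. "outputExposure" and "outputCat".
result = Struct(outputExposure="calexp-with-fakes", outputCat="sources")

# saveStruct can then look up each output by attribute name before
# handing the objects to Butler.put.
outputs = {name: getattr(result, name) for name in ("outputExposure", "outputCat")}
```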
inherited

Context manager to log performance data for an arbitrary block of code.

Parameters
----------
name : `str`
    Name of code being timed; data will be logged using item name: ``Start`` and ``End``.
logLevel
    A `lsst.log` level constant.

Examples
--------
Creating a timer context::

    with self.timer("someCodeToTime"):
        pass  # code to time

See also
--------
timer.logInfo
Definition at line 301 of file task.py.
inherited

Write the configuration used for processing the data, or check that an existing one is equal to the new one if present.

Parameters
----------
butler : `lsst.daf.persistence.Butler`
    Data butler used to write the config. The config is written to dataset type `CmdLineTask._getConfigName`.
clobber : `bool`, optional
    A boolean flag that controls what happens if a config already has been saved:

    - `True`: overwrite or rename the existing config, depending on ``doBackup``.
    - `False`: raise `TaskError` if this config does not match the existing config.
doBackup : `bool`, optional
    Set to `True` to backup the config files if clobbering.
Definition at line 649 of file cmdLineTask.py.
inherited

Write the metadata produced from processing the data.

Parameters
----------
dataRef
    Butler data reference used to write the metadata. The metadata is written to dataset type `CmdLineTask._getMetadataName`.
Definition at line 724 of file cmdLineTask.py.
inherited

Compare and write package versions.

Parameters
----------
butler : `lsst.daf.persistence.Butler`
    Data butler used to read/write the package versions.
clobber : `bool`, optional
    A boolean flag that controls what happens if versions already have been saved:

    - `True`: overwrite or rename the existing version info, depending on ``doBackup``.
    - `False`: raise `TaskError` if this version info does not match the existing.
doBackup : `bool`, optional
    If `True` and clobbering, old package version files are backed up.
dataset : `str`, optional
    Name of dataset to read/write.

Raises
------
TaskError
    Raised if there is a version mismatch with current and persisted lists of package versions.

Notes
-----
Note that this operation is subject to a race condition.
Definition at line 740 of file cmdLineTask.py.
inherited

Write the schemas returned by `lsst.pipe.base.Task.getAllSchemaCatalogs`.

Parameters
----------
butler : `lsst.daf.persistence.Butler`
    Data butler used to write the schema. Each schema is written to the dataset type specified as the key in the dict returned by `~lsst.pipe.base.Task.getAllSchemaCatalogs`.
clobber : `bool`, optional
    A boolean flag that controls what happens if a schema already has been saved:

    - `True`: overwrite or rename the existing schema, depending on ``doBackup``.
    - `False`: raise `TaskError` if this schema does not match the existing schema.
doBackup : `bool`, optional
    Set to `True` to backup the schema files if clobbering.

Notes
-----
If ``clobber`` is `False` and an existing schema does not match a current schema, then some schemas may have been saved successfully and others may not, and there is no easy way to tell which is which.
Definition at line 689 of file cmdLineTask.py.
lsst.pipe.tasks.processFakes.ProcessCcdWithFakesTask.algMetadata |
Definition at line 181 of file processFakes.py.
static inherited
Definition at line 161 of file pipelineTask.py.
static inherited
Definition at line 524 of file cmdLineTask.py.
static
Definition at line 169 of file processFakes.py.
static inherited
Definition at line 523 of file cmdLineTask.py.
lsst.pipe.tasks.processFakes.ProcessCcdWithFakesTask.schema |
Definition at line 179 of file processFakes.py.