LSST Applications
LSST Data Management Base Package
Public Member Functions

def __new__(cls, *args, **kwargs)
def __init__(self, **kwargs)
def __getstate__(self)
def __setstate__(self, state)
def keys(self)
def queryMetadata(self, datasetType, format, dataId)
def getDatasetTypes(self)
def map(self, datasetType, dataId, write=False)
def canStandardize(self, datasetType)
def standardize(self, datasetType, item, dataId)
def validate(self, dataId)
def backup(self, datasetType, dataId)
def getRegistry(self)
Static Public Member Functions

def Mapper(cfg)
Mapper is a base class for all mappers.
Subclasses may define the following methods:
map_{datasetType}(self, dataId, write)
Map a dataset id for the given dataset type into a ButlerLocation.
If write=True, this mapping is for an output dataset.
query_{datasetType}(self, key, format, dataId)
Return the possible values for the format fields that would produce
datasets at the granularity of key in combination with the provided
partial dataId.
std_{datasetType}(self, item)
Standardize an object of the given dataset type.
Methods that must be overridden:
keys(self)
Return a list of the keys that can be used in data ids.
Other public methods:
__init__(self)
getDatasetTypes(self)
map(self, datasetType, dataId, write=False)
queryMetadata(self, datasetType, key, format, dataId)
canStandardize(self, datasetType)
standardize(self, datasetType, item, dataId)
validate(self, dataId)
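The naming convention above (dispatch from map()/canStandardize() to per-dataset-type methods) can be sketched with a small, self-contained stand-in. Note that MiniMapper, DemoMapper, map_raw, and std_raw below are hypothetical illustration names, not the real LSST classes; the real map() returns a ButlerLocation rather than a path string.

```python
# Minimal sketch of the subclassing convention: map() looks up
# map_{datasetType} via getattr, canStandardize() checks for
# std_{datasetType}. Hypothetical stand-in, not lsst.daf.persistence.

class MiniMapper:
    def keys(self):
        # Subclasses must override: the keys usable in data ids.
        return ["visit", "ccd"]

    def map(self, datasetType, dataId, write=False):
        func = getattr(self, "map_" + datasetType)  # e.g. map_raw
        return func(dataId, write)

    def canStandardize(self, datasetType):
        return hasattr(self, "std_" + datasetType)

class DemoMapper(MiniMapper):
    def map_raw(self, dataId, write):
        # Stand-in for returning a ButlerLocation: here just a path.
        return "raw/v%(visit)d_c%(ccd)d.fits" % dataId

    def std_raw(self, item):
        return item

mapper = DemoMapper()
print(mapper.map("raw", {"visit": 1, "ccd": 2}))  # raw/v1_c2.fits
print(mapper.canStandardize("raw"))               # True
print(mapper.canStandardize("calexp"))            # False
```

Because dispatch is by method name, a subclass advertises support for a dataset type simply by defining the correspondingly named method.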
def lsst.daf.persistence.mapper.Mapper.Mapper(cfg)  [static]
Instantiate a Mapper from a configuration.
In some cases the cfg may already have been instantiated into a Mapper; this is allowed, and
the input is simply returned.
:param cfg: the cfg for this mapper. It is recommended this be created by calling
            Mapper.cfg()
:return: a Mapper instance
Definition at line 71 of file mapper.py.
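The pass-through behavior described above (an already-instantiated Mapper is returned unchanged) is an idempotent-factory pattern. A self-contained sketch, using hypothetical names (MiniMapper, make_mapper) rather than the real LSST API:

```python
# Idempotent factory sketch: instantiate from a configuration unless the
# argument is already an instance, in which case return it unchanged.

class MiniMapper:
    def __init__(self, cfg=None):
        self.cfg = cfg

def make_mapper(cfg):
    if isinstance(cfg, MiniMapper):
        # Already a mapper: pass it through unchanged.
        return cfg
    return MiniMapper(cfg)

m = make_mapper({"root": "/data/repo"})
assert make_mapper(m) is m  # a Mapper given back is returned as-is
```

This lets calling code accept either a configuration or a ready-made mapper without branching at every call site.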
def lsst.daf.persistence.mapper.Mapper.__init__(self, **kwargs)
def lsst.daf.persistence.mapper.Mapper.__getstate__(self)
def lsst.daf.persistence.mapper.Mapper.__new__(cls, *args, **kwargs)
Create a new Mapper, saving arguments for pickling. This is done in __new__ instead of __init__ to save the user from having to save the arguments themselves (either explicitly, or by calling the super's __init__ with all their *args, **kwargs). The resulting pickling system (of __new__, __getstate__, and __setstate__) is similar to how __reduce__ is usually used, except that it relieves the user of any responsibility (except when overriding __new__, but that is not common).
Definition at line 84 of file mapper.py.
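The pickling scheme described above can be sketched in isolation: __new__ records the constructor arguments, and __getstate__/__setstate__ replay them when unpickling, so subclasses need not do anything. PickleByNew and PointMapper are hypothetical names; this mirrors the idea, not the exact LSST code.

```python
import pickle

# Sketch of the __new__/__getstate__/__setstate__ pickling pattern.

class PickleByNew:
    def __new__(cls, *args, **kwargs):
        self = super().__new__(cls)
        self._arguments = (args, kwargs)  # saved automatically for pickling
        return self

    def __getstate__(self):
        return self._arguments

    def __setstate__(self, state):
        args, kwargs = state
        # Re-run construction with the originally supplied arguments.
        self.__init__(*args, **kwargs)
        self._arguments = state

class PointMapper(PickleByNew):
    def __init__(self, root):
        self.root = root  # never touches _arguments itself

p = PointMapper("/data/repo")
q = pickle.loads(pickle.dumps(p))
print(q.root)  # /data/repo
```

The subclass's __init__ stays ordinary; only code that overrides __new__ needs to be aware of the argument-saving step.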
def lsst.daf.persistence.mapper.Mapper.__setstate__(self, state)
def lsst.daf.persistence.mapper.Mapper.backup(self, datasetType, dataId)
Rename any existing object with the given type and dataId. Not implemented in the base mapper.
def lsst.daf.persistence.mapper.Mapper.canStandardize(self, datasetType)
Return True if this mapper can standardize an object of the given dataset type.
def lsst.daf.persistence.mapper.Mapper.getDatasetTypes(self)
def lsst.daf.persistence.mapper.Mapper.getRegistry(self)
def lsst.daf.persistence.mapper.Mapper.keys(self)
Reimplemented in lsst.pipe.tasks.mocks.simpleMapper.SimpleMapper.
Definition at line 111 of file mapper.py.
def lsst.daf.persistence.mapper.Mapper.map(self, datasetType, dataId, write=False)
Map a data id using the mapping method for its dataset type.
Parameters
----------
datasetType : string
The datasetType to map
dataId : DataId instance
The dataId to use when mapping
write : bool, optional
Indicates if the map is being performed for a read operation
(False) or a write operation (True)
Returns
-------
ButlerLocation or a list of ButlerLocation
The location(s) found for the map operation. If write is True, a
list is returned. If write is False a single ButlerLocation is
returned.
Raises
------
NoResults
If no location was found for this map operation, the derived mapper
class may raise a lsst.daf.persistence.NoResults exception. Butler
catches this and will look in the next Repository if there is one.
Definition at line 137 of file mapper.py.
def lsst.daf.persistence.mapper.Mapper.queryMetadata(self, datasetType, format, dataId)
def lsst.daf.persistence.mapper.Mapper.standardize(self, datasetType, item, dataId)
def lsst.daf.persistence.mapper.Mapper.validate(self, dataId)