LSST Data Management Base Package
Public Member Functions | Public Attributes | List of all members
lsst.obs.base.gen2to3.rootRepoConverter.RootRepoConverter Class Reference
Inheritance: lsst.obs.base.gen2to3.rootRepoConverter.RootRepoConverter derives from lsst.obs.base.gen2to3.standardRepoConverter.StandardRepoConverter, which in turn derives from lsst.obs.base.gen2to3.repoConverter.RepoConverter.

Public Member Functions

def __init__ (self, **kwds)
 
bool isDatasetTypeSpecial (self, str datasetTypeName)
 
List[str] getSpecialDirectories (self)
 
Tuple[Optional[BaseSkyMap], Optional[str]] findMatchingSkyMap (self, str datasetTypeName)
 
def runRawIngest (self, pool=None)
 
def runDefineVisits (self, pool=None)
 
def prep (self)
 
Iterator[FileDataset] iterDatasets (self)
 
str getRun (self, str datasetTypeName, Optional[str] calibDate=None)
 
Iterator[Tuple[str, CameraMapperMapping]] iterMappings (self)
 
RepoWalker.Target makeRepoWalkerTarget (self, str datasetTypeName, str template, Dict[str, type] keys, StorageClass storageClass, FormatterParameter formatter=None, Optional[PathElementHandler] targetHandler=None)
 
List[str] getCollectionChain (self)
 
def findDatasets (self)
 
def expandDataIds (self)
 
def ingest (self)
 
None finish (self)
 

Public Attributes

 butler2
 
 mapper
 
 task
 
 root
 
 instrument
 
 subset
 

Detailed Description

A specialization of `RepoConverter` for root data repositories.

`RootRepoConverter` adds support for raw images (mostly delegated to the
parent task's `RawIngestTask` subtask) and reference catalogs.

Parameters
----------
kwds
    Keyword arguments are forwarded to (and required by) `RepoConverter`.

Definition at line 63 of file rootRepoConverter.py.
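
The conversion entry points below are normally driven by the parent ConvertRepoTask. As a minimal sketch of the ordering contracts stated on this page only (prep before ingest, expandDataIds between findDatasets and ingest, finish after ingest), the following hypothetical driver function assumes an already-constructed converter; construction and the placement of runRawIngest()/runDefineVisits() are orchestrated by the parent task and are not shown.

def convert_root_repo(converter):
    """Drive one RootRepoConverter through the documented call order (illustrative sketch only)."""
    converter.prep()           # fast, read-only preparation; guaranteed before ingest()
    converter.findDatasets()   # walk the Gen2 repository and collect FileDatasets
    converter.expandDataIds()  # registry queries only; between findDatasets() and ingest()
    converter.ingest()         # write the converted datasets into the Gen3 repository
    converter.finish()         # runs after ingest(); delegates to _finish()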

Constructor & Destructor Documentation

◆ __init__()

def lsst.obs.base.gen2to3.rootRepoConverter.RootRepoConverter.__init__(self, **kwds)

Reimplemented from lsst.obs.base.gen2to3.standardRepoConverter.StandardRepoConverter.

Definition at line 75 of file rootRepoConverter.py.

def __init__(self, **kwds):
    super().__init__(run=None, **kwds)
    self._refCats: Dict[str, SkyPixDimension] = {}
    if self.task.config.rootSkyMapName is not None:
        self._rootSkyMap = self.task.config.skyMaps[self.task.config.rootSkyMapName].skyMap.apply()
    else:
        self._rootSkyMap = None  # All access to _rootSkyMap is guarded
    self._rawRefs = []

Member Function Documentation

◆ expandDataIds()

def lsst.obs.base.gen2to3.repoConverter.RepoConverter.expandDataIds (   self)
inherited
Expand the data IDs for all datasets to be inserted.

Subclasses may override this method, but must delegate to the base
class implementation if they do.

This involves queries to the registry, but not writes.  It is
guaranteed to be called between `findDatasets` and `ingest`.

Definition at line 441 of file repoConverter.py.

def expandDataIds(self):
    """Expand the data IDs for all datasets to be inserted.

    Subclasses may override this method, but must delegate to the base
    class implementation if they do.

    This involves queries to the registry, but not writes.  It is
    guaranteed to be called between `findDatasets` and `ingest`.
    """
    import itertools
    for datasetType, datasetsByCalibDate in self._fileDatasets.items():
        for calibDate, datasetsForCalibDate in datasetsByCalibDate.items():
            nDatasets = len(datasetsForCalibDate)
            suffix = "" if nDatasets == 1 else "s"
            if calibDate is not None:
                self.task.log.info("Expanding data IDs for %s %s dataset%s at calibDate %s.",
                                   nDatasets,
                                   datasetType.name,
                                   suffix,
                                   calibDate)
            else:
                self.task.log.info("Expanding data IDs for %s %s non-calibration dataset%s.",
                                   nDatasets,
                                   datasetType.name,
                                   suffix)
            expanded = []
            for dataset in datasetsForCalibDate:
                for i, ref in enumerate(dataset.refs):
                    self.task.log.debug("Expanding data ID %s.", ref.dataId)
                    try:
                        dataId = self.task.registry.expandDataId(ref.dataId)
                        dataset.refs[i] = ref.expanded(dataId)
                    except LookupError as err:
                        self.task.log.warn("Skipping ingestion for '%s': %s", dataset.path, err)
                        # Remove skipped datasets from multi-extension
                        # FileDatasets.
                        dataset.refs[i] = None  # We will strip off the `None`s after the loop.
                dataset.refs[:] = itertools.filterfalse(lambda x: x is None, dataset.refs)
                if dataset.refs:
                    expanded.append(dataset)
            datasetsForCalibDate[:] = expanded

◆ findDatasets()

def lsst.obs.base.gen2to3.repoConverter.RepoConverter.findDatasets (   self)
inherited

Definition at line 424 of file repoConverter.py.

def findDatasets(self):
    assert self._repoWalker, "prep() must be called before findDatasets."
    self.task.log.info("Adding special datasets in repo %s.", self.root)
    for dataset in self.iterDatasets():
        assert len(dataset.refs) == 1
        # None index below is for calibDate, which is only relevant for
        # CalibRepoConverter.
        self._fileDatasets[dataset.refs[0].datasetType][None].append(dataset)
    self.task.log.info("Finding datasets from files in repo %s.", self.root)
    datasetsByTypeAndCalibDate = self._repoWalker.walk(
        self.root,
        predicate=(self.subset.isRelated if self.subset is not None else None)
    )
    for datasetType, datasetsByCalibDate in datasetsByTypeAndCalibDate.items():
        for calibDate, datasets in datasetsByCalibDate.items():
            self._fileDatasets[datasetType][calibDate].extend(datasets)

◆ findMatchingSkyMap()

Tuple[Optional[BaseSkyMap], Optional[str]] lsst.obs.base.gen2to3.rootRepoConverter.RootRepoConverter.findMatchingSkyMap(self, str datasetTypeName)
Return the appropriate SkyMap for the given dataset type.

Parameters
----------
datasetTypeName : `str`
    Name of the dataset type for which a skymap is sought.

Returns
-------
skyMap : `BaseSkyMap` or `None`
    The `BaseSkyMap` instance, or `None` if there was no match.
skyMapName : `str` or `None`
    The Gen3 name for the SkyMap, or `None` if there was no match.

Reimplemented from lsst.obs.base.gen2to3.standardRepoConverter.StandardRepoConverter.

Definition at line 97 of file rootRepoConverter.py.

def findMatchingSkyMap(self, datasetTypeName: str) -> Tuple[Optional[BaseSkyMap], Optional[str]]:
    # Docstring inherited from StandardRepoConverter.findMatchingSkyMap.
    skyMap, name = super().findMatchingSkyMap(datasetTypeName)
    if skyMap is None and self.task.config.rootSkyMapName is not None:
        self.task.log.debug(
            ("Assuming configured root skymap with name '%s' for dataset %s."),
            self.task.config.rootSkyMapName, datasetTypeName
        )
        skyMap = self._rootSkyMap
        name = self.task.config.rootSkyMapName
    return skyMap, name

◆ finish()

None lsst.obs.base.gen2to3.repoConverter.RepoConverter.finish (   self)
inherited
Finish conversion of a repository.

This is run after ``ingest``, and delegates to `_finish`, which should
be overridden by derived classes instead of this method.

Definition at line 511 of file repoConverter.py.

def finish(self) -> None:
    """Finish conversion of a repository.

    This is run after ``ingest``, and delegates to `_finish`, which should
    be overridden by derived classes instead of this method.
    """
    self._finish(self._fileDatasets)

◆ getCollectionChain()

List[str] lsst.obs.base.gen2to3.standardRepoConverter.StandardRepoConverter.getCollectionChain (   self)
inherited
Return run names that can be used to construct a chained collection
that refers to the converted repository (`list` [ `str` ]).

Definition at line 207 of file standardRepoConverter.py.

def getCollectionChain(self) -> List[str]:
    """Return run names that can be used to construct a chained collection
    that refers to the converted repository (`list` [ `str` ]).
    """
    return self._chain

◆ getRun()

str lsst.obs.base.gen2to3.rootRepoConverter.RootRepoConverter.getRun(self, str datasetTypeName, Optional[str] calibDate = None)
Return the name of the run into which instances of the given dataset
type should be inserted in this collection.

Parameters
----------
datasetTypeName : `str`
    Name of the dataset type.
calibDate : `str`, optional
    If not `None`, the "CALIBDATE" associated with this (calibration)
    dataset in the Gen2 data repository.

Returns
-------
run : `str`
    Name of the `~lsst.daf.butler.CollectionType.RUN` collection.

Reimplemented from lsst.obs.base.gen2to3.standardRepoConverter.StandardRepoConverter.

Definition at line 186 of file rootRepoConverter.py.

def getRun(self, datasetTypeName: str, calibDate: Optional[str] = None) -> str:
    # Docstring inherited from RepoConverter.
    if datasetTypeName in self._refCats:
        return self.instrument.makeRefCatCollectionName("gen2")
    return super().getRun(datasetTypeName, calibDate)

◆ getSpecialDirectories()

List[str] lsst.obs.base.gen2to3.rootRepoConverter.RootRepoConverter.getSpecialDirectories (   self)
Return a list of directory paths that should not be searched for
files.

These may be directories that simply do not contain datasets (or
contain datasets in another repository), or directories whose datasets
are handled specially by a subclass.

Returns
-------
directories : `list` [`str`]
    The full paths of directories to skip, relative to the repository
    root.

Reimplemented from lsst.obs.base.gen2to3.repoConverter.RepoConverter.

Definition at line 93 of file rootRepoConverter.py.

def getSpecialDirectories(self) -> List[str]:
    # Docstring inherited from RepoConverter.
    return super().getSpecialDirectories() + ["CALIB", "ref_cats", "rerun"]

◆ ingest()

def lsst.obs.base.gen2to3.repoConverter.RepoConverter.ingest (   self)
inherited
Insert converted datasets into the Gen3 repository.

Subclasses may override this method, but must delegate to the base
class implementation at some point in their own logic.

This method is guaranteed to be called after `expandDataIds`.

Definition at line 483 of file repoConverter.py.

def ingest(self):
    """Insert converted datasets into the Gen3 repository.

    Subclasses may override this method, but must delegate to the base
    class implementation at some point in their own logic.

    This method is guaranteed to be called after `expandDataIds`.
    """
    for datasetType, datasetsByCalibDate in self._fileDatasets.items():
        self.task.registry.registerDatasetType(datasetType)
        for calibDate, datasetsForCalibDate in datasetsByCalibDate.items():
            try:
                run = self.getRun(datasetType.name, calibDate)
            except LookupError:
                self.task.log.warn(f"No run configured for dataset type {datasetType.name}.")
                continue
            nDatasets = len(datasetsForCalibDate)
            self.task.log.info("Ingesting %s %s dataset%s into run %s.", nDatasets,
                               datasetType.name, "" if nDatasets == 1 else "s", run)
            try:
                self.task.registry.registerRun(run)
                self.task.butler3.ingest(*datasetsForCalibDate, transfer=self.task.config.transfer,
                                         run=run)
            except LookupError as err:
                raise LookupError(
                    f"Error expanding data ID for dataset type {datasetType.name}."
                ) from err

◆ isDatasetTypeSpecial()

bool lsst.obs.base.gen2to3.rootRepoConverter.RootRepoConverter.isDatasetTypeSpecial(self, str datasetTypeName)
Test whether the given dataset is handled specially by this
converter and hence should be ignored by generic base-class logic that
searches for dataset types to convert.

Parameters
----------
datasetTypeName : `str`
    Name of the dataset type to test.

Returns
-------
special : `bool`
    `True` if the dataset type is special.

Reimplemented from lsst.obs.base.gen2to3.standardRepoConverter.StandardRepoConverter.

Definition at line 84 of file rootRepoConverter.py.

def isDatasetTypeSpecial(self, datasetTypeName: str) -> bool:
    # Docstring inherited from RepoConverter.
    return (
        super().isDatasetTypeSpecial(datasetTypeName)
        or datasetTypeName in ("raw", "ref_cat", "ref_cat_config")
        # in Gen2, some of these are in the root repo, not a calib repo
        or datasetTypeName in self.instrument.getCuratedCalibrationNames()
    )

◆ iterDatasets()

Iterator[FileDataset] lsst.obs.base.gen2to3.rootRepoConverter.RootRepoConverter.iterDatasets (   self)
Iterate over datasets in the repository that should be ingested into
the Gen3 repository.

The base class implementation yields nothing; the datasets handled by
the `RepoConverter` base class itself are read directly in
`findDatasets`.

Subclasses should override this method if they support additional
datasets that are handled some other way.

Yields
------
dataset : `FileDataset`
    Structures representing datasets to be ingested.  Paths should be
    absolute.

Reimplemented from lsst.obs.base.gen2to3.standardRepoConverter.StandardRepoConverter.

Definition at line 163 of file rootRepoConverter.py.

def iterDatasets(self) -> Iterator[FileDataset]:
    # Docstring inherited from RepoConverter.
    # Iterate over reference catalog files.
    for refCat, dimension in self._refCats.items():
        datasetType = DatasetType(refCat, dimensions=[dimension], universe=self.task.universe,
                                  storageClass="SimpleCatalog")
        if self.subset is None:
            regex = re.compile(r"(\d+)\.fits")
            for fileName in os.listdir(os.path.join(self.root, "ref_cats", refCat)):
                m = regex.match(fileName)
                if m is not None:
                    htmId = int(m.group(1))
                    dataId = self.task.registry.expandDataId({dimension: htmId})
                    yield FileDataset(path=os.path.join(self.root, "ref_cats", refCat, fileName),
                                      refs=DatasetRef(datasetType, dataId))
        else:
            for begin, end in self.subset.skypix[dimension]:
                for htmId in range(begin, end):
                    dataId = self.task.registry.expandDataId({dimension: htmId})
                    yield FileDataset(path=os.path.join(self.root, "ref_cats", refCat, f"{htmId}.fits"),
                                      refs=DatasetRef(datasetType, dataId))
    yield from super().iterDatasets()
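
The reference-catalog branch above depends on the Gen2 on-disk convention that each shard file under ref_cats/<catalog>/ is named "<htmId>.fits", with the integer being the HTM pixel index. A standalone, stdlib-only sketch of that filename convention; the helper name and the directory path in the usage comment are hypothetical:

import os
import re

SHARD_REGEX = re.compile(r"(\d+)\.fits")  # same pattern used by iterDatasets above

def find_refcat_shards(refCatDir):
    """Yield (htmId, path) pairs for shard files in one ref_cats subdirectory."""
    for fileName in os.listdir(refCatDir):
        m = SHARD_REGEX.match(fileName)
        if m is not None:
            yield int(m.group(1)), os.path.join(refCatDir, fileName)

# e.g. list(find_refcat_shards("/repo/ref_cats/some_refcat"))  # hypothetical path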

◆ iterMappings()

Iterator[Tuple[str, CameraMapperMapping]] lsst.obs.base.gen2to3.standardRepoConverter.StandardRepoConverter.iterMappings (   self)
inherited
Iterate over all `CameraMapper` `Mapping` objects that should be
considered for conversion by this repository.

This should include any datasets that may appear in the
repository, including those that are special (see
`isDatasetTypeSpecial`) and those that are being ignored (see
`ConvertRepoTask.isDatasetTypeIncluded`); this allows the converter
to identify and hence skip these datasets quietly instead of warning
about them as unrecognized.

Yields
------
datasetTypeName : `str`
    Name of the dataset type.
mapping : `lsst.obs.base.mapping.Mapping`
    Mapping object used by the Gen2 `CameraMapper` to describe the
    dataset type.

Reimplemented from lsst.obs.base.gen2to3.repoConverter.RepoConverter.

Definition at line 124 of file standardRepoConverter.py.

def iterMappings(self) -> Iterator[Tuple[str, CameraMapperMapping]]:
    # Docstring inherited from RepoConverter.
    for datasetTypeName, mapping in self.mapper.mappings.items():
        if datasetTypeName not in self.mapper.calibrations:
            yield datasetTypeName, mapping

◆ makeRepoWalkerTarget()

RepoWalker.Target lsst.obs.base.gen2to3.standardRepoConverter.StandardRepoConverter.makeRepoWalkerTarget(self, str datasetTypeName, str template, Dict[str, type] keys, StorageClass storageClass, FormatterParameter formatter = None, Optional[PathElementHandler] targetHandler = None)
inherited
Make a struct that identifies a dataset type to be extracted by
walking the repo directory structure.

Parameters
----------
datasetTypeName : `str`
    Name of the dataset type (the same in both Gen2 and Gen3).
template : `str`
    The full Gen2 filename template.
keys : `dict` [`str`, `type`]
    A dictionary mapping Gen2 data ID key to the type of its value.
storageClass : `lsst.daf.butler.StorageClass`
    Gen3 storage class for this dataset type.
formatter : `lsst.daf.butler.Formatter` or `str`, optional
    A Gen 3 formatter class or fully-qualified name.
targetHandler : `PathElementHandler`, optional
    Specialist target handler to use for this dataset type.

Returns
-------
target : `RepoWalker.Target`
    A struct containing information about the target dataset (much of
it simply forwarded from the arguments).

Reimplemented from lsst.obs.base.gen2to3.repoConverter.RepoConverter.

Definition at line 167 of file standardRepoConverter.py.

def makeRepoWalkerTarget(self, datasetTypeName: str, template: str, keys: Dict[str, type],
                         storageClass: StorageClass, formatter: FormatterParameter = None,
                         targetHandler: Optional[PathElementHandler] = None,
                         ) -> RepoWalker.Target:
    # Docstring inherited from RepoConverter.
    skyMap, skyMapName = self.findMatchingSkyMap(datasetTypeName)
    return RepoWalker.Target(
        datasetTypeName=datasetTypeName,
        storageClass=storageClass,
        template=template,
        keys=keys,
        universe=self.task.registry.dimensions,
        instrument=self.task.instrument.getName(),
        skyMap=skyMap,
        skyMapName=skyMapName,
        formatter=formatter,
        targetHandler=targetHandler,
        translatorFactory=self.task.translatorFactory,
    )

◆ prep()

def lsst.obs.base.gen2to3.rootRepoConverter.RootRepoConverter.prep (   self)
Perform preparatory work associated with the dataset types to be
converted from this repository (but not the datasets themselves).

Notes
-----
This should be a relatively fast operation that should not depend on
the size of the repository.

Subclasses may override this method, but must delegate to the base
class implementation at some point in their own logic.
More often, subclasses will specialize the behavior of `prep` by
overriding other methods to which the base class implementation
delegates.  These include:
 - `iterMappings`
 - `isDatasetTypeSpecial`
 - `getSpecialDirectories`
 - `makeRepoWalkerTarget`

This should not perform any write operations to the Gen3 repository.
It is guaranteed to be called before `ingest`.

Reimplemented from lsst.obs.base.gen2to3.standardRepoConverter.StandardRepoConverter.

Definition at line 133 of file rootRepoConverter.py.

def prep(self):
    # Docstring inherited from RepoConverter.
    # Gather information about reference catalogs.
    if self.task.isDatasetTypeIncluded("ref_cat") and len(self.task.config.refCats) != 0:
        from lsst.meas.algorithms import DatasetConfig as RefCatDatasetConfig
        for refCat in os.listdir(os.path.join(self.root, "ref_cats")):
            path = os.path.join(self.root, "ref_cats", refCat)
            configFile = os.path.join(path, "config.py")
            if not os.path.exists(configFile):
                continue
            if refCat not in self.task.config.refCats:
                continue
            self.task.log.info(f"Preparing ref_cat {refCat} from root {self.root}.")
            onDiskConfig = RefCatDatasetConfig()
            onDiskConfig.load(configFile)
            if onDiskConfig.indexer.name != "HTM":
                raise ValueError(f"Reference catalog '{refCat}' uses unsupported "
                                 f"pixelization '{onDiskConfig.indexer.name}'.")
            level = onDiskConfig.indexer["HTM"].depth
            try:
                dimension = self.task.universe[f"htm{level}"]
            except KeyError as err:
                raise ValueError(f"Reference catalog {refCat} uses HTM level {level}, but no htm{level} "
                                 f"skypix dimension is configured for this registry.") from err
            self.task.useSkyPix(dimension)
            self._refCats[refCat] = dimension
    if self.task.isDatasetTypeIncluded("brightObjectMask") and self.task.config.rootSkyMapName:
        self.task.useSkyMap(self._rootSkyMap, self.task.config.rootSkyMapName)
    super().prep()
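
As the `prep` docstring notes, subclasses normally customize conversion by overriding the hook methods `prep` delegates to rather than `prep` itself. The following is only a hedged sketch of what such an override could look like, using method signatures documented on this page; the subclass name, the extra dataset type, and the extra directory are made up for illustration.

from typing import List

from lsst.obs.base.gen2to3.rootRepoConverter import RootRepoConverter

class MyRootRepoConverter(RootRepoConverter):
    """Hypothetical specialization illustrating the override hooks."""

    def isDatasetTypeSpecial(self, datasetTypeName: str) -> bool:
        # Also treat a (made-up) dataset type as handled elsewhere.
        return super().isDatasetTypeSpecial(datasetTypeName) or datasetTypeName == "mySpecialType"

    def getSpecialDirectories(self) -> List[str]:
        # Also skip a (made-up) scratch directory during the repository walk.
        return super().getSpecialDirectories() + ["scratch"]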

◆ runDefineVisits()

def lsst.obs.base.gen2to3.rootRepoConverter.RootRepoConverter.runDefineVisits(self, pool = None)

Definition at line 125 of file rootRepoConverter.py.

def runDefineVisits(self, pool=None):
    if self.task.defineVisits is None:
        return
    dimensions = DimensionGraph(self.task.universe, names=["exposure"])
    exposureDataIds = set(ref.dataId.subset(dimensions) for ref in self._rawRefs)
    self.task.log.info("Defining visits from exposures.")
    self.task.defineVisits.run(exposureDataIds, pool=pool)

◆ runRawIngest()

def lsst.obs.base.gen2to3.rootRepoConverter.RootRepoConverter.runRawIngest(self, pool = None)

Definition at line 109 of file rootRepoConverter.py.

def runRawIngest(self, pool=None):
    if self.task.raws is None:
        return
    self.task.log.info(f"Finding raws in root {self.root}.")
    if self.subset is not None:
        dataRefs = itertools.chain.from_iterable(
            self.butler2.subset(self.task.config.rawDatasetType,
                                visit=visit) for visit in self.subset.visits
        )
    else:
        dataRefs = self.butler2.subset(self.task.config.rawDatasetType)
    dataPaths = getDataPaths(dataRefs)
    self.task.log.info("Ingesting raws from root %s into run %s.", self.root, self.task.raws.butler.run)
    self._rawRefs.extend(self.task.raws.run(dataPaths, pool=pool))
    self._chain = [self.task.raws.butler.run]

Member Data Documentation

◆ butler2

lsst.obs.base.gen2to3.standardRepoConverter.StandardRepoConverter.butler2
inherited

Definition at line 90 of file standardRepoConverter.py.

◆ instrument

lsst.obs.base.gen2to3.repoConverter.RepoConverter.instrument
inherited

Definition at line 213 of file repoConverter.py.

◆ mapper

lsst.obs.base.gen2to3.standardRepoConverter.StandardRepoConverter.mapper
inherited

Definition at line 91 of file standardRepoConverter.py.

◆ root

lsst.obs.base.gen2to3.repoConverter.RepoConverter.root
inherited

Definition at line 212 of file repoConverter.py.

◆ subset

lsst.obs.base.gen2to3.repoConverter.RepoConverter.subset
inherited

Definition at line 214 of file repoConverter.py.

◆ task

lsst.obs.base.gen2to3.repoConverter.RepoConverter.task
inherited

Definition at line 211 of file repoConverter.py.


The documentation for this class was generated from the following file:
rootRepoConverter.py