LSST Data Management Base Package
lsst.meas.base.forcedPhotImage.ForcedPhotImageConnections Class Reference
Inheritance diagram for lsst.meas.base.forcedPhotImage.ForcedPhotImageConnections (diagram omitted; nodes: lsst.pipe.base.connections.PipelineTaskConnections, lsst.pipe.base.connections.PipelineTaskConnectionsMetaclass, lsst.meas.base.forcedPhotCoadd.ForcedPhotCoaddConfig).

Public Member Functions

typing.Tuple[InputQuantizedConnection, OutputQuantizedConnection] buildDatasetRefs (self, Quantum quantum)
 
NamedKeyDict[DatasetType, typing.Set[DatasetRef]] adjustQuantum (self, NamedKeyDict[DatasetType, typing.Set[DatasetRef]] datasetRefMap)
 
def __prepare__ (name, bases, **kwargs)
 
def __new__ (cls, name, bases, dct, **kwargs)
 

Public Attributes

 inputs
 
 prerequisiteInputs
 
 outputs
 
 initInputs
 
 initOutputs
 
 allConnections
 
 config
 

Detailed Description

Definition at line 45 of file forcedPhotImage.py.

Member Function Documentation

◆ __new__()

def lsst.pipe.base.connections.PipelineTaskConnectionsMetaclass.__new__ (   cls,
  name,
  bases,
  dct,
**  kwargs 
)
inherited

Definition at line 110 of file connections.py.

110  def __new__(cls, name, bases, dct, **kwargs):
111      dimensionsValueError = TypeError("PipelineTaskConnections class must be created with a dimensions "
112                                       "attribute which is an iterable of dimension names")
113 
114      if name != 'PipelineTaskConnections':
115          # Verify that dimensions are passed as a keyword in class
116          # declaration
117          if 'dimensions' not in kwargs:
118              for base in bases:
119                  if hasattr(base, 'dimensions'):
120                      kwargs['dimensions'] = base.dimensions
121                      break
122              if 'dimensions' not in kwargs:
123                  raise dimensionsValueError
124          try:
125              if isinstance(kwargs['dimensions'], str):
126                  raise TypeError("Dimensions must be iterable of dimensions, got str, "
127                                  "possibly omitted trailing comma")
128              if not isinstance(kwargs['dimensions'], typing.Iterable):
129                  raise TypeError("Dimensions must be iterable of dimensions")
130              dct['dimensions'] = set(kwargs['dimensions'])
131          except TypeError as exc:
132              raise dimensionsValueError from exc
133          # Lookup any python string templates that may have been used in the
134          # declaration of the name field of a class connection attribute
135          allTemplates = set()
136          stringFormatter = string.Formatter()
137          # Loop over all connections
138          for obj in dct['allConnections'].values():
139              nameValue = obj.name
140              # add all the parameters to the set of templates
141              for param in stringFormatter.parse(nameValue):
142                  if param[1] is not None:
143                      allTemplates.add(param[1])
144 
145          # look up any template from base classes and merge them all
146          # together
147          mergeDict = {}
148          for base in bases[::-1]:
149              if hasattr(base, 'defaultTemplates'):
150                  mergeDict.update(base.defaultTemplates)
151          if 'defaultTemplates' in kwargs:
152              mergeDict.update(kwargs['defaultTemplates'])
153 
154          if len(mergeDict) > 0:
155              kwargs['defaultTemplates'] = mergeDict
156 
157          # Verify that if templated strings were used, defaults were
158          # supplied as an argument in the declaration of the connection
159          # class
160          if len(allTemplates) > 0 and 'defaultTemplates' not in kwargs:
161              raise TypeError("PipelineTaskConnection class contains templated attribute names, but no "
162                              "default templates were provided, add a dictionary attribute named "
163                              "defaultTemplates which contains the mapping between template key and value")
164          if len(allTemplates) > 0:
165              # Verify all templates have a default, and throw if they do not
166              defaultTemplateKeys = set(kwargs['defaultTemplates'].keys())
167              templateDifference = allTemplates.difference(defaultTemplateKeys)
168              if templateDifference:
169                  raise TypeError(f"Default template keys were not provided for {templateDifference}")
170              # Verify that templates do not share names with variable names
171              # used for a connection, this is needed because of how
172              # templates are specified in an associated config class.
173              nameTemplateIntersection = allTemplates.intersection(set(dct['allConnections'].keys()))
174              if len(nameTemplateIntersection) > 0:
175                  raise TypeError(f"Template parameters cannot share names with Class attributes"
176                                  f" (conflicts are {nameTemplateIntersection}).")
177          dct['defaultTemplates'] = kwargs.get('defaultTemplates', {})
178 
179      # Convert all the connection containers into frozensets so they cannot
180      # be modified at the class scope
181      for connectionName in ("inputs", "prerequisiteInputs", "outputs", "initInputs", "initOutputs"):
182          dct[connectionName] = frozenset(dct[connectionName])
183      # our custom dict type must be turned into an actual dict to be used in
184      # type.__new__
185      return super().__new__(cls, name, bases, dict(dct))
186 
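
The checks in __new__ run whenever a connections class is declared. Below is a minimal, hypothetical declaration that exercises them; the connection names, dataset type names, dimensions, and template values are illustrative and are not taken from forcedPhotImage.py.

import lsst.pipe.base.connectionTypes as cT
from lsst.pipe.base import PipelineTaskConnections


class ExampleConnections(PipelineTaskConnections,
                         dimensions=("tract", "patch", "band", "skymap"),
                         defaultTemplates={"coaddName": "deep"}):
    # __new__ verifies that ``dimensions`` is a non-string iterable and that
    # every template key appearing in a connection name (here ``coaddName``)
    # has an entry in ``defaultTemplates``.
    refCat = cT.PrerequisiteInput(
        doc="Reference catalog shards (illustrative).",
        name="ref_cat",
        storageClass="SimpleCatalog",
        dimensions=("skypix",),
        multiple=True,
    )
    measCat = cT.Output(
        doc="Forced measurement catalog (illustrative).",
        name="{coaddName}Coadd_forced_src",
        storageClass="SourceCatalog",
        dimensions=("tract", "patch", "band", "skymap"),
    )

Omitting the dimensions keyword, passing a bare string for it, or using the {coaddName} template without a matching defaultTemplates entry raises the TypeError exceptions shown in the listing above.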

◆ __prepare__()

def lsst.pipe.base.connections.PipelineTaskConnectionsMetaclass.__prepare__ (   name,
  bases,
**  kwargs 
)
inherited

Definition at line 99 of file connections.py.

 99  def __prepare__(name, bases, **kwargs):  # noqa: 805
100      # Create an instance of our special dict to catch and track all
101      # variables that are instances of connectionTypes.BaseConnection
102      # Copy any existing connections from a parent class
103      dct = PipelineTaskConnectionDict()
104      for base in bases:
105          if isinstance(base, PipelineTaskConnectionsMetaclass):
106              for name, value in base.allConnections.items():
107                  dct[name] = value
108      return dct
109 
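
Because __prepare__ seeds the new class's namespace with every connection already declared on its bases, connection attributes are inherited by subclasses. The sketch below illustrates this with hypothetical class and connection names; the dataset type and dimensions are likewise illustrative.

import lsst.pipe.base.connectionTypes as cT
from lsst.pipe.base import PipelineTaskConnections


class ParentConnections(PipelineTaskConnections,
                        dimensions=("instrument", "visit", "detector")):
    exposure = cT.Input(
        doc="Input exposure (illustrative).",
        name="calexp",
        storageClass="ExposureF",
        dimensions=("instrument", "visit", "detector"),
    )


class ChildConnections(ParentConnections,
                       dimensions=("instrument", "visit", "detector")):
    # No connections are declared here, yet ``exposure`` is present because
    # __prepare__ copied it from ParentConnections.allConnections into this
    # class's namespace before the body was executed.  (Per __new__ above,
    # the dimensions keyword could also have been omitted and inherited.)
    pass


assert "exposure" in ChildConnections.allConnections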

◆ adjustQuantum()

NamedKeyDict[DatasetType, typing.Set[DatasetRef]] lsst.pipe.base.connections.PipelineTaskConnections.adjustQuantum (   self,
NamedKeyDict[DatasetType, typing.Set[DatasetRef]]   datasetRefMap 
)
inherited
Override to make adjustments to `lsst.daf.butler.DatasetRef` objects
in the `lsst.daf.butler.core.Quantum` during the graph generation stage
of the activator.

The base class implementation simply checks that input connections with
``multiple`` set to `False` have no more than one dataset.

Parameters
----------
datasetRefMap : `NamedKeyDict`
    Mapping from dataset type to a `set` of
    `lsst.daf.butler.DatasetRef` objects

Returns
-------
datasetRefMap : `NamedKeyDict`
    Modified mapping of input with possibly adjusted
    `lsst.daf.butler.DatasetRef` objects.

Raises
------
ScalarError
    Raised if any `Input` or `PrerequisiteInput` connection has
    ``multiple`` set to `False`, but multiple datasets.
Exception
    Overrides of this function have the option of raising an Exception
    if a field in the input does not satisfy a need for a corresponding
    pipelineTask, e.g. no reference catalogs are found.

Definition at line 459 of file connections.py.

459  def adjustQuantum(self, datasetRefMap: NamedKeyDict[DatasetType, typing.Set[DatasetRef]]
460                    ) -> NamedKeyDict[DatasetType, typing.Set[DatasetRef]]:
461      """Override to make adjustments to `lsst.daf.butler.DatasetRef` objects
462      in the `lsst.daf.butler.core.Quantum` during the graph generation stage
463      of the activator.
464 
465      The base class implementation simply checks that input connections with
466      ``multiple`` set to `False` have no more than one dataset.
467 
468      Parameters
469      ----------
470      datasetRefMap : `NamedKeyDict`
471          Mapping from dataset type to a `set` of
472          `lsst.daf.butler.DatasetRef` objects
473 
474      Returns
475      -------
476      datasetRefMap : `NamedKeyDict`
477          Modified mapping of input with possibly adjusted
478          `lsst.daf.butler.DatasetRef` objects.
479 
480      Raises
481      ------
482      ScalarError
483          Raised if any `Input` or `PrerequisiteInput` connection has
484          ``multiple`` set to `False`, but multiple datasets.
485      Exception
486          Overrides of this function have the option of raising an Exception
487          if a field in the input does not satisfy a need for a corresponding
488          pipelineTask, e.g. no reference catalogs are found.
489      """
490      for connection in itertools.chain(iterConnections(self, "inputs"),
491                                        iterConnections(self, "prerequisiteInputs")):
492          refs = datasetRefMap[connection.name]
493          if not connection.multiple and len(refs) > 1:
494              raise ScalarError(
495                  f"Found multiple datasets {', '.join(str(r.dataId) for r in refs)} "
496                  f"for scalar connection {connection.name} ({refs[0].datasetType.name})."
497              )
498      return datasetRefMap
499 
500 
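
Subclasses may override adjustQuantum to trim or veto the datasets assigned to a quantum during graph generation, as the Raises section allows. The sketch below is a hypothetical override against the signature documented here; the connection, dataset type, and error message are illustrative, and super() is called so the base-class scalar check still runs.

import lsst.pipe.base.connectionTypes as cT
from lsst.pipe.base import PipelineTaskConnections


class RefCatConnections(PipelineTaskConnections,
                        dimensions=("tract", "patch", "band", "skymap")):
    refCat = cT.PrerequisiteInput(
        doc="Reference catalog shards overlapping the patch (illustrative).",
        name="ref_cat",
        storageClass="SimpleCatalog",
        dimensions=("skypix",),
        multiple=True,
    )

    def adjustQuantum(self, datasetRefMap):
        # Run the base-class check for scalar (multiple=False) connections.
        datasetRefMap = super().adjustQuantum(datasetRefMap)
        # Refuse to build the quantum if no reference-catalog shards were
        # matched to this data ID, as the Raises section above permits.
        if not datasetRefMap.get("ref_cat", ()):
            raise RuntimeError("No reference catalog shards found for this quantum.")
        return datasetRefMap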

◆ buildDatasetRefs()

typing.Tuple[InputQuantizedConnection, OutputQuantizedConnection] lsst.pipe.base.connections.PipelineTaskConnections.buildDatasetRefs (   self,
Quantum  quantum 
)
inherited
Builds QuantizedConnections corresponding to input Quantum

Parameters
----------
quantum : `lsst.daf.butler.Quantum`
    Quantum object which defines the inputs and outputs for a given
    unit of processing

Returns
-------
retVal : `tuple` of (`InputQuantizedConnection`, `OutputQuantizedConnection`)
    Namespaces mapping attribute names (identifiers of connections) to
    butler references defined in the input `lsst.daf.butler.Quantum`.

Definition at line 392 of file connections.py.

392  def buildDatasetRefs(self, quantum: Quantum) -> typing.Tuple[InputQuantizedConnection,
393                                                               OutputQuantizedConnection]:
394      """Builds QuantizedConnections corresponding to input Quantum
395 
396      Parameters
397      ----------
398      quantum : `lsst.daf.butler.Quantum`
399          Quantum object which defines the inputs and outputs for a given
400          unit of processing
401 
402      Returns
403      -------
404      retVal : `tuple` of (`InputQuantizedConnection`,
405          `OutputQuantizedConnection`) Namespaces mapping attribute names
406          (identifiers of connections) to butler references defined in the
407          input `lsst.daf.butler.Quantum`
408      """
409      inputDatasetRefs = InputQuantizedConnection()
410      outputDatasetRefs = OutputQuantizedConnection()
411      # operate on a reference object and an iterable of names of class
412      # connection attributes
413      for refs, names in zip((inputDatasetRefs, outputDatasetRefs),
414                             (itertools.chain(self.inputs, self.prerequisiteInputs), self.outputs)):
415          # get a name of a class connection attribute
416          for attributeName in names:
417              # get the attribute identified by name
418              attribute = getattr(self, attributeName)
419              # Branch if the attribute dataset type is an input
420              if attribute.name in quantum.inputs:
421                  # Get the DatasetRefs
422                  quantumInputRefs = quantum.inputs[attribute.name]
423                  # if the dataset is marked to load deferred, wrap it in a
424                  # DeferredDatasetRef
425                  if attribute.deferLoad:
426                      quantumInputRefs = [DeferredDatasetRef(datasetRef=ref) for ref in quantumInputRefs]
427                  # Unpack arguments that are not marked multiples (list of
428                  # length one)
429                  if not attribute.multiple:
430                      if len(quantumInputRefs) > 1:
431                          raise ScalarError(
432                              f"Received multiple datasets "
433                              f"{', '.join(str(r.dataId) for r in quantumInputRefs)} "
434                              f"for scalar connection {attributeName} "
435                              f"({quantumInputRefs[0].datasetType.name}) "
436                              f"of quantum for {quantum.taskName} with data ID {quantum.dataId}."
437                          )
438                      if len(quantumInputRefs) == 0:
439                          continue
440                      quantumInputRefs = quantumInputRefs[0]
441                  # Add to the QuantizedConnection identifier
442                  setattr(refs, attributeName, quantumInputRefs)
443              # Branch if the attribute dataset type is an output
444              elif attribute.name in quantum.outputs:
445                  value = quantum.outputs[attribute.name]
446                  # Unpack arguments that are not marked multiples (list of
447                  # length one)
448                  if not attribute.multiple:
449                      value = value[0]
450                  # Add to the QuantizedConnection identifier
451                  setattr(refs, attributeName, value)
452              # Specified attribute is not in inputs or outputs; don't know how
453              # to handle, throw
454              else:
455                  raise ValueError(f"Attribute with name {attributeName} has no counterpart "
456                                   "in input quantum")
457      return inputDatasetRefs, outputDatasetRefs
458 
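
The two namespaces returned by buildDatasetRefs are normally consumed by the execution framework rather than by user code, but a hedged sketch of that use is shown below. The function name fetchInputs is hypothetical; connections, quantum, and butler are assumed to be supplied by the caller, deferLoad connections are ignored for brevity, and the real executor goes through ButlerQuantumContext rather than calling Butler.get directly.

import itertools

from lsst.daf.butler import Butler, Quantum
from lsst.pipe.base import PipelineTaskConnections


def fetchInputs(connections: PipelineTaskConnections, quantum: Quantum, butler: Butler):
    # Build the input/output namespaces documented above.
    inputRefs, outputRefs = connections.buildDatasetRefs(quantum)
    inputs = {}
    # ``inputs`` and ``prerequisiteInputs`` are frozensets of connection
    # attribute names (see Public Attributes above).
    for attributeName in itertools.chain(connections.inputs, connections.prerequisiteInputs):
        refs = getattr(inputRefs, attributeName, None)
        if refs is None:
            # The quantum contained no datasets for this connection.
            continue
        if isinstance(refs, list):
            # multiple=True connections remain lists of DatasetRefs.
            inputs[attributeName] = [butler.get(ref) for ref in refs]
        else:
            # Scalar connections were unpacked to a single DatasetRef.
            inputs[attributeName] = butler.get(refs)
    return inputs, outputRefs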

Member Data Documentation

◆ allConnections

lsst.pipe.base.connections.PipelineTaskConnections.allConnections
inherited

Definition at line 370 of file connections.py.

◆ config

lsst.pipe.base.connections.PipelineTaskConnections.config
inherited

Definition at line 375 of file connections.py.

◆ initInputs

lsst.pipe.base.connections.PipelineTaskConnections.initInputs
inherited

Definition at line 368 of file connections.py.

◆ initOutputs

lsst.pipe.base.connections.PipelineTaskConnections.initOutputs
inherited

Definition at line 369 of file connections.py.

◆ inputs

lsst.pipe.base.connections.PipelineTaskConnections.inputs
inherited

Definition at line 365 of file connections.py.

◆ outputs

lsst.pipe.base.connections.PipelineTaskConnections.outputs
inherited

Definition at line 367 of file connections.py.

◆ prerequisiteInputs

lsst.pipe.base.connections.PipelineTaskConnections.prerequisiteInputs
inherited

Definition at line 366 of file connections.py.


The documentation for this class was generated from the following file:
forcedPhotImage.py