LSST Data Management Base Package
lsst.cp.pipe.cpFlatNormTask.CpFlatNormalizationTask Class Reference
Inheritance diagram for lsst.cp.pipe.cpFlatNormTask.CpFlatNormalizationTask:

Public Member Functions

def runQuantum (self, butlerQC, inputRefs, outputRefs)
 
def run (self, inputMDs, inputDims, camera)
 
def measureScales (self, bgMatrix, bgCounts=None, iterations=10)
 

Static Public Attributes

 ConfigClass = CpFlatNormalizationTaskConfig
 

Detailed Description

Rescale merged flat frames to remove unequal screen illumination.

Definition at line 182 of file cpFlatNormTask.py.
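
As a rough orientation, the sketch below shows how the task could be invoked directly from Python once its inputs are in hand. It is illustrative only: metadataList, dimensionList, and camera are placeholder names for the amplifier metadata, data-dimension dictionaries, and camera geometry that the pipeline normally retrieves from a Butler repository and passes in through runQuantum().

    from lsst.cp.pipe.cpFlatNormTask import (CpFlatNormalizationTask,
                                             CpFlatNormalizationTaskConfig)

    # Placeholder inputs; in normal operation these are provided by the
    # pipeline middleware (see runQuantum below).
    config = CpFlatNormalizationTaskConfig()
    task = CpFlatNormalizationTask(config=config)

    results = task.run(inputMDs=metadataList,    # list of PropertyList metadata
                       inputDims=dimensionList,  # list of {'exposure': ..., 'detector': ...}
                       camera=camera)            # camera geometry
    scales = results.outputScales                # nested dict of normalization scales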

Member Function Documentation

◆ measureScales()

def lsst.cp.pipe.cpFlatNormTask.CpFlatNormalizationTask.measureScales(self, bgMatrix, bgCounts=None, iterations=10)
Convert backgrounds to exposure and detector components.

Parameters
----------
bgMatrix : `np.ndarray`, (nDetectors, nExposures)
    Input backgrounds indexed by detector (axis=0) and
    exposure (axis=1).
bgCounts : `np.ndarray`, (nDetectors, nExposures), optional
    Input pixel counts used in measuring bgMatrix, indexed
    identically.
iterations : `int`, optional
    Number of iterations to use in the decomposition.

Returns
-------
scaleResult : `lsst.pipe.base.Struct`
    Result struct containing fields:

    ``expScales``
        Output E vector of exposure-level scalings
        (`np.array`, (nExposures,)).
    ``detScales``
        Output G vector of detector-level scalings
        (`np.array`, (nDetectors,)).
    ``bgModel``
        Expected model bgMatrix values, calculated from E and G
        (`np.ndarray`, (nDetectors, nExposures)).

Notes
-----

The set of background measurements B[detector, exposure] of
flat frame data should be defined by an outer product of two
vectors, G[detector] and E[exposure].  The E vector represents
the total flux incident on the focal plane in each exposure.
In a perfect camera, this is simply the sum along the columns
of B (np.sum(B, axis=0)).

However, this simple model ignores differences in detector
gains, the vignetting of the detectors, and the illumination
pattern of the source lamp.  The G vector describes these
detector-dependent differences, which should be identical over
different exposures.  For a perfect lamp of unit total
intensity, this is simply the sum along the rows of B
(np.sum(B, axis=1)).  This algorithm divides G by the total
flux level to provide the relative (not absolute) scales
between detectors.

The algorithm here, taken from pipe_drivers/constructCalibs.py
and originally from Eugene Magnier/PanSTARRS [1]_, iteratively
solves for this decomposition starting from initial "perfect"
E and G vectors.  The operation is performed in log space,
which reduces the multiplications and divisions to additions
and subtractions.

References
----------
.. [1] https://svn.pan-starrs.ifa.hawaii.edu/trac/ipp/browser/trunk/psModules/src/detrend/pmFlatNormalize.c  # noqa: E501

Definition at line 316 of file cpFlatNormTask.py.

    def measureScales(self, bgMatrix, bgCounts=None, iterations=10):
        """Convert backgrounds to exposure and detector components."""
        numExps = bgMatrix.shape[1]
        numChips = bgMatrix.shape[0]
        if bgCounts is None:
            bgCounts = np.ones_like(bgMatrix)

        # Work in log space, masking non-finite measurements.
        logMeas = np.log(bgMatrix)
        logMeas = np.ma.masked_array(logMeas, ~np.isfinite(logMeas))
        logG = np.zeros(numChips)
        logE = np.array([np.average(logMeas[:, iexp] - logG,
                                    weights=bgCounts[:, iexp]) for iexp in range(numExps)])

        # Alternately solve for the detector (G) and exposure (E) components,
        # renormalizing G to the mean flux level each iteration.
        for iter in range(iterations):
            logG = np.array([np.average(logMeas[ichip, :] - logE,
                                        weights=bgCounts[ichip, :]) for ichip in range(numChips)])

            bad = np.isnan(logG)
            if np.any(bad):
                logG[bad] = logG[~bad].mean()

            logE = np.array([np.average(logMeas[:, iexp] - logG,
                                        weights=bgCounts[:, iexp]) for iexp in range(numExps)])
            fluxLevel = np.average(np.exp(logG), weights=np.sum(bgCounts, axis=1))

            logG -= np.log(fluxLevel)
            self.log.debug(f"ITER {iter}: Flux: {fluxLevel}")
            self.log.debug(f"Exps: {np.exp(logE)}")
            self.log.debug(f"{np.mean(logG)}")

        logE = np.array([np.average(logMeas[:, iexp] - logG,
                                    weights=bgCounts[:, iexp]) for iexp in range(numExps)])

        bgModel = np.exp(logE[np.newaxis, :] - logG[:, np.newaxis])
        return pipeBase.Struct(
            expScales=np.exp(logE),
            detScales=np.exp(logG),
            bgModel=bgModel,
        )
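
To make the decomposition concrete, the following self-contained sketch (not part of the task) builds a synthetic background matrix from known detector and exposure vectors and recovers the relative detector scales with a simplified version of the same alternating log-space solve: uniform weights, no masking, and a single normalization at the end.

    import numpy as np

    # Synthetic truth: 4 detector scales (G) and 3 exposure flux levels (E).
    trueG = np.array([0.8, 1.0, 1.2, 0.9])
    trueE = np.array([1000.0, 2000.0, 1500.0])
    rng = np.random.default_rng(42)
    bgMatrix = np.outer(trueG, trueE) * rng.normal(1.0, 0.01, size=(4, 3))

    logMeas = np.log(bgMatrix)
    logG = np.zeros(4)
    for _ in range(10):
        # Alternate between solving for the exposure and detector components.
        logE = (logMeas - logG[:, np.newaxis]).mean(axis=0)
        logG = (logMeas - logE[np.newaxis, :]).mean(axis=1)
    # Normalize G to give relative scales between detectors.
    logG -= np.log(np.exp(logG).mean())

    # Each ratio is ~1, i.e. the recovered scales match the inputs up to noise.
    print(np.exp(logG) / (trueG / trueG.mean()))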

◆ run()

def lsst.cp.pipe.cpFlatNormTask.CpFlatNormalizationTask.run(self, inputMDs, inputDims, camera)
Normalize FLAT exposures to a consistent level.

Parameters
----------
inputMDs : `list` [`lsst.daf.base.PropertyList`]
    Amplifier-level metadata used to construct scales.
inputDims : `list` [`dict`]
    List of dictionaries of input data dimensions/values.
    Each list entry should contain:

    ``"exposure"``
        exposure id value (`int`)
    ``"detector"``
        detector id value (`int`)
camera : `lsst.afw.cameraGeom.Camera`
    Camera geometry, used to look up the detectors and their
    amplifiers.

Returns
-------
results : `lsst.pipe.base.Struct`
    Result struct containing:

    ``outputScales``
        Dictionary of scales, indexed by detector id (`int`),
        amplifier name (`str`), and exposure id (`int`)
        (`dict` [`dict` [`dict` [`float`]]]).

Raises
------
KeyError
    Raised if the input dimensions do not contain detector and
    exposure, or if the metadata does not contain the expected
    statistic entry.

Definition at line 200 of file cpFlatNormTask.py.

    def run(self, inputMDs, inputDims, camera):
        """Normalize FLAT exposures to a consistent level."""
        expSet = sorted(set([d['exposure'] for d in inputDims]))
        detSet = sorted(set([d['detector'] for d in inputDims]))

        expMap = {exposureId: idx for idx, exposureId in enumerate(expSet)}
        detMap = {detectorId: idx for idx, detectorId in enumerate(detSet)}

        # Allocate the background matrix and matching pixel-count weights,
        # with one row per detector (or per amplifier).
        nExp = len(expSet)
        nDet = len(detSet)
        if self.config.level == 'DETECTOR':
            bgMatrix = np.zeros((nDet, nExp))
            bgCounts = np.ones((nDet, nExp))
        elif self.config.level == 'AMP':
            nAmp = len(camera[detSet[0]])
            bgMatrix = np.zeros((nDet * nAmp, nExp))
            bgCounts = np.ones((nDet * nAmp, nExp))

        # Fill the matrices from the per-input metadata.
        for inMetadata, inDimensions in zip(inputMDs, inputDims):
            try:
                exposureId = inDimensions['exposure']
                detectorId = inDimensions['detector']
            except Exception as e:
                raise KeyError("Cannot find expected dimensions in %s" % (inDimensions, )) from e

            if self.config.level == 'DETECTOR':
                detIdx = detMap[detectorId]
                expIdx = expMap[exposureId]
                try:
                    value = inMetadata.get('DETECTOR_MEDIAN')
                    count = inMetadata.get('DETECTOR_N')
                except Exception as e:
                    raise KeyError("Cannot read expected metadata string.") from e

                if np.isfinite(value):
                    bgMatrix[detIdx][expIdx] = value
                    bgCounts[detIdx][expIdx] = count
                else:
                    bgMatrix[detIdx][expIdx] = np.nan
                    bgCounts[detIdx][expIdx] = 1

            elif self.config.level == 'AMP':
                detector = camera[detectorId]
                nAmp = len(detector)

                detIdx = detMap[detectorId] * nAmp
                expIdx = expMap[exposureId]

                for ampIdx, amp in enumerate(detector):
                    try:
                        value = inMetadata.get(f'AMP_MEDIAN_{ampIdx}')
                        count = inMetadata.get(f'AMP_N_{ampIdx}')
                    except Exception as e:
                        raise KeyError("Cannot read expected metadata string.") from e

                    detAmpIdx = detIdx + ampIdx
                    if np.isfinite(value):
                        bgMatrix[detAmpIdx][expIdx] = value
                        bgCounts[detAmpIdx][expIdx] = count
                    else:
                        bgMatrix[detAmpIdx][expIdx] = np.nan
                        bgCounts[detAmpIdx][expIdx] = 1

        scaleResult = self.measureScales(bgMatrix, bgCounts, iterations=self.config.scaleMaxIter)
        expScales = scaleResult.expScales
        detScales = scaleResult.detScales

        outputScales = defaultdict(lambda: defaultdict(lambda: defaultdict(lambda: defaultdict(float))))

        # Repackage the scale vectors into nested dictionaries keyed by
        # detector id, amplifier name, and exposure id.  Note that the
        # enumerated "detIdx"/"expIdx" here index the "detScales" and
        # "expScales" arrays.
        if self.config.level == 'DETECTOR':
            for detIdx, det in enumerate(detSet):
                for amp in camera[det]:
                    for expIdx, exp in enumerate(expSet):
                        outputScales['expScale'][det][amp.getName()][exp] = expScales[expIdx].tolist()
                outputScales['detScale'][det] = detScales[detIdx].tolist()
        elif self.config.level == 'AMP':
            for detIdx, det in enumerate(detSet):
                for ampIdx, amp in enumerate(camera[det]):
                    for expIdx, exp in enumerate(expSet):
                        outputScales['expScale'][det][amp.getName()][exp] = expScales[expIdx].tolist()
                    detAmpIdx = detIdx * nAmp + ampIdx
                    outputScales['detScale'][det][amp.getName()] = detScales[detAmpIdx].tolist()

        return pipeBase.Struct(
            outputScales=ddict2dict(outputScales),
        )
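
For reference, the nested outputScales dictionary assembled above has the shape sketched below for level='DETECTOR'. The detector id, amplifier name, and exposure id shown are placeholders; the scale values are entries of expScales and detScales cast to floats.

    outputScales = {
        'expScale': {
            42: {                          # detector id
                'C00': {                   # amplifier name
                    2021081700123: 0.98,   # exposure id -> exposure-level scale
                    2021081700124: 1.02,
                },
            },
        },
        'detScale': {
            42: 1.05,                      # detector id -> detector-level scale
        },
    }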

◆ runQuantum()

def lsst.cp.pipe.cpFlatNormTask.CpFlatNormalizationTask.runQuantum(self, butlerQC, inputRefs, outputRefs)

Definition at line 189 of file cpFlatNormTask.py.

    def runQuantum(self, butlerQC, inputRefs, outputRefs):
        inputs = butlerQC.get(inputRefs)

        # Use the dimensions of the inputs for generating
        # output scales.
        dimensions = [exp.dataId.byName() for exp in inputRefs.inputMDs]
        inputs['inputDims'] = dimensions

        outputs = self.run(**inputs)
        butlerQC.put(outputs, outputRefs)
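
Each entry of the dimensions list built here is a plain dictionary of data ID values, of which run() uses only the "exposure" and "detector" keys. An illustrative entry (the instrument name and id values are placeholders) might look like:

    # One element of inputDims, as produced by exp.dataId.byName().
    {'instrument': 'LATISS', 'detector': 0, 'exposure': 2021081700123}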

Member Data Documentation

◆ ConfigClass

lsst.cp.pipe.cpFlatNormTask.CpFlatNormalizationTask.ConfigClass = CpFlatNormalizationTaskConfig
static

Definition at line 186 of file cpFlatNormTask.py.


The documentation for this class was generated from the following file:

cpFlatNormTask.py