LSST Data Management Base Package
lsst.cp.pipe.ptc.cpSolvePtcTask.PhotonTransferCurveSolveTask Class Reference
Inheritance diagram for lsst.cp.pipe.ptc.cpSolvePtcTask.PhotonTransferCurveSolveTask:

Public Member Functions

def runQuantum (self, butlerQC, inputRefs, outputRefs)
 
def run (self, inputCovariances, camera=None, inputExpList=None)
 
def fitCovariancesAstier (self, dataset)
 
def getOutputPtcDataCovAstier (self, dataset, covFits, covFitsNoB)
 
def fitPtc (self, dataset)
 
def fillBadAmp (self, dataset, ptcFitType, ampName)
 

Static Public Attributes

 ConfigClass = PhotonTransferCurveSolveConfig
 

Detailed Description

Task to fit the PTC from flat covariances.
This task assembles the list of individual PTC datasets produced
by the extract stage (`PhotonTransferCurveExtractTask`) into one single final
PTC dataset. The task fits the measured (co)variances to a polynomial model or
to the models described in equations 16 and 20 of Astier+19
(referred to as `POLYNOMIAL`, `EXPAPPROXIMATION`, and `FULLCOVARIANCE`,
respectively, in the configuration options of the task). Parameters
of interest such as the gain and noise are derived from the fits.

Astier+19: "The Shape of the Photon Transfer Curve
of CCD sensors", arXiv:1905.08677

Definition at line 158 of file cpSolvePtcTask.py.
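
For orientation, the two simpler model forms can be written down directly. The sketch below is illustrative only: it assumes the Astier+19 Eq. 16 parameterization and the `[a00, gain, noise^2]` parameter ordering used in `fitPtc` below, and is not the task's own `funcAstier`/`funcPolynomial` implementation.

import numpy as np

def expApproximation(pars, mu):
    # EXPAPPROXIMATION (Astier+19, Eq. 16): variance of a flat pair as a function
    # of its mean signal mu; pars = [a00, gain, noiseSquared], matching the
    # initial-parameter labels used in fitPtc below.
    a00, gain, noiseSquared = pars
    return (0.5 / (a00 * gain * gain)) * (np.exp(2.0 * a00 * mu * gain) - 1.0) \
        + noiseSquared / (gain * gain)

def polynomial(pars, mu):
    # POLYNOMIAL: var(mu) = c0 + c1*mu + c2*mu**2 + ... up to polynomialFitDegree.
    return sum(c * mu**i for i, c in enumerate(pars))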

Member Function Documentation

◆ fillBadAmp()

def lsst.cp.pipe.ptc.cpSolvePtcTask.PhotonTransferCurveSolveTask.fillBadAmp (   self,
  dataset,
  ptcFitType,
  ampName 
)
Fill the dataset with NaNs if there are not enough good points.

Parameters
----------
dataset : `lsst.ip.isr.ptcDataset.PhotonTransferCurveDataset`
    The dataset containing the means, variances and exposure times.
ptcFitType : `str`
    Fit a 'POLYNOMIAL' (degree: 'polynomialFitDegree') or
    'EXPAPPROXIMATION' (Eq. 16 of Astier+19) to the PTC.
ampName : `str`
    Amplifier name.

Definition at line 678 of file cpSolvePtcTask.py.

678  def fillBadAmp(self, dataset, ptcFitType, ampName):
679  """Fill the dataset with NaNs if there are not enough good points.
680 
681  Parameters
682  ----------
683  dataset : `lsst.ip.isr.ptcDataset.PhotonTransferCurveDataset`
684  The dataset containing the means, variances and exposure times.
685  ptcFitType : `str`
686  Fit a 'POLYNOMIAL' (degree: 'polynomialFitDegree') or
687  'EXPAPPROXIMATION' (Eq. 16 of Astier+19) to the PTC.
688  ampName : `str`
689  Amplifier name.
690  """
691  dataset.badAmps.append(ampName)
692  dataset.expIdMask[ampName] = np.repeat(False, len(dataset.rawExpTimes[ampName]))
693  dataset.gain[ampName] = np.nan
694  dataset.gainErr[ampName] = np.nan
695  dataset.noise[ampName] = np.nan
696  dataset.noiseErr[ampName] = np.nan
697  dataset.ptcFitPars[ampName] = (np.repeat(np.nan, self.config.polynomialFitDegree + 1) if
698  ptcFitType in ["POLYNOMIAL", ] else np.repeat(np.nan, 3))
699  dataset.ptcFitParsError[ampName] = (np.repeat(np.nan, self.config.polynomialFitDegree + 1) if
700  ptcFitType in ["POLYNOMIAL", ] else np.repeat(np.nan, 3))
701  dataset.ptcFitChiSq[ampName] = np.nan
702  dataset.finalVars[ampName] = np.repeat(np.nan, len(dataset.rawExpTimes[ampName]))
703  dataset.finalModelVars[ampName] = np.repeat(np.nan, len(dataset.rawExpTimes[ampName]))
704  dataset.finalMeans[ampName] = np.repeat(np.nan, len(dataset.rawExpTimes[ampName]))
705 
706  return
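
A minimal usage sketch follows. It assumes the import paths shown, that the task can be built with its default config, and that the dataset constructor arguments mirror those used in `run` below; the amplifier names and fit type are placeholders.

from lsst.ip.isr import PhotonTransferCurveDataset
from lsst.cp.pipe.ptc.cpSolvePtcTask import PhotonTransferCurveSolveTask

task = PhotonTransferCurveSolveTask()
dataset = PhotonTransferCurveDataset(["C00", "C01"], "EXPAPPROXIMATION", 8)
task.fillBadAmp(dataset, "EXPAPPROXIMATION", "C01")
print(dataset.badAmps)   # ['C01']; gain, noise, etc. for C01 are now NaN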

◆ fitCovariancesAstier()

def lsst.cp.pipe.ptc.cpSolvePtcTask.PhotonTransferCurveSolveTask.fitCovariancesAstier (   self,
  dataset 
)
Fit measured flat covariances to full model in Astier+19.

Parameters
----------
dataset : `lsst.ip.isr.ptcDataset.PhotonTransferCurveDataset`
    The dataset containing information such as the means, (co)variances,
    and exposure times.

Returns
-------
dataset : `lsst.ip.isr.ptcDataset.PhotonTransferCurveDataset`
    This is the same dataset as the input parameter; however, it has been modified
    to include information such as the fit vectors and the fit parameters. See
    the class `PhotonTransferCurveDataset`.

Definition at line 280 of file cpSolvePtcTask.py.

280  def fitCovariancesAstier(self, dataset):
281  """Fit measured flat covariances to full model in Astier+19.
282 
283  Parameters
284  ----------
285  dataset : `lsst.ip.isr.ptcDataset.PhotonTransferCurveDataset`
286  The dataset containing information such as the means, (co)variances,
287  and exposure times.
288 
289  Returns
290  -------
291  dataset : `lsst.ip.isr.ptcDataset.PhotonTransferCurveDataset`
292  This is the same dataset as the input parameter; however, it has been modified
293  to include information such as the fit vectors and the fit parameters. See
294  the class `PhotonTransferCurveDataset`.
295  """
296 
297  covFits, covFitsNoB = fitDataFullCovariance(dataset)
298  dataset = self.getOutputPtcDataCovAstier(dataset, covFits, covFitsNoB)
299 
300  return dataset
301 
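
After a `FULLCOVARIANCE` fit, the per-amplifier results populated via `getOutputPtcDataCovAstier` can be read back from the returned dataset. A hedged sketch (the variable `datasetPtc` stands for a dataset assembled as in `run` below, and `task` for an instance of this task):

fitted = task.fitCovariancesAstier(datasetPtc)
for amp in fitted.ampNames:
    aMat = fitted.aMatrix[amp]              # 'a' coefficients (Astier+19, Eq. 20)
    bMat = fitted.bMatrix[amp]              # 'b' coefficients (Astier+19, Eq. 20)
    model = fitted.covariancesModel[amp]    # modeled covariances, one matrix per flat pair
    print(amp, fitted.gain[amp], fitted.noise[amp])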

◆ fitPtc()

def lsst.cp.pipe.ptc.cpSolvePtcTask.PhotonTransferCurveSolveTask.fitPtc (   self,
  dataset 
)
Fit the photon transfer curve to a polynomial or to the Astier+19 approximation.

Fit the photon transfer curve with either a polynomial of the order
specified in the task config, or using the exponential approximation
in Astier+19 (Eq. 16).

Sigma clipping is performed iteratively for the fit, as well as an
initial clipping of data points that are more than
config.initialNonLinearityExclusionThreshold away from lying on a
straight line. This other step is necessary because the photon transfer
curve turns over catastrophically at very high flux (because saturation
drops the variance to ~0) and these far outliers cause the initial fit
to fail, meaning the sigma cannot be calculated to perform the
sigma-clipping.

Parameters
----------
dataset : `lsst.ip.isr.ptcDataset.PhotonTransferCurveDataset`
    The dataset containing the means, variances and exposure times.

Returns
-------
dataset : `lsst.ip.isr.ptcDataset.PhotonTransferCurveDataset`
    This is the same dataset as the input parameter; however, it has been modified
    to include information such as the fit vectors and the fit parameters. See
    the class `PhotonTransferCurveDataset`.

Raises
------
RuntimeError:
    Raised if dataset.ptcFitType is None or empty.

Definition at line 493 of file cpSolvePtcTask.py.
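
The iterative rejection described above can be summarized in a standalone form. The following is a simplified sketch of the same idea (hypothetical helper, not the task's code), using the same `scipy.optimize.least_squares` call and mask-ANDing as the listing below; it assumes the variances have already been made zero-safe.

import numpy as np
from scipy.optimize import least_squares

def clipAndFit(mu, var, model, p0, bounds, sigmaCut, maxIter):
    # Refit while rejecting points whose normalized residual exceeds sigmaCut.
    mask = np.ones(len(mu), dtype=bool)
    pars = np.asarray(p0, dtype=float)
    for _ in range(maxIter):
        res = least_squares(lambda p, x, y: model(p, x) - y, pars,
                            bounds=bounds, args=(mu[mask], var[mask]))
        pars = res.x
        resid = (var - model(pars, mu)) / np.sqrt(var)
        mask &= np.abs(resid) < sigmaCut   # once a point is rejected it stays rejected
        if not mask.any():
            break                          # everything rejected: caller marks the amp bad
    return pars, mask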

493  def fitPtc(self, dataset):
494  """Fit the photon transfer curve to a polynomial or to the Astier+19 approximation.
495 
496  Fit the photon transfer curve with either a polynomial of the order
497  specified in the task config, or using the exponential approximation
498  in Astier+19 (Eq. 16).
499 
500  Sigma clipping is performed iteratively for the fit, as well as an
501  initial clipping of data points that are more than
502  config.initialNonLinearityExclusionThreshold away from lying on a
503  straight line. This other step is necessary because the photon transfer
504  curve turns over catastrophically at very high flux (because saturation
505  drops the variance to ~0) and these far outliers cause the initial fit
506  to fail, meaning the sigma cannot be calculated to perform the
507  sigma-clipping.
508 
509  Parameters
510  ----------
511  dataset : `lsst.ip.isr.ptcDataset.PhotonTransferCurveDataset`
512  The dataset containing the means, variances and exposure times.
513 
514  Returns
515  -------
516  dataset : `lsst.ip.isr.ptcDataset.PhotonTransferCurveDataset`
517  This is the same dataset as the input parameter; however, it has been modified
518  to include information such as the fit vectors and the fit parameters. See
519  the class `PhotonTransferCurveDataset`.
520 
521  Raises
522  ------
523  RuntimeError:
524  Raised if dataset.ptcFitType is None or empty.
525  """
526  if dataset.ptcFitType:
527  ptcFitType = dataset.ptcFitType
528  else:
529  raise RuntimeError("ptcFitType is None or empty in PTC dataset.")
530  matrixSide = self.config.maximumRangeCovariancesAstier
531  nanMatrix = np.empty((matrixSide, matrixSide))
532  nanMatrix[:] = np.nan
533 
534  for amp in dataset.ampNames:
535  lenInputTimes = len(dataset.rawExpTimes[amp])
536  listNanMatrix = np.empty((lenInputTimes, matrixSide, matrixSide))
537  listNanMatrix[:] = np.nan
538 
539  dataset.covariancesModel[amp] = listNanMatrix
540  dataset.aMatrix[amp] = nanMatrix
541  dataset.bMatrix[amp] = nanMatrix
542  dataset.covariancesModelNoB[amp] = listNanMatrix
543  dataset.aMatrixNoB[amp] = nanMatrix
544 
545  def errFunc(p, x, y):
546  return ptcFunc(p, x) - y
547 
548  sigmaCutPtcOutliers = self.config.sigmaCutPtcOutliers
549  maxIterationsPtcOutliers = self.config.maxIterationsPtcOutliers
550 
551  for i, ampName in enumerate(dataset.ampNames):
552  timeVecOriginal = np.ravel(np.array(dataset.rawExpTimes[ampName]))
553  meanVecOriginal = np.ravel(np.array(dataset.rawMeans[ampName]))
554  varVecOriginal = np.ravel(np.array(dataset.rawVars[ampName]))
555  varVecOriginal = self._makeZeroSafe(varVecOriginal)
556 
557  goodPoints = self._getInitialGoodPoints(meanVecOriginal, varVecOriginal,
558  self.config.initialNonLinearityExclusionThresholdPositive,
559  self.config.initialNonLinearityExclusionThresholdNegative,
560  self.config.minMeanRatioTest,
561  self.config.minVarPivotSearch)
562  if not (goodPoints.any()):
563  msg = (f"SERIOUS: All points in goodPoints: {goodPoints} are bad. "
564  f"Setting {ampName} to BAD.")
565  self.log.warn(msg)
566  # Fill entries with NaNs
567  self.fillBadAmp(dataset, ptcFitType, ampName)
568  continue
569 
570  mask = goodPoints
571 
572  if ptcFitType == 'EXPAPPROXIMATION':
573  ptcFunc = funcAstier
574  parsIniPtc = [-1e-9, 1.0, 10.] # a00, gain, noise^2
575  # lowers and uppers obtained from BOT data studies by C. Lage (UC Davis, 11/2020).
576  bounds = self._boundsForAstier(parsIniPtc, lowers=[-1e-4, 0.5, -2000],
577  uppers=[1e-4, 2.5, 2000])
578  if ptcFitType == 'POLYNOMIAL':
579  ptcFunc = funcPolynomial
580  parsIniPtc = self._initialParsForPolynomial(self.config.polynomialFitDegree + 1)
581  bounds = self._boundsForPolynomial(parsIniPtc)
582 
583  # Before bootstrap fit, do an iterative fit to get rid of outliers
584  count = 1
585  while count <= maxIterationsPtcOutliers:
586  # Note that application of the mask actually shrinks the array
587  # to size rather than setting elements to zero (as we want) so
588  # always update mask itself and re-apply to the original data
589  meanTempVec = meanVecOriginal[mask]
590  varTempVec = varVecOriginal[mask]
591  res = least_squares(errFunc, parsIniPtc, bounds=bounds, args=(meanTempVec, varTempVec))
592  pars = res.x
593 
594  # change this to the original from the temp because the masks are ANDed
595  # meaning once a point is masked it's always masked, and the masks must
596  # always be the same length for broadcasting
597  sigResids = (varVecOriginal - ptcFunc(pars, meanVecOriginal))/np.sqrt(varVecOriginal)
598  newMask = np.array([True if np.abs(r) < sigmaCutPtcOutliers else False for r in sigResids])
599  mask = mask & newMask
600  if not (mask.any() and newMask.any()):
601  msg = (f"SERIOUS: All points in either mask: {mask} or newMask: {newMask} are bad. "
602  f"Setting {ampName} to BAD.")
603  self.log.warn(msg)
604  # Fill entries with NaNs
605  self.fillBadAmp(dataset, ptcFitType, ampName)
606  break
607  nDroppedTotal = Counter(mask)[False]
608  self.log.debug(f"Iteration {count}: discarded {nDroppedTotal} points in total for {ampName}")
609  count += 1
610  # objects should never shrink
611  assert (len(mask) == len(timeVecOriginal) == len(meanVecOriginal) == len(varVecOriginal))
612  if not (mask.any() and newMask.any()):
613  continue
614  dataset.expIdMask[ampName] = np.array(dataset.expIdMask[ampName])
615  # store the final mask
616  if len(dataset.expIdMask[ampName]):
617  dataset.expIdMask[ampName] &= mask # bitwise_and if there is already a mask
618  else:
619  dataset.expIdMask[ampName] = mask
620  parsIniPtc = pars
621  meanVecFinal = meanVecOriginal[mask]
622  varVecFinal = varVecOriginal[mask]
623 
624  if Counter(mask)[False] > 0:
625  self.log.info((f"Number of points discarded in PTC of amplifier {ampName}:"
626  f" {Counter(mask)[False]} out of {len(meanVecOriginal)}"))
627 
628  if (len(meanVecFinal) < len(parsIniPtc)):
629  msg = (f"SERIOUS: Not enough data points ({len(meanVecFinal)}) compared to the number of "
630  f"parameters of the PTC model({len(parsIniPtc)}). Setting {ampName} to BAD.")
631  self.log.warn(msg)
632  # Fill entries with NaNs
633  self.fillBadAmp(dataset, ptcFitType, ampName)
634  continue
635  # Fit the PTC
636  if self.config.doFitBootstrap:
637  parsFit, parsFitErr, reducedChiSqPtc = fitBootstrap(parsIniPtc, meanVecFinal,
638  varVecFinal, ptcFunc,
639  weightsY=1./np.sqrt(varVecFinal))
640  else:
641  parsFit, parsFitErr, reducedChiSqPtc = fitLeastSq(parsIniPtc, meanVecFinal,
642  varVecFinal, ptcFunc,
643  weightsY=1./np.sqrt(varVecFinal))
644  dataset.ptcFitPars[ampName] = parsFit
645  dataset.ptcFitParsError[ampName] = parsFitErr
646  dataset.ptcFitChiSq[ampName] = reducedChiSqPtc
647  # Masked variances (measured and modeled) and means. Need to pad the array so astropy.Table does
648  # not crash (the mask may vary per amp).
649  padLength = len(dataset.rawExpTimes[ampName]) - len(varVecFinal)
650  dataset.finalVars[ampName] = np.pad(varVecFinal, (0, padLength), 'constant',
651  constant_values=np.nan)
652  dataset.finalModelVars[ampName] = np.pad(ptcFunc(parsFit, meanVecFinal), (0, padLength),
653  'constant', constant_values=np.nan)
654  dataset.finalMeans[ampName] = np.pad(meanVecFinal, (0, padLength), 'constant',
655  constant_values=np.nan)
656  if ptcFitType == 'EXPAPPROXIMATION':
657  ptcGain = parsFit[1]
658  ptcGainErr = parsFitErr[1]
659  ptcNoise = np.sqrt(np.fabs(parsFit[2]))
660  ptcNoiseErr = 0.5*(parsFitErr[2]/np.fabs(parsFit[2]))*np.sqrt(np.fabs(parsFit[2]))
661  if ptcFitType == 'POLYNOMIAL':
662  ptcGain = 1./parsFit[1]
663  ptcGainErr = np.fabs(1./parsFit[1])*(parsFitErr[1]/parsFit[1])
664  ptcNoise = np.sqrt(np.fabs(parsFit[0]))*ptcGain
665  ptcNoiseErr = (0.5*(parsFitErr[0]/np.fabs(parsFit[0]))*(np.sqrt(np.fabs(parsFit[0]))))*ptcGain
666  dataset.gain[ampName] = ptcGain
667  dataset.gainErr[ampName] = ptcGainErr
668  dataset.noise[ampName] = ptcNoise
669  dataset.noiseErr[ampName] = ptcNoiseErr
670 
671  if not len(dataset.ptcFitType) == 0:
672  dataset.ptcFitType = ptcFitType
673  if len(dataset.badAmps) == 0:
674  dataset.badAmps = np.repeat(np.nan, len(list(dataset.rawExpTimes.values())[0]))
675 
676  return dataset
677 
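
Numerically, the gain and noise derivations at the end of the listing reduce to a few lines. The values below are invented purely for illustration; units follow the listing's convention of means and variances in ADU and gain in e-/ADU.

import numpy as np

# POLYNOMIAL fit: var(mu) ~ c0 + c1*mu + ...  ->  gain = 1/c1, noise = sqrt(|c0|)*gain
c0, c1 = 25.0, 0.66
gainPoly = 1.0 / c1                       # ~1.52 e-/ADU
noisePoly = np.sqrt(abs(c0)) * gainPoly   # ~7.6

# EXPAPPROXIMATION fit: pars = [a00, gain, noise^2]  ->  gain is read off directly,
# noise = sqrt(|pars[2]|)
a00, gainExp, noiseSq = -1.2e-6, 1.50, 55.0
noiseExp = np.sqrt(abs(noiseSq))          # ~7.4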

◆ getOutputPtcDataCovAstier()

def lsst.cp.pipe.ptc.cpSolvePtcTask.PhotonTransferCurveSolveTask.getOutputPtcDataCovAstier (   self,
  dataset,
  covFits,
  covFitsNoB 
)
Get output data for PhotonTransferCurveCovAstierDataset from CovFit objects.

Parameters
----------
dataset : `lsst.ip.isr.ptcDataset.PhotonTransferCurveDataset`
    The dataset containing information such as the means, variances and exposure times.
covFits: `dict`
    Dictionary of CovFit objects, with amp names as keys.
covFitsNoB : `dict`
     Dictionary of CovFit objects, with amp names as keys, and 'b=0' in Eq. 20 of Astier+19.

Returns
-------
dataset : `lsst.ip.isr.ptcDataset.PhotonTransferCurveDataset`
    This is the same dataset as the input parameter; however, it has been modified
    to include extra information such as the 1D mask array, gains, readout noise, measured signal,
    measured variance, modeled variance, and the a and b coefficient matrices (see Astier+19) per amplifier.
    See the class `PhotonTransferCurveDataset`.

Definition at line 302 of file cpSolvePtcTask.py.

302  def getOutputPtcDataCovAstier(self, dataset, covFits, covFitsNoB):
303  """Get output data for PhotonTransferCurveCovAstierDataset from CovFit objects.
304 
305  Parameters
306  ----------
307  dataset : `lsst.ip.isr.ptcDataset.PhotonTransferCurveDataset`
308  The dataset containing information such as the means, variances and exposure times.
309  covFits: `dict`
310  Dictionary of CovFit objects, with amp names as keys.
311  covFitsNoB : `dict`
312  Dictionary of CovFit objects, with amp names as keys, and 'b=0' in Eq. 20 of Astier+19.
313 
314  Returns
315  -------
316  dataset : `lsst.ip.isr.ptcDataset.PhotonTransferCurveDataset`
317  This is the same dataset as the input parameter; however, it has been modified
318  to include extra information such as the 1D mask array, gains, readout noise, measured signal,
319  measured variance, modeled variance, and the a and b coefficient matrices (see Astier+19) per amplifier.
320  See the class `PhotonTransferCurveDataset`.
321  """
322  assert(len(covFits) == len(covFitsNoB))
323 
324  for i, amp in enumerate(dataset.ampNames):
325  lenInputTimes = len(dataset.rawExpTimes[amp])
326  # Not used when ptcFitType is 'FULLCOVARIANCE'
327  dataset.ptcFitPars[amp] = [np.nan]
328  dataset.ptcFitParsError[amp] = [np.nan]
329  dataset.ptcFitChiSq[amp] = np.nan
330  if amp in covFits:
331  fit = covFits[amp]
332  fitNoB = covFitsNoB[amp]
333  # Save full covariances, covariances models, and their weights
334  # dataset.expIdMask is already full
335  dataset.covariances[amp] = fit.cov
336  dataset.covariancesModel[amp] = fit.evalCovModel()
337  dataset.covariancesSqrtWeights[amp] = fit.sqrtW
338  dataset.aMatrix[amp] = fit.getA()
339  dataset.bMatrix[amp] = fit.getB()
340  dataset.covariancesModelNoB[amp] = fitNoB.evalCovModel()
341  dataset.aMatrixNoB[amp] = fitNoB.getA()
342 
343  (meanVecFinal, varVecFinal, varVecModel,
344  wc, varMask) = fit.getFitData(0, 0, divideByMu=False)
345  gain = fit.getGain()
346 
347  dataset.gain[amp] = gain
348  dataset.gainErr[amp] = fit.getGainErr()
349  dataset.noise[amp] = np.sqrt(fit.getRon())
350  dataset.noiseErr[amp] = fit.getRonErr()
351  dataset.finalVars[amp] = varVecFinal
352  dataset.finalModelVars[amp] = varVecModel
353  dataset.finalMeans[amp] = meanVecFinal
354 
355  else:
356  # Bad amp
357  # Entries need to have proper dimensions so read/write with astropy.Table works.
358  matrixSide = self.config.maximumRangeCovariancesAstier
359  nanMatrix = np.full((matrixSide, matrixSide), np.nan)
360  listNanMatrix = np.full((lenInputTimes, matrixSide, matrixSide), np.nan)
361 
362  dataset.covariances[amp] = listNanMatrix
363  dataset.covariancesModel[amp] = listNanMatrix
364  dataset.covariancesSqrtWeights[amp] = listNanMatrix
365  dataset.aMatrix[amp] = nanMatrix
366  dataset.bMatrix[amp] = nanMatrix
367  dataset.covariancesModelNoB[amp] = listNanMatrix
368  dataset.aMatrixNoB[amp] = nanMatrix
369 
370  dataset.expIdMask[amp] = np.repeat(np.nan, lenInputTimes)
371  dataset.gain[amp] = np.nan
372  dataset.gainErr[amp] = np.nan
373  dataset.noise[amp] = np.nan
374  dataset.noiseErr[amp] = np.nan
375  dataset.finalVars[amp] = np.repeat(np.nan, lenInputTimes)
376  dataset.finalModelVars[amp] = np.repeat(np.nan, lenInputTimes)
377  dataset.finalMeans[amp] = np.repeat(np.nan, lenInputTimes)
378 
379  return dataset
380 
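
Because bad amplifiers come back NaN-filled with the shapes used above, downstream code can filter rather than special-case them. A short sketch (here `ptc` stands for a previously returned dataset; that variable and the filtering pattern are assumptions, not part of this task):

import numpy as np

for amp in ptc.ampNames:
    if amp in ptc.badAmps or not np.isfinite(ptc.gain[amp]):
        continue                                   # skip NaN-filled amplifiers
    good = np.isfinite(ptc.finalMeans[amp])        # NaN padding marks masked points
    print(amp, ptc.gain[amp], ptc.noise[amp], int(good.sum()), "good points")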

◆ run()

def lsst.cp.pipe.ptc.cpSolvePtcTask.PhotonTransferCurveSolveTask.run (   self,
  inputCovariances,
  camera = None,
  inputExpList = None 
)
Fit measured covariances to different models.

Parameters
----------
inputCovariances : `list` [`lsst.ip.isr.PhotonTransferCurveDataset`]
    List of lsst.ip.isr.PhotonTransferCurveDataset datasets.

camera : `lsst.afw.cameraGeom.Camera`, optional
    Input camera.

inputExpList : `list` [`~lsst.afw.image.exposure.exposure.ExposureF`], optional
    List of exposures.

Returns
-------
results : `lsst.pipe.base.Struct`
    The results struct containing:
    ``outputPtcDataset`` : `lsst.ip.isr.PhotonTransferCurveDataset`
        Final PTC dataset, containing information such as the means, variances,
        and exposure times.

Definition at line 191 of file cpSolvePtcTask.py.
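
A hedged end-to-end sketch of driving this method outside the pipeline framework follows. The import paths, the ability to set ``ptcFitType`` directly on the config, and the ``partialDatasets`` and ``camera`` variables (from the extract step and the butler, respectively) are assumptions.

from lsst.cp.pipe.ptc.cpSolvePtcTask import (PhotonTransferCurveSolveTask,
                                             PhotonTransferCurveSolveConfig)

config = PhotonTransferCurveSolveConfig()
config.ptcFitType = "EXPAPPROXIMATION"        # or "POLYNOMIAL" / "FULLCOVARIANCE"
task = PhotonTransferCurveSolveTask(config=config)

# partialDatasets: list of per-pair PhotonTransferCurveDataset objects from the
# extract step; camera: lsst.afw.cameraGeom.Camera (both obtained elsewhere).
results = task.run(inputCovariances=partialDatasets, camera=camera)
ptc = results.outputPtcDataset                # the assembled, fitted PTC dataset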

191  def run(self, inputCovariances, camera=None, inputExpList=None):
192  """Fit measured covariances to different models.
193 
194  Parameters
195  ----------
196  inputCovariances : `list` [`lsst.ip.isr.PhotonTransferCurveDataset`]
197  List of lsst.ip.isr.PhotonTransferCurveDataset datasets.
198 
199  camera : `lsst.afw.cameraGeom.Camera`, optional
200  Input camera.
201 
202  inputExpList : `list` [`~lsst.afw.image.exposure.exposure.ExposureF`], optional
203  List of exposures.
204 
205  Returns
206  -------
207  results : `lsst.pipe.base.Struct`
208  The results struct containing:
209  ``outputPtcDataset`` : `lsst.ip.isr.PhotonTransferCurveDataset`
210  Final PTC dataset, containing information such as the means, variances,
211  and exposure times.
212  """
213  # Assemble partial PTC datasets into a single dataset.
214  ampNames = np.unique(inputCovariances[0].ampNames)
215  datasetPtc = PhotonTransferCurveDataset(ampNames, self.config.ptcFitType,
216  self.config.maximumRangeCovariancesAstier)
217  for partialPtcDataset in inputCovariances:
218  if partialPtcDataset.ptcFitType == 'DUMMY':
219  continue
220  for ampName in ampNames:
221  datasetPtc.inputExpIdPairs[ampName].append(partialPtcDataset.inputExpIdPairs[ampName])
222  if type(partialPtcDataset.rawExpTimes[ampName]) is list:
223  datasetPtc.rawExpTimes[ampName].append(partialPtcDataset.rawExpTimes[ampName][0])
224  else:
225  datasetPtc.rawExpTimes[ampName].append(partialPtcDataset.rawExpTimes[ampName])
226  if type(partialPtcDataset.rawMeans[ampName]) is list:
227  datasetPtc.rawMeans[ampName].append(partialPtcDataset.rawMeans[ampName][0])
228  else:
229  datasetPtc.rawMeans[ampName].append(partialPtcDataset.rawMeans[ampName])
230  if type(partialPtcDataset.rawVars[ampName]) is list:
231  datasetPtc.rawVars[ampName].append(partialPtcDataset.rawVars[ampName][0])
232  else:
233  datasetPtc.rawVars[ampName].append(partialPtcDataset.rawVars[ampName])
234  if type(partialPtcDataset.expIdMask[ampName]) is list:
235  datasetPtc.expIdMask[ampName].append(partialPtcDataset.expIdMask[ampName][0])
236  else:
237  datasetPtc.expIdMask[ampName].append(partialPtcDataset.expIdMask[ampName])
238  datasetPtc.covariances[ampName].append(np.array(partialPtcDataset.covariances[ampName][0]))
239  datasetPtc.covariancesSqrtWeights[ampName].append(
240  np.array(partialPtcDataset.covariancesSqrtWeights[ampName][0]))
241  # Sort arrays that are filled so far in the final dataset by rawMeans index
242  for ampName in ampNames:
243  index = np.argsort(np.ravel(np.array(datasetPtc.rawMeans[ampName])))
244  datasetPtc.inputExpIdPairs[ampName] = np.array(datasetPtc.inputExpIdPairs[ampName])[index]
245  datasetPtc.rawExpTimes[ampName] = np.array(datasetPtc.rawExpTimes[ampName])[index]
246  datasetPtc.rawMeans[ampName] = np.array(datasetPtc.rawMeans[ampName])[index]
247  datasetPtc.rawVars[ampName] = np.array(datasetPtc.rawVars[ampName])[index]
248  datasetPtc.expIdMask[ampName] = np.array(datasetPtc.expIdMask[ampName])[index]
249  datasetPtc.covariances[ampName] = np.array(datasetPtc.covariances[ampName])[index]
250  datasetPtc.covariancesSqrtWeights[ampName] = np.array(
251  datasetPtc.covariancesSqrtWeights[ampName])[index]
252  if self.config.ptcFitType == "FULLCOVARIANCE":
253  # Calculate covariances and fit them, including the PTC, to Astier+19 full model (Eq. 20)
254  # First, find the flat pairs to mask by fitting C_00 vs mu to
255  # the EXPAPPROXIMATION model (Eq. 16 in Astier+19).
256  # The points at these fluxes will also be masked when calculating the other covariances, C_ij.
257  tempDatasetPtc = copy.copy(datasetPtc)
258  tempDatasetPtc.ptcFitType = "EXPAPPROXIMATION"
259  tempDatasetPtc = self.fitPtc(tempDatasetPtc)
260  for ampName in datasetPtc.ampNames:
261  datasetPtc.expIdMask[ampName] = tempDatasetPtc.expIdMask[ampName]
262  datasetPtc.fitType = "FULLCOVARIANCE"
263  datasetPtc = self.fitCovariancesAstier(datasetPtc)
264  # The other options are: self.config.ptcFitType in ("EXPAPPROXIMATION", "POLYNOMIAL")
265  else:
266  # Fit the PTC to a polynomial or to Astier+19 exponential approximation (Eq. 16).
267  # Fill up PhotonTransferCurveDataset object.
268  datasetPtc = self.fitPtc(datasetPtc)
269  if inputExpList is not None:
270  # It should be a list of exposures, to get the detector.
271  detector = inputExpList[0].getDetector()
272  else:
273  detector = None
274  datasetPtc.updateMetadata(setDate=True, camera=camera, detector=detector)
275 
276  return pipeBase.Struct(
277  outputPtcDataset=datasetPtc,
278  )
279 

◆ runQuantum()

def lsst.cp.pipe.ptc.cpSolvePtcTask.PhotonTransferCurveSolveTask.runQuantum (   self,
  butlerQC,
  inputRefs,
  outputRefs 
)
Ensure that the input and output dimensions are passed along.

Parameters
----------
butlerQC : `~lsst.daf.butler.butlerQuantumContext.ButlerQuantumContext`
    Butler to operate on.
inputRefs : `~lsst.pipe.base.connections.InputQuantizedConnection`
    Input data refs to load.
outputRefs : `~lsst.pipe.base.connections.OutputQuantizedConnection`
    Output data refs to persist.

Definition at line 175 of file cpSolvePtcTask.py.

175  def runQuantum(self, butlerQC, inputRefs, outputRefs):
176  """Ensure that the input and output dimensions are passed along.
177 
178  Parameters
179  ----------
180  butlerQC : `~lsst.daf.butler.butlerQuantumContext.ButlerQuantumContext`
181  Butler to operate on.
182  inputRefs : `~lsst.pipe.base.connections.InputQuantizedConnection`
183  Input data refs to load.
184  outputRefs : `~lsst.pipe.base.connections.OutputQuantizedConnection`
185  Output data refs to persist.
186  """
187  inputs = butlerQC.get(inputRefs)
188  outputs = self.run(inputCovariances=inputs['inputCovariances'], camera=inputs['camera'])
189  butlerQC.put(outputs, outputRefs)
190 

Member Data Documentation

◆ ConfigClass

lsst.cp.pipe.ptc.cpSolvePtcTask.PhotonTransferCurveSolveTask.ConfigClass = PhotonTransferCurveSolveConfig
static

Definition at line 172 of file cpSolvePtcTask.py.


The documentation for this class was generated from the following file: cpSolvePtcTask.py