lsst.cp.pipe.ptc.MeasurePhotonTransferCurveTask Class Reference

Public Member Functions

def __init__ (self, *args, **kwargs)
 
def runDataRef (self, dataRefList)
 
def makePairs (self, dataRefList)
 
def fitCovariancesAstier (self, dataset, covariancesWithTagsArray)
 
def getOutputPtcDataCovAstier (self, dataset, covFits, covFitsNoB)
 
def measureMeanVarCov (self, exposure1, exposure2, region=None, covAstierRealSpace=False)
 
def computeCovDirect (self, diffImage, weightImage, maxRange)
 
def covDirectValue (self, diffImage, weightImage, dx, dy)
 
def fitPtc (self, dataset, ptcFitType)
 
def fillBadAmp (self, dataset, ptcFitType, ampName)
 

Static Public Attributes

 RunnerClass = DataRefListRunner
 
 ConfigClass = MeasurePhotonTransferCurveTaskConfig
 

Detailed Description

A class to calculate, fit, and plot a PTC from a set of flat pairs.

The Photon Transfer Curve (var(signal) vs. mean(signal)) is a standard tool
used in the characterization of astronomical detectors (e.g., Janesick 2001,
Janesick 2007). If ptcFitType is "EXPAPPROXIMATION" or "POLYNOMIAL", this task calculates the
PTC from a series of pairs of flat-field images, each pair taken at identical exposure
times. The difference image of each pair is formed to eliminate fixed-pattern noise,
and then the variance of the difference image and the mean of the average image
are used to produce the PTC. An n-degree polynomial or the approximation in Equation
16 of Astier+19 ("The Shape of the Photon Transfer Curve of CCD sensors",
arXiv:1905.08677) can be fitted to the PTC. These models include
parameters such as the gain (e/DN) and the readout noise.

Linearizers to correct for signal-chain non-linearity are also calculated.
The `Linearizer` class can, in general, support per-amplifier linearizers, but
that is not supported in this task.

If ptcFitType is "FULLCOVARIANCE", the covariances of the difference images are calculated via
the DFT methods described in Astier+19, and the variances for the PTC are given by the cov[0,0]
elements at each signal level. The full model in Equation 20 of Astier+19 is then fit to the PTC
to obtain the gain and the noise.

Parameters
----------

*args: `list`
    Positional arguments passed to the Task constructor. None used at this
    time.
**kwargs: `dict`
    Keyword arguments passed on to the Task constructor. None used at this
    time.

Definition at line 198 of file ptc.py.

Constructor & Destructor Documentation

◆ __init__()

def lsst.cp.pipe.ptc.MeasurePhotonTransferCurveTask.__init__ (   self,
*  args,
**  kwargs 
)

Definition at line 237 of file ptc.py.

237  def __init__(self, *args, **kwargs):
238  pipeBase.CmdLineTask.__init__(self, *args, **kwargs)
239  self.makeSubtask("linearity")
240  plt.interactive(False) # stop windows popping up when plotting. When headless, use 'agg' backend too
241  self.config.validate()
242  self.config.freeze()
243 

Member Function Documentation

◆ computeCovDirect()

def lsst.cp.pipe.ptc.MeasurePhotonTransferCurveTask.computeCovDirect (   self,
  diffImage,
  weightImage,
  maxRange 
)
Compute covariances of diffImage in real space.

For lags larger than ~25, this is slower than the FFT approach.
Taken from https://github.com/PierreAstier/bfptc/

Parameters
----------
diffImage : `numpy.array`
    Image to compute the covariance of.

weightImage : `numpy.array`
    Weight image of diffImage (1's and 0's for good and bad pixels, respectively).

maxRange : `int`
    Last index of the covariance to be computed.

Returns
-------
outList : `list`
    List with tuples of the form (dx, dy, var, cov, npix), where:
        dx : `int`
            Lag in x
        dy : `int`
            Lag in y
        var : `float`
            Variance at (dx, dy).
        cov : `float`
            Covariance at (dx, dy).
        nPix : `int`
            Number of pixel pairs used to evaluate var and cov.

Definition at line 720 of file ptc.py.

720  def computeCovDirect(self, diffImage, weightImage, maxRange):
721  """Compute covariances of diffImage in real space.
722 
723  For lags larger than ~25, it is slower than the FFT way.
724  Taken from https://github.com/PierreAstier/bfptc/
725 
726  Parameters
727  ----------
728  diffImage : `numpy.array`
729  Image to compute the covariance of.
730 
731  weightImage : `numpy.array`
732  Weight image of diffImage (1's and 0's for good and bad pixels, respectively).
733 
734  maxRange : `int`
735  Last index of the covariance to be computed.
736 
737  Returns
738  -------
739  outList : `list`
740  List with tuples of the form (dx, dy, var, cov, npix), where:
741  dx : `int`
742  Lag in x
743  dy : `int`
744  Lag in y
745  var : `float`
746  Variance at (dx, dy).
747  cov : `float`
748  Covariance at (dx, dy).
749  nPix : `int`
750  Number of pixel pairs used to evaluate var and cov.
751  """
752  outList = []
753  var = 0
754  # (dy,dx) = (0,0) has to be first
755  for dy in range(maxRange + 1):
756  for dx in range(0, maxRange + 1):
757  if (dx*dy > 0):
758  cov1, nPix1 = self.covDirectValue(diffImage, weightImage, dx, dy)
759  cov2, nPix2 = self.covDirectValue(diffImage, weightImage, dx, -dy)
760  cov = 0.5*(cov1 + cov2)
761  nPix = nPix1 + nPix2
762  else:
763  cov, nPix = self.covDirectValue(diffImage, weightImage, dx, dy)
764  if (dx == 0 and dy == 0):
765  var = cov
766  outList.append((dx, dy, var, cov, nPix))
767 
768  return outList
769 
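
The returned list is most naturally repacked into a small matrix indexed by lag. A minimal sketch, using placeholder values in place of a real computeCovDirect call:

import numpy as np

maxRange = 8
# Placeholder output standing in for a real computeCovDirect call: pure white
# noise, so only the (0, 0) element is non-zero.
covList = [(dx, dy, 100.0, 100.0 if (dx, dy) == (0, 0) else 0.0, 40000)
           for dy in range(maxRange + 1) for dx in range(maxRange + 1)]

covMatrix = np.full((maxRange + 1, maxRange + 1), np.nan)
for dx, dy, var, cov, nPix in covList:
    covMatrix[dy, dx] = cov  # row: lag in y, column: lag in x (a chosen convention)

print(covMatrix[0, 0])  # the variance of the difference image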

◆ covDirectValue()

def lsst.cp.pipe.ptc.MeasurePhotonTransferCurveTask.covDirectValue (   self,
  diffImage,
  weightImage,
  dx,
  dy 
)
Compute covariances of diffImage in real space at lag (dx, dy).

Taken from https://github.com/PierreAstier/bfptc/ (c.f., appendix of Astier+19).

Parameters
----------
diffImage : `numpy.array`
    Image to compute the covariance of.

weightImage : `numpy.array`
    Weight image of diffImage (1's and 0's for good and bad pixels, respectively).

dx : `int`
    Lag in x.

dy : `int`
    Lag in y.

Returns
-------
cov : `float`
    Covariance at (dx, dy)

nPix : `int`
    Number of pixel pairs used to evaluate var and cov.

Definition at line 770 of file ptc.py.

770  def covDirectValue(self, diffImage, weightImage, dx, dy):
771  """Compute covariances of diffImage in real space at lag (dx, dy).
772 
773  Taken from https://github.com/PierreAstier/bfptc/ (c.f., appendix of Astier+19).
774 
775  Parameters
776  ----------
777  diffImage : `numpy.array`
778  Image to compute the covariance of.
779 
780  weightImage : `numpy.array`
781  Weight image of diffImage (1's and 0's for good and bad pixels, respectively).
782 
783  dx : `int`
784  Lag in x.
785 
786  dy : `int`
787  Lag in y.
788 
789  Returns
790  -------
791  cov : `float`
792  Covariance at (dx, dy)
793 
794  nPix : `int`
795  Number of pixel pairs used to evaluate var and cov.
796  """
797  (nCols, nRows) = diffImage.shape
798  # switching both signs does not change anything:
799  # it just swaps im1 and im2 below
800  if (dx < 0):
801  (dx, dy) = (-dx, -dy)
802  # now, we have dx >0. We have to distinguish two cases
803  # depending on the sign of dy
804  if dy >= 0:
805  im1 = diffImage[dy:, dx:]
806  w1 = weightImage[dy:, dx:]
807  im2 = diffImage[:nCols - dy, :nRows - dx]
808  w2 = weightImage[:nCols - dy, :nRows - dx]
809  else:
810  im1 = diffImage[:nCols + dy, dx:]
811  w1 = weightImage[:nCols + dy, dx:]
812  im2 = diffImage[-dy:, :nRows - dx]
813  w2 = weightImage[-dy:, :nRows - dx]
814  # use the same mask for all 3 calculations
815  wAll = w1*w2
816  # do not use mean() because weightImage=0 pixels would then count
817  nPix = wAll.sum()
818  im1TimesW = im1*wAll
819  s1 = im1TimesW.sum()/nPix
820  s2 = (im2*wAll).sum()/nPix
821  p = (im1TimesW*im2).sum()/nPix
822  cov = p - s1*s2
823 
824  return cov, nPix
825 
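
A self-contained numpy sketch of the quantity covDirectValue computes, restricted for brevity to non-negative lags (the helper name covAtLag is made up for illustration):

import numpy as np

def covAtLag(image, weight, dx, dy):
    # Weighted covariance between the image and a copy of itself shifted by
    # (dx, dy); only pixels whose weights are 1 at both ends of the lag count.
    ny, nx = image.shape
    im1, w1 = image[dy:, dx:], weight[dy:, dx:]
    im2, w2 = image[:ny - dy, :nx - dx], weight[:ny - dy, :nx - dx]
    wAll = w1 * w2
    nPix = wAll.sum()
    s1 = (im1 * wAll).sum() / nPix
    s2 = (im2 * wAll).sum() / nPix
    return (im1 * wAll * im2).sum() / nPix - s1 * s2, nPix

rng = np.random.default_rng(0)
img = rng.normal(0.0, 5.0, size=(100, 100))
w = np.ones_like(img)
print(covAtLag(img, w, 0, 0)[0])  # ~25, i.e. the variance (sigma = 5)
print(covAtLag(img, w, 2, 1)[0])  # ~0 for uncorrelated noise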

◆ fillBadAmp()

def lsst.cp.pipe.ptc.MeasurePhotonTransferCurveTask.fillBadAmp (   self,
  dataset,
  ptcFitType,
  ampName 
)
Fill the dataset with NaNs if there are not enough good points.

Parameters
----------
dataset : `lsst.ip.isr.ptcDataset.PhotonTransferCurveDataset`
    The dataset containing the means, variances and exposure times.

ptcFitType : `str`
    Fit a 'POLYNOMIAL' (degree: 'polynomialFitDegree') or
    'EXPAPPROXIMATION' (Eq. 16 of Astier+19) to the PTC.

ampName : `str`
    Amplifier name.

Definition at line 1126 of file ptc.py.

1126  def fillBadAmp(self, dataset, ptcFitType, ampName):
1127  """Fill the dataset with NaNs if there are not enough good points.
1128 
1129  Parameters
1130  ----------
1131  dataset : `lsst.ip.isr.ptcDataset.PhotonTransferCurveDataset`
1132  The dataset containing the means, variances and exposure times.
1133 
1134  ptcFitType : `str`
1135  Fit a 'POLYNOMIAL' (degree: 'polynomialFitDegree') or
1136  'EXPAPPROXIMATION' (Eq. 16 of Astier+19) to the PTC.
1137 
1138  ampName : `str`
1139  Amplifier name.
1140  """
1141  dataset.badAmps.append(ampName)
1142  dataset.expIdMask[ampName] = np.repeat(False, len(dataset.rawExpTimes[ampName]))
1143  dataset.gain[ampName] = np.nan
1144  dataset.gainErr[ampName] = np.nan
1145  dataset.noise[ampName] = np.nan
1146  dataset.noiseErr[ampName] = np.nan
1147  dataset.ptcFitPars[ampName] = (np.repeat(np.nan, self.config.polynomialFitDegree + 1) if
1148  ptcFitType in ["POLYNOMIAL", ] else np.repeat(np.nan, 3))
1149  dataset.ptcFitParsError[ampName] = (np.repeat(np.nan, self.config.polynomialFitDegree + 1) if
1150  ptcFitType in ["POLYNOMIAL", ] else np.repeat(np.nan, 3))
1151  dataset.ptcFitChiSq[ampName] = np.nan
1152  dataset.finalVars[ampName] = np.repeat(np.nan, len(dataset.rawExpTimes[ampName]))
1153  dataset.finalModelVars[ampName] = np.repeat(np.nan, len(dataset.rawExpTimes[ampName]))
1154  dataset.finalMeans[ampName] = np.repeat(np.nan, len(dataset.rawExpTimes[ampName]))
1155 
1156  return

◆ fitCovariancesAstier()

def lsst.cp.pipe.ptc.MeasurePhotonTransferCurveTask.fitCovariancesAstier (   self,
  dataset,
  covariancesWithTagsArray 
)
Fit measured flat covariances to full model in Astier+19.

Parameters
----------
dataset : `lsst.ip.isr.ptcDataset.PhotonTransferCurveDataset`
    The dataset containing information such as the means, variances and exposure times.

covariancesWithTagsArray : `numpy.recarray`
    Tuple with at least (mu, cov, var, i, j, npix), where:
    mu : 0.5*(mu1 + mu2), where:
        mu1: mean value of flat1
        mu2: mean value of flat2
    cov: covariance value at lag(i, j)
    var: variance(covariance value at lag(0, 0))
    i: lag dimension
    j: lag dimension
    npix: number of pixels used for covariance calculation.

Returns
-------
dataset: `lsst.ip.isr.ptcDataset.PhotonTransferCurveDataset`
    This is the same dataset as the input parameter; however, it has been modified
    to include information such as the fit vectors and the fit parameters. See
    the class `PhotonTransferCurveDataset`.

Definition at line 476 of file ptc.py.

476  def fitCovariancesAstier(self, dataset, covariancesWithTagsArray):
477  """Fit measured flat covariances to full model in Astier+19.
478 
479  Parameters
480  ----------
481  dataset : `lsst.ip.isr.ptcDataset.PhotonTransferCurveDataset`
482  The dataset containing information such as the means, variances and exposure times.
483 
484  covariancesWithTagsArray : `numpy.recarray`
485  Tuple with at least (mu, cov, var, i, j, npix), where:
486  mu : 0.5*(mu1 + mu2), where:
487  mu1: mean value of flat1
488  mu2: mean value of flat2
489  cov: covariance value at lag(i, j)
490  var: variance(covariance value at lag(0, 0))
491  i: lag dimension
492  j: lag dimension
493  npix: number of pixels used for covariance calculation.
494 
495  Returns
496  -------
497  dataset: `lsst.ip.isr.ptcDataset.PhotonTransferCurveDataset`
498  This is the same dataset as the input parameter; however, it has been modified
499  to include information such as the fit vectors and the fit parameters. See
500  the class `PhotonTransferCurveDataset`.
501  """
502 
503  covFits, covFitsNoB = fitData(covariancesWithTagsArray,
504  r=self.config.maximumRangeCovariancesAstier,
505  expIdMask=dataset.expIdMask)
506  dataset = self.getOutputPtcDataCovAstier(dataset, covFits, covFitsNoB)
507  return dataset
508 
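
For reference, covariancesWithTagsArray is assembled in runDataRef as a numpy record array with one row per lag, flat pair, and amplifier. A minimal sketch with placeholder values (the tag names match those used in runDataRef):

import numpy as np

tags = ['mu', 'i', 'j', 'var', 'cov', 'npix', 'ext', 'expTime', 'ampName']
rows = [
    # (mu, i, j, var, cov, npix, ext, expTime, ampName) -- placeholder values
    (1000.0, 0, 0, 1050.0, 1050.0, 4000000, 0, 1.0, 'C00'),
    (1000.0, 1, 0, 1050.0, 2.1, 4000000, 0, 1.0, 'C00'),
    (2000.0, 0, 0, 2080.0, 2080.0, 4000000, 0, 2.0, 'C00'),
]
covariancesWithTags = np.core.records.fromrecords(rows, names=tags)

# fitCovariancesAstier passes such an array to fitData, which fits the full
# Astier+19 covariance model per amplifier.
print(covariancesWithTags['mu'], covariancesWithTags['cov'])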

◆ fitPtc()

def lsst.cp.pipe.ptc.MeasurePhotonTransferCurveTask.fitPtc (   self,
  dataset,
  ptcFitType 
)
Fit the photon transfer curve to a polynomial or to the Astier+19 approximation.

Fit the photon transfer curve with either a polynomial of the order
specified in the task config, or using the Astier approximation.

Sigma clipping is performed iteratively for the fit, as well as an
initial clipping of data points that are more than
config.initialNonLinearityExclusionThreshold away from lying on a
straight line. This initial clipping is necessary because the photon
transfer curve turns over catastrophically at very high flux (saturation
drops the variance to ~0), and these far outliers cause the initial fit
to fail, meaning the sigma cannot be calculated to perform the
sigma-clipping.

Parameters
----------
dataset : `lsst.ip.isr.ptcDataset.PhotonTransferCurveDataset`
    The dataset containing the means, variances and exposure times

ptcFitType : `str`
    Fit a 'POLYNOMIAL' (degree: 'polynomialFitDegree') or
    'EXPAPPROXIMATION' (Eq. 16 of Astier+19) to the PTC

Returns
-------
dataset: `lsst.ip.isr.ptcDataset.PhotonTransferCurveDataset`
    This is the same dataset as the input parameter; however, it has been modified
    to include information such as the fit vectors and the fit parameters. See
    the class `PhotonTransferCurveDataset`.

Definition at line 942 of file ptc.py.

942  def fitPtc(self, dataset, ptcFitType):
943  """Fit the photon transfer curve to a polynomial or to the Astier+19 approximation.
944 
945  Fit the photon transfer curve with either a polynomial of the order
946  specified in the task config, or using the Astier approximation.
947 
948  Sigma clipping is performed iteratively for the fit, as well as an
949  initial clipping of data points that are more than
950  config.initialNonLinearityExclusionThreshold away from lying on a
951  straight line. This other step is necessary because the photon transfer
952  curve turns over catastrophically at very high flux (because saturation
953  drops the variance to ~0) and these far outliers cause the initial fit
954  to fail, meaning the sigma cannot be calculated to perform the
955  sigma-clipping.
956 
957  Parameters
958  ----------
959  dataset : `lsst.ip.isr.ptcDataset.PhotonTransferCurveDataset`
960  The dataset containing the means, variances and exposure times
961 
962  ptcFitType : `str`
963  Fit a 'POLYNOMIAL' (degree: 'polynomialFitDegree') or
964  'EXPAPPROXIMATION' (Eq. 16 of Astier+19) to the PTC
965 
966  Returns
967  -------
968  dataset: `lsst.ip.isr.ptcDataset.PhotonTransferCurveDataset`
969  This is the same dataset as the input parameter; however, it has been modified
970  to include information such as the fit vectors and the fit parameters. See
971  the class `PhotonTransferCurveDataset`.
972  """
973 
974  matrixSide = self.config.maximumRangeCovariancesAstier
975  nanMatrix = np.empty((matrixSide, matrixSide))
976  nanMatrix[:] = np.nan
977 
978  for amp in dataset.ampNames:
979  lenInputTimes = len(dataset.rawExpTimes[amp])
980  listNanMatrix = np.empty((lenInputTimes, matrixSide, matrixSide))
981  listNanMatrix[:] = np.nan
982 
983  dataset.covariances[amp] = listNanMatrix
984  dataset.covariancesModel[amp] = listNanMatrix
985  dataset.covariancesSqrtWeights[amp] = listNanMatrix
986  dataset.aMatrix[amp] = nanMatrix
987  dataset.bMatrix[amp] = nanMatrix
988  dataset.covariancesNoB[amp] = listNanMatrix
989  dataset.covariancesModelNoB[amp] = listNanMatrix
990  dataset.covariancesSqrtWeightsNoB[amp] = listNanMatrix
991  dataset.aMatrixNoB[amp] = nanMatrix
992 
993  def errFunc(p, x, y):
994  return ptcFunc(p, x) - y
995 
996  sigmaCutPtcOutliers = self.config.sigmaCutPtcOutliers
997  maxIterationsPtcOutliers = self.config.maxIterationsPtcOutliers
998 
999  for i, ampName in enumerate(dataset.ampNames):
1000  timeVecOriginal = np.array(dataset.rawExpTimes[ampName])
1001  meanVecOriginal = np.array(dataset.rawMeans[ampName])
1002  varVecOriginal = np.array(dataset.rawVars[ampName])
1003  varVecOriginal = self._makeZeroSafe(varVecOriginal)
1004 
1005  goodPoints = self._getInitialGoodPoints(meanVecOriginal, varVecOriginal,
1006  self.config.initialNonLinearityExclusionThresholdPositive,
1007  self.config.initialNonLinearityExclusionThresholdNegative,
1008  self.config.minMeanRatioTest,
1009  self.config.minVarPivotSearch)
1010  if not (goodPoints.any()):
1011  msg = (f"\nSERIOUS: All points in goodPoints: {goodPoints} are bad. "
1012  f"Setting {ampName} to BAD.")
1013  self.log.warn(msg)
1014  # The first and second parameters of initial fit are discarded (bias and gain)
1015  # for the final NL coefficients
1016  self.fillBadAmp(dataset, ptcFitType, ampName)
1017  continue
1018 
1019  mask = goodPoints
1020 
1021  if ptcFitType == 'EXPAPPROXIMATION':
1022  ptcFunc = funcAstier
1023  parsIniPtc = [-1e-9, 1.0, 10.] # a00, gain, noise^2
1024  # lowers and uppers obtained from studies by C. Lage (UC Davis, 11/2020).
1025  bounds = self._boundsForAstier(parsIniPtc, lowers=[-1e-4, 0.5, -2000],
1026  uppers=[1e-4, 2.5, 2000])
1027  if ptcFitType == 'POLYNOMIAL':
1028  ptcFunc = funcPolynomial
1029  parsIniPtc = self._initialParsForPolynomial(self.config.polynomialFitDegree + 1)
1030  bounds = self._boundsForPolynomial(parsIniPtc)
1031 
1032  # Before bootstrap fit, do an iterative fit to get rid of outliers
1033  count = 1
1034  while count <= maxIterationsPtcOutliers:
1035  # Note that application of the mask actually shrinks the array
1036  # to size rather than setting elements to zero (as we want) so
1037  # always update mask itself and re-apply to the original data
1038  meanTempVec = meanVecOriginal[mask]
1039  varTempVec = varVecOriginal[mask]
1040  res = least_squares(errFunc, parsIniPtc, bounds=bounds, args=(meanTempVec, varTempVec))
1041  pars = res.x
1042 
1043  # change this to the original from the temp because the masks are ANDed
1044  # meaning once a point is masked it's always masked, and the masks must
1045  # always be the same length for broadcasting
1046  sigResids = (varVecOriginal - ptcFunc(pars, meanVecOriginal))/np.sqrt(varVecOriginal)
1047  newMask = np.array([True if np.abs(r) < sigmaCutPtcOutliers else False for r in sigResids])
1048  mask = goodPoints & newMask
1049  if not (mask.any() and newMask.any()):
1050  msg = (f"\nSERIOUS: All points in either mask: {mask} or newMask: {newMask} are bad. "
1051  f"Setting {ampName} to BAD.")
1052  self.log.warn(msg)
1053  # The first and second parameters of initial fit are discarded (bias and gain)
1054  # for the final NL coefficients
1055  self.fillBadAmp(dataset, ptcFitType, ampName)
1056  break
1057  nDroppedTotal = Counter(mask)[False]
1058  self.log.debug(f"Iteration {count}: discarded {nDroppedTotal} points in total for {ampName}")
1059  count += 1
1060  # objects should never shrink
1061  assert (len(mask) == len(timeVecOriginal) == len(meanVecOriginal) == len(varVecOriginal))
1062 
1063  if not (mask.any() and newMask.any()):
1064  continue
1065  dataset.expIdMask[ampName] = mask # store the final mask
1066  parsIniPtc = pars
1067  meanVecFinal = meanVecOriginal[mask]
1068  varVecFinal = varVecOriginal[mask]
1069 
1070  if Counter(mask)[False] > 0:
1071  self.log.info((f"Number of points discarded in PTC of amplifier {ampName}:" +
1072  f" {Counter(mask)[False]} out of {len(meanVecOriginal)}"))
1073 
1074  if (len(meanVecFinal) < len(parsIniPtc)):
1075  msg = (f"\nSERIOUS: Not enough data points ({len(meanVecFinal)}) compared to the number of "
1076  f"parameters of the PTC model ({len(parsIniPtc)}). Setting {ampName} to BAD.")
1077  self.log.warn(msg)
1078  # The first and second parameters of initial fit are discarded (bias and gain)
1079  # for the final NL coefficients
1080  self.fillBadAmp(dataset, ptcFitType, ampName)
1081  continue
1082 
1083  # Fit the PTC
1084  if self.config.doFitBootstrap:
1085  parsFit, parsFitErr, reducedChiSqPtc = fitBootstrap(parsIniPtc, meanVecFinal,
1086  varVecFinal, ptcFunc,
1087  weightsY=1./np.sqrt(varVecFinal))
1088  else:
1089  parsFit, parsFitErr, reducedChiSqPtc = fitLeastSq(parsIniPtc, meanVecFinal,
1090  varVecFinal, ptcFunc,
1091  weightsY=1./np.sqrt(varVecFinal))
1092  dataset.ptcFitPars[ampName] = parsFit
1093  dataset.ptcFitParsError[ampName] = parsFitErr
1094  dataset.ptcFitChiSq[ampName] = reducedChiSqPtc
1095  # Masked variances (measured and modeled) and means. Need to pad the array so astropy.Table does
1096  # not crash (the mask may vary per amp).
1097  padLength = len(dataset.rawExpTimes[ampName]) - len(varVecFinal)
1098  dataset.finalVars[ampName] = np.pad(varVecFinal, (0, padLength), 'constant',
1099  constant_values=np.nan)
1100  dataset.finalModelVars[ampName] = np.pad(ptcFunc(parsFit, meanVecFinal), (0, padLength),
1101  'constant', constant_values=np.nan)
1102  dataset.finalMeans[ampName] = np.pad(meanVecFinal, (0, padLength), 'constant',
1103  constant_values=np.nan)
1104 
1105  if ptcFitType == 'EXPAPPROXIMATION':
1106  ptcGain = parsFit[1]
1107  ptcGainErr = parsFitErr[1]
1108  ptcNoise = np.sqrt(np.fabs(parsFit[2]))
1109  ptcNoiseErr = 0.5*(parsFitErr[2]/np.fabs(parsFit[2]))*np.sqrt(np.fabs(parsFit[2]))
1110  if ptcFitType == 'POLYNOMIAL':
1111  ptcGain = 1./parsFit[1]
1112  ptcGainErr = np.fabs(1./parsFit[1])*(parsFitErr[1]/parsFit[1])
1113  ptcNoise = np.sqrt(np.fabs(parsFit[0]))*ptcGain
1114  ptcNoiseErr = (0.5*(parsFitErr[0]/np.fabs(parsFit[0]))*(np.sqrt(np.fabs(parsFit[0]))))*ptcGain
1115  dataset.gain[ampName] = ptcGain
1116  dataset.gainErr[ampName] = ptcGainErr
1117  dataset.noise[ampName] = ptcNoise
1118  dataset.noiseErr[ampName] = ptcNoiseErr
1119  if not len(dataset.ptcFitType) == 0:
1120  dataset.ptcFitType = ptcFitType
1121  if len(dataset.badAmps) == 0:
1122  dataset.badAmps = np.repeat(np.nan, len(list(dataset.rawExpTimes.values())[0]))
1123 
1124  return dataset
1125 
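
An illustrative, self-contained sketch of the clipping-and-refitting loop above, fitting the EXPAPPROXIMATION model to a synthetic PTC with scipy; the model parameterization and thresholds mirror the listing but are simplified, and this is not the task's fitPtc itself:

import numpy as np
from scipy.optimize import least_squares

def funcAstierSketch(pars, mu):
    # Assumed form of Eq. 16 of Astier+19: pars = (a00, gain in e/DN, noise term).
    a00, gain, noise = pars
    return 0.5/(a00*gain*gain)*(np.exp(2*a00*mu*gain) - 1) + noise/(gain*gain)

rng = np.random.default_rng(1)
truth = (-1.5e-6, 1.2, 25.0)
mu = np.linspace(100, 80000, 40)                    # mean signal (DN)
var = funcAstierSketch(truth, mu) * rng.normal(1.0, 0.01, mu.size)
var[-2:] = 50.0                                     # fake saturated points: variance collapses

def errFunc(p, x, y):
    return funcAstierSketch(p, x) - y

pars = np.array([-1e-9, 1.0, 10.0])                 # same initial guess as the task
bounds = ([-1e-4, 0.5, -2000], [1e-4, 2.5, 2000])   # same bounds as the task
mask = np.ones_like(mu, dtype=bool)
for _ in range(3):                                  # a few outlier-rejection iterations
    res = least_squares(errFunc, pars, bounds=bounds, args=(mu[mask], var[mask]))
    pars = res.x
    sigResids = (var - funcAstierSketch(pars, mu)) / np.sqrt(var)
    mask = np.abs(sigResids) < 5.0                  # sigmaCutPtcOutliers-like threshold

print("fitted a00, gain, noise term:", pars, "points kept:", mask.sum())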

◆ getOutputPtcDataCovAstier()

def lsst.cp.pipe.ptc.MeasurePhotonTransferCurveTask.getOutputPtcDataCovAstier (   self,
  dataset,
  covFits,
  covFitsNoB 
)
Get output data for PhotonTransferCurveCovAstierDataset from CovFit objects.

Parameters
----------
dataset : `lsst.ip.isr.ptcDataset.PhotonTransferCurveDataset`
    The dataset containing information such as the means, variances and exposure times.

covFits: `dict`
    Dictionary of CovFit objects, with amp names as keys.

covFitsNoB : `dict`
     Dictionary of CovFit objects, with amp names as keys, and 'b=0' in Eq. 20 of Astier+19.

Returns
-------
dataset : `lsst.ip.isr.ptcDataset.PhotonTransferCurveDataset`
    This is the same dataset as the input parameter; however, it has been modified
    to include extra information such as the 1D mask array, gains, readout noise, measured signal,
    measured variance, modeled variance, and the a and b coefficient matrices (see Astier+19) per
    amplifier. See the class `PhotonTransferCurveDataset`.

Definition at line 509 of file ptc.py.

509  def getOutputPtcDataCovAstier(self, dataset, covFits, covFitsNoB):
510  """Get output data for PhotonTransferCurveCovAstierDataset from CovFit objects.
511 
512  Parameters
513  ----------
514  dataset : `lsst.ip.isr.ptcDataset.PhotonTransferCurveDataset`
515  The dataset containing information such as the means, variances and exposure times.
516 
517  covFits: `dict`
518  Dictionary of CovFit objects, with amp names as keys.
519 
520  covFitsNoB : `dict`
521  Dictionary of CovFit objects, with amp names as keys, and 'b=0' in Eq. 20 of Astier+19.
522 
523  Returns
524  -------
525  dataset : `lsst.ip.isr.ptcDataset.PhotonTransferCurveDataset`
526  This is the same dataset as the input parameter; however, it has been modified
527  to include extra information such as the 1D mask array, gains, readout noise, measured signal,
528  measured variance, modeled variance, and the a and b coefficient matrices (see Astier+19) per
529  amplifier. See the class `PhotonTransferCurveDataset`.
530  """
531  assert(len(covFits) == len(covFitsNoB))
532 
533  for i, amp in enumerate(dataset.ampNames):
534  lenInputTimes = len(dataset.rawExpTimes[amp])
535  # Not used when ptcFitType is 'FULLCOVARIANCE'
536  dataset.ptcFitPars[amp] = np.nan
537  dataset.ptcFitParsError[amp] = np.nan
538  dataset.ptcFitChiSq[amp] = np.nan
539  if (amp in covFits and (covFits[amp].covParams is not None) and
540  (covFitsNoB[amp].covParams is not None)):
541  fit = covFits[amp]
542  fitNoB = covFitsNoB[amp]
543  # Save full covariances, covariances models, and their weights
544  dataset.covariances[amp] = fit.cov
545  dataset.covariancesModel[amp] = fit.evalCovModel()
546  dataset.covariancesSqrtWeights[amp] = fit.sqrtW
547  dataset.aMatrix[amp] = fit.getA()
548  dataset.bMatrix[amp] = fit.getB()
549  dataset.covariancesNoB[amp] = fitNoB.cov
550  dataset.covariancesModelNoB[amp] = fitNoB.evalCovModel()
551  dataset.covariancesSqrtWeightsNoB[amp] = fitNoB.sqrtW
552  dataset.aMatrixNoB[amp] = fitNoB.getA()
553 
554  (meanVecFinal, varVecFinal, varVecModel,
555  wc, varMask) = fit.getFitData(0, 0, divideByMu=False)
556  gain = fit.getGain()
557 
558  dataset.gain[amp] = gain
559  dataset.gainErr[amp] = fit.getGainErr()
560  dataset.noise[amp] = np.sqrt(fit.getRon())
561  dataset.noiseErr[amp] = fit.getRonErr()
562 
563  padLength = lenInputTimes - len(varVecFinal)
564  dataset.finalVars[amp] = np.pad(varVecFinal/(gain**2), (0, padLength), 'constant',
565  constant_values=np.nan)
566  dataset.finalModelVars[amp] = np.pad(varVecModel/(gain**2), (0, padLength), 'constant',
567  constant_values=np.nan)
568  dataset.finalMeans[amp] = np.pad(meanVecFinal/gain, (0, padLength), 'constant',
569  constant_values=np.nan)
570  else:
571  # Bad amp
572  # Entries need to have proper dimensions so read/write with astropy.Table works.
573  matrixSide = self.config.maximumRangeCovariancesAstier
574  nanMatrix = np.full((matrixSide, matrixSide), np.nan)
575  listNanMatrix = np.full((lenInputTimes, matrixSide, matrixSide), np.nan)
576 
577  dataset.covariances[amp] = listNanMatrix
578  dataset.covariancesModel[amp] = listNanMatrix
579  dataset.covariancesSqrtWeights[amp] = listNanMatrix
580  dataset.aMatrix[amp] = nanMatrix
581  dataset.bMatrix[amp] = nanMatrix
582  dataset.covariancesNoB[amp] = listNanMatrix
583  dataset.covariancesModelNoB[amp] = listNanMatrix
584  dataset.covariancesSqrtWeightsNoB[amp] = listNanMatrix
585  dataset.aMatrixNoB[amp] = nanMatrix
586 
587  dataset.expIdMask[amp] = np.repeat(np.nan, lenInputTimes)
588  dataset.gain[amp] = np.nan
589  dataset.gainErr[amp] = np.nan
590  dataset.noise[amp] = np.nan
591  dataset.noiseErr[amp] = np.nan
592  dataset.finalVars[amp] = np.repeat(np.nan, lenInputTimes)
593  dataset.finalModelVars[amp] = np.repeat(np.nan, lenInputTimes)
594  dataset.finalMeans[amp] = np.repeat(np.nan, lenInputTimes)
595 
596  return dataset
597 
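
Note that the per-amplifier output vectors are NaN-padded to a common length so they can be stored in an astropy.Table (the mask may differ per amplifier). A small sketch of recovering the unpadded values, using made-up numbers in place of a real dataset:

import numpy as np

finalMeans = np.array([1000.0, 2000.0, 4000.0, np.nan, np.nan])  # padded to the input length
finalVars = np.array([1050.0, 2080.0, 4100.0, np.nan, np.nan])

good = np.isfinite(finalMeans)
print(finalMeans[good], finalVars[good])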

◆ makePairs()

def lsst.cp.pipe.ptc.MeasurePhotonTransferCurveTask.makePairs (   self,
  dataRefList 
)
Produce a list of flat pairs indexed by exposure time.

Parameters
----------
dataRefList : `list` [`lsst.daf.persistence.ButlerDataRef`]
    Data references for exposures for detectors to process.

Returns
-------
flatPairs : `dict` [`float`, `lsst.afw.image.exposure.exposure.ExposureF`]
  Dictionary that groups flat-field exposures that have the same exposure time (seconds).

Notes
-----
We use the difference of one pair of flat-field images taken at the same exposure time when
calculating the PTC to reduce Fixed Pattern Noise. If there are > 2 flat-field images with the
same exposure time, the first two are kept and the rest discarded.

Definition at line 420 of file ptc.py.

420  def makePairs(self, dataRefList):
421  """Produce a list of flat pairs indexed by exposure time.
422 
423  Parameters
424  ----------
425  dataRefList : `list` [`lsst.daf.persistence.ButlerDataRef`]
426  Data references for exposures for detectors to process.
427 
428  Returns
429  -------
430  flatPairs : `dict` [`float`, `lsst.afw.image.exposure.exposure.ExposureF`]
431  Dictionary that groups flat-field exposures that have the same exposure time (seconds).
432 
433  Notes
434  -----
435  We use the difference of one pair of flat-field images taken at the same exposure time when
436  calculating the PTC to reduce Fixed Pattern Noise. If there are > 2 flat-field images with the
437  same exposure time, the first two are kept and the rest discarded.
438  """
439 
440  # Organize exposures by observation date.
441  expDict = {}
442  for dataRef in dataRefList:
443  try:
444  tempFlat = dataRef.get("postISRCCD")
445  except RuntimeError:
446  self.log.warn("postISR exposure could not be retrieved. Ignoring flat.")
447  continue
448  expDate = tempFlat.getInfo().getVisitInfo().getDate().get()
449  expDict.setdefault(expDate, tempFlat)
450  sortedExps = {k: expDict[k] for k in sorted(expDict)}
451 
452  flatPairs = {}
453  for exp in sortedExps:
454  tempFlat = sortedExps[exp]
455  expTime = tempFlat.getInfo().getVisitInfo().getExposureTime()
456  listAtExpTime = flatPairs.setdefault(expTime, [])
457  if len(listAtExpTime) >= 2:
458  self.log.warn(f"Already found 2 exposures at expTime {expTime}. "
459  f"Ignoring exposure {tempFlat.getInfo().getVisitInfo().getExposureId()}")
460  else:
461  listAtExpTime.append(tempFlat)
462 
463  keysToDrop = []
464  for (key, value) in flatPairs.items():
465  if len(value) < 2:
466  keysToDrop.append(key)
467 
468  if len(keysToDrop):
469  for key in keysToDrop:
470  self.log.warn(f"Only one exposure found at expTime {key}. Dropping exposure "
471  f"{flatPairs[key][0].getInfo().getVisitInfo().getExposureId()}.")
472  flatPairs.pop(key)
473  sortedFlatPairs = {k: flatPairs[k] for k in sorted(flatPairs)}
474  return sortedFlatPairs
475 
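
A minimal sketch of the pairing logic, with plain (expTime, expId) tuples standing in for Butler data references and Exposure objects:

exposures = [(1.0, 101), (1.0, 102), (2.0, 103), (2.0, 104), (2.0, 105), (4.0, 106)]

flatPairs = {}
for expTime, expId in sorted(exposures):
    pair = flatPairs.setdefault(expTime, [])
    if len(pair) >= 2:
        print(f"Already found 2 exposures at expTime {expTime}; ignoring {expId}")
    else:
        pair.append(expId)

# Drop exposure times with only a single flat, as the task does.
flatPairs = {t: pair for t, pair in flatPairs.items() if len(pair) == 2}
print(flatPairs)  # {1.0: [101, 102], 2.0: [103, 104]}; 105 and 106 are dropped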

◆ measureMeanVarCov()

def lsst.cp.pipe.ptc.MeasurePhotonTransferCurveTask.measureMeanVarCov (   self,
  exposure1,
  exposure2,
  region = None,
  covAstierRealSpace = False 
)
Calculate the mean of each of two exposures and the variance and covariance of their difference.

The variance is calculated via afwMath, and the covariance via the methods in Astier+19 (appendix A).
In theory, var = covariance[0,0]. This should be validated, and in the future, we may decide to just
keep one (covariance).

Parameters
----------
exposure1 : `lsst.afw.image.exposure.exposure.ExposureF`
    First exposure of flat field pair.

exposure2 : `lsst.afw.image.exposure.exposure.ExposureF`
    Second exposure of flat field pair.

region : `lsst.geom.Box2I`, optional
    Region of each exposure where to perform the calculations (e.g., an amplifier).

covAstierRealSpace : `bool`, optional
    Should the covariances in Astier+19 be calculated in real space or via FFT?
    See Appendix A of Astier+19.

Returns
-------
mu : `float` or `NaN`
    0.5*(mu1 + mu2), where mu1 and mu2 are the clipped means of the regions in
    both exposures. If either mu1 or mu2 is NaN, the returned value is NaN.

varDiff : `float` or `NaN`
    Half of the clipped variance of the difference of the regions in the two input
    exposures. If either mu1 or mu2 is NaN, the returned value is NaN.

covDiffAstier : `list` or `None`
    List with tuples of the form (dx, dy, var, cov, npix), where:
        dx : `int`
            Lag in x
        dy : `int`
            Lag in y
        var : `float`
            Variance at (dx, dy).
        cov : `float`
            Covariance at (dx, dy).
        nPix : `int`
            Number of pixel pairs used to evaluate var and cov.
    If either mu1 or mu2 is NaN, the returned value is None.

Definition at line 598 of file ptc.py.

598  def measureMeanVarCov(self, exposure1, exposure2, region=None, covAstierRealSpace=False):
599  """Calculate the mean of each of two exposures and the variance and covariance of their difference.
600 
601  The variance is calculated via afwMath, and the covariance via the methods in Astier+19 (appendix A).
602  In theory, var = covariance[0,0]. This should be validated, and in the future, we may decide to just
603  keep one (covariance).
604 
605  Parameters
606  ----------
607  exposure1 : `lsst.afw.image.exposure.exposure.ExposureF`
608  First exposure of flat field pair.
609 
610  exposure2 : `lsst.afw.image.exposure.exposure.ExposureF`
611  Second exposure of flat field pair.
612 
613  region : `lsst.geom.Box2I`, optional
614  Region of each exposure where to perform the calculations (e.g., an amplifier).
615 
616  covAstierRealSpace : `bool`, optional
617  Should the covariances in Astier+19 be calculated in real space or via FFT?
618  See Appendix A of Astier+19.
619 
620  Returns
621  -------
622  mu : `float` or `NaN`
623  0.5*(mu1 + mu2), where mu1 and mu2 are the clipped means of the regions in
624  both exposures. If either mu1 or mu2 is NaN, the returned value is NaN.
625 
626  varDiff : `float` or `NaN`
627  Half of the clipped variance of the difference of the regions in the two input
628  exposures. If either mu1 or mu2 is NaN, the returned value is NaN.
629 
630  covDiffAstier : `list` or `None`
631  List with tuples of the form (dx, dy, var, cov, npix), where:
632  dx : `int`
633  Lag in x
634  dy : `int`
635  Lag in y
636  var : `float`
637  Variance at (dx, dy).
638  cov : `float`
639  Covariance at (dx, dy).
640  nPix : `int`
641  Number of pixel pairs used to evaluate var and cov.
642  If either mu1 or mu2 is NaN, the returned value is None.
643  """
644 
645  if region is not None:
646  im1Area = exposure1.maskedImage[region]
647  im2Area = exposure2.maskedImage[region]
648  else:
649  im1Area = exposure1.maskedImage
650  im2Area = exposure2.maskedImage
651 
652  if self.config.binSize > 1:
653  im1Area = afwMath.binImage(im1Area, self.config.binSize)
654  im2Area = afwMath.binImage(im2Area, self.config.binSize)
655 
656  im1MaskVal = exposure1.getMask().getPlaneBitMask(self.config.maskNameList)
657  im1StatsCtrl = afwMath.StatisticsControl(self.config.nSigmaClipPtc,
658  self.config.nIterSigmaClipPtc,
659  im1MaskVal)
660  im1StatsCtrl.setNanSafe(True)
661  im1StatsCtrl.setAndMask(im1MaskVal)
662 
663  im2MaskVal = exposure2.getMask().getPlaneBitMask(self.config.maskNameList)
664  im2StatsCtrl = afwMath.StatisticsControl(self.config.nSigmaClipPtc,
665  self.config.nIterSigmaClipPtc,
666  im2MaskVal)
667  im2StatsCtrl.setNanSafe(True)
668  im2StatsCtrl.setAndMask(im2MaskVal)
669 
670  # Clipped mean of images; then average of mean.
671  mu1 = afwMath.makeStatistics(im1Area, afwMath.MEANCLIP, im1StatsCtrl).getValue()
672  mu2 = afwMath.makeStatistics(im2Area, afwMath.MEANCLIP, im2StatsCtrl).getValue()
673  if np.isnan(mu1) or np.isnan(mu2):
674  self.log.warn(f"Mean of amp in image 1 or 2 is NaN: {mu1}, {mu2}.")
675  return np.nan, np.nan, None
676  mu = 0.5*(mu1 + mu2)
677 
678  # Take difference of pairs
679  # symmetric formula: diff = (mu2*im1-mu1*im2)/(0.5*(mu1+mu2))
680  temp = im2Area.clone()
681  temp *= mu1
682  diffIm = im1Area.clone()
683  diffIm *= mu2
684  diffIm -= temp
685  diffIm /= mu
686 
687  diffImMaskVal = diffIm.getMask().getPlaneBitMask(self.config.maskNameList)
688  diffImStatsCtrl = afwMath.StatisticsControl(self.config.nSigmaClipPtc,
689  self.config.nIterSigmaClipPtc,
690  diffImMaskVal)
691  diffImStatsCtrl.setNanSafe(True)
692  diffImStatsCtrl.setAndMask(diffImMaskVal)
693 
694  varDiff = 0.5*(afwMath.makeStatistics(diffIm, afwMath.VARIANCECLIP, diffImStatsCtrl).getValue())
695 
696  # Get the mask and identify good pixels as '1', and the rest as '0'.
697  w1 = np.where(im1Area.getMask().getArray() == 0, 1, 0)
698  w2 = np.where(im2Area.getMask().getArray() == 0, 1, 0)
699 
700  w12 = w1*w2
701  wDiff = np.where(diffIm.getMask().getArray() == 0, 1, 0)
702  w = w12*wDiff
703 
704  if np.sum(w) < self.config.minNumberGoodPixelsForFft:
705  self.log.warn(f"Number of good points for FFT ({np.sum(w)}) is less than threshold "
706  f"({self.config.minNumberGoodPixelsForFft})")
707  return np.nan, np.nan, None
708 
709  maxRangeCov = self.config.maximumRangeCovariancesAstier
710  if covAstierRealSpace:
711  covDiffAstier = computeCovDirect(diffIm.getImage().getArray(), w, maxRangeCov)
712  else:
713  shapeDiff = diffIm.getImage().getArray().shape
714  fftShape = (fftSize(shapeDiff[0] + maxRangeCov), fftSize(shapeDiff[1]+maxRangeCov))
715  c = CovFft(diffIm.getImage().getArray(), w, fftShape, maxRangeCov)
716  covDiffAstier = c.reportCovFft(maxRangeCov)
717 
718  return mu, varDiff, covDiffAstier
719 
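
A numpy sketch of the statistics computed here, without the afwMath clipping and mask handling; the gain and flux values are made up for the toy data:

import numpy as np

rng = np.random.default_rng(7)
gain = 1.2                                           # e/DN (assumed)
nElec = 50000.0
im1 = rng.poisson(nElec, size=(200, 200)) / gain     # two flats at the same flux, in DN
im2 = rng.poisson(nElec, size=(200, 200)) / gain

mu1, mu2 = im1.mean(), im2.mean()
mu = 0.5*(mu1 + mu2)

# Symmetric difference from the listing: diff = (mu2*im1 - mu1*im2)/(0.5*(mu1 + mu2))
diffIm = (mu2*im1 - mu1*im2) / mu
varDiff = 0.5*diffIm.var()

print(mu, varDiff, mu/gain)  # for Poisson flats, varDiff should be close to mu/gain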

◆ runDataRef()

def lsst.cp.pipe.ptc.MeasurePhotonTransferCurveTask.runDataRef (   self,
  dataRefList 
)
Run the Photon Transfer Curve (PTC) measurement task.

For a dataRef (one per detector here), and given a list of exposure pairs
(postISR) taken at different exposure times, measure the PTC.

Parameters
----------
dataRefList : `list` [`lsst.daf.persistence.ButlerDataRef`]
    Data references for exposures for detectors to process.

Definition at line 245 of file ptc.py.

245  def runDataRef(self, dataRefList):
246  """Run the Photon Transfer Curve (PTC) measurement task.
247 
248  For a dataRef (which is each detector here),
249  and given a list of exposure pairs (postISR) at different exposure times,
250  measure the PTC.
251 
252  Parameters
253  ----------
254  dataRefList : `list` [`lsst.daf.persistence.ButlerDataRef`]
255  Data references for exposures for detectors to process.
256  """
257  if len(dataRefList) < 2:
258  raise RuntimeError("Insufficient inputs to combine.")
259 
260  # setup necessary objects
261  dataRef = dataRefList[0]
262 
263  detNum = dataRef.dataId[self.config.ccdKey]
264  camera = dataRef.get('camera')
265  detector = camera[dataRef.dataId[self.config.ccdKey]]
266 
267  amps = detector.getAmplifiers()
268  ampNames = [amp.getName() for amp in amps]
269  datasetPtc = PhotonTransferCurveDataset(ampNames, self.config.ptcFitType)
270 
271  # Get the pairs of flat indexed by expTime
272  expPairs = self.makePairs(dataRefList)
273  expIds = []
274  for (exp1, exp2) in expPairs.values():
275  id1 = exp1.getInfo().getVisitInfo().getExposureId()
276  id2 = exp2.getInfo().getVisitInfo().getExposureId()
277  expIds.append((id1, id2))
278  self.log.info(f"Measuring PTC using {expIds} exposures for detector {detector.getId()}")
279 
280  # get photodiode data early so that logic can be put in to only use the
281  # data if all files are found, as partial corrections are not possible
282  # or at least require significant logic to deal with
283  if self.config.doPhotodiode:
284  for (expId1, expId2) in expIds:
285  charges = [-1, -1] # necessary to have a not-found value to keep lists in step
286  for i, expId in enumerate([expId1, expId2]):
287  # //1000 is a Gen2 only hack, working around the fact an
288  # exposure's ID is not the same as the expId in the
289  # registry. Currently expId is concatenated with the
290  # zero-padded detector ID. This will all go away in Gen3.
291  dataRef.dataId['expId'] = expId//1000
292  if self.config.photodiodeDataPath:
293  photodiodeData = getBOTphotodiodeData(dataRef, self.config.photodiodeDataPath)
294  else:
295  photodiodeData = getBOTphotodiodeData(dataRef)
296  if photodiodeData: # default path stored in function def to keep task clean
297  charges[i] = photodiodeData.getCharge()
298  else:
299  # full expId (not //1000) here, as that encodes the
300  # detector number and so is fully qualifying
301  self.log.warn(f"No photodiode data found for {expId}")
302 
303  for ampName in ampNames:
304  datasetPtc.photoCharge[ampName].append((charges[0], charges[1]))
305  else:
306  # Can't be an empty list, as initialized, because astropy.Table won't allow it
307  # when saving as fits
308  for ampName in ampNames:
309  datasetPtc.photoCharge[ampName] = np.repeat(np.nan, len(expIds))
310 
311  for ampName in ampNames:
312  datasetPtc.inputExpIdPairs[ampName] = expIds
313 
314  maxMeanSignalDict = {ampName: 1e6 for ampName in ampNames}
315  minMeanSignalDict = {ampName: 0.0 for ampName in ampNames}
316  for ampName in ampNames:
317  if 'ALL_AMPS' in self.config.maxMeanSignal:
318  maxMeanSignalDict[ampName] = self.config.maxMeanSignal['ALL_AMPS']
319  elif ampName in self.config.maxMeanSignal:
320  maxMeanSignalDict[ampName] = self.config.maxMeanSignal[ampName]
321 
322  if 'ALL_AMPS' in self.config.minMeanSignal:
323  minMeanSignalDict[ampName] = self.config.minMeanSignal['ALL_AMPS']
324  elif ampName in self.config.minMeanSignal:
325  minMeanSignalDict[ampName] = self.config.minMeanSignal[ampName]
326 
327  tupleRecords = []
328  allTags = []
329  for expTime, (exp1, exp2) in expPairs.items():
330  expId1 = exp1.getInfo().getVisitInfo().getExposureId()
331  expId2 = exp2.getInfo().getVisitInfo().getExposureId()
332  tupleRows = []
333  nAmpsNan = 0
334  tags = ['mu', 'i', 'j', 'var', 'cov', 'npix', 'ext', 'expTime', 'ampName']
335  for ampNumber, amp in enumerate(detector):
336  ampName = amp.getName()
337  # covAstier: (i, j, var (cov[0,0]), cov, npix)
338  doRealSpace = self.config.covAstierRealSpace
339  muDiff, varDiff, covAstier = self.measureMeanVarCov(exp1, exp2, region=amp.getBBox(),
340  covAstierRealSpace=doRealSpace)
341 
342  if np.isnan(muDiff) or np.isnan(varDiff) or (covAstier is None):
343  msg = (f"NaN mean or var, or None cov in amp {ampName} in exposure pair {expId1},"
344  f" {expId2} of detector {detNum}.")
345  self.log.warn(msg)
346  nAmpsNan += 1
347  continue
348  if (muDiff <= minMeanSignalDict[ampName]) or (muDiff >= maxMeanSignalDict[ampName]):
349  continue
350 
351  datasetPtc.rawExpTimes[ampName].append(expTime)
352  datasetPtc.rawMeans[ampName].append(muDiff)
353  datasetPtc.rawVars[ampName].append(varDiff)
354 
355  tupleRows += [(muDiff, ) + covRow + (ampNumber, expTime, ampName) for covRow in covAstier]
356  if nAmpsNan == len(ampNames):
357  msg = f"NaN mean in all amps of exposure pair {expId1}, {expId2} of detector {detNum}."
358  self.log.warn(msg)
359  continue
360  allTags += tags
361  tupleRecords += tupleRows
362  covariancesWithTags = np.core.records.fromrecords(tupleRecords, names=allTags)
363 
364  for ampName in datasetPtc.ampNames:
365  # Sort raw vectors by rawMeans index
366  index = np.argsort(datasetPtc.rawMeans[ampName])
367  datasetPtc.rawExpTimes[ampName] = np.array(datasetPtc.rawExpTimes[ampName])[index]
368  datasetPtc.rawMeans[ampName] = np.array(datasetPtc.rawMeans[ampName])[index]
369  datasetPtc.rawVars[ampName] = np.array(datasetPtc.rawVars[ampName])[index]
370 
371  if self.config.ptcFitType in ["FULLCOVARIANCE", ]:
372  # Calculate covariances and fit them, including the PTC, to Astier+19 full model (Eq. 20)
373  # First, fit the regular PTC (C_00 vs. mu) to get the flat pairs that should be masked.
374  # The points at those fluxes will also be masked when calculating the other covariances, C_ij.
375  newDatasetPtc = copy.copy(datasetPtc)
376  newDatasetPtc = self.fitPtc(newDatasetPtc, 'EXPAPPROXIMATION')
377  for ampName in datasetPtc.ampNames:
378  datasetPtc.expIdMask[ampName] = newDatasetPtc.expIdMask[ampName]
379 
380  datasetPtc.fitType = "FULLCOVARIANCE"
381  datasetPtc = self.fitCovariancesAstier(datasetPtc, covariancesWithTags)
382  elif self.config.ptcFitType in ["EXPAPPROXIMATION", "POLYNOMIAL"]:
383  # Fit the PTC to a polynomial or to Astier+19 exponential approximation (Eq. 16)
384  # Fill up PhotonTransferCurveDataset object.
385  datasetPtc = self.fitPtc(datasetPtc, self.config.ptcFitType)
386 
387  detName = detector.getName()
388  now = datetime.datetime.utcnow()
389  calibDate = now.strftime("%Y-%m-%d")
390  butler = dataRef.getButler()
391 
392  datasetPtc.updateMetadata(setDate=True, camera=camera, detector=detector)
393 
394  # Fit a polynomial to calculate non-linearity and persist the linearizer.
395  if self.config.doCreateLinearizer:
396  # Fit (non)linearity of signal vs time curve.
397  # Fill up PhotonTransferCurveDataset object.
398  # Fill up array for LUT linearizer (tableArray).
399  # Produce coefficients for Polynomial and Squared linearizers.
400  # Build linearizer objects.
401  dimensions = {'camera': camera.getName(), 'detector': detector.getId()}
402  linearityResults = self.linearity.run(datasetPtc, camera, dimensions)
403  linearizer = linearityResults.outputLinearizer
404 
405  self.log.info("Writing linearizer:")
406 
407  detName = detector.getName()
408  now = datetime.datetime.utcnow()
409  calibDate = now.strftime("%Y-%m-%d")
410 
411  butler.put(linearizer, datasetType='linearizer',
412  dataId={'detector': detNum, 'detectorName': detName, 'calibDate': calibDate})
413 
414  self.log.info("Writing PTC data.")
415  butler.put(datasetPtc, datasetType='photonTransferCurveDataset', dataId={'detector': detNum,
416  'detectorName': detName, 'calibDate': calibDate})
417 
418  return pipeBase.Struct(exitStatus=0)
419 
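
As a hypothetical driver sketch (not from the source): since this is a CmdLineTask with RunnerClass = DataRefListRunner, it is normally run through its Gen2 command-line wrapper, which corresponds roughly to the programmatic call below. The repository path, rerun name, and data-id keys are placeholders and depend on the obs package in use.

from lsst.cp.pipe.ptc import MeasurePhotonTransferCurveTask

MeasurePhotonTransferCurveTask.parseAndRun(args=[
    "/path/to/repo",
    "--rerun", "ptcRerun",
    "--id", "detector=0",
    "-c", "ptcFitType=EXPAPPROXIMATION",
])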

Member Data Documentation

◆ ConfigClass

lsst.cp.pipe.ptc.MeasurePhotonTransferCurveTask.ConfigClass = MeasurePhotonTransferCurveTaskConfig
static

Definition at line 234 of file ptc.py.

◆ RunnerClass

lsst.cp.pipe.ptc.MeasurePhotonTransferCurveTask.RunnerClass = DataRefListRunner
static

Definition at line 233 of file ptc.py.


The documentation for this class was generated from the following file:
ptc.py