This page gives an overview of the part of the Data Release Production (DRP) that performs Level 2 "single-frame" processing, resulting in calibrated exposures.

Baseline Documents

Also relevant are:

  • LSE-180 (Level 2 Calibration Plan, as there may be some applicability even to Level 1)

Inputs (for a nominal science visit)

  • Two raw "snap" images from the archive (note: not crosstalk-corrected, unlike Level 1, where the Camera Data System applies the correction)
  • Calibration "master" frames and models:
    • Bias (from Calibration Products Production, CPP, as needed)
    • Other amp/CCD info (gains, read noise, brighter-fatter coefficients, ...)
    • Dark (if necessary, from CPP as needed)
    • Non-linearity (from CPP as needed)
    • Flat (from CPP)
      • RHL: the details of flat generation are TBD
    • Fringe (if necessary, from CPP as needed, modified by model fit?)
      • RHL: Probably multiple fringe frames (the OH emission results in more than one component)
    • Defect and hot pixel list (from CPP as needed)
    • Note: these master frames will be generated during the annual CPP run before the DRP begins
  • Astrometric and photometric reference catalog
  • Thresholds, default PSF, and other algorithm configuration parameters
    • The default PSF may be taken from the Level 1 processing output, but if so it needs to be recorded as a separate input for provenance purposes.

Overall Process

  1. For each "snap" image in a visit (an entire focal plane will be processed on a single machine, unlike Level 1):
    1. For each amplifier:
      1. Convert to floating point
      2. Correct crosstalk using information from other amplifiers (unlike Level 1, where this is done by the Camera Data System)
      3. Detect and mask (but do not interpolate) saturation (TBD: not mentioned in LDM-151)
      4. Do overscan correction by averaging the overscan columns in each row, fitting a 1D function, and subtracting row by row
      5. Do bias correction by subtracting master bias frame
      6. Do dark correction (if necessary) by subtracting master dark frame scaled by exposure time (RHL: coefficient possibly a function of temperature?)
    2. Assemble amplifiers into a CCD including trimming prescan/overscan
    3. Correct for non-linearity, along with any temperature dependence
    4. Do flat correction by dividing by a normalized master flat, assuming a nominal flat spectrum for all sources
      1. RHL: the choice of spectrum is still TBD. More likely an average sky spectrum.
    5. Do fringe correction if necessary depending on filter by subtracting a best-fit modelled fringe pattern frame
      1. RHL: Maybe more than one component. In theory it's not obvious that we should estimate the fringe coefficients per chip, but it's probably OK.
    6. Update the image variance (TBD: not mentioned in LDM-151)
    7. Mask and interpolate over defects (TBD: not mentioned in LDM-151)
    8. Unmask saturated hot pixels (mark them as only BAD, not SAT) (TBD: not mentioned in LDM-151)
    9. Interpolate over saturated pixels (TBD: not mentioned in LDM-151)
    10. Mask and interpolate over NaNs (TBD: not mentioned in LDM-151)
      1. RHL: where do these NaNs come from?
  2. Combine two "snap" CCD images from a visit:
    1. Reject cosmic rays based on two images (TBD: simple subtraction, morphological analysis, more?)
      1. RHL: we need a PSF before we can do morphological CR rejection. We'll probably do a morphological analysis on the difference between the images, but that depends on the atmosphere and telescope.
    2. Add images; assume no warping or realignment is necessary
      1. RHL: we won't know for sure until ComCam or beyond. It's the same question as whether we can do a straight subtraction for CR rejection. If we do need to do some simple warp/match, we'd do it before the CR step to allow us to subtract.
  3. Using a default PSF:
    1. Estimate the background and subtract it
      1. RHL: At high Galactic latitude we can probably avoid a subtraction; a single number can be added to the threshold instead. Down in the plane it's going to be more fun.
    2. Detect and do initial measurement of sources on the image
    3. Use sources to determine a PSF
      1. Second-moment, catalog, and object size star selectors are options
        1. RHL: Probably the catalog selector in steady state
      2. Use PCA to generate spatially-varying PSF model (TBD: How accurate does the PSF need to be for Level 1 processing?)
        1. RHL: PCA is a possible model of the individual PSFs. The spatial model is another question. One implementation of both aspects is the current pcaPsf.
  4. Now repeat using the real PSF:
    1. Estimate the background and subtract it (Note: this needs to change in Level 2 to include more information from neighboring sensors)
      1. Uses large cells (256 or 512 pixels on a side) and clipped mean
      2. Ignores pixels that are part of sources
      3. Akima spline used to estimate background level in each pixel
        1. RHL: I'm not sure of the algorithm: the cells, the clipped mean, and the spline are all TBD. But as we just need this for WCS/Photocal, it seems reasonable for Level 1.
    2. Detect and do initial measurement of sources on the image
    3. Use sources to do astrometric calibration to determine the WCS
  5. Do photometric zero-point determination by fitting the measured sources with a photometric catalog
    1. RHL: there's no single zero-point when it's cloudy. We'll need a model (TBD).
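To first order, the crosstalk correction in step 1 is the subtraction of scaled images of the other amplifiers. A minimal NumPy sketch of that idea follows; the function name, the array layout, and the coefficient matrix are illustrative assumptions, not the production interface:

```python
import numpy as np

def correct_crosstalk(amps, coeffs):
    """First-order crosstalk correction across amplifiers.

    amps   : array of shape (n_amp, ny, nx), raw amplifier images.
    coeffs : (n_amp, n_amp) matrix; coeffs[i, j] is the fraction of
             amplifier j's signal that leaks into amplifier i
             (zero on the diagonal).
    """
    amps = np.asarray(amps, dtype=float)
    # corrected_i = raw_i - sum_j coeffs[i, j] * raw_j
    return amps - np.einsum('ij,jyx->iyx', coeffs, amps)
```

In practice the coefficients would come from the "other amp/CCD info" inputs listed above, and a real implementation must also handle the differing readout orientations of the amplifiers.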
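The per-amplifier overscan, bias, and dark corrections in step 1 can be sketched as below. This is a minimal illustration: the array shapes, the low-order polynomial fit, and the scaling of the dark by exposure time alone (RHL notes the coefficient may also depend on temperature) are assumptions for illustration:

```python
import numpy as np

def correct_amplifier(raw, overscan, master_bias, master_dark=None,
                      exposure_time=1.0, fit_order=1):
    """Overscan, bias, and (optional) dark correction for one amplifier.

    raw, master_bias, master_dark : 2-D arrays for the amplifier's data region.
    overscan : 2-D array of overscan columns, same number of rows as `raw`.
    """
    image = raw.astype(np.float64)        # convert to floating point first

    # Overscan: average the overscan columns in each row, fit a low-order
    # 1-D polynomial along the rows, and subtract the fit row by row.
    row_means = overscan.mean(axis=1)
    rows = np.arange(len(row_means))
    coeffs = np.polyfit(rows, row_means, fit_order)
    image -= np.polyval(coeffs, rows)[:, np.newaxis]

    image -= master_bias                  # bias: subtract master bias frame

    if master_dark is not None:           # dark: scale by exposure time
        image -= master_dark * exposure_time

    return image
```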
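In the simplest case (straight subtraction, no warping), the snap combination of step 2 could look like the sketch below: pixels where the two snaps disagree by more than N sigma are flagged as cosmic rays and replaced using the other snap before summing. The threshold and the replacement strategy are illustrative assumptions, pending the questions RHL raises above:

```python
import numpy as np

def combine_snaps(snap1, snap2, noise_sigma, n_sigma=5.0):
    """Combine two snaps, rejecting cosmic rays via simple subtraction.

    Pixels where the snaps disagree by more than n_sigma times the expected
    noise in the difference are assumed to be cosmic rays in whichever snap
    is brighter, and are replaced by twice the value from the other snap.
    """
    diff = snap1 - snap2
    threshold = n_sigma * noise_sigma * np.sqrt(2.0)  # noise in the difference
    crs = np.abs(diff) > threshold

    combined = snap1 + snap2                          # no warping assumed
    # Replace CR-contaminated pixels using only the uncontaminated snap.
    combined[crs & (diff > 0)] = 2.0 * snap2[crs & (diff > 0)]
    combined[crs & (diff < 0)] = 2.0 * snap1[crs & (diff < 0)]
    return combined, crs
```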
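The PCA part of the PSF determination in step 3 amounts to finding a small basis of images that describes the variation among the selected stars. A sketch using SVD follows; the stamp normalization and the function name are assumptions, and the spatial model (how the basis coefficients vary across the CCD) is, as RHL notes, a separate question not addressed here:

```python
import numpy as np

def pca_psf_basis(star_stamps, n_components=3):
    """Derive a PCA basis for the PSF from postage stamps of selected stars.

    star_stamps : array of shape (n_stars, ny, nx), background-subtracted
                  and flux-normalized cutouts centred on PSF stars.
    Returns (mean_psf, components), each component an (ny, nx) image.
    """
    n_stars, ny, nx = star_stamps.shape
    flat = star_stamps.reshape(n_stars, -1)
    mean = flat.mean(axis=0)
    # Principal components of the residuals about the mean PSF.
    _, _, vt = np.linalg.svd(flat - mean, full_matrices=False)
    return mean.reshape(ny, nx), vt[:n_components].reshape(-1, ny, nx)
```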
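The background estimation of step 4.1 (clipped mean in large cells, ignoring source pixels, then interpolation back to full resolution) can be sketched as follows. All details here are assumptions, consistent with RHL's caveat that the cells, the clipped mean, and the spline are all TBD; in particular, simple bilinear interpolation stands in for the Akima spline:

```python
import numpy as np

def estimate_background(image, mask=None, cell=256, n_sigma=3.0):
    """Background model: clipped mean per cell, interpolated to each pixel.

    mask : boolean array, True for pixels belonging to detected sources,
           which are ignored when computing the cell statistics.
    """
    ny, nx = image.shape
    y0s = np.arange(0, ny, cell)
    x0s = np.arange(0, nx, cell)
    values = np.empty((len(y0s), len(x0s)))
    for i, y0 in enumerate(y0s):
        for j, x0 in enumerate(x0s):
            patch = image[y0:y0 + cell, x0:x0 + cell]
            if mask is not None:
                patch = patch[~mask[y0:y0 + cell, x0:x0 + cell]]
            # One pass of sigma clipping about the median, then a mean.
            med, sig = np.median(patch), np.std(patch)
            clipped = patch[np.abs(patch - med) < n_sigma * sig]
            values[i, j] = clipped.mean() if clipped.size else med
    # Interpolate the per-cell values back to full resolution; bilinear
    # interpolation here stands in for the Akima spline.
    yc = y0s + cell / 2.0
    xc = x0s + cell / 2.0
    rows = np.array([np.interp(np.arange(nx), xc, r) for r in values])
    return np.array([np.interp(np.arange(ny), yc, c) for c in rows.T]).T
```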
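For step 5, a single-number zero-point (valid, per RHL's comment, only in photometric conditions) is just the offset between instrumental and catalog magnitudes of the matched sources. A robust median estimator, sketched below, is one simple choice; the function name and interface are illustrative:

```python
import numpy as np

def zero_point(instrumental_flux, catalog_mag):
    """Zero-point from matched sources.

    instrumental_flux : measured fluxes (counts) of matched stars.
    catalog_mag       : magnitudes of the same stars from the reference catalog.
    """
    flux = np.asarray(instrumental_flux, dtype=float)
    mag = np.asarray(catalog_mag, dtype=float)
    good = flux > 0
    # m_catalog = -2.5 log10(flux) + ZP  =>  ZP = m_catalog + 2.5 log10(flux)
    return np.median(mag[good] + 2.5 * np.log10(flux[good]))
```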