
DEPRECATED

This space is obsolete and unmaintained.


This is a first attempt at a list of the LSST SQuaRE metrics.  These metrics are for quality checks of both the data and the software.

Feel free to comment or suggest other metrics!

Alert Production (AP) Metrics

We need QA metrics for the single-frame real-time processing (AP) of the images to make sure that the data isn't useless. Here are some ideas:

  • Background/sky level: Don't want it too high; a very high sky level could mean we are looking at the Moon, or it's too early to be observing.
  • Structure in background: A lot of structure in the background level across a chip can be caused by clouds or by electronics problems (at least one DECam chip has this problem).
  • Seeing: Measure the seeing, i.e. the PSF FWHM; if it's terrible then maybe stop the rest of the processing. With 6 arcsec seeing we probably won't get useful difference images or transient detections.
  • Check for transparency/clouds: Use the magnitude zeropoint from known stars; if there are clouds then maybe we should stop the processing. Also check how much the zeropoint varies across the field.
  • Other basic QA checks to make sure that something bad hasn't happened to the telescope/camera system, e.g. garbage/noise in the images due to an electronics problem. "Health" checks.
  • Tracking problems: Look for streaks of stars in the images.
  • Automated check for "linear" features in the image that could be satellites, airplanes, meteors, etc.
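As a rough sketch, the per-exposure health checks above could be rolled into a single pass/fail gate. The thresholds below are illustrative placeholders, not survey requirements, and the function names are hypothetical:

```python
import numpy as np

# Illustrative limits -- the real values would come from survey requirements.
MAX_SKY_LEVEL = 5000.0   # counts/pixel; very high sky may mean moonlight or twilight
MAX_FWHM = 3.0           # arcsec; beyond this, difference imaging is unlikely to work
MAX_ZP_SCATTER = 0.05    # mag; large zeropoint scatter across the field suggests clouds


def exposure_health(sky_pixels, star_fwhms, star_zeropoints):
    """Return pass/fail flags for one exposure (sketch only)."""
    zp = np.asarray(star_zeropoints, dtype=float)
    flags = {
        "sky_ok": np.median(sky_pixels) < MAX_SKY_LEVEL,
        "seeing_ok": np.median(star_fwhms) < MAX_FWHM,
        # MAD-based robust scatter of per-star zeropoints; clouds inflate this.
        "transparency_ok": 1.4826 * np.median(np.abs(zp - np.median(zp)))
        < MAX_ZP_SCATTER,
    }
    flags["proceed"] = all(flags.values())
    return flags
```

Each flag is independent, so a downstream system could choose to stop processing on some failures (e.g. terrible seeing) while only warning on others.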

Difference images:

  • Check for residuals in the difference images around bright, isolated stars that might indicate a problem with the difference images
  • Check the PSF FWHM of the image and the template to make sure they are actually similar
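A minimal sketch of the second check; the ratio limit is an assumption, not an LSST requirement:

```python
def fwhm_compatible(image_fwhm, template_fwhm, max_ratio=1.5):
    """Check that image and template seeing are similar enough to difference.

    Convolution can only degrade the sharper image to match the blurrier one,
    so a large mismatch in either direction is suspicious. max_ratio is an
    illustrative limit.
    """
    wide = max(image_fwhm, template_fwhm)
    narrow = min(image_fwhm, template_fwhm)
    return wide / narrow <= max_ratio
```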

Calibration Metrics

We need QA metrics for the calibration products (e.g., flats, biases, etc.) to make sure that the individual ones are good enough and also track how they are changing with time.

Individual calibration frame checks:

  • Domeflats: Need decent S/N, maybe 5000 counts minimum, but not so high that the signal approaches saturation or the nonlinear regime. Also, make sure that the median signal for each chip isn't dramatically different from the last few times the dome flats were taken.
  • Bias: Make sure it's consistent with biases from the last few days.
  • Darks: Make sure they are long enough to detect the dark current. Also make sure that the frames are consistent with the darks from the last few days.
  • Linearity?
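The domeflat check could look something like the sketch below; all the limits are illustrative placeholders, and the history would come from whatever database tracks past calibrations:

```python
import numpy as np


def domeflat_ok(flat_counts, history_medians,
                min_counts=5000.0, max_counts=40000.0, max_frac_change=0.1):
    """Sketch of a per-chip domeflat acceptance check.

    flat_counts: pixel values for one chip of the new domeflat.
    history_medians: median counts from the last few domeflats of the same
    chip/filter. All thresholds are illustrative, not requirements.
    """
    med = np.median(flat_counts)
    # Signal window: enough S/N but safely below saturation/nonlinearity.
    if not (min_counts <= med <= max_counts):
        return False
    # Consistency with recent history, if any exists.
    if len(history_medians):
        ref = np.median(history_medians)
        if abs(med - ref) / ref > max_frac_change:
            return False
    return True
```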

Track with time:

  • Number of bad pixels per chip
  • Bias level in each chip
  • Median domeflat counts per chip/filter
  • Median dark count rate per chip
  • Readnoise for each amplifier
  • Gain per amplifier?
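A simple way to track any of these quantities is to flag the newest value when it drifts away from the recent history. This sketch uses a MAD-based robust sigma; the 5-sigma threshold is an arbitrary example:

```python
import numpy as np


def flag_drift(values, n_sigma=5.0):
    """Flag the newest value in a tracked series (e.g. per-chip bias level,
    read noise, gain) if it deviates from the historical median by more than
    n_sigma robust standard deviations. Purely illustrative.
    """
    history = np.asarray(values[:-1], dtype=float)
    latest = float(values[-1])
    med = np.median(history)
    sigma = 1.4826 * np.median(np.abs(history - med))  # MAD-based sigma
    if sigma == 0:
        return latest != med
    return abs(latest - med) > n_sigma * sigma
```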

Data Release Products (DRP) Metrics

Instrument Signature Removal (ISR)

  • Cross-talk: Double-check that cross-talk has actually been removed. See if there is still a signature of bright stars in one chip in other chips.
  • Bias correction
  • Mask bad pixels
  • Check saturation and bleed problems
  • Check the noise in the background of the image. Is it similar to expectations?
  • Issues from flat fielding / illumination corrections
  • Issues from fringe corrections: Check if there are any fringe patterns left in the image, perhaps using a Fourier transform.
  • Check the background level across the field. If gain corrections are applied then the background level should be pretty smooth once ISR is finished.
  • If there's a sky/background subtraction stage (i.e. setting the background close to zero), then check that this hasn't created artifacts
  • Check for leftover cosmic rays that got through the "repair" step.
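The Fourier-transform idea for detecting leftover fringes could be sketched like this. The frequency band is an assumption that would need to be tuned per camera, and interpreting the ratio requires a baseline from clean images:

```python
import numpy as np


def fringe_power_ratio(image, k_min=0.05, k_max=0.3):
    """Rough check for residual fringe patterns after ISR.

    Computes the fraction of (mean-subtracted) Fourier power in a
    mid-frequency annulus where fringes typically live. Frequencies are in
    cycles per pixel. Band limits are illustrative assumptions.
    """
    f = np.fft.fftshift(np.fft.fft2(image - np.mean(image)))
    power = np.abs(f) ** 2
    ny, nx = image.shape
    ky, kx = np.meshgrid(np.fft.fftshift(np.fft.fftfreq(ny)),
                         np.fft.fftshift(np.fft.fftfreq(nx)), indexing="ij")
    k = np.hypot(kx, ky)
    band = (k >= k_min) & (k <= k_max)
    return power[band].sum() / power.sum()
```

For white noise the ratio is just the area fraction of the band; a strongly fringed image concentrates almost all its power there, so a large excess over the noise baseline is the warning sign.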

Point Spread Function (PSF)

  • Check the seeing: is it consistent across the chip/focal plane? It should be; if not, there might be a problem.
  • Check the shape of the PSF: is it elliptical, weird, double/bimodal? If the telescope is out of focus you can get bimodal PSFs or even donuts.
  • Check that the PSF-modeling code allows enough spatial variation across the chip/focal plane. If the model forces the PSF to be constant across a chip when it's actually varying, then this is a problem.
  • Check for brighter-fatter effects, i.e. the PSF FWHM as a function of magnitude.
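The brighter-fatter check (PSF FWHM as a function of magnitude) reduces to fitting a slope. This sketch assumes matched arrays of stellar magnitudes and per-star FWHM measurements:

```python
import numpy as np


def brighter_fatter_slope(mags, fwhms):
    """Fit FWHM as a linear function of magnitude for stars (sketch).

    For an ideal detector the slope is ~0. A significantly negative slope
    (brighter stars, i.e. smaller magnitudes, measuring fatter) is the
    brighter-fatter signature. Returns the slope in arcsec per magnitude.
    """
    slope, _intercept = np.polyfit(mags, fwhms, 1)
    return slope
```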

World Coordinate System (WCS)

  • Check the astrometric scatter around the WCS fit for reference stars. See how this varies across the focal plane. If it varies a lot then there's a problem. For example, large scatter at the edges might indicate that there's a problem with the distortion terms.
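One way to see how the scatter varies across the focal plane is to bin the WCS-fit residuals by radius from the field center. This is only a sketch; the number and shape of the bins are arbitrary choices:

```python
import numpy as np


def astrometric_scatter_by_radius(x, y, resid_arcsec, n_bins=4):
    """RMS astrometric residual in annular bins of focal-plane radius (sketch).

    x, y: star positions relative to the field center.
    resid_arcsec: per-star residuals from the WCS fit.
    A strong rise toward the largest radii hints at problems with the
    distortion terms.
    """
    r = np.hypot(x, y)
    edges = np.linspace(0.0, r.max() * (1 + 1e-9), n_bins + 1)
    rms = []
    for lo, hi in zip(edges[:-1], edges[1:]):
        sel = (r >= lo) & (r < hi)
        rms.append(np.sqrt(np.mean(resid_arcsec[sel] ** 2)) if sel.any() else np.nan)
    return np.array(rms)
```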

Source Detection

  • Check for lots of sources detected around bright, saturated stars.

Photometric Measurement

  • Check that the aperture corrections have the right sign, and see how they vary across the field. Should be pretty uniform.

Coadd

  • Check the S/N in the background and the stars; is it what you expect?
  • Check that regions around bright/saturated stars are okay

Multi-fit / Forced Photometry

Catalog-Level Checks

  • Check the depth of the photometry; is it as deep as expected?
  • Check resulting chi/sharp or equivalent PSF/shape metrics. Are the median/sigma values as expected? How do they vary with magnitude?
  • Check aperture corrections
  • Check calibrated photometry against our master/reference LSST catalog
    • Is it as expected?
    • Any color, airmass or magnitude trends?
    • Is the scatter with respect to the reference catalog as expected and similar to the photometric noise?
  • Are we getting the S/N that we should be getting according to the model?
  • At the catalog level we can:
    • Compare stack-reduced image to a reference catalog (e.g. SDSS Stripe82)
    • Compare reductions of images with the stack and other software packages (e.g. DAOPHOT, DoPHOT, SExtractor, etc.)
    • Compare stack reductions of simulated images to truth tables.
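A sketch of the reference-catalog comparison, assuming the two catalogs are already cross-matched star by star; it returns the median offset, a robust scatter, and a color-term slope (a significantly nonzero slope would indicate a color trend):

```python
import numpy as np


def compare_to_reference(mag, ref_mag, color):
    """Compare calibrated magnitudes to a matched reference catalog (sketch).

    mag, ref_mag: matched magnitudes from our catalog and the reference.
    color: a color (e.g. g-r) for each matched star.
    Returns (median offset, MAD-based scatter, slope of residual vs color).
    """
    resid = mag - ref_mag
    med = np.median(resid)
    scatter = 1.4826 * np.median(np.abs(resid - med))
    color_slope, _intercept = np.polyfit(color, resid, 1)
    return med, scatter, color_slope
```

The same residuals could be fit against airmass or magnitude to look for the other trends listed above.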

Photometric Calibration

  • Check the calibration by measuring the scatter in the stellar locus across a field.  Compare this to the stellar locus scatter in the instrumental photometry.  This isn't an absolute calibration check, but it's a check that the calibration terms are actually improving the relative calibration over a larger field.
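The stellar-locus check might be sketched as follows, approximating the locus as a straight line in color-color space. Real loci are curved, so this is only illustrative; running it on both instrumental and calibrated photometry and comparing the widths is the actual check:

```python
import numpy as np


def stellar_locus_width(gr, ri):
    """Robust width of the stellar locus in g-r vs r-i color space (sketch).

    Fits a straight line to the locus and returns the MAD-based scatter of
    the perpendicular residuals. A smaller width for calibrated photometry
    than for instrumental photometry means the calibration terms are
    tightening the relative calibration.
    """
    slope, intercept = np.polyfit(gr, ri, 1)
    # Perpendicular distance from each star to the fitted line.
    perp = (ri - (slope * gr + intercept)) / np.hypot(1.0, slope)
    return 1.4826 * np.median(np.abs(perp - np.median(perp)))
```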


Other

  • Issues due to differential refraction not being dealt with properly
  • Issues in crowded regions?
  • Resampling/warping problems?
  • Calibrating/rescaling of image problems

Key Performance Metrics (KPM)


2 Comments

  1. Unknown User (wmwood-vasey)

    I would suggest that the injection and tracking of fake sources in a parallel processing stream would be most useful for rough checks of consistency in levels, zeropoints, and saturation.

    • For DRP this should be a substantial effort.
    • For Alert Production one might inject fakes for 1/10 amplifiers for 1/10 exposures to track real-time performance.
    • For Alert Production, one should track the number of sources detected in the image difference as a fraction of the typical density of sources in that region of the sky.

     

    In more generality,

    http://arxiv.org/abs/1507.05137  "The Difference Imaging Pipeline for the Transient Search in the Dark Energy Survey", Kessler et al. 

    contains useful tests that were done for the Dark Energy Survey.  I would suggest implementing many of the tests described in this Kessler et al. paper.  One might be tempted to dismiss many of them because LSST will "of course" be better, but that means LSST should trivially pass them.  In addition, they provide a useful set of metrics to measure quality and progress against during development.

    Steps taken in the Kessler et al. DES image-differencing testing and quality monitoring:

    • Fakes are injected in two classes:
      1. 20th-magnitude sources
      2. "SNe Ia" placed at a range of magnitudes following the underlying light profiles of selected galaxies.  These come from actual SNANA simulations of SN lightcurves based on the underlying properties of SN populations.  The PSF is taken from the image PSF model, as for the 20th-magnitude sources.

    I don't think it's as necessary to sample from realistic distributions of SN properties.  Rather I think it would be good to have persistent fakes that can be tracked to verify lightcurves are recovered correctly.

    Notes on Injecting Fake Sources

    • Requires estimate of PSF
      • Should be able to do simple Gaussian, modeled PSF, and empirical sampling of stars in field.
      • Getting this slightly wrong will likely be revealed in the image differencing as one will then be comparing 2 different things: 
        1. Ability to subtract Real PSF(Image 1) - Real PSF(Template/Image 2)
        2. Ability to subtract Sim PSF (Image 1) - Real PSF(Template/Image 2)
    • Requires estimate of zeropoint
      • If one isn't worried about 10% effects, this is much easier to get roughly right.
      • Will require more detailed iteration for DRP verification.
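The injection itself can be sketched with a simple circular Gaussian; as the notes above say, a real injection would use the modeled or empirical PSF, so this is only the simplest of the three options:

```python
import numpy as np


def inject_gaussian_fake(image, x0, y0, flux, fwhm_pix):
    """Add a fake point source with a circular-Gaussian PSF (sketch only).

    image: 2-D array of counts; (x0, y0): injection position in pixels;
    flux: total counts to inject; fwhm_pix: assumed PSF FWHM in pixels.
    """
    sigma = fwhm_pix / 2.3548  # convert FWHM to Gaussian sigma
    ny, nx = image.shape
    y, x = np.mgrid[0:ny, 0:nx]
    psf = np.exp(-((x - x0) ** 2 + (y - y0) ** 2) / (2.0 * sigma ** 2))
    psf /= psf.sum()  # normalize so the injected flux is exact on the grid
    return image + flux * psf
```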