
This document describes the procedure for validating the first non-beta release of the jointcal package as a replacement for meas_mosaic.  Longer term requirements for jointcal are described in a separate document (draft here).  Because meas_mosaic only runs on Hyper Suprime-Cam data, the scientific performance tests here are also focused on HSC data.  The jointcal beta already supports additional cameras and we expect it to continue to do so, but as there's nothing to replace for other cameras there are no short-term requirements for its performance on them.

Basic Requirements

  • jointcal shall be runnable as a command-line Task that takes the src and ref_cat datasets as input and produces (at least) a wcs and photoCalib dataset as outputs, which provide updated astrometric and photometric calibrations.
  • jointcal shall perform a different fit for each tract, and allow multiple tracts to be run in a single invocation (at least in serial, which is all meas_mosaic does).  Some visits may thus be processed in multiple tracts.
  • jointcal shall fit models that have a level of sophistication similar to those in meas_mosaic (if it fits simpler models, it will be because the tests below demonstrate that they are sufficient to generate similar-quality results).  That involves:
    • astrometry: a full-focal-plane polynomial transform for each visit composed with a rotation and translation for each CCD (which is the same for all visits in the fit).
    • photometry: a full-focal-plane polynomial scaling for each visit multiplied by a constant scaling for each CCD (which is the same for all visits in the fit) and the determinant of the Jacobian of the astrometric model.
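To make the model descriptions above concrete, here is a minimal numerical sketch of the two compositions. This is an illustration only, not jointcal's actual implementation: the function names, the polynomial order, and the coefficient layout are all hypothetical, chosen just to show a per-CCD rigid transform composed with a per-visit polynomial, and a photometric scaling that folds in the per-CCD constant and the Jacobian determinant.

```python
import numpy as np

def ccd_to_focal_plane(xy, rotation, translation):
    """Per-CCD rigid transform (the same for all visits in the fit)."""
    c, s = np.cos(rotation), np.sin(rotation)
    r = np.array([[c, -s], [s, c]])
    return xy @ r.T + translation

def visit_polynomial(xy, coeffs):
    """Per-visit 2nd-order polynomial transform of focal-plane coords.

    coeffs has shape (6, 2): one column of polynomial coefficients
    (1, x, y, x^2, xy, y^2) per output coordinate.
    """
    x, y = xy[..., 0], xy[..., 1]
    terms = np.stack([np.ones_like(x), x, y, x * x, x * y, y * y], axis=-1)
    return terms @ coeffs

def astrometric_model(xy_pix, rotation, translation, coeffs):
    """Full astrometric model: per-CCD rigid transform composed with the
    per-visit full-focal-plane polynomial."""
    return visit_polynomial(ccd_to_focal_plane(xy_pix, rotation, translation),
                            coeffs)

def photometric_scale(xy_fp, visit_coeffs, ccd_scale, jacobian_det):
    """Full photometric model: per-visit polynomial scaling (here 1st order)
    multiplied by the per-CCD constant and the determinant of the Jacobian
    of the astrometric model."""
    x, y = xy_fp[..., 0], xy_fp[..., 1]
    visit_poly = visit_coeffs[0] + visit_coeffs[1] * x + visit_coeffs[2] * y
    return visit_poly * ccd_scale * jacobian_det
```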

Science Quality Tests

As of DM-10728 and DM-10729, the validate_drp package can now (optionally) utilize the wcs and photoCalib datasets to calibrate the src dataset prior to computing its astrometric and photometric accuracy metrics.  To replace meas_mosaic, we require jointcal to match (no significant difference in metric value) or improve upon the following metrics as computed with meas_mosaic on HSC data.

  • AM1, AF1 (astrometric accuracy and outlier fraction on 5 arcmin scales).
  • AM2, AF2 (astrometric accuracy and outlier fraction on 20 arcmin scales).
  • Median astrometric RMS (left plot of validate_drp's check_astrometry plot).
  • PA1 (photometric repeatability).
  • Median photometric RMS for SNR>100 (upper-left plot of validate_drp's check_photometry plot).

"No significant difference" is tricky to define here because most of these metrics are not accompanied by uncertainty estimates; to estimate them, we'll use the RMS of the meas_mosaic metric results over different tracts of HSC Wide.  Differences smaller than 0.1 mmag for photometry or 0.1 mas for astrometry can also be ignored, even if larger than the cross-tract RMS (though I doubt the RMS across tracts will be that small).

We will run the metric comparison independently on each of at least 20 tracts of HSC-SSP Wide data, as well as on the (single-tract) HSC-SSP UDeep COSMOS dataset.  Any tracts with metrics more than one sigma worse than meas_mosaic's should be investigated more carefully to understand what (if anything) went wrong; this does not necessarily preclude acceptance of jointcal as a replacement for meas_mosaic, but it is a sign that we would like to understand the difference.
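The comparison rule described above could be sketched as follows. This is a hypothetical helper, not part of validate_drp: it takes per-tract metric values for both codes, uses the cross-tract RMS of the meas_mosaic values as the one-sigma uncertainty, ignores differences below the stated floor (0.1 mas for astrometry, 0.1 mmag for photometry), and returns the indices of tracts flagged for closer investigation.

```python
import numpy as np

def flag_tracts(jointcal_vals, mosaic_vals, floor, lower_is_better=True):
    """Flag tracts where jointcal is more than one sigma worse than
    meas_mosaic, with sigma taken as the cross-tract RMS of the
    meas_mosaic metric and differences below `floor` ignored."""
    jointcal_vals = np.asarray(jointcal_vals, dtype=float)
    mosaic_vals = np.asarray(mosaic_vals, dtype=float)
    sigma = np.std(mosaic_vals)          # cross-tract scatter as uncertainty
    diff = jointcal_vals - mosaic_vals   # positive = jointcal larger
    if not lower_is_better:
        diff = -diff                     # flip sign so positive = worse
    worse = (diff > sigma) & (diff > floor)
    return np.flatnonzero(worse)
```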

Doing the Work

Scheduling work is obviously a T/CAM responsibility, but it may be best to share the generation of the input data repositories for these tests with a re-run of the full HSC-SSP PDR1 (NCSA may have already scheduled some time for this later this cycle).

Final sign-off on the results of the tests is up to the Pipeline Scientist, with the understanding that (pending this test plan's review and approval) he'll only ask for significantly more testing than the above if the results are significantly worse/weirder than we expect.
