...

      • Gen3 (DM-32248)
        • Underway; issues reported on the ticket include OOM problems when consolidating forced sources.
    • What changed? Annotate
      • Repeatability metrics have fluctuated because of differences in the usage of the best photometric/astrometric calibrations; code changes should now make this harder to get wrong.
      • Gen2 (pipe_analysis) wPerp has gone up, while Gen3 wPerp (faro) has remained unchanged.  Known changes include FGCM robustness improvements that shouldn't have caused this kind of problem.  Looking for other potential culprits; other pipe_analysis coadd plots have changed, too.  Want to see how faro wPerp looks on a Gen2→Gen3 converted run.
      • Look into what happened to the stellar locus width between the w_2021_34 and w_2021_38 gen2 reruns.
      • Why is faro measuring the same for 9615 and not 9813?  Yusra will take a quick look, but let's revisit with w42. 
    • What do we expect from w_2021_46?
      • Processing changes:
        • Since w42, faro steps have been added to the HSC DRP pipeline (before w40 for obs_lsst); memory requests need to be set.
        • If we run DRPFakes.yaml on 9813, do we have any automatically generated plotDMs or metrics that will make it worth our while?  Is having the fakes output needed for developing such plots/metrics?  Do it for w46, or wait until Sophie has time after V&V ops time or in January?
        • DM-31777 is blocking adding dia to rc2_subset 
        • DM-32129: the default threshold applies as 0.5 arcsec for obs_subaru and 0.05 arcsec for obs_lsst/imsim.
        • Ask John whether we are thresholding on his new jointcal metrics to limit what goes into the coadds.
        • FM: I have some deblender bug fixes in the works.  At least the memory leak fix and the improper footprint threshold fix will be in (DM-32079).  It's possible it might have an effect on PSF metrics?  Probably not, since they're based on pixels that haven't been touched by scarlet.
        • DM-30284 is going to sort inputs by detector in makeCoaddTempExp (see the sketch after this list).
      • Changes toward Gen2-Gen3 parity:
      • What new metrics can we expect next time?
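As a rough illustration of the DM-30284 change mentioned above, here is a minimal sketch (not the actual makeCoaddTempExp code; the WarpInput structure, sort_inputs helper, and visit/detector values are made up for illustration): the idea is simply to impose a reproducible (visit, detector) ordering on the per-detector inputs before they are combined, so gen2 and gen3 see the same order regardless of how the inputs were generated.

```python
from typing import NamedTuple


class WarpInput(NamedTuple):
    """Stand-in for one per-detector input to the warp/coadd step."""
    visit: int
    detector: int
    # pixel data, WCS, PSF, etc. would live here in the real task


def sort_inputs(inputs: list[WarpInput]) -> list[WarpInput]:
    """Return the inputs in a deterministic (visit, detector) order."""
    return sorted(inputs, key=lambda w: (w.visit, w.detector))


# Toy example: whatever order the inputs arrive in, the sorted order is fixed.
inputs = [WarpInput(19694, 42), WarpInput(19694, 7), WarpInput(19680, 93)]
print(sort_inputs(inputs))
# [WarpInput(visit=19680, detector=93), WarpInput(visit=19694, detector=7),
#  WarpInput(visit=19694, detector=42)]
```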
  • Review DC2
    • w_2021_40
      • gen3:
      • gen2 (DM-32071): this processing was done having set up w_2021_40 + the ticket branches from DM-30284 in an effort to achieve gen2/gen3 parity.  In that ticket, various gen2 updates are made to mimic the gen3 behavior for setting exposureIds.  These are important as they are typically used for random number seeding, and thus affect the results.  Those issues should all be solved in this run, but there is one lingering difference, which is due to the gen3 order of inputs not being deterministic.  I missed it at first as the inputs are sorted by visit in the code, but it turns out the detector order also matters.  In gen2 it seems to be systematic (and always in increasing id order), but it can vary in gen3 (and I think can vary with how the pipeline is run → it seems to be the same for the gen3/bps/DRP vs. gen2 runs, but in some one-off testing I ended up with a different ordering and was freaking out just a little, since thus far the warps had been comparing identical!).  Here is an example of the difference in the warps just from a different detector input ordering:
        So, with all the exposureId fixes, we are only left with a slight difference in the PSFs (for which the detector ordering in gen3 just doesn't seem to ever match that of gen2).  At this stage I was left with differences appearing in the deblender phase.  Since scarlet is expected to be rather unstable in certain (numerical) circumstances, I had attributed these differences to the above tiny differences in the PSF – all other inputs are identical as far as I can glean.  However, with the gen3 ordering of the detectors, I ran the coadd stages again and indeed got identical coaddPsfs...but I still get differences at the deblender stage.  So, paging Fred Moolekamp as to whether he can think of any reason gen2 vs. gen3 scarlet runs could differ (again, as far as I can tell, all the inputs going in with all the fixes on DM-30284 are identical).  A toy numpy illustration of this ordering sensitivity appears below.
        • Ok, so where are we with the current run, which only suffers the above small PSF differences going in?  Even better than before, and at difference levels so small as to potentially be scientifically uninteresting.  Direct gen2/gen3 comparison plots can be perused at https://lsst.ncsa.illinois.edu/~lauren/DC2.2i_gen2/w_2021_40/vsGen3/plots and here are a few examples (all r-band):
        • I still feel slightly uncomfortable not knowing the root cause of these differences (and the possibility that the deblender is sensitive to some as-yet-unknown cause that may lead to non-reproducibility), but it may be time to start considering the cost/benefit of chasing these lingering differences down.
      • Gen3 Pipeline for DP0.2 (backport-v23 label)
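To make the ordering sensitivity discussed above concrete, here is a toy, self-contained numpy example (not LSST code; the stamps and weights are random placeholders).  It shows that combining the same per-detector inputs in a different order changes the result only at the floating-point round-off level, because floating-point summation is not associative; differences of this size are harmless on their own, but a numerically touchy downstream step (e.g. the deblender) can amplify them.

```python
import numpy as np

rng = np.random.default_rng(12345)

# Fake per-detector PSF stamps and weights (purely illustrative).
n_det, size = 50, 25
stamps = rng.random((n_det, size, size))
weights = rng.random(n_det)

def weighted_mean(order):
    """Accumulate a weighted-mean image in the given detector order."""
    acc = np.zeros((size, size))
    wsum = 0.0
    for i in order:
        acc += weights[i] * stamps[i]
        wsum += weights[i]
    return acc / wsum

gen2_like = np.arange(n_det)           # systematic, increasing detector id
gen3_like = rng.permutation(n_det)     # order depends on how the run happened to proceed

a = weighted_mean(gen2_like)
b = weighted_mean(gen3_like)
print(np.array_equal(a, b))            # typically False: not bitwise identical
print(np.max(np.abs(a - b)))           # ~1e-16: round-off level only
```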
  • AOB
    • The next metrics meeting, to review w46 on Nov 29th (after Thanksgiving), will be the last of the year.
    • I owe a follow-up on Eli's recent fixes to fgcm (towards parity, but also serendipitously improving on some fgcm specifics).  Here is a brief history of gen2 vs. gen3 fgcm for one of the most problematic/red-flag visits in the RC2 dataset:
    • w_2021_30:
    • w_2021_34:
    • w_2021_38 (with the random seed fix of DM-31462 and the survey edge/ref star outlier handling improvements of DM-31505): This is now looking really close!  However, I am somewhat surprised that there was a jump in the stellar locus width in the w_2021_38 run – see the Chronograf dashboard.  I'm not aware of any other significant changes between w34 & w38 that may have impacted the stellar locus... (a generic sketch of a stellar-locus-width style metric follows below).
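For reference, below is a generic sketch of a stellar-locus-width style metric in the spirit of wPerp.  This is not faro's implementation: the function name, color choice, straight-line fit, robust-width estimator, and synthetic data are all simplifying assumptions, intended only to make explicit what a "jump in the stellar locus width" is measuring, namely the robust scatter of stars perpendicular to a line fit through the locus in color-color space.

```python
import numpy as np

def stellar_locus_width_mmag(g_r: np.ndarray, r_i: np.ndarray) -> float:
    """Robust width (mmag) of the perpendicular scatter about a straight
    line fit to the stellar locus in (g-r, r-i) color-color space."""
    slope, intercept = np.polyfit(g_r, r_i, 1)            # simple locus fit
    perp = (r_i - (slope * g_r + intercept)) / np.hypot(slope, 1.0)
    q25, q75 = np.percentile(perp, [25, 75])
    return 1e3 * (q75 - q25) / 1.349                       # IQR -> Gaussian sigma, in mmag

# Synthetic locus with slope 0.5 and 20 mmag scatter in r-i (made-up numbers).
rng = np.random.default_rng(0)
g_r = rng.uniform(0.3, 1.1, 5000)
r_i = 0.5 * g_r + rng.normal(scale=0.020, size=g_r.size)
print(f"{stellar_locus_width_mmag(g_r, r_i):.1f} mmag")    # ~18 mmag (the 20 mmag scatter projected perpendicular)
```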

...