Date

 

Location

Zoom alias : https://ls.st/rsv

Zoom:  https://noirlab-edu.zoom.us/j/96120781994?pwd=bCtpT3Q3b2RsU1ZBRUFaSnUrZXo3Zz09#success

Attendees


Regrets

Useful links

Metric tracking dashboards

Rubin Science Pipelines weekly change log and GitHub repo: https://github.com/lsst-dm/lsst_git_changelog

Status of Jenkins jobs and Slack channel for notifications: #dmj-s_verify_drp_metrics

Discussion items

Item | Who | Pre-meeting notes | Minutes
News, announcements and task review

All

  • CTS: Request from pipelines to merge pair coding sessions.
  • CTS: Also a reminder that DM in general is interested in doing reviews; it helps with sharing experience across teams.
  • Plan to merge pair-coding with rest of DM starting tomorrow
  • TCAM = technical and cost account manager
  • Erik is now physically located in Tucson

CI status, bugs, issues of the week

All


Faro Development status 
  • Fall 2021 epic
  • Spring 2022 epic 
  • Backlog epic: 
  • Progress on conversion to parquet
  • Peter is implementing a version of PA1 using forced sources. The forced source tract table only has PSF fluxes and does not have extendedness. The metric is implemented using pandas groupby functionality (see the groupby sketch after this list). Astrometric metrics will need to use matched source tables, so the suggestion is to concentrate effort on getting the matcher working.
    • Another option would be to grab the object table and use
    • What do we gain from using the ForcedSource table?
  • Eli's ticket has been merged. Isolated star association task. Returns a catalog that indexes. Based on Erin Sheldon's smatch. Reads in the source tables one at a time.
    • Currently is a task in DRP yaml.
    • Eli does not want to be the owner of general
    • The matcher is a wrapper around a KD-tree (see the KD-tree matching sketch after this list).
    • Three potential options:
      • Use new task directly
      • Make new task more configurable
      • Add capability in faro
    • Our plan is to make a JIRA ticket to create the base classes for analysis tasks that would use the new data product.
    • Branch is here: isolatedStarAssociation code DM-33856
  • wPerp parquet table conversion. Sophie agreed to review the PR.
    • wPerp values will go up by a factor of ~3 because iterative rounds of clipping are no longer applied. There is concern that the metric specification is sensitive to the way that clipping is applied (see the clipping sketch after this list).
    • Are FGCM corrections not applied before making the object tables? There will be a version of the source table that includes FGCM corrections.
    • There is a Zeljko paper. Lauren would have a sense of what is being routinely measured.
    • The question is what a large value of wPerp indicates:
      • it could point to a problem with the data, e.g., stellar classification or photometric calibration
      • the different statistics have different meanings
  • PR to persist in-memory objects has been approved.
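A minimal sketch of the kind of groupby-based repeatability calculation described above, not the actual faro implementation: the column names (objectId, psfFlux) and the IQR-based scatter estimate are assumptions for illustration only.

# Minimal sketch: a PA1-style photometric repeatability estimate from a
# forced-source-like table using pandas groupby.
import numpy as np
import pandas as pd


def pa1_from_forced_sources(df: pd.DataFrame) -> float:
    """Return a repeatability estimate in mmag from repeat PSF-flux measurements."""
    # Convert fluxes to magnitudes; the zero point cancels out of the scatter.
    df = df.assign(mag=-2.5 * np.log10(df["psfFlux"]))
    grouped = df.groupby("objectId")["mag"]
    # Per-object residuals about the per-object mean, for objects with >= 2 visits.
    resid = (df["mag"] - grouped.transform("mean"))[grouped.transform("count") >= 2]
    # Robust width from the interquartile range, scaled to an rms-equivalent.
    iqr = np.subtract(*np.percentile(resid, [75, 25]))
    return 1000.0 * 0.7413 * iqr  # mmag


# Toy usage with a hypothetical three-object table.
toy = pd.DataFrame({
    "objectId": [1, 1, 1, 2, 2, 3],
    "psfFlux": [100.0, 101.0, 99.5, 50.0, 50.4, 80.0],
})
print(f"PA1-like repeatability: {pa1_from_forced_sources(toy):.1f} mmag")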
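A minimal sketch of a KD-tree sky match of the kind the new task wraps; this is not the isolatedStarAssociation code from DM-33856. Converting (ra, dec) to unit vectors lets a Euclidean KD tree (scipy's cKDTree here) perform the on-sky match for small radii.

# Minimal KD-tree sky match between two catalogs.
import numpy as np
from scipy.spatial import cKDTree


def to_unit_vectors(ra_deg, dec_deg):
    """Convert (ra, dec) in degrees to unit vectors on the sphere."""
    ra, dec = np.radians(ra_deg), np.radians(dec_deg)
    return np.column_stack([np.cos(dec) * np.cos(ra),
                            np.cos(dec) * np.sin(ra),
                            np.sin(dec)])


def sky_match(ra1, dec1, ra2, dec2, radius_arcsec=0.5):
    """Return indices (i1, i2) of pairs closer than radius_arcsec."""
    tree = cKDTree(to_unit_vectors(ra2, dec2))
    # Chord length corresponding to the angular match radius.
    chord = 2.0 * np.sin(np.radians(radius_arcsec / 3600.0) / 2.0)
    dist, i2 = tree.query(to_unit_vectors(ra1, dec1), distance_upper_bound=chord)
    i1 = np.flatnonzero(np.isfinite(dist))  # unmatched points get dist = inf
    return i1, i2[i1]


# Toy usage: match a catalog against a slightly shifted copy of itself.
ra = np.array([10.0, 10.001, 10.002])
dec = np.array([-5.0, -5.001, -5.002])
print(sky_match(ra, dec, ra + 1e-5, dec))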
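A minimal sketch of why the wPerp value is sensitive to clipping: the rms of a distribution with an outlier tail changes substantially between no clipping and iterative sigma clipping. The toy distribution below is an assumption for illustration, not RC2/DC2 data.

# Compare unclipped rms with iteratively sigma-clipped rms.
import numpy as np
from astropy.stats import sigma_clipped_stats

rng = np.random.default_rng(42)
# Narrow stellar-locus-like core plus a few percent of outliers.
core = rng.normal(0.0, 0.02, size=9500)
outliers = rng.normal(0.0, 0.2, size=500)
perp = np.concatenate([core, outliers])

raw_rms = perp.std()
_, _, clipped_rms = sigma_clipped_stats(perp, sigma=3.0, maxiters=5)
print(f"unclipped rms       = {1000 * raw_rms:.1f} mmag")
print(f"3-sigma clipped rms = {1000 * clipped_rms:.1f} mmag")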


Data processing campaigns  

  • RC2/DC2 reprocessing epic:
    • w_2021_40 RC2:  
    • w_2021_44 DC2 :
  • AuxTel - Erik to give update
  • AP/DiffIm
  • AuxTel: dedicated almost all three nights to imaging campaigns run by the scheduler at night. Tiago had been setting up files.
    • Southern pole field. Available 12 months of the year. None of this data is processed yet. 3 epochs, once a month.
    • E. High stellar density field. Will be up for the next 6 months. Much higher source count; 98% of the images went through with no tweaking. We should focus on dense fields. gri; 110 points, about 100 pointings completed. Erik is planning to talk with Tiago soon to get a better understanding. What is the typical depth?
    • The first field (3 epochs) overlaps with GOODS and GEMS. This field is sparse and pushed the pipelines to their limits; it fails to get astrometric solutions much of the time (between 50-70%).
    • Multi-color imaging.
    • Will be purchasing 6.5 cm SDSS gri. It will be a new color system. Patrick will know the timeline.
    • Runs the pipeline through the transform source table step, producing instrumental fluxes.
    • We think this field was calibrated to PS1, but we are not sure. The computed magnitudes are not what is expected, so we are trying to understand this better.
DP0.2
  • DP0.2 is currently running into an issue where matchedCatalogTask runs for >2 hours without emitting any log messages. The PanDA monitoring system assumes that means the job has crashed or is stuck in an infinite loop, so it terminates the job.
  • Had to switch to running the non-faro parts of step3
  • We now have a reprieve from that limit (as of an hour ago), so we might be able to resume running it as normal?
  • Hypothesis is that the butler.get() at the start of runQuantum is taking all the time, but we haven't confirmed that.
  • Deferred dataset loading might also be a good idea in general, and would avoid a long block of time spent in a single butler.get() (see the sketch below).
  • It is not clear that we know the right route yet; it would be best if someone (Jeff?) can check in with Yusra at pair coding and see if there is an issue that can be addressed.
  • CTS opinion: the sooner we can jettison the src catalogs in favor of the parquet tables, the easier the performance issues will become.
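A minimal sketch of the deferred-loading idea, assuming the connection-level deferLoad option in pipe_base; the class, connection, and dataset names below are placeholders, not the real matchedCatalogTask definitions. With deferLoad=True, runQuantum receives handles instead of fully loaded catalogs, so each read is a separate, loggable step rather than one long silent butler.get().

# Placeholder connections/task illustrating deferred dataset loading.
import lsst.pipe.base as pipeBase
import lsst.pipe.base.connectionTypes as cT


class MatchedSketchConnections(pipeBase.PipelineTaskConnections,
                               dimensions=("skymap", "tract")):
    sourceCatalogs = cT.Input(
        doc="Per-visit source catalogs to match (placeholder dataset type).",
        name="sourceTable_visit",
        storageClass="DataFrame",
        dimensions=("instrument", "visit"),
        multiple=True,
        deferLoad=True,  # receive DeferredDatasetHandle objects, not data
    )


class MatchedSketchConfig(pipeBase.PipelineTaskConfig,
                          pipelineConnections=MatchedSketchConnections):
    pass


class MatchedSketchTask(pipeBase.PipelineTask):
    ConfigClass = MatchedSketchConfig
    _DefaultName = "matchedSketch"

    def runQuantum(self, butlerQC, inputRefs, outputRefs):
        inputs = butlerQC.get(inputRefs)
        handles = inputs["sourceCatalogs"]
        catalogs = []
        for i, handle in enumerate(handles):
            # Each get() is now a separate, loggable read instead of one
            # multi-hour silent block at the start of runQuantum.
            self.log.info("Loading catalog %d of %d", i + 1, len(handles))
            catalogs.append(handle.get())
        # ... matching and metric computation would go here ...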
Ran out of time.
AOB

Potential co-working session ideas here

Next meeting  


List of tasks (Confluence)