Date

 

Location

Browser

https://bluejeans.com/690803707 or https://ls.st/rsv

Room System

  1. Dial: 199.48.152.152 or bjn.vc
  2. Enter Meeting ID: 690803707 -or- use the pairing code

Phone Dial-in

Dial-in numbers:

  • +1 408 740 7256
  • +1 888 240 2560 (US Toll Free)
  • +1 408 317 9253 (Alternate Number)

Meeting ID: 690803707

Attendees

Regrets

Useful links

Metric tracking dashboards

Rubin science pipelines weekly change log and GitHub repo: https://github.com/lsst-dm/lsst_git_changelog

Status of Jenkins jobs and Slack channel for notifications: #dmj-s_verify_drp_metrics

Discussion items

Item | Who | Pre-meeting notes | Minutes
News, announcements and task review

All

  • Task review
  • This team will take a leading role in AuxTel data processing and science evaluation at the summit.
  • Join the #dmj-s_verify_drp_metrics Slack channel to see the status of Jenkins jobs.
  • There are notebooks to verify calibration products that run on the summit. Robert wants these in CI.
  • Where does the input data come from? Run at NCSA vs. the summit?
  • Run the script queue to take data (flats/biases/darks), run the pipelines to process the data, and run the verification notebooks. Can this be automated? (A sketch of one possible automated step follows this list.)
  • Metrics are computed by cp_verify, which reports failures from data taking.
  • Follow up on #dm-calibration.
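Below is a minimal sketch of what one automated verification step could look like, assuming the Gen3 pipetask command line and a cp_verify pipeline. The repository path, pipeline file name, input/output collections, and data query are placeholders, not the actual summit configuration.

import os
import subprocess

# Hypothetical nightly step: run a cp_verify pipeline on the latest calibration
# frames. All paths, collection names, and the data query below are placeholders.
REPO = "/repo/LATISS"                                                      # hypothetical Butler repo
PIPELINE = os.path.expandvars("$CP_VERIFY_DIR/pipelines/VerifyBias.yaml")  # assumed pipeline name

cmd = [
    "pipetask", "run",
    "-b", REPO,
    "-p", PIPELINE,
    "-i", "LATISS/raw/all,LATISS/calib",                    # hypothetical input collections
    "-o", "u/nightly/verify_bias",                          # hypothetical output collection
    "-d", "instrument='LATISS' AND exposure.day_obs = 20211025",
    "--register-dataset-types",
]
result = subprocess.run(cmd, capture_output=True, text=True)
# A zero return code only means the pipeline completed; the cp_verify outputs
# still need to be inspected (e.g. by the verification notebooks).
print("pipeline completed" if result.returncode == 0 else result.stderr)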

AuxTel processing:

  • AuxTel is producing data and this team will take the lead in processing and scientifically evaluating the data. 
  • Processing will be on the antu cluster in Chile. Science pipelines are not yet installed. 
  • For now, everyone should get onboarded on the summit by following https://ittn-045.lsst.io/
  • Make sure you can log on to https://summit-lsp.lsst.codes/ and the commissioning cluster, antu. 
  • Robert Lupton to post an example to the SV channel and/or ask Chris to present what he is doing and its outputs as an example.

Bugs, issues, topics, etc of the week

All


  • faro on w_43 has failed. A jointcal bug that was thought to be fixed is producing the same failure. Unit tests were missing and have now been added. It looks like the fix did not merge?
  • Simon Krughoff will check whether this is the problem. If not, we have another problem.

Reprocessing status and metrics review  

  • Nightly CI with new rc2_subset
  • RC2/DC2 reprocessing epic :
  • w_2021_38 RC2:  
  • w_2021_36 DC2 : 
  • Rubin science pipelines weekly change log and GitHub repo: https://github.com/lsst-dm/lsst_git_changelog
  • We should be making annotations on the dashboard plots when metric values change, trying to associate likely tickets with the change. The annotations can be added directly to the dashboard. (A sketch of adding an annotation programmatically follows this list.)
    • It is important to include tags because annotations are global. Tags are arbitrary, so we can choose them. Annotations can be filtered by tag, and a single annotation can carry multiple tags (e.g., dataset and pipeline). Suggested tag:
      • ci_dataset: rc2_subset
    • Tip: set the tag filters first and then add the annotations, so that the tags are applied to new annotations automatically.
  • Leanne Guy  to convert change log to a nightly log  
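A minimal sketch of posting a tagged annotation programmatically, assuming the dashboards expose a Grafana-style /api/annotations HTTP endpoint (an assumption; the actual dashboard backend is not named in these notes). The host, API token, and ticket number are placeholders.

import time
import requests

DASHBOARD_URL = "https://example-dashboards.lsst.codes"  # placeholder host
API_TOKEN = "changeme"                                    # placeholder token

def annotate(text, tags):
    """Post a global, tagged annotation (Grafana-style /api/annotations endpoint)."""
    payload = {"time": int(time.time() * 1000), "tags": tags, "text": text}
    resp = requests.post(
        DASHBOARD_URL + "/api/annotations",
        json=payload,
        headers={"Authorization": "Bearer " + API_TOKEN},
        timeout=10,
    )
    resp.raise_for_status()
    return resp.json()

# Example: note a metric change and the ticket likely responsible.
# annotate("Metric change in w_2021_38, likely DM-XXXXX", ["ci_dataset: rc2_subset"])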
DP0.2

Faro for DP0.2

  • DRP.yaml is complete; obs_subaru still needs to be changed. Jeff will put it into a PR today. Tested and it works!
  • Schema changes only affect the SSDM data products in parquet files; FITS files are not affected.
Development status 
  • Fall 2021 epic
  • Backlog epic: 
  • Migration to parquet files: 
    • Conversion to parquet is in progress. Only NumSources is running at the moment. 
    • Jeff started working on MatchedVisitMetrics to use the forced source table. The types of inputs will be completely different; will we need to rewrite all the tasks? (See the illustrative sketch after this list.)
  • Where are we with the faro documentation? (Daily builds of documentation: https://pipelines.lsst.io/v/d_2021_09_28/modules/lsst.faro/index.html)
    • PR is ready for review:
  • Implementation of new metrics - priorities
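To illustrate what consuming the forced source table could look like after the parquet migration, here is a hedged sketch of a MatchedVisitMetrics-style calculation reading a Parquet file with pandas. The column and file names are hypothetical and only show the shape of the new inputs.

import pandas as pd

def flux_repeatability(path):
    """Per-object fractional RMS of forced PSF fluxes across visits."""
    # Column names are hypothetical; the real forced source table schema may differ.
    df = pd.read_parquet(path, columns=["objectId", "psfFlux"])
    grouped = df.groupby("objectId")["psfFlux"]
    return grouped.std() / grouped.mean()

# Usage (hypothetical file name):
# rms = flux_repeatability("forcedSourceTable_tract.parq")
# print(rms.median())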
AOB

Potential co-working session ideas here

We will discuss metrics to compute for the AuxTel data processing at the next meeting. Come along with ideas:

  • simple scalar metrics such as transparency, seeing, and astrometric errors (a worked sketch of one such metric follows this list)
  • Is the data we took good?
  • What are the metrics that will tell us whether the data is good?
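As a concrete starting point for that discussion, here is a hedged sketch of one simple scalar metric: the median astrometric offset against a reference catalogue, using astropy. The input arrays and matching choices are illustrative only.

import numpy as np
import astropy.units as u
from astropy.coordinates import SkyCoord

def median_astrometric_offset(meas_ra, meas_dec, ref_ra, ref_dec):
    """Median on-sky separation (mas) between measured and reference positions."""
    measured = SkyCoord(ra=meas_ra * u.deg, dec=meas_dec * u.deg)
    reference = SkyCoord(ra=ref_ra * u.deg, dec=ref_dec * u.deg)
    # Nearest-neighbour match of each measured source to the reference catalogue.
    _, sep2d, _ = measured.match_to_catalog_sky(reference)
    return np.median(sep2d.to_value(u.mas))

# Fake example: measured positions offset from the reference by ~0.36 arcsec in RA.
# ra, dec = np.array([10.0, 10.5]), np.array([-5.0, -5.2])
# print(median_astrometric_offset(ra, dec, ra + 1e-4, dec))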


List of tasks (Confluence)