Room System

Phone Dial-in


  1. Dial one of the numbers listed below
  2. Enter Meeting ID: 690803707, or use the pairing code

Dial-in numbers:

  • +1 408 740 7256
  • +1 888 240 2560 (US Toll Free)
  • +1 408 317 9253 (Alternate Number)

Meeting ID: 690803707



Metric tracking dashboards

Rubin science pipelines weekly change log and GitHub repo:

Status of Jenkins jobs and Slack channel for notifications: #dmj-s_verify_drp_metrics

Discussion items

Item · Who · Pre-meeting notes · Minutes
News, announcements and task review


  • Task review
  • This team will take a leading role in processing AuxTel data and in science evaluation at the summit.
  • Join the #dmj-s_verify_drp_metrics Slack channel to see the status of Jenkins jobs.
  • There are notebooks to verify calibration products that run on the summit. Robert wants these in CI.
  • Where does the input data come from? Run at NCSA vs. the summit?
  • Run the script queue to take data (flats/biases/darks). Pipelines run to process the data, and then verification notebooks are run. Can this be automated?
  • Metrics are computed by cp_verify, which reports failures from the data taking.
  • Follow along on #dm-calibration
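The acquire-process-verify loop described above could in principle be automated. The sketch below is a hedged illustration of such a driver; all function names, the "scatter" statistic, and the thresholds are hypothetical placeholders, since the real flow would go through the script queue, the cp_pipe pipelines, and cp_verify.

```python
# Hypothetical sketch of an automated nightly calibration check loop.
# Every function and threshold here is a placeholder, not the real API.

CALIB_TYPES = ["bias", "dark", "flat"]

# Illustrative per-type acceptance thresholds (made-up units).
THRESHOLDS = {"bias": 5.0, "dark": 10.0, "flat": 50.0}

def take_frames(calib_type):
    """Stand-in for submitting a script-queue acquisition request."""
    return {"calib_type": calib_type, "n_frames": 20}

def process_frames(frames):
    """Stand-in for running the processing pipeline for this calib type."""
    # Placeholder statistic; a real run would return the quantities
    # that cp_verify inspects.
    fake_scatter = {"bias": 3.2, "dark": 8.5, "flat": 42.0}
    return {"scatter": fake_scatter[frames["calib_type"]]}

def verify(calib_type, result):
    """Stand-in for a cp_verify-style pass/fail check."""
    return result["scatter"] <= THRESHOLDS[calib_type]

def run_nightly_loop():
    """Acquire -> process -> verify for each calibration type."""
    report = {}
    for calib_type in CALIB_TYPES:
        frames = take_frames(calib_type)
        result = process_frames(frames)
        report[calib_type] = "PASS" if verify(calib_type, result) else "FAIL"
    return report

print(run_nightly_loop())
```

The useful property of a loop like this is that a failure reported by the verify step maps directly to one calibration type, which is the granularity at which cp_verify reports problems from data taking.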

AuxTel processing:

  • AuxTel is producing data, and this team will take the lead in processing and scientifically evaluating it.
  • Processing will be on the antu cluster in Chile. The Science Pipelines are not yet installed there.
  • For now, everyone should get onboarded on the summit following:
  • Make sure you can log on to the commissioning cluster, antu.
  • Robert Lupton to post an example to the SV channel and/or get Chris to present what he is doing and the outputs as an example.

Bugs, issues, topics, etc. of the week


  • faro on w_43 has failed: a jointcal bug, thought fixed, but we are seeing the same failure. Unit tests were missing and have now been added. It looks like the fix did not merge?
  • Simon Krughoff will check that this is the problem. If not, we have another problem.

Reprocessing status and metrics review  

  • Nightly CI with new rc2_subset
  • RC2/DC2 reprocessing epic: DM-26911
  • w_2021_38 RC2: DM-31795
  • w_2021_36 DC2: DM-31699
  • Rubin science pipelines weekly change log and GitHub repo:
  • We should be making annotations on the dashboard plots when metric values change, trying to associate likely tickets with the change. The annotations can be added directly to the dashboard.
    • It is important to include tags because annotations are global. Tags are arbitrary, so we can choose them, and annotations can then be filtered by tag. We can tag on dataset and pipeline, i.e., it is possible to have multiple tags. Suggest the following:
      • ci_dataset: rc2_subset
    • Tip: first set the tag filters and then add the annotations, so that the tags are applied to new annotations automatically.
  • Leanne Guy to convert the change log to a nightly log
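The tag-filtering behaviour discussed above can be sketched in a few lines: because annotations are global, each dashboard needs to filter down to the annotations that match its own dataset/pipeline tags. The annotation structure below is illustrative only, not the actual dashboard annotation schema.

```python
# Illustrative model of global dashboard annotations with tags.
# The dict shape is a stand-in, not the real annotation API.

annotations = [
    {"text": "jointcal fix merged", "tags": {"ci_dataset": "rc2_subset"}},
    {"text": "new deblender defaults", "tags": {"ci_dataset": "rc2"}},
    {"text": "astrometry regression", "tags": {"ci_dataset": "rc2_subset"}},
]

def filter_annotations(annotations, **tag_filters):
    """Return only annotations whose tags match all requested key/values."""
    return [
        a for a in annotations
        if all(a["tags"].get(k) == v for k, v in tag_filters.items())
    ]

# Only the rc2_subset annotations survive the filter.
for a in filter_annotations(annotations, ci_dataset="rc2_subset"):
    print(a["text"])
```

This is also why the suggested `ci_dataset: rc2_subset` tag matters: an untagged annotation would show up on every dashboard.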

Faro for DP0.2

  • DRP.yaml is complete; obs_subaru still needs to be changed. Jeff will put in a PR today. Tested and works!
  • Schema changes only affect the SSDM data products in Parquet files; FITS files are not affected.
Development status 
  • Fall 2021 epic: DM-30748
  • Backlog epic: DM-29525
  • Migration to Parquet files: DM-31825
    • Conversion to Parquet is in progress. Only NumSources is running at the moment.
    • Jeff has started working on MatchedVisitMetrics to use the forced source table. The types of inputs will be completely different; will we need to rewrite all tasks?
  • Where are we with faro documentation (Daily builds of documentation: )?
    • PR is ready for review: DM-25839
  • Implementation of new metrics - priorities
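The input-type change discussed above (per-detector catalogs to Parquet source tables) can be illustrated with a NumSources-style count computed from a table in memory. The column names and the stand-in table below are hypothetical; a real run would load the forced source table with something like `pd.read_parquet(path, columns=[...])` rather than constructing it by hand.

```python
# Hedged sketch: a NumSources-like scalar metric computed from a
# Parquet-style source table. Columns are illustrative, not the schema.
import pandas as pd

# Stand-in for a forced source table; in production this would come from
# pd.read_parquet(path, columns=["sourceId", "band", "psfFlux"]).
forced_sources = pd.DataFrame({
    "sourceId": [1, 2, 3, 4, 5],
    "band": ["g", "r", "r", "i", "r"],
    "psfFlux": [120.0, 95.0, 210.0, 55.0, 180.0],
})

def num_sources(table, band):
    """Count sources in one band -- a minimal scalar metric."""
    return int((table["band"] == band).sum())

print(num_sources(forced_sources, "r"))
```

One attraction of the table-based approach is that reading only the needed columns from Parquet avoids loading whole catalogs, but it does mean tasks written against per-detector catalogs need reworking, which is the open question above.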

Potential co-working session ideas here

We will discuss metrics to compute for the AuxTel data processing at the next meeting. Come along with ideas:

  • Simple scalar metrics such as transparency, seeing, and astrometric errors
  • Is the data we took good?
  • What are the metrics that will tell us whether the data is good?
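As one concrete starting point for the discussion, a simple scalar astrometric metric could be the median separation between measured and reference positions. The sketch below uses made-up matched positions and assumes a flat-sky small-angle approximation (fine at arcsecond scales); it is an illustration, not faro's implementation.

```python
# Hedged sketch of a scalar astrometric-error metric: median offset
# between measured and reference positions, in arcseconds.
import math
import statistics

# (measured_ra, measured_dec, ref_ra, ref_dec) in degrees -- made-up values.
matches = [
    (10.00010, -5.00000, 10.00000, -5.00000),
    (10.10000, -5.10005, 10.10000, -5.10000),
    (10.20000, -5.20000, 10.20002, -5.20000),
]

def separation_arcsec(ra1, dec1, ra2, dec2):
    """Flat-sky angular separation in arcseconds (small-angle approx)."""
    cos_dec = math.cos(math.radians(0.5 * (dec1 + dec2)))
    dra = (ra1 - ra2) * cos_dec  # scale RA offset by cos(dec)
    ddec = dec1 - dec2
    return math.hypot(dra, ddec) * 3600.0

def median_astrometric_error(matches):
    """Single scalar summarizing the astrometric quality of the matches."""
    return statistics.median(separation_arcsec(*m) for m in matches)

print(round(median_astrometric_error(matches), 3))
```

A single number like this is easy to track nightly on a dashboard, which is exactly the "is the data good?" use case above.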

List of tasks (Confluence)

Description · Due date · Assignee · Task appears on
  • Leanne Guy to talk to Science Pipelines (Yusra) about when to do this transfer
    Due: 19 Oct 2021 · Assignee: Leanne Guy · Appears on: 2021-09-28 Science Metric Development Agenda and Meeting notes
  • Leanne Guy to arrange to discuss at a future meeting whether there are metrics from PDR3 and this paper that we might want to include in faro
    Due: 26 Oct 2021 · Assignee: Leanne Guy · Appears on: 2021-08-31 Science Metric Development Agenda and Meeting notes
  • Colin to ask about capturing ideas for improvement to the stellar locus algorithm
    Due: 30 Nov 2021 · Appears on: 2021-11-09 Science Metric Development Agenda and Meeting notes
  • Colin Slater to make a preliminary draft agenda for a workshop to clarify visualization use cases for science verification and validation
    Assignee: Colin Slater · Appears on: 2022-04-19 Science Metric Development Agenda and Meeting notes
  • Jeffrey Carlin to review metric specification package organization and the relationship to formal requirements documents
    Assignee: Jeffrey Carlin · Appears on: 2022-04-19 Science Metric Development Agenda and Meeting notes
  • Keith Bechtol to schedule a time to have a focused discussion on the verification package, potentially at the next status meeting
    Assignee: Keith Bechtol · Appears on: 2021-09-14 Science Metric Development Agenda and Meeting notes
  • Keith Bechtol to make a ticket to better understand the mapping of these camera and calibration products characterization efforts to verification documents and the focus of these efforts. Discuss with the SCLT
    Assignee: Keith Bechtol · Appears on: 2021-09-14 Science Metric Development Agenda and Meeting notes