Date: 12 Oct 2021
Location: https://ls.st/rsv
Dial: 220.127.116.11 or bjn.vc, Meeting ID: 690803707, or use the pairing code
Dial-in numbers: +1 408 740 7256, +1 888 240 2560 (US Toll Free), +1 408 317 9253 (Alternate Number), Meeting ID: 690803707

Attendees

Regrets

Useful links
Metric tracking dashboards
Rubin science pipelines weekly change log and GitHub repo: https://github.com/lsst-dm/lsst_git_changelog
Status of Jenkins jobs and Slack channel for notifications: #dmj-s_verify_drp_metrics

Discussion items
News, announcements and task review
- Task review
- This team will take a leading role in processing and scientifically evaluating AuxTel data at the summit. Join #
- There are notebooks to verify calibration products that run on the summit. Robert wants these in CI.
- Where does the input data come from? Run at NCSA vs. the summit?
- The script queue is run to take data (flats/biases/darks); pipelines then process the data and the verification notebooks are run. Can this be automated?
- Metrics are computed by cp_verify, which reports failures from the data taking. Follow up on #dm-calibration.
AuxTel is producing data and this team will take the lead in processing and scientifically evaluating it. Processing will be on the antu cluster in Chile; the science pipelines are not yet installed there. For now, everyone should get onboarded on the summit following https://ittn-045.lsst.io/. Make sure you can log on to https://summit-lsp.lsst.codes/ and the commissioning cluster, antu.
- Robert Lupton to post an example to the SV channel and/or get Chris to present what he is doing and his outputs as an example. 02 Nov 2021
Bugs, issues, topics, etc. of the week
All faro runs on w_43 have failed. A jointcal bug thought to be fixed is producing the same failure; unit tests were missing and have now been added. It looks like the fix did not merge? Simon Krughoff will check that this is the problem; if not, we have another problem. 19 Oct 2021
Reprocessing status and metrics review
We should be annotating the dashboard plots when metric values change, trying to associate likely tickets with each change.
- Annotations can be added directly to the dashboard.
- Important to include tags because annotations are global. Tags are arbitrary, so we can choose; annotations can be filtered by tag. Can tag on dataset and pipeline, i.e., possible to have multiple tags.
- Suggest the following:
- Tip: first set the tag filters and then add the annotations, so the tags are applied to new annotations automatically.
- Leanne Guy to convert the change log to a nightly log. 09 Nov 2021
DP0.2: the DRP.yaml is complete; obs_subaru still needs changing. Jeff will put it into a PR today. Tested and works! Schema changes only affect the SSDM data products in parquet files; FITS files are not affected.

Development status

Fall 2021 epic
DM-30748
DM-29525
Migration to parquet files:
DM-31825
Conversion to parquet is in progress; only NumSources is running at the moment. Jeff started working on MatchedVisitMetrics to use the forced source table. The types of inputs will be completely different: will we need to rewrite all the tasks? Where are we with faro documentation (daily builds: https://pipelines.lsst.io/v/d_2021_09_28/modules/lsst.faro/index.html)? The PR is ready for review:
DM-25839
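As a rough illustration of the MatchedVisitMetrics rework discussed above, a matched-visit style statistic can be computed directly from a per-source table once it is in parquet/DataFrame form. This is a minimal sketch only, not faro's implementation: the column names `objectId` and `psfMag` are hypothetical, and a real forced source table would need flag and signal-to-noise cuts first.

```python
import pandas as pd


def photometric_repeatability(forced: pd.DataFrame) -> float:
    """Median per-object RMS scatter of repeated magnitude measurements, in mmag.

    Sketch assuming hypothetical columns:
      objectId -- identifier grouping repeated measurements of one object
      psfMag   -- PSF magnitude of each forced measurement
    """
    # Sample standard deviation of repeat measurements for each object
    per_object_scatter = forced.groupby("objectId")["psfMag"].std(ddof=1)
    # Summarize with the median and convert mag -> mmag
    return float(per_object_scatter.median() * 1000.0)
```

The point of the sketch is that a table-backed input reduces the metric to a groupby/aggregate, which is why changing the input type touches every task's connection logic rather than the statistics themselves.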
Implementation of new metrics - priorities

AOB
- Potential co-working session ideas here
We will discuss metrics to compute for the AuxTel data processing at the next meeting. Come along with ideas:
- Simple scalar metrics such as transparency, seeing, astrometric errors
- Is the data we took good? What are the metrics that will tell us whether the data is good?

List of tasks (Confluence)
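To seed the discussion of simple scalar metrics for AuxTel, two of the quantities listed above reduce to one-line summary statistics over a visit's sources. This is a hypothetical sketch, not an existing pipeline task; the input arrays and units are assumptions for illustration.

```python
import numpy as np


def seeing_fwhm(psf_fwhm_arcsec: np.ndarray) -> float:
    """Per-visit seeing: median PSF FWHM over measured sources (arcsec).

    Assumes a hypothetical array of per-source PSF FWHM values.
    """
    return float(np.median(psf_fwhm_arcsec))


def astrometric_rms(residuals_mas: np.ndarray) -> float:
    """Per-visit astrometric error: RMS of positional residuals against a
    reference catalog (milliarcseconds). Input array is hypothetical."""
    return float(np.sqrt(np.mean(np.square(residuals_mas))))
```

Scalars like these are cheap to compute nightly and give a first-order answer to "is the data good?" before any deeper science validation runs.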