Faro planning meeting - details to come.
Date
Location
Zoom alias: https://ls.st/rsv
Zoom: https://noirlab-edu.zoom.us/j/96120781994?pwd=bCtpT3Q3b2RsU1ZBRUFaSnUrZXo3Zz09
Attendees
Regrets
Useful links
Metric tracking dashboards
- DC2 and HSC RC2 monthly reprocessing (Gen3 and Gen2/3, faro): https://chronograf-demo.lsst.codes/sources/2/dashboards/75
- Nightly CI (Gen3, faro): https://chronograf-demo.lsst.codes/sources/2/dashboards/70
Rubin science pipelines weekly change log and GitHub repository: https://github.com/lsst-dm/lsst_git_changelog
Status of Jenkins jobs and Slack channel for notifications: #dmj-s_verify_drp_metrics
Discussion items
Item | Priority (vote here for your top three with your initials) | Minutes |
---|---|---|
Relationship with visualization | ED, LPG, JLC, CTS: high | Group these together with the related items below, and set up a team from pipelines, SST, and commissioning-verification, bringing in someone from architecture (KT) to develop a design for a visualization framework. CTS: I'd spin this slightly differently; we should figure out how users want to interact with the faro results and build what's necessary for that. (In practice that will mostly fall under the "visualization" umbrella, but that is not itself the goal I'm interested in.) |
Diagnostic / drill-down capability | LPG - this is related to visualization. | High priority to architect a design and prototype in the next 6 months. Start with a set of use cases, e.g., the 2-point correlation function. |
Where does common/shared (with other stack packages) code live? E.g., selectors, algorithms. | ED, KB, LPG, JLC: high | Set up a session with DRP to address this in the next cycle. |
How to handle situations where we have multiple summary statistics that we would like to compute from the same underlying computations | LPG: low priority. KB: low priority. CTS: medium risk. | LPG: If the cost of the underlying computation is small, I prefer to stick to fully independent metrics; there are no computation/resource issues around this at the moment. Matching is an exception: a matched catalogue should be computed once for all metrics that use it (see the first sketch after this table). CTS: My concern is that this makes each new metric seem like more of a burden and raises the threshold for "is this worth writing a new metric?" E.g., if I want to count the fraction of pixels with a given mask bit set, is it "worth" writing a new task for each type of mask bit? |
Metric dataset type naming convention and metric package specifications | KB: high – this is technical debt that will catch up with us; coupled to some extent with lsst.verify.Measurement. LPG: high, but it should also be fast to resolve. | LPG: Not doing this now will create technical debt (and nightmare code): metric package name, metric names, variants (dataset type). JLC: I'm not sure this will be fast or easy, but it will only get more difficult the longer we wait. |
Review of documentation | LPG: high - essential to enable wide contribution of metrics from a broad community (à la MAF). JLC: medium – once we get all the basics for the various analysis contexts in place, I think providing straightforward guidance on how to implement metrics should be a top priority to enable others to get involved. | JLC: It may make sense to sort out where "shared" code and modules will live before doing this, as it may change the recommendations for implementing metrics. |
Review of unit tests / coverage | LPG: medium. | We should look regularly at unit test coverage. |
List of metrics to be implemented / review of metric specifications | LPG: low - we have a good list for now that we need to complete, though it is by no means a complete set; additional non-SRD metrics will be driven by need (and will probably come out of commissioning). ED | LPG: The exception is requests from on-sky observing to implement a metric to support commissioning. |
Metrics using SSI (Fake Injection) | LPG: related to the item above. | JLC: Note that the synthetic source package in the Stack is currently being refactored, so it may make sense to wait a while. |
Persisting metrics as lsst.verify.Measurement objects, or something different? | KB: separate from faro development; more in the FAFF domain or lsst.verify. LPG: high, in that we should feed this back now. | LPG: There is a performance issue: it is prohibitive to load thousands of Measurement objects into memory to do aggregations across metrics; 10k takes 30 minutes to load. LPG: Suggest we create a ticket, including the use case and the timescale of need, for the lsst.verify developers. CTS: This is closely related to item #3 (each metric is an independent task); see the second sketch after this table. |
Matching routine for use with external catalogs | LPG: high. JLC: medium. | We have simple algorithms available/implemented for matching with external catalogs, but not yet a more general solution (see the third sketch after this table). |
Rendezvous with image metadata | KB: LPG: Is this for faro? | LPG: Commissioning to provide use cases to evaluate whether faro is appropriate. |
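The sketches below illustrate three of the discussion items above. First, on sharing underlying computations: a minimal, hypothetical Python sketch (not faro's actual API; all names are illustrative) in which an expensive shared intermediate, here a matched catalogue, is computed once and then fed to several otherwise-independent metric computations.

```python
# Hypothetical sketch, not faro's actual API; all names are illustrative.
# An expensive shared intermediate (a matched catalogue) is built once,
# then reused by several otherwise-independent metric computations.
import numpy as np


def build_matched_catalog(cat_a, cat_b, tolerance=0.5):
    """Toy 1-D positional match: pair each entry of cat_a with the nearest
    entry of cat_b, keeping only pairs closer than `tolerance`."""
    dists = np.abs(cat_a[:, None] - cat_b[None, :])
    nearest = dists.argmin(axis=1)
    keep = dists[np.arange(len(cat_a)), nearest] < tolerance
    return cat_a[keep], cat_b[nearest[keep]]


def metric_mean_offset(matched):
    a, b = matched
    return np.mean(b - a)


def metric_rms_offset(matched):
    a, b = matched
    return np.sqrt(np.mean((b - a) ** 2))


cat_a = np.array([1.0, 2.0, 3.0, 10.0])
cat_b = np.array([1.1, 2.05, 3.2, 50.0])

# The match is computed once; each metric stays an independent function.
matched = build_matched_catalog(cat_a, cat_b)
print(metric_mean_offset(matched), metric_rms_offset(matched))
```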
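Second, on persisting metrics as lsst.verify.Measurement objects: a minimal sketch assuming the LSST Science Pipelines stack is importable; the metric name is made up. Each measurement is a small, separately persisted artifact, which is why loading thousands of them for cross-metric aggregation is slow.

```python
# Minimal sketch, assuming the LSST Science Pipelines stack is available.
# The metric name "example_pkg.ExampleMetric" is illustrative only.
import astropy.units as u
from lsst.verify import Measurement

# A single scalar measurement: a metric name plus an astropy Quantity.
meas = Measurement("example_pkg.ExampleMetric", 12.5 * u.mmag)
print(meas.metric_name, meas.quantity)
```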
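Third, on matching with external catalogs: one simple nearest-neighbour approach using astropy's `match_to_catalog_sky`, shown as an illustration of a "simple algorithm" rather than the routine implemented in faro; the coordinates and the 1 arcsec tolerance are made up.

```python
# Simple nearest-neighbour sky match against an external catalogue using
# astropy; coordinates and the 1 arcsec tolerance are illustrative.
import astropy.units as u
from astropy.coordinates import SkyCoord

src = SkyCoord(ra=[10.0, 10.5] * u.deg, dec=[-5.0, -5.2] * u.deg)
ref = SkyCoord(ra=[10.0001, 10.7] * u.deg, dec=[-5.0001, -5.3] * u.deg)

# For each source: index of the nearest reference object and the on-sky
# separation to it.
idx, sep2d, _ = src.match_to_catalog_sky(ref)
good = sep2d < 1.0 * u.arcsec  # keep only matches within 1 arcsec
print(idx, sep2d.to(u.arcsec), good)
```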
List of tasks (Confluence)
Description | Due date | Assignee | Task appears on |
---|---|---|---|
| | 19 Oct 2021 | Leanne Guy | 2021-09-28 Science Metric Development Agenda and Meeting notes |
| | 26 Oct 2021 | Leanne Guy | 2021-08-31 Science Metric Development Agenda and Meeting notes |
| | 30 Nov 2021 | | 2021-11-09 Science Metric Development Agenda and Meeting notes |
| | | Colin Slater | 2022-04-19 Science Metric Development Agenda and Meeting notes |
| | | Jeffrey Carlin | 2022-04-19 Science Metric Development Agenda and Meeting notes |
| | | Keith Bechtol | 2021-09-14 Science Metric Development Agenda and Meeting notes |
| | | Keith Bechtol | 2021-09-14 Science Metric Development Agenda and Meeting notes |