Metric dataset type naming convention and package specifications
Metrics being tied to the units of data processing due to the dataId (it would also be useful to be able to specify arbitrary spatial scales)
Next step is to plan out epics for the coming cycle of work
Common/shared code organization
How much time is needed? Imagining 2 sessions of an hour each.
Yusra AlSayyad to send out a Doodle poll today to schedule a meeting to evaluate the similarities and differences between the analysis_drp and faro implementations of selector actions, and to consider potential technical solutions for using shared code
It will be important to do some homework in advance to describe similarities and differences between the analysis_drp and faro implementations of selector actions
Metric dataset type naming convention / persistence of metrics as lsst.verify.objects
Gen 3 butler is flexible and we should feel comfortable creating lots of dataset types
Including the collection in your query will help prevent datasets from debugging runs showing up in searches
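The point above can be illustrated with a minimal stand-in. The record layout and collection names below are hypothetical, and the filtering function is a plain-Python simulation of what a collection-constrained registry query does; it is not the actual butler API.

```python
# Hypothetical dataset records; in a real repo these would live in the
# butler registry, and debugging outputs would sit in user collections.
datasets = [
    {"type": "metricvalue_PA1", "collection": "DRP/w_2021_20"},
    {"type": "metricvalue_PA1", "collection": "u/jdoe/debug"},
    {"type": "metricvalue_AM1", "collection": "DRP/w_2021_20"},
]

def query_datasets(dataset_type, collections):
    """Return only datasets of the given type within the named collections."""
    return [d for d in datasets
            if d["type"] == dataset_type and d["collection"] in collections]

# Constrained to the official collection, the debug run does not appear.
results = query_datasets("metricvalue_PA1", collections=["DRP/w_2021_20"])
print(len(results))  # 1
```

Without the collection constraint, both the official and the debugging copies of `metricvalue_PA1` would come back, which is the failure mode the note warns about.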
Is there any way of specifying the verification package other than through the dataset type name?
This may have been a choice to make SQuaSH ingest parsing easy; we need to get a definitive answer to the question to understand what flexibility we have on dataset type names (NOT yet assigned)
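If the dataset type name is the only channel for conveying the verification package, parsing would look something like the sketch below. The `metricvalue_<package>_<metric>` convention and the field order are assumptions for illustration, not a confirmed project standard.

```python
def parse_metric_dataset_type(name):
    """Split an assumed 'metricvalue_<package>_<metric>' dataset type name
    into (package, metric).  The convention itself is hypothetical."""
    prefix, rest = name.split("_", 1)
    if prefix != "metricvalue":
        raise ValueError(f"not a metric dataset type: {name!r}")
    # The metric name is taken as the final underscore-separated field,
    # so package names containing underscores still parse correctly.
    package, metric = rest.rsplit("_", 1)
    return package, metric

print(parse_metric_dataset_type("metricvalue_validate_drp_PA1"))
# ('validate_drp', 'PA1')
```

A scheme like this is fragile if metric names themselves ever contain underscores, which is one reason a dedicated metadata field (rather than name parsing) may be preferable.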
Could lsst.verify metric sets be helpful for associating metrics?
Colin: should all metrics be lsst.verify measurements?
Only SQuaSH measurements need to be; other metrics could have different outputs, but this requires some thought.
Keith: Eventually want metrics to be persisted into something like a relational database that makes it straightforward to query subsets of data and to correlate with other information, such as telemetry from the EFD.
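Keith's suggestion can be sketched with the standard-library `sqlite3` module: metric values land in a relational table so that subsets are easy to query and to join against other tables (e.g. EFD telemetry). The schema and column names below are illustrative assumptions, not a project design.

```python
import sqlite3

# In-memory database standing in for an eventual metrics store.
con = sqlite3.connect(":memory:")
con.execute("""
    CREATE TABLE measurements (
        metric TEXT,   -- metric name, e.g. 'PA1'
        tract  INTEGER,
        band   TEXT,
        value  REAL
    )
""")
con.executemany(
    "INSERT INTO measurements VALUES (?, ?, ?, ?)",
    [("PA1", 9813, "i", 7.1),
     ("PA1", 9813, "r", 6.4),
     ("AM1", 9813, "i", 9.0)],
)

# Query a subset: all PA1 measurements in tract 9813.
rows = con.execute(
    "SELECT band, value FROM measurements "
    "WHERE metric = 'PA1' AND tract = 9813 ORDER BY band"
).fetchall()
print(rows)  # [('i', 7.1), ('r', 6.4)]
```

Correlating with telemetry would then be an ordinary SQL join against a table of EFD records keyed on time or visit, rather than a bespoke traversal of thousands of individually persisted measurement objects.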
Metrics being tied to the units of data processing due to the dataId
There will be healpix pixel as a dimension
HealSparse is the current approach to computing data on a healpix grid, but it is NOT recommended to ...
It is currently possible to search by HTM shard
Jeffrey Carlin to review metric specification package organization and the relationship to formal requirements documents
Peter Ferguson to use DC0.2 as a testbed to study the performance of querying over many thousands of metric values persisted as lsst.verify.Measurement objects and start to think about the workflows to compile summary statistics and correlate with metadata
We should loop in Angelo Fausti on the ...
Visualization is a big topic and we need some language to define what we mean by different kinds of visualization
Need to more clearly articulate the use cases
Colin Slater to make a preliminary draft agenda for a workshop to clarify visualization use cases for science verification and validation