News, announcements and task review


  • Chris Waters shares a presentation "Introduction to cp_verify"
    • Slides: cp_verify overview-VV.20220314.pdf
    • Example notebook: cp_verify_exampleNotebook-20220315.pdf
    • DMTN-101:
    • Measures metrics on a calibration residual image
    • Checks whether the calibration product corrections do what they are expected to do
    • DMTN-101 defines the metrics
      • Several of the metrics likely cover parts of the OSS requirements; however, there has not yet been a dedicated effort to check the traceability. The system is sufficiently modular that it should be straightforward to add more metrics.
    • Used both for checking the afternoon calibrations and for certifying data for the set of master calibration products
    • Currently implements: bias, dark, flat, brighter-fatter.
      • Zero-residual tests for crosstalk, linearity, and fringes are in development
    • Would like to have the cp_verify metrics available in a database. Currently they are output as YAML files.
    • It does correctly catch bad data
    • Run as part of OCPS. Observers can run a script that runs cp_pipe and cp_verify. Trying to condense the information into a summary report, e.g., sending warning messages to LOVE
    • Ongoing work:
      • Documentation
      • Visualization is a challenge (e.g., full focal plane mosaics). This has overlap with FAFF charge as well. Visualization use cases include both pixel-level data and representation of scalar fields.
        • Related to analysis_drp; the code for drawing sensor boundaries lives in afw (see the camera geometry utils). Ask Sophie Reed what the plan is.
        • DRP plotting sprint is happening through Wednesday this week
    • Do you see these notebooks as a debugging tool during development, or as a longer-term diagnostic or "report" tool for when warnings appear?
      • Initially for development, Alysha might(?) be looking at them?
    • Chris is effectively serving as the calibrations control board. In the future, there might be a board with more people, e.g., with specific commissioning needs in mind. For example, Erik has started to engage in this process, running the notebooks that Chris wrote. We are verifying, but in some cases want to go deeper; in some cases, we might want to go back and inspect even after the calibrations have been applied.
    • In practical terms of how the certification is done, is the long-term goal to fully automate certification based on scalar metric values, or will there always be a need for visual inspection?
      • In practice, certification involves running a notebook and doing visual inspection. In principle, there is a cell that checks all the metric values against specs, but we still like to have visual inspection for assurance.
      • Parametrized notebooks are used to generate the report notebook automatically
      • Already, Chris is doing some iteration of the analysis and labeling versions of the analysis as collection names
    • Calibrations change over time. How is this handled?
      • We should be able to preemptively make calibrations to capture trends
      • The bookkeeping is done for you with the butler. Each calibration has a valid time interval (datetime).
    • Metric tracking
      • Can only track metrics if the inputs are consistent. We don't want 10 points on the dashboard for the 10 iterations that were run on some day.
      • Processing streams get validated against daily calibration data
      • Daily calibrations get validated against current master calibration products
      • Can certify into different collections.
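The spec-checking cell mentioned above can be sketched roughly as follows. This is a minimal illustration, not the actual cp_verify schema or API: the metric names, threshold values, and the check_metrics helper are all invented here.

```python
# Hypothetical sketch of "check all metric values against specs":
# cp_verify writes per-test results as YAML; a dict literal stands in
# below for the parsed YAML. Metric names and thresholds are invented.

SPECS = {  # hypothetical spec thresholds: metric -> (lo, hi)
    "BIAS_MEAN": (-1.0, 1.0),    # residual mean, ADU
    "BIAS_NOISE": (0.0, 10.0),   # residual noise, ADU
}

def check_metrics(measured):
    """Return {metric: bool} pass/fail for each metric in SPECS."""
    results = {}
    for name, (lo, hi) in SPECS.items():
        value = measured.get(name)  # missing metric counts as a failure
        results[name] = value is not None and lo <= value <= hi
    return results

# Values as they might come out of a parsed YAML metrics file.
measured = {"BIAS_MEAN": 0.02, "BIAS_NOISE": 12.5}
print(check_metrics(measured))  # BIAS_NOISE fails its spec
```

A notebook cell like this gives the scalar pass/fail summary; the visual inspection discussed above remains the backstop.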
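The validity-interval bookkeeping noted above (each certified calibration carries a valid time interval) can be illustrated with a small sketch. The records and the find_calibration helper are invented for illustration; in practice the butler performs this lookup itself.

```python
from datetime import datetime

# Toy model of validity-range bookkeeping: each certified calibration
# has a [begin, end) validity interval, and an exposure picks up the
# calibration whose interval contains its observation time.
# Collection names and dates below are made up.

calibrations = [
    {"run": "bias/20220301", "begin": datetime(2022, 3, 1), "end": datetime(2022, 3, 8)},
    {"run": "bias/20220308", "begin": datetime(2022, 3, 8), "end": datetime(2022, 3, 15)},
]

def find_calibration(obs_time, calibs):
    """Return the run whose validity interval contains obs_time, else None."""
    for c in calibs:
        if c["begin"] <= obs_time < c["end"]:
            return c["run"]
    return None

print(find_calibration(datetime(2022, 3, 10), calibrations))  # bias/20220308
```

Preemptively making calibrations to capture trends then amounts to certifying new runs whose intervals cover the anticipated dates.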

CI status, bugs, issues of the week



Faro Development status 
  • Spring 2022 epic 
  • Backlog epic: 
  • Progress on conversion to parquet

Data processing campaigns  

  • AT data reduction


Potential co-working session ideas here

Next meeting