...

Item | Who | Pre-meeting notes | Notes and Action Items
News, announcements and task review



Tasks 

  • Gen 3 results in the CMR this time, with a note that this is the last time. Will probably continue this until we have full Gen 3 pipeline parity.
  • Schedule Dan from early August. 
  • RHL has a draft of a notebook from Chris - will pass on details to Simon when there are more.

AO for community engagement with commissioning out - good interest.

  • Interest from DESC. See #desc-announce; not sure about other SCs yet.

Bugs, issues, topics, etc of the week

All
  • Use of logger in faro. Some guidelines have emerged from work on DM-31016:
    • Do not use lsst.log; use Python 'logging' instead. See an example in 'separations.py':
      • import logging
        log = logging.getLogger(__name__)
        log.debug("No matching visits found for objs: %d and %d", obj1, obj2)
    • Except if a class is using 'Task' - then the logger attached to each 'Task' should be used, e.g. 'self.log.info()'. See example in 'TractMeasurementTask':
      • class TExTask(Task):
            ...
            self.log.info("Measuring %s", metricName)
    • Do not use f-strings in any log call (sad). 'Task' has been set to use % formatting, and at the log handler level f-string and % formatting cannot be mixed. This applies only to logging; f-strings can be used elsewhere.
    • Use self.log.info("my message %s", message) and not self.log.info("my message %s" % message).
    • See also RFC-783
  • The developer guide does not explain all of this 
  • Define a convention for logger names, e.g. "faro.method"
  • Do not use print statements at all in faro
  • Leanne Guy: Create a ticket to update the developer guide.
Reprocessing status and metrics review  

Dashboards can be found here: https://chronograf-demo.lsst.codes/sources/2/dashboards

Monthly reprocessing: 

  • DRP metrics monthly Gen 2 (HSC RC2)
  • Where is the Gen3 squash dashboard - follow up with Yusra
  • Get the new test dataset running and metrics into a dashboard (or add to existing) in squash 
  • Still no faro metrics being written to squash  – need to address the bug first. 
  • Specify dates in dispatch_verify (lsst_verify) for upload to squash. The default is the time the job was run. Lack of provenance.
  • What about a sandbox dashboard to test dispatch_verify/dev? 
  • Simon Krughoff: to ask what happened to the sandbox dashboard.
Development status 
  • Fall 2021 epic: DM-30748
  • Backlog epic: DM-29525
  • DM-31013: NaNs being reported by all metrics. Problem with validation datasets - FGCM format changed? Jeff will look at it.
  • Keith has been working on switching to parquet tables (DM-26214).
    • Has switched to the src table for single-detector metrics. Very fast. Per-visit and per-detector src catalogs only so far.
    • Implemented as additional base classes rather than replacing the old ones.
    • Complete this PR first, then move on to using parquet in the remainder of the code.
  • DM-30748: Reference catalogs
    • Gen 3 magic - will give the dataIds associated with the shards that overlap the spatial region of the quantum being computed, so there is no need to load the whole reference catalog! Access the spatial info and the timestamp of the visit from the dataId associated with the quantum. Can then apply the correct proper motion.
    • Can now proceed to implementing all metrics that require comparison to external catalogs.
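The proper-motion step mentioned above can be sketched generically. This is a simplified linear model (not the pipeline's actual code): it shifts a reference-catalog position from its catalog epoch to the visit epoch, assuming the RA proper motion already includes the cos(dec) factor (mu_alpha*, as in Gaia), in mas/yr.

```python
import math


def apply_proper_motion(ra_deg, dec_deg, pm_ra_masyr, pm_dec_masyr,
                        cat_epoch_yr, visit_epoch_yr):
    """Linearly propagate a catalog position to the visit epoch.

    pm_ra_masyr is assumed to include the cos(dec) factor (mu_alpha*).
    Returns the (ra, dec) in degrees at the visit epoch.
    """
    dt = visit_epoch_yr - cat_epoch_yr        # years between epochs
    mas_to_deg = 1.0 / (3600.0 * 1000.0)      # milliarcsec -> degrees
    # Undo the cos(dec) projection to get the shift in RA coordinate.
    ra = ra_deg + pm_ra_masyr * dt * mas_to_deg / math.cos(math.radians(dec_deg))
    dec = dec_deg + pm_dec_masyr * dt * mas_to_deg
    return ra, dec
```

In the Gen 3 flow described above, `visit_epoch_yr` would come from the timestamp carried by the quantum's dataId, and the catalog epoch and proper motions from the loaded reference shard.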
  • Simon has run the DRP pipelines on a reduced dataset. Can get through coadd and coadd measurement. One patch with complete coverage in all bands. The issue is time - 240 mins for SFP, so maybe not a nightly reprocessing. Deblending and measurement are time consuming. So we have a coherent dataset that can be run fully, but time is an issue.
    • Starting repo is 15 GB.
AOB: Next meeting 2021-07-06

...