Date

July 8, 2014 12:00-2:00 PDT

< previous (2014-07-01) | next (...) >

Attendees


Goals

Discussion Items

Item | Who | Notes
Design Deep-Dive | KTL
  • Discussed the Level 1 designs: Level 1 Calibrated Exposure Processing, and Level 1 Difference Imaging and Moving Object Processing
  • Is ISR suitable for wavefront sensors? (They're out of focus, among other things.) Gregory to ask Chuck (Bo/Srini).
  • Unclear how well we need to know the PSF for L1.
  • We can use data from L1 or a previous DR to set initial conditions, but the provenance must then point to that specific data.
  • Use multifit outputs for DCR-corrected template generation?
  • Filtering false positives?
  • Where/when do glints and other artifacts get masked?
  • Where do fakes get inserted and how? And how do we make sure they're flagged appropriately in the outputs? (See the sketch after this list.)
    • Do we have to insert into the template as well as the image?
    • Do we have to process faked regions twice?
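
To make the fake-insertion question concrete, here is a minimal sketch (plain numpy, not LSST stack code; the FAKE mask bit and both function names are hypothetical) of one way fakes could be injected into an image and then flagged in the detection outputs:

# Illustrative only: inject a Gaussian fake at each requested position, set a
# dedicated FAKE mask bit on the affected pixels, and mark any detection whose
# centroid lands on a FAKE pixel. All names here are hypothetical.
import numpy as np

FAKE_BIT = 1 << 5  # hypothetical mask-plane bit reserved for injected fakes


def inject_fakes(image, mask, fakes, fwhm_pix=3.0):
    """Add Gaussian fakes to `image` and set FAKE_BIT in `mask` around each."""
    sigma = fwhm_pix / 2.355
    yy, xx = np.mgrid[0:image.shape[0], 0:image.shape[1]]
    for x0, y0, flux in fakes:
        profile = np.exp(-((xx - x0) ** 2 + (yy - y0) ** 2) / (2 * sigma ** 2))
        profile *= flux / profile.sum()
        image += profile
        # Flag pixels that received a non-negligible fraction of the fake's flux.
        mask[profile > 1e-3 * profile.max()] |= FAKE_BIT
    return image, mask


def flag_fake_detections(detections, mask):
    """Mark detections whose centroid falls on a FAKE-masked pixel."""
    flagged = []
    for det in detections:
        x, y = int(round(det["x"])), int(round(det["y"]))
        flagged.append(dict(det, is_fake=bool(mask[y, x] & FAKE_BIT)))
    return flagged


if __name__ == "__main__":
    img = np.random.normal(0.0, 1.0, (64, 64))
    msk = np.zeros_like(img, dtype=np.int32)
    img, msk = inject_fakes(img, msk, fakes=[(20, 30, 500.0), (45, 10, 800.0)])
    dets = [{"x": 20.3, "y": 29.8}, {"x": 5.0, "y": 5.0}]
    print(flag_fake_detections(dets, msk))

Whether the same injection would also have to be applied to the template, and whether faked regions would then need to be processed twice, are exactly the open questions above; the sketch only shows how a dedicated mask bit could carry the fake flag through to the outputs.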

Data Storage | KTL
  • Conditions/calibrations (bitemporal data; see the sketch after this list):
    • Butler handles all queries.
      • But we still need to choose the technologies underlying the butler.
    • Gregory has a preference against treating numbers as code, but that preference doesn't take git into account.
      • git is not a conditions database.
    • For data intended for tests, there is a third dimension beyond bitemporality: versioning.
    • Need at least a metadata database, even if the data itself is not stored in a database.
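
As a concrete illustration of bitemporal conditions/calibrations metadata, here is a minimal sketch assuming a simple SQLite schema (all table and column names are hypothetical, not a proposed design): each record carries a validity interval (when the calibration applies), an ingest time (when the record was entered), and an explicit version as the third axis for test data, while the calibration files themselves stay outside the database and are only indexed by URI.

# Illustrative bitemporal metadata lookup; schema and names are hypothetical.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE calib_meta (
        dataset_type TEXT,      -- e.g. 'bias', 'flat'
        sensor       TEXT,
        valid_start  TEXT,      -- validity interval (observation time)
        valid_end    TEXT,
        ingested_at  TEXT,      -- transaction time (when record was added)
        version      TEXT,      -- third axis, mainly for test data
        uri          TEXT       -- location of the actual file
    )
""")

conn.executemany(
    "INSERT INTO calib_meta VALUES (?, ?, ?, ?, ?, ?, ?)",
    [
        ("flat", "R22_S11", "2014-06-01", "2014-07-01", "2014-06-02", "v1",
         "calib/flat/R22_S11_v1.fits"),
        ("flat", "R22_S11", "2014-06-01", "2014-07-01", "2014-06-20", "v2",
         "calib/flat/R22_S11_v2.fits"),
    ],
)


def lookup(dataset_type, sensor, obs_time, as_of, version=None):
    """Find the calibration valid at `obs_time`, as known at `as_of`."""
    query = (
        "SELECT uri FROM calib_meta WHERE dataset_type=? AND sensor=? "
        "AND valid_start<=? AND ?<valid_end AND ingested_at<=?"
    )
    args = [dataset_type, sensor, obs_time, obs_time, as_of]
    if version is not None:
        query += " AND version=?"
        args.append(version)
    query += " ORDER BY ingested_at DESC LIMIT 1"
    row = conn.execute(query, args).fetchone()
    return row[0] if row else None


# As known on 2014-06-10 only v1 existed; by 2014-06-25, v2 supersedes it.
print(lookup("flat", "R22_S11", "2014-06-15", as_of="2014-06-10"))
print(lookup("flat", "R22_S11", "2014-06-15", as_of="2014-06-25"))

The butler could answer such queries through its own interface; this sketch only illustrates the kind of underlying technology, and the kind of metadata-only database, that still has to be chosen.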

Action Items

  • Gregory Dubois-Felsmann: Ask Chuck (Bo/Srini) whether ISR as defined for science sensors is suitable for wavefront sensors.
  • Kian-Tat Lim: Update Confluence pages with corrections from RHL and questions from this meeting.