Attendees: Mikolaj Kowalik, Tim Jenness, Robert Gruendl, John Parejko, Michelle Gower, Andy Salnikov, mbutler, Monika Adamow, Jim Bosch, Steve Pietrowicz

Status

  • Tim Jenness : Finished review of DM-22655. Starting to look at Jim's DM-21849 collection rewrite.
  • Andy Salnikov : Finishing APDB for this month. Working on time handling in April. Pondering how to represent timestamps.
  • Jim Bosch : DM-21849 is dormant for now; ci_hsc_gen3 still needs fixing. Now working on slides.
  • Robert Gruendl : Working on milestone definitions.
  • Michelle Gower : Nothing currently broken on Oracle.
  • John Parejko : Going through review of DM-22655. Still has not been able to do a full test run of converted DECam data; there is an issue with filter handling.

AOB

  • John Parejko reported that it took him a month to get up to speed, so it is really hard to bring people in to help quickly.
  • Milestones
    • We have enough for testing pipeline tasks, but not enough code to use Gen3 on its own: everything starts from a Gen2 conversion rather than a native Gen3 repo, documentation is lacking, and there are no tools for ingest.
    • Milestone 1: Gen3 ready for pipeline developers.
    • Milestone 2: Gen3 ready for friendly users.
    • Milestone 3: Deprecate Gen2.
    • Repo conversion unblocks many people in the short term.
    • Ask pipeline developers what is slowing them down when using Gen3.

  • Talked over Yusra's email response on ctrl_pool vs. BPS
    • Pipeline configurations and software versions need to be written out somewhere. Software versions should be recorded once per run; maybe each task has to write out its config. These have no dataId as such, so one per run, like writing out schema files (see the first sketch at the end of this section).
      • Action: Andy Salnikov to work out how to deal with config and software versions in the executor and create a ticket.
    • Task metadata needs a dataId that is the same as the quantum's dataId. One dataset type per task?
    • Since the repo will be shared, we need to ponder permissions of files written with datastore.put.
    • Where do log files go? Tim Jenness doesn't want them to be butler datasets. Maybe an ELK system?
  • Michelle Gower : raw IDs. Data Backbone ingest has multiple endpoints, so the same raw file may be ingested at multiple locations: a provenance nightmare. The only workable approach is to stop using auto-increment integers as the ID and include origin IDs (see the second sketch below).
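
As a concrete illustration of the per-run vs. per-quantum split discussed above, here is a minimal, hypothetical Python sketch. It deliberately does not use the real daf_butler API; the file layout, function names, and dataId contents are all invented for illustration. Software versions are written once per run with no dataId, while each task writes one metadata dataset per quantum, keyed by the quantum's dataId.

```python
# Hypothetical sketch only -- not the real daf_butler API. File names,
# layout, and helpers are invented to illustrate the discussion above.
import json
import platform
import sys
from pathlib import Path


def write_software_versions(run_dir: Path) -> None:
    # Written once per RUN; has no dataId, like the per-run schema files.
    versions = {
        "python": platform.python_version(),
        "modules": {
            name: str(getattr(mod, "__version__"))
            for name, mod in list(sys.modules.items())
            if getattr(mod, "__version__", None) is not None
        },
    }
    (run_dir / "software_versions.json").write_text(json.dumps(versions, indent=2))


def write_task_metadata(run_dir: Path, task: str, data_id: dict, meta: dict) -> None:
    # One dataset type per task ("<task>_metadata"), one dataset per quantum,
    # keyed by the quantum's dataId.
    key = "_".join(f"{k}-{v}" for k, v in sorted(data_id.items()))
    path = run_dir / f"{task}_metadata_{key}.json"
    path.write_text(json.dumps({"dataId": data_id, **meta}, indent=2))


run = Path("runs/example_run")
run.mkdir(parents=True, exist_ok=True)
write_software_versions(run)  # once per run
write_task_metadata(run, "isr", {"exposure": 903334, "detector": 42},
                    {"duration_s": 12.3})  # once per quantum
```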
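And a minimal sketch of the origin-ID idea, using an in-memory SQLite table as a stand-in for the registry schema (the table layout here is invented, not the actual Gen3 schema): a composite (origin, id) primary key lets each ingestion endpoint assign IDs independently, so the same raw file ingested at two sites does not collide.

```python
# Hypothetical sketch: replace a single auto-increment integer ID with a
# composite (origin, id) key so independent Data Backbone endpoints can
# ingest the same raw file without colliding.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute(
    """
    CREATE TABLE raw (
        origin   INTEGER NOT NULL,  -- unique per ingestion endpoint
        id       INTEGER NOT NULL,  -- assigned locally at that endpoint
        filename TEXT NOT NULL,
        PRIMARY KEY (origin, id)    -- no global auto-increment
    )
    """
)
# Two sites assign id=1 independently; the origin column keeps them distinct.
conn.execute("INSERT INTO raw VALUES (1, 1, 'raw_000001.fits')")  # site 1
conn.execute("INSERT INTO raw VALUES (2, 1, 'raw_000001.fits')")  # site 2
for row in conn.execute("SELECT origin, id, filename FROM raw ORDER BY origin"):
    print(row)
```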
