July 13 at 14:00 PT on ls.st/wom

Attendees

Open to all 

Discussion Items


 https://dmtn-223.lsst.io/ 

Wil wants to know: what do we need in order to declare that we meet the requirements? Does DMTN-202 introduce new requirements we cannot afford?

Tim: For image processing, increasing levels of difficulty:


KT questions:

  1. If we use BPS, are we forcing people to write PipelineTasks? Should we demonstrate how to wrap Astromatic tools in one (see the sketch after this list)? Can people use IVOA interfaces instead of the Butler?
    1. DMTN-202 1.1 says "community-standard bulk-numeric-data-processing frameworks"
    2. DMTN-202 1.2 says "Others will wish to use community image processing tools"
  2. If batch is at USDF, how is it authenticated/authorized? Long-lived submission token? Login-session-length token? Does BPS or the underlying workflow system need to understand refresh tokens?
  3. If batch is at USDF do we still use the Butler server in CloudDF or is there something local? How is authentication/authorization for the Butler handled from batch jobs?
  4. Should we mention GPUs? (At CloudDF only?)
  5. Is single-ADQL-query use of external-catalog matches and data sufficiently motivating to bring them into Qserv, or are identifier lists for remote, user-programmed joins adequate? (See the query sketch after this list.)
  6. The Resource Allocation Committee should determine which external catalogs deserve space at USDF. Should they define scientifically-useful cross-match processing?
  7. The document (in §4.3) still seems to say Qserv is meeting all catalog processing requirements.
  8. 9% (81 of 892) of NCSA home directories had more than 10 GB on 2022-02-07.

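On item 5, a single-ADQL-query cross-match would look roughly like the sketch below, submitted via TAP (here with pyvo). This only works if the external catalog has been ingested alongside the Rubin tables; the TAP endpoint and the external table and column names are assumptions for illustration.

    # Sketch of a single-query cross-match against an ingested external catalog.
    import pyvo

    # Assumed TAP endpoint; "external.gaia_dr3" below is a hypothetical table.
    service = pyvo.dal.TAPService("https://data.lsst.cloud/api/tap")

    query = """
    SELECT obj.objectId, obj.coord_ra, obj.coord_dec, ext.source_id
    FROM dp02_dc2_catalogs.Object AS obj
    JOIN external.gaia_dr3 AS ext
      ON CONTAINS(POINT('ICRS', obj.coord_ra, obj.coord_dec),
                  CIRCLE('ICRS', ext.ra, ext.dec, 1.0/3600.0)) = 1
    WHERE obj.detect_isPrimary = 1
    """

    matches = service.search(query).to_table()
    print(len(matches), "matched rows")

The identifier-list alternative in item 5 would instead pull objectId lists out of Qserv and have the user perform the join against the external catalog remotely.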

SRP: users are going to be users. If they have an alternate way of doing something, or a workaround that gives them access, they will use it.


Richard:

PCW - LINCC  https://project.lsst.org/meetings/rubin2022/program/agenda?field_day_value=04

Need some DM people at this - someone for light curves, the Science Platform, and user batch.