Date

Attendees

Minutes

Quick summary, presented by Eric Morganson, of the compression scheme arrived at for DES: FPACK with a quantization level (q) of 16, using the SUBTRACTIVE_DITHER_2 option so that pixels equal to 0 are preserved identically.
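To make the scheme concrete, here is a minimal, self-contained sketch of what subtractive-dither quantization does and why SUBTRACTIVE_DITHER_2 preserves zeros. This is a toy illustration, not the FPACK/CFITSIO implementation: the scale here is simply 1/q (real fpack derives it from the measured image noise divided by q), and the zero sentinel value is an assumption for illustration.

```python
import random

ZERO_SENTINEL = -(2**31) + 1  # toy stand-in for fpack's reserved zero value

def quantize_dither2(pixels, q, seed=0):
    """Toy SUBTRACTIVE_DITHER_2-style quantization.

    Each pixel is scaled, a reproducible random dither in [0, 1) is
    subtracted before rounding, and pixels exactly equal to 0.0 are
    mapped to a reserved integer so they decompress to 0.0 exactly.
    """
    rng = random.Random(seed)     # reproducible dither stream
    scale = 1.0 / q               # toy scale; fpack uses image_noise / q
    out = []
    for p in pixels:
        d = rng.random()          # dither consumed per pixel to stay in sync
        if p == 0.0:
            out.append(ZERO_SENTINEL)   # zeros bypass quantization entirely
        else:
            out.append(round(p / scale - d))
    return out, scale

def dequantize_dither2(ints, scale, seed=0):
    """Invert the toy quantization using the same dither stream."""
    rng = random.Random(seed)
    out = []
    for i in ints:
        d = rng.random()
        if i == ZERO_SENTINEL:
            out.append(0.0)       # exact zero restored
        else:
            out.append((i + d) * scale)
    return out
```

The key property: non-zero pixels round-trip to within half a quantization step (0.5/q in this sketch), while true zeros round-trip exactly, which matters for masked or blanked pixels in the single-epoch products.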

The discussion that followed focused on that processing and on what benchmarks had been gathered:

  1. how/when this compression occurs within the DES pipelines,
  2. how those data might be used/re-used in subsequent analyses
    1. the DES model compresses after the single-epoch pipeline (detrending, astrometry, masking, cataloging) has completed
    2. compressed products are currently used as inputs to the subsequent coaddition pipeline; they also form the basis for the products later used in multi-object, multi-band, multi-epoch (MOF) fitting and for weak-lensing (WL) shape estimates (metacal and im3shape)
  3. whether timing benchmarks had been measured (they had not)
    1. with regard to this point, it was noted that if compression were to take place within the alert-processing pipeline, timing benchmarks would be needed, since compression could be a significant blocking issue
    2. it was also noted that compression requirements could differ across processing cases (and that, in the case of alert processing, compression could still run as an afterburner, i.e. outside the actual NN-second alert-processing window)

Action items

  • Review these notes...
  • For next week be ready to discuss a proper test set and test bed.
  • Be prepared to discuss which algorithms should be evaluated:
    • gzip
    • bzip
    • hcompress (probably DOA due to boxiness and impact on shape measurement)
    • fpack (RICE with quantization level q, and the analogous szip for HDF5)
    • LZMA2 (xz)
    • add others (or comment on the suitability of those above)
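Since timing benchmarks have not yet been measured, a harness along the following lines could be a starting point for next week's test-bed discussion. This is a hedged sketch using only Python's standard-library codecs (gzip, bzip2, and LZMA, the algorithm behind xz); fpack/RICE and hcompress would need external tools or a FITS library, and the synthetic payload below would be replaced by a representative test set of real pixel data.

```python
import bz2
import gzip
import lzma
import time

def benchmark(codecs, payload):
    """Time compression and verify lossless round-trips for each codec.

    codecs:  list of (name, compress_fn, decompress_fn) tuples
    payload: bytes to compress (a real test set would be pixel data)
    Returns {name: {"ratio": ..., "compress_s": ...}}.
    """
    results = {}
    for name, compress, decompress in codecs:
        t0 = time.perf_counter()
        comp = compress(payload)
        t1 = time.perf_counter()
        assert decompress(comp) == payload  # lossless round-trip check
        results[name] = {
            "ratio": len(payload) / len(comp),   # compression ratio
            "compress_s": t1 - t0,               # wall-clock compress time
        }
    return results

CODECS = [
    ("gzip",       gzip.compress, gzip.decompress),
    ("bzip2",      bz2.compress,  bz2.decompress),
    ("lzma2 (xz)", lzma.compress, lzma.decompress),
]
```

Per-codec compression levels and block sizes also affect both ratio and speed, so a real benchmark matrix would sweep those as well.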