Project/Science Updates
Processing for weak lensing / DESC 

Discussion of processing needed to produce an Object catalog for DESC weak lensing studies. 

JB: We have always intended to take a contribution from the community for shear estimation - this is a presentation of a mature option.

RHL: This work is optimal for weak shear, but what about strong shear? We need to support strong shear as well. Is metadetect good enough for those cases? Weak lensing looks at 1-2% added shear.

ES: In the regime of, say, 10% shear or less, metadetect is probably OK - for a 10% shear you get a ~1% bias because the shear is no longer in the linear regime. Anything higher than that would require specialized code. In that case you have to go back and redo the processing, and then the question becomes: do you do that processing on cell-based coadds or on some other dataset?
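
A toy illustration of the scaling ES describes, under the assumption that the leading correction to a linearized shear response goes as the square of the applied shear; this sketch is not something run for the meeting.

    # Toy sketch (assumption): an estimator calibrated in the linear regime
    # picks up a fractional bias of order g^2 at applied shear g, which
    # reproduces the ~1% figure quoted above for 10% shear.
    for g in (0.02, 0.05, 0.10, 0.20):
        print(f"g = {g:.2f} -> approximate fractional bias ~ {g**2:.2%}")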

RHL: I think that a classic shape measurement on cell coadds will be fine.

ES: All classic shape measurements will have this same limitation – if you go into the really strong lensing regime with the big arcs, specialized code is needed. 

MB: DESC has people working on clustering who are thinking about this, so they might be able to provide input.

LPG: With cell-based coadds you remove images – this will impact the overall m5 coadded survey depth. Do you know how much? 

MB: Not as much as you would think. Bob has numbers for this.

AK: It's about a 5% loss; the details depend on the size of the CCDs, how many satellite trails come through, etc.

JB: No matter what coadd technique we use there will be some loss - there might be a slight tradeoff between SNR and systematics. 
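
A back-of-the-envelope sketch of the depth impact, assuming N comparable exposures for which coadded m5 improves as 1.25 log10(N); the ~5% figure is the number quoted above.

    import math

    # Sketch: keeping a fraction f of the input images changes the coadded
    # 5-sigma depth by roughly 1.25 * log10(f) magnitudes.
    f_kept = 0.95  # ~5% loss, as quoted above
    delta_m5 = 1.25 * math.log10(f_kept)
    print(f"approximate change in coadded m5: {delta_m5:+.3f} mag")  # ~ -0.03 mag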

SR: Verified to what? And how many images were coadded?

MB: We verified to the 10-year Rubin requirements, so better than 0.1% (at 3 sigma) in multiplicative bias, and it works. So far this has only been tested on a small number of images, where small means 1-2. Warping all these images, as well as running enough simulations with all the shape noise and other sources of noise, is very expensive. We believe that the algorithms we have now will work in the low-number-of-images regime, which is where LSST will be in the first few years.
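
For reference, a sketch of how a multiplicative-bias requirement like this is usually phrased: fit g_meas = (1 + m) g_true + c over simulated shears and require |m| < 1e-3 at 3 sigma. The data and variable names below are illustrative assumptions, not the actual verification code.

    import numpy as np

    # Illustrative only: fake true/recovered shears standing in for simulation output.
    rng = np.random.default_rng(42)
    g_true = np.linspace(-0.05, 0.05, 21)
    g_meas = 1.0005 * g_true + rng.normal(scale=1e-5, size=g_true.size)

    # Fit the standard calibration model g_meas = (1 + m) * g_true + c.
    coeffs, cov = np.polyfit(g_true, g_meas, deg=1, cov=True)
    m, m_err = coeffs[0] - 1.0, np.sqrt(cov[0, 0])
    print(f"m = {m:.2e} +/- {m_err:.2e}; meets |m| < 1e-3 at 3 sigma: {abs(m) + 3 * m_err < 1e-3}")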

MB: The PSFs are exact in the sense that there are no edges; this is important - we cannot recover shear without an exact PSF.

CTS: You say you have a noise realization - but only one? 

ES: We can make 10 if we want and put them through the pipeline.

CTS: 10 on the fly? Is that because it is not computationally prohibitive? Or make 10 and store them?

MB: We will probably store ~2, not 10, if we go this route.
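
One way the "on the fly" option could work, as a hedged sketch: store only a per-cell seed and regenerate deterministic noise realizations on demand. The seeding scheme, the identifiers (tract, patch, cell_index), and the white-noise model below are illustrative assumptions, not the pipeline's design.

    import numpy as np

    def make_noise_realizations(tract, patch, cell_index, n_real, shape, sigma):
        """Hypothetical sketch: rebuild noise realizations from a per-cell seed
        rather than storing the noise images themselves (white noise here is a
        simplified stand-in for the real, warped noise)."""
        out = []
        for k in range(n_real):
            seed = np.random.SeedSequence([tract, patch, cell_index, k])
            out.append(np.random.default_rng(seed).normal(scale=sigma, size=shape))
        return out

    # e.g. two realizations for one 250x250-pixel cell (numbers illustrative)
    noise = make_noise_realizations(tract=9813, patch=40, cell_index=17,
                                    n_real=2, shape=(250, 250), sigma=1.0)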

JB: Eventually we want to use the same warps for these and for other coadds.

CTS: Time/space tradeoffs - we don't need full precision for noise; can we use compression? This is done in DES to some level.
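
A sketch of what lossy compression of a noise plane could look like, assuming astropy's FITS tile compression; the quantization level and file name are illustrative choices, not decisions from the meeting.

    import numpy as np
    from astropy.io import fits

    # Noise planes do not need full float32 precision, so lossy tile
    # compression (RICE + quantization) can shrink them substantially.
    noise = np.random.default_rng(0).normal(size=(250, 250)).astype(np.float32)
    hdu = fits.CompImageHDU(data=noise, compression_type="RICE_1",
                            quantize_level=8.0)  # coarse quantization (assumed value)
    hdu.writeto("noise_cell.fits.fz", overwrite=True)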

JB: The six catalogs are narrow but have 5-6 times the number of rows.

LPG: Do we need to serve them in Qserv? Can we use Parquet? What are the access patterns for these catalogs?
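
For context on access patterns, a minimal sketch of column-oriented Parquet access (pyarrow shown; the file name, column names, and the "1p" variant label are illustrative assumptions).

    import pyarrow.parquet as pq

    # Read only the columns a weak-lensing analysis needs, not the whole table.
    columns = ["objectId", "g1", "g2", "shear_type"]
    table = pq.read_table("shear_object_tract9813.parquet", columns=columns)

    # Predicate pushdown, e.g. select one sheared variant of the catalog.
    sheared_1p = pq.read_table("shear_object_tract9813.parquet",
                               columns=columns,
                               filters=[("shear_type", "=", "1p")])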

MBL: We do not need to do forced photometry back on the images.

LPG: Do you plan to use cell-based coadds for the difference image templates?

EB: We don't have good input data on that yet - a bit risky at this stage.

  • Jim Bosch: Follow up with the Strong Lensing people (WL cluster people) on the suitability of this work for strong lensing.
  • Jim Bosch: Understand the usage patterns for the ShearObject catalogs.
  • Leanne Guy: Schedule a follow-up to this discussion to define a conceptual schema for the metadetect catalogs, to add to the DPDD.
AOB

Next meeting is:   


