
Overview

This page attempts to capture, at a high level, the software and algorithm development necessary to implement the processing of objects detected at full survey depth (as of a particular data release), including the detection, deblending, and measurement of sources too faint to be detected in any individual visit.  The algorithms to be used here are generally poorly understood; we have many options for extending well-understood algorithms for processing single-epoch data to multi-epoch data, and considerable research is needed to find the right balance between computational and scientific performance in doing so.  Unfortunately, different algorithmic options may require vastly different parallelization and data flow, so we cannot yet make assertions about even the high-level interfaces and structure of the code.  We do, however, have a good understanding of most of the needed low-level algorithms, so our goal should be to implement these as reusable components that will allow us to quickly explore different algorithmic options.  This will also require early access to parallelization interfaces, test data, and analysis tools that will be developed outside the DRP algorithms team.

Inputs

  • Calibrated Exposures from Visit Processing
  • Final relative astrometric calibration
  • Final relative photometric calibration
  • Moving and transient sources from Image Differencing and MOPS*
  • External Catalogs (e.g. Level 3 inputs or known bright stars)

* We don't need Image Differencing outputs to start the Deep Processing (e.g. we can do Image Coaddition first), and there may be some value to doing the DRP Image Differencing at the same time as some parts of the Deep Processing (Deep Background Modeling, in particular).

Stages/Components

In rough order; the exact flow is very much TBD.

Image Coaddition

We'll almost certainly need some sort of coadded image to detect faint sources and to do at least preliminary deblending and measurement.  We'll use at least most of the same code to generate templates for Image Differencing.

Because there's no single coadd that best represents all the data it summarizes, there are several different kinds of coadds we may want to produce.  It's best to think of coaddition as a major piece of the algorithms below, but one which will be executed up front and reused by different stages of the processing; while there will be algorithmic research involved in determining which kinds of coadds we build, that research effort will be covered under the detection, deblending, and measurement sections, below.

All coadds will be produced by roughly the following steps:

  1. Warp to the coadd coordinate system
  2. Convolution (coadd type dependent)
  3. Scale fluxes to coadd photometric system
  4. Stacking
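
The following is a minimal, illustrative sketch of these four steps in plain Python (numpy/scipy), assuming that per-exposure pixel mappings, kernels, and zero points are supplied from elsewhere; the function names and defaults are hypothetical placeholders, not the actual pipeline tasks.

    import numpy as np
    from scipy.ndimage import map_coordinates
    from scipy.signal import fftconvolve

    def warp_to_coadd(image, pixel_mapping):
        # Step 1: resample an exposure onto the coadd pixel grid.
        # pixel_mapping is a (2, ny, nx) array giving, for each coadd pixel,
        # the corresponding (y, x) position in the input exposure.
        return map_coordinates(image, pixel_mapping, order=3,
                               mode="constant", cval=np.nan)

    def convolve_for_coadd(image, kernel):
        # Step 2: optional convolution (e.g. a PSF-matching kernel); skipped
        # entirely for direct coadds.
        return fftconvolve(image, kernel, mode="same")

    def scale_to_coadd_zeropoint(image, zero_point, coadd_zero_point=27.0):
        # Step 3: rescale fluxes onto a common photometric system.
        return image * 10.0 ** (-0.4 * (zero_point - coadd_zero_point))

    def stack(images, weights):
        # Step 4: weighted mean stack, ignoring NaN (off-image) pixels.
        images = np.asarray(images, dtype=float)
        weights = np.asarray(weights, dtype=float)[:, None, None] * np.isfinite(images)
        return np.nansum(weights * images, axis=0) / weights.sum(axis=0)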

The overall processing flow for all coadds will likely be the same (though the final processing flow may or may not match our current prototype), and will share most of the same code.

Types of Coadds

  • Direct: no convolution is performed, which further requires that little or no rejection be used in the stacking phase.  PSF models and aperture corrections must be propagated through the coaddition process, as there will be discontinuities on the coadd image that make determining these quantities from the coadd essentially impossible.  This is the coadd type most similar to a single long exposure, and we can run any of our usual single-frame algorithms on it with relatively little modification.  It will have correlated noise due to warping, but less than any other type of coadd.  Input images can be weighted by exposure time or photometric quality, but no optimal weighting exists in the presence of different seeing in different exposures; some information from the best exposures is always lost.
  • PSF-Matched: convolve with a kernel that makes the PSF on the coadd constant.  Noise on the coadd is highly correlated, and a tradeoff must be made between the desired image quality on the coadd and the exposures that can be included; exposures with a PSF more than a little broader than the target PSF cannot be matched.  A large amount of information from the best exposures will be lost, making this a poor choice for deblending or blended measurement.  Single-frame algorithms that are not influenced by correlated noise can be applied as usual, and the measurement of colors is considerably easier, as the target PSF will almost certainly be the same in all bands.  Unlike direct or likelihood coadds, PSF-matched coadds may be created using a stack that does outlier-rejection.
  • Likelihood: convolve with a kernel that creates a map of the likelihood of a source (of a given morphology) being centered at each pixel: this is a generalization of the "Kaiser coadd", which is a map of the likelihood of a point source centered at each pixel, created by correlating each exposure with its own PSF (a minimal sketch of this point-source case follows the list).  For a given morphology, this coadd is optimal for detecting sources with that morphology.  This clearly makes a point source likelihood coadd an excellent choice for point source detection, and we might be able to find a computationally feasible way to build likelihood coadds that allow us to pick up more extended sources (without being strictly optimal for these).  Naively, the noise on a likelihood coadd will be very correlated, but it also has a very different meaning from the noise on other coadds, and it is not clear whether or not this will require a modified version of the single-frame detection algorithm.  Existing single-frame algorithms for deblending and measurement cannot be run on a likelihood coadd, though at least in some cases an equivalent algorithm may exist.  Formally, all exposures can be included in a likelihood coadd without degrading the result, and there is no trade-off between depth and image quality, but of course this still requires that all inputs have good PSF models and astrometric and photometric calibration.
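
As a concrete illustration of the point-source case, the sketch below builds a Kaiser-style likelihood map from exposures assumed to be already warped to the coadd grid and scaled to a common photometric system; the function name and the per-exposure scalar variances are simplifying assumptions, not the planned implementation.

    import numpy as np
    from scipy.signal import fftconvolve

    def point_source_likelihood_coadd(images, psfs, variances):
        # Correlate each exposure with its own PSF (correlation is convolution
        # with the flipped kernel), weight by inverse variance, and sum.  The
        # result is proportional to the likelihood of a point source centered
        # at each pixel -- it is not an ordinary flux image, and a matching
        # effective-PSF normalization would be needed to interpret it further.
        likelihood = np.zeros_like(images[0], dtype=float)
        for image, psf, var in zip(images, psfs, variances):
            likelihood += fftconvolve(image, psf[::-1, ::-1], mode="same") / var
        return likelihood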

For any of these coadd types, we may also want to coadd across bands, either to produce a χ2 coadd or a coadd weighted to match a given source spectrum.  It should be possible to create any multi-band coadd by adding single-band coadds; we do not anticipate having to go back to individual exposures to create multi-band coadds.
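
As an illustration of the two multi-band options mentioned above, the sketch below combines per-band coadds into either a χ2 detection image (in the spirit of the Szalay et al. technique) or a spectrum-weighted linear combination; the per-band scalar noise levels and function names are assumptions made for the sake of the example.

    import numpy as np

    def chi_squared_coadd(band_coadds, band_sigmas):
        # Sum of squared signal-to-noise images across bands; where there is
        # no source this follows (approximately) a chi-squared distribution
        # with one degree of freedom per band.
        return sum((coadd / sigma) ** 2
                   for coadd, sigma in zip(band_coadds, band_sigmas))

    def spectrum_weighted_coadd(band_coadds, band_sigmas, source_fluxes):
        # Linear combination weighted to match an assumed per-band source
        # spectrum; optimal for sources with that spectrum.
        weights = np.asarray(source_fluxes) / np.asarray(band_sigmas) ** 2
        return sum(w * coadd for w, coadd in zip(weights, band_coadds))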

Status/Challenges

We have a relatively mature pipeline for direct coadds that should only require minor modification to produce at least crude PSF-matched coadds.  We need to finish (fix/reimplement) the PSF-matched coadds and do some testing to verify that they're working properly.  We haven't put any effort at all into likelihood coadds yet, and we need to completely reimplement our approach to χ2 and other multi-band coadds.

Our propagation of mask pixels to coadds needs to be reviewed and probably reconsidered in some respects.

Our stacker class is clumsy and inflexible, and needs to be rewritten.  This is a prerequisite for really cleaning up the handling of masks.

The coadd datasets are a bit of a mess, and need a redesign.

The coaddition processing is currently split into two separate tasks to allow for different parallelization axes, and the stacking step is parallelized at a coarser level than would be desirable for rapid development (though it may be acceptable for production).  Background matching may also play a role in setting the data flow and parallelization for coaddition, as it shares some of the processing.

Choices about which images to include in a coadd are still largely human-directed, and need to be automated fully.

We have a flexible system for specifying coordinate systems, but we have not yet done a detailed exploration of which projection(s) we should use in production, or determined the best scales for the "tract" and "patch" levels.  We have no plans yet for how to deal with tract-level overlaps, though it is likely this will occur at the catalog level, rather than in the images.

We have put no effort so far into analyzing and mitigating the effects of correlated noise, which will become more important when we regularly deal with more than just direct coadds (but may be important even for these).  A major question here is how well we can propagate the noise covariance matrix; exactly propagating all uncertainty information is not computationally feasible, but there are several proposed approximation methods that are likely to be sufficient.
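
To make the cost concrete: warping and convolution are linear operations, so if an input exposure has pixel covariance matrix C and W is the combined resampling/convolution operator for that exposure, its contribution to the coadd has covariance

    C_coadd = W C W^T

which has non-negligible off-diagonal terms between pixels separated by up to the width of the warping and convolution kernels.  Storing and propagating this exactly for every coadd pixel is what makes the full calculation infeasible; any approximation (one illustrative possibility: keeping only per-pixel variances plus a spatially averaged correlation model) trades accuracy for storage and compute.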

Dependencies

  • New Butler: Needed to clean up coadd datasets and existing tasks.
  • Parallelization Middleware: we can get by with the current tasks during algorithm development, but we'll need slightly more sophisticated parallelization to implement production-ready pipelines, and development work would be sped up by having finer-grained (i.e. sub-patch) parallelization in the stacking step.
  • Deep Background Modeling: we need to at least have a sense of the processing flow this will require before working on production-ready pipelines.

Effort Required

The algorithms here are relatively well-understood, at least formally, and most of the research work is either in coming up with the appropriate shortcuts to take (e.g. in propagating noise covariance or handling masks) or is covered below in detection/deblending/measurement.  Almost all of the coding we need to do for coaddition is pure Python, with the only real exception being the stacker (unless we need to make drastic changes to the convolution or warping code to improve computational performance or deal with noise covariances).

Scheduling

PSF-matched coadds are a requirement for any serious look into options for estimating colors.  The measurement codes themselves need some additional testing before this becomes a blocker, however.

Likelihood coadds should be implemented as-needed when researching deep detection algorithms.

Dataset and task cleanup should be done as soon as the new butler is available.

The stacker rewrite and mask propagation fixes should be done before we spend too much time investigating measurement algorithm behavior on coadds; starting with sane pixel masks will make our lives much easier there.

Noise covariance work may need to come early in detection algorithm research, but if not, it's a relatively low priority: it may not matter unless we decide we need PSF-matched coadds for colors and determine that the measurement algorithms we want to use there are affected by correlated noise.

Deep Background Modeling

Algorithm

Traditional background modeling involves estimating and subtracting the background from individual exposures separately.  While this will still be necessary for visit-level processing, for deep processing we can use a better approach.  We start by PSF-matching and warping all but one of the N input exposures (on a patch of sky) to match the final, reference exposure, and then difference each of them against the reference, using much of the same code we use for Image Differencing.  We then model the backgrounds of the N-1 difference images; depending on the quality of the PSF-matching, the astrophysical signal largely cancels in these differences, so we can fit the instrumental background without interference from astrophysical backgrounds.  We can then combine all N original exposures and subtract the N-1 background difference models, producing a coadd that contains the full-depth astrophysical signal from all exposures but an instrumental background for just the reference exposure.  That final background can be modeled and subtracted using traditional methods, while taking advantage of the higher signal-to-noise ratio of the sources in the coadd.  Finally, we can compute an improved background model for any of the individual exposures as the combination of its difference background relative to the reference and the background model for the reference.
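
A schematic numpy sketch of this procedure is below; the simple polynomial background model, the assumption that the non-reference exposures are already PSF-matched and warped onto the reference frame, and all names are illustrative assumptions rather than the actual pipeline implementation.

    import numpy as np

    def fit_smooth_background(image, order=2):
        # Fit a low-order 2-D polynomial to an image -- a stand-in for
        # whatever smooth background model is ultimately adopted.
        ny, nx = image.shape
        y, x = np.mgrid[0:ny, 0:nx]
        terms = [x**i * y**j for i in range(order + 1) for j in range(order + 1 - i)]
        A = np.vstack([t.ravel() for t in terms]).T.astype(float)
        coeffs, *_ = np.linalg.lstsq(A, image.ravel(), rcond=None)
        return (A @ coeffs).reshape(ny, nx)

    def background_matched_coadd(matched_exposures, reference):
        # matched_exposures: the N-1 non-reference exposures, already
        # PSF-matched and warped onto the reference frame.
        diff_backgrounds = [
            fit_smooth_background(exp - reference)  # astrophysical signal cancels
            for exp in matched_exposures
        ]
        coadd = reference + sum(matched_exposures) - sum(diff_backgrounds)
        # The (unweighted) sum now contains the full-depth astrophysical
        # signal but only the reference exposure's instrumental background
        # pattern, which can be modeled and subtracted at higher S/N.
        return coadd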

Status/Challenges

We have prototype code that works well for SDSS data, but experiments on the HSC side have shown that processing non-drift-scan data is considerably more difficult.  One major challenge is selecting/creating a seamless reference exposure across amplifier, sensor, and visit boundaries, especially in the presence of gain and linearity variations.  We also need to think about how the flat-fielding and photometric calibration algorithms interact with background matching: because the sky has a different color than the sources, no single photometric solution can simultaneously produce both a seamless sky and correct photometry - and we also need to be able to generate the final photometric calibration using measurements that make use of only the cruder visit-level background models.

The problem of generating a seamless reference image across the whole sky is very similar to the problem of building a template image for Image Differencing, and in fact the Image Differencing template or some other previously-built coadd may be a better choice for the reference image, once those coadds become available (this would require a small modification to the algorithm summarized above).  Of course, in this case, the problem of bootstrapping those coadds would still remain.

We also don't yet have a good sense of where background matching belongs in the overall processing flow.  It seems to share many intermediates with either Image Coaddition or Image Differencing, depending on whether the background-difference fitting is done in the coadd coordinate system (most likely; shared work with coaddition) or the original exposure frame (less likely; shared work with Image Differencing).  It is also unclear what spatial scale the modeling needs to be done at, which could affect how we would want to parallelize it.

Dependencies

  • Photometric Self-Calibration: the problems we're likely to see when background-matching fully-calibrated inputs may be completely different from those we see today, so we need to have this in place before we settle on a final algorithm.
  • High-Quality ISR for Precursor Datasets: performance will depend on the details of both the astrophysical background and the camera, so tests would ideally be carried out on a combination of HSC, DECam, and PhoSim data.  But trying to do background matching on any of these without doing a very good job on ISR first would be a waste of time.
  • Parallelization Middleware: putting all of the steps together will require at least some sort of scatter-gather, though we could continue with our current approach of manually starting tasks that correspond to different parallelization units while prototyping.  Some algorithmic options may end up requiring even more complex interprocess communication, but it's hard to say at this point.

Effort Required

This is a difficult algorithmic research problem that interacts in subtle ways with ISR, Photometric Self-Calibration, and the data flow and parallelization for Image Coaddition.  It should not be a computational bottleneck on its own, but it will likely need to piggyback on some other processing (e.g. Image Coaddition) to achieve this.

Scheduling

Because we can use traditional background modeling outputs as a placeholder, and the improvement from background matching is likely to matter only when we're trying to really push the precision of the overall system, we can probably defer the complete implementation of background matching somewhat.  It may be a long research project, though, so we shouldn't delay too long.  We should also have an earlier in-depth design period to sketch out possible algorithmic options (and hopefully reject a few) and figure out how background matching will fit into the overall processing.

Deep Detection

Deep Deblending

Deep Measurement

Deep Aperture Corrections

 
