Logistics

Meeting Date: 2022-12-09

Present: DM: Leanne G, Robert L, Jim B; DESC: Katrin H, Eric G, Humna A

Background

This page summarizes a discussion between members of the DM Science team and the DESC on the uniformity of coadds in a Data Release. The DESC is concerned about the uniformity of the coadds released as part of the annual DRs. The discussion took place in the context of the SCOC recommendations for rolling cadence and of how rolling cadence will contribute to the non-uniformity of coadds. DM was asked whether there is a software/DM solution to this problem. This informal chat was set up to better understand the DESC's concerns and to brainstorm possible solutions.

Discussion

DM: We would like to understand what the DESC defines as non-uniform and which definition of non-uniformity matters. Coadds are going to be non-uniform because the sky is not uniform; observing conditions, weather, high-airmass data, bad seeing, etc., all contribute to non-uniformity. DESC code is going to have to handle that even without rolling cadence. What is the push to make coadds more uniform when DESC code will have to handle non-uniformity in any case?

DESC: The DESC has both static and time-domain science goals, so any recommendations about the survey cadence need to balance those goals. Rolling has a lot of benefits for SN Ia discovery, and we have supported it. We know that rolling cadence makes things less uniform in the interim before a return to uniformity. The initial implementation of rolling cadence took one year of data non-rolling and then rolled over two parts of the sky, alternating over two years, resulting in uniformity again after three years. This strategy was acceptable; the DESC does not need every DR to be uniform. The new seasonal rolling cadence strategy is highly optimized for the time domain but will not result in a return to uniformity every few years, and there will be no uniform data releases between DR1 and DR11. This is a big problem for cosmology.

The DESC understands that it will need a pipeline to handle non-uniformity; what it is worried about is the significant increase in non-uniformity that would be introduced by the new seasonal rolling cadence strategy.

The DESC estimates (roughly) that after 10 years the depth variation across the usable survey footprint ought to be at the 10-12% level. They want to avoid a situation where a DR that would have had, say, 15% depth variation instead has 30% (an estimate from simulations with the seasonal rolling cadence strategy). This is less of an issue after year 5.

DM: Modifying the cadence to return to uniformity is a conversation to have with the cadence team. From the DM side, all we could really do is remove, downgrade, or delay data. This creates a lot of provenance work for DM (undesirable) and risks affecting other science.

Possible Solutions Discussed

  1. Modify the observing strategy to ensure a return to uniformity every few years (say, every 2 or 3). The new seasonal rolling cadence could be maintained in the interim data releases. The DESC does not require every DR to be uniform; one uniform DR every few years would suffice.
  2. Remove data from a coadd to force uniformity. No one was very enthusiastic about this option as it involves removing good data.
  3. Add noise to the data – again this option degrades good data. 
  4. Shift the Data Release schedule (or the cutoff date for data to enter a DRP) to match when the survey naturally returns to uniformity in the rolling cadence scenario.
  5. Publish a small number (2-3) of special “uniform” data releases that are sufficiently uniform to meet the DESC’s needs at key intervals during the 10-year survey, alongside the regular DRs.

Actions

2023-01-26: Answers to follow-up questions from the DM Science team and PST to the DESC

Q: How much non-uniformity will be difficult for the DESC to deal with, and why?

A: Here is a statement about why non-uniformity is difficult to deal with: non-uniformity must be corrected for, but we cannot measure the non-uniformity perfectly; errors in that measurement propagate into errors in the correction, which propagate into errors in our cosmology analysis. Those errors are typically fractional, so the higher the level of non-uniformity, the worse our output cosmology will be. To get a sense of why it is difficult to measure the impact of non-uniformity perfectly, imagine that we have perfect maps of the depth in each of the ugrizy filters; now we have to predict how our photometric redshift precision varies across the sky based upon those depth maps, and photometric redshifts are complicated and non-linear.
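
To make the "errors are typically fractional" point concrete, here is a toy numerical sketch. The 10% map-measurement error is an illustrative assumption, not a figure from this discussion; the 15% and 30% non-uniformity levels are the ones quoted elsewhere on this page.

```python
# Toy error-propagation sketch: the residual systematic left after
# correcting for non-uniformity scales with the non-uniformity amplitude.
map_measurement_error = 0.10  # assumed fractional error on the non-uniformity map

for nonuniformity in (0.15, 0.30):  # depth-variation levels quoted on this page
    residual = map_measurement_error * nonuniformity
    print(f"{nonuniformity:.0%} non-uniformity -> "
          f"~{residual:.1%} residual systematic after correction")
```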

Quantifying the impact of the increased non-uniformity on our science is subtle and will take us a few months; this is an R&D activity since it is not clear how to design a “Fisher matrix” that converts the level of non-uniformity into degraded cosmological parameter constraints, and detailed simulations are likely needed.  Humna developed some basic machinery for handling this for the galaxy clustering probe as part of her Ph.D. thesis, but that’s only one dark energy probe.  We will work on this and keep you posted.  

Q: Uniformity and the apparent 5-sigma point-source sensitivity are impacted by extinction variation across sky regions in a band-dependent manner. It will not be possible to get an equal 5-sigma point-source magnitude across 2pi of the sky. What exactly is the parameter that the DESC is trying to homogenize?

A: Put simply, we are trying to homogenize the depth to the extent possible. What really matters for us is the sensitivity for galaxy detections, which relates closely to the point-source sensitivity. We understand, of course, that this will never be perfectly uniform across the sky. The level of uniformity achieved in non-rolling OpSims is quite good: the rms depth variation drops gradually from about 20% after 1 year to about 12% after 10 years in the area we consider usable for extragalactic science due to its low Galactic dust extinction, defined as E(B-V) < 0.2. Our concern is that the level of non-uniformity is much higher at intermediate years in the current rolling cadence simulations, with none of the intermediate data releases achieving the level of uniformity that was possible with either “no rolling” or the “simple rolling” that was initially envisioned to complete every 2 years.
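
To make the quoted metric concrete, below is a minimal standalone sketch of one way to compute a fractional rms depth variation over the usable extragalactic footprint. The toy HEALPix maps, the NSIDE, and the flux-based definition of "percent variation" are all assumptions for illustration; the real numbers come from OpSim outputs analyzed with survey-strategy tooling such as rubin_sim's MAF, which may define the metric somewhat differently.

```python
import numpy as np
import healpy as hp

# Toy stand-ins for the real inputs: a HEALPix map of coadded 5-sigma
# point-source depth (mag) and a map of Galactic dust extinction E(B-V).
nside = 64
npix = hp.nside2npix(nside)
rng = np.random.default_rng(1)
coadd_depth = 26.5 + 0.2 * rng.standard_normal(npix)  # mag; toy values only
ebv = rng.uniform(0.0, 0.5, npix)                      # toy E(B-V) map

# Keep only the area usable for extragalactic science: E(B-V) < 0.2.
usable = ebv < 0.2

# Express depth as a limiting flux so that "percent variation" becomes a
# fractional rms (one plausible convention for the 12-20% figures above).
flux_limit = 10.0 ** (-0.4 * coadd_depth[usable])
frac_rms = flux_limit.std() / flux_limit.mean()
print(f"usable fraction of sky: {usable.mean():.0%}")
print(f"fractional rms depth variation: {frac_rms:.1%}")
```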

Q: During our discussion in December, you said that you wanted to avoid a situation where a DR that would have had, say, 15% depth variation instead has 30%. What is the actual impact on science of a 30% non-uniformity? What are the systematics induced by the various degrees of non-uniformity? Understanding the science impact will help to determine the best mitigation.

A: The systematics induced by non-uniformity include spurious large-scale structure (a direct consequence: you get more galaxies in sky locations that are deeper, which causes artificial galaxy clustering), noise patterns in weak gravitational lensing (caused by both depth and seeing variations), and varying completeness for cluster-finding (since galaxy clusters can be identified better in deeper data). As noted above, we will work to quantify the degradation of cosmological parameters corresponding to a given level of non-uniformity. But these systematics need to be corrected for, and the larger the needed correction, the more uncertainty there is in the result.
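
As a toy illustration of the first of these systematics, the sketch below adds a depth-tracing term to a true overdensity field. The Gaussian fields and the 0.5 count slope are assumptions for illustration, not a DESC model.

```python
import numpy as np

rng = np.random.default_rng(2)
npix = 200_000
delta_true = 0.05 * rng.standard_normal(npix)  # toy "true" galaxy overdensity
depth_map = rng.standard_normal(npix)          # unit-normalized depth fluctuations

count_slope = 0.5  # assumed slope of detected counts vs. depth (toy value)

# Deeper pixels yield more detected galaxies, so the observed overdensity
# picks up a spurious term that traces the depth map, not real structure.
for depth_rms in (0.15, 0.30):
    delta_obs = delta_true + count_slope * depth_rms * depth_map
    spurious = delta_obs.var() - delta_true.var()
    print(f"depth rms {depth_rms:.0%}: spurious clustering variance "
          f"~{spurious:.1e} (true variance {delta_true.var():.1e})")
```

In this toy model the spurious power grows quadratically with the depth non-uniformity, so going from 15% to 30% quadruples the artificial clustering signal that has to be corrected for.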

Q: Rolling cadence as implemented in the current v3.0 baseline strategy is the lowest (or weakest) rolling to date, and starts after one year, which Fed says is what the DESC wanted. Has the DESC used the current v3.0 baseline to understand the effect of rolling cadence on the non-uniformity of coadds, or was it done with an earlier version?

A: The DESC explored the v2.99 simulations in detail; these are quite similar to the just-released v3.0 strategy. We are indeed relieved that rolling now starts after one year; this enables an as-uniform-as-possible Year 1 data release, which is important for a wide range of science. The remaining concern is with intermediate releases, some of which (roughly Years 1, 4, 7, 10) are needed by the DESC for cosmological analysis. In the original version of rolling cadence, half the sky was emphasized each year, and after two years of rolling, uniformity was recovered. In this sense, the currently implemented rolling is not the weakest to date: a new rolling cycle starts before a given one finishes, and the level of uniformity of the “no rolling” or “simple rolling” approaches is not reached until Year 10. We have made some suggestions to the SCOC for how to add “pauses” to rolling cadences that might resolve the issue, but we have not yet been provided with simulations that implement these ideas.

Q: I still feel that the best solution to this lies in the cadence and not in a DM/software solution.

A: That would be ideal.  We will of course let you know if we are able to get the survey strategy improved to the point where our level of concern decreases significantly.

Supporting Material