Logistics

Meeting Date: 2022-12-09

Present: DM: Leanne G, Robert L, Jim B; DESC: Katrin H, Eric G, Humna A

Background

This page summarizes a discussion between members of the DM Science team and the DESC on the uniformity of coadds in a Data Release. The DESC are concerned about the uniformity of the coadds released as part of the annual DRs. The discussion was in the context of the SCOC recommendations for rolling cadence, and how rolling cadence will contribute to non-uniformity of coadds. DM were asked if there is a software/DM solution to this problem. This informal chat was set up to better understand concerns from the DESC and to brainstorm possible solutions.

Discussion

DM: We would like to understand what the DESC defines as non-uniform, and which definition of non-uniformity matters. Coadds are going to be non-uniform because the sky is not uniform; conditions, weather, high-airmass data, bad seeing, etc., all contribute to non-uniformity. DESC code is going to have to handle that even without rolling cadence. What is the push to make coadds more uniform when DESC code will have to handle non-uniformity in any case?

DESC: The DESC has both static and time-domain science goals, so any recommendations about the survey cadence need to balance those goals. Rolling has a lot of benefits for SN Ia discovery and we have supported it. We know that rolling cadence makes things less uniform in the interim before a return to uniformity. The initial implementation of rolling cadence took 1 year of data non-rolling and then rolled over 2 parts of the sky, alternating over 2 years, resulting in uniformity again after three years. This strategy was acceptable; the DESC do not need every DR to be uniform. The new seasonal rolling cadence strategy is highly optimized for the time domain but will not result in a return to uniformity every few years, and there will be no uniform data releases between DR1 and DR11. This is a big problem for cosmology.

The DESC understand that they will need a pipeline to handle non-uniformity; what they are worried about is the significant increase in the amount of non-uniformity that would be introduced by the new seasonal rolling cadence strategy.

The DESC estimates (roughly) that after 10 years, the depth variation, once all observational effects are folded in, ought to be at the 10-12% level. They want to avoid a situation where a DR that would have had, say, 15% variation now has 30% depth variation (an estimate from simulations with the seasonal rolling cadence strategy). This is less of an issue after year 5.

DM: Modifying the cadence to return to uniformity is a conversation to have with the cadence team. From the DM side, all we could really do is remove, downgrade, or delay data. This creates a lot of provenance work for DM (undesirable) and risks affecting other science.

Open Questions:

  • How non-uniform were KiDS/DES at their intermediate data releases?

Possible Solutions Discussed

  1. Modify the observing strategy to ensure a return to uniformity every few years (say 2 or 3). The new seasonal rolling cadence could be maintained in the interim data releases. The DESC does not require every DR to be uniform; one uniform DR every few years would suffice.
  2. Remove data from a coadd to force uniformity. No one was very enthusiastic about this option as it involves removing good data.
  3. Add noise to the data – again, this option degrades good data.
  4. Shift the Data Release schedule (or the cutoff date for data to enter a DRP) to match when the survey naturally returns to uniformity in the rolling cadence scenario.
  5. Publish a small number (2-3) of special “uniform” data releases, sufficiently uniform to meet the DESC’s needs, at key intervals during the 10-year survey, alongside the regular DRs.

Actions

  • DM will feed the details of this meeting back to the DM Science team, PST, and SCOC.
  • DESC will provide a statement about how much non-uniformity will be difficult to deal with and why.
  • DM (via PST) will talk to other SCs to understand if they have similar concerns and if any of the possible solutions proposed here would meet their needs.
  • DM will put some (but not too much) initial thought into the feasibility of the possible solutions, but will wait for a response from the DESC on action 2. 2023-01-30: Following discussion with the DESC, they will interact with the SCOC to resolve this at the survey cadence level. No action on DM to investigate software solutions.

2023-01-26: Answers to follow-up questions from the DM Science team and PST to the DESC

Q: How much non-uniformity will be difficult for the DESC to deal with, and why?

A: Here is a statement about why non-uniformity is difficult to deal with: non-uniformity must be corrected for, but we cannot measure it perfectly; errors in that measurement propagate into errors in the correction, which in turn propagate into errors in our cosmology analysis. Those errors are typically fractional, so the higher the level of non-uniformity, the worse our output cosmology will be. To get a sense of why it is difficult to measure the impact of non-uniformity perfectly, imagine that we have perfect maps of the depth in each of the ugrizy filters; now we have to predict how our photometric-redshift precision varies across the sky based upon those depth maps, and photometric redshifts are complicated and non-linear.
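To see why fractional measurement errors make a higher level of non-uniformity worse, here is a minimal back-of-the-envelope sketch (illustrative notation and power-spectrum scaling, not a DESC derivation). If \(\delta(\hat{n})\) is the depth-induced density fluctuation on the sky, and the map used for the correction carries a fractional measurement error \(\epsilon\), then

\[
\hat{\delta}(\hat{n}) = (1+\epsilon)\,\delta(\hat{n})
\;\Longrightarrow\;
\hat{\delta} - \delta = \epsilon\,\delta(\hat{n}),
\qquad
C_\ell^{\mathrm{res}} = \epsilon^{2}\, C_\ell^{\delta} .
\]

At fixed \(\epsilon\), the residual systematic power after correction grows quadratically with the amplitude of the non-uniformity, which is the sense in which “the higher the level of non-uniformity, the worse our output cosmology will be.”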

Quantifying the impact of the increased non-uniformity on our science is subtle and will take us a few months; this is an R&D activity, since it is not clear how to design a “Fisher matrix” that converts the level of non-uniformity into degraded cosmological parameter constraints, and detailed simulations are likely needed. Humna developed some basic machinery for handling this for the galaxy clustering probe as part of her Ph.D. thesis, but that’s only one dark energy probe. We will work on this and keep you posted.

Q: Uniformity in the apparent 5-sigma point-source sensitivity is impacted by extinction variation across regions of the sky in a band-dependent manner. It will not be possible to get an equal 5-sigma point-source magnitude across 2pi of the sky. What exactly is the parameter that the DESC is trying to homogenize?

A: Put simply, we are trying to homogenize the depth to the extent possible. What really matters for us is the sensitivity for galaxy detections, which relates closely to the point-source sensitivity. We understand, of course, that this will never be perfectly uniform across the sky. The level of uniformity achieved in non-rolling OpSims is quite good: the rms depth variation drops gradually from about 20% after 1 year to about 12% after 10 years in the area we consider usable for extragalactic science, i.e. the region with low Galactic dust extinction, defined as E(B-V) < 0.2. Our concern is that the level of non-uniformity is much higher at intermediate years in the current rolling cadence simulations, with none of the intermediate data releases achieving the level of uniformity that was possible with either “no rolling” or the “simple rolling” that was initially envisioned to complete every 2 years.
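As an illustration of how such an rms figure can be computed, here is a minimal sketch, assuming HEALPix maps of coadded 5-sigma depth and E(B-V) on the same pixelization (the file names are hypothetical, and quoting depth variation as the fractional flux-limit variation is one common convention, not a DESC specification):

```python
import healpy as hp
import numpy as np

# Hypothetical inputs: a HEALPix coadd 5-sigma depth map (mag) and an
# E(B-V) dust map on the same pixelization, e.g. from an OpSim run.
m5 = hp.read_map("coadd_m5_i.fits")   # hypothetical file name
ebv = hp.read_map("ebv.fits")         # hypothetical file name

# Usable extragalactic area: low Galactic dust extinction, valid pixels.
mask = (ebv < 0.2) & (m5 != hp.UNSEEN)
m5_use = m5[mask]

# Convert 5-sigma depth (mag) to a relative limiting flux and quote the
# rms fractional variation, one reading of "% rms variation in depth".
flux_limit = 10 ** (-0.4 * m5_use)
frac_rms = np.std(flux_limit) / np.mean(flux_limit)
print(f"rms fractional depth variation: {100 * frac_rms:.1f}%")
```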

Q: During our discussion in December, you said that you wanted to avoid a situation where a DR that would have had, say, 15% variation now has 30% depth variation. What is the actual impact on science of a 30% non-uniformity? What are the systematics induced by the various degrees of non-uniformity? Understanding the science impact will help to determine the best mitigation.

A: The systematics induced by non-uniformity include spurious large-scale structure (a direct consequence: you get more galaxies in sky locations that are deeper, which causes artificial galaxy clustering), noise patterns in weak gravitational lensing (caused by both depth and seeing variations), and varying completeness for cluster-finding (since galaxy clusters can be identified better in deeper data). As noted above, we will work to quantify the degradation of cosmological parameters corresponding to a given level of non-uniformity. But these systematics need to be corrected for, and the larger the needed correction, the more uncertainty there is in the result.
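As a toy numerical illustration of the first effect (a made-up completeness model, not DESC machinery): even for a perfectly uniform sky, pixel-to-pixel depth variations modulate the detected counts and mimic clustering.

```python
import numpy as np

rng = np.random.default_rng(42)
npix = 10_000

# Assume a 10% rms fractional variation in the limiting flux across pixels.
depth_fluct = 0.10 * rng.standard_normal(npix)

# Toy completeness model: deeper pixels detect more galaxies. alpha is a
# hypothetical logarithmic slope of the galaxy counts near the flux limit.
alpha = 1.5
mean_counts = 100.0
counts = rng.poisson(mean_counts * (1.0 + depth_fluct) ** alpha)

# The recovered "overdensity" is nonzero purely because of depth variations.
delta = counts / counts.mean() - 1.0
print(f"spurious overdensity rms: {delta.std():.3f}")
print(f"depth-induced part alone: {alpha * 0.10:.3f} (rest is shot noise)")
```

The deeper a pixel, the more galaxies it contributes, so the depth map imprints itself directly on the recovered galaxy density field; this is the spurious large-scale structure referred to above.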

Q: Rolling cadence as implemented in the current v3.0 baseline strategy is the lowest (or weakest) rolling to date, and starts after one year, which Fed says is what the DESC wanted. Has the DESC used the current v3.0 baseline to understand the effect of rolling cadence on the non-uniformity of coadds, or was it done with an earlier version?

A: DESC explored the v2.99 simulations in detail; these are quite similar to the just-released v3.0 strategy. We are indeed relieved that rolling now starts after one year; this enables an as-uniform-as-possible Year 1 data release, which is important for a wide range of science. The remaining concern is with intermediate releases, some of which (roughly Years 1, 4, 7, 10) are needed by the DESC for cosmological analysis. In the original version of rolling cadence, half the sky was emphasized each year, and after two years of rolling, uniformity was recovered. In this sense, the currently implemented rolling is not the weakest to date: a new rolling cycle starts before a given one finishes, and the level of uniformity of the “no rolling” or “simple rolling” approaches is not reached until Year 10. We have made some suggestions to the SCOC for how to add “pauses” to rolling cadences that might resolve the issue, but we have not yet been provided with simulations that implement these ideas.

Q: I still feel that the best solution to this lies in the cadence and not in a DM/software solution.

A: That would be ideal.  We will of course let you know if we are able to get the survey strategy improved to the point where our level of concern decreases significantly.


3 Comments

  1. Thank you for this [~lguy] and all! A couple of thoughts:

    “The initial implementation of rolling cadence took 1 year of data non-rolling and then rolled over 2 parts of the sky alternating over 2 years resulting in uniformity again after three years.” I believe this is the current recommendation from the SCOC (baseline_v3.0), which I would not say is highly optimized for time-domain science. It is effectively the least aggressive rolling that was considered (in sky areas). I think this conversation happened before the release of baseline_v3.0 - I will get an update from the DESC survey strategy team about their thoughts on the current recommendation.
    We should stop saying “throw away data” - the inclusion of the data in the coadds can be delayed until the following DR, not actually tossed. This also does not seem reflected in how point 2 is worded. This may still not be the best solution, but presented as throwing away data it is really not conceivable; presented as delaying the inclusion of data, it is really just a variation of option 4.

    We really need action 2 to advance this conversation.

    Finally, I’m still not sure the PST-SC meeting is the right place to host these conversations between SCs, for the reason I highlighted to you and Steve.

  2. First, thanks to everyone for engaging and moving this forward. On Action item 2, I have two questions:

    a. What is the timeframe for the statement?

    b. What metric(s) will be used? For example, using the number of visits or exposure time is likely too simplistic, as there are many other observational factors that set the limiting-magnitude depth uniformity at the scales of possible interest here (see the depth-stacking sketch at the end of this page). So it would be most helpful to understand the needed depth uniformity as a function of release, and why.

  3. The SCOC discussed this topic over the past month or so in general and focus meetings. We resolved to create a task force, with communication focused on a Slack channel, with the following goals:

  • help the DESC and other SCs with uniformity needs generate quantifiable metrics of uniformity that the SCOC can use in its optimization process
  • enable an understanding of required, desirable, and sufficient uniformity per data release
  • explore improvements to the rolling strategy (within the 2-sky-area + 90%-strength recommendation of the SCOC; see PSTN-055) that will maximize uniformity

    The channel includes SCOC liaisons from SCs for which uniformity is or may be important, reps from the observing strategy team (Peter Y.), and selected SC members (DESC and Galaxies for now) with knowledge of the problem and potential for leading the creation of metrics (and possibly thresholds). We hope the deliberations of this task force will wrap up by August, providing sufficient information to the SCOC to select a rolling scheme that minimally interferes with uniformity, at which point we can reevaluate what else needs/should/may be done.
    The task force is designed to be small (10-15 people max) to enable agile conversation and to ensure all leads are explored throughout.
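
Regarding the metric question in comment 2b above: here is a minimal sketch of why raw visit counts are too simplistic as a uniformity metric. It uses the standard inverse-variance relation for the coadded 5-sigma depth of stacked visits, m5_coadd = 1.25 log10( sum_i 10^(0.8 m5_i) ) (e.g. the coadded-depth metric in rubin_sim MAF); the visit depths below are made-up numbers.

```python
import numpy as np

def coadd_m5(m5_visits):
    """Coadded 5-sigma depth (mag) from per-visit 5-sigma depths (mag),
    assuming inverse-variance stacking of the individual visits."""
    m5 = np.asarray(m5_visits)
    return 1.25 * np.log10(np.sum(10 ** (0.8 * m5)))

# Two fields with the same number of visits but different conditions:
good_conditions = [24.5] * 10        # ten visits at m5 = 24.5
mixed = [24.5] * 5 + [23.5] * 5      # same count, half in poor conditions

print(f"{coadd_m5(good_conditions):.2f}")  # ~25.75
print(f"{coadd_m5(mixed):.2f}")            # ~25.45: shallower at equal N
```

Two fields with identical visit counts can thus differ noticeably in coadded depth, which is why a depth-based metric is needed rather than a visit-count one.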