...

  • Table 1: add the "startup comment" from the Config table to either the title or the caption. There is a JIRA item to add this to the Config table. The OpsimComment is shown on the select run page.
  • Table 2: add the NAME of the filter file - it is specified in the Config table and can be different from Filter.conf. It is listed in the Opsim Configuration tab.
  • Table 3:  
    • add the actual NAME of the LSST.conf file, as it can be different from LSST.conf (this string can simply be put in the table subheader). There is a JIRA item to add this information to the Config table. (LJ - this is actually on the opsim config page, just further down in the Config Details panel.)
    • add the parameter idleDelay from LSST.conf to the table in this subsection.  There is a JIRA item to add this information to the Config table.
    • remove recalcSkyCount from the section for Scheduler.conf (the parameter is not used and has been removed from future installations of the conf files).
    • add MinDistance2Moon to the section for Scheduler.conf. These are all now listed on the Opsim Configuration page.
  • Table 4: Benchmarks - this should support ANY user-specified benchmarks requested for comparisons (not just the stretch and design number of visits, single-visit and coadded depths). Benchmark values are specified in the MAF config file and can be set to any value (see the benchmark sketch below this list).
  • Table 5 (and others of this form): it makes more intuitive sense to switch the positions of the +3sigma and -3sigma columns that show the number of outliers. Ultimately, it will be instructive to be able to look more closely at these fields or datapoints to be sure that the scheduler selection is staying within specified bounds, so we will want a way to display these by altering the axis limits, or to otherwise print or identify these fields. OK, the order is different in MAF than in SSTAR; not sure why one is more intuitive than the other...
  • Metrics that are for the WFD observing mode may make use of MORE THAN ONE proposal. SSTAR identifies these by the proposal's name, but this is not as robust as needed. We need a way to tag ALL proposals set up to fulfill the WFD science goal. MAF builds a query for all proposals that are tagged as WFD (see the query sketch below this list). (LJ - we need to update this identification in MAF to the new standard, which is that the info is kept in the Config table rather than the proposal table.)
  • Metrics showing sky brightness and single-visit (5sigma) depth use the 5sigma_modified and skybrightness_modified values in the output table, which are the post-calculated values. The simulator selects fields based on vSkyBright, so it would be worth seeing those distributions to be sure the algorithms are behaving as expected. Values of filtSkyBright & 5sigma are vSkyBright adjusted to the given filter and the calculated single-visit depth; values of perry_skybrightness & 5sigma_ps are vSkyBright transformed into sky and depth using a form of the ETC model; the plotted values should use 5sigma_modified & skybrightness_modified, which are the perry_skybrightness & 5sigma_ps values except for twilight observations, for which we insert filtSkyBright & 5sigma. It was decided to drop the perry and modified sky brightnesses.
  • All histograms showing curves for all filters and/or by filter should be presented in a manner that allows for examination of the outliers. Side-by-side presentation of the table of numbers helps identify the number of datapoints outside the 3 sigma bounds, but what we really want to know is how many of the values are outside the specified bounds (usually set in the configuration files), along with a list of the identifiers for these datapoints (fieldID and expMJD, for instance; see the outlier sketch below this list). Histogram ranges can be explicitly specified. If no ranges are set, the histograms will show the full range of values, including any outliers.
  • All sky maps show a color bar as the scale:
    • would be better presented as a single color in shades from light to dark instead of multi-color. The colorbar can be changed to any matplotlib color table (see the sky-map sketch below this list).
    • would be better for cross-comparison with other simulations if the scale range could be set to the "requested" value as set in the configuration files, or to the "benchmark" value (stretch, design, user). The colorbar ranges can be set in the config file (and the same config then run on different runs). There is a current JIRA issue to make it easier to replot figures with new ranges.
    • would be better presented as a two-color shading (like blue/orange) where the transition in color is at a value of 1 in the case of a ratio, or 0 in the case of a residual (difference) - this adds more content to the plot, as the reader can instantly see areas above (e.g. blue) and below (e.g. orange) a target value. A two-color plot could also be implemented for values presented in percent, with the transition occurring at >=100% (see the sky-map sketch below this list). Any matplotlib color table can be specified. Should we make a ticket to change the default color table for all the SSTAR plots?
    • Hammer-Aitoff projections are most useful, as their distortion over the sky is the least of all the projections and percent area is more easily judged by eye. We default to the Aitoff projection for the OpsimFieldSlicer and Mollview for the HealpixSlicer. Hopefully healpy will do a better job in the future and we can switch that to Aitoff as well.
    • We typically display E to the right and W to the left, with 0 degrees in the center, but perhaps we need to discuss this as a group. We are using the community-standard plotting directions.
  • Joint completeness (described below) can be presented both as the number of fields in a percentage bin (say 10%, as in the SSTAR Standard Report) and as a cumulative value of the number of fields having a minimum completeness in each filter >= bin_lower_bound. The choice of bin size (currently 10%) is somewhat arbitrary, and it would be useful to be able to set this to 25%, 20%, 10%, 5%, or 2% bins (for instance). Also, in the same table or a new table, we would like to know how many fields got exactly 100% of what was requested, then the numbers in bins below and the number in bins above. This lets us judge the effectiveness of various setups in the simulation. We now have bins for exactly 100% and exactly zero; the total number of bins is a kwarg (see the completeness sketch below this list).
  • Section 5.1: Filter map plotted for individual years would be more useful if we could "flip" through the one-year images (as in a PowerPoint or a movie) to see the detail in every year. Filter maps are made for each year.
  • Section 6.1: Slew activity numbers are currently presented as verbatim text from an output file, but would be more readable if they were presented in a table along with the key values at the top of this section. Slew stats are now in tables. We have not managed to match the values in SSTAR.
  • Section 6.2: Inter-visit time numbers are plotted logarithmically, but I find the current plot style hard to read, so I would recommend making the plot similar to the other histograms in the report (see the inter-visit sketch below this list). Also, a missing piece of information here is the number of cable wraps, which will be an indicator of the overall efficiency of a survey. It really doesn't show much on a linear scale. How do we get the number of cable wraps?
  • All figures and tables should include some caption or documentation that repeats back the assumptions and settings for what is being displayed. All the metrics can have captions set in the config file.
  • All figures and tables need to use fonts that are readable and be of publishable quality, as many times we have discovered plots finding their way into project presentations. The PGPLOT font used for axes and labels is unsatisfactory and needs to be improved. We output both PNG and high-res PDF files.
  • More detailed documentation will be needed to describe to our potential users (and to remind power users) what each of the fields in the distributed tables is (with units) and how the various post-processing fields are calculated. For example, we designed the SSTAR Standard Report with this in mind and included some short explanatory material; there have been pros and cons about putting this in the Standard Report, with some argument for placing this material in a "User's Handbook" instead. Yes, we still need more documentation on Opsim output, but this is not a MAF issue.
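
Benchmark sketch (for the Table 4 item above): a minimal Python illustration of user-specified benchmark values. The dict layout, the nvisits_ratio helper, and the stretch/custom numbers are hypothetical stand-ins, not the actual MAF config schema.

```python
# Hypothetical sketch only: user-specified benchmark values for comparisons.
# The layout and the stretch/custom numbers are illustrative, not MAF's schema.
benchmarks = {
    'design':  {'u': 56, 'g': 80,  'r': 184, 'i': 184, 'z': 160, 'y': 160},
    'stretch': {'u': 70, 'g': 100, 'r': 230, 'i': 230, 'z': 200, 'y': 200},
    'custom':  {'u': 60, 'g': 90,  'r': 200, 'i': 200, 'z': 180, 'y': 180},
}

def nvisits_ratio(achieved, benchmark='design', band='r'):
    """Ratio of achieved visits to the requested benchmark in one band."""
    return achieved / benchmarks[benchmark][band]

print(nvisits_ratio(170, 'design', 'r'))   # ~0.92 of the design goal
```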
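
WFD query sketch (for the proposal-tagging item above): one way to build a single query over all WFD-tagged proposals instead of matching on proposal names. The table and column names (Config, Summary, propID, a 'tag' parameter) are assumptions about the opsim output schema, not verified against a real run.

```python
# Hedged sketch: query visits from ALL proposals tagged as WFD.
# Table/column names here are assumed, not the verified opsim schema.
import sqlite3

conn = sqlite3.connect('opsim_run.db')   # hypothetical output database

# Collect every propID whose config entry carries the WFD tag.
wfd_ids = [row[0] for row in conn.execute(
    "SELECT propID FROM Config WHERE paramName = 'tag' AND paramValue = 'WFD'")]

# One query over all WFD proposals, rather than matching by proposal name.
placeholders = ','.join('?' * len(wfd_ids))
visits = conn.execute(
    f"SELECT fieldID, expMJD, filter FROM Summary WHERE propID IN ({placeholders})",
    wfd_ids).fetchall()
```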
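
Outlier sketch (for the histogram item above): counting values outside configured bounds and listing their identifiers. The file name, array names, and bounds are stand-ins.

```python
# Sketch: count datapoints outside configured bounds and list their IDs.
# 'visits.npz', the array names, and the bounds are stand-ins.
import numpy as np

data = np.load('visits.npz')
values, fieldID, expMJD = data['airmass'], data['fieldID'], data['expMJD']

lo, hi = 1.0, 1.5                        # bounds as set in the configuration files
outside = (values < lo) | (values > hi)
print(f'{outside.sum()} of {values.size} values outside [{lo}, {hi}]')
for fid, mjd in zip(fieldID[outside], expMJD[outside]):
    print(f'  fieldID={fid}  expMJD={mjd:.4f}')
```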
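
Sky-map sketch (for the color bar items above): (a) a single-hue colormap with a fixed, configurable scale range, and (b) a two-color diverging scale with the transition pinned at a ratio of 1, using standard healpy/matplotlib calls on random stand-in data.

```python
# Sketch of the suggested sky-map styling; the data is random stand-in values.
import numpy as np
import healpy as hp
import matplotlib.pyplot as plt
from matplotlib.colors import TwoSlopeNorm

nside = 64
sky = np.random.uniform(0.7, 1.3, hp.nside2npix(nside))   # stand-in ratio map

# (a) Single-hue shading (light to dark) with a fixed range for cross-run comparison.
hp.mollview(sky, cmap=plt.cm.Blues, min=0.5, max=1.5,
            title='Nvisits ratio (single hue, fixed scale)')

# (b) Diverging two-color scale with the transition pinned at a ratio of 1,
# so areas above (cool) and below (warm) the target read at a glance.
norm = TwoSlopeNorm(vcenter=1.0, vmin=0.5, vmax=1.5)
lon = np.random.uniform(-np.pi, np.pi, 500)
lat = np.arcsin(np.random.uniform(-1, 1, 500))
fig = plt.figure()
ax = fig.add_subplot(111, projection='aitoff')
sc = ax.scatter(lon, lat, c=np.random.uniform(0.7, 1.3, 500),
                cmap='coolwarm_r', norm=norm, s=8)
fig.colorbar(sc, orientation='horizontal')
plt.show()
```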
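
Completeness sketch (for the joint completeness item above): a configurable bin width, exact-100% and exact-zero counts, and the cumulative form. The completeness values are random stand-ins.

```python
# Sketch: completeness bins with a configurable width plus exact-value counts.
import numpy as np

completeness = np.clip(np.random.normal(0.9, 0.15, 2000), 0, 1.2)  # stand-in
binwidth = 0.10                     # could be 0.25, 0.20, 0.10, 0.05, 0.02 ...

exactly_zero = np.sum(completeness == 0)
exactly_full = np.sum(completeness == 1.0)
edges = np.arange(0, completeness.max() + binwidth, binwidth)
counts, _ = np.histogram(completeness, bins=edges)

# Cumulative form: fields with completeness >= each bin's lower bound.
cumulative = counts[::-1].cumsum()[::-1]
for low, n, c in zip(edges[:-1], counts, cumulative):
    print(f'>= {low:4.0%}: {c:4d} fields (in bin: {n})')
print(f'exactly 0: {exactly_zero}, exactly 100%: {exactly_full}')
```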
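
Inter-visit sketch (for the Section 6.2 item above): the same distribution shown as a standard histogram on log-spaced bins, matching the style of the other report histograms. The times are random stand-ins.

```python
# Sketch: inter-visit times as a regular histogram on log-spaced bins.
import numpy as np
import matplotlib.pyplot as plt

dt = np.random.lognormal(mean=1.0, sigma=1.5, size=5000)  # stand-in minutes
bins = np.logspace(np.log10(dt.min()), np.log10(dt.max()), 50)
plt.hist(dt, bins=bins, histtype='step')
plt.xscale('log')
plt.xlabel('Inter-visit time (minutes)')
plt.ylabel('Number of visits')
plt.show()
```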

...