
This is the Gateway for information about the Operations/Observation Simulator, including documentation, software, and simulated survey data.

 

The LSST Project developed the Operations Simulator to verify that the LSST Science Requirements could be met with the telescope design. It was used to demonstrate the LSST's capability to deliver a 27,000 square degree survey probing the time domain, with 20,000 square degrees devoted to the Wide-Fast-Deep survey, while effectively surveying for NEOs over the same area. Currently, the Operations Simulation Team is investigating how to optimally observe the sky to obtain a single 10-year dataset that can be used to accomplish multiple science goals.

Operations Simulator

The Simulator has a sophisticated model of the telescope and dome to properly constrain potential observing cadences. This model has also proven useful for investigating various engineering issues, ranging from the sizing of slew motors to the design of the cryogen lines to the camera.

The Simulator is capable of balancing cadence goals from multiple science programs, and attempts to minimize time spent slewing as it carries out these goals. Ten years of LSST operations can be simulated using realistic seeing distributions, historical weather data, scheduled engineering downtime and current telescope and camera parameters. 

Achievements

  • Demonstrated the need for a 9.6 square degree field of view.
  • Motivated, based on per-night filter usage, the need for 5 filters in the dewar instead of 4.
  • Provided survey coverage statistics by site to the Site Selection Committee.
  • Assessed the impact on the survey of various telescope changes, such as dome crawl.
  • Supported engineering requirements analysis.

Future Work

  • Develop multiple scheduling algorithms or strategies.
  • Expand LSST observing modes (e.g., more flexible cadences).
  • Experiment with dithering algorithms.
  • Include higher fidelity sky brightness models (e.g., twilight & scattered light).
  • Implement an improved weather model.
  • Include logic to plan observations based on upcoming events such as sunrise, downtime, or cloudy weather (not trivial).

 

Science Programs

In a ten-year survey, the LSST will take more than five million exposures, collecting over 32 petabytes of raw image data to produce a deep, time-dependent, multi-color movie of 30,000 square degrees of sky. The sequence, or cadence, with which these exposures are made is essential to achieving multiple scientific goals from a single survey, an important feature of the LSST concept.
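As a rough consistency check on these numbers, the arithmetic can be sketched in a few lines (the per-exposure size below, a 3.2-gigapixel image at 2 bytes per pixel, is an assumed figure for illustration):

    # Back-of-the-envelope check of the exposure count and raw data volume.
    exposures = 5.0e6                     # > 5 million exposures over 10 years
    bytes_per_exposure = 3.2e9 * 2        # assumed: 3.2 Gpix x 2 bytes/pixel = 6.4 GB
    raw_bytes = exposures * bytes_per_exposure
    print(f"raw data volume ~ {raw_bytes / 1e15:.0f} PB")        # ~32 PB
    visits_per_night = exposures / 2 / (10 * 365)                # 2 exposures per visit
    print(f"average visits per night ~ {visits_per_night:.0f}")  # ~680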

LSST will take data as pairs of back-to-back, 15-second exposures to aid in cosmic-ray rejection. This pair is called a visit - a single observation of one 9.6-square-degree field through a given filter. Designing the LSST survey requires ordering these visits in time and allocating them among its six filters so as to maximize the return on scientific goals in a fixed survey duration. Synthesizing the requirements to accomplish the four primary science objectives of the LSST,

  • Constraining Dark Energy & Dark Matter
  • Taking an Inventory of the Solar System
  • Exploring the Transient Optical Sky
  • Mapping the Milky Way

results in the following constraints:

  • Cosmological parameter estimation by many techniques requires uniform coverage of 20,000 square degrees of sky. Obtaining accurate photometric redshifts in every field requires a specified number of visits in each filter.
  • Weak lensing shear measurements benefit from allocating times of best seeing to observations in the r and i bands. Maximizing signal-to-noise ratios requires choosing the next filter based upon the current sky background.
  • Supernova cosmology requires frequent, deep photometry in all bands, with z and y observations even during dark time.
  • Detecting the motion of solar system objects and transients, characterizing variability on various timescales, and acquiring the best proper motions and parallaxes place further demands upon the distribution of revisit intervals and observation geometries to each point on the sky.

Finally, making uniform progress in time toward each of the scientific goals facilitates analyses made while the survey is still in progress.

Sky Coverage

From its site on Cerro Pachon in northern Chile, the LSST can view sky regions with Declination (Dec.) of less than 33.5 degrees at an airmass of 2.2 or smaller - a limit that is used to define the LSST survey.  This airmass results in a 0.6 mag loss of sensitivity at 500 nm compared to an observation at zenith (due to both seeing degradation and atmospheric absorption), and corresponds to an observable area of 31,000 square degrees.
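The declination limit follows from the site latitude and the airmass cut. A minimal sketch, assuming the simple plane-parallel relation airmass ≈ sec(zenith distance) and an approximate site latitude (the exact survey limit also depends on the adopted atmosphere model):

    import math

    # Illustrative check of the declination limit implied by the airmass cut.
    site_latitude_deg = -30.24    # approximate latitude of Cerro Pachon
    airmass_limit = 2.2

    # Plane-parallel approximation: airmass ~ sec(zenith distance).
    max_zenith_deg = math.degrees(math.acos(1.0 / airmass_limit))
    max_dec_deg = site_latitude_deg + max_zenith_deg
    print(f"maximum zenith distance ~ {max_zenith_deg:.1f} deg")   # ~63 deg
    print(f"maximum declination ~ {max_dec_deg:+.1f} deg")         # ~+33 deg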

Sky regions with -75 < Dec. < +15 can be observed at an airmass of 1.4 or smaller, providing especially good image quality for weak lensing and other science programs that require it.  The total accessible area in this range, outside of the star-crowded parts of the galactic plane, exceeds 20,000 square degrees. The two dashed blue lines in the figure below outline the 24,000 square degree region for which the minimum airmass reaches values of less than 1.4. 

For the Wide-Fast-Deep (WFD) observing program, we use 18,000 of the possible 24,000 square degrees to meet the Science Requirements Document (SRD) design goals. The WFD science program is designed to provide data for cosmology, transients and moving objects.

A summary of the observing constraints in equatorial (top panel) and galactic (bottom panel) coordinates. The two dashed blue lines outline the 24,000 square degree region for which the minimum airmass reaches values of less than 1.4. The galactic plane regions with the highest stellar density are demarcated by solid red lines and enclose 1,000 square degrees.

 

These four plots illustrate the potential airmass distribution in the r and i bandpasses for the Wide-Fast-Deep ('universal cadence') portion of the LSST survey. Each plot represents a different realization of the survey, as simulated by the Operations Simulator. The opsim 3.61 and 3.87 runs represent a WFD survey with a footprint covering the LSST stretch goals of 20,000 square degrees and 1030 visits per field, approximately 230 of which are in the r band with another 230 in the i band. The opsim 2.93 and 4.262 runs represent a WFD survey covering the LSST design goals of 18,000 square degrees and 824 visits per field, approximately 184 of which are in each of the r and i bands. The figure insets provide 25th/50th/75th percentile values for the airmass distribution in each bandpass.
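The percentile values quoted in such insets can be reproduced directly from a simulated visit list; a minimal sketch, with placeholder data standing in for the per-visit airmasses of one bandpass:

    import numpy as np

    # Placeholder per-visit airmasses for one filter; a real analysis would read
    # these from the simulated survey output.
    airmass = np.random.uniform(1.0, 1.6, size=200_000)

    p25, p50, p75 = np.percentile(airmass, [25, 50, 75])
    print(f"airmass percentiles: 25%={p25:.2f}  50%={p50:.2f}  75%={p75:.2f}")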

Software

The LSST Operations Simulator is a software tool built primarily with an open-source simulation package, SimPy. SimPy is an object-oriented, process-based, discrete-event simulation package based on standard Python and released under the GNU GPL.
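For illustration only, a minimal process-based, discrete-event simulation in the style of SimPy (written against the modern SimPy 3 API, with toy visit and slew times; the simulator's internal structure is considerably more elaborate):

    import simpy

    def night_of_observing(env, n_visits, visit_time_s=39.0, mean_slew_s=7.0):
        """Toy process: alternate exposing and slewing for a fixed number of visits."""
        for _ in range(n_visits):
            yield env.timeout(visit_time_s)   # two 15 s exposures plus overheads (assumed)
            yield env.timeout(mean_slew_s)    # slew and settle to the next field (assumed)
        print(f"night finished at t = {env.now:.0f} s after {n_visits} visits")

    env = simpy.Environment()
    env.process(night_of_observing(env, n_visits=800))
    env.run()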

Modeling the Telescope and the Sky

The simulator uses a sophisticated model of the sky. It computes the sky brightness using the Krisciunas and Schaefer (1991) model, and it tracks the positions of the Sun and the Moon using SLALIB routines.
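The simulator itself uses SLALIB for these ephemerides; as an illustrative alternative, the same Sun and Moon quantities can be obtained with astropy (a sketch under that substitution, not the simulator's actual code path):

    import astropy.units as u
    from astropy.coordinates import AltAz, EarthLocation, get_body
    from astropy.time import Time

    # Approximate Cerro Pachon location (illustrative values).
    site = EarthLocation(lat=-30.24 * u.deg, lon=-70.74 * u.deg, height=2715 * u.m)
    t = Time("2025-01-01 05:00:00")          # an arbitrary UTC instant

    altaz = AltAz(obstime=t, location=site)
    sun = get_body("sun", t, site).transform_to(altaz)
    moon = get_body("moon", t, site).transform_to(altaz)

    print(f"Sun altitude:        {sun.alt.deg:+.1f} deg")
    print(f"Moon altitude:       {moon.alt.deg:+.1f} deg")
    print(f"Sun-Moon separation: {sun.separation(moon).deg:.1f} deg")   # drives the lunar phase term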

A detailed telescope model tracks the movements of all the components: mount, dome, optics, instrument rotator, cable wraps, and filter changer. The velocities and accelerations for these motions are all settable parameters. There are open-loop optics alignments for all moves and closed-loop alignments for moves in altitude greater than a settable parameter (currently 9 degrees). The telescope model is used to calculate a time penalty for slewing to a proposed next field, which factors into the scheduling decision-making process.
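The per-axis contribution to that penalty can be estimated from the axis's maximum velocity and acceleration with a standard trapezoidal motion profile. A minimal sketch with placeholder parameter values (the real model also includes the dome, rotator, cable wraps, settling, and the optics corrections described above):

    import math

    def axis_slew_time(distance_deg, vmax_deg_s, accel_deg_s2):
        """Time for one axis to move distance_deg with a trapezoidal velocity profile."""
        d_ramp = vmax_deg_s ** 2 / accel_deg_s2     # distance used accelerating + decelerating
        if distance_deg < d_ramp:
            # Triangular profile: the axis never reaches its maximum velocity.
            return 2.0 * math.sqrt(distance_deg / accel_deg_s2)
        # Trapezoidal profile: ramp up, coast at vmax, ramp down.
        return distance_deg / vmax_deg_s + vmax_deg_s / accel_deg_s2

    # Placeholder numbers, not the actual telescope settings.
    print(f"{axis_slew_time(3.5, vmax_deg_s=7.0, accel_deg_s2=10.5):.2f} s")   # ~1.15 s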

The simulator employs models for atmospheric seeing and cloud conditions. Available DIMM data have been used to determine the power spectrum of the seeing throughout the year, and model data sets having that power spectrum were generated. The cloud model is derived from 10 years of nightly observations of the sky by the CTIO night assistant. The simulator currently assumes alternating one-week and two-week shutdowns per year for scheduled maintenance, and can generate random periods of downtime.
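Generating a seeing-like time series with a prescribed power spectrum is commonly done by giving the target spectral amplitudes random phases and inverse-transforming; a minimal sketch of that technique, with a placeholder power law standing in for the DIMM-derived spectrum:

    import numpy as np

    def series_from_power_spectrum(power, seed=None):
        """Draw a real random time series whose power spectrum follows `power`.

        `power` holds the target spectral power at the non-negative FFT
        frequencies (length n//2 + 1 for an output of length n).
        """
        rng = np.random.default_rng(seed)
        amplitude = np.sqrt(power)
        phases = rng.uniform(0.0, 2.0 * np.pi, size=amplitude.size)
        spectrum = amplitude * np.exp(1j * phases)
        spectrum[0] = amplitude[0]      # keep the mean (DC) term real
        spectrum[-1] = amplitude[-1]    # keep the Nyquist term real
        return np.fft.irfft(spectrum)

    # Placeholder 1/f-like target spectrum; the real input would come from DIMM data.
    freqs = np.fft.rfftfreq(4096, d=1.0)
    power = np.zeros_like(freqs)
    power[1:] = freqs[1:] ** -1.5
    seeing_like = series_from_power_spectrum(power, seed=42)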

Observing Modes

The simulator is modular in design and can accept multiple, distinct observing modes which are used to specify the observing requirements for the science programs.  Each observing mode ranks potential observations based upon user-specified parameters, which control the hard-coded algorithms. Rankings are evaluated using the current seeing, sky brightness, and progress towards completing an observing mode based on previous observations. For observing modes with a specified cadence, the algorithm increases rankings for observations useful for that cadence. Currently, the algorithm does not look ahead to determine future events such as when a field will set, when the sun will rise, or if the field will otherwise become unavailable in the middle of a sequence.

Before an observation is scheduled, each of the observing modes ranks potential target fields according to criteria such as timing, sky brightness, seeing, airmass, and progress toward survey goals. These rankings are then merged, penalties are applied for slew time, filter-change time, and other operational considerations, and the fields are ranked again. A visit is made to the best field, and the process repeats. We have found that surveys using the first four of the observing modes described below are sufficient to ensure meeting the Science Requirements Document goals. The fifth mode is representative of a mini-survey or target-of-opportunity observing.
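A minimal sketch of that merge-penalize-select loop, with made-up field identifiers, rank values, and penalty weight (the real ranking functions and weights live in the simulator's configuration):

    def pick_next_field(mode_rankings, slew_time_s, filter_change_s, penalty_per_s=0.01):
        """Merge per-mode rankings, penalize operational costs, and pick the best field.

        mode_rankings   : list of {field_id: rank} dicts, one per observing mode
        slew_time_s     : {field_id: predicted slew time from the current position}
        filter_change_s : {field_id: extra time if the field needs a filter change}
        """
        merged = {}
        for ranking in mode_rankings:
            for field, rank in ranking.items():
                merged[field] = merged.get(field, 0.0) + rank

        penalized = {
            field: rank - penalty_per_s * (slew_time_s.get(field, 0.0)
                                           + filter_change_s.get(field, 0.0))
            for field, rank in merged.items()
        }
        return max(penalized, key=penalized.get)

    # Toy example: two modes rank three fields; field 1203 wins despite a longer slew.
    wfd = {1203: 0.9, 1311: 0.7, 1450: 0.4}
    deep_drilling = {1450: 0.3}
    best = pick_next_field([wfd, deep_drilling],
                           slew_time_s={1203: 8.0, 1311: 4.0, 1450: 5.0},
                           filter_change_s={1450: 120.0})
    print(best)   # 1203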

Wide-Fast-Deep is designed to provide deep, uniform coverage of the sky with uniform progress toward the specified number of visits over ten years. In times of good seeing and at low airmass, preference is given to r and i band observations. It provides most of the temporal sampling for discovering time variability and detecting moving solar system objects. It requires, as often as possible, that each field be observed twice, with the visits separated by 15-60 minutes, to provide motion vectors to link moving-object detections and fine time sampling for measuring short-period variability.

North Ecliptic Spur extends the Wide-Fast-Deep to 4,000 square degrees of the northern ecliptic beyond the airmass limit of the main survey.

Deep-Drilling is implemented because a small fraction of time spent employing different strategies can significantly enhance the overall science return. This observing mode allocates ten minutes of exposure per night to a small number of fields; the time is distributed among filters on a five-day cycle so as to provide high-quality Type Ia supernova light curves out to redshifts of z ~ 1.2. Many of these fields are distributed along the ecliptic plane to enable deeper searches for KBOs and other denizens of the outer solar system.

Galactic Plane allocates thirty observations in each of six filters in a region of 1000 square degrees around the galactic center where the high stellar density leads to a confusion limit at much brighter magnitudes than those attained in the rest of the survey.

South Celestial Pole allocates thirty observations in each of six filters in a region of ~1700 square degrees around the south celestial pole to provide data on the Magellanic Clouds and transients in the southern sky. It is similar to the Galactic Plane observing mode, but has more relaxed seeing and sky brightness limits to allow higher airmass observations.
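For reference, the defining parameters of these modes, as described above, could be summarized in a small configuration structure; the layout and names below are purely illustrative and are not the simulator's actual configuration format:

    # Illustrative summary of the observing modes described above; not the
    # simulator's real configuration schema.
    observing_modes = {
        "WideFastDeep": {
            "area_deg2": 18000,
            "pair_revisit_min": (15, 60),     # two visits separated by 15-60 minutes
            "good_seeing_filters": ["r", "i"],
        },
        "NorthEclipticSpur": {"area_deg2": 4000, "extends": "WideFastDeep"},
        "DeepDrilling": {
            "exposure_per_night_min": 10,     # ten minutes of exposure per night
            "filter_cycle_days": 5,           # cycle filters for SN Ia light curves
        },
        "GalacticPlane": {"area_deg2": 1000, "visits_per_filter": 30},
        "SouthCelestialPole": {"area_deg2": 1700, "visits_per_filter": 30},
    }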

The number of visits obtained in each field in the r-filter for the first year of a survey is indicated by the shaded areas. Each of the areas of interest (labeled) has a specific cadence definition. It should be noted that this is the spatial distribution of the number of visits in the first year of a survey, and will not be as uniform as for the full 10-year survey.

Once a simulated survey is planned, designed and executed, it is useful to evaluate whether that particular survey met the LSST project science goals. Quantifying how well a simulated survey achieves a science objective or whether one simulation is "better than" another is a complex and open-ended problem.

A software tool has been created which executes a series of queries on the simulated survey history and creates a printable standard report that contains statistics, distributions, and sky maps designed to characterize the survey. This set of analyses is by no means comprehensive because of the broad range of science the survey enables.
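A minimal sketch of the kind of query such a tool runs, assuming the simulated visit history is stored in a SQLite database; the table and column names below are placeholders, and the actual opsim output schema differs between versions:

    import sqlite3

    # Placeholder database and schema; real opsim outputs use different names.
    con = sqlite3.connect("simulated_survey.db")
    cur = con.execute(
        """
        SELECT field_id, COUNT(*) AS n_visits
        FROM visit_history
        WHERE filter = 'r'
        GROUP BY field_id
        ORDER BY n_visits DESC
        """
    )
    for field_id, n_visits in cur.fetchmany(5):
        print(f"field {field_id}: {n_visits} r-band visits")
    con.close()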

The standard report is a useful initial characterization of a simulated survey and contains analyses which compare to the design and stretch specifications from the SRD. To more fully assess how well a survey meets a particular science goal, the development of a variety of scientific figures of merit is needed. Also, the process of making sense of the data requires the ability to explore and analyze it in an interactive way, and to communicate and collaborate about the results.

To this end we are

  • Working with Science Collaborations to develop figures of merit.
  • Designing an efficient and extensible framework for the figures of merit (a minimal interface sketch follows this list).
  • Enabling comparisons between simulated surveys.
  • Using visualization software for fast analysis and rapid prototyping.
  • Working with the ASCOT Team at the University of Washington to explore the feasibility of creating our own interactive analysis tools.
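As an illustration of what an extensible figure-of-merit interface might look like, a sketch only (the metric, names, and data layout here are hypothetical, not the framework under development):

    import numpy as np

    class MedianAirmassMetric:
        """Toy figure of merit: the median airmass of the visits to one field."""

        def __call__(self, visit_airmasses):
            return float(np.median(visit_airmasses))

    def evaluate_metric(metric, visits_by_field):
        """Apply a metric to every field's visits; returns {field_id: value}."""
        return {field: metric(values) for field, values in visits_by_field.items()}

    # Hypothetical per-field airmass lists pulled from a simulated survey.
    visits_by_field = {101: [1.05, 1.2, 1.4], 102: [1.3, 1.35]}
    print(evaluate_metric(MedianAirmassMetric(), visits_by_field))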

Here is an example of a diagnostic plot produced in the standard report.

An inventory of the time spent observing during the night color-coded by filter for a 10-year survey. The enclosing curves indicate the time of civil (−6°), nautical (−12°), and astronomical (−18°) twilight. Note that only z- and y-filters are used between astronomical and nautical twilight. The Moon’s illumination (in percent) is indicated by the arbitrarily scaled white curve at the bottom of the plot.


A Set of Simulated Surveys for LSST2015 (Aug 2015) - Data and Analysis
