

Load the LSST Environment

You must have the LSST Stack installed on your system (see LSST Stack Installation) to proceed. The commands listed in the code blocks below primarily assume you are using the bash shell; analogous commands for (t)csh should work as well. If you have not already done so, load the LSST environment:

source $INSTALL_DIR/loadLSST.bash          # bash users
source $INSTALL_DIR/loadLSST.csh           # (t)csh users

where $INSTALL_DIR is the directory where the LSST Stack was installed. 
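To confirm the environment loaded correctly, you can check that the EUPS tools are now on your PATH. A minimal sanity check (the exact message text here is illustrative):

```shell
# Check that EUPS (the LSST package manager) is now available.
# If "eups not found" is printed, re-check your INSTALL_DIR path.
if command -v eups >/dev/null 2>&1; then
    echo "LSST environment loaded: $(command -v eups)"
else
    echo "eups not found; re-source loadLSST.bash with the correct INSTALL_DIR"
fi
```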

Fetching & Running the Demo

A demo script is available that will exercise the software sufficiently to demonstrate a proper installation of the LSST Stack. You want the version of the script that corresponds to the version of the Stack you installed; it can be downloaded anonymously from the LSST source code repository (using version 11.0 as an example):

Stack Demo
cd /path/to/demo/install/directory
curl -L <demo-tarball-URL> | tar xvzf -    # URL of the versioned demo tarball
cd lsst_dm_stack_demo-11.0

curl may fail with the following error (particularly on Ubuntu 14.04):

  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
  0     0    0     0    0     0      0      0 --:--:-- --:--:-- --:--:--     0
curl: (77) error setting certificate verify locations:
  CAfile: /etc/pki/tls/certs/ca-bundle.crt
  CApath: none

The fix, as suggested by Scott Emmons, is:

sudo mkdir -p /etc/pki/tls/certs
sudo cp /etc/ssl/certs/ca-certificates.crt /etc/pki/tls/certs/ca-bundle.crt

after which curl should work normally.
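Before applying the fix, you can check which CA bundle locations exist on your system. A quick diagnostic (the first path is the one curl complained about above; the second is the usual Debian/Ubuntu location):

```shell
# Report which of the two common CA bundle locations are present.
# curl built against Red Hat-style paths expects the first one.
for f in /etc/pki/tls/certs/ca-bundle.crt /etc/ssl/certs/ca-certificates.crt; do
    if [ -e "$f" ]; then
        echo "present: $f"
    else
        echo "missing: $f"
    fi
done
```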

The demo package occupies roughly 41 MB, as the payload contains input images, reference data, and configuration files. The demo script will process SDSS images from two fields in Stripe 82, as shown in the following table (filters in parentheses are not processed if run with the --small option):

Run     Camcol  Field   Filters
4192    4       300     (ur) giz
6377    4       399     (ur) giz

Now set up the processing package and run the demo:

Running the LSST Stack demo
setup obs_sdss
./bin/demo.sh      # use the --small option to process a subset of images

For each input image the script performs the following operations:

  • generates a subset of basic image characterization (e.g., determines the photometric zero-point, detects sources, and measures positions, shapes, and brightnesses with a variety of techniques)
  • creates a ./output subdirectory containing subdirectories of configuration files, processing metadata, calibrated images, and FITS tables of detected sources; these "raw" outputs are readable by other parts of the LSST pipeline
  • generates a master comparison catalog in the working directory from the band-specific source catalogs in the output/sci-results/ subdirectories

The demo will take a minute or two to execute (depending upon your machine), and will generate a large number of status messages. Upon successful completion, the top-level directory will contain an output ASCII table that can be compared to the expected results from a reference run. This table is for convenience only, and would not ordinarily be produced by the production LSST pipelines.  

Demo Invocation     Demo Output                  Reference Output
demo.sh             detected-sources.txt         detected-sources.txt.expected
demo.sh --small     detected-sources_small.txt   detected-sources_small.txt.expected

The demo output may not be identical to the reference output due to minor variation in numerical routines between operating systems (see DM-1086 for details). The bin/compare script will check whether the output matches the reference to within expected tolerances.
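If you want a quick look yourself, a plain diff will flag even harmless last-digit differences, so a field-by-field numeric comparison is more informative. A minimal sketch of that idea (the function name and the 1e-6 tolerance are illustrative, not what bin/compare actually uses):

```shell
# Compare two whitespace-separated numeric tables field by field,
# reporting only differences larger than a small tolerance.
# Usage: compare_within_tol fileA fileB
compare_within_tol() {
    awk 'NR==FNR { for (i = 1; i <= NF; i++) a[FNR","i] = $i; next }
         { for (i = 1; i <= NF; i++) {
               d = $i - a[FNR","i]
               if (d < 0) d = -d
               if (d > 1e-6)
                   printf "line %d field %d: %s vs %s\n", FNR, i, a[FNR","i], $i
           } }' "$1" "$2"
}
```

For example, `compare_within_tol detected-sources.txt detected-sources.txt.expected`; no output means the two tables agree to within the tolerance.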

The results in the detected-sources output files are only for verifying the reproducibility of the software. Information for each source that is essential for science use, such as filter or parent image, has not been included. This meta-information is retained in the catalogs under the /output/sci-results directories.

Understanding the Input and Output

This demo illustrates how tasks are run in the LSST Stack: ./bin/demo.sh is a simple wrapper script that runs the processCcd task with named input and output directories and camera-specific IDs of the files to process. The processing software expects to locate (or write) data and configuration information in files within the following directory structure:


astrometry_net_data/
Contains a configuration file that specifies index files for determining a WCS, and other metadata, pared down to the sky area required for the above fields. This catalog is used only for matching the reference catalog objects to the detected sources, from which the photometric zero-point in each image is estimated; the astrometry is not re-determined.


bin/
Contains wrapper scripts for running the demo, and a script to re-format the output catalog as ASCII, including the most interesting fields.


input/
Contains a specification of the input mapper, directories of the input files, and a registry for the input data. This directory holds the fpC files with calibrated pixels, fpM files with masks, and the asTrans, tsField, and psField calibration data.


output/
Contains the output data repository, which consists of sub-directories containing the configuration for each of the processing algorithms, processing metadata, the schema used for the output catalog, and subdirectories of the output products, sorted by type (e.g., output images, catalogs). Note that the results may be used for downstream processing steps.

Note that most of the files in the output/ directory appear in subdirectories organized by run/camcol/filter/<result> for each of the input fields.
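To see that layout concretely, here is a sketch that builds a mock skeleton of such a tree and lists it; on a real demo run you would simply run `find output -type d` instead. (The run/field values come from the fields above; the camcol values and leaf directories are illustrative.)

```shell
# Mock sketch of the output repository layout described above:
# results are grouped by run, then camcol, then filter.
mkdir -p demo_layout/output/sci-results/6377/4/r
mkdir -p demo_layout/output/sci-results/4192/1/g

# List the directory tree, sorted for readability.
find demo_layout/output -type d | sort
```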

Visualization of the Demo Output

You can visualize the output of the demo by downloading an IPython notebook (download: StackDemo.ipynb). It includes snippets of Python code that you can execute within the notebook. The code demonstrates how to use Python within the LSST framework to display images in DS9, show source detections, bad-pixel masks, etc. within a frame, and overplot the source catalog with markers.

You will need to install a couple of packages if you do not already have them: the IPython notebook and the DS9 image display software. For instance:

conda install ipython-notebook

Running the notebook requires setting up the afw package beforehand (though this is not necessary if you have setup obs_sdss to run the demo script), and an environment variable for the location of the output data repository. 

setup afw
export DATA_DIR=$PWD/output   # i.e., /path/to/lsst_dm_stack_demo/output
ipython notebook

Now you can execute the embedded source cells using the notebook GUI. (The IPython Notebook page describes in more detail how to interact with notebooks.) After you execute the "import" statements, you can run any of the other code cells. The last cell displays five frames (one for each band), one of which is shown below:

SDSS r-band, Run 6377, Field 399

The footprints of detected sources are shown in blue, measured (non-rejected) sources are denoted with magenta circles, and bad columns are denoted with green pixels. 

Attachment: StackDemo.ipynb (modified Aug 08, 2014 by shaw)



  1. Is there a list somewhere of version IDs with demos available?  I can't seem to find one for v8.0 so am using currently.  It seems to work, but is it the most up to date?

    1. Unknown User (shaw)

      The demo was developed for v7.2 and none of the package updates for v8.0 broke it. So yes, it is up to date. At some point I expect the demo will be packaged with the official Stack, and then the connection between the demo and the periodic releases will be more clear. 

  2. Unknown User (robyn)

The demo was updated to include a snapshot for the Release. The current list of available tags includes: