Cameras are imaging devices composed of one or more sensors arranged in a focal plane array (FPA). They produce science and calibration images, which can be processed by the LSST Stack. The stack includes a general model of a camera, a model for describing the geometric arrangement of the sensor array, and a mapping between these models and the representation of data on disk.
The LSST Focal Plane
Essential concepts in the representation of a camera can be understood by first considering the layout of the LSST Camera focal plane array (FPA), and then learning how the software accesses and uses LSST data. While the data organization for other cameras will differ in detail, the customizations (to the software, configuration, and supporting data) will be straightforward.
The focal plane for any camera is assumed to be populated with multiple sensors (not necessarily arranged in a regular grid), each of which is read out independently. The figure below shows a schematic of the LSST camera focal plane, where 189 science sensors (light-blue background) are arranged in 21 sub-arrays (rafts) on which the sensors are mounted in a 3x3 grid. The sensors are dual-indexed: by raft from (0,0) in the lower-left to (4,4) in the upper-right; and then by position within a raft from (0,0) in the lower-left to (2,2) in the upper-right in the diagram. (Note that the four corner rafts are populated with sensors that are used for purposes other than science imaging.) Each science sensor consists of 4000x4072 photo-active pixels. The individual sensor electronics include 16 amplifiers that each read 509x2000 pixels in parallel, plus virtual overscan.
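The dual indexing can be made concrete with a short sketch (plain Python, not Stack code); the exclusion of the four corner rafts follows the description above:

```python
# Enumerate the LSST science sensors by (raft, sensor) dual index.
# The four corner rafts carry no science sensors (they are used for
# other purposes, as noted above), so 21 rafts x 9 sensors = 189.
CORNER_RAFTS = {(0, 0), (0, 4), (4, 0), (4, 4)}

def science_sensors():
    """Yield (raft, sensor) index pairs, e.g. ((2, 2), (1, 1)) for the center."""
    for raft_x in range(5):
        for raft_y in range(5):
            if (raft_x, raft_y) in CORNER_RAFTS:
                continue
            for sensor_x in range(3):
                for sensor_y in range(3):
                    yield (raft_x, raft_y), (sensor_x, sensor_y)

sensors = list(science_sensors())
print(len(sensors))  # 189
```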
The LSST camera field-of-view is quite large: a circle of radius 1.75° about the optical axis is shown. The focal plane coordinates and origin are shown on the image axes. The orientation of the amplifiers is shown for the central sensor (you may need to zoom the figure). See the section on Coordinate Systems, below, for details.
Organization of LSST Data
The LSST pipeline software constructs a representation of an image that consists of the science array, a quality mask, a variance array, and a variety of attributes. These data are, for convenience, organized hierarchically in a way that reflects the attributes of the expected survey cadence (visit, snap), and by the camera FPA (raft, sensor). To good approximation, the early stages of pipeline processing treat each LSST exposure as a collection of 189 separate images of the sky (taken simultaneously). When persisted to storage, each processed image file contains the data for a single sensor. The data reside in a repository, which for all cameras consists of the following:
- A directory structure populated with science and calibration data files
- A registry containing an inventory of image metadata (sqlite3)
- A _mapper file to indicate to the system which camera model to use in navigating the hierarchy and reading/writing data
The processed data are organized in the file system like so:
    <product>/v<visit>-f<filter>/R<raft>/S<sensor>.fits

where <product> is one of calexp|icMatch|icSrc|src; <visit> is a running numerical identifier; <filter> is one of u|g|r|i|z|y; and the raft and sensor identifiers (described above) are each a two-digit sequence (without punctuation). Access to compressed (g-zipped) data is supported transparently. Data for other cameras may be organized differently on disk, but the representation in software is the same.
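The template above can be rendered with a small hypothetical helper (not part of the Stack) to show how one data file is addressed:

```python
def calexp_path(product, visit, filt, raft, sensor):
    """Render <product>/v<visit>-f<filter>/R<raft>/S<sensor>.fits
    for raft and sensor given as (x, y) index pairs."""
    raft_id = "%d%d" % raft      # two-digit sequence, no punctuation
    sensor_id = "%d%d" % sensor
    return "%s/v%d-f%s/R%s/S%s.fits" % (product, visit, filt, raft_id, sensor_id)

print(calexp_path("calexp", 123, "r", (2, 2), (1, 1)))
# calexp/v123-fr/R22/S11.fits
```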
Representing a Camera in Software
Given the above discussion of the LSST camera and the organization of the data, the following description of the software representation of data from a camera may be easier to follow.
The camera geometry package lsst.afw.cameraGeom provides a general representation of the geometry of a camera, including the location and orientation of each detector, multiple coordinate systems and transformations between them, and an approximate model of optical distortion. This package provides supporting utilities for such purposes as determining a World Coordinate System (WCS) characterization, assembling individual amplifier read-outs into a full sensor image, and creating mosaics of sensor images in the FPA.
Any relevant coordinate system for the camera can be defined, but the following coordinate systems (along with support for transformations between them) are supported by default:
| System | Units | Description |
|--------|-------|-------------|
| PUPIL | radians | x,y coordinates relative to the intersection of the optical axis with the FPA |
| FOCAL_PLANE | mm | x,y coordinates in the focal plane |
| PIXELS | pix | x,y unbinned pixels on the entry surface of each detector |
| ACTUAL_PIXELS | pix | Same as PIXELS, but accounts for pixel-level distortions |
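For an unrotated detector, the mapping from focal-plane millimeters to unbinned pixels reduces to an offset and a scale. A minimal sketch follows (the pitch and origin values are hypothetical; the real cameraGeom transforms also handle rotations, flips, and distortion):

```python
PIXEL_PITCH_MM = 0.010                 # assumed 10-micron pixels
DETECTOR_ORIGIN_MM = (-20.0, -20.36)   # hypothetical focal-plane position of pixel (0, 0)

def focal_plane_to_pixels(x_mm, y_mm):
    """Map a focal-plane (mm) position to unbinned pixel coordinates."""
    px = (x_mm - DETECTOR_ORIGIN_MM[0]) / PIXEL_PITCH_MM
    py = (y_mm - DETECTOR_ORIGIN_MM[1]) / PIXEL_PITCH_MM
    return px, py

print(focal_plane_to_pixels(-20.0, -20.36))  # (0.0, 0.0)
```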
Considering now a general camera (not LSST), the following schematic (right) shows image axes in amplifier coordinates for a single sensor, where the amp is at the origin, and pixels are read-out serially in the x-direction. The table (left) shows how the axes need to be flipped to align with the full sensor coordinate system, where x and y increase monotonically from the image origin. Also shown are the regions of pre-scan (blue) and over-scan (light blue) virtual pixels in the serial direction, which are removed by the Instrument Signature Removal pipeline during overscan correction.
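The flip-and-place step described above can be sketched with NumPy (the flips and layout here are hypothetical; in the Stack the assembly is driven by the per-amp attributes in the camera model, and overscan is assumed already trimmed):

```python
import numpy as np

def assemble(amp_images, flips, layout, amp_shape):
    """Place trimmed amp images into a full-sensor array.

    amp_images: dict amp index -> 2D array (already overscan-trimmed)
    flips:      dict amp index -> (flip_x, flip_y) booleans
    layout:     dict amp index -> (col, row) position in the amp grid
    """
    ny, nx = amp_shape
    ncols = max(c for c, _ in layout.values()) + 1
    nrows = max(r for _, r in layout.values()) + 1
    sensor = np.zeros((nrows * ny, ncols * nx),
                      dtype=next(iter(amp_images.values())).dtype)
    for idx, img in amp_images.items():
        flip_x, flip_y = flips[idx]
        if flip_x:
            img = img[:, ::-1]  # undo serial read-out toward the far edge
        if flip_y:
            img = img[::-1, :]  # undo parallel read-out toward the far edge
        col, row = layout[idx]
        sensor[row * ny:(row + 1) * ny, col * nx:(col + 1) * nx] = img
    return sensor
```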
The attributes of a camera (see next sub-section) are specified in the obs_<camera> packages and are represented in software in the camera object; the cameraGeom package uses these attributes to perform geometric transformations.
Attributes of the sensors and their arrangement in the FPA are collected in the obs_<camera> packages, in the /description sub-directory. The attributes include the following:
- Number of amplifiers
- Dimensions of axes (pix)
- Region in device (pix)
- Read-out direction for x,y
- Region of overscan in parallel/serial directions (pix)
- Name of device
- Position in x,y (mm)
- Device use type
- Readout duration (s)
- Euler rotations (deg)
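A hypothetical per-detector record covering these attributes might look as follows; the field names and values are illustrative assumptions, not the actual schema of the description files or camera.py:

```python
# Illustrative only: field names and values are assumptions, not the
# schema used by the obs_<camera>/description files or camera.py.
DETECTOR = {
    "name": "R22_S11",                  # name of device
    "type": "SCIENCE",                  # device use type
    "n_amps": 16,                       # number of amplifiers
    "dims_pix": (4000, 4072),           # dimensions of axes (pix)
    "position_mm": (0.0, 0.0),          # position in x,y (mm)
    "euler_rot_deg": (0.0, 0.0, 0.0),   # Euler rotations (deg)
    "readout_duration_s": 2.0,          # readout duration (s)
}
```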
The camera model is intended to be general enough to support any FPA. Default values may be used for many parameters when initially characterizing a new camera, but the software is unlikely to produce satisfactory results until accurate FPA and detector properties are recorded.
Raw data (i.e., camera data as persisted by the data collection system in the observing environment) consist of image pixels and associated metadata. The metadata include information about the exposure itself (e.g., duration, start time), the camera and telescope configuration (e.g., filter used, sky coordinates of the optical axis), and environmental data; this information is collected in the image header. Raw images typically include virtual overscan pixels, along with per-sensor metadata. When lsstSim raw data are persisted to storage, each image file contains the data for a single amp, and the data files are organized into a corresponding directory structure.
A fully processed image from a single sensor (a CalExp) consists of the following aggregated components:
| Component | Dimensions | Description |
|-----------|------------|-------------|
| Science Array | N x M | The array of image pixels, with dimensions equal to the photo-active region of a single sensor; instrument signature removed |
| Mask | N x M x P | An array of bits, with each of P planes corresponding to an individual image or processing pathology |
| Variance Array | N x M | Variance at each pixel in the Science Array, computed from photon noise, detector noise, and other noise contributions (e.g., from snap co-addition) encountered during processing |
| PSF Characterization | (n/a) | A functional characterization (basis function and coefficients) of the brightness profile of point sources, determined from bright stars in the image |
| Calibration Metadata | (n/a) | Approximate photometric zero-point, and a functional characterization of the WCS |
The science array is corrected for the effects of instrumental signature, and the mask, variance array, and calibration metadata are added during the course of pipeline processing.
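A toy analogue of this image/mask/variance aggregation (loosely modeled on, but not identical to, the Stack's MaskedImage) illustrates how the pieces travel together:

```python
import numpy as np

class ToyCalExp:
    """Carry a science array with a bit-plane mask and per-pixel variance."""

    MASK_PLANES = {"BAD": 0, "SAT": 1, "CR": 2}  # illustrative subset of planes

    def __init__(self, image):
        self.image = np.asarray(image, dtype=float)
        self.mask = np.zeros(self.image.shape, dtype=np.int32)
        self.variance = np.zeros_like(self.image)

    def set_mask(self, plane, where):
        """Set one pathology bit wherever the boolean array `where` is True."""
        self.mask[where] |= 1 << self.MASK_PLANES[plane]
```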
Calibration Reference Data
Reference calibration data are used during pipeline data reduction to remove instrumental signature, and to determine the astrometric solution and photometric response. Although the details of content and format depend upon the type of sensors used and the particular camera, they commonly include:
| Reference Data | Format | Purpose |
|----------------|--------|---------|
| Static Bad Pixel Mask | Image/Table | Masks pixels with known problems in response (bad columns, hot pixels, etc.) |
| Bias Structure | Image | Corrects for residual structure after overscan correction has been applied |
| Dark Rate | Image | Corrects for counts that accumulate in the absence of illumination |
| Flat-field | Image | Corrects for color-dependent, pixel-to-pixel variations in sensitivity; if the field-illumination function is not included here, it appears as a separate reference image |
| Fringe Structure | Image | Corrects for the fringe pattern in the background of science images |
| Pupil Ghost | Image | Characterizes the pupil ghost (not needed for the LSST camera) |
| Astrometric Catalog | Table | Positions and motions of astrometric standard stars |
| Photometric Catalog | Table | Brightnesses of standard stars in multiple passbands |
The reference data are often regarded as time-dependent (i.e., there is a range in time over which a given file is applicable), or may be organized according to some relevant property (e.g., filter in use). This organization is generally reflected in the file naming scheme and/or the directory hierarchy. Most reference images include variance arrays.
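Schematically, the bias, dark, and flat reference products combine in the familiar correction sketched below; real ISR also handles overscan, defects, fringing, and more:

```python
import numpy as np

def basic_isr(raw, bias, dark_rate, flat, exptime):
    """corrected = (raw - bias - dark_rate * exptime) / flat"""
    return (raw - bias - dark_rate * exptime) / flat
```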
Rolling Your Own Camera
If you wish to use the LSST Stack to process data from a camera that is not currently supported, a number of steps will be necessary. You will be creating a new obs_<camera> package to support the processing, which will most likely be written entirely in Python. Examples of working camera packages can be found in the Stack (obs_sdss and obs_lsstSim) or in additional packages available from their git source-code repositories:
These latter examples may be the most helpful for getting started. The obs_sdss package is more fully developed, and has additional examples for advanced pipelines such as image co-addition, forced photometry, etc. Here are the major steps for rolling your own:
Organize the Data
Organize science and calibration data in a hierarchical file structure in a way that is sensible for the new camera
- A survey cadence (if applicable) may be reflected in the directory hierarchy and file nomenclature
- Sensible means a hierarchy and nomenclature that can be navigated easily by a software mapper to locate data
- Construct a test data set on which regression tests can be built. Ideally it will include images of a well studied field.
- Create astrometry_net index files for your test data set (to support WCS solutions and photometric calibrations), if needed
Now collect the attributes of the camera, the sensors, and the geometry into reference files (to support the camera model). Example bootstrap ASCII files for obs_lsstSim may be found in the <package>/description directory; these are re-cast with a script to create the /camera/camera.py configuration file that is actually used by the software. For this package there is also a defects repository. Note that the concept of time-dependent calibration files is not yet transparently supported in the Stack.
Create the obs_<camera> Package
You will need to write a number of tasks for your camera package. See the on-line documentation for:
- Base package for Pipeline Tasks (introduces the concept of tasks in the LSST Stack)
- How to Write a Task
- How to Write a Command-line Task
- Source code documentation for CameraGeom
What tasks you write is intimately connected to the kind of data you have and the processing necessary to produce science results. Fortunately, most tasks and methods can be sub-classed from existing code, and customizations for your needs may require only modest work. Among the tasks common to most obs_<camera> packages are:
- Create an ingester to build the registry of data products
- Create a custom mapper
- Create a mapper policy file that describes access patterns for navigating the repository of data
- You will likely need to create a subclass of CameraMapper
- Build custom configurations for the pipelines of interest
- Support tasks (e.g., in ISR) may need to be customized (i.e., retargeted), depending upon the complexity of your data
- Attend to the camera packaging
- Create the configuration and software dependency files for EUPS, so that the software can be distributed and deployed with EUPS (if needed)
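The ingest step above amounts to recording image metadata in a sqlite3 registry that the mapper can query. A minimal sketch follows; the table schema is an assumption for illustration, not the Stack's actual registry schema:

```python
import sqlite3

def build_registry(db_path, entries):
    """Create a toy 'raw' table and load (visit, filter, raft, sensor) rows."""
    con = sqlite3.connect(db_path)
    con.execute(
        "CREATE TABLE IF NOT EXISTS raw "
        "(visit INTEGER, filter TEXT, raft TEXT, sensor TEXT)"
    )
    con.executemany("INSERT INTO raw VALUES (?, ?, ?, ?)", entries)
    con.commit()
    return con

con = build_registry(":memory:", [(123, "r", "22", "11")])
rows = con.execute("SELECT visit, filter FROM raw").fetchall()
print(rows)  # [(123, 'r')]
```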
Exercise the Software
Aside from debugging, you will need to run the software and study the output in order to:
- fine-tune the task parameters, such as the minimum expected PSF width
- verify that sources can be detected to the expected depth (and distinguished from cosmic rays)
- verify that an accurate WCS solution can be determined over an entire image
- verify that point-source photometry is accurate
Write tests to ensure that:
- the camera object can be created
- data ingestion works, and creates a repository
- science and calibration data can be retrieved and persisted
- sources can be detected on science images
- an accurate WCS solution can be derived