
The mapping between raw data from a camera and the organization and structure of raw and calibrated images on disk is intimately related to the in-memory model of the Camera and its associated geometry. The LSST Stack is capable of processing data from multiple cameras, and the software is designed and built with the intent of making the addition of other cameras straightforward.

This chapter describes the LSST Camera focal plane and the organization of LSST camera simulation data, followed by a description of how these are represented in the software model that enables processing of data from any astronomical camera. It concludes with a high-level summary for creating a package for processing data from a custom camera.

 


Introduction

Cameras are imaging devices composed of one or more sensors arranged in a focal plane array (FPA). They produce science and calibration images, which can be processed by the LSST Stack. The stack includes a general model of a camera, a model for describing the geometric arrangement of the sensor array, and a mapping between these models and the representation of data on disk. 

The LSST Focal Plane

Essential concepts in the representation of a camera can be understood by first considering the layout of the LSST Camera focal plane array (FPA), and then learning how the software accesses and uses LSST data. While the data organization for other cameras will be different in detail, the customizations (to the supporting software, configuration, and supporting data) will be straightforward. 

The focal plane for any camera is assumed to be populated with multiple sensors (not necessarily arranged in a regular grid), each of which is read out independently. The figure below shows a schematic of the LSST camera focal plane, where 189 science sensors (light-blue background) are mounted 3x3 on 21 sub-arrays (rafts). The sensors are dual-indexed: by raft, from (0,0) in the lower-left of the diagram to (4,4) in the upper-right; and by position within a raft, from (0,0) in the lower-left to (2,2) in the upper-right. (Note that the four corner rafts are populated with sensors that are used for purposes other than science imaging.) Each science sensor consists of 4000x4072 photo-active pixels. The individual sensor electronics include 16 amplifiers, each of which reads out 509x2000 pixels in parallel, plus virtual overscan. 

Figure: LSST Focal Plane

The LSST camera field of view is quite large: a circle of radius 1.75° about the optical axis is shown. The focal-plane coordinates and origin are shown on the image axes. The orientation of the amplifiers is shown for the central sensor (you may need to zoom the figure). See the section on Coordinate Systems, below, for details. 

Organization of LSST Data

The LSST pipeline software constructs a representation of an image that consists of the science array, a quality mask, a variance array, and a variety of attributes. For convenience, these data are organized hierarchically in a way that reflects the attributes of the expected survey cadence (visit → snap) and the camera FPA (raft → sensor). To a good approximation, the early stages of pipeline processing treat each LSST exposure as a collection of 189 separate images of the sky (taken simultaneously). When persisted to storage, each processed image file contains the data for a single sensor. The data reside in a repository, which for all cameras consists of the following:

  • A directory structure populated with science and calibration data files
  • A registry containing an inventory of image metadata (sqlite3)
  • A _mapper file to indicate to the system which camera model to use in navigating the hierarchy and reading/writing data

The processed data are organized in the file system like so: 

<product>/
  v<visit>-f<filter>/
    R<raft>/
      S<sensor>.fits

where the product is one of calexp|icMatch|icSrc|src, the visit is a running numerical identifier, the filter is one of u|g|r|i|z|y, and the raft and sensor identifiers (described above) are two-digit sequences without punctuation. Access to compressed (gzipped) data is supported transparently. Data for other cameras may be organized differently on disk, but the representation in software is the same. 
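
For illustration, the data butler resolves a dataset type plus a data identifier into a file within this hierarchy. Below is a minimal sketch, assuming the Gen2 lsst.daf.persistence API and obs_lsstSim-style data-identifier keys; the repository path and visit number are hypothetical.

# Minimal sketch: retrieve a processed sensor image ("calexp") through the butler.
# The dataId keys mirror the repository layout above; all values are illustrative.
from lsst.daf.persistence import Butler

butler = Butler("/path/to/repository")   # root directory containing the _mapper file
calexp = butler.get("calexp", visit=840, filter="r", raft="2,2", sensor="1,1")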

Representing a Camera in Software

Given the above discussion of the LSST camera and the organization of the data, the following description of the software representation of data from a camera may be easier to follow. 

Camera Geometry

The camera geometry package, lsst.afw.cameraGeom, provides a general representation of the geometry of a camera, including the location and orientation of each detector, multiple coordinate systems and transformations between them, and an approximate model of optical distortion. The package also provides supporting utilities for purposes such as determining a World Coordinate System (WCS) characterization, assembling individual amplifier read-outs into a full sensor image, and creating mosaics of sensor images in the FPA.

Coordinate Systems

Any relevant coordinate system for the camera can be defined, but the following coordinate systems (along with support for transformations between them) are supported by default: 

Coordinate System | Units   | Description
PUPIL             | radians | x,y coordinates relative to the intersection of the optical axis with the FPA
FOCAL PLANE       | mm      | x,y coordinates in the focal plane
PIXELS            | pix     | x,y unbinned pixels on the entry surface of each detector
ACTUAL PIXELS     | pix     | Same as PIXELS, but accounts for pixel-level distortions
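
For example, positions can be converted between these systems through the camera object. The following is a minimal sketch, assuming the lsst.afw.cameraGeom transform API of recent Stack versions (method names, constants, and the detector label may differ between versions and cameras).

# Minimal sketch: transform the focal-plane origin to pixel coordinates on the
# central LSST science sensor, using the Detector.transform API.
import lsst.afw.cameraGeom as cameraGeom
from lsst.daf.persistence import Butler
from lsst.geom import Point2D

butler = Butler("/path/to/repository")
camera = butler.get("camera")        # camera object built from the obs_<camera> description
detector = camera["R:2,2 S:1,1"]     # central LSST science sensor
pixPoint = detector.transform(Point2D(0.0, 0.0), cameraGeom.FOCAL_PLANE, cameraGeom.PIXELS)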

Camera Model

Considering now a general camera (not LSST), the following schematic (right) shows image axes in amplifier coordinates for a single sensor, where the amplifier is at the origin and pixels are read out serially in the x-direction. The table (left) shows how the axes need to be flipped to align with the full sensor coordinate system, where x and y increase monotonically from the image origin. Also shown are the regions of pre-scan (blue) and over-scan (light blue) virtual pixels in the serial direction, which are removed by the Instrument Signature Removal (ISR) pipeline during overscan correction.


 

Amplifier | Flip-x | Flip-y
Amp1      | False  | True
Amp2      | True   | True
Amp3      | False  | False
Amp4      | True   | False

Figure: Amp_layout

The attributes of a camera (see the next sub-section) are specified in the obs_<camera> packages and are represented in software by the camera object; the cameraGeom package uses these attributes to perform geometric transformations. 
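
The flip flags are carried on the amplifier records themselves. A minimal sketch, assuming the lsst.afw.cameraGeom Amplifier accessors (getRawFlipX/getRawFlipY) of recent Stack versions:

# Minimal sketch: list the x/y flips needed to bring each amplifier read-out
# into full-sensor orientation; the detector label is obs_lsstSim-style.
from lsst.daf.persistence import Butler

butler = Butler("/path/to/repository")
detector = butler.get("camera")["R:2,2 S:1,1"]
for amp in detector:
    print(amp.getName(), amp.getRawFlipX(), amp.getRawFlipY())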

Camera Attributes

Attributes of the sensors and their arrangement in the FPA are collected in the obs_<camera> packages, in the /description sub-directory. The attributes include the following: 

Sensor Properties
  • Name
  • Number of amplifiers
  • Dimensions of axes (pix)

Amplifier Properties
  • Region in device (pix)
  • Read-out direction for x,y
  • Electronic properties
  • Region of overscan in parallel/serial directions (pix)

FPA Layout
  • Name of device
  • Position in x,y (mm)
  • Dimensions (pix)
  • Device use type (SCIENCE, GUIDING, etc.)
  • Readout duration (s)
  • Euler rotations (deg)
  • Translations (mm)
  • Deformation parameters

The camera model is intended to be general enough to support any FPA. Default values may be used for many parameters when initially characterizing a new camera; however, the software is unlikely to produce satisfactory results until accurate FPA and detector properties are recorded. 

Camera Mapper

Raw Data

Raw data (i.e., camera data as persisted by the data collection system in the observing environment) consist of image pixels and associated metadata. The metadata include information about the exposure itself (e.g., duration, start time), the camera and telescope configuration (e.g., filter used, sky coordinates of the optical axis), and environmental data; this information is collected in the image header. Raw images typically include virtual overscan pixels, along with per-sensor metadata. When lsstSim raw data are persisted to storage, each image file contains the data for a single amplifier. The data files are organized into a directory structure by visit_ID/raft/sensor.
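
A minimal sketch of retrieving one amplifier's raw read-out, assuming Gen2 butler access with obs_lsstSim-style data-identifier keys (the snap and channel keys, and all values shown, are illustrative):

# Minimal sketch: retrieve the raw image of a single amplifier (channel).
from lsst.daf.persistence import Butler

butler = Butler("/path/to/repository")
raw = butler.get("raw", visit=840, snap=0, raft="2,2", sensor="1,1", channel="0,0")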

CalExp

A fully processed image from a single sensor (a CalExp) consists of the following aggregated components: 

Component            | Dimensions | Description
Science Array        | N x M      | The array of image pixels, with dimensions equal to the photo-active region of a single sensor; instrument signature removed
Mask                 | N x M x P  | An array of bits, with each of P planes corresponding to an individual image or processing pathology
Variance Array       | N x M      | Variance at each pixel in the Science Array, computed from photon noise, detector noise, and other noise contributions (e.g., from snap co-addition) encountered during processing
PSF Characterization | (n/a)      | A functional characterization (base function and coefficients) of the brightness profile from point sources, determined from bright stars in the image
Calibration Metadata | (n/a)      | Approximate photometric zero-point, and functional characterization of the WCS

The science array is corrected for the effects of instrumental signature; the mask, variance array, and calibration metadata are added during the course of pipeline processing. 
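
The aggregated components are accessible from the exposure object returned by the butler. A minimal sketch, assuming the lsst.afw.image.Exposure accessors of recent Stack versions (dataId values are illustrative):

# Minimal sketch: unpack the components of a CalExp.
from lsst.daf.persistence import Butler

butler = Butler("/path/to/repository")
calexp = butler.get("calexp", visit=840, filter="r", raft="2,2", sensor="1,1")
maskedImage = calexp.getMaskedImage()
science = maskedImage.getImage()        # science array
mask = maskedImage.getMask()            # bit-plane mask
variance = maskedImage.getVariance()    # per-pixel variance
psf = calexp.getPsf()                   # PSF characterization
wcs = calexp.getWcs()                   # WCS characterization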

Calibration Reference Data

Reference calibration data are used during pipeline data reduction to remove instrumental signature, and to determine the astrometric solution and photometric response. Although the details of content and format depend upon the type of sensors used and the particular camera, they commonly include: 

Type                  | Format      | Description
Static Bad Pixel Mask | Image/Table | Masks pixels with known problems in response (bad columns, hot pixels, etc.)
Bias Structure        | Image       | Corrects for residual structure after overscan correction has been applied
Dark Rate             | Image       | Corrects for the accumulated counts in the absence of illumination
Flat-field            | Image       | Corrects for color-dependent pixel-to-pixel variations in sensitivity. The field illumination function, if not included, will appear as a separate reference image.
Fringe Structure      | Image       | Corrects for the fringe pattern in the background of science images
Pupil Ghost           | Image       | Characterizes the pupil ghost (not needed for the LSST camera)
Astrometric Catalog   | Table       | Positions and motions of astrometric standard stars
Photometric Catalog   | Table       | Brightnesses of standard stars in multiple passbands

The reference data are often regarded as time-dependent (i.e., there is a range in time over which a given file is applicable), or may be organized according to some relevant property (e.g., filter in use). This organization is generally reflected in the file naming scheme and/or the directory hierarchy. Most reference images include variance arrays. 
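
Reference products are retrieved through the butler just like science data, with the applicable calibration selected by the data identifier. A minimal sketch, assuming Gen2 access; the dataset-type names follow the mapper policy, and the obs_lsstSim-style keys and values shown are illustrative:

# Minimal sketch: retrieve reference calibrations appropriate to a science exposure.
from lsst.daf.persistence import Butler

butler = Butler("/path/to/repository")
bias = butler.get("bias", visit=840, raft="2,2", sensor="1,1")
flat = butler.get("flat", visit=840, filter="r", raft="2,2", sensor="1,1")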

Rolling Your Own Camera

If you wish to use the LSST Stack to process data from a camera that is not currently supported, a number of steps will be necessary. You will be creating a new obs_<camera> package to support the processing, which will most likely be written entirely in Python. Examples of working camera packages can be found in the Stack (obs_sdss and obs_lsstSim) or in the packages available from the following git source-code repositories: 

These latter two examples may be the most helpful for getting started. The obs_sdss package is more fully developed, and has additional examples for advanced pipelines such as image co-addition, forced photometry, etc. Here are the major steps for rolling your own: 

Organize the Data

Organize science and calibration data in a hierarchical file structure in a way that is sensible for the new camera:

  • A survey cadence (if applicable) may be reflected in the directory hierarchy and file nomenclature
  • Sensible means a hierarchy and nomenclature that can be navigated easily by a software mapper to locate data
  • Construct a test data set on which regression tests can be built. Ideally it will include images of a well-studied field.
  • Create astrometry_net index files for your test data set (to support WCS solutions and photometric calibrations), if needed

Now collect the attributes of the camera, the sensors, and the geometry into reference files (to support the camera model). Example bootstrap ASCII files for obs_lsstSim may be found in the <package>/description directory; these are re-cast with a script to create the /camera/camera.py configuration file that is actually used by the software. For this package there is also a defects repository. Note that the concept of time-dependent calibration files is not yet transparently supported in the Stack. 
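
For orientation, camera.py is a pex_config fragment describing the FPA. The excerpt below is only illustrative: the field names follow the DetectorConfig pattern used by afw cameraGeom but vary between Stack versions, and all values shown are hypothetical.

# Illustrative excerpt of a camera.py-style configuration fragment (loaded by
# the software with config.load); field names and values are hypothetical.
config.name = "MyCamera"
config.detectorList[0].name = "R:2,2 S:1,1"  # device name
config.detectorList[0].detectorType = 0      # device use type (0 = SCIENCE)
config.detectorList[0].offset_x = 0.0        # position in the FPA (mm)
config.detectorList[0].offset_y = 0.0
config.detectorList[0].pixelSize_x = 0.010   # pixel pitch (mm)
config.detectorList[0].pixelSize_y = 0.010
config.detectorList[0].yawDeg = 0.0          # Euler rotation (deg)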

Create the obs_<camera> Package

You will need to write a number of tasks for your camera package. See the on-line documentation for: 

Which tasks you write is intimately connected to the kind of data you have and the processing necessary to produce science results. Fortunately, most tasks and methods can be sub-classed from existing code, and customizing them for your needs may require only modest work. Among the tasks common to most obs_<camera> packages are: 

  • Create an ingester to build the registry of data products
  • Create a custom mapper
    • Create a mapper policy file that describes access patterns for navigating the repository of data
    • You will likely need to create a subclass of CameraMapper (a skeleton is sketched after this list)
  • Build custom configurations for pipelines of interest, in particular processCcd
    • Support tasks (e.g., in ISR) may need to be customized (i.e., retargeted), depending upon the complexity of your data
  • Attend to the camera packaging
    • Create the configuration and software dependency files for EUPS, so that the software can be distributed and deployed with EUPS (if needed)

Exercise the Software

Aside from debugging, you will need to run the software and study the output in order to: 

  • fine-tune the task parameters, such as the minimum expected PSF width 
  • verify that sources can be detected to the expected depth (and distinguished from cosmic rays)
  • verify that an accurate WCS solution can be determined over an entire image
  • verify that point-source photometry is accurate

Write tests to ensure that:

  • the camera object can be created
  • data ingestion works, and creates a repository 
  • science and calibration data can be retrieved and persisted
  • sources can be detected on science images
  • an accurate WCS solution can be derived
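
A minimal sketch of the first of these tests, assuming unittest and the hypothetical obs_mycamera package from the mapper skeleton above:

# Minimal sketch: verify that the camera object can be constructed from the
# obs package description; package, mapper, and test-data path are hypothetical.
import unittest

class CameraConstructionTestCase(unittest.TestCase):
    def testCameraCreation(self):
        from lsst.obs.mycamera import MyCameraMapper   # hypothetical package
        camera = MyCameraMapper(root="tests/data").camera
        self.assertGreater(len(camera), 0)             # at least one detector defined

if __name__ == "__main__":
    unittest.main()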

 


3 Comments

  1. Note that the pixel counts for each CCD defined above are for one of the two potential vendors.

    Also note that the serial register orientations are different between the two vendors. There are 8 serial registers along one edge of the CCD and 8 along the opposite edge. The registers are parallel to the y direction. For one vendor both edges shift in the same direction (reflection symmetry). For the other vendor one edge shifts in the positive y direction and the other in the negative y direction (180 degree rotation symmetry).

    1. With regard to the readout, the section Camera Model above says that for "a general camera (not LSST), ... pixels are read out serially in the x-direction". This is not correct for the LSST focal plane as defined above (the short side of the CCD segments holds the serial register). How is that represented in the camera model?

  2. A small concern: in the figure LSST_FocalPlane.png above, the direction of the Y axis cannot be unambiguously read from the figure. By comparison with LCA-280, +Y must be up, which is unsurprising, but it would still be good to ensure that the labeling is explicit. Presently there is no axis arrow, and only one tick with a value is shown (unlike the X axis).