Attendees

Regrets

Agenda

Item | Who | Notes
Discussion of “the LSST Camera Image Viewer”
  • That is, “Tony Johnson's viewer”.
  • https://lsst-camera-dev.slac.stanford.edu/FITSInfo/
  • It currently shows raft-level images (but see below).
  • The viewer is based on OpenSeadragon.
    • An “open-source web-based viewer for high-resolution zoomable images, implemented in pure JavaScript”.
  • Images are fetched using the International Image Interoperability Framework (IIIF).
    • This is an open standard; in this case it appears to be implemented by Cantaloupe running under Jetty.
  • Image data is read directly from FITS files on the server side.
  • It is tiled and delivered to the client as JPEGs.
    • In theory, the IIIF protocol can support other delivery formats, but they don't seem to be used by the Camera Image Viewer.
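  • For reference, IIIF Image API requests encode the region, size, rotation, quality, and format directly in the URL path, which is what makes tiled JPEG delivery so simple. A minimal sketch (the base URL and image identifier here are made up, not the actual SLAC endpoint):

```javascript
// Build an IIIF Image API request URL. The base URL and identifier are
// hypothetical; only the path pattern follows the IIIF spec:
//   {base}/{identifier}/{region}/{size}/{rotation}/{quality}.{format}
function iiifTileUrl(base, identifier, { region = "full", size = "max",
                                         rotation = 0, quality = "default",
                                         format = "jpg" } = {}) {
  return [base, encodeURIComponent(identifier), region, size,
          String(rotation), `${quality}.${format}`].join("/");
}

// Request a 256-pixel-wide JPEG of the 256x256 region at (512, 1024):
const url = iiifTileUrl("https://example.org/iiif", "raft_image_01",
                        { region: "512,1024,256,256", size: "256," });
```

  • OpenSeadragon issues exactly this sort of request for each visible tile, so the server only ever renders the pixels currently on screen.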
  • The OpenSeadragon / IIIF system has no understanding of a coordinate system beyond pixels (and, with a plugin, a scale factor to a physical unit). It does not (as far as I can see) enable you to e.g. project astronomical coordinates.
  • It is possible to both read and manipulate RGB values of individual pixels.
    • There are plugins which enable operations like thresholding, convolution, and applying a colormap.
  • If it's possible to combine pixels from different images (e.g. to build an RGB composite), it is not obvious how.
    • It doesn't look as though OpenSeadragon can do this itself, but it may be possible to use it to collect pixel data from multiple images, then write your own JavaScript to generate the composite.
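  • The "write your own JavaScript" step would presumably look something like the following sketch, assuming per-band grayscale values (0–255) have already been pulled out of three registered images (e.g. via canvas getImageData):

```javascript
// Interleave three grayscale channels into an RGBA pixel buffer, suitable
// for putImageData onto a canvas. Assumes the three input arrays come from
// images that are already registered and the same size.
function composeRGBA(red, green, blue) {
  const out = new Uint8ClampedArray(red.length * 4); // 4 bytes (RGBA) per pixel
  for (let i = 0; i < red.length; i++) {
    out[4 * i] = red[i];
    out[4 * i + 1] = green[i];
    out[4 * i + 2] = blue[i];
    out[4 * i + 3] = 255; // fully opaque
  }
  return out;
}

// Two pixels, (10, 20, 30) and (40, 50, 60):
const rgba = composeRGBA([10, 40], [20, 50], [30, 60]);
```

  • Any scaling/stretch per band would happen before this interleaving step; the hard part in practice is registration, not the compositing itself.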
  • It is possible to add annotations and overlays to images using OpenSeadragon.
    • The Camera Image Viewer uses this to label detectors & amplifiers.
    • There are a variety of plugins for various sorts of canvas, which fall outside my JavaScript/HTML5 understanding, but which I imagine would make it straightforward to draw masks, footprints, overlay catalogs, etc.
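  • One detail worth knowing for anyone writing overlays: OpenSeadragon positions them in viewport coordinates, which it normalizes so the full image width spans exactly 1.0 (both axes divide by the image width, preserving aspect ratio). OSD's own viewport.imageToViewportCoordinates() does this conversion; a pure-function sketch of the single-image case:

```javascript
// Convert image pixel coordinates to OpenSeadragon viewport coordinates
// (single-image case): both axes are normalized by the image *width*.
function imageToViewport(px, py, imageWidth) {
  return { x: px / imageWidth, y: py / imageWidth };
}

// Center of a 1024-pixel-wide, 512-pixel-tall detector image:
const p = imageToViewport(512, 256, 1024); // { x: 0.5, y: 0.25 }
```

  • So a detector/amplifier label like the Camera Image Viewer's is just an overlay anchored at a viewport rectangle computed this way from pixel bounds.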
  • Future plans for extension of the Camera Image Viewer include:
    • Live update (automatically show the latest image taken)
    • Display the full focal plane
    • Show data about pixel values over which the mouse is moving
    • Add more colorization/stretch options
    • Bias subtraction
    • FITS header viewer
    • Ability to flag “favourites”
      • (Which I assume means “images of particular interest”, rather than sources/regions).
  • Tony's image display goals (slide stolen from him!):

  • Tony's summary of the current system (hey, I was right, it is Cantaloupe!):

  • And thoughts on user requests (it's not clear that all of these can/will be implemented):

  • And what they are currently working on:

  • Extra notes:
    • They are currently not set up to read arbitrary FITS files: the FITS reader extension to Cantaloupe is very specific to the Camera raw format. It might be possible to change that to look at science images, but that's not something that's currently on their roadmap.
    • Many operations require going back to the raw pixel data, so changing visualization settings on the fly puts significant load on the storage backend. They have a fairly monstrous system to support this.
Discussion of ExpViewer
  • That is, “Luiz da Costa's viewer”. 
  • http://expviewer.linea.gov.br
  • The basic use case here is to display a “preview” image of the full focal plane as data is received from the camera.
    • As such, it already supports full-focal-plane display (not just raft-level, as above).
    • And in default mode it “updates” every few seconds as a new image is received.
  • Not the same system as the Camera Image Viewer, above — they have been developed completely separately.
    • Although there is some suggestion that it'd be neat if ExpViewer could read from the same IIIF backend; not currently clear if that's a realistic possibility or just idle speculation.
  • However, this also uses the OpenSeadragon system, so much of the above discussion applies directly.
    • The code is harder for me to read, though, so it's harder to comment on the details of the implementation.
    • (That's not necessarily a comment on the quality of the code — I am no JavaScript programmer, and I think it has been minified.)
  • ExpViewer uses TIFFs on the back-end, rather than reading directly from FITS.
    • It's not clear where or how that conversion is happening.
  • There's a proposal from the group in Brazil to provide a tool for commenting on images (a variation on the DES “Tile Inspector”), but it's not clear what that'll actually amount to — possibly just extending Tony's tool to add a commenting facility.
  • ExpViewer design overview, courtesy of Luiz.

Discussion of aladinlite
  • Link to aladinlite documentation
  • Tool for visualizing large sets of images at adaptive resolution.
  • Requires ingesting images into HiPS (Hierarchical Progressive Surveys) format, which increases the size of the data by an estimated factor of ~1.5.
  • No obvious callbacks implemented, yet.  Would need to be done in JavaScript.
  • ipyaladin module does exist for embedding aladinlite in Jupyter notebooks.

  • Note that conversion to HiPS is potentially slow (although probably nobody has tried to make it fast).
    • But of course LSST will ultimately have to write code to make HiPS fast.
  • Coordinate system is purely based on the HEALPix grid; there is no pixel coordinate system left.
    • The expansion factor of the data obviously depends on how coarse (or fine) the HPX grid is.
  • Can overlay catalogs.
  • Could render mask planes as a separate image and flip between that and the science data.
  • Useful as an exploratory science tool, rather than at the engineering level.
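  • To make the HEALPix grid and the expansion factor above concrete: order k divides the sphere into 12 × 4^k equal-area cells (nside = 2^k, so Npix = 12 × nside²), and storing every coarser order on top of the deepest one adds a geometric-series factor of Σ 4^(−k) = 4/3 before tile-format overhead — consistent with the ~1.5 estimate. A small sketch:

```javascript
// Number of HEALPix cells at a given HiPS order: 12 base cells, and each
// deeper order quadruples the count (Npix = 12 * nside^2, nside = 2^order).
function healpixCellCount(order) {
  const nside = 2 ** order;
  return 12 * nside * nside;
}

healpixCellCount(0); // 12 base cells
healpixCellCount(3); // 768 cells
```

  • This is why there is "no pixel coordinate system left": a HiPS dataset only knows which HEALPix cell you are in, at whatever order the zoom level demands.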
Will be at Space Telescope next week (Yusra AlSayyad)
  • What opportunities can I take advantage of?
    • There will be AstroPy people, Ginga people, ...
  • There is an AstroPy initiative to produce an afwDisplay analogue (a backend-independent image viewing API).
    • Intended for notebook-based viewers; at least early versions would not be usable with DS9.
    • Be great to get a technical update on this.
    • Presumably could be updated to the Firefly API as necessary.
    • Gregory Dubois-Felsmann will find a reference to this tool.
Homework
  • Yusra added a paragraph to the report.
  • Simon will evaluate JS9, in particular with a view to the usability use cases.
  • Need to map full-focal-plane use cases onto the Tony Johnson viewer, as above.
    • It seems a plausible recommendation that we use the Johnson viewer, extended with capabilities to tile patches on a tract and chips on a focal plane.
  • Aiming to have a draft report within two weeks.
  • Discussed what the recommendations should look like.
    • We are perhaps torn between functionality (i.e., Firefly) and responsiveness (JS9?).
    • In practice, it's hard to imagine that any recommendation of this group will cause one tool to satisfy all requirements; we should rather focus on a curated(?) list of tools which cumulatively make as many people as possible happy.
  • Action for everybody: Look at the aggregated use cases, and pick your favourite to turn into text in the document.