Executive Summary

An event-driven Prompt Processing execution framework is required for Alert Production to meet its latency requirements.  Until recently no such framework was available, and all AP precursor testing took place in batch mode.

In March 2022 we attempted to integrate the Alert Production software payload with a prototype Prompt Processing framework in Google Cloud.  The integration was successful; we were able to execute AP payloads triggered by visit and file-upload notifications.

Further work will be necessary to run the system in production.  Google Cloud-specific technologies will need to be replaced for operation in the USDF.

Sprint Goals and Organization

During early March 2022, we undertook an evaluation of the Prompt Processing system design presented in https://dmtn-219.lsst.io/.  The primary goal was to identify any significant issues with the design and the future work required for a production system.  Our approach was to take the prototype implementation on the Google Cloud Platform (GCP) and attempt to extend it to process precursor data with the LSST Science Pipelines.  Completing a full run of the AP pipeline within the Prompt Processing framework during the sprint period was a desired outcome, but not required to meet our goals of evaluating the design.

Sprint participants were Krzysztof Findeisen and John Parejko, with Kian-Tat Lim providing additional valuable contributions and Eric Bellm supervising the overall effort.

Progress was tracked on DM-33916; additional post-sprint tickets are tagged with component = prompt_prototype. Coordination discussion on Slack may be found in #dm-prompt-processing.


  • The AP team developed an understanding of the framework design; no show-stoppers appeared during our testing.
  • Migrated prototype code to https://github.com/lsst-dm/prompt_prototype and improved tests and documentation.
  • Created a central Butler repository to hold calibrations, templates, skymaps, etc.
  • Created a bucket of raw files from an ap_verify dataset to execute the pipeline on.
  • Created a local Butler repository in the worker container, extracted calibrations and other needed datasets from the central repo, and imported them to the worker-local repo.
  • Sent realistic next_visit events and simulated uploading the corresponding raw files.
  • Worked with (writing and reading) Butler Registry and APDB databases in Google Cloud SQL.
  • Demonstrated complete pipeline execution on the worker-local repo on one visit and an empty APDB.
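To illustrate the trigger path exercised above, a next_visit message might look like the following sketch. All field names here are hypothetical; the real schema is defined by the observatory event system, not by this prototype note.

```python
import json
from dataclasses import dataclass, asdict

# Hypothetical next_visit payload; field names are illustrative only,
# not the actual schema published by the observatory.
@dataclass
class NextVisit:
    instrument: str
    group: str     # group ID shared by the snaps of one visit
    snaps: int     # number of snap exposures expected
    filter: str
    ra: float      # boresight RA, degrees
    dec: float     # boresight Dec, degrees
    kind: str      # visit type, potentially used for pipeline retargeting

def encode(visit):
    """Serialize an event for publication on a message queue
    (Pub/Sub in the prototype; Kafka or similar at the USDF)."""
    return json.dumps(asdict(visit)).encode()

msg = encode(NextVisit("DECam", "2022-03-15T00:00:00.001", 2,
                       "g", 155.0, -4.7, "survey"))
```

The worker side would decode the same JSON to decide which calibrations and templates to stage before the raws arrive.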

Prototype Architecture Diagram

Next Steps

General improvements

  • Write hooks to synchronize raw and processed data from the worker-local repo back to the central repository.
  • Write ICDs and code that allow sending nextVisit and transferring images from the telescope(s), in addition to the test buckets used by upload.py.
  • Develop a system to enable flexible retargeting of pipelines based on the next_visit input.
  • Improve flexibility of APDB/Postgres schema handling to isolate distinct pipelines/processing situations.
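The retargeting item above could start as a simple lookup table keyed on the next_visit contents. A hypothetical sketch (pipeline paths, keys, and the fallback are all illustrative, not an existing configuration scheme):

```python
# Hypothetical retargeting table mapping (instrument, visit kind) to a
# pipeline definition file; entries and paths are illustrative only.
PIPELINES = {
    ("LSSTCam", "survey"): "pipelines/LSSTCam/ApPipe.yaml",
    ("LATISS", "survey"): "pipelines/LATISS/ApPipe.yaml",
}
DEFAULT_PIPELINE = "pipelines/Isr.yaml"  # safe fallback for missing combinations

def select_pipeline(instrument, kind):
    """Choose a pipeline for a next_visit event, falling back to a
    minimal default when no specific pipeline is configured."""
    return PIPELINES.get((instrument, kind), DEFAULT_PIPELINE)
```

A real implementation would presumably load this table from configuration rather than hardcoding it, and would need a policy for how to handle (or reject) genuinely unexpected combinations.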

Porting to the USDF

We have heavily used Google Cloud infrastructure throughout the prototype; significant rework will be needed to translate to the USDF, including changes to the underlying technologies:

  • Cloud Run → Kubernetes + Ingress + Horizontal Pod Autoscaler
  • Cloud Storage → MinIO or Ceph Object Gateway (RADOS)
  • Pub/Sub → Kafka or another messaging queue
  • APDB: Cloud SQL Postgres → Cassandra

Plus deployment via Helm/ArgoCD.

It may be useful to maintain the ability to run Prompt Processing both in the USDF and GCP using the same codebase, although the ability to use this capability for Rubin data would require further investigation of relevant data security issues.
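If the same codebase is to run on both platforms, one option is to hide the object store behind a small interface so only thin backend adapters differ between GCP and the USDF. A hypothetical sketch (the class and method names are ours, not an existing API; real backends would wrap the Google Cloud Storage and S3/boto3 clients):

```python
from abc import ABC, abstractmethod

class ObjectStore(ABC):
    """Minimal storage interface so the activator code is portable across
    Google Cloud Storage and an S3-compatible store (MinIO / Ceph RGW)."""

    @abstractmethod
    def get(self, key):
        """Return the bytes stored under key."""

    @abstractmethod
    def put(self, key, data):
        """Store data (bytes) under key."""

class MemoryStore(ObjectStore):
    """In-memory stand-in for tests; real adapters would implement the
    same two methods against the cloud client libraries."""
    def __init__(self):
        self._objects = {}

    def get(self, key):
        return self._objects[key]

    def put(self, key, data):
        self._objects[key] = data

store = MemoryStore()
store.put("raw/visit1.fits", b"...")
```

Whether this abstraction is worth maintaining depends on the data-security questions mentioned above.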

Enhancing the GCP Prototype

  • DM-34000
  • DM-34020
  • DM-34112: check that we have the right refcat shards in the ap_verify datasets.
  • Find a way to reduce the "startup" time for the service and/or individual workers, which is currently longer than the interval between a next_visit event and the observation of a single visit. A preliminary test suggests that setting the minimum number of workers to 1 is not sufficient to avoid the delay.
  • Use the same exposure ID in the raw filename as in the file header and/or visitInfo; otherwise, the activator tries to process an exposure ID that the Butler says doesn't exist. (fixed on DM-35052)
  • Upload real images that have unique visit/exposure IDs, so that the service and the APDB see a "new" image every run. Possibly related to the ongoing debate about how visit definition should work.
  • Add tests that prep_butler can be called twice without crashing (and that it works both times!)
    • We are currently blocked from being able to rerun the pipeline twice by all of the following factors:
      • It's hard (not impossible, just impractical) to set up a local repository that already contains files or even dimensions from a previous run (either successful or failed): DM-35051.
      • run_pipeline assumes that all exposures in the local repository are from the current visit; doing otherwise requires that the activator know the exposure IDs of the images (see above comments about snap IDs vs. exposure IDs, and about uploader and file agreeing on what the file's exposure ID is): DM-35052.
      • Raws generated from test data are all ingested with the same (or a small handful of) visit/exposure IDs, causing uniqueness failures in the APDB
  • Implement syncing of raws and pipeline outputs to the central repository: DM-35053.
    • This should probably be done after we solve the visit/exposure ID problems, or the central repository will suffer uniqueness problems analogous to those already in the APDB.
  • Include alert distribution as part of the prototype?
  • Scale up test data to have multiple visits (which allows non-trivial DIAObject handling, and better testing of how well we can manage a persistent APDB). (fixed on DM-35052)
  • LSSTCam test data (perhaps from ImSim?)
  • Test that the correct visits are defined for the ingested exposure after either prep_butler or ingest_image. It is not yet clear where defineVisits should run, but it must happen somewhere.
  • Refactor activator.py to make it more testable and functional without so many globals.
  • Write tests of activator.py (there are pytest fixtures for Flask apps) so we can confirm that it follows the MiddlewareInterface API without doing a full Google Cloud run.
  • Sort out exactly how to select the correct instrument in activator.py: where do we need the instrument short name, where the full class name, do we pass an initialized Instrument object to MiddlewareInterface, how do we ensure the right obs packages are set up, etc.?
    • Specifically, activator.py should not contain any "application" (LSST) code, including Instrument.
  • MiddlewareInterface currently accepts raws from both local storage (for testing) and Google Storage. However, the latter requires a temporary local store to work around the lack of support for remote ingest. This store is sometimes valid, sometimes not; find a less bug-prone way of doing this.
  • Reconcile how the code and pipeline paths are handled so that we identify files in exactly the same way in unit testing and in the Service container. Currently we hacked PROMPT_PROTOTYPE_DIR in the container in a way that works for pipelines but nothing else.
  • The prompt-service service account should not need the storage.buckets.get permission to download specific files; try to remove it once DM-34188 is done.
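Several items above stem from re-uploading test raws with identical visit/exposure IDs, which triggers uniqueness failures in the registry and APDB. A minimal, hypothetical sketch of one way to mint per-run IDs for test uploads; real exposure IDs come from the instrument's metadata translator, not a scheme like this:

```python
import time

# Hypothetical workaround for the uniqueness failures noted above: combine
# a per-run epoch with the low digits of the original test exposure ID, so
# repeated uploads of the same raw look like a "new" image each run while
# remaining traceable to the original. Illustrative only.
def unique_exposure_id(base_id, run_epoch=None):
    """Return an exposure ID unique to this run but traceable to base_id."""
    epoch = int(time.time()) if run_epoch is None else run_epoch
    return epoch * 100_000 + base_id % 100_000

# Two runs of the uploader yield distinct IDs for the same source raw.
a = unique_exposure_id(411371, run_epoch=1_650_000_000)
b = unique_exposure_id(411371, run_epoch=1_650_000_001)
```

Any such scheme would need to stay within the instrument's exposure ID packing limits, so this is at most a stopgap for test data.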

Open Questions

Extracted from TODOs in MiddlewareInterface and its tests:

  • What do we call MiddlewareInterface? We found that referring to it, and its "interface" (i.e., API), was clumsy in both chat and voice. MiddlewareWorker? PipelineWorker? MiddlewareManager or MiddlewareDriver? We have not yet settled on a better name.
  • How do we handle exceptions in the various MiddlewareInterface methods? Do we just let all exceptions fall through, or do we try to catch some of them and encapsulate them in something more informative or manage certain failures?
  • Can we make use of the on_ingest_failure  and on_success callbacks in RawIngestTask to help us handle failures and successes? Possibly passing information back up to the activator?
  • RFC-605: how to determine flipX when setting the initial skyWcs for determining the detector on-sky orientation in prep_butler().
  • How do we determine the name of the coadd datasetType (e.g. deepCoadd) that will be used for templates in the pipeline? We need to get the right dataset in _export_skymap_and_templates, and there are 3 TODOs there listing some options.
  • Is the list of snap IDs included in the next_visit() packet, or do we have to receive them with the raws and pass them to run_pipeline?
    • Currently the Butler does not have snap IDs; it has exposure IDs that are unique per snap. Is this a feature that will be added later, or does the prototype need to be able to identify the images to process by exposure ID instead of by snap ID?
  • How are we handling retargeting of the pipeline based on instrument + nextvisit['kind']?  How is this configuration managed, and how do we handle missing combinations?

Middleware issues

  • butler.get('camera')  does not use the default instrument: DM-34098
  • Cannot ignore already existing items when importing a butler.export(). This results in a "unique constraint violation" when anything at all in the export is already in the local registry (post DM-34017, the only remaining problem with this seems to be the skymap).
    • If allowing butler.import_() to ignore already existing items is not feasible in middleware, could we instead only export() the items we don't already have?
    • How do we export/import only the skymap tracts/patches that we need?
    • Possible solution/related ticket: DM-33148.
    • Instead of using butler commands to register instruments/skymaps/etc., we could copy the SQLite file and Butler YAML config to get a prepped Butler.
      • This requires an SQLite file configured for the instrument+skymap you are using.
  • In _export_refcats we cannot query for all refcats with a set of shards, because passing ... (all dataset types) to queryDatasets doesn't work with a where statement that contains htm7. Middleware knows about this issue (it raises NotImplementedError), so it should be fixable; otherwise, we have to hardcode all possible refcat names. You also can't do queryDatasetTypes on just the refcat collections, so there is currently no way to identify the names of the available refcats in a Butler.
    • Beginning of a general solution to this is on DM-31725 (specifically, support for querying the dataset types in a collection).
  • We can't filter the calibs to the "most recent" in _export_calibs because queryDatasets doesn't allow validity range as an option.
    • Beginning of a general solution to this is on DM-31725.
  • Default dimensions don't always work: DM-34241.
  • Provide a way to get the sky coverage of visit-detector combinations (i.e., the visit_detector_region table or something analogous) into the registry before raws arrive (preferably, at next_visit time). I don't know if the source would need to be a VisitInfo dataset or something more exotic.
    • Current middleware is oriented and optimized towards batch processing and may introduce unnecessary overheads; we may want a simpler approach to building the quantum graph that is more appropriate to our kind of pipeline. Could middleware add a different QG-generation system that satisfies AP's needs (and does it have to support anything non-AP? Jim says preferably no)?
    • Can middleware implement an "ignore missing datasetRefs" ("don't pare down the graph on existence") mode for QG-generation? This may be more fragile at pipeline run time, depending on how precisely the `where` is specified.
    • We may want to build the quantum graph ourselves, independent of the middleware QG-generation code. Would we even need middleware regions defined?
    • Could we replace defineVisits with something more custom that only does the steps that we need?
  • Add support for remote files to RawIngestTask.
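The validity-range limitation above could be worked around in the interim by over-fetching and filtering client-side. A minimal sketch using plain tuples in place of the real Butler DatasetRef/Timespan objects, whose API differs:

```python
# Client-side stand-in for the missing validity-range filter in
# queryDatasets. Records are (detector, valid_begin, valid_end) tuples;
# real code would hold DatasetRefs with Timespans instead.
def most_recent_calibs(records, obs_time):
    """Per detector, keep the calib whose validity range covers obs_time,
    preferring the most recently certified one when ranges overlap."""
    best = {}
    for det, begin, end in records:
        if begin <= obs_time < end:
            prev = best.get(det)
            if prev is None or begin > prev[1]:
                best[det] = (det, begin, end)
    return sorted(best.values())

# Detector 1 has overlapping certifications; the later one wins.
calibs = [(1, 0, 100), (1, 50, 200), (2, 0, 300)]
picked = most_recent_calibs(calibs, 75)
```

This duplicates logic the registry should eventually own, so it would be discarded once queryDatasets grows validity-range support.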

Comments

  1. Since a lot of discussion continues to happen on this subject, which may not be of interest right now but probably makes sense to keep for posterity: I think a very good way to use the middleware for this problem in the future goes something like this:

    1. After receiving the nextVisit event, use the shared/persistent butler repo and the metadata from nextVisit to custom-build a QG for each detector that encompasses all steps.  As a custom QG generation algorithm, this wouldn't need any of the visit or exposure dimension data the general QG generation algorithm expects, as long as equivalent information (particularly the region on the sky) is passed explicitly to it.
    2. Use that QG to set up the butler actually used for execution; right now that would be a SQLite "execution butler", and in the not-too-distant future it could be a "QuantumBackedButler" that doesn't use SQL at all.  In either case, this is functionality middleware will provide, and it is the same functionality used for setting up a butler in batch execution, which is good.

    How you actually run that QG and interleave doing so with actual raw ingest is something I'm much less well-equipped to reason about, because I don't know what you're doing now, or even what kind of signal you get about the appearance of the raws, but I'm happy to discuss it.  One obvious problem with the above is that we won't be able to predict the right exposure and visit IDs when we make the QG, but it wouldn't be hard to swap them when the raws do land.  Since in this scheme it's only the temporary repo (and probably one backed only by the QG file at this point) that ever sees the temporary stand-ins for the real IDs, the damage they can do is greatly limited.

    1. The custom graph generation is what I was originally thinking of doing (hence why the code is called "activator", which was the original name for a thing-that-generates-and-runs-a-graph).  But it seemed to be a pain to translate nice pipeline YAML with config overrides etc. into a custom graph.  So I went with something simpler to start.

  2. Thanks. This is a great summary of what I'm trying to say with point 2.

  3. It seems like there are a few possible solutions for several of these issues, and they can be sorted out by whoever is responsible for continuing this work.

    For whoever works on this in the future: we need some external documentation describing the different types of butlers (e.g., how does one work with an "execution butler" or QuantumBackedButler) so that we can consider the different options. Please don't try to write such documentation on this page: this is just a record of the work that has been done to show that the system prototyped by KT is viable.