Link to meeting agenda: DM Agenda for Joint Technical Meeting 2016-02-22 to 24
Jonathan is working on this. Will be done after the HSC fork is merged. Will be an automatic transition managed by SQuaRE.
Waiting for the new documentation build system, "LSST the Docs". Should be available in the next week or two. Will be discussed in the "decision making" session tomorrow.
Still work to be done in determining the best way to document tasks, and in how tutorials and examples should be published.
What does it mean to have "enough" CI to perform major changes? Suggestion that we need all the supported cameras processed through to measurement on coadds every week. This should include regression tests as well as tests for absolute values and how they relate to the science requirements.
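As a rough sketch of what such a weekly check might look like (everything here is hypothetical: metric names, file layout, and thresholds are placeholders, not actual science requirement values):

```python
# Hypothetical sketch of a weekly CI check: compare per-camera coadd
# measurements against stored reference values (regression) and against
# science-requirement-style thresholds (absolute). All names are placeholders.
import json

PHOTOMETRIC_RMS_REQUIREMENT = 0.02  # mag; placeholder, not the real SRD value

def check_camera(camera, metrics_path, reference_path):
    with open(metrics_path) as f:
        metrics = json.load(f)
    with open(reference_path) as f:
        reference = json.load(f)

    failures = []
    # Regression: values should not drift from the previous known-good run.
    for name, ref_value in reference.items():
        if abs(metrics[name] - ref_value) > 1e-3:
            failures.append("%s: %s drifted from %s to %s"
                            % (camera, name, ref_value, metrics[name]))
    # Absolute: values must also satisfy the science requirements.
    if metrics["photometric_rms"] > PHOTOMETRIC_RMS_REQUIREMENT:
        failures.append("%s: photometric_rms %s exceeds requirement"
                        % (camera, metrics["photometric_rms"]))
    return failures
```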
What is a supported camera? SQuaRE emphasize that we should not guarantee to support every obs_ package that is submitted in perpetuity. Agreed that each should have a designated "sponsor" who will maintain it. However, if a change to the stack causes a test to break for a specific camera, our first assumption should be that it's an algorithmic error rather than a bug in the obs_ package.
Plans for nightly QA runs that will run large-scale (~hours) tests. Regular CI will run shorter (~minutes) tests.
SQuaRE will provide only limited developer support during the short Summer 16 cycle. Science Pipelines identify one issue deserving of immediate support: the shared stack on lsst-dev. JDS will act as point of contact for SQuaRE on this.
Issues to discuss in detail:
How much containerization should we be doing?
Does it simplify provenance capture?
Additional notes from Paul Wefel
Another mechanism being explored for SUIT is real interactive environments, where a VM or container running on a user's workstation/laptop is pre-configured with the SUIT software stack.
One question that came up: what is the resource limit for a science user running jobs? (Asked of Jason; couldn't hear the answer.)
VM vs. batch processing: SUIT understands Don's point from yesterday about using a batch job.
SUIT would like to have a two- or three-day meeting with NCSA to work out system interaction details (action item that isn't owned).
Jason and Gregory to talk
Python 3, idiomatic Python, etc.
Almost all of this work has been blocking on the HSC merge. However, the onus is then on Science Pipelines to decide if and when to schedule the work: Architecture will not make this decision. Should assess:
(KTL adds: Prioritization needs to happen through the usual mechanisms with input from Science Pipelines and Architecture. Science Pipelines does not get to make this decision unilaterally either.)
Rough estimate that the effort to make our Python code "idiomatic" would be two weeks of work for three people. Important to have enough test coverage in place to make sure this doesn't break anything; running an HSC data release candidate should be sufficient.
The plan is to carry out the Python 3 work in a "big bang" hack session at the August All Hands.
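For flavor, the kind of mechanical change involved is sketched below. This is a minimal illustration only; the actual compatibility strategy (e.g. python-future vs. six, or a clean break) was not decided here, and the variables are made up for the example:

```python
# Python 2 style commonly found in the stack:
#   print "processing %d" % n
#   for k, v in d.iteritems(): ...
#   if isinstance(s, basestring): ...
#
# Python 2/3-compatible replacements:
from __future__ import print_function, division

d = {"visit": 1234, "filter": "r"}   # illustrative data
s = "HSC-R"
n = len(d)

print("processing %d" % n)           # print is a function in Python 3
for k, v in d.items():               # dict.iteritems() is gone in Python 3
    print(k, v)
if isinstance(s, str):               # basestring does not exist in Python 3
    print("got a string:", s)
```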
There will be a meeting with the AstroPy developers at UW in ~1 month. Where we draw the line in terms of stack integration has a huge impact on the amount of effort required. Aim to have a rough idea of this before the meeting, but expect to refine it based on input from the AstroPy folks. An ideal outcome might be to produce AstroPy affiliated packages which provide C++ APIs that we can use in the stack.
It is likely that future architectures, as we might reasonably expect to run on in operations, will skew towards large numbers of cores per system, with relatively small memory and storage per core. This suggests we should move towards threading in our high-performance C++ code rather than relying on multiprocessing at a higher level. The likely, but not definitive, technology choice is OpenMP. We do not regard Science Pipelines as responsible for "blue skies" research in this area, but it is reasonable to expect that they will provide examples and requirements. We do not think this decision can be driven purely "bottom-up" but will ultimately require input from the Architecture team.
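One reason the threading has to live in the C++ layer rather than in Python is CPython's global interpreter lock (GIL). The snippet below is not stack code, just a minimal illustration of the constraint:

```python
import threading

def cpu_bound(n):
    """A CPU-bound loop; pure-Python threads serialize on the GIL here."""
    total = 0
    for i in range(n):
        total += i * i
    return total

# Running this in four threads gives essentially no speedup over running
# it serially, because only one thread holds the GIL at a time. C++ code
# called from Python can release the GIL and use (e.g.) OpenMP threads
# across all cores, which is why the threading belongs in the C++ layer.
threads = [threading.Thread(target=cpu_bound, args=(10**7,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
```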
We would all like to exercise the SUIT-to-Data-Access system in a basic way regularly, deployed nightly out of NCSA. The idea is to get this going with a very small repository. Then, when we receive the Pan-STARRS data, SUIT can start to use that data to develop new features for users to try, and to write a robust set of regression tests.
Jacek would like to get the small dataset available around July-August 2016. Before then, SUIT will deliver example queries so we can verify we're prepared to run the tests. SUIT would like to have the web portal to access Pan-STARRS data ready by the end of November, which means the data should be ready for access by the end of September at the latest.
Senior management should prioritize how important it is to deliver Pan-STARRS data through Qserv/SUIT to our users, and define a timeline. Two months ago the top two priorities were the end-to-end system and serving Pan-STARRS data.
Once the Pan-STARRS data arrives it should take Data Access about a month to get it ready. Note that we don't know what format it's coming in; we will need to think about how to load it. Once loaded, keep it available "forever". OK to limit access for users during scheduled DB stress tests.
Remote Butler - SUIT really wants a Java client to the remote Butler, for the Firefly service. We need to discuss it with the Architecture group, but perhaps we should consider *not* doing the Python Butler client for now and doing the Java one instead. A Java client is useful for the SUIT server side (performance reasons: it does not want to fork a Python process). Python access is still needed for SUIT user access.
Would be nice to have a simple prototype in Fall 2016 that demonstrates credential acquisition/passing.
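For reference, the local Python Butler calls (Gen2-era daf_persistence) that any remote client, Java or Python, would have to mirror; the repository path, dataset type, and data IDs here are illustrative:

```python
from lsst.daf.persistence import Butler

butler = Butler("/path/to/repo")                    # repository root; placeholder
calexp = butler.get("calexp", visit=1234, ccd=56)   # read a calibrated exposure
butler.put(calexp, "calexp", visit=1234, ccd=57)    # write (illustrative only)
```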
There's a modest cluster hanging off the back of lsst-dev which is available for developers to use. Use the ctrl_orca middleware to access it. Documentation is available on Confluence.
This cluster is not addressable by the ctrl_pool middleware recently ported from HSC. Adding this functionality to ctrl_pool should be a relatively modest task for Science Pipelines developers, although it probably falls strictly outside their remit per the WBS. Getting this working will be a requirement in the intermediate future.
Some discussion of the SuperTask concept and how it might relate to Science Pipelines. Nobody in the room is confident in describing either when it will be available or what impact it might have on existing tasks. We hope this will be clarified soon.
Hsin-Fang has volunteered to be a contact at NCSA for fielding short-term maintenance issues with the existing middleware (pex_config etc.). The Science Pipelines group will be proactive about putting larger requests to the Middleware Group well in advance of future cycles so that they can be included in the plan.
The group at NCSA which will be working on middleware is gradually ramping up in terms of staffing and experience.
Agreement that there will be effort available at NCSA during Summer 2016 to support an investigation into future stack parallelization strategies.
Simon is concerned that the authors/owners of the various boxes in Don's presentation from this morning should be clearly stated, and that the interfaces where alert data passes from Science Pipelines to NCSA, or vice versa, should be documented.
More definition is needed of the interfaces to the global catalog. This needs to be discussed with NCSA; it is not yet clear when this will be resolved.
It was agreed that the large scale visualization of HSC data as presented by Robert was a compelling use case for SUIT. There was some discussion about what it would take to achieve this technically. It was agreed that it would be technically feasible to generate PNG (or equivalent) images which would be used in such a visualization as part of the data release processing (rather than post-processing the data release by the SUIT group). The key is to pre-generate the large images in different resolutions.
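A minimal sketch of the pre-generation step, using simple numpy 2x2 block-averaging; the actual pipeline step, tiling scheme, stretch, and file layout are all still to be decided:

```python
import numpy as np
from PIL import Image

def build_pyramid(pixels, levels=4):
    """Yield successively half-resolution versions of a 2-D image array."""
    for level in range(levels):
        yield level, pixels
        h, w = pixels.shape
        h, w = h - h % 2, w - w % 2          # trim to even dimensions
        pixels = pixels[:h, :w].reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

coadd = np.random.rand(4096, 4096).astype(np.float32)  # stand-in for a coadd patch
for level, img in build_pyramid(coadd):
    # Naive linear stretch to 8 bits; a real service would use a better stretch.
    scaled = (255 * (img - img.min()) / (img.ptp() or 1)).astype(np.uint8)
    Image.fromarray(scaled).save("patch_level%d.png" % level)
```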
Following some work in 2015 to integrate Firefly with the afw.display system, this effort has been languishing. The SUIT group are keen to see Science Pipelines developers using, and providing feedback on, Firefly; Science Pipelines developers are keen to have access to better visualization and debugging tools. However, the barriers to entry for getting Firefly running on individual laptops are -- arguably! -- off-putting.
It was agreed that we will aim to establish a Firefly service on lsst-dev which will enable developers to visualize data stored on the filesystem. We hope this will help build a critical mass of Firefly users.
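From the developer side, usage would look something like the following sketch. It assumes the display_firefly backend is set up; the host, port, and channel name are placeholders for whatever the lsst-dev service ends up using:

```python
import lsst.afw.image as afwImage
import lsst.afw.display as afwDisplay

# Backend arguments are passed through to display_firefly; values here
# are placeholders, not the real lsst-dev service configuration.
display = afwDisplay.getDisplay(frame=1, backend="firefly",
                                host="lsst-dev.ncsa.illinois.edu",
                                port=8080, name="my-channel")

exposure = afwImage.ExposureF("calexp.fits")  # an image on the shared filesystem
display.mtv(exposure)                          # send the pixels to Firefly
display.scale("asinh", "zscale")
display.dot("o", 1000, 1000, size=20, ctype="red")  # overlay a marker
```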
A compelling use case for future development would be to enable visualization of afw.tables through an interface similar to
Data Access wants: remote access to the Butler (Python client), read and write; and an S3 plugin backend for the Butler (a hedged sketch follows below).
Nate Pease to visit them for ~a week.
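The Butler storage-plugin interface is not defined here, so the following is only a hypothetical sketch of what an S3 backend might wrap. The boto3 calls are real; the class and method names are made up:

```python
import boto3

class S3Storage(object):
    """Hypothetical Butler storage backend backed by an S3 bucket."""

    def __init__(self, bucket):
        self.bucket = bucket
        self.client = boto3.client("s3")

    def read(self, key):
        # Fetch a serialized dataset from S3.
        response = self.client.get_object(Bucket=self.bucket, Key=key)
        return response["Body"].read()

    def write(self, key, payload):
        # Store a serialized dataset in S3.
        self.client.put_object(Bucket=self.bucket, Key=key, Body=payload)
```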
Important to be able to select calibration data according to complex criteria, e.g. "give me the calibration data that would have been used were I reducing this data on that day". A variety of approaches were discussed to addressing this problem, including educating the Butler about all the "calibration roots" (corresponding to different versions of calibration data). Closely related to capturing provenance (see below): need to be able to reproduce results by capturing not only the version of the code but the specific calibration data used to generate them.
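No API exists for this yet. As a strawman only, the desired semantics might look something like the snippet below; neither of the highlighted arguments exists in the current Butler:

```python
from lsst.daf.persistence import Butler

# Hypothetical strawman: select calibration products by validity date, so
# a reduction can be reproduced exactly as it would have run on that day.
butler = Butler("/repo")
bias = butler.get("bias", dataId={"ccd": 56}, calibDate="2016-02-22")  # hypothetical

# Or, equivalently, point the Butler at the "calibration root" that was
# current on that date, one root per calibration version (also hypothetical).
butler = Butler(root="/repo", calibRoot="/calib/versions/2016-02-22")
```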
Need to go through the whole flow, in detail.
Table-valued functions are a major feature of the SciServer infrastructure. They are not currently available in MariaDB or Qserv (although less-than-optimal workarounds may be possible). Members of the Data Access team will be meeting with Monty Widenius next week and may discuss it with him.
Margret and Don had another session to attend. Jason from NCSA called in for the discussion. We touched on workspaces, resource management, and authorization/authentication. Xiuqin would like to start a regular discussion on workspaces, leading to a 2-3 day face-to-face design meeting of all parties.
Focus on adding Unicode test cases for strings that would come from outside.
Make sure that C++ interfaces handle Unicode properly.
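A minimal sketch of the kind of test case intended; the dict here is a stand-in for whatever wrapped C++ interface is under test:

```python
# -*- coding: utf-8 -*-
import unittest

class UnicodeRoundTripTestCase(unittest.TestCase):
    """Check that strings from the outside world survive a C++ round trip.

    The dict below stands in for the wrapped C++ object; a real test
    would call the actual setter/getter of the interface under test.
    """

    def test_non_ascii_survives(self):
        value = u"Mönchengladbach, ángstrom: Å"
        store = {}                      # stand-in for the C++ object
        store["OBSERVER"] = value       # e.g. metadata.set(...)
        self.assertEqual(store["OBSERVER"], value)  # e.g. metadata.get(...)

if __name__ == "__main__":
    unittest.main()
```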
Outside export requirements (could be more than FITS)
Internal format needs to be efficient for access and export
How self-describing should the format be, and what does that really mean?
How do we handle multi-part objects?
Ciardi will take the lead in defining the file format(s) which LSST will ultimately provide to end users.
The Architecture team will take the lead in defining the file format(s) which are used within the stack. Efficient conversion to end-user-facing format(s), where applicable, will be an important (but not the only) consideration.
No timeline was established for these definitions being made available.
lsst_apps. It would exclude the lsst-dev. (JDS: I believe this work is being coordinated by SUIT; I have nothing to do here until they ask for support. Xiuqin, do you agree?) (Epic was created to address this issue.)
ctrl_pool middleware to address Condor. (JDS: I believe this is obsolete due to the ongoing replanning exercise and discussions at the May DMLT.)