Notes from the previous meeting:
Continued discussion of the office space at ROB for relocating the DAX team from B50. This is still in progress.
User-generated data products (team)
(Possible) bug in Qserv (team)
How do we investigate this problem?
Status of …
Fabrice Jammes: there is a proposal from FrDF to implement the fast Parquet-to-CSV translator in C++. Possible options include a separate application or integration with an existing partitioning tool: https://lsstc.slack.com/archives/C996604NR/p1666811747284709
Fritz Mueller is in favor of the latter option. We should also (eventually) support VOTables.
Igor Gaponenko: we need to use Parquet "row groups" to allow parallel translation of the files. The columns can be compressed efficiently when they contain repeating data patterns (all zeroes, for example).
Kian-Tat Lim: this is not done yet by the Pipeline, and no JIRA ticket exists yet for the improvement, though the developers have recognized the need for row groups.
Colin Slater: the column-oriented layout provided by the Parquet data format is essential for data analysis based on these files. Qserv is not the only (or even the main) user of these files.
Igor Gaponenko mentioned that the partitioner's source files are now part of the Qserv source tree, and its binaries are built as part of the Qserv binary container. There are concerns about bringing extra dependencies into the Qserv container.
Fritz Mueller thinks we could introduce refined binary containers to separate Qserv itself from the partitioning tools. This may lead to better control over the dependencies.
Fritz Mueller on the practical steps in this direction:
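One way to realize the container split Fritz Mueller suggests is a multi-stage build that produces separate runtime images from a shared build stage. Everything below (base image, paths, binary names) is a hypothetical sketch, not the actual Qserv build.

```dockerfile
# Hypothetical multi-stage layout: one shared build stage, two runtime images,
# so the partitioner's extra dependencies stay out of the Qserv image.
FROM ubuntu:22.04 AS build
COPY . /src
RUN cmake -S /src -B /build && cmake --build /build

# Qserv service image: only the server binaries and their dependencies.
FROM ubuntu:22.04 AS qserv
COPY --from=build /build/bin/qserv-* /usr/local/bin/

# Partitioning-tools image, built and versioned alongside the Qserv image.
FROM ubuntu:22.04 AS partitioner
COPY --from=build /build/bin/sph-* /usr/local/bin/
```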
Status of the …
Slow progress so far. An implementation of the "live ObsCore manager for Butler" (PostgreSQL) has been finished. It works, but still requires more testing. ObsCore requires a few PostgreSQL extensions; one set has been installed at USDF, and more will be needed.
More details on the status of the project can be found on the "Live ObsTAP service deployment" page.