Peter is implementing a version of PA1 using forced sources. The forced source tract table only has PSF fluxes; it does not have extendedness. The metric is implemented using pandas groupby functionality. Astrometric metrics will need to use matched source tables. Suggest that we concentrate effort on getting the matcher working.
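A minimal sketch of a PA1-style repeatability calculation using pandas groupby: the per-object scatter of repeated PSF flux measurements, summarized by the median. The column names (objectId, psfFlux) are illustrative assumptions, not the actual forced source schema.

```python
import numpy as np
import pandas as pd

def pa1_from_forced_sources(df, flux_col="psfFlux", id_col="objectId"):
    """Sketch of a PA1-style metric: median per-object RMS of repeat
    magnitudes, in millimag. Column names are assumptions."""
    # Magnitudes from fluxes; any common zero point cancels in the
    # per-object scatter, so it is omitted here.
    mags = -2.5 * np.log10(df[flux_col])
    grouped = mags.groupby(df[id_col])
    # Keep only objects observed at least twice.
    counts = grouped.count()
    rms = grouped.std()[counts >= 2]
    return 1000.0 * rms.median()  # millimag
```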
Another option would be to grab the object table and use
What do we gain from using the ForcedSource table?
Eli's ticket has been merged. Isolated star association task. Returns an indexed catalog. Based on Erin Sheldon's smatch. Reads in the source tables one at a time.
Currently exists as a task in the DRP YAML.
Eli does not want to be the owner of general
Matcher is a wrapper around KDTree.
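A minimal sketch of a KDTree-wrapping matcher of the kind discussed above: sky coordinates are converted to unit vectors so that angular matching reduces to a 3D chord-length search. This is an illustration, not the merged task (which is based on smatch and has more machinery).

```python
import numpy as np
from scipy.spatial import cKDTree

def radec_to_xyz(ra_deg, dec_deg):
    """Convert RA/Dec (degrees) to unit vectors, so a 3D KDTree
    can be used for angular matching."""
    ra, dec = np.radians(ra_deg), np.radians(dec_deg)
    return np.column_stack([np.cos(dec) * np.cos(ra),
                            np.cos(dec) * np.sin(ra),
                            np.sin(dec)])

def match_catalogs(ra1, dec1, ra2, dec2, radius_arcsec=1.0):
    """Return index pairs (i1, i2) of nearest matches within
    radius_arcsec. A toy sketch of a KDTree-based matcher."""
    tree = cKDTree(radec_to_xyz(ra2, dec2))
    # Chord length corresponding to the angular matching radius.
    r = 2.0 * np.sin(np.radians(radius_arcsec / 3600.0) / 2.0)
    dist, idx = tree.query(radec_to_xyz(ra1, dec1), distance_upper_bound=r)
    matched = np.isfinite(dist)  # unmatched entries come back as inf
    return np.flatnonzero(matched), idx[matched]
```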
Three potential options:
Use new task directly
Make new task more configurable
Add capability in faro
Our plan is to make a JIRA ticket to create the base classes for an analysis task that would use the new data product.
wPerp parquet table conversion. Sophie agreed to review the PR.
wPerp values will go up by a factor of ~3 because iterative rounds of clipping are not being done. Concern that the metric specification is sensitive to the way that clipping is applied.
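As a toy illustration of why the clipping strategy matters (this is not the faro implementation): a single clipping pass can leave moderate outliers in place that iterative passes would remove, inflating the measured scatter.

```python
import numpy as np

def clipped_std(values, nsigma=3.0, iters=1):
    """Sigma-clipped scatter sketch. With iters=1 (single pass),
    moderate outliers shielded by extreme ones survive the cut and
    inflate the result; iterative clipping (iters > 1) removes them."""
    vals = np.asarray(values, dtype=float)
    for _ in range(iters):
        mean, std = vals.mean(), vals.std()
        keep = np.abs(vals - mean) < nsigma * std
        if keep.all():
            break  # converged: nothing left to clip
        vals = vals[keep]
    return vals.std()
```

With a tight core plus outliers at +/-8 and +/-100, the extreme points inflate the first-pass sigma enough that the +/-8 points survive a single pass but not an iterative one, roughly doubling the scatter estimate.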
Are we not applying FGCM before making object tables? There will be a version of the source table that includes FGCM corrections.
There is a paper by Zeljko. Lauren would have a sense of what is being routinely measured.
Question is whether a large value of wPerp indicates a problem with the data, e.g., stellar classification or photometric calibration. The different statistics have different meanings.
PR to persist in-memory objects has been approved.
AuxTel. Dedicated almost all of three nights to imaging campaigns run by the scheduler at night. Tiago had been setting up the files.
Southern pole field. Available 12 months of the year. None of this data is processed yet. 3 epochs, once a month.
E. High stellar density field. Will be up for the next 6 months. Much higher source count. 98% of the images went through with no tweaking. We should focus on dense fields. gri. 110 pointings; about 100 completed. Erik is planning to talk with Tiago soon to get a better understanding. What is the typical depth?
First field (3 epochs) overlaps with GOODS, GEMS. This field is sparse. Pushed the pipelines to their limits. Fail to get astrometric solutions much of the time, between 50-70%.
Multi-color imaging.
Will be purchasing 6.5 cm SDSS gri filters. It will be a new color system. Patrick will know the timeline.
Runs the pipeline through the transform source table step, producing instrumental fluxes.
We think this field was calibrated to PS1, but not sure. The computed magnitudes are not as expected, so we are trying to understand this better.
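One quick sanity check when computed magnitudes look off: if the table fluxes are calibrated in nanojansky (an assumption here), the AB magnitude follows directly from the definition m_AB = -2.5 log10(f / 3631 Jy).

```python
import numpy as np

def njy_to_abmag(flux_njy):
    """AB magnitude from a flux in nanojansky:
    m_AB = -2.5 log10(f_nJy) + 31.4, since 3631 Jy = 3.631e12 nJy.
    Assumes the table fluxes really are calibrated nJy."""
    return -2.5 * np.log10(flux_njy) + 31.4

# A 1 uJy (1000 nJy) source should come out at m_AB = 23.9.
```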
DP0.2 is currently running into an issue where matchedCatalogTask runs for >2 hours without emitting any log messages. The PanDA monitoring system assumes that means it has crashed or is in an infinite loop, so it terminates the job.
Had to switch to running the non-faro parts of step3.
We now have a reprieve from that limit (as of an hour ago), so we might be able to resume running it as normal?
Hypothesis is that the butler.get() at the start of runQuantum is taking all the time, but we haven't confirmed that.
Deferred dataset loading might also be a good idea in general, and would avoid a long block of time spent in a single butler.get().
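A generic sketch of the deferred-loading idea (a hypothetical helper, not the actual butler API): hold a loader callable instead of the data and only run it on first access, so I/O happens when each dataset is needed rather than in one long silent block up front, leaving room to emit log messages between loads.

```python
class DeferredHandle:
    """Toy deferred-loading handle (hypothetical, not butler code):
    defers the expensive load until .get() is first called, then
    caches the result for subsequent calls."""
    def __init__(self, loader):
        self._loader = loader  # zero-argument callable doing the I/O
        self._value = None
        self._loaded = False

    def get(self):
        if not self._loaded:
            self._value = self._loader()
            self._loaded = True
        return self._value
```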
Not clear that we know the right route yet, best if someone (Jeff?) can check in with Yusra at pair coding and see if there's an issue that can be addressed.
CTS opinion: the sooner we can jettison the src catalogs in favor of the parquet tables, the easier the performance issues will become.
Keith Bechtol to make a ticket to better understand the mapping of these camera and calibration product characterization efforts to verification documents and the focus of these efforts. Discuss with the SCLT.