DEPRECATED

This space is obsolete and unmaintained.

Present: FE, DN, YA, KSK, MJ, CS, JFB, DB, FF, MWV, HC, AF
Regrets: RHL
Notetaker: Yusra (thanks!)

1) Parameters for SkyMaps

Last week we talked about SkyMaps. Any thoughts on tract sizes and pixel scales?
JFB: For HSC we used RingsSkyMap (which is what PanSTARRS uses) and native pixel scale. No reason to undersample. 
YA: What about oversampling (like drizzle)?
JFB: Limited by PSFs. Maybe if we have suboptimal warping and measurement algorithms.
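A hedged sketch of what a skymap override along these lines might look like. The field names follow the HSC-era pipe_tasks config interface and the value 0.263 arcsec/pixel is DECam's approximate native scale; all of it should be checked against the stack version in use:

```python
# Hypothetical makeSkyMap config override (names assumed from the
# HSC-era pipe_tasks interface; verify against your stack version).
config.skyMap.name = "rings"                 # RingsSkyMap, as HSC/PanSTARRS use
config.skyMap["rings"].numRings = 120        # controls tract size
config.skyMap["rings"].pixelScale = 0.263    # arcsec/pixel: DECam native scale
config.skyMap["rings"].projection = "TAN"
```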

2) Seeing processCcd Failures

DN: Sometimes whole exposures failed due to not finding any astrometric matches.
DB: In CFHT we saw that most of the failures were due to astrometric matching. Setting the initial PSF config parameter improved things.
JFB: We do need to fix the initial PSF problem. We should detect it instead of hardcoding it. On the HSC side the only images we fail on are dense crowded fields. We should also do smarter things, like fitting a CCD with respect to the others around it. For your images that overlap the SDSS footprint (and therefore have a good reference catalog), calibration should be working.
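One way to detect the initial PSF rather than hardcode it would be a robust average of the adaptive second moments of bright stars. A minimal sketch, assuming roughly Gaussian profiles; the function name and interface are invented, not the calibrate task's API:

```python
import math
import statistics

def estimate_initial_fwhm(ixx, iyy):
    """Estimate seeing FWHM (pixels) from adaptive second moments of
    bright stars, instead of hardcoding an initial-PSF config value.
    Sketch only: assumes circular Gaussian profiles, sigma^2 ~ (Ixx+Iyy)/2."""
    sigma2 = statistics.median(0.5 * (xx + yy) for xx, yy in zip(ixx, iyy))
    # FWHM = 2 * sqrt(2 ln 2) * sigma for a Gaussian
    return 2.0 * math.sqrt(2.0 * math.log(2.0)) * math.sqrt(sigma2)

# Example with made-up moments around sigma = 2 pixels:
fwhm = estimate_initial_fwhm([4.0, 4.1, 3.9], [4.0, 3.9, 4.1])
```

The median makes the estimate insensitive to the occasional galaxy or blend in the bright-star sample.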
AF: I ran processCcdDecam on 12,000 data ids with the orchestration (200 visits, 60 chips). It took ~3 hours, but only 15% succeeded.
KSK: We saw I/O problems when running with hundreds of nodes on Gordon's Lustre filesystem. Gordon had some misconfigurations.
YA: This was only with 40 concurrent jobs, which should not have caused I/O failures.
DN: Is there a way to not rerun files that have been run?
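No skip-if-done mechanism is mentioned here; one workaround sketch is to filter the data-id list against existing outputs before submitting. The path pattern below is made up for illustration and is not the real butler repository layout:

```python
import os

def needs_processing(visit, ccd, out_root):
    """Return True if no output exists yet for this data id.
    Illustrative only: the path pattern is invented, not the
    actual butler layout."""
    out = os.path.join(out_root, f"calexp-v{visit}-c{ccd:02d}.fits")
    return not os.path.exists(out)

# Build a to-do list that skips already-processed ids (visit/ccd
# numbers and the rerun path are placeholders).
todo = [(v, c) for v in (197367, 197371) for c in range(1, 3)
        if needs_processing(v, c, "/path/to/rerun")]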

MWV: What are the goals here?

DN: Just to test the stack: figure out why processing fails, and feed that back to the development team.
MWV: Log the failures.
DN:  Where can you find the logs? It’d be nice if the command-line task could print nice logs. 
JFB: Metadata and configs do get saved to the output repository by the butler. 
YA: Orchestration records logs too. Look for the log locations in the output to your condor_scratch directory
DN: But if I run the command-line task with -j, the output from all processes gets merged into my stdout.
DN will do an RFC.
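One way the merged-stdout problem could be avoided is a separate log file per data id. A rough sketch with the standard logging module, not the actual pipe_base logging machinery; names are invented:

```python
import logging
import os

def worker_logger(log_dir, data_id):
    """Give each parallel worker its own log file, keyed by data id,
    so -j runs don't interleave everything on stdout.
    Sketch only; not the real command-line-task logging API."""
    os.makedirs(log_dir, exist_ok=True)
    log = logging.getLogger(f"processCcd.{data_id}")
    log.setLevel(logging.INFO)
    handler = logging.FileHandler(os.path.join(log_dir, f"{data_id}.log"))
    handler.setFormatter(
        logging.Formatter("%(asctime)s %(name)s %(levelname)s: %(message)s"))
    log.addHandler(handler)
    return log
```

A worker would then call `worker_logger(scratch_dir, "v0197367c01").info(...)` and its output lands in one findable file per data id.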

3) Workflow of the command-line tasks

DN: It would be good for science pipelines to flow from one step to the next, passing along not just the data but also the parameters.
JFB: We need both, plus the ability to easily go from tracts/patches to exposures. That needs a database; submitted as a requirement for the New Butler.
DN: It feels unnatural to think in tract and patches (for makeCoaddTempExp) given that in the previous step (processCcd) I had a list of visits and ccd numbers. 
JFB: We also need a coadd refactor. makeCoaddTempExp should not be separate from assembleCoadd; the CoaddTempExps are an implementation detail.
(Note taker’s aside: we would still need an option to cache the CoaddTempExps so they could be reused; they take ~90% of the processing time.)
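The caching the aside asks for amounts to memoizing the warp step on its data id. A toy sketch using functools.lru_cache; the real cache would be persisted files in the output repository, and all names and ids below are invented:

```python
import functools

@functools.lru_cache(maxsize=None)
def make_coadd_temp_exp(tract, patch, visit):
    """Stand-in for the expensive warp step (~90% of coadd time per the
    aside above). Memoizing on the data id lets assembleCoadd reuse
    warps instead of recomputing them. Toy implementation."""
    # ... real code would warp the calexp onto the patch frame ...
    return ("warp", tract, patch, visit)

# Second call with the same data id is served from the cache.
w1 = make_coadd_temp_exp(0, "1,1", 197367)
w2 = make_coadd_temp_exp(0, "1,1", 197367)
```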

4) Activities Status

DN: I have an lsstdb password and can use runOrca now.
AF: I’d like to understand where the processCcd failures are coming from.
YA:  I’d be curious to see what happens with calibrate turned off.
CS/MJ: Image differencing not critical next two weeks. 
HC: Been working on ISR and making the mapper use calibration datasets. Question: what to do about the ages of calibration files? You need a validity range for the calibration files that lets you map which calib files are good for which exposures.
Leave it up to the users? I need defaults. Bias/flat/fringe; any others? Crosstalk? New tickets.
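The validity-range lookup HC describes could reduce to: find the calibration frame whose window contains the exposure date. A minimal sketch with invented field names and filenames; a real mapper would read the ranges from a calib registry:

```python
from datetime import date

def pick_calib(calibs, exposure_date):
    """Choose the calibration frame whose validity window covers the
    exposure date. Sketch of the mapper lookup described above;
    the dict field names are made up."""
    for c in calibs:
        if c["valid_from"] <= exposure_date <= c["valid_to"]:
            return c["path"]
    raise LookupError(f"no valid calib for {exposure_date}")

# Nightly masters (filenames and dates invented); a one-day window
# matches the nightly cadence discussed below.
biases = [
    {"path": "bias-20130901.fits",
     "valid_from": date(2013, 9, 1), "valid_to": date(2013, 9, 1)},
    {"path": "bias-20130902.fits",
     "valid_from": date(2013, 9, 2), "valid_to": date(2013, 9, 2)},
]
```

The default window width per dataset type (bias vs. flat vs. fringe) is exactly the policy question raised above.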
DN: Frank says he makes new master biases and dome flats every night, which is why there were none in those calibration directories we copied over.
DB: Been thinking about astrometry. Things are going well with simastrom. The implementation now uses the LSST catalog methods; the original used a private way of accessing USNO-B.
Simon: The blocker for using it on DECam is that we don’t have a good way of passing around parameters like airmass. This exposes a general area we need to think about. Details? OK: it is in the primary HDU, and you need to hack around this by first reading the primary HDU and then the extension.
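The hack Simon describes, reading the primary HDU first and then the extension, is essentially a header merge. A plain-dict sketch; real code would operate on astropy/afw header objects, and the keyword values below are made up:

```python
def merged_header(primary_hdr, ext_hdr):
    """DECam keeps exposure-wide keywords (e.g. AIRMASS) in the primary
    HDU only, so merge the primary header under the extension header.
    Plain-dict sketch; the same pattern applies to real header objects."""
    hdr = dict(primary_hdr)
    hdr.update(ext_hdr)  # extension wins on conflicting keywords
    return hdr

# Illustrative values only.
primary = {"AIRMASS": 1.23, "EXPTIME": 90.0}
extension = {"CCDNUM": 25, "EXPTIME": 90.0}
hdr = merged_header(primary, extension)
```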
HC: Is nightly too short? 
DN: Nightly is fine. that is what CP does. 
Francisco: After Dominique’s comments I’ve been thinking about how to work in a true tangent plane. Still projecting to pixels, but projecting to the tangent plane in between.
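For reference, the intermediate step Francisco mentions is a standard gnomonic (tangent-plane) projection. A sketch of the textbook formulas, illustrative only and not the simastrom implementation:

```python
import math

def gnomonic(ra, dec, ra0, dec0):
    """Project sky coordinates (ra, dec) onto the tangent plane at
    (ra0, dec0); all angles in radians. Standard gnomonic formulas,
    shown only to illustrate the intermediate projection step."""
    cos_c = (math.sin(dec0) * math.sin(dec)
             + math.cos(dec0) * math.cos(dec) * math.cos(ra - ra0))
    xi = math.cos(dec) * math.sin(ra - ra0) / cos_c
    eta = (math.cos(dec0) * math.sin(dec)
           - math.sin(dec0) * math.cos(dec) * math.cos(ra - ra0)) / cos_c
    return xi, eta

# The tangent point itself maps to the origin of the plane.
xi, eta = gnomonic(0.1, 0.2, 0.1, 0.2)
```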
DN: Frank Valdes gave a talk about the NEO work yesterday and said he did not have the dipole problem with Lori Allen’s dataset. He said using the distortions would be useful. The TPV coefficients in the raw files are not good, but the CP refits the astrometry up to the quadratic terms in the TPV coefficients (which you’ll find in the instcals).

FF: This is why we work this way: the first year we didn’t have the CP output.

DN (in a later aside): Frank also gave false-positive stats. They go back to each field 5 times. With 3 detections, 98% are false positives; with 4 detections, only 2% are.
DN: Hsin-Fang, I can help you with the ISR and the calibration. 

DN: Also, Frank says that in the CP the crosstalk correction is done first: the whole focal plane is loaded in one process. Then it separates the chips and does ISR on each chip after that.

KSK: For now we can do the I/O inside the task. It only requires that we load 2 chips at a time.
JFB: And we know we need a better parallelized calibrate task. 
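The CP-style ordering described above (crosstalk across chips first, then per-chip ISR) can be sketched with toy data. Every name, pixel value, and coefficient below is invented:

```python
def correct_crosstalk(images, coeffs):
    """Subtract inter-chip crosstalk before any per-chip ISR, mirroring
    the CP ordering described above. Toy model: images maps chip name to
    a flat pixel list; coeffs maps (victim, source) -> coefficient."""
    corrected = {}
    for victim, pixels in images.items():
        out = list(pixels)
        for (v, source), c in coeffs.items():
            if v == victim:
                out = [p - c * s for p, s in zip(out, images[source])]
        corrected[victim] = out
    return corrected

def run_isr(image):
    """Placeholder for the per-chip ISR that runs afterward."""
    return image

# Two 2-pixel "chips"; S2 bleeds into S1 at the 1% level (made up).
chips = {"S1": [100.0, 100.0], "S2": [50.0, 50.0]}
clean = {name: run_isr(img) for name, img in
         correct_crosstalk(chips, {("S1", "S2"): 0.01}).items()}
```

The point of the ordering is that `correct_crosstalk` needs the neighboring chips in memory, while everything after it is embarrassingly parallel per chip.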

MJ: We have the moving object pipeline (MOPS) compiling. A student is working to set it up and run it on a test set from 4 years ago. Yes, the plan is to run on real data, but that will take longer; CI for the MOPS pipeline comes after that. The plan is to merge the patches, then Tim switches us from SLALIB to PAL, then we check that no results change and switch it over.

5) BootCamp: https://community.lsst.org/t/dm-boot-camp-announcement/249

For new hires, but anyone is welcome.