||Dataset Description||Origin||Destination (path to repository on GPFS)||Start Date||Aimed for Latency||Responsible Individual||Comments||
|Camera test stand data|SLAC|/project/production/tmpdataloc/BOT (automated rsync of BOT raw files)|03/18/2019 (BOT raw file transfers to a temporary NCSA location began; no Gen2 repo yet)|Automated raw file transfers and Butler ingestion < 1 hr| |Aiming for end-September for the transfer and ingestion (available via the Gen2 Butler) of the bulk of the existing data. The automated transfer service will commence following successful deployment for the AuxTel spectrograph data; it is not yet operating as a reliable service. ETU1/2 test images were taken at SLAC. The plan is to bulk transfer and ingest data from three repositories: the SLAC test stand, the BNL test stand, and vendor data. The estimated uncompressed total volume is 60-170 TB; additional data from other sources may be transferred if deemed useful, for an estimated total of ~200 TB. The camera team at SLAC is reviewing what can be deleted before transfer to NCSA. Images produced by ongoing test campaigns at SLAC will eventually be transferred to NCSA using the same service as for the AuxTel spectrograph, once it has been demonstrated to function reliably for AuxTel data.|
|AuxTel spectrograph test data|Tucson|Temporary NCSA location (automated rsync enabled 03/20/2019)|03/20/2019|~15 mins from arrival at NCSA (depending upon load)| |Currently achieving only ~1 M/sec transfer rates, so a single AuxTel file takes approximately 73 seconds to transfer. Single-CCD, ~40 MB spectrograph images are taken during testing in Tucson and during commissioning. The aimed-for (upper-estimate) data rate is ~1100 images/day (50 biases, 50 darks, 50 flats, and 2 images per minute for 8 hours), i.e. ~1-2 TB/month; the more likely rate is 100-150 images/day, ~200 GB/month. The rate is expected to increase as commissioning proceeds. The process for saving files from the test platform differs from the process that will exist during operations: on the test platform, someone chooses which files need to be saved in the permanent record of the survey and runs a program to copy them to NCSA, where a process puts them in the correct location and ingests them into the Data Backbone.|
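Both datasets are moved by automated rsync. A minimal sketch of how such a transfer command might be assembled, with hypothetical source and destination paths (the actual service, endpoints, and rsync options are not specified in this page):

```python
def build_rsync_cmd(src, dest, dry_run=False):
    """Build an rsync command that mirrors raw files into a destination
    directory, preserving attributes and resuming partial transfers.
    All paths here are hypothetical placeholders."""
    cmd = ["rsync", "--archive", "--partial", "--compress"]
    if dry_run:
        cmd.append("--dry-run")
    cmd += [src, dest]
    return cmd

# Example: mirror BOT raw files to the temporary NCSA location
# ("ncsa-host" is a placeholder, not the real transfer endpoint).
cmd = build_rsync_cmd("/data/bot/raw/",
                      "ncsa-host:/project/production/tmpdataloc/BOT/")
print(" ".join(cmd))
# A real service would pass cmd to subprocess.run(cmd, check=True) on a schedule.
```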
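The quoted AuxTel volume estimates follow from simple arithmetic on the ~40 MB image size; a quick check, assuming 30-day months and decimal units:

```python
MB = 10**6  # decimal megabytes

image_size = 40 * MB  # single-CCD spectrograph image, ~40 MB

# Upper estimate: 50 biases + 50 darks + 50 flats + 2 images/min for 8 hrs
upper_per_day = 50 + 50 + 50 + 2 * 60 * 8       # 1110, i.e. "~1100 images/day"
upper_tb_month = upper_per_day * image_size * 30 / 10**12
# ~1.33 TB/month, within the quoted ~1-2 TB/month

# More likely rate: 100-150 images/day
likely_gb_month = 150 * image_size * 30 / 10**9
# 180 GB/month, close to the quoted ~200 GB/month

print(upper_per_day, round(upper_tb_month, 2), likely_gb_month)
```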
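The test-platform save process (an operator selects files, then a program copies them onward for ingestion) could look roughly like the following sketch. The function name, paths, and the plain local copy are illustrative stand-ins; the real transfer to NCSA and Data Backbone ingestion mechanics are not described in this page:

```python
import shutil
from pathlib import Path

def save_selected(files, outbox):
    """Copy operator-selected files into an 'outbox' directory, from which
    a transfer step would move them to NCSA for placement and ingestion
    into the Data Backbone. 'outbox' is a hypothetical staging location."""
    outbox = Path(outbox)
    outbox.mkdir(parents=True, exist_ok=True)
    copied = []
    for f in files:
        dest = outbox / Path(f).name
        shutil.copy2(f, dest)  # copy2 preserves timestamps for provenance
        copied.append(dest)
    return copied
```

An operator-facing wrapper would collect the chosen filenames (e.g. from a selection list) and call `save_selected` once per batch.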