Camera test stand data

  • Origin: SLAC
  • Destination (path to repository on GPFS): /datasets/...?
  • Start Date: Aiming for end-September for the transfer and ingestion (available via a G2 Butler) of the bulk of the existing data. The automated transfer service will commence following successful deployment for the AuxTel spectrograph data; it is not yet operating as a reliable service.
  • Aimed-for Latency: ~24 hours
  • Comments: ETU1/2 test images taken at SLAC. The plan is to bulk transfer and ingest data from three repositories: the SLAC test stand, the BNL test stand, and vendor data. Estimated uncompressed total volume is 60-170 TB; additional data from other sources may be transferred if deemed useful, for an estimated total of ~200 TB. The camera team at SLAC is reviewing what can be deleted before transfer to NCSA. Images produced by ongoing test campaigns at SLAC will eventually be transferred to NCSA using the same service as for the AuxTel spectrograph, once that service is demonstrated to function reliably for AuxTel data.

AuxTel spectrograph test data

  • Origin: Tucson
  • Destination (path to repository on GPFS): /datasets/...?dbb/raw
  • Start Date: Email was sent on 10/17/2018 to Patrick for the first non-NCSA user test. Aiming for end-September; not yet operating as a reliable service.
  • Aimed-for Latency: ~15 mins from arrival at NCSA (depending upon load)
  • Comments: Currently only achieving ~1 MB/s transfer rates, so a single AuxTel file takes approximately 73 seconds to transfer. The data are single-CCD, ~40 MB spectrograph images taken during testing in Tucson and during commissioning. The aimed-for (upper-estimate) data rate is ~1100 images/day (50 biases, 50 darks, 50 flats, plus 2 images per minute for 8 hours), or ~1-2 TB/month. The more likely rate is expected to be 100-150 images/day, ~200 GB/month, increasing as commissioning proceeds. The process for saving files from the test platform differs from the process that will exist during operations: on the test platform, someone chooses which files need to be saved in the permanent record of the survey and runs a program to copy them to NCSA, where a process puts them in the correct location and ingests them into the Data Backbone.
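The upper-estimate data rate quoted above can be checked with a quick back-of-the-envelope calculation. This is only a sketch; the ~40 MB image size comes from the table, and a 30-day month is assumed:

```python
# Back-of-the-envelope check of the AuxTel upper-estimate data rate,
# using the figures from the table: 50 biases + 50 darks + 50 flats,
# plus 2 science images per minute for 8 hours, at ~40 MB per image.
calib_images = 50 + 50 + 50          # biases, darks, flats
science_images = 2 * 60 * 8          # 2 images/min for 8 hours
images_per_day = calib_images + science_images
print(images_per_day)                # 1110, i.e. the quoted ~1100/day

mb_per_image = 40                    # single-CCD spectrograph image
gb_per_day = images_per_day * mb_per_image / 1000
tb_per_month = gb_per_day * 30 / 1000   # assumes a 30-day month
print(round(gb_per_day, 1), round(tb_per_month, 2))  # 44.4 GB/day, 1.33 TB/month
```

The result (~1.3 TB/month) sits within the ~1-2 TB/month range given in the table; the more-likely 100-150 images/day rate scales the same arithmetic down to the quoted ~200 GB/month.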