Run MAF Analysis Scripts
## Running the standard analysis
# MAF is now running on ops2 (CentOS7) and minion (El Capitan)

# On Minion
# in bash
source /lsst_stack/loadLSST.bash
# in tcsh
source /lsst_stack/loadLSST.csh
# set up the tagged version of MAF (9/9/2016)
setup sims_maf sims_2.2.4     # use this version for setting up showMaf.py
setup sims_maf -t sims_2_3_2     # use this version for sciencePerformance.py and schedulerValidation.py 
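#
# Optional sanity check before launching a long run -- a minimal sketch assuming the standard eups tooling
# that ships with the LSST stack; it just shows which sims_maf version is currently set up and confirms
# the driver scripts are on your PATH:
eups list sims_maf
which schedulerValidation.py sciencePerformance.py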
# 
# on minion, a standard workflow is to 
cd /lsst/opsim_3.3.8   # where you have run modifySchema.sh and the sqlite file is in ./output
# run the analysis driver you want and direct its output (with --outDir) to a subdirectory named in a
# standard way for that script
# for example:  schedulerValidation.py --outDir maf/minion_1016/sched_2.2.6 output/minion_1016_sqlite.db
# I'm not sure whether schedulerValidation.py can be run in parallel, but you CANNOT run sciencePerformance.py in parallel
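#
# As an end-to-end sketch of that workflow (the run name, the maf/<run>/... subdirectory, and the log/
# directory below are just the naming convention used in the examples on this page, not anything the
# scripts themselves require):
cd /lsst/opsim_3.3.8
mkdir -p maf/minion_1016/sched_2.2.6 log
schedulerValidation.py --outDir maf/minion_1016/sched_2.2.6 output/minion_1016_sqlite.db >& log/sched_1016.log &
tail -f log/sched_1016.log     # watch progress; Ctrl-C stops tail, not the background job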
# 
# you don't need to be in any particular directory if you fully specify the input and output paths (a 10-year run takes about 1.5 hours)
# usage: schedulerValidation.py [-h] [--outDir OUTDIR] [--benchmark BENCHMARK]
#                              [--plotOnly]
#                              dbFile
# Python script to run MAF with the scheduler validation metrics 
# positional arguments:
#   dbFile                full file path to the opsim sqlite file
#
# optional arguments:
#   -h, --help            show this help message and exit
#   --outDir OUTDIR       Output directory for MAF outputs.
#   --benchmark BENCHMARK Can be 'design' or 'requested'
#   --plotOnly            Reload the metric values and re-plot them.
#
# Recommended procedure (assuming "design" is used):
schedulerValidation.py  --outDir maf/minion_1016/sched_2.2.4 --benchmark design output/minion_1016_sqlite.db >& log/sched_1016.log &
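#
# If you only need to regenerate the plots (the metric values are saved to --outDir), --plotOnly reloads
# them from disk and re-plots without recomputing -- a sketch reusing the directories from the example above:
schedulerValidation.py --outDir maf/minion_1016/sched_2.2.4 --benchmark design --plotOnly output/minion_1016_sqlite.db >& log/sched_1016_replot.log &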
 
#
# usage: sciencePerformance.py [-h] [--outDir OUTDIR] [--nside NSIDE]
#                              [--lonCol LONCOL] [--latCol LATCOL]
#                              [--benchmark BENCHMARK] [--plotOnly]
#                              dbFile
# Python script to run MAF with the science performance metrics
# positional arguments:
#   dbFile                full file path to the opsim sqlite file
#
# optional arguments:
#  -h, --help            show this help message and exit
#  --outDir OUTDIR       Output directory for MAF outputs. Default "Out"
#  --nside NSIDE         Resolution to run Healpix grid at (must be 2^x). 
#                        Default 128.
#                        nside=64 is 55 arcmin resolution; nside=128 is 27 arcmin.
#  --lonCol LONCOL       Column to use for RA values (can be a stacker dither
#                        column). Default=fieldRA.
#  --latCol LATCOL       Column to use for Dec values (can be a stacker dither
#                        column). Default=fieldDec.
#  --benchmark BENCHMARK Can be 'design' or 'requested'
#  --plotOnly            Reload the metric values from disk and re-plot them.
#
# Recommended procedure (--benchmark design is the default and could be omitted; --nside 64 is spelled out
# here even though the usage above lists 128 as the default):
sciencePerformance.py ./enigma_1260/data/enigma_1260_sqlite.db --outDir ./enigma_1260/performance \
                      --benchmark design --nside 64 >& log/science_1016.log &
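#
# The --lonCol/--latCol options let the same metrics run on a stacker dither column instead of the nominal
# field centers; "ditheredRA"/"ditheredDec" below are the usual opsim dither column names, but check which
# dither stackers/columns your sims_maf version provides before relying on them:
sciencePerformance.py ./enigma_1260/data/enigma_1260_sqlite.db --outDir ./enigma_1260/performance_dithered \
                      --lonCol ditheredRA --latCol ditheredDec >& log/science_enigma_1260_dithered.log &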
 
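#
# To browse the results afterwards, switch back to the sims_maf version tagged for showMaf.py (see the
# setup notes at the top of this page); its command-line flags differ between versions, so check the help
# before pointing it at the MAF output directory:
setup sims_maf sims_2.2.4
showMaf.py -h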