Data preparation for 1DC How-tos

This tutorial assumes that you have downloaded the data for the first CTA Data Challenge. If this is not the case, please first read how to get the 1DC data.

Start by importing the relevant Python modules.

In [1]:
import gammalib
import ctools
import cscripts

Now set the CTADATA and CALDB environment variables. Please adjust the paths below so that they point to the location of your 1DC data.

In [2]:
%env CTADATA=/project-data/cta/data/1dc
%env CALDB=/project-data/cta/data/1dc/caldb
env: CTADATA=/project-data/cta/data/1dc
env: CALDB=/project-data/cta/data/1dc/caldb
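
The %env magic only works inside a notebook or IPython session. If you prefer to run these steps as a plain Python script, a minimal alternative (a sketch using the same, to-be-adjusted, paths as above) is to set the variables through os.environ before calling any tool:

In [ ]:
import os

# Set the environment variables programmatically; adjust the path so that
# it points to your local copy of the 1DC data
os.environ['CTADATA'] = '/project-data/cta/data/1dc'
os.environ['CALDB']   = os.environ['CTADATA'] + '/caldb'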

Now prepare a dataset comprising the Galactic Centre observations performed during the Galactic Plane Scan. Start by selecting all observations pointed within 3 degrees of the Galactic Centre.

In [3]:
obsselect = cscripts.csobsselect()
obsselect['inobs']     = '$CTADATA/obs/obs_gps_baseline.xml'
obsselect['pntselect'] = 'CIRCLE'
obsselect['coordsys']  = 'GAL'
obsselect['glon']      = 0.0
obsselect['glat']      = 0.0
obsselect['rad']       = 3.0
obsselect['tmin']      = 'NONE'
obsselect['tmax']      = 'NONE'
obsselect['outobs']    = 'obs.xml'
obsselect.execute()
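
As an optional sanity check (not needed for the following steps), you can load the resulting observation definition file into a gammalib observation container and print a summary of the selected observations:

In [ ]:
# Load the selected observations and print a summary of the container
obs = gammalib.GObservations('obs.xml')
print(obs)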

Now select the events with energies between 1 TeV and 100 TeV from these observations.

In [4]:
select = ctools.ctselect()
select['inobs']   = 'obs.xml'
select['ra']      = 'NONE'
select['dec']     = 'NONE'
select['rad']     = 'NONE'
select['tmin']    = 'NONE'
select['tmax']    = 'NONE'
select['emin']    = 1.0
select['emax']    = 100.0
select['outobs']  = 'obs_selected.xml'
select.execute()
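
To verify that the energy selection was applied, you may for example load the selected observations and print the energy boundaries of the first event list (an optional check, assuming obs_selected.xml was written as above):

In [ ]:
# Print the energy boundaries of the event list of the first selected observation
obs = gammalib.GObservations('obs_selected.xml')
print(obs[0].events().ebounds())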

The next step is to stack the selected events into a counts cube.

In [5]:
binning = ctools.ctbin()
binning['inobs']    = 'obs_selected.xml'
binning['xref']     = 0.0
binning['yref']     = 0.0
binning['coordsys'] = 'GAL'
binning['proj']     = 'CAR'
binning['binsz']    = 0.02
binning['nxpix']    = 300
binning['nypix']    = 300
binning['ebinalg']  = 'LOG'
binning['emin']     = 1.0
binning['emax']     = 100.0
binning['enumbins'] = 20
binning['outobs']   = 'cntcube.fits'
binning.execute()
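
If you want to inspect the result, the counts cube can be loaded as a gammalib event cube and printed, which gives a summary of its content (an optional check):

In [ ]:
# Load the counts cube and print a summary
cntcube = gammalib.GCTAEventCube('cntcube.fits')
print(cntcube)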

Now compute the corresponding stacked exposure cube, point spread function cube and background cube.

In [6]:
expcube = ctools.ctexpcube()
expcube['inobs']   = 'obs_selected.xml'
expcube['incube']  = 'cntcube.fits'
expcube['outcube'] = 'expcube.fits'
expcube.execute()
In [7]:
psfcube = ctools.ctpsfcube()
psfcube['inobs']    = 'obs_selected.xml'
psfcube['incube']   = 'NONE'
psfcube['ebinalg']  = 'LOG'
psfcube['emin']     = 1.0
psfcube['emax']     = 100.0
psfcube['enumbins'] = 20
psfcube['nxpix']    = 10
psfcube['nypix']    = 10
psfcube['binsz']    = 1.0
psfcube['coordsys'] = 'GAL'
psfcube['proj']     = 'CAR'
psfcube['xref']     = 0.0
psfcube['yref']     = 0.0
psfcube['outcube']  = 'psfcube.fits'
psfcube.execute()
In [8]:
bkgcube = ctools.ctbkgcube()
bkgcube['inobs']    = 'obs_selected.xml'
bkgcube['inmodel']  = '$CTOOLS/share/models/bkg_irf.xml'
bkgcube['incube']   = 'cntcube.fits'
bkgcube['outcube']  = 'bkgcube.fits'
bkgcube['outmodel'] = 'bkgcube.xml'
bkgcube.execute()
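
As a final optional check, the three response cubes can be loaded with the corresponding gammalib classes (GCTACubeExposure, GCTACubePsf and GCTACubeBackground) and printed, which summarises each cube:

In [ ]:
# Load and print the stacked response cubes
expcube_check = gammalib.GCTACubeExposure('expcube.fits')
psfcube_check = gammalib.GCTACubePsf('psfcube.fits')
bkgcube_check = gammalib.GCTACubeBackground('bkgcube.fits')
print(expcube_check)
print(psfcube_check)
print(bkgcube_check)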

Now you are done. All data products needed for the following tutorials have been prepared.