Observational Astronomy Primer Exercises: 3.) Reducing SPIRE Data, Aperture Photometry

Tasks/exercises for SPIRE data in HIPE


As an example, work through one supernova remnant from the Hi-Gal survey, G332.4+0.1.  You can find more information about this object at, e.g., SIMBAD.

  • Try to download the Hi-Gal data that has already been calibrated and mapped.  Check the data quality to determine if you need to re-reduce or make the map again (for example, if there is some residual striping from the cross-scans).
  • If you decide you need to re-reduce it, you can follow the links I’ve listed below.
  • You will need the OBSID – you can look for this with the HSA Tool at  http://www.cosmos.esa.int/web/herschel/science-archive
    –> Input the object name or coordinates and it will return a list of observations and IDs
    There will be many observations with PACS/SPIRE/HIFI. Choose the ones you want to see.  The SPIRE large-map observation for G332.4+0.1 was OBSID 1342192055
  • MAKE SURE you are using a recent version of HIPE if you plan to reprocess the data.  You will need LOTS of RAM to reduce Herschel data – aim for a machine with at least 32GB.

Data reduction pipeline in HIPE

==> See tutorials on NASA/IPAC’s site:
https://nhscsci.ipac.caltech.edu/sc/index.php/Spire/PhotDataAnalysis
And the DRG for reprocessing http://herschel.esac.esa.int/hcss-doc-15.0/load/spire_drg/html/photometer_launchpad.html#d0e571
The tutorial for the SPIRE large map pipeline reprocessing:
http://herschel.esac.esa.int/hcss-doc-15.0/load/spire_drg/html/spire-largemap.html#spire-largemap-reprocess-data-user-guide
That one is quite good, much better than what was available when I started learning how to reduce SPIRE data, with many example images of things to look out for in your final maps.
They have also made available a large number of web video tutorials over the past few years:
https://nhscsci.ipac.caltech.edu/sc/index.php/Spire/VideoTutorials
https://nhscsci.ipac.caltech.edu/sc/index.php/Spire/Webinars

SPIRE photometry ‘recipe’


http://herschel.esac.esa.int/hcss-doc-15.0/load/dag/html/Dag.ImageAnalysis.HowTo.AperturePhotometry.html
http://herschel.esac.esa.int/hcss-doc-15.0/load/spire_drg/html/ch06s09.html  –> section 6.9.1.6
Also see the HIPE Data Analysis Guide for a description of basically any analysis task you would care to do in HIPE:
http://herschel.esac.esa.int/hcss-doc-15.0/

Overall summary of commands from the SPIRE ‘recipe’, downloading level-2 data from the HSA:

obsid = 1342192055  # Specify the observation ID from the HSA  (Kes 32)
alpha = 2   # For a source with spectrum S(ν) proportional to ν^α (the SPIRE default pipeline assumes α = -1)
array = "PSW"  # Run for an individual SPIRE band: "PSW", "PMW", "PLW"
obs = getObservation(obsid, useHsa=True, instrument='SPIRE') # Load the observation (Kes 32 / G332.4+0.1) from the HSA
# obs = getObservation(obsid, poolName='mypool', instrument='SPIRE') # Alternative, for observation from your own local pool
mapExtd = obs.level2.refs["extd"+array].product  #Extract  Extended (MJy/sr) calibrated maps from the Observation Context

cal = spireCal() #Load the calibration tree
#  --> if that doesn't work: cal = spireCal(calTree="spire_cal", saveTree=1)
beamCorrTable  = cal.phot.refs["ColorCorrBeam"].product
kCorrExtdTable = cal.phot.colorCorrKList.refs[0].product
beamArea  = beamCorrTable.meta["beamPipeline"+array.title()+"Arc"].double
kCorrExtd = kCorrExtdTable.getAlphaCorrection(alpha, array)

mapExtended = convertImageUnit(image=mapExtd, newUnit='Jy/pixel') # convert maps from MJy/sr to Jy/pix
mapExtendedCorrected = imageMultiply(image1=mapExtended, scalar=kCorrExtd) #Color Correction

ra  = '244.252833'   # Target RA in decimal degrees, as a string:  16h 17m 00.68s
dec = '-50.799300'   # Target Dec in decimal degrees, as a string: -50d 47' 57.48''
photrad = 200.0      # photometry source radius, in arcsec  (I'm just making these up - you will have to choose an appropriate region)
phot_ann_in = 300.0  # photometry annulus inner radius, in arcsec
phot_ann_out = 400.0 # photometry annulus outer radius, in arcsec (must be larger than the inner radius)
# Carry out circular/annulus Aperture Photometry
annularPSW = annularSkyAperturePhotometry(image= mapExtendedCorrected, \
    centerRA=ra, centerDec=dec, fractional=1, \
    radiusArcsec=photrad, innerArcsec=phot_ann_in, outerArcsec=phot_ann_out)
# annularSkyAperturePhotometry() #Alternatively, define the photometry regions manually

flux = annularPSW.getTargetTotal()  # Final target flux density in Jy (summing a Jy/pixel map over pixels gives Jy)
print 'PSW flux = %5.3f Jy'%(flux)


Introduction to Photometry (Emphasis on IR)

Photometry

(Note: I’m primarily concerned with relatively ‘intermediate distance’ astronomical sources, such as low-redshift galaxies, supernova remnants, etc., instead of high-redshift galaxies and other point sources.  This discussion is therefore biased in that direction.)

First, some resources.
PhotUtils is a python module for photometry that you may find useful: http://photutils.readthedocs.io/en/latest/photutils/aperture.html
I recently came across the APT GUI tool, and while I haven’t used it, it seems to be quite useful: http://www.aperturephotometry.org/
Here is some Herschel documentation for ap.photom. in HIPE: http://herschel.esac.esa.int/hcss-doc-15.0/load/dag/html/Dag.ImageAnalysis.HowTo.AperturePhotometry.html
Of course there are many papers that detail how the authors did their photometry and uncertainty estimation.  A few recommendations are the H-ATLAS data release paper (Valiante+2016) which I was involved with, and I liked Remy-Ruyer+13 for their careful treatment of the PACS PSF contribution on the background regions, etc.

Basic points to consider


  • Create an appropriate region – may need to check image with extreme scaling in DS9 to make sure you’re getting all the target features you want / avoiding those you don’t want
  • Convert to desired physical units
  • Estimate background – from nearby aperture (preferably not far away due to beam power pattern and inherent sky variability)
  • Apply correction factors: Aperture correction, Color correction, K4 correction (IR)
  • Uncertainty estimation (you can do a ‘formal’ or theoretical estimation, but an empirical approach will be more realistic)

A general ‘formula’ for photometry will look something like this:

Raw = sum(Fluxpix,i) over all aperture pixels
BGsub = Raw – MedianBGreg · Nap  (the median background per pixel, scaled to the number of aperture pixels)
Corrected = BGsub·correction1·correction2…
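The steps of this ‘formula’ can be sketched in a few lines of plain Python (a minimal illustration, not HIPE code; the helper name `photometry` and the pixel values in the test are invented for this example):

```python
from statistics import median

def photometry(aperture_pix, bg_pix, corrections=(1.0,)):
    """Raw pixel sum, per-pixel median background subtraction, then corrections."""
    raw = sum(aperture_pix)                       # Raw = sum over aperture pixels
    bg_sub = raw - median(bg_pix) * len(aperture_pix)  # subtract BG scaled to aperture area
    corrected = bg_sub
    for c in corrections:                         # aperture, colour, K4, ... corrections
        corrected *= c
    return corrected
```

In practice `aperture_pix` and `bg_pix` would come from your region masks, and the correction factors from the instrument tables discussed below.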

The goal of photometry is to determine a certain target’s integrated emission in some wavelength band. Usually for the purpose of calculating flux ratios or constructing an SED.  To accurately and precisely determine the true source flux, you need a properly calibrated image and usually some knowledge of the instrument that obtained it, for various correction factors.

Region creation and flux extraction


The most obvious step is to sum up the individual pixel intensities in your target region of choice.  However, there are many things to consider in this step.

  1. Be sure to decide on an appropriate aperture (or ‘region’ in DS9 parlance) for your science goal.  Of course, if you are trying to target a small feature in a crowded map, you might make a little circle or ellipse close around that target to make sure you aren’t including any flux from nearby features.  But take care that your region isn’t too small – in particular it should always be larger than the limiting beam size.  Furthermore, if you are interested in a target with a complicated brightness profile – perhaps an entire galaxy with an exponential brightness decrease from the center – be sure to make your aperture large enough to include all the necessary emission.  One good way to ensure you are doing this (for your image, at least) is to play with the data scaling in DS9 to see what the very faint extended emission is doing, etc.
    –> I’ll try to bring up an image to demonstrate some good and problematic choices of apertures for various features of interest
  2. You should make sure you understand the units of the image.  If they are something like Jy/beam, you will need to know what the beam area is to convert to flux density, and if you want to convert to something like luminosity, you will also need to know how to properly convert from the “per Hz” – if your image is ν·Fν you can just use the band’s characteristic frequency, but if it’s from a collapsed data cube you will need to know the channel width, and for some instruments you may need to know the overall imager bandwidth, etc…
  3. If you intend to compare the integrated fluxes between images at different spatial resolutions, you will need to convolve the sharper images down to limiting beam size (sometimes called “smoothing” the images).  This ensures that your flux integrator is viewing the images in the same way – that no flux is being spread out by a PSF in a different way.  Basically all astronomy data reduction/analysis packages have tools for this: CASA has imsmooth(), HIPE has imageConvolution(), python has several excellent tools such as  the astropy.convolution package.
  4. If your region is relatively small, or alternatively if your pixels are relatively large, you may want to employ sub-pixel precision in your sum.  It just depends on the level of precision you need, but if changing your desired region by a small increment changes the resulting flux by a few percent or more, you are probably in the regime of caring about this.  One simple way to get good payoff for relatively little work is to divide each pixel into something like 4, 9, 16, or 25 etc. subpixels for the flux integration. (Make sure to properly conserve flux if you do this!)
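Point 4 (sub-pixel precision) can be sketched as a simple flux-conserving pixel subdivision: each pixel becomes n×n subpixels, each carrying 1/n² of the original value, so the image total is unchanged. This is plain illustrative Python with invented data; a real regridding/resampling tool will handle edge cases more carefully.

```python
def subdivide(image, n):
    """image: list of rows of pixel values; returns an n-times finer grid,
    conserving total flux (each subpixel gets value / n**2)."""
    fine = []
    for row in image:
        fine_row = []
        for value in row:
            fine_row.extend([value / (n * n)] * n)   # n subpixels across, each 1/n^2
        for _ in range(n):                           # repeat the fine row n times down
            fine.append(list(fine_row))
    return fine
```

A fractional aperture mask can then be applied on the finer grid before summing.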

Background estimation


  • The ‘standard practice’ is to determine the background level from an annular region just outside of target region, but this isn’t always possible.  If, for example, you have bright foreground/background stars in the annulus just around your target, you may wish to blank them out or use a slightly further-out annulus.  Or if there are other galaxies nearby in the field, or if the PSF throws a noticeable portion of the source flux into the nearby background region (more on that below), you may have to use non-annulus regions even further out.  Be sure not to use regions of sky that are obviously biased, such as a large negative or positive bowl – remember, you want to estimate the typical background.  Each situation is a little bit different, and you will need to tailor your method to the data available to obtain the most unbiased estimate of the background.
  • Once you have determined the best region to use for your background pixels, you will need to decide on an appropriate way to characterize the ‘typical’ background level.  A simple average may be acceptable, or the intensity distribution may make the median more appropriate.  Perhaps you may find that sigma-clipping will give a more robust value.  Looking at the histogram of background pixel intensities can be useful for determining the best course of action.  Ultimately, as long as you have good justification for your choice, you should be fine.
  • Does the PSF make your BG region contain a contribution from the source? For example, in PACS images, the beam has a measurable contribution out to ~1000 arcsec (though of course it’s quite small that far out).  Typical small PACS maps are a few arcminutes across, so in principle every pixel in those maps will have a small contribution from the target source (and indeed every other pixel for that matter).  The effect may be small for your science goals, but it may be something that you decide you need to correct for.  See, e.g., Remy-Ruyer+2013 for one example of how to deal with this.
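The sigma-clipping option mentioned above can be sketched as an iterative clip around the median (plain Python, illustrative only; for real work `astropy.stats.sigma_clip` is the standard tool):

```python
from statistics import median, pstdev

def clipped_background(pixels, nsigma=3.0, max_iter=5):
    """Iteratively discard pixels more than nsigma standard deviations from
    the median, then return the median of what survives."""
    pixels = list(pixels)
    for _ in range(max_iter):
        m, s = median(pixels), pstdev(pixels)
        kept = [p for p in pixels if abs(p - m) <= nsigma * s]
        if len(kept) == len(pixels) or not kept:
            break
        pixels = kept
    return median(pixels)
```

With a clean sky plus one bright contaminating star, the clipped median recovers the sky level where a plain mean would not.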

Correction factors – discussed in the previous post


  • Aperture correction – This is a correction for the amount of flux that the beam has scattered outside of your chosen aperture.  This will be a significant amount for small regions around the size of the beam (say, tens of percent), but will be small or negligible for very large apertures.  One of the best ways to calculate this is to directly sum the fraction of the total power enclosed in your chosen aperture – by placing your region on a FITS file of the beam profile, for example.  But if you know the instrument’s beam profile and you are using a standard aperture like a circle, you can construct an ‘Enclosed Energy Function’ (EEF) in advance by calculating the enclosed energy percentage for a wide range of radii.  Then your estimated aperture correction would be 1/EEF at the region’s radius.  HIPE has a function for calculating this, or you can also download the EEF tables yourself to create your own interpolation routines.
  • Color correction (IR).  This is a correction to account for the fact that the data reduction assumed a flat spectrum response, while in reality your source spectrum is something like a blackbody, with a different power-law spectral index.  Again, HIPE has a tool to calculate this for you, or you can retrieve the tables manually and calculate it yourself.  See the Spire Handbook:
    http://herschel.esac.esa.int/Docs/SPIRE/html/spire_om.html#x1-1000046
  • K4 correction (IR, for extended objects) – this is a small correction to apply for extended sources to convert the RSRF-weighted flux density to monochromatic surface brightness.  Use the HIPE tool or download the tables manually.
    http://herschel.esac.esa.int/Docs/SPIRE/html/spire_om.html#x1-830072
  • Beam area – if you need to convert from Jy/beam to Jy.  Again, because of the assumed flat spectrum in the data reduction, your beam area will be slightly different (~ a few %) for the source’s true spectral index.  Use the HIPE function or download the tables yourself.
    http://herschel.esac.esa.int/Docs/SPIRE/html/spire_om.html#x1-1000024
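The EEF-based aperture correction described above amounts to interpolating the EEF table at your aperture radius and taking the reciprocal. A minimal sketch (the EEF table values below are invented placeholders; use the released instrument EEF tables for real measurements):

```python
def aperture_correction(radius, eef_radii, eef_values):
    """Linearly interpolate the Enclosed Energy Function at `radius`
    (arcsec) and return the aperture correction 1/EEF(radius)."""
    for (r1, e1), (r2, e2) in zip(zip(eef_radii, eef_values),
                                  zip(eef_radii[1:], eef_values[1:])):
        if r1 <= radius <= r2:
            eef = e1 + (e2 - e1) * (radius - r1) / (r2 - r1)
            return 1.0 / eef
    raise ValueError("radius outside EEF table")
```

Multiplying your background-subtracted flux by this factor restores the power scattered outside the aperture.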

Uncertainty estimation


Some sources of uncertainty in your measurement:

  • Uncertainty in background level – if the BG pixel intensities are roughly Gaussian-distributed, you can estimate this from the standard deviation of the BG pixels.  Since the uncertainty on the mean decreases as the square root of the number of measurements, $ \sigma_\mathrm{BG} \sim \mathrm{stddev}(\mathrm{BG})/\sqrt{N_\mathrm{BG}} \times N_\mathrm{ap}$
  • Uncertainty on individual pixel intensities (usually for stacked or drizzled images).  The total pixel error is generally calculated as the quadratic sum of all σpix,i
  • Confusion noise – this is particularly important for catalog measurements of small, faint sources.  It is essentially the random fluctuation in the background sky brightness, below which you can’t detect a source even if you integrate for more time.  Causes include foreground Galactic cirrus, the cosmic background, and background galaxies.  Confusion noise depends on wavelength, position on the sky, and the beam profile.  See the Herschel docs for a more complete discussion: http://herschel.esac.esa.int/Docs/Herschel/html/ch04s03.html
  • One of the most fundamental is the uncertainty on the source flux sum.  Some authors choose to use a ‘theoretical’ estimate of this error: if you assume that the variation in the aperture pixels follows the variation in the background pixels, then σsource = sqrt(Nap)·σBG.  But this on its own doesn’t address the inherent variation due to the shape/location of your chosen aperture.  If you were to take your source region and move it around the map, the enclosed flux would vary by some amount – even if you are only moving it around ‘empty’ sky.  This is a complex combination of many effects, including the relation between the region shape and the beam size, the confusion noise, and inherent variation across the map.  This can often be the dominant source of uncertainty in your measurement, probably because it includes so many effects.  You can estimate it empirically with Monte Carlo methods.  The idea is quite simple – place the source aperture at many places around your map that are empty of ‘real’ emission from the galaxies or whatever you’re interested in, then integrate the flux in each of these test regions.  The standard deviation of this array of test-aperture values should serve as a good estimate of the inherent variability your chosen region shape produces in your map.
  • Systematic/Calibration uncertainties – These include the beam area uncertainty (4% for SPIRE) and the absolute flux calibration uncertainty (5.5% for SPIRE).  Spectroscopy and polarization measurements will of course also have their own systematic uncertainties.
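The Monte Carlo estimate of the aperture-to-aperture scatter can be sketched as follows. This is plain illustrative Python: a flat map of Gaussian noise stands in for real ‘empty’ sky, and a crude square aperture replaces a real circular region.

```python
import random
from statistics import pstdev

random.seed(42)
NSIDE, NAP, NTRIALS = 200, 50, 100   # map size, pixels per aperture, trial apertures

# Fake 'empty sky': pure Gaussian noise with sigma = 0.1 per pixel
sky = [[random.gauss(0.0, 0.1) for _ in range(NSIDE)] for _ in range(NSIDE)]

def aperture_sum(image, x0, y0, npix):
    """Crude square 'aperture' of ~npix pixels, for illustration only."""
    side = int(npix ** 0.5)
    return sum(image[y0 + j][x0 + i] for j in range(side) for i in range(side))

# Drop the aperture at many random 'empty' positions and sum each one
sums = [aperture_sum(sky, random.randrange(NSIDE - 8),
                     random.randrange(NSIDE - 8), NAP)
        for _ in range(NTRIALS)]
sigma_source = pstdev(sums)   # empirical scatter of the aperture sums
```

On a real map you would restrict the trial positions to regions free of genuine emission; the standard deviation of the resulting sums is your empirical σsource.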

Once you have determined all the individual uncertainties you need to include, if they are all independent of each other, you can calculate the total uncertainty as the quadratic sum of the individual components.  For example:
$ \sigma_\mathrm{tot} = \sqrt{ \sigma^2_\mathrm{BG} + \sigma^2_\mathrm{source} + \sigma^2_\mathrm{pix} + \sigma^2_\mathrm{cal} + … }$
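In code, the quadrature sum is a one-liner (the helper name is hypothetical):

```python
import math

def total_uncertainty(*sigmas):
    """Quadrature sum of independent error terms."""
    return math.sqrt(sum(s * s for s in sigmas))
```

For example, combining a 3.0 Jy and a 4.0 Jy term gives 5.0 Jy; remember this is only valid if the terms are genuinely independent.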