# Observational Astronomy Primer Exercises: 3.) Reducing SPIRE Data, Aperture Photometry

## Tasks/exercises for SPIRE data in HIPE

• Try to download the Hi-GAL data that has already been calibrated and mapped.  Check the data quality to determine whether you need to re-reduce the data or re-make the map (for example, if there is residual striping from the cross-scans).
• If you decide you need to re-reduce it, you can follow the links I’ve listed below.
• You will need the OBSID – you can look for this with the HSA Tool at  http://www.cosmos.esa.int/web/herschel/science-archive
–> Input the object name or coordinates and it will return a list of observations and IDs
There will be many observations with PACS/SPIRE/HIFI. Choose the ones you want to see.  The SPIRE large-map observation for G332.4+0.1 was OBSID 1342192055
• MAKE SURE you are using a recent version of HIPE if you plan to reprocess the data.  You will need LOTS of RAM to reduce Herschel data – aim for a machine with at least 32GB.

### Data reduction pipeline in HIPE

==> See tutorials on NASA/IPAC’s site:
https://nhscsci.ipac.caltech.edu/sc/index.php/Spire/PhotDataAnalysis
The tutorial there on SPIRE large-map pipeline reprocessing is quite good – much better than what was available when I started learning how to reduce SPIRE data – with many example images of things to look out for in your final maps
They have also made available a large number of web video tutorials over the past few years:
https://nhscsci.ipac.caltech.edu/sc/index.php/Spire/VideoTutorials
https://nhscsci.ipac.caltech.edu/sc/index.php/Spire/Webinars

### SPIRE photometry ‘recipe’

Also see the HIPE Data Analysis Guide for a description of basically any analysis task you would care to do in HIPE:
http://herschel.esac.esa.int/hcss-doc-15.0/

```python
obsid = 1342192055   # Specify the observation ID from the HSA (Kes 32 / G332.4+0.1)
alpha = 2            # Source spectral index: S(nu) proportional to nu^alpha (the SPIRE pipeline assumes alpha = -1)
array = "PSW"        # Run for an individual SPIRE band: "PSW", "PMW", "PLW"
obs = getObservation(obsid, useHsa=True, instrument='SPIRE')  # Load the observation directly from the HSA
# obs = getObservation(obsid, poolName='mypool', instrument='SPIRE')  # Alternative: load from your own local pool
mapExtd = obs.level2.refs["extd"+array].product  # Extract the extended-emission (MJy/sr) calibrated map from the Observation Context

cal = spireCal()  # Load the calibration tree
#  --> if that doesn't work: cal = spireCal(calTree="spire_cal", saveTree=1)
beamCorrTable  = cal.phot.refs["ColorCorrBeam"].product
kCorrExtdTable = cal.phot.colorCorrKList.refs[0].product
beamArea  = beamCorrTable.meta["beamPipeline"+array.title()+"Arc"].double  # beam area in arcsec^2, for Jy/beam conversions
kCorrExtd = kCorrExtdTable.getAlphaCorrection(alpha, array)

mapExtended = convertImageUnit(image=mapExtd, newUnit='Jy/pixel')  # Convert the map from MJy/sr to Jy/pixel
mapExtendedCorrected = imageMultiply(image1=mapExtended, scalar=kCorrExtd)  # Apply the colour correction

ra  = '244.252833'   # Target RA as a decimal string:  16h 17m 00.68s
dec = '-50.799300'   # Target Dec as a decimal string: -50d 47' 57.48''
photrad = 200.0      # Photometry source radius, in arcsec (I'm just making these up - you will have to choose an appropriate region)
phot_ann_in  = 300.0 # Background annulus inner radius, in arcsec
phot_ann_out = 350.0 # Background annulus outer radius, in arcsec (must be larger than the inner radius)
# Carry out circular/annular aperture photometry
annularPSW = annularSkyAperturePhotometry(image=mapExtendedCorrected, \
    centerRA=ra, centerDec=dec, fractional=1, \
    radiusArcsec=photrad, innerArcsec=phot_ann_in, outerArcsec=phot_ann_out)
# annularSkyAperturePhotometry()  # Alternatively, run with no arguments to define the photometry regions interactively

flux = annularPSW.getTargetTotal()  # Final target flux density, in Jy (since the image is in Jy/pixel)
print 'PSW flux = %5.3f Jy'%(flux)
```


# Observational Astronomy Primer Exercises: 2.) Python plotting / fitting / etc.

You may find some of these tasks useful as we move forward doing photometry and running HIPE.  This is just some useful material that you can work on over the next few weeks or months.

You can write & run python scripts in a few different ways, and which one is best is really just a matter of preference.  I like to write my code in gedit (Pluma in Ubuntu MATE), and interactively test/run it in ipython in the terminal.  But there are also several IDEs such as Spyder, Eclipse, Geany, and PyCharm (similar to the environments used by Matlab or IDL).

There are many great tutorials on the web for learning python.  I started to make a few very simple tutorials elsewhere on this site ( http://research.endlessfernweh.com/tutorials/ ) but there are a variety of better tutorials out there:
Practical Python For Astronomers  https://python4astronomers.github.io/  is a great comprehensive one tailored to astronomy, but I find that it’s a little bit hard to jump right in
Here is a page with powerpoint-style tutorials on many things in python (again, targeted at astronomers): https://astro.uni-bonn.de/~rschaaf/Python2008/
Astrobetter is a blog that often has very useful posts about coding and other important things in astronomy.  Here is one post on python: http://www.astrobetter.com/blog/2014/07/07/astropy-tutorials-learn-how-to-do-common-astro-tasks-with-astropy-and-python/

For plotting (see below as well), I recommend the pywcsgrid2 module, but you can also use APLpy and kapteyn to make great images.  Here are a couple of pages for using pywcsgrid2:
http://leejjoon.github.io/pywcsgrid2/users/overview.html
http://leejjoon.github.io/matplotlib_astronomy_gallery/tutorial/tutorial.html#tutorial-index
Kapteyn has many other useful tools for reprojection, image smoothing, etc.  https://www.astro.rug.nl/software/kapteyn/

Here is a list of useful goals & concepts to work on.

• importing modules, syntax
• Create some functions (try some useful ones for astronomy, such as a Gaussian, blackbody, Modified BB)
• Load tables from text files, or DS9 regs.
• Save data to text files.
• Fit function parameters – see my web tutorial for a good overview
• Plot single function and 2D (heatmap, maybe try a 2D gaussian?) – see my web tutorial
• include labels – title, axis labels (modify size, color, etc.)
• include legend (1D), colorbar (2D)
• Can use APLpy, pywcsgrid2, kapteyn… whatever package you like
• Include labels
• include colorbar & units
• play with colormaps, intensity limits, scaling
• maybe include size scale bar, compass, beam size…
• draw a region (e.g., ellipse)
• Plot two images side by side (say, Kepler infrared and radio; follow the pywcsgrid2 tutorial)
• match coordinates
• make contours

# Observational Astronomy Primer Exercises: 1.) Viewing FITS Images

To help you become familiar with astronomical data files – .FITS format – I would like to give you a small task: please experiment with some supernova images in DS9.   Here is what I’m thinking for some small goals:

• Download data for one astronomical target from two different telescopes and display them together, side-by-side.
• Stretch the image scaling to appropriate levels and experiment with color maps.
• Match the coordinates.
• Make labels.
• Create regions.
• Output/print an image (png, jpg, etc.) of your results & save ‘backup’ file.

I discuss photometry – and how to do simple flux extraction with DS9 and HIPE – in another post.

Try Kepler’s SNR (G4.5+6.8) in IR  http://chandra.harvard.edu/photo/openFITS/multiwavelength_data.html   and try to find one more image from another band that is not listed on that page.  I will discuss some good online databases tomorrow, but one good option would be a radio image from the VLA – you can download VLA data from the NRAO image archive retrieval tool  https://archive.nrao.edu/archive/archiveimage.html

A good .fits file will include the necessary WCS (pointing) information and brightness units, as well as beam size information if needed.

#### Units:

The main header ‘card’ to look for is ‘BUNIT’, which should be something like Jy/beam, MJy/sr, W/m2/pixel, etc.  Sometimes images you download (such as HST, GALEX…) will still be in units of counts/sec or another system that will need to be converted to physical units before you can integrate fluxes.  If you need to perform these conversions, try looking at the instrument webpages or papers in the literature to get the conversion factors; otherwise you can try contacting the PI who owns the data.

To convert the beam area – that is, to calculate the beam size as pixels/beam or sr/beam etc – our job is easy if the PSF can be approximated by a 2D Gaussian.  Properly-formed radio, sub-mm and IR headers should have the beam parameters listed as BMAJ, BMIN, and BPA – the beam major axis FWHM, minor axis FWHM, and position angle, respectively.  All 3 are normally in units of degrees.  The PA in the FITS standard is defined as counter-clockwise from ‘image north’ or the upper y-axis, as opposed to the mathematical standard right x-axis assumed by python and other programming languages.
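As a quick worked example of that beam-area arithmetic: for a 2D Gaussian beam the solid angle is Omega = pi/(4 ln 2) x BMAJ x BMIN.  The header values below are invented for illustration:

```python
# Compute the beam area in deg^2, steradians, and pixels per beam for a
# Gaussian beam described by BMAJ/BMIN (FWHM, degrees) and CDELT (degrees).
import math

bmaj  = 36.0 / 3600.0   # beam major-axis FWHM, degrees (36" here, made up)
bmin  = 36.0 / 3600.0   # beam minor-axis FWHM, degrees
cdelt = 6.0 / 3600.0    # pixel width, degrees (6" pixels, made up)

# Solid angle of an elliptical Gaussian beam: Omega = pi/(4 ln 2) * BMAJ * BMIN
beam_area_deg2 = math.pi / (4.0 * math.log(2.0)) * bmaj * bmin
beam_area_sr   = beam_area_deg2 * (math.pi / 180.0)**2
pix_per_beam   = beam_area_deg2 / cdelt**2

print(pix_per_beam)   # ~40.8 pixels per beam for these values
```

Dividing a map in Jy/beam by `pix_per_beam` then gives Jy/pixel.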

#### WCS (World Coordinate System)

The information about which pixel corresponds to what RA/DEC on the sky is given in the header as well.  Here is the NASA dictionary of standard header cards:  http://heasarc.gsfc.nasa.gov/docs/fcg/standard_dict.html
Basically, all the header cards beginning with C (CTYPE, CDELT, …) have to do with the axis and pointing information.  Here is a brief description of some:
CTYPE gives the projection type – SIN, TAN, …
CRPIX gives the reference pixel number (starting from 1)
CRVAL gives the pointing coordinate of the reference pixel
CUNIT gives the units of the reference coordinate values (default is degrees)
CDELT gives the width of each pixel in sky coordinates
CROTA gives the image rotation angle, if applicable.

The number after the main keyword denotes which image axis – 1 is for the x-axis (RA), 2 is for the y-axis (DEC), 3 is for the z-axis (velocity/frequency/wavelength) in a 3D data cube, 4 is for the Stokes axis.  So, CRPIX1 is the reference pixel for the x-axis, CDELT2 is the pixel width in the y-axis, CUNIT3=’Hz’ means that the spectral axis channels are stepped in frequency, etc.

Instead of CDELT/CROTA etc., you may find your image uses an alternate grid reference system based on the cards CD1_1, CD2_1, CD1_2 and CD2_2.  These four parameters define a matrix encoding the pixel scale and rotation.
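To make the card definitions concrete, here is a toy pixel-to-sky conversion using the simple linear (small-field) approximation world = CRVAL + CDELT·(pix − CRPIX).  The header values are invented, and for real data you should let a library like astropy.wcs handle the full TAN/SIN projection:

```python
# Toy linear WCS: how CRPIX/CRVAL/CDELT map a pixel to an RA/Dec.
import math

header = {
    'CTYPE1': 'RA---TAN', 'CRPIX1': 50.0, 'CRVAL1': 244.2528, 'CDELT1': -0.001,
    'CTYPE2': 'DEC--TAN', 'CRPIX2': 50.0, 'CRVAL2': -50.7993, 'CDELT2':  0.001,
}

def pix_to_world(x, y, hdr):
    """Linear approximation; FITS pixels are 1-indexed, so CRPIX refers to pixel 1..N.
    The cos(dec) factor converts a sky offset along the RA axis into degrees of RA."""
    dec = hdr['CRVAL2'] + hdr['CDELT2'] * (y - hdr['CRPIX2'])
    ra  = hdr['CRVAL1'] + hdr['CDELT1'] * (x - hdr['CRPIX1']) / math.cos(math.radians(hdr['CRVAL2']))
    return ra, dec

ra, dec = pix_to_world(50.0, 50.0, header)   # the reference pixel itself
print(ra, dec)   # -> 244.2528 -50.7993
```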

# Photometry

(Note, I’m primarily concerned with relatively ‘intermediate distance’ astronomical sources, such as low-redshift galaxies, supernova remnants, etc. instead of high-redshift galaxies and other point sources.  This discussion is therefore biased in that direction.  )

First, some resources.
PhotUtils is a python module for photometry that you may find useful: http://photutils.readthedocs.io/en/latest/photutils/aperture.html
I recently came across the APT GUI tool, and while I haven’t used it, it seems to be quite useful: http://www.aperturephotometry.org/
Here is some Herschel documentation for ap.photom. in HIPE: http://herschel.esac.esa.int/hcss-doc-15.0/load/dag/html/Dag.ImageAnalysis.HowTo.AperturePhotometry.html
Of course there are many papers that detail how the authors did their photometry and uncertainty estimation.  A few recommendations are the H-ATLAS data release paper (Valiante+2016), which I was involved with, and Remy-Ruyer+13, whom I liked for their careful treatment of the PACS PSF contribution in the background regions.

### Basics points to consider

• Create an appropriate region – may need to check image with extreme scaling in DS9 to make sure you’re getting all the target features you want / avoiding those you don’t want
• Convert to desired physical units
• Estimate the background – from a nearby aperture (preferably not far away, due to the beam power pattern and inherent sky variability)
• Apply correction factors: Aperture correction, Color correction, K4 correction (IR)
• Uncertainty estimation (you can do a ‘formal’ or theoretical estimation, but an empirical approach will be more realistic)

A general ‘formula’ for photometry will look something like this:

Raw = sum(Fluxpix,i) over all aperture pixels
BGsub = Raw – (MedianBGreg × Nap)   [the median per-pixel background, scaled to the number of aperture pixels]
Corrected = BGsub·correction1·correction2…
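The general ‘formula’ above can be sketched with numpy on a fake image.  The masks, pixel values, and the 5% correction factor are all made up; in practice you would build the masks from a DS9 region or a photutils aperture:

```python
# Raw sum, median-background subtraction, and a correction factor, in numpy.
import numpy as np

rng = np.random.default_rng(0)
image = rng.normal(0.05, 0.01, (100, 100))   # fake 'sky' background, Jy/pixel
image[40:60, 40:60] += 1.0                   # a fake 'source' (total flux 400 Jy)

yy, xx = np.mgrid[0:100, 0:100]
r = np.hypot(xx - 50, yy - 50)
ap_mask = r < 20                 # source aperture
bg_mask = (r > 30) & (r < 40)    # background annulus

raw = image[ap_mask].sum()                   # Raw = sum over aperture pixels
bg_per_pix = np.median(image[bg_mask])       # median per-pixel background level
bg_sub = raw - bg_per_pix * ap_mask.sum()    # subtract N_ap * per-pixel BG
corrected = bg_sub * 1.05                    # e.g. a 5% aperture correction (made up)
```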

The goal of photometry is to determine a certain target’s integrated emission in some wavelength band, usually for the purpose of calculating flux ratios or constructing an SED.  To accurately and precisely determine the true source flux, you need a properly calibrated image and usually some knowledge of the instrument that obtained it, for various correction factors.

### Region creation and flux extraction

The most obvious step is to sum up the individual pixel intensities in your target region of choice.  However, there are many things to consider in this step.

1. Be sure to decide on an appropriate aperture (or ‘region’ in DS9 parlance) for your science goal.  Of course, if you are trying to target a small feature in a crowded map, you might make a little circle or ellipse close around that target to make sure you aren’t including any flux from nearby features.  But take care that your region isn’t too small – in particular it should always be larger than the limiting beam size.  Furthermore, if you are interested in a target with a complicated brightness profile – perhaps an entire galaxy with an exponential brightness decrease from the center – be sure to make your aperture large enough to include all the necessary emission.  One good way to ensure you are doing this (for your image, at least) is to play with the data scaling in DS9 to see what the very faint extended emission is doing, etc.
–> I’ll try to bring up an image to demonstrate some good and problematic choices of apertures for various features of interest
2. You should make sure you understand the units of the image.  If they are something like Jy/beam, you will need to know what the beam area is to convert to flux density, and if you want to convert to something like luminosity, you will also need to know how to properly convert from the “per Hz” – if your image is ν·Fν you can just use the band’s characteristic frequency, but if it’s from a collapsed data cube you will need to know the channel width, and for some instruments you may need to know the overall imager bandwidth, etc…
3. If you intend to compare the integrated fluxes between images at different spatial resolutions, you will need to convolve the sharper images down to limiting beam size (sometimes called “smoothing” the images).  This ensures that your flux integrator is viewing the images in the same way – that no flux is being spread out by a PSF in a different way.  Basically all astronomy data reduction/analysis packages have tools for this: CASA has imsmooth(), HIPE has imageConvolution(), python has several excellent tools such as  the astropy.convolution package.
4. If your region is relatively small, or alternatively if your pixels are relatively large, you may want to employ sub-pixel precision in your sum.  It just depends on the level of precision you need, but if changing your desired region by a small increment changes the resulting flux by a few percent or more, you are probably in the regime of caring about this.  One simple way to get good payoff for relatively little work is to divide each pixel into something like 4, 9, 16, or 25 etc. subpixels for the flux integration. (Make sure to properly conserve flux if you do this!)
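One minimal way to implement the sub-pixel splitting in point 4 with numpy, on a toy image (the 4×4 splitting and the aperture radius are arbitrary choices):

```python
# Flux-conserving upsampling: split each pixel into n x n subpixels, each
# carrying 1/n^2 of the original pixel's flux, before applying the aperture.
import numpy as np

def upsample_conserve(image, n):
    """Split each pixel into n x n subpixels, conserving total flux."""
    up = np.repeat(np.repeat(image, n, axis=0), n, axis=1)
    return up / float(n * n)

image = np.zeros((10, 10))
image[4:6, 4:6] = 1.0                  # a 2x2-pixel source, total flux = 4.0

up = upsample_conserve(image, 4)       # 40 x 40 grid of subpixels
assert np.isclose(up.sum(), image.sum())   # flux is conserved

# The aperture edge can now cut through original pixels at 1/4-pixel precision
yy, xx = np.mgrid[0:40, 0:40]
r = np.hypot((xx + 0.5) / 4.0 - 5.0, (yy + 0.5) / 4.0 - 5.0)  # original-pixel coords
flux = up[r < 1.5].sum()
```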

### Background estimation

• The ‘standard practice’ is to determine the background level from an annular region just outside of target region, but this isn’t always possible.  If, for example, you have bright foreground/background stars in the annulus just around your target, you may wish to blank them out or use a slightly further-out annulus.  Or if there are other galaxies nearby in the field, or if the PSF throws a noticeable portion of the source flux into the nearby background region (more on that below), you may have to use non-annulus regions even further out.  Be sure not to use regions of sky that are obviously biased, such as a large negative or positive bowl – remember, you want to estimate the typical background.  Each situation is a little bit different, and you will need to tailor your method to the data available to obtain the most unbiased estimate of the background.
• Once you have determined the best region to use for your background pixels, you will need to decide on an appropriate way to characterize the ‘typical’ background level.  A simple average may be acceptable, or the intensity distribution may make the median more appropriate.  Perhaps you may find that sigma-clipping will give a more robust value.  Looking at the histogram of background pixel intensities can be useful for determining the best course of action.  Ultimately, as long as you have good justification for your choice, you should be fine.
• Does the PSF make your BG region contain a contribution from the source? For example, in PACS images, the beam has a measurable contribution out to ~1000 arcsec (though of course it’s quite small that far out).  Typical small PACS maps are a few arcminutes across, so in principle every pixel in those maps will have a small contribution from the target source (and indeed every other pixel for that matter).  The effect may be small for your science goals, but it may be something that you decide you need to correct for.  See, e.g., Remy-Ruyer+2013 for one example of how to deal with this.
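To illustrate the mean/median/sigma-clip choices above on a toy set of ‘background’ pixels contaminated by a bright star (all values invented; in practice astropy.stats.sigma_clipped_stats does this more robustly):

```python
# Compare mean, median, and a simple iterative sigma-clip as background estimators.
import numpy as np

rng = np.random.default_rng(1)
bg = rng.normal(0.05, 0.01, 5000)      # well-behaved sky pixels, Jy/pixel
bg[:50] += 2.0                         # a bright contaminating source

def sigma_clip(data, nsigma=3.0, iters=5):
    """Iteratively discard points more than nsigma stddevs from the median."""
    clipped = data.copy()
    for _ in range(iters):
        med, std = np.median(clipped), clipped.std()
        clipped = clipped[np.abs(clipped - med) < nsigma * std]
    return clipped

mean_bg   = bg.mean()                  # biased high by the star
median_bg = np.median(bg)              # fairly robust already
clip_bg   = sigma_clip(bg).mean()      # robust, and uses all the clean pixels
```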

### Correction factors – discussed in the previous post

• Aperture correction – This is a correction for the amount of flux that the beam has scattered outside of your chosen aperture.  This will be a significant amount for small regions around the size of the beam (say, tens of percent), but will be small or negligible for very large apertures.  One of the best ways to calculate this is to directly sum the fraction of the total power enclosed in your chosen aperture – by placing your region on a FITS file of the beam profile, for example.  But if you know the instrument’s beam profile and you are using a standard aperture like a circle, you can construct an ‘Enclosed Energy Function’ (EEF) in advance by calculating the enclosed energy percentage for a wide range of radii.  Then your estimated aperture correction would be 1/EEF at the region’s radius.  HIPE has a function for calculating this, or you can also download the EEF tables yourself to create your own interpolation routines.
• Color correction (IR).  This is a correction to account for the fact that the data reduction assumed a flat spectrum response, while in reality your source spectrum is something like a blackbody, with a different power-law spectral index.  Again, HIPE has a tool to calculate this for you, or you can retrieve the tables manually and calculate it yourself.  See the Spire Handbook:
http://herschel.esac.esa.int/Docs/SPIRE/html/spire_om.html#x1-1000046
• K4 correction (IR, for extended objects) – this is a small correction to apply for extended sources to convert the RSRF-weighted flux density to monochromatic surface brightness.  Use the HIPE tool or download the tables manually.
http://herschel.esac.esa.int/Docs/SPIRE/html/spire_om.html#x1-830072
• Beam area – if you need to convert from Jy/beam to Jy.  Again, because of the assumed flat spectrum in the data reduction, your beam area will be slightly different (~ a few %) for the source’s true spectral index.  Use the HIPE function or download the tables yourself.
http://herschel.esac.esa.int/Docs/SPIRE/html/spire_om.html#x1-1000024

### Uncertainty estimation

Some sources of uncertainty in your measurement:

• Uncertainty in the background level – if the BG pixel intensities are roughly Gaussian-distributed, you can estimate this from the standard deviation of the BG pixels.  Since the uncertainty of the mean background decreases as the square root of the number of measurements, the contribution to the aperture sum is $\sigma_\mathrm{BG} \sim \mathrm{stddev}(\mathrm{BG})/\sqrt{N_\mathrm{BG}} \times N_\mathrm{ap}$
• Uncertainty on individual pixel intensities (usually for stacked or drizzled images).  The total pixel error is generally calculated as the quadratic sum of all the $\sigma_{\mathrm{pix},i}$
• Confusion noise – this is particularly important for catalog measurements of small faint sources. This is basically the random fluctuations in the background sky brightness.  Below this level you can’t detect a source, even if you integrate for more time.  Causes are foreground Galactic cirrus, cosmic background, background galaxies.  Confusion noise depends on wavelength, position on sky, beam profile.  See the Herschel docs for a more complete discussion: http://herschel.esac.esa.int/Docs/Herschel/html/ch04s03.html
• One of the most fundamental is the uncertainty on the source flux sum.  Some authors choose to use a ‘theoretical’ estimate of this error: if you assume that the variation in the aperture pixels follows the variation in the background pixels, then $\sigma_\mathrm{source} = \sqrt{N_\mathrm{ap}} \cdot \sigma_\mathrm{BG}$.  But this on its own doesn’t address the inherent variation due to the shape/location of your chosen aperture.  If you were to take your source region and move it around the map, the enclosed flux would vary by some amount – even if you are only moving it around ‘empty’ sky.  This is a complex combination of many effects, including the relation between the region shape and the beam size, the confusion noise, inherent variation across the map, etc.  This can often be the dominant source of uncertainty in your measurement, probably because it includes so many effects.  You can estimate it empirically with Monte Carlo methods.  The idea is quite simple – place the source aperture at many positions around your map that are empty of ‘real’ emission from the galaxies or whatever you’re interested in, then integrate the flux in each of these test regions.  The standard deviation of this array of test-aperture values serves as a good estimate of the inherent variability your chosen region shape will produce in your map.
• Systematic/Calibration uncertainties – These include the beam area uncertainty (4% for SPIRE), and absolute flux cal uncertainty (5.5% for SPIRE). Spectroscopy and polarization measurements will of course also have their own systematic uncertainties.

Once you have determined all the individual uncertainties you need to include, if they are all independent of each other, you can calculate the total uncertainty as the quadratic sum of the individual components.  For example:
$\sigma_\mathrm{tot} = \sqrt{ \sigma^2_\mathrm{BG} + \sigma^2_\mathrm{source} + \sigma^2_\mathrm{pix} + \sigma^2_\mathrm{cal} + … }$
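The empirical Monte Carlo estimate of the source-sum uncertainty, followed by the quadrature sum, can be sketched like this (the map, aperture size, and the other error terms are all made up):

```python
# Drop a source-sized aperture at many 'empty' positions and take the
# scatter of the enclosed sums, then combine error terms in quadrature.
import numpy as np

rng = np.random.default_rng(2)
sky = rng.normal(0.0, 0.01, (300, 300))    # an 'empty' map, Jy/pixel

yy, xx = np.mgrid[-15:16, -15:16]
ap = np.hypot(xx, yy) <= 15                # circular aperture template, r = 15 pix

sums = []
for _ in range(200):
    # random position, keeping the whole aperture inside the map
    y0, x0 = rng.integers(15, 285, size=2)
    cutout = sky[y0 - 15:y0 + 16, x0 - 15:x0 + 16]
    sums.append(cutout[ap].sum())
sigma_source = np.std(sums)

# Combine with the other (made-up) error terms in quadrature
sigma_bg, sigma_cal = 0.1, 0.2
sigma_tot = np.sqrt(sigma_source**2 + sigma_bg**2 + sigma_cal**2)
```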

# How an infrared (single-aperture) telescope works

## Types of detectors

There are three common varieties, roughly divided by the wavelength of observation.  This is because on the scale of infrared wavelengths (a few to a few hundred microns), the photons are in the transition regime between being smaller and larger than our detecting electronics.  When the photon wavelength is much smaller, it interacts with the detector like a particle; when it is much larger, it interacts with the detector like a wave. The three common types are:

• Photodetector – ~a few microns.  This is somewhat like a camera CCD.
• bolometer arrays – ~tens to ~1000 microns, for imaging.  Basically, a single bolometer is a little block of material (silicon/germanium is common) that will fully absorb or ‘thermalize’ IR photons. This slightly changes the temperature of the material, which changes its electrical properties (resistance, conductance), which can be measured.
• heterodyne – ~70 to ~1000 microns, for spectroscopy.  The photon waves are mixed with a local oscillator (like an old-style radio).  Because the actual waves are detected, you can measure the polarization. Also, the phase is preserved, so you can do interferometry with multiple antennae/receivers.

## General observing recipe for imaging (in optical, IR, etc.)

– ‘dark/bias’ scans to calibrate how much signal the detector ‘sees’ as a baseline, with the cover still on.  Subtract this.
– ‘flat field’ scans to calibrate the relative response of each pixel, since each has different sensitivity.  Divide this.

– Observe target –> This is your raw data.  Telescope software should put pointing, etc. info into a data header.
– Spectrometers will also observe a blank patch of sky separately for background subtraction
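In code, the dark-subtraction and flat-fielding steps above amount to the following (the toy frames here are invented):

```python
# Recover the true sky from a raw frame: subtract the dark, divide the flat.
import numpy as np

rng = np.random.default_rng(3)
true_sky = np.full((64, 64), 100.0)        # uniform sky, counts/sec
true_sky[30:34, 30:34] += 500.0            # plus a source

dark = 20.0                                 # detector baseline signal, counts/sec
flat = rng.uniform(0.8, 1.2, (64, 64))      # per-pixel relative response

raw = true_sky * flat + dark                # what the detector actually records

calibrated = (raw - dark) / flat            # subtract dark, divide flat
```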

## FIR observing

The atmosphere is mostly opaque to incoming IR light in many bands, due to absorption by water vapor.  You need very high-elevation, dry sites for the open windows in the NIR and submm, or ideally airborne/space observatories like SOFIA & Herschel.

– The dark/bias current is usually much more of a problem, because the temperature/thermal emission of the science target can be very similar to the actual physical temperature of the detector/telescope/surroundings.  So these are typically cryogenically cooled, but we still need to be very careful with our calibration.
– The sky/background is quite high & variable in the IR, and atmosphere conditions can change quickly.
– Account for these facts by observing in a “chop-nod” pattern: chopping to remove sky background, nodding to remove telescope background.
– The secondary mirror moves from on-source to off-source (a few arcmins away), usually at a rate of a few Hz.  This is called a ‘chop’.  One chop position contains the source plus the background; the other contains only background emission.  So when you subtract, you get the source-only signal.  (You will still need to do a separate background subtraction in your photometry, though!)
– The telescope itself is still contributing a lot of signal to your readouts, and it can actually be many hundreds of times the astronomical signal.  To remove this effect in single pointed observations, the telescope is physically slewed to an offset but nearby position a few times per minute – this is called the “nod”.  The chopped observations are repeated at the other nod to get a second background-subtracted image.  Then these are combined to give a final clean image that should only contain astronomical signal.
– The SOFIA FORCAST website has a really nice description and animations of chop/nod observing:
https://www.sofia.usra.edu/science/proposing-and-observing/proposal-calls/cycle-4/cycle-4-phase-ii/why-chop-and-nod-forcast
– For mapping large areas of the sky, cross-scan mapping can be employed (e.g. SPIRE ‘large map’ mode)

For IR telescopes, the telescope staff will have determined the flux calibration scale by repeated observations of ‘standard’ infrared calibration objects, such as planets and asteroids in our solar system.  Then in the data reduction process you can convert from instrument readout units like Volts to physical units like Watts and Jansky.

Note: 1 Jansky = 10^-26 W/m^2/Hz.
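For example (the 18-arcsec beam is an invented value, and the Gaussian-beam factor 1.13 is π/(4 ln 2) rounded):

```python
# The Jansky definition in code: a flux density in Jy converted to SI units,
# and a surface brightness converted from Jy/beam to MJy/sr.
flux_jy = 2.5
flux_si = flux_jy * 1e-26                    # W/m^2/Hz

beam_area_sr = 1.13 * (18.0 / 206265.0)**2   # ~18" Gaussian beam, in steradians
mjy_per_sr = (flux_jy / beam_area_sr) / 1e6  # Jy/beam -> MJy/sr
```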

## (Spatial) Resolution

Imagine looking up into the sky on a dark night and spotting a particularly bright point somewhere out there.

Typical human eyesight has about 1 arcminute resolution, so if the bright spot on the sky is smaller than 1′, you obviously can’t make out any detail below that scale, and it’s effectively a point source for your detector (eye).  What does the true source look like?  It could be a single bright star far away, or a broader faint star, or even a small cluster of stars, a galaxy, or something else entirely.  You will need a telescope to find out.

To determine what the true source in your bright point on the sky contains – that is, to increase the spatial resolution – you will of course observe with a fancy telescope of some sort.  There are many subtle effects of the optics and detector you must be aware of if you want to understand the precise quantitative and qualitative properties of your signal.  One of these, which is very important when you get down to the lowest spatial scales in your image, is the effect of the Point Spread Function (often called the ‘resolving beam’ in longer-wavelength astronomy.)

## Beam/PSF

When we observe light with a detector (whether that’s our eye, a telescope, or a common camera), we are observing through a certain aperture.  Optics theory (and empirical demonstration, for that matter) tells us that the pattern of light we detect is the original emission incident on the aperture convolved with a response pattern set by the Fourier transform of the aperture shape/profile.  Remember the simple single-slit laser experiment from your undergraduate physics labs (Fraunhofer diffraction) as a one-dimensional example.  There, the incident light is basically uniform over the slit, but you don’t see a uniform laser dot on the wall – you see a sinc² pattern.  The aperture profile is basically a Heaviside (“top-hat”) function, whose Fourier transform is a sinc function; the detected intensity goes as the amplitude times its complex conjugate, hence sinc² on the wall.

The point-spread function (also called a “beam” in radio/submm) is the pattern a single small source of light (less than the limiting 1.22 λ/D) will make on the detector due to the optics of the aperture.  An idealized circular (2D) aperture produces an Airy-disk PSF pattern, and the central part of that can be approximated as a 2D gaussian.  For a more complicated real-world example of a telescope, the PSF can be quite complex due to things like imperfections in the reflector surface, occultation from support struts, etc.  What this means for us in practice, when we are trying to measure the actual flux of an object, is that even if you observe a tiny point source in the sky, a (hopefully) small amount of that power coming through the aperture will be spread out over the detector in the pattern of that instrument’s PSF.  You need to correct for the fraction of emission that PSF spreads outside of your integration region during aperture photometry.  For very large regions, the effect gets much smaller, but even for Herschel images with ~20″ resolution, the PSF can make a difference of a few percent in regions out to a couple arcminutes.
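For an idealized circular Gaussian beam you can write the enclosed-energy fraction, and hence the aperture correction, in closed form.  This sketch assumes a pure 2D Gaussian PSF (real Herschel beams have extended wings, so the true correction differs), and the 18-arcsec FWHM is just an illustrative value:

```python
# Enclosed energy fraction of a circular 2D Gaussian within radius r:
# EEF(r) = 1 - exp(-r^2 / (2 sigma^2)); the aperture correction is 1/EEF.
import math

fwhm = 18.0                                # beam FWHM in arcsec (made up)
sigma = fwhm / (2.0 * math.sqrt(2.0 * math.log(2.0)))

def eef(r):
    """Fraction of a circular Gaussian beam's total power enclosed within radius r."""
    return 1.0 - math.exp(-r**2 / (2.0 * sigma**2))

for r in (9.0, 18.0, 36.0):
    print(r, eef(r), 1.0 / eef(r))   # radius, enclosed fraction, aperture correction
```

Note that an aperture of radius FWHM/2 encloses exactly half the power of a Gaussian beam, so its aperture correction is a factor of 2.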

For a given optics system, all else being equal, increasing the size of your dish will mean more collecting area to help with sensitivity – you can detect fainter things.  But a larger diameter also directly reduces the theoretical smallest scale (1.22 λ/D) you can resolve.  Another way to say this is that a larger dish means a smaller beam or PSF in our idealized telescope.  At some point, building a larger dish will become prohibitive, because of cost, land, or engineering constraints.  Interferometers such as the VLA and ALMA dramatically increase the maximum resolving baseline by combining signals from multiple separate dishes that can be many kilometers apart (or even the whole earth, for VLBI).

Imagine a sequence of images of the same source in which the beam size gradually increases – as would happen if you observed with progressively smaller telescopes, or, conversely, if the same telescope observed the source at smaller and smaller angular sizes.  Not only does the image become more ‘blurry’ as the beam gets larger, but the flux from the source gets spread out over more pixels in the image.  Be aware that more pixels does not necessarily mean better spatial resolution.  See the tutorial on convolution and regridding for some more discussion of these points.

## Color Corrections

Herschel/SPIRE imaging bands are not limited to exactly 250, 350, and 500 μm – they actually extend about 50 μm to either side.  In addition, the filter transmission is not uniform across all wavelengths.  The exact profile of the wavelength-dependent transmission in a given imaging band is called the Relative Source Response Function (RSRF).  The incoming emission spectrum can have a variety of shapes, such as a power law.  The flux density recorded in the bolometer is the integral of the source’s spectrum weighted by the RSRF.  The data pipeline doesn’t know what shape the source spectrum has, however, so it just assumes the spectrum is flat – that is, νSν = constant (α = -1).  It also assumes that the source is a point source.  (This calibration is done at the level-1/detector-timeline data stage.)  The monochromatic flux densities you get out of SPIRE – Sν0 for ν0 = 250, 350, 500 μm – are produced with these assumptions, and weighted by the RSRF.  In reality, though, your source is possibly not a point source, and almost certainly doesn’t have a flat spectrum.  For example, blackbody emission (planets, asteroids) in the SPIRE wavelengths (roughly the Rayleigh-Jeans limit) will follow a power law with α~2, while cold (~20 K) dust will more typically have α~3 or 4, depending on the dust properties.  To get a usable ‘monochromatic’ flux (for example, a ‘250 micron flux’), you need to correct for these two assumptions.

The correction for the shape of the spectrum is called a ‘color correction’.  The assumed functional form for the source spectrum based on the monochromatic wavelength ν0 is:  S(ν) = S(ν0)·(ν/ν0)α.  Again, α = -1 is assumed, but your true source spectrum can be something like α=2 (blackbody), in which case you apply the appropriate correction factor.  See the Spire Handbook Section 5.2.6 “Colour correction for power-law spectra” (and 5.2.7 for greybodies) for a better explanation and the pertinent equations.  HIPE contains a table of color correction factors for various values of α, as well as for modified blackbodies as functions of temperature and β.
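A hedged numerical sketch of that power-law colour correction, using a toy top-hat RSRF in place of the real filter curve: the correction has the form of a ratio of RSRF-weighted integrals taking the pipeline’s assumed α = -1 spectrum to the true α (see the SPIRE Handbook for the exact expressions).  The band edges and reference frequency below are only illustrative, and the real measured RSRFs give corrections of a few percent rather than the near-unity values this symmetric toy band produces:

```python
# Colour correction as a ratio of RSRF-weighted integrals, on a toy band.
import numpy as np

nu0 = 1.1992e12                        # ~250 um reference frequency, Hz
nu = np.linspace(0.85 * nu0, 1.15 * nu0, 2001)
dnu = nu[1] - nu[0]
rsrf = np.ones_like(nu)                # toy top-hat response across the band

def k_colp(alpha):
    """Correction taking the pipeline's assumed alpha = -1 to a true index alpha."""
    num = np.sum(rsrf * (nu / nu0)**(-1.0)) * dnu
    den = np.sum(rsrf * (nu / nu0)**alpha) * dnu
    return num / den

print(k_colp(-1.0))   # exactly 1: no correction needed for alpha = -1
print(k_colp(2.0))    # the correction for a Rayleigh-Jeans-like spectrum
```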

Herschel/HIPE assumes your source is a point source for reduction purposes.  If your source is actually extended, you need to make a small gain correction (called a “K4” correction) to the bolometer signals.  This is because point-source observations care primarily about the peak signal in each bolometer, while extended-source observations prioritize consistency of the integrated signal between individual bolometers, which reduces striping in maps.  (See the SPIRE Handbook, Section 5.2.5 for a better explanation.)  This K4 correction factor also depends on the assumed shape of your source spectrum, and you can find the table of corrections for various indices α in the SPIRE Handbook (possibly out of date) or within HIPE.

When converting from Jy/beam units to MJy/sr – that is, using the effective beam/PSF area – note that the beam size also depends on the assumed spectral index.  Choose the appropriate beam area from the tables in HIPE.