How an infrared (single-aperture) telescope works
Types of detectors
There are three common varieties, roughly divided by observing wavelength. This is because at infrared wavelengths (a few to a few hundred microns), photons sit in a transition regime: their wavelengths are comparable to the size of the detecting electronics. When the photon wavelength is much smaller than the detector elements, it interacts with the detector like a particle; when it is much larger, it interacts like a wave. The three common types are:
- Photodetectors – ~a few microns. These work somewhat like a camera CCD.
- Bolometer arrays – ~tens to ~1000 microns, for imaging. A single bolometer is basically a small block of material (silicon or germanium is common) that fully absorbs, or ‘thermalizes’, IR photons. This slightly changes the temperature of the material, which changes its electrical properties (resistance, conductance), which can be measured.
- Heterodyne receivers – ~70 to ~1000 microns, for spectroscopy. The photon waves are mixed with a local oscillator (like an old-style radio). Because the actual waves are detected, you can measure the polarization. The phase is also preserved, so you can do interferometry with multiple antennae/receivers.
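The mixing idea behind a heterodyne receiver can be shown numerically. This is a toy sketch with invented frequencies and sample rates (nothing here corresponds to a real receiver): multiplying the sky signal by a local oscillator (LO) produces a beat at the difference frequency, which slower electronics can digitize while preserving the phase.

```python
# Toy heterodyne mixing demo: all frequencies/values are illustrative.
import numpy as np

fs = 1e6                          # sample rate, Hz (toy value)
t = np.arange(0, 0.01, 1 / fs)    # 10 ms of samples
f_sky, f_lo = 3.45e5, 3.00e5      # made-up "sky" and LO frequencies, Hz

sky = np.cos(2 * np.pi * f_sky * t)
lo = np.cos(2 * np.pi * f_lo * t)
mixed = sky * lo                  # contains (f_sky - f_lo) and (f_sky + f_lo) terms

# The sum-frequency product would be filtered out in hardware; here we
# simply look below the LO frequency. The strongest peak there is the
# intermediate frequency f_sky - f_lo = 45 kHz.
spec = np.abs(np.fft.rfft(mixed))
freqs = np.fft.rfftfreq(len(t), 1 / fs)
low = freqs < f_lo
f_if = freqs[low][np.argmax(spec[low])]
print(f"recovered IF: {f_if:.0f} Hz")   # 45000 Hz
```

The key point for spectroscopy and interferometry is that the downconverted signal is still a wave, so its phase (and polarization, with suitable optics) survives the mixing.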
General observing recipe for imaging (in optical, IR, etc.)
– ‘dark/bias’ scans to calibrate how much signal the detector ‘sees’ as a baseline, with the cover still on. Subtract this.
– ‘flat field’ scans to calibrate the relative response of each pixel, since each has different sensitivity. Divide this.
– Observe target → this is your raw data. The telescope software should record pointing and other metadata in the data header.
– Spectrometers will also observe a blank patch of sky separately for background subtraction.
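The recipe above can be sketched in a few lines. This is a minimal illustration with made-up array shapes and signal levels, not any real pipeline: subtract the dark/bias, then divide by the flat field.

```python
# Minimal imaging-reduction sketch: (raw - dark) / flat. Values invented.
import numpy as np

rng = np.random.default_rng(0)
dark = np.full((4, 4), 100.0)                  # baseline counts with cover on
flat = rng.uniform(0.8, 1.2, size=(4, 4))      # per-pixel relative response
truth = np.zeros((4, 4))
truth[2, 2] = 500.0                            # one point source on the sky

raw = truth * flat + dark                      # what the detector records
calibrated = (raw - dark) / flat               # subtract the dark, divide the flat
# 'calibrated' now recovers 'truth' up to floating-point rounding.
```

In practice the flat is usually normalized to unit mean before dividing, so the calibrated image stays in the same count units as the raw data.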
The atmosphere is opaque to incoming IR light across much of the band, mainly due to absorption by water vapor. You need very high-elevation, dry sites to exploit the open windows in the NIR and submm, or ideally airborne/space observatories like SOFIA & Herschel.
– The dark/bias current is usually a much bigger problem than in the optical, because the temperature/thermal emission of the science target can be very similar to the actual physical temperature of the detector/telescope/surroundings. IR instruments are therefore typically cryogenically cooled, but we still need to be very careful with our calibration.
– The sky/background is quite high & variable in the IR, and atmosphere conditions can change quickly.
– Account for these facts by observing in a “chop-nod” pattern: chopping to remove sky background, nodding to remove telescope background.
– The secondary mirror moves from on-source to off-source (a few arcminutes away), usually at a rate of a few Hz. This is called a ‘chop’. One chop position contains the source plus the background; the other contains only background emission. So when you subtract, you get the source-only signal. (You will still need to do a separate background subtraction in your photometry, though!)
– The telescope itself is still contributing a lot of signal to your readouts, and it can actually be many hundreds of times the astronomical signal. To remove this effect in single pointed observations, the telescope is physically slewed to an offset but nearby position a few times per minute – this is called the “nod”. The chopped observations are repeated at the other nod to get a second background-subtracted image. Then these are combined to give a final clean image that should only contain astronomical signal.
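The chop-nod double difference can be written out as a toy calculation (all signal levels here are invented): the chop removes the bright sky, and differencing the two nods removes the residual offset the chop leaves behind because the two chop beams see slightly different parts of the telescope.

```python
# Toy chop-nod arithmetic; every number is illustrative.
source = 1.0     # astronomical signal
sky = 300.0      # bright, variable sky background
delta = 0.5      # residual offset: the two chop beams see slightly
                 # different telescope emission

# Nod A: source in chop beam 1
d_a = (source + sky + delta) - sky        # chop difference = source + delta
# Nod B: telescope slewed so the source sits in the other chop beam;
# the source term flips sign while the offset does not
d_b = (sky + delta) - (source + sky)      # chop difference = delta - source

clean = 0.5 * (d_a - d_b)                 # recovers the source alone
print(clean)                              # 1.0
```

The sky cancels within each chop pair because the chopping is fast compared to sky variations, while the slower nod cancels the (more stable) telescope term.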
– The SOFIA FORCAST website has a really nice description and animations of chop/nod observing.
– For mapping large areas of the sky, cross-scan mapping can be employed (e.g. SPIRE ‘large map’ mode)
For IR telescopes, the telescope staff will have determined the flux calibration scale through repeated observations of ‘standard’ infrared sources, such as planets and asteroids in our solar system. In the data reduction process you can then convert from instrument readout units like volts to physical units like watts or janskys.
Note: 1 Jansky = 10⁻²⁶ W m⁻² Hz⁻¹.
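To get a feel for the unit, here is a quick back-of-the-envelope check (the collecting area and bandwidth are placeholder values, not any particular telescope's): a 1 Jy source observed with 10 m² of collecting area over a 1 GHz bandwidth delivers a minuscule amount of power.

```python
# Jansky sanity check; area and bandwidth are made-up round numbers.
JY = 1e-26                 # W m^-2 Hz^-1
flux_density = 1.0 * JY    # a 1 Jy source
area = 10.0                # m^2 of collecting area (illustrative)
bandwidth = 1e9            # Hz (illustrative)

power = flux_density * area * bandwidth
print(power)               # ~1e-16 W -- hence the need for very sensitive detectors
```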
Imagine looking up into the sky on a dark night and spotting a particularly bright point somewhere out there.
Typical human eyesight has about 1 arcminute resolution [source], so if the bright spot on the sky is smaller than 1′, you can’t make out any detail below that scale, and it’s effectively a point source for your detector (your eye). What does the true source look like? It could be a single bright star far away, a broader faint star, a small cluster of stars, a galaxy, or something else entirely. You will need a telescope to find out.
To determine what the true source in your bright point on the sky contains – that is, to increase the spatial resolution – you will of course observe with a fancy telescope of some sort. There are many subtle effects of the optics and detector you must be aware of if you want to understand the precise quantitative and qualitative properties of your signal. One of these, which is very important when you get down to the lowest spatial scales in your image, is the effect of the Point Spread Function (often called the ‘resolving beam’ in longer-wavelength astronomy.)
When we observe light with any detector (our eye, a telescope, a common camera, etc.), we are observing through an aperture. Optics theory (and empirical demonstration, for that matter) tells us that the far-field diffraction pattern of an aperture is the Fourier transform of the aperture’s shape/profile, and the image we detect is the true incident emission convolved with the intensity of that pattern. Recall the simple single-slit laser experiment from your undergraduate physics labs (Fraunhofer diffraction) as a one-dimensional example. There, the incident light is essentially uniform across the slit, but you don’t see a uniform laser dot on the wall – you see a sinc² pattern. The slit profile is basically a Heaviside (“top-hat”) function, whose Fourier transform is a sinc function; the measured intensity is the amplitude times its complex conjugate, hence sinc² on the wall.
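The single-slit example is easy to reproduce numerically. In this sketch (grid size and slit width are arbitrary), the far-field amplitude is computed as the FFT of a top-hat aperture, and the detected intensity is its squared modulus – a sinc² pattern peaked at the center.

```python
# Fraunhofer diffraction of a 1D top-hat slit via FFT; sizes arbitrary.
import numpy as np

n = 4096
aperture = np.zeros(n)
aperture[n // 2 - 32 : n // 2 + 32] = 1.0      # uniform illumination over the slit

amplitude = np.fft.fftshift(np.fft.fft(aperture))  # far-field amplitude ~ sinc
intensity = np.abs(amplitude) ** 2                 # amplitude x complex conjugate

# The central peak (at the shifted zero frequency, index n//2) dwarfs the
# sidelobes: for sinc^2 the first sidelobe is only ~4.7% of the peak.
peak_index = int(np.argmax(intensity))
print(peak_index, n // 2)
```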
The point-spread function (also called a “beam” in radio/submm) is the pattern a single unresolved source of light (smaller than the diffraction limit, ~1.22 λ/D) will make on the detector due to the optics of the aperture. An idealized circular (2D) aperture produces an Airy-disk PSF, whose central peak can be approximated as a 2D Gaussian. For a real-world telescope, the PSF can be quite complex due to things like imperfections in the reflector surface, obscuration by support struts, etc. What this means for us in practice, when we are trying to measure the actual flux of an object, is that even if you observe a tiny point source in the sky, a (hopefully small) amount of the power coming through the aperture will be spread out over the detector in the pattern of that instrument’s PSF. You need to correct for the fraction of emission the PSF spreads outside of your integration region during aperture photometry. For very large regions the effect gets much smaller, but even for Herschel images with ~20″ resolution, the PSF can make a difference of a few percent in regions out to a couple of arcminutes.
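For a circularly symmetric Gaussian approximation to the PSF, the aperture correction has a closed form. This is a hedged sketch with a made-up beam FWHM and aperture radius (not any real instrument's values): the fraction of a point source's flux inside radius r is 1 − exp(−r²/2σ²), and you multiply your measured flux by the reciprocal.

```python
# Gaussian-PSF aperture correction sketch; FWHM and radius are placeholders.
import numpy as np

fwhm = 18.0                                        # arcsec, made-up beam FWHM
sigma = fwhm / (2.0 * np.sqrt(2.0 * np.log(2.0)))  # Gaussian sigma from FWHM
r_ap = 22.0                                        # arcsec, photometry aperture radius

enclosed = 1.0 - np.exp(-r_ap**2 / (2.0 * sigma**2))  # flux fraction inside aperture
aperture_correction = 1.0 / enclosed                  # multiply measured flux by this
print(f"enclosed: {enclosed:.4f}, correction: {aperture_correction:.4f}")
```

With these toy numbers the correction comes out at the percent level, consistent with the few-percent effect quoted above for Herschel-like beams; a real PSF with extended wings would need the instrument's measured encircled-energy curve instead.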
For a given optical system, all else being equal, increasing the size of your dish means more collecting area, which helps sensitivity – you can detect fainter things. A larger diameter also directly shrinks the theoretical smallest scale (1.22 λ/D) you can resolve. Another way to say this is that a larger dish means a smaller beam or PSF in our idealized telescope. At some point, building a larger dish becomes prohibitive because of cost, land, or engineering constraints. Interferometers such as the VLA and ALMA get around this by combining signals from multiple separate dishes, giving effective baselines of many kilometers (or even the whole Earth, for VLBI).
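The λ/D scaling is worth a quick worked number. The helper function below is just an illustration of the 1.22 λ/D formula (the function name is mine, not from any library); applied to a Herschel-like 3.5 m mirror at 250 μm it gives roughly the ~20″ resolution quoted earlier.

```python
# Diffraction-limit helper; name and example values are illustrative.
import numpy as np

def diffraction_limit_arcsec(wavelength_m, diameter_m):
    """Angular resolution of an ideal circular aperture, 1.22 lambda/D, in arcsec."""
    theta_rad = 1.22 * wavelength_m / diameter_m
    return np.degrees(theta_rad) * 3600.0

# A 3.5 m mirror (Herschel-sized) observing at 250 microns:
theta = diffraction_limit_arcsec(250e-6, 3.5)
print(f"{theta:.1f} arcsec")   # ~18 arcsec
```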
In the following example images of our funny source, the beam size gradually increases to the right – as would happen if you observed with progressively smaller telescopes, or, equivalently, if the same telescope observed the source at smaller and smaller angular sizes (i.e., farther and farther away). Not only does the image become more ‘blurry’ as the beam gets larger, but the flux from the source gets spread out over more pixels in the image. Be aware that more pixels does not necessarily mean better spatial resolution. See the tutorial on convolution and regridding for more discussion of these points.
Herschel/SPIRE imaging bands are not limited to exactly 250, 350, and 500 μm – they actually extend about 50 μm to either side. In addition, the filter transmission is not uniform across the band. The exact profile of the wavelength-dependent transmission in a given imaging band is called the Relative Source Response Function (RSRF). The incoming emission spectrum can have a variety of shapes, such as a power law, and the flux density recorded in the bolometer is the integral of the source’s spectrum weighted by the RSRF. The data pipeline doesn’t know what shape the source spectrum has, however, so it just assumes the spectrum is flat – that is, νSν = constant. It also assumes that the source is a point source. (This calibration is done at the level-1/detector-timeline data stage.) The monochromatic flux densities you get out of SPIRE – Sν0 at the 250, 350, and 500 μm band centers – are produced with these assumptions, weighted by the RSRF. In reality, though, your source is possibly not a point source, and it almost certainly doesn’t have a flat spectrum. For example, blackbody emission (planets, asteroids) at SPIRE wavelengths (roughly the Rayleigh-Jeans limit) follows a power law with α ≈ 2, while cold (~20 K) dust more typically has α ≈ 3–4, depending on the dust properties. To get a usable ‘monochromatic’ flux (for example, a ‘250 micron flux’), you need to correct for these two assumptions.
The correction for the shape of the spectrum is called a ‘color correction’. The assumed functional form for the source spectrum around the reference frequency ν0 is S(ν) = S(ν0)·(ν/ν0)^α. Again, α = -1 is assumed by the pipeline, but your true source spectrum may be something like α = 2 (blackbody), in which case you apply the appropriate correction factor. See the SPIRE Handbook, Section 5.2.6, “Colour correction for power-law spectra” (and 5.2.7 for greybodies), for a fuller explanation and the pertinent equations. HIPE contains a table of color correction factors for various values of α, as well as for modified blackbodies as functions of temperature and β.
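The structure of the color correction can be sketched numerically. This toy uses a made-up top-hat RSRF, not the real SPIRE response (use the Handbook/HIPE tables for actual values): since the pipeline assumes S ∝ ν⁻¹, the corrected flux for a true power-law index α is S_true(ν0) = K_C · S_pipe(ν0), with K_C the ratio of RSRF-weighted integrals of the two spectral shapes.

```python
# Toy color-correction factor with a placeholder top-hat RSRF.
import numpy as np

c = 2.998e8
nu0 = c / 250e-6                              # band-center frequency (250 um)
nu = np.linspace(0.85 * nu0, 1.15 * nu0, 2001)  # made-up bandpass
rsrf = np.ones_like(nu)                       # placeholder top-hat RSRF

def k_c(alpha):
    """K_C = int R (nu/nu0)^-1 dnu / int R (nu/nu0)^alpha dnu (uniform grid)."""
    num = np.sum(rsrf * (nu / nu0) ** -1.0)   # pipeline-assumed shape, alpha = -1
    den = np.sum(rsrf * (nu / nu0) ** alpha)  # true source shape
    return num / den

print(f"K_C(alpha=2): {k_c(2.0):.4f}")        # close to unity for this toy RSRF
```

With the real, non-uniform SPIRE RSRF the corrections come out at the few-percent level; the point of the sketch is only the structure of the calculation, with K_C(−1) = 1 by construction.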
Herschel/HIPE assumes your data is a point source for reduction purposes. If your source is actually extended, you need to apply a small gain correction (called the “K4” correction) to the bolometer signals. This is because point-source observations care primarily about the peak signal in each bolometer, while extended-source observations care about the consistency of the integrated signal between individual bolometers, which reduces striping in maps. (See the SPIRE Handbook, Section 5.2.5, for a fuller explanation.) The K4 correction factor also depends on the assumed shape of your source spectrum; you can find the table of corrections for various values of α in the SPIRE Handbook (possibly out of date) or within HIPE.
When converting from Jy/beam units to MJy/sr – that is, using the effective beam/PSF area – note that the beam size also depends on the assumed spectral index. Choose the appropriate beam area from the tables in HIPE.
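The conversion itself is straightforward once you have a beam area. This sketch uses a Gaussian beam with a placeholder FWHM (not an official SPIRE beam; take the proper, spectral-index-dependent beam area from HIPE): a Gaussian beam's solid angle is Ω = π·FWHM²/(4 ln 2), and 1 Jy/beam = 10⁻⁶/Ω MJy/sr.

```python
# Jy/beam -> MJy/sr conversion sketch; the beam FWHM is a placeholder.
import numpy as np

fwhm_arcsec = 18.0                                   # made-up beam FWHM
fwhm_rad = np.radians(fwhm_arcsec / 3600.0)
omega_beam = np.pi * fwhm_rad**2 / (4.0 * np.log(2.0))  # Gaussian beam solid angle, sr

jy_per_beam = 1.0
mjy_per_sr = jy_per_beam * 1e-6 / omega_beam         # 1 Jy = 1e-6 MJy
print(f"{mjy_per_sr:.1f} MJy/sr per Jy/beam")        # ~1e2 for an ~18 arcsec beam
```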