Convolution and Regridding

Changing Image Resolution (Convolving) and Regridding

 

First, a few resources.
* Here is the Kapteyn package documentation on reprojecting with the maputils module
* And cygrid’s jupyter notebook

NASA SkyView query:  http://skyview.gsfc.nasa.gov/current/cgi/query.pl

Normally for example data sets I advertise the wonderful multiwavelength LITTLE THINGS images freely available on the NRAO website.  The LT maps are quite beautiful, but this time I will instead use a couple of random maps pulled from NED that are smaller in size – a couple hundred kB instead of a few MB.
A couple of images I think are nice for this purpose:
* M101, Herschel SPIRE 250um (KINGFISH) from NED query [1.4MB], in MJy/sr.  Also, the M101 500um is only 426kB
* M101, VLA 21cm [391kB]
* M100, Herschel PACS 100um from NED query, in Jy/pix [4MB…]
* M83, MIPS 160um in MJy/sr from NED query [169kB]

 

 

Overview/Ramble


When calculating fluxes or luminosities from astronomical images – to create an SED, for example – you will almost certainly need to perform some formatting on your images before you extract fluxes, perform image arithmetic, etc.  Here I will cover two common operations: image convolution and regridding.  These are sometimes wrapped together in developed software packages, but they are actually separate concepts.  One very important consideration here is terminology.  In everyday life, the “resolution” of an image might be described as “the fineness of detail” it contains.  That is true, but it’s a slightly subtle point.  To understand resolution, we will need the concept of point spread functions/beams (see the post on observing), and we need to talk about what precisely we mean by the term ‘pixel’.

 

Pixels


Fun fact: Wikipedia tells me that the first use of the word pixel, which simply comes from a shortening of ‘picture element’, was by JPL scientists in the 1960s.

One very important point to understand is that higher resolution does not necessarily equate to more pixels.  This is because ‘pixel’ can refer to two different concepts in astronomy.  One usage deals with image capture [hardware] – that is, the physical cells that capture light in a detector.  (They can also be lovingly referred to as photon buckets.)  Squeezing more (smaller) physical pixels into a given area on your detector will typically give you sharper resolution. Think megapixels in cameras.

The other meaning of pixel deals with image processing and image reconstruction [software].  Pixels in the sense of a digital image refer to the data grid that represents your image information.  You can take your favorite cat photo from the internet and try to scale it in Photoshop to twice the size – giving it more pixels – but you’re not adding any new information to the new pixels that wasn’t already there, so when you zoom in the overall picture will still look just as blurry as the original.  (You can of course apply some clever interpolation algorithms to give the pixels the appearance of being smoother, but that’s a different issue.)  You can combine separate images that are slightly offset to make a new image which has slightly finer resolution than any of the originals – the most common way to do this is called “drizzling”, and is used often in Hubble images, for example.  In radio astronomy (and sub-mm and some infrared instruments), the detector is not directly capturing photons as particles like a CCD camera does.  Usually it’s detecting light as a wave, and the image is reconstructed later from the raw voltages.  In practice, this means that a scientist could reconstruct an image with arbitrarily small digital pixels – just as you could scale your cat photo to an arbitrarily large size – but the extra pixels wouldn’t necessarily contain any new information.

 

To expound a little more, taking the example of consumer digital cameras, a sensor that boasts 20 megapixels is regarded as higher resolution than one with, say, 15 megapixels.  This is only part of the story, though, as it depends on things like the physical size of the pixels – it’s conceivable to have these two camera sensors made from the same chip of pixels, just cut to separate 15 and 20 Mp areas.  Considering just the sensors themselves and ignoring the other camera optics, that pair of chips would have the same inherent ability to resolve detail, only different total areas.  But let’s assume now that they have the same overall area.  It makes sense that, say, a one-inch-square chip which contains 1000 pixels would produce a much lower resolution photo than a one-inch-square chip with 1 million pixels.
[Figure: Guess the image!]
However, you can’t continue the trend forever – engineering issues aside, at some point the pixels would become smaller than the wavelength of the light they are trying to capture.  Beyond that limit, squishing in more pixels won’t help you.

“A gajillion pixels is overkill.” – PJC

Besides, you could open up your photo in some image processing software and divide each pixel into, say, four (or 9, 25, etc…) equal parts if you wanted to.  That would make your image have MANY more pixels.  But that wouldn’t be increasing your resolution at all, because those new smaller pixels are not adding any information.
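To see this concretely, here is a toy numpy sketch that subdivides each pixel of a tiny (made-up) image into four equal parts – the array ends up with four times as many pixels but contains exactly the same information:

import numpy as np

image = np.array([[1.0, 2.0],
                  [3.0, 4.0]])

# split each pixel into a 2x2 block of identical values: 4x as many pixels, zero new information
image_subdivided = np.repeat(np.repeat(image, 2, axis=0), 2, axis=1)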

Let’s reverse that notion – if you are now faced with the task of observing light of longer wavelengths, a single photon can more easily bleed over multiple pixels if they remain small.  Now an infinitesimally small point of light in the real world will register in several pixels, according to some pattern which depends on the optics.  This is the PSF or beam pattern mentioned in other posts.  (This description is vastly simplifying the whole process, of course – for starters, the detector technology of ‘pixels’ is quite different between optical, FIR, and radio telescopes.  Please consult a proper optics or observational astronomical text for a more complete picture.)

So, let’s back up a step.  In a basic sense, the sensor in your typical camera is recording the incoming light of whatever it’s pointing at.  However, if the camera is out of focus or there are defects in the optics, no matter how many pixels you have, the image will appear blurry.  And if your pixels are larger than the innate scale for detail afforded by your optics, your image will be ‘blurrier’ than if you were to employ smaller pixels.  To summarize, the final resolution of your image is determined by a complex interplay between your *entire* optics system, including lenses (possibly interferometer spacing) and recording hardware.

Let’s look at an example of increasing the number of pixels in an image.  Below on the left is an imaginary faraway source (top panel) observed with some telescope.  The resolution (beam FWHM) is shown by the dotted circle – in this case the true source would be a point source with regards to the telescope’s resolution.  The final image is in the bottom panel.  Note that obviously no fine detail about the source can be determined from this image such as if there are 2 or 3 separate objects contained, etc.  Now look at the image on the right side – this one has been produced by processing the raw signal onto a finer pixel grid.  (You can often do this with radio data, for example.)
[Figures: Pixel Example 1 and Pixel Example 2]

Do you gain anything by doing this?  Well, note that you’re still not able to determine any fine detail about the true source – on that count there is no improvement over the coarser image on the left.  If you are trying to get a precise sum of the flux in a circular aperture around the source, though, this may help you out a bit, as the smaller pixels would fill the round aperture more fully than larger pixels.  The downside of having smaller pixels in this case would be that processing time could increase dramatically.

 

Now let’s move on to the discussion of how to compare images from different instruments/settings.  The tools we will use are convolution and regridding.  These can sometimes be related (such as when, in radio interferometer data reduction, you choose the pixel size so that many pixels cover the beam area), but remember they are separate concepts.

 

Convolution


For images that are taken with different resolutions, you will need to convolve or “smooth” the sharper images before comparing fluxes.  Comparative photometry requires this because a fixed aperture on the poorer-resolution image may capture less of the source’s flux than the same aperture on the sharper image.  That is, when observing some object with a given inherent luminosity and angular size, if you observe with low spatial resolution (big beam/PSF), then some of the observed flux from the object will be scattered out to larger radii.  If you observe the same object with much higher resolution, this effect is much less pronounced.  So, for photometry, if you want an ‘apples-to-apples’ comparison between different spectral bands, you need to convolve/smooth your images down to the lowest resolution (largest beam size) before extracting your fluxes.
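For the common case of (approximately) Gaussian beams, the smoothing kernel that takes you from the sharper resolution to the coarser one has a FWHM given by the quadrature difference of the two beam FWHMs.  A minimal sketch in python (the FWHM values here are made up purely for illustration):

import numpy as np

# hypothetical beam FWHMs, in arcsec
fwhm_sharp  = 6.0     # the higher-resolution map
fwhm_coarse = 18.0    # the lowest-resolution map in your set

# for Gaussian PSFs, FWHMs add in quadrature under convolution:
#   fwhm_coarse**2 = fwhm_sharp**2 + fwhm_kernel**2
fwhm_kernel = np.sqrt(fwhm_coarse**2 - fwhm_sharp**2)   # ~17.0 arcsec smoothing kernel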

To visualize what changes in a PSF size mean for an image, let’s consider the following image, also mentioned in the post about observational data.  I have taken a sample source image and applied Gaussian blurs of increasing PSF sizes.

[Figure: Convolution Example 1]
The leftmost image is an imaginary ‘astrophysical’ source, as it would appear at infinite resolution. Now, note how the image gets blurred out as the beam size increases in the images to the right. The 2D Gaussian PSF FWHM is denoted by the dotted circles in the lower right corners.  Already in the second panel, in which the PSF is relatively small, the details are difficult to make out. Where is the edge of the bulge in the galaxy? Are the spiral arms narrow or fat?  It’s hard to tell in this panel.  But very notably, the thin comet shape has almost completely disappeared.  This demonstrates the importance of what is called the beam filling factor.  If a source populates the beam too sparsely (small filling factor), even if the overall structure is around the spatial scale of the beam, the instrument will be hard pressed to recover a detection because the flux gets spread over the whole beam (and some outside, too).  In the third panel, even though the beam FWHM is still smaller than the ‘alien’ face or galaxy, it’s already impossible to determine the detail of either blob. (The comet is nowhere to be seen now.)  The two larger features now closely resemble 2D Gaussians, though the left one is obviously not a point source because the emission goes notably beyond the beam size. The fourth panel shows the Gaussian convolution with a kernel roughly the size of the whole collection of sources.  Now the whole thing looks roughly like one single 2D Gaussian beam.

Now note the colorbars in each of those panels – in these plots I chose to let the min/max values be independent between panels in order to focus just on the blurring or flux spreading aspect of the convolution.  But if we look at the exact same figure made with all the plots scaled to the same absolute min/max, something else becomes apparent.  See the following figure:
[Figure: Convolution Example 2]
Here you can clearly see the emission in each panel appear to get dimmer – this is precisely because the larger beam size is spreading the original flux over a larger area (both within the beam, and also to the pixels without). Keep this in mind when you are proposing for observing time on a telescope!

Convolution isn’t only used to match two image resolutions, though – it can also be used to reduce the noise in a single image, if you don’t mind sacrificing some resolution – this is often done in 3D data cubes along the spectral axis.  (c.f. Hanning smoothing, for example.)  Since the ‘blurring out’ of your image by convolving to a larger PSF will ‘smooth out’ your signal, the smaller-scale fluctuations you might see in your original image will average out to a greater degree as the PSF increases.  So that disadvantage of small-scale features ‘disappearing’ from earlier can be repurposed here as a beneficial effect.
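As a concrete sketch of the spectral-axis case, a simple Hanning-style smooth with plain numpy could look like the following; cube is a hypothetical 3D array with the spectral axis first:

import numpy as np

# 3-channel Hanning-style window [0.25, 0.5, 0.25], normalized so the total flux is preserved
kernel = np.hanning(5)[1:-1]
kernel /= kernel.sum()

# smooth each spectrum (axis 0 = spectral axis) of the hypothetical 'cube' array
cube_smoothed = np.apply_along_axis(
    lambda spec: np.convolve(spec, kernel, mode='same'), axis=0, arr=cube)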

In practice, image convolution can be done with a variety of tools.  In popular image processing software packages, you may encounter it as a “Gaussian blur”.  In python, common implementations are scipy.ndimage.convolve or astropy.convolution.convolve.  I recommend the latter.  Below we’ll go through examples of how to do this in python.
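For instance, here is a minimal sketch using astropy’s convolution tools; the pixel scale and kernel FWHM below are made-up numbers, and image is assumed to be a 2D numpy array you have already loaded:

import numpy as np
from astropy.convolution import Gaussian2DKernel, convolve

# made-up numbers: smooth with a kernel of FWHM = 17 arcsec on a 2 arcsec/pixel grid
fwhm_arcsec = 17.0
pixscale_arcsec = 2.0
sigma_pix = (fwhm_arcsec / pixscale_arcsec) / (2.0 * np.sqrt(2.0 * np.log(2.0)))  # FWHM -> sigma

kernel = Gaussian2DKernel(x_stddev=sigma_pix)
image_smoothed = convolve(image, kernel, boundary='extend')   # 'image' = your 2D data array

One nice feature of astropy’s convolve over the scipy version is that it can interpolate over NaN-blanked pixels rather than propagating them through your map.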

 

 

Regridding


Why would you want to reorganize an image to a different pixel grid?  There are a number of reasons.  For example, maybe you want to perform some photometry using the same regions with identical pixel grids, to ensure consistency in your measurements.  Or, maybe you want to divide one image by another to make a pixel-by-pixel plot (making a color-color plot, for fitting functions, etc.).  Or maybe one of your images is rotated and you just want to compare them side-by-side.  Whatever the reason, reprojection to a different coordinate grid is relatively easy with the available packages in python.  However, it’s critical to conserve flux by keeping in mind the flux or brightness units of your image.

Let’s take a simple theoretical example pixel with a flux density of 10 mJy.  And let’s assume this is a single pixel from a map with units of [Jy/pixel].  Assuming uniform brightness across the entire pixel as is typically done, if you divide that pixel into four equal sub-pixels, each new one will NOT have a flux density of 10 mJy/pix, but will instead be decreased by its area fraction to 10/4 = 2.5 mJy/pix.  If your map is in units of brightness, however – say, 10 MJy/sr – and you regrid to the smaller pixel sizes, each new pixel still remains 10 MJy/sr.  That’s because brightness is a unit of flux per area – even though the flux is decreasing by say 1/4, the area is also decreasing by 1/4.   See below for another example:

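As a worked sketch of those numbers in python (the values are just the toy numbers from the paragraph above):

# a single pixel in a [Jy/pixel] map, split into four equal sub-pixels:
flux_per_pix = 10.0                    # mJy per (old) pixel
flux_per_subpix = flux_per_pix / 4.0   # = 2.5 mJy per new pixel -- the flux is shared by area

# the same situation in brightness units [MJy/sr]:
brightness_old = 10.0                  # MJy/sr in the old pixel
brightness_new = brightness_old        # still 10 MJy/sr -- flux AND area both shrank by 4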

 

Another technical point to note is that you should try to retain proper Nyquist sampling of your beam/PSF.  That is, you want to have at least two pixels across each beam, otherwise you are losing spatial information. That’s really on the low side, though.  To be safe, three to five pixels per beamwidth is common.  The penalty for increasing the number of pixels in this manner is increased computation time in your subsequent processing.  That being said, you could regrid such that each pixel is effectively one beamwidth across, making each pixel *independent* for the purpose of fitting functions in a spatially resolved way, etc.
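A quick way to check your sampling is to compare the beam FWHM to the pixel scale in the header.  This sketch assumes the beam and pixel size live in BMAJ and CDELT2 keywords, both in degrees, and the filename is hypothetical – your header may differ:

import astropy.io.fits as pyfits

hdr = pyfits.getheader('yourmap.fits')            # hypothetical file with beam info in its header
pix_per_beam = hdr['BMAJ'] / abs(hdr['CDELT2'])   # beam FWHM / pixel size, both in degrees
print(pix_per_beam)                               # >= 2 for Nyquist; ~3-5 is a comfortable choice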

 

Recap


REGRIDDING

Original: [Mona Lisa (Original)]

Regridded Only: [Guess the image]

CONVOLVING

Convolved Only: [Mona Lisa (Convolved)]

Convolved AND Regridded: [Mona Lisa (Convolved and Regridded)]

Mona Lisa image credit: Wikimedia Commons

 

Caveats


Coming soon!

 

Examples


—– Work in progress —–

Let’s use the Kepler SNR images as an example – comparing the IR and radio.  The VLA image has larger pixels and is slightly rotated compared to the IR.  First, to work with any data in maputils, you need to load it in using their tool called FITSimage(), so that it’s formatted correctly.  You can either give FITSimage() a path directly to the fits file, or, if you’ve already loaded the data & header elsewhere in your code, you can use the externaldata and externalheader parameters:

from kapteyn import maputils
map_radio=maputils.FITSimage('kepler_radio.fits')

OR

from kapteyn import maputils
import astropy.io.fits as pyfits
kepler_radio_data,kepler_radio_hdr=pyfits.getdata('./<PATH>/kepler_radio.fits',header=True)
map_radio=maputils.FITSimage(externaldata=kepler_radio_data,externalheader=kepler_radio_hdr)

Then of course you need to load the IR image:

map_IR=maputils.FITSimage('kepler_ir.fits')
# OR
#kepler_ir_data,kepler_ir_hdr=pyfits.getdata('./<path>/kepler_ir.fits',header=True)
#map_IR=maputils.FITSimage(externaldata=kepler_ir_data,externalheader=kepler_ir_hdr)

Once the data are loaded, it’s really simple to reproject/regrid the IR image to the radio header – in kapteyn it’s just a one-line call to the FITSimage object’s reproject_to() method:

map_IR_reprojected = map_IR.reproject_to(map_radio.hdr)

The data array is actually stored in map_IR_reprojected.dat, so if you don’t need the extra info that reproject_to() includes, you can just do this right away when you call the function (this is also what I do):

map_IR_reproj = map_IR.reproject_to(map_radio.hdr).dat

Alternatively, use the reproject package like so:

from reproject import reproject_interp
map_IR_reprojected, map_IR_footprint = reproject_interp(pyfits.open('kepler_ir.fits'), map_radio.hdr)

OR, using the exact reprojection method (not necessary here, but you will probably want this for precise flux conservation when doing comparative photometry):

from reproject import reproject_exact
map_IR_reprojected, map_IR_footprint = reproject_exact((kepler_ir_data, kepler_ir_hdr), map_radio.hdr)

Now the IR map’s spatial data array follows exactly the radio map’s header (though of course the flux units haven’t been changed). It’s ready to plot or save as you normally do.
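For example, a minimal way to save the regridded map (the output filename is arbitrary, and map_radio.hdr is assumed to already be a plain 2D header – see the notes below):

import astropy.io.fits as pyfits

# write the regridded IR data with the radio map's header
pyfits.writeto('kepler_ir_regrid_to_radio.fits', map_IR_reproj, header=map_radio.hdr, overwrite=True)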

*** A couple things to note ***

* For kapteyn‘s reproject_to, the header ‘NAXIS’ card needs to be exactly 2 (that is, a 2D image, not a 3D or 4D data cube).  It’s best to convert it to 2D properly, but if your image is truly just 2D yet retains empty spectral and Stokes axes in the header, you can ‘cheat’ by just manually setting NAXIS to 2:

if map_radio.hdr['NAXIS']>2: map_radio.hdr['NAXIS']=2
if map_IR.hdr['NAXIS']>2: map_IR.hdr['NAXIS']=2

* Also, if you are planning to do any flux extraction on your reprojected/regridded image, you will need to be very careful of the units.  If the units of the original data are flux [W/m^2, Jy, or similar], then you will need to scale by a factor of (the new pixel area)/(the old pixel area).  You don’t do this for images with units of brightness [W/m^2/sr, MJy/sr, or similar].

# --> Jy_2 = Jy_1_reproj*([sr/pix_2]/[sr/pix_1])
# FIRST, find the degrees per pixel (degperpix_old, degperpix_new) from the old and new headers
map_original_flux, original_hdr = pyfits.getdata('....', header=True)
map_original_brightness = map_original_flux / degperpix_old**2            # e.g. [Jy/pix] -> [Jy/deg^2]
map_original_fitsimage = maputils.FITSimage(externaldata=map_original_brightness, externalheader=original_hdr)
map_reprojected_brightness = map_original_fitsimage.reproject_to(newheader).dat
map_reprojected_flux = map_reprojected_brightness * degperpix_new**2      # back to [Jy/pix]

 
