Multi-Color Images

This post will describe how to combine multiple images with different color schemes to get a final product that you could call by one of these names:
Multi-color images, 2-color images, 3-color images, 4-color images, N-color images

The scikit-image documentation has some nice pages for picking colors & converting between HSV, RGB, etc.

For this tutorial, I’m using free .fits images hosted by the Chandra site.

Note that all the images for a given object on that site are already reprojected to the same WCS and pixel grid.  For making your own N-color images with your own data though, in general you will have to first reproject all your images to one common WCS/pixel grid.  See my tutorial on reprojection for some discussion of how to do this.

from matplotlib import pyplot as plt
import numpy as np
import astropy.io.fits as pyfits
from skimage import color

The standard multi-color image is called an RGB or 3-color image.  To make one in python, you simply stack three (grayscale) images into a single array – one each for the Red, Green, and Blue channels.  Let’s try making one for Kepler’s Supernova Remnant, using infrared (Spitzer MIPS 24$\mu$m) as the R channel, low-energy Chandra X-rays as the G channel, and high-energy Chandra X-rays as the B channel.  [Credit: NASA/ESA/JHU/R.Sankrit & W.Blair  ]

### Load in the data
kepler_hdr = pyfits.getheader('kepler_ir.fits') #All these images have the same header
kepler_ir=pyfits.getdata('kepler_ir.fits') #MIPS 24um
#kepler_opt=pyfits.getdata('kepler_opt.fits') #WFPC2 F658N 
kepler_XhiE=pyfits.getdata('kepler_xray_he.fits') #High-energy X-rays (4-6keV)
kepler_XloE=pyfits.getdata('kepler_xray_le.fits') #Low-energy X-rays (0.3-1.4keV)

### Directly making an RGB array to plot (no rescaling)
kepler_simpleRGB=np.transpose([kepler_ir,kepler_XloE,kepler_XhiE],(1,2,0))
#     --OR--     #
### Same, but explicitly setting the slices piece by piece
kepler_simpleRGB=np.zeros((kepler_ir.shape[0],kepler_ir.shape[1],3))
kepler_simpleRGB[:,:,0]=kepler_ir; kepler_simpleRGB[:,:,1]=kepler_XloE; kepler_simpleRGB[:,:,2]=kepler_XhiE

Standard RGB stacks generally look very nice when each of the R, G, and B frames represent more or less the true red, green, and blue bands in reality – such as the lovely SDSS 3-color images.  However, there is no need to force other bands in the spectrum (such as the UV or radio) to adhere to the same pure red, green, and blue false color.  Take for instance the example above (also seen in the left panel below) – I personally think this does not look aesthetically pleasing at all.  But beyond aesthetics, you may have a ‘scientifically’ valid reason to despair over this color scheme – some time ago, a team I was working with made 3-color images of many galaxies in our survey and they just looked a bit off.  In the words of our PI: “Stars aren’t supposed to be green”.   To remedy this, we can imbue each channel with a color of our own choosing instead of the standard pure R, G, or B.  (That being said, the standard RGB does work stunningly well in many cases.)  Here is an example of a simple tweak from the standard RGB to a slightly more pleasing Red-Yellow-Blue color scheme.


Steps to create your own multi-custom-color images:

  1. Convert each separate image from greyscale (original fits) to its own RGB
  2. Colorize the RGB grey image by converting to HSV colorspace, changing the hue (from grey to color) & saturation
  3. Convert each back to RGB color space – at this point the data values should be between 0 and 1
  4. Add the separate RGB sky images together to create one master RGB
  5. Re-normalize each of R,G,B to [0,1] scale, but NOT each individual band from 0 to 1 independently…
    –> Scale each band from 0 to the FRACTION (max value in that band / max overall value from all 3 bands)
    –> This is what lets the colors remain that color after scaling

A color-picker page (an HTML color chart, for example) is a good resource to consult for finding colors you might want to use.
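The steps above can be sketched numerically. This is a minimal toy version: it uses 2×2 arrays in place of real images and a simple multiplicative tint as a stand-in for the HSV-based colorizing described later, but the combine-and-rescale logic (steps 4–5) is the real thing:

```python
import numpy as np

# Toy grayscale images, already rescaled to [0, 1] (stand-ins for real FITS data)
img1 = np.array([[0.0, 0.5], [1.0, 0.25]])
img2 = np.array([[1.0, 0.0], [0.5, 0.5]])

# Steps 1-3: give each grayscale image its own color.  Here we tint by
# multiplying with an RGB triplet -- a cheap stand-in for the HSV method below.
brick = np.array([204., 20., 20.]) / 255.
steel = np.array([86., 122., 255.]) / 255.
rgb1 = img1[:, :, None] * brick     # shape (2, 2, 3)
rgb2 = img2[:, :, None] * steel

# Step 4: combine the colorized frames into one master RGB (mean of each channel)
combined = (rgb1 + rgb2) / 2.

# Step 5: scale each band to [0, band_max/overall_max].  Since the minimum here
# is 0, dividing the whole cube by its single overall max does exactly this,
# and the RELATIVE band levels (i.e., the hues) are preserved.
combined /= combined.max()
```

Note the step-5 shortcut: one uniform division keeps the band ratios (and hence the hues) intact, which is exactly what independent per-band stretches would destroy.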



Rescaling image levels

You will probably have to rescale the data in some fashion to draw out the features you want.  For example, maybe you need to show changes in relatively faint features AND changes in relatively bright features in a sensible way.  Typical scalings are log, sqrt, power-law (gamma), etc.  I like Min-Su Shin’s img_scale.py for the job.

skimage, which I use for other tasks, also has rescaling capabilities – a useful one is rescale_intensity()

It may take you some time to play around with levels and scaling functions to determine the best settings to use for each image.  I usually just open individual images in DS9 and fiddle with the settings there when I want to do a ‘by eye’ check.

One very simple and important way to tweak your scaling is to set your desired minimum and maximum image values.  For plotting a single 2D frame with imshow(), you could achieve this by setting vmin=… and vmax=…  Since we will be working with 3D [RGB] frames, we can instead re-scale each image’s data values beforehand using img_scale.<function>() or skimage.exposure.rescale_intensity().  To make a simple linear re-scaling between min_value and max_value, you could do either of these:

rescaled_image=img_scale.linear(original_image, scale_min=min_value, scale_max=max_value)
# OR
rescaled_image=rescale_intensity(original_image, in_range=(min_value,max_value))

Note that img_scale.linear() will rescale the image values linearly between 0 and 1, which is useful or even required for some other operations.  One example: the colorizing process we are about to discuss requires float-valued images to lie between -1 and 1, which a [0, 1] rescale satisfies.

You can also rescale by logarithm, square root, power law, sinh, and asinh easily with img_scale.log(), img_scale.sqrt(), img_scale.power(), img_scale.sinh(), and img_scale.asinh().  Here is a simple visual of how square root scaling can be used to draw out the fainter features a little more, without even resorting to tweaking the min/max values.
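If you want to avoid the img_scale dependency, the core of its sqrt scaling is easy to sketch in plain numpy (the function name and defaults below are my own, not img_scale's API):

```python
import numpy as np

def sqrt_scale(img, scale_min=None, scale_max=None):
    """Clip to [scale_min, scale_max], then square-root scale onto [0, 1].
    A plain-numpy sketch of the kind of stretch img_scale.sqrt performs."""
    if scale_min is None: scale_min = np.nanmin(img)
    if scale_max is None: scale_max = np.nanmax(img)
    out = np.clip(img, scale_min, scale_max) - scale_min
    return np.sqrt(out / (scale_max - scale_min))
```

The sqrt stretch maps a pixel at 25% of the maximum to 0.5 rather than 0.25, which is why it lifts the faint end.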

SQRT rescaling
from matplotlib import pyplot as plt
import numpy as np
import as pyfits
import img_scale
import pywcsgrid2
from mpl_toolkits.axes_grid1 import ImageGrid, axes_grid


data_sqrt_scale=img_scale.sqrt(kepler_ir, scale_min=0., scale_max=kepler_ir.max())

fig1 = plt.figure(1)
g = ImageGrid(fig1, 111, nrows_ncols=(1,2), direction="row", axes_pad=0.02,
              add_all=True, share_all=True, cbar_location="right", cbar_mode="each",
              cbar_size="4%", cbar_pad=0.02,
              axes_class=(pywcsgrid2.Axes, dict(header=kepler_hdr)))

for i,dat,label in zip(range(2),[kepler_ir,data_sqrt_scale],['Original','SQRT Scale']):
    ax=g[i]
    im=ax.imshow(dat, origin='lower', interpolation='nearest')
    ax.set_ticklabel_type("hms", "dms"); ax.set_xlabel(''); ax.set_ylabel(''); 
    ax.xaxis.set_tick_params(color='0.8'); ax.yaxis.set_tick_params(color='0.8') 
    label=ax.text(0.05,0.95,label,ha='left',va='top',color='0.8', transform=ax.transAxes,size=10,)
    cbar=g.cbar_axes[i].colorbar(im)
    cbar.set_label_text('Some Units',color='k',size=9);

plt.show()
plt.clf(); plt.close()


One very common rescaling is by percentiles: for example, 0.1%–99.9%, 5%–95%, etc.  This means that if you were to make an ordered list of pixel values from lowest to highest, the min value on the new brightness scale would be the one that is 0.1% up the list, while the max value would be the one 99.9% up the list (or 5% and 95%, or whatever pair you use).  An easy way to accomplish this is to compute it with numpy.percentile().  Note that if your image has a lot of pixels set to 0.0, such as in blanked outer edges, you will want to get percentiles after calling numpy.unique() on the image, because np.percentile([0,0,0,0,0,1,2,3,4,5],20) is still 0 instead of 1.  You could do this:

min_value, max_value = np.percentile(np.unique(original_image), (0.1, 99.9))
Then you could run the rescale command of your choice like above.
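A quick numeric check of that zero-pixel pitfall:

```python
import numpy as np

# An image flattened to ten pixels, half of them blanked to 0
vals = np.array([0, 0, 0, 0, 0, 1, 2, 3, 4, 5])

p20_raw    = np.percentile(vals, 20)             # the zeros dominate the low percentiles
p20_unique = np.percentile(np.unique(vals), 20)  # collapsing duplicates avoids that
```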

Once you have rescaled your images to your satisfaction, you can check how well they mix by combining them into one RGB array. Here is a snippet of code that rescales the kepler images in various (arbitrary) ways, then combines them and plots the resulting RGB image.
* Give the Kepler IR image square scaling, using data values from 0 to its global maximum
* Give the low-energy X-ray image linear scaling from the 0.5% level to its global max
* Give the high-energy X-ray image square root scaling from the 4.0-99.5 percentile range

import img_scale
im1=img_scale.squared(kepler_ir, scale_min=0., scale_max=kepler_ir.max())
im2=img_scale.linear(kepler_XloE, scale_min=np.percentile(np.unique(kepler_XloE),0.5), 
                     scale_max=kepler_XloE.max())
im3=img_scale.sqrt(kepler_XhiE, scale_min=np.percentile(np.unique(kepler_XhiE),4.), 
                   scale_max=np.percentile(np.unique(kepler_XhiE),99.5))
kepler_rescaledRGB=np.transpose([im1,im2,im3],(1,2,0))

fig=plt.figure()
ax1=plt.subplot(121); ax1.set_title('Unscaled')
ax1.imshow(np.transpose([kepler_ir,kepler_XloE,kepler_XhiE],(1,2,0)), origin='lower', interpolation='nearest')
ax2=plt.subplot(122); ax2.set_title('Example (Arbitrary) Re-Scaling')
ax2.imshow(kepler_rescaledRGB, origin='lower', interpolation='nearest')
for ax in [ax1,ax2]: ax.set_xticklabels(''); ax.set_yticklabels('');
plt.show()




Colorizing grayscale images

I use the skimage package to do the heavy-lifting here.  Specifically:

from skimage import color
from skimage.exposure import rescale_intensity, equalize_adapthist, adjust_gamma,adjust_log,adjust_sigmoid

Once again, see the pages here and here for some examples of using skimage with images.

We can now add a ‘tint’ or imbue a grayscale image with a certain color.  We’ll do this by converting a single grayscale image into its RGB components, then rescaling levels if necessary, and ‘colorizing’ those RGB channels.  I will use HSV colorspace to do the actual colorizing.  I’ve chosen ‘brick’, ‘dandelion’, and ‘steel’ as my three colors to use.  Once again, Kepler’s SNR is the example.

#A little function to imbue or tint grayscale images 
def colorize(image, hue, saturation=1,v=1):
    ### Add color of the given hue to an RGB greyscale image.
    hsv = color.rgb2hsv(image)
    hsv[:, :, 2] *= v
    hsv[:, :, 1] = saturation
    hsv[:, :, 0] = hue
    return color.hsv2rgb(hsv)

## brick=rgb(204,20,20)=hsv(0,.9,.8)
## dandelion=rgb(230,220,100)=hsv(60/360,.4,1.)
## steel=rgb(86,122,255)=hsv(227./360,.66,1.)
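You can sanity-check RGB↔HSV pairs like the ones in those comments with the standard-library colorsys module (scalar-only, but handy when picking hues):

```python
import colorsys

# 'brick' rgb(204,20,20): expect hsv of roughly (0, 0.90, 0.8)
h, s, v = colorsys.rgb_to_hsv(204/255., 20/255., 20/255.)

# 'steel' rgb(86,122,255): expect hsv of roughly (227/360, 0.66, 1.0)
h2, s2, v2 = colorsys.rgb_to_hsv(86/255., 122/255., 255/255.)
```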


kepler_opt_sqrt=img_scale.sqrt(kepler_opt, scale_min=0., scale_max=100*np.sqrt(kepler_opt.max()))

### Convert each rescaled greyscale image to an RGB stack, then colorize it
kepler_ir_greyRGB=color.gray2rgb(im1)
kepler_XloE_greyRGB=color.gray2rgb(im2)
kepler_XhiE_greyRGB=color.gray2rgb(im3)

kepler_ir_brick=colorize(kepler_ir_greyRGB, hue=0., saturation=0.9, v=0.8)
kepler_XloE_dandelion=colorize(kepler_XloE_greyRGB, hue=60./360, saturation=0.4, v=1.)
kepler_XhiE_steel=colorize(kepler_XhiE_greyRGB, hue=227./360, saturation=0.66, v=1.)
### Comparing the infrared image in original grayscale vs colorized 'brick' red
plt.subplot(121); plt.imshow(kepler_ir_greyRGB,origin='lower',interpolation='nearest'); 
plt.title('IR - Grayscale')
plt.subplot(122); plt.imshow(kepler_ir_brick,origin='lower',interpolation='nearest'); 
plt.title('IR - Colorized to "Brick"'); plt.show(); plt.clf(); plt.close()


### Display all three colorized images separately
plt.subplot(131); plt.imshow(kepler_ir_brick,origin='lower',interpolation='nearest'); 
plt.title('IR - "Brick"')
plt.subplot(132); plt.imshow(kepler_XloE_dandelion,origin='lower',interpolation='nearest'); 
plt.title('Low X-ray - "Dandelion"')
plt.subplot(133); plt.imshow(kepler_XhiE_steel,origin='lower',interpolation='nearest'); 
plt.title('High X-ray - "Steel"')




Now, the final step is to combine them into one single array.  Since each colorized frame is an RGB set, for a given pixel we can take the mean of the ‘brick’ R value, the ‘dandelion’ R value, and the ‘steel’ R value to get a final ‘overall’ R value.  Ditto for the overall B and G values.   Note that you will need to re-scale the brightness levels to the [0,1] scale overall, but NOT each band independently.  To get your desired hue to remain in each band – the RELATIVE RGB levels – the peak values should be scaled as [the max. value in that band] / [the max. value from all 3 bands].

### Combine the colorized frames by taking the mean of the R images, mean of G, and mean of B
kepler_RYB=np.nanmean([kepler_ir_brick,kepler_XloE_dandelion,kepler_XhiE_steel],axis=0)

### Make a list of the maximum data values in each of the R,G,B frames
RYB_maxints=tuple(np.nanmax(kepler_RYB[:,:,i]) for i in [0,1,2])

### Now re-scale each frame to the overall maximum so that the final RGB set has range [0,1]
for i in [0,1,2]: 
    kepler_RYB[:,:,i]=rescale_intensity(kepler_RYB[:,:,i], 
                        out_range=(0, kepler_RYB[:,:,i].max()/np.nanmax(RYB_maxints) ));
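Here is a self-contained numeric check of that relative-rescaling rule, with a plain-numpy linear stretch standing in for skimage's rescale_intensity and toy 2×2 frames in place of the colorized images:

```python
import numpy as np

def rescale_to(arr, out_max):
    """Linearly map arr onto [0, out_max] (numpy stand-in for rescale_intensity)."""
    lo, hi = np.nanmin(arr), np.nanmax(arr)
    return (arr - lo) / (hi - lo) * out_max

ramp = np.array([[0., 0.5], [0.75, 1.]])
f1 = np.zeros((2, 2, 3)); f1[:, :, 0] = 0.4 * ramp                             # dim reddish frame
f2 = np.zeros((2, 2, 3)); f2[:, :, 1] = 0.8 * ramp; f2[:, :, 2] = 0.8 * ramp   # brighter cyan frame

combined = np.nanmean([f1, f2], axis=0)   # channel maxima: R=0.2, G=0.4, B=0.4

maxints = tuple(np.nanmax(combined[:, :, i]) for i in range(3))
for i in range(3):
    # each band goes to [0, band_max/overall_max], NOT independently to [0, 1]
    combined[:, :, i] = rescale_to(combined[:, :, i], maxints[i] / np.nanmax(maxints))
```

After the loop the brightest band reaches 1.0, while the dimmer red band tops out at 0.5, preserving the original 0.2 : 0.4 balance between channels.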



Comparing once again with the simple RGB:
Kepler RYB

from astropy.wcs import WCS

fig1 = plt.figure(1)
wcs = WCS(kepler_hdr)
ax1=fig1.add_subplot(121, projection=wcs)
ax1.imshow(kepler_simpleRGB, origin='lower', interpolation='nearest')
ax1.set_title('Simple RGB')
ax2=fig1.add_subplot(122, projection=wcs)
ax2.imshow(kepler_RYB, origin='lower', interpolation='nearest')
ax2.set_title('Red-Yellow-Blue')
for ax in [ax1,ax2]:
    rapars = ax.coords[0]; decpars = ax.coords[1]
    rapars.set_ticks(color='white'); decpars.set_ticks(color='white')
    rapars.set_ticks(number=6); decpars.set_ticks(number=6); 
    rapars.set_ticklabel(size=8); decpars.set_ticklabel(size=8); 
    rapars.display_minor_ticks(True); decpars.display_minor_ticks(True)
    rapars.set_separator(('$^\mathrm{H}$', "'", '"'))
plt.suptitle("Kepler's SNR",y=0.8,size=14)
plt.show()


It’s worth mentioning that one big plus of combining images this way (instead of in GIMP/Photoshop/etc.) is that you keep the WCS info for plotting.  That is, the coordinate positions of the pixels are retained, so you can overlay a coordinate grid.  Using the method outlined here, you should be able to integrate with your favorite reprojection and plotting packages, such as pywcsgrid2, kapteyn, aplpy, standard imshow, etc.

It doesn’t really make sense to have a single ‘multi-color’ colorbar unfortunately, but if you need to include some quantitative information about the flux scales (such as demonstrating log scaling, etc.),  you could always just plot three separate colorbars next to each other – one for each hue.

You don’t need to stop at 3 RGB frames – you can combine 2 or 4 (or more, I guess) frames to make N-color images.  Just be careful you select hue combinations that allow you to still see the different bands’ emission.  I’ve been fond of a teal-purple-orange-green scheme lately.




I’ll be using a little convenience function now to rescale by percentiles:

def rescale_image_percentiles(imagein,perclo=2.,perchi=98.):
    #Rescale an image by percentiles: perclo and perchi
    plo, phi = np.percentile(imagein, (perclo, perchi))
    return rescale_intensity(imagein, in_range=(plo, phi)) #uses skimage.exposure.rescale_intensity
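An equivalent without the skimage dependency (my own numpy sketch, which additionally clips the output to [0, 1]):

```python
import numpy as np

def rescale_image_percentiles_np(imagein, perclo=2., perchi=98.):
    """Numpy-only stand-in for the skimage version above: stretch the
    perclo-perchi percentile range onto [0, 1], clipping everything outside."""
    plo, phi = np.percentile(imagein, (perclo, perchi))
    return np.clip((imagein - plo) / (phi - plo), 0., 1.)
```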



* Here is M51 in 3 optical bands.  (Again, freely available from the Chandra site.) I think the standard RGB looks great here, which is not surprising as the optical bands are actually R, G, and B.  Here I subdued the colors a little bit, and added in a fourth image of X-ray emission.  I colored the X-rays in lime –  the splash of color is a good way to draw the reader’s eyes and emphasize certain features.
M51 4-Color Image

import pywcsgrid2 

### M51, red-yellow-blue-lime ###
### ==> For this one, the muted colors don't pop very well, and looks quite drab.  Maybe it's because
#       the different colors largely overlap (i.e., my method is best suited for roughly independent emission)
#   HOWEVER, this can be made up for by stretching the contrast at the end - 0-99.5 percentile works fine here.
#   Histogram (adaptive) equalization also works well, may provide some finer/subtler enhancement 


# (First load the four M51 images -- three optical bands plus X-ray -- from their FITS files)

#Scale image between -1 and 1 if data type is float...
m51_optR=img_scale.linear(m51_optR); m51_optG=img_scale.linear(m51_optG); 
m51_optB=img_scale.linear(m51_optB); m51_X=img_scale.linear(m51_X)*.8; 

#Convert to RGB grayscale
m51_optR_greyRGB=color.gray2rgb(m51_optR); m51_optG_greyRGB=color.gray2rgb(m51_optG); 
m51_optB_greyRGB=color.gray2rgb(m51_optB); m51_X_greyRGB=color.gray2rgb(m51_X); 

#Colorize the grayscale images (hue values here are illustrative: muted red/yellow/blue + lime)
m51_R=colorize(m51_optR_greyRGB, hue=0., saturation=0.8, v=1.)
m51_Y=colorize(m51_optG_greyRGB, hue=60./360, saturation=0.8, v=1.)
m51_B=colorize(m51_optB_greyRGB, hue=227./360, saturation=0.8, v=1.)
m51_L=colorize(m51_X_greyRGB, hue=110./360, saturation=0.8, v=1.)

#Combine the colorized frames and do final rescaling
m51_RYBL=np.nanmean([m51_R,m51_Y,m51_B,m51_L],axis=0)
RYBL_maxints=tuple(np.nanmax(m51_RYBL[:,:,i]) for i in [0,1,2])
for i in [0,1,2]: 
    m51_RYBL[:,:,i]=rescale_intensity(m51_RYBL[:,:,i], 
                       out_range=(0, m51_RYBL[:,:,i].max()/np.max(RYBL_maxints) ));
for i in [0,1,2]: 
    m51_RYBL[:,:,i]=img_scale.linear(m51_RYBL[:,:,i], scale_min=0., 
                       scale_max=np.percentile(np.unique(m51_RYBL[:,:,i]),95.) );

fig=plt.figure()
ax1=plt.subplot(121); ax1.imshow(np.transpose([m51_optR,m51_optG,m51_optB],(1,2,0)), 
                                 origin='lower', interpolation='nearest')
ax1.set_title('Simple RGB')
ax2=plt.subplot(122); ax2.imshow(m51_RYBL, origin='lower', interpolation='nearest')
ax2.set_title('Red-Yellow-Blue-Lime')
for ax in [ax1,ax2]:
    ax.set_xlabel(''); ax.set_ylabel(''); 
    ax.set_xticklabels(''); ax.set_yticklabels('')

plt.suptitle('M51 - Adding a fourth image to RGB',y=0.96)
plt.show()
plt.clf(); plt.close()



* Here is an example of a standard RGB looking like it needs a little bit of a makeover.  This is M101 in the infrared, optical, and UV (and X-rays for a fourth image in right panel).  The different bands don’t correspond to true red, green, and blue light in reality, so it’s not surprising that they don’t blend together particularly well.  Here I’ve just made a simple change to more reserved red, yellow, and blue hues, with the additional X-ray image in pink – more for blending in rather than standing out like the lime above.   This isn’t perfect, but it’s a good improvement.

M101 Red-Yellow-Blue-Pink

### M101, red-yellow-blue-pink ###
#   --> slightly better than M51, but still a bit drab.  A teal-orange-purple-yellow 
#       scheme actually looks good though
#   gamma adjustment with gamma<1.0 (say, 0.5) really brings up the low-vals nicely
#   log runs the risk of vals going over 1.0 ...
#   adjust_sigmoid draws out faint features really well, but turns blacks to grey... 
#   Fixed by percentile scaling...

from skimage.exposure import rescale_intensity, adjust_gamma, adjust_log, adjust_sigmoid

## Note that you may need to adjust the header - the original has CRVAL etc. as strings 
#  instead of floats, which doesn't play nicely with some plotting functions...


# (As before, first load the M101 IR, optical, UV, and X-ray images from their FITS files)

#Scale image between -1 and 1 if data type is float...
m101_ir=img_scale.linear(m101_ir); m101_opt=img_scale.linear(m101_opt); 
m101_uv=img_scale.linear(m101_uv); m101_X=img_scale.linear(m101_X); 

#Convert to RGB grayscale
m101_ir_greyRGB=color.gray2rgb(m101_ir); m101_opt_greyRGB=color.gray2rgb(m101_opt); 
m101_uv_greyRGB=color.gray2rgb(m101_uv); m101_X_greyRGB=color.gray2rgb(m101_X); 

#Colorize the frames (hue values here are illustrative: red, yellow, blue, pink)
m101_R=colorize(m101_ir_greyRGB, hue=0., saturation=0.8, v=1.)
m101_Y=colorize(m101_opt_greyRGB, hue=60./360, saturation=0.8, v=1.)
m101_B=colorize(m101_uv_greyRGB, hue=227./360, saturation=0.8, v=1.)
m101_P=colorize(m101_X_greyRGB, hue=320./360, saturation=0.5, v=1.)

#Combine and do final scaling
m101_RYBP=np.nanmean([m101_R,m101_Y,m101_B,m101_P],axis=0)
RYBP_maxints=tuple(np.nanmax(m101_RYBP[:,:,i]) for i in [0,1,2])
for i in [0,1,2]: m101_RYBP[:,:,i]=rescale_intensity(m101_RYBP[:,:,i], 
                                    out_range=(0, m101_RYBP[:,:,i].max()/np.max(RYBP_maxints) ));
for i in [0,1,2]: m101_RYBP[:,:,i] = adjust_gamma(m101_RYBP[:,:,i], gamma=.5,gain=1)
m101_RYBP_inverse=1.-np.copy(m101_RYBP)
for i in [0,1,2]: 
    m101_RYBP_inverse[:,:,i] = adjust_sigmoid(m101_RYBP_inverse[:,:,i], cutoff=0.001, gain=10, inv=True)

fig=plt.figure()
ax1=plt.subplot(121); ax1.imshow(np.transpose([m101_ir,m101_opt,m101_uv],(1,2,0)), 
                                 origin='lower', interpolation='nearest')
ax1.set_title('Simple RGB')
ax2=plt.subplot(122); ax2.imshow(m101_RYBP, origin='lower', interpolation='nearest')
ax2.set_title('Red-Yellow-Blue-Pink')
for ax in [ax1,ax2]:
    ax.set_xlabel(''); ax.set_ylabel(''); 
    ax.set_xticklabels(''); ax.set_yticklabels('')

plt.show()
plt.clf(); plt.close()



* This is an interesting one.  It is often more appropriate to plot ‘reverse’ color scales such that the background is light, and areas with higher data values are represented by darker points.  For example, this saves ink when printing physical copies, and it may be easier on the eyes in some digital formats.  I don’t think I’ve seen anyone explore this before, but you could also do a multi-reverse-color image.  Building on the code from the previous example, here it is plotted with a white background:
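The basic inversion itself is one line on the final [0, 1]-scaled RGB array; sketched here on a random stand-in array:

```python
import numpy as np

rng = np.random.default_rng(0)
rgb = rng.random((4, 4, 3))   # stand-in for a final RGB image scaled to [0, 1]

# Invert: light background, with bright features becoming dark
rgb_inverse = 1.0 - rgb
```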
M101 Inverse 4-Color Image

### --> Continuing from the previous M101 code...

m101_RYBP_inverse=1.-np.copy(m101_RYBP)
for i in [0,1,2]: 
    m101_RYBP_inverse[:,:,i] = adjust_sigmoid(m101_RYBP_inverse[:,:,i], cutoff=0.001, gain=10, inv=True)

ax1=pywcsgrid2.subplot(121, wcs=m101_hdr2)
ax1.imshow(m101_RYBP, origin='lower', interpolation='nearest')
ax1.set_title('Red-Yellow-Blue-Pink',size=10)
ax1.xaxis.set_tick_params(color='w'); ax1.yaxis.set_tick_params(color='w'); 
for spine in ax1.spines.values(): 
    spine.set_edgecolor('w'); spine.set_linewidth(0.4); spine.set_visible(True);

ax2=pywcsgrid2.subplot(122, wcs=m101_hdr2)
ax2.imshow(m101_RYBP_inverse, origin='lower', interpolation='nearest')
for ax in [ax1,ax2]:
    ax.set_xlabel(''); ax.set_ylabel(''); 
ax2.set_title('Inverse Red-Yellow-Blue-Pink',size=10)

plt.suptitle('M101 - Inverse RGB',y=0.84)
plt.clf(); plt.close()


Convolution and Regridding

Changing Image Resolution (Convolving) and Regridding


First, a few resources.
* Here is the Kapteyn package documentation on reprojecting with the maputils module
* And cygrid’s jupyter notebook

* NASA SkyView query

Normally for example data sets I advertise the wonderful multiwavelength LITTLE THINGS images freely available on the NRAO website.  The LT maps are quite beautiful, but this time I will instead use a couple of random maps pulled from NED that are smaller in size – a couple hundred kB instead of a few MB.
A couple of images I think are nice for this purpose:
* M101, Herschel SPIRE 250um (KINGFISH) from NED query [1.4MB], in MJy/sr.  Also, the M101 500um is only 426kB
* M101, VLA 21cm [391kB]
* M100, Herschel PACS 100um from NED query, in Jy/pix [4MB…]
* M83 MIPS 160um in MJy/sr from NED query [169kB]




When calculating fluxes or luminosities from astronomical images – to create an SED, for example – you will almost certainly need to perform some formatting on your images before you extract fluxes, perform image arithmetic, etc.  Here I will cover two common operations: image convolution and regridding.  These are sometimes wrapped together in software packages, but they are actually separate concepts.  One very important consideration here requires a quick mention of terminology.  In normal life, the word “resolution” in regards to images might be described as “the fineness of detail” an image contains.  That is true, but it’s actually a slightly subtle point.  To understand resolution, we will need the concept of point spread functions/beams (see the post on observing), and we need to talk about what precisely we mean by the term ‘pixel’.



Fun fact: Wikipedia tells me that the first use of the word pixel, which simply comes from a shortening of ‘picture element’, was by JPL scientists in the 1960s.

One very important point to understand is that higher resolution does not necessarily equate to more pixels.  This is because ‘pixel’ can refer to two different concepts in astronomy.  One usage deals with image capture [hardware] – that is, the physical cells that capture light in a detector.  (They can also be lovingly referred to as photon buckets.)  Squeezing more (smaller) physical pixels into a given area on your detector will typically give you sharper resolution. Think megapixels in cameras.

The other meaning of pixel deals with image processing and image reconstruction [software].  Pixels in the sense of a digital image refer to the data grid that represents your image information.  You can take your favorite cat photo from the internet and try to scale it in Photoshop to twice the size – giving it more pixels – but you’re not adding any new information to the new pixels that wasn’t already there, so when you zoom in the overall picture will still look just as blurry as the original.  (You can of course apply some clever interpolation algorithms to give the pixels the appearance of being smoother, but that’s a different issue.)  You can combine separate images that are slightly offset to make a new image which has slightly finer resolution than any of the originals – the most common way to do this is called “drizzling“, and is used often in Hubble images, for example.  In radio astronomy (and sub-mm and some infrared instruments), the detector is not directly capturing photons as particles like a CCD camera does.  Usually it’s detecting light as a wave, and the image is reconstructed later from the raw voltages.  In practice, this means that a scientist could reconstruct an image with arbitrarily small digital pixels – just as you could scale your cat photo to an arbitrarily large size – but the extra pixels wouldn’t necessarily contain any new information.


To expound a little more, taking the example of consumer digital cameras, a sensor that boasts 20 Megapixels is regarded as higher resolution than one with, say, 15 Megapixels.  This is only part of the story though, as it depends on things like the physical size of the pixels – it’s conceivable to have these two camera sensors made from the same chip of pixels, just cut to separate 15 and 20 Mp areas.  Just considering the sensors themselves here and ignoring the other camera optics, that pair of chips would have the same inherent ability to resolve detail but different total areas.  But let’s assume now that they have the same overall area.  It makes sense that, say, a one inch square chip which contains 1000 pixels would produce a much lower resolution photo than a one inch square chip with 1 million pixels.
[Figure: Guess the image!]
However, you can’t continue the trend forever – engineering issues aside, at some point the pixels would become smaller than the wavelength of the light they are trying to capture.  Beyond that limit, squishing in more pixels won’t help you.

“A gajillion pixels is overkill.” – PJC

Besides, you could open up your photo in some image processing software and divide each pixel into, say, four (or 9, 25, etc…) equal parts if you wanted to.  That would make your image have MANY more pixels.  But that wouldn’t be increasing your resolution at all, because those new smaller pixels are not adding any information.

Let’s reverse that notion – if you are now faced with the task of observing light of longer wavelengths, a single photon can more easily bleed over multiple pixels if they remain small.  Now an infinitesimally small point of light in the real world will register in several pixels, according to some pattern which depends on the optics.  This is the PSF or beam pattern mentioned in other posts.  (This description is vastly simplifying the whole process, of course – for starters, the detector technology of ‘pixels’ is quite different between optical, FIR, and radio telescopes.  Please consult a proper optics or observational astronomical text for a more complete picture.)

So, let’s back up a step.  In a basic sense, the sensor in your typical camera is recording the incoming light of whatever it’s pointing at.  However, if the camera is out of focus or there are defects in the optics, no matter how many pixels you have, the image will appear blurry.  And if your pixels are larger than the innate scale for detail afforded by your optics, your image will be ‘blurrier’ than if you were to employ smaller pixels.  To summarize, the final resolution of your image is determined by a complex interplay between your *entire* optics system, including lenses (possibly interferometer spacing) and recording hardware.

Let’s look at an example of increasing the number of pixels in an image.  Below on the left is an imaginary faraway source (top panel) observed with some telescope.  The resolution (beam FWHM) is shown by the dotted circle – in this case the true source would be a point source with regards to the telescope’s resolution.  The final image is in the bottom panel.  Note that obviously no fine detail about the source can be determined from this image such as if there are 2 or 3 separate objects contained, etc.  Now look at the image on the right side – this one has been produced by processing the raw signal onto a finer pixel grid.  (You can often do this with radio data, for example.)
[Figures: Pixel Example 1, Pixel Example 2]

Do you gain anything by doing this?  Well, note that you’re still not able to determine any fine detail about the true source – on that count there is no improvement over the coarser image on the left.  If you are trying to get a precise sum of the flux in a circular aperture around the source, though, this may help you out a bit, as the smaller pixels would fill the round aperture more fully than larger pixels.  The downside of having smaller pixels in this case would be that processing time could increase dramatically.


Now let’s move on to the discussion of how to compare images from different instruments/settings.  The tools we will use are convolution and regridding.  These can sometimes be related (such as when, in radio interferometer data reduction, you choose the pixel size so that many pixels cover the beam area), but remember they are separate concepts.



For images that are taken with different resolutions, you will need to convolve or “smooth” the sharper images before comparing fluxes.  Comparative photometry requires this because the poorer-resolution image flux may be underestimated compared to the sharper image.  That is, when observing some object with a given inherent luminosity and angular size, if you observe with low spatial resolution (big beam/PSF), then some of the observed flux from the object will be scattered out to larger radii.  If you observe the same object with much higher resolution, this effect is much less pronounced.  So, for photometry, if you want an ‘apples-to-apples’ comparison between different spectral bands, you need to convolve/smooth your images down to the lowest resolution (largest beam size) before extracting your fluxes.

To visualize what changes in a PSF size mean for an image, let’s consider the following image, also mentioned in the post about observational data.  I have taken a sample source image and applied Gaussian blurs of increasing PSF sizes.

Convolution Example 1
The leftmost image is an imaginary ‘astrophysical’ source, as it would appear at infinite resolution. Now, note how the image gets blurred out as the beam size increases in the images to the right. The 2D Gaussian PSF FWHM is denoted by the dotted circles in the lower right corners.  Already in the second panel, in which the PSF is relatively small, the details are difficult to make out. Where is the edge of the bulge in the galaxy? Are the spiral arms narrow or fat?  It’s hard to tell in this panel.  But very notably, the thin comet shape has almost completely disappeared.  This demonstrates the importance of what is called the beam filling factor.  If a source populates the beam too sparsely (small filling factor), even if the overall structure is around the spatial scale of the beam, the instrument will be hard pressed to recover a detection because the flux gets spread over the whole beam (and some outside, too).  In the third panel, even though the beam FWHM is still smaller than the ‘alien’ face or galaxy, it’s already impossible to determine the detail of either blob. (The comet is nowhere to be seen now.)  The two larger features now closely resemble 2D Gaussians, though the left one is obviously not a point source because the emission goes notably beyond the beam size. The fourth panel shows the Gaussian convolution with a kernel roughly the size of the whole collection of sources.  Now the whole thing looks roughly like one single 2D Gaussian beam.

Now note the colorbars in each of those panels – in these plots I chose to let the min/max values be independent between panels in order to focus just on the blurring or flux spreading aspect of the convolution.  But if we look at the exact same figure made with all the plots scaled to the same absolute min/max, something else becomes apparent.  See the following figure:
Convolution Example 2
Here you can clearly see the emission in each panel appear to get dimmer – this is precisely because the larger beam size is spreading the original flux over a larger area (both within the beam, and also to the pixels without). Keep this in mind when you are proposing for observing time on a telescope!

Convolution isn’t only used to match two image resolutions, though – it can also be used to reduce the noise in a single image, if you don’t mind sacrificing some resolution – this is often done in 3D data cubes along the spectral axis.  (c.f. Hanning smoothing, for example.)  Since the ‘blurring out’ of your image by convolving to a larger PSF will ‘smooth out’ your signal, the smaller-scale fluctuations you might see in your original image will average out to a greater degree as the PSF increases.  So that disadvantage of small-scale features ‘disappearing’ from earlier can be repurposed here as a beneficial effect.

In practice, image convolution can be done with a variety of tools.  In popular image processing software packages, you may encounter it as a “Gaussian blur”.  In python, common implementations are scipy.ndimage.convolve or astropy.convolution.convolve.  I recommend the latter.  Below we’ll go through examples of how to do this in python.
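To see the flux-spreading behavior with no dependencies at all, here is a tiny direct 2D convolution in plain numpy (for real data you would use astropy.convolution, which handles NaNs and large kernels properly; the helper names here are mine):

```python
import numpy as np

def gaussian_kernel(size, fwhm):
    """Normalized 2D Gaussian kernel; size is the (odd) width in pixels."""
    sigma = fwhm / 2.355
    y, x = np.mgrid[-(size//2):size//2 + 1, -(size//2):size//2 + 1]
    k = np.exp(-(x**2 + y**2) / (2. * sigma**2))
    return k / k.sum()

def convolve2d(img, kernel):
    """Direct 'same'-size convolution with zero padding (slow but transparent)."""
    kh, kw = kernel.shape
    padded = np.pad(img, ((kh//2, kh//2), (kw//2, kw//2)), mode='constant')
    out = np.zeros_like(img, dtype=float)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            out[i, j] = np.sum(padded[i:i+kh, j:j+kw] * kernel[::-1, ::-1])
    return out

# A point source: all the flux in one pixel
img = np.zeros((21, 21)); img[10, 10] = 100.
blurred = convolve2d(img, gaussian_kernel(7, fwhm=3.))
```

With a normalized kernel the total flux is conserved, but the peak value drops: exactly the dimming-by-spreading effect shown in the figures above.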




Why would you want to reorganize an image to a different pixel grid?  There are a number of reasons.  For example, maybe you want to perform some photometry using the same regions with identical pixel grids, to ensure consistency in your measurements.  Or, maybe you want to divide one image by another to make a pixel-by-pixel plot (making a color-color plot, for fitting functions, etc.).  Or maybe one of your images is rotated and you just want to compare them side-by-side.  Whatever the reason, reprojection to a different coordinate grid is relatively easy with the available packages in python.  However, it’s critical to conserve flux by keeping in mind the flux or brightness units of your image.

Let’s take a simple theoretical example pixel with a flux density of 10 mJy.  And let’s assume this is a single pixel from a map with units of [Jy/pixel].  Assuming uniform brightness across the entire pixel as is typically done, if you divide that pixel into four equal sub-pixels, each new one will NOT have a flux density of 10 mJy/pix, but will instead be decreased by its area fraction to 10/4 = 2.5 mJy/pix.  If your map is in units of brightness, however – say, 10 MJy/sr – and you regrid to the smaller pixel sizes, each new pixel still remains 10 MJy/sr.  That’s because brightness is a unit of flux per area – even though the flux is decreasing by say 1/4, the area is also decreasing by 1/4.   See below for another example:
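That 10 mJy example can be written out in numpy, subdividing the pixel into a 2×2 grid with np.kron:

```python
import numpy as np

flux_map = np.array([[10.0]])                    # one pixel, 10 mJy/pix
# Flux units: each sub-pixel gets 1/4 of the flux, so the total is conserved
flux_regridded = np.kron(flux_map, np.ones((2, 2))) / 4.

brightness_map = np.array([[10.0]])              # one pixel, 10 MJy/sr
# Brightness (flux per area) units: sub-pixels keep the same value
brightness_regridded = np.kron(brightness_map, np.ones((2, 2)))
```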

[Figure: another worked example of flux (per-pixel) vs. brightness (per-area) units when regridding]


Another technical point to note is that you should try to retain proper Nyquist sampling of your beam/PSF.  That is, you want to have at least two pixels across each beam, otherwise you are losing spatial information. That’s really on the low side, though.  To be safe, three to five pixels per beamwidth is common.  The penalty for increasing the number of pixels in this manner is increased computation time in your subsequent processing.  That being said, you could regrid such that each pixel is effectively one beamwidth across, making each pixel *independent* for the purpose of fitting functions in a spatially resolved way, etc.
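A trivial helper for that sanity check (the function name and thresholds in the comment are mine):

```python
def pixels_per_beam(beam_fwhm_arcsec, pix_scale_arcsec):
    """How many pixels fit across the beam FWHM.
    Want >= 2 (Nyquist); 3-5 is a safer, common choice."""
    return beam_fwhm_arcsec / pix_scale_arcsec
```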





Mona Lisa (Original)

Regridded Only

Guess the image







Convolved Only

Mona Lisa (Convolved)

Convolved AND Regridded

Mona Lisa (Convolved and Regridded)

Mona Lisa image credit: Wikimedia Commons



Coming soon!



—– Work in progress —–

Let’s use the Kepler’s SNR images as an example – comparing the IR and radio.  The VLA image has larger pixels and is slightly rotated compared to the IR.  First, to work with any data in maputils, you need to load it using their tool called FITSimage(), so that it’s formatted correctly.  You can either give FITSimage() a path directly to the fits file, or, if you’ve already loaded the data & header elsewhere in your code, you can use the externaldata and externalheader parameters:

from kapteyn import maputils
import astropy.io.fits as pyfits

### Load the radio image (filename here is illustrative)
map_radio=maputils.FITSimage('kepler_radio.fits')
# OR, from data already loaded elsewhere:
radio_data=pyfits.getdata('kepler_radio.fits'); radio_hdr=pyfits.getheader('kepler_radio.fits')
map_radio=maputils.FITSimage(externaldata=radio_data, externalheader=radio_hdr)

Then of course you need to load the IR image:

map_IR=maputils.FITSimage('kepler_ir.fits')
# OR
ir_data=pyfits.getdata('kepler_ir.fits'); ir_hdr=pyfits.getheader('kepler_ir.fits')
map_IR=maputils.FITSimage(externaldata=ir_data, externalheader=ir_hdr)
Once the data are loaded, it’s really simple to reproject/regrid the IR image to the radio header – in kapteyn it’s just a one-line command called maputils.reproject_to():

map_IR_reprojected=map_IR.reproject_to(map_radio.hdr)
The data array is actually stored in map_IR_reprojected.dat, so if you don’t need the extra info that reproject_to() includes, you can just grab the array right away when you call the function (this is also what I do):

map_IR_reprojected=map_IR.reproject_to(map_radio.hdr).dat
Alternatively, use the reproject package like so:

from reproject import reproject_interp
IR_reprojected, footprint = reproject_interp(pyfits.open('kepler_ir.fits')[0], map_radio.hdr)
OR, using the exact reprojection method (not necessary here, but you will probably want this for precise flux conservation when doing comparative photometry):

from reproject import reproject_exact
IR_reprojected, footprint = reproject_exact(pyfits.open('kepler_ir.fits')[0], map_radio.hdr)
Now the IR map’s spatial data array follows exactly the radio map’s header (though of course the flux units haven’t been changed). It’s ready to plot or save as you normally do.

*** A couple things to note ***

* For kapteyn‘s reproject_to, the header ‘NAXIS’ card needs to be exactly 2 (that is, a 2D image, not a 3D or 4D data cube).  It’s best to convert it to 2D properly, but if your image is truly just 2D yet retains empty spectral and Stokes axes in the header, you can ‘cheat’ by just manually setting NAXIS to 2:

if map_radio.hdr['NAXIS']>2: map_radio.hdr['NAXIS']=2
if map_IR.hdr['NAXIS']>2: map_IR.hdr['NAXIS']=2

* Also, if you are planning to do any flux extraction on your reprojected/regridded image, you will need to be very careful of the units.  If the units of the original data are flux [W/m^2, Jy, or similar], then you will need to scale by a factor of (the new pixel area)/(the old pixel area).  You don’t do this for images with units of brightness [W/m^2/sr, MJy/sr, or similar].

# --> Jy_2 = Jy_1_reproj*([sr/pix_2]/[sr/pix_1])
#FIRST, find the degrees per pixel from the headers (assuming square pixels given by CDELT)
degperpix_old=abs(oldheader['CDELT1']); degperpix_new=abs(newheader['CDELT1'])
map_original_flux = pyfits.getdata('....')                      #[Jy/pix]
map_original_brightness = map_original_flux / degperpix_old**2  #[Jy/deg^2]
map_original = maputils.FITSimage(externaldata=map_original_brightness, externalheader=oldheader)
map_reprojected_brightness = map_original.reproject_to(newheader).dat
map_reprojected_flux = map_reprojected_brightness * degperpix_new**2   #back to [Jy/pix]