How bad is oversampling?
Recently I took an image of the Crescent Nebula with the new setup at the remote observatory. In 2018, the same target was photographed under different conditions, and it was rewarding to see how much things had improved. Most conditions were better in 2024, but one aspect—the fit between the camera and telescope—was worse. It’s generally believed that the 2018 image was correctly sampled, while the 2024 image was over-sampled. I've often wondered about the effects of over-sampling. How bad is it to move all that data around? Can I bin my images without losing quality? Is there any advantage to having over-sampled data with modern processing tools? This image provided a good chance to explore these questions.
Pixel-scale and Seeing
Over- and under-sampling describe how finely a system samples the sky relative to the seeing conditions. Pixel-scale is the angular field of view of one pixel in a given telescope — in other words, how much sky a single pixel covers. It depends on the telescope's focal length and the pixel size of the camera sensor. You can calculate pixel-scale using a straightforward formula: pixel-scale (arcsec/px) = pixel size (µm) × 206.265 / focal length (mm).
Assume a telescope with a focal length of 1000mm and a camera with a pixel size of 3.8 micron; the pixel-scale is then 3.8 × 206.265 / 1000 ≈ 0.78 arcsec/px.
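The calculation above can be captured in a few lines of Python (the function name `pixel_scale` is just an illustrative choice):

```python
def pixel_scale(pixel_size_um: float, focal_length_mm: float) -> float:
    """Angular field of view of one pixel in arcsec/px.

    The constant 206.265 converts radians to arcseconds (206265),
    scaled for micron pixel sizes and millimetre focal lengths.
    """
    return pixel_size_um * 206.265 / focal_length_mm

# The worked example: 3.8 micron pixels on a 1000mm telescope
print(round(pixel_scale(3.8, 1000), 2))  # 0.78
```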
Seeing refers to how much the atmosphere blurs starlight before it reaches the telescope. It is measured in arc-seconds and indicates the smallest detail that can be resolved in the sky. Most places have a seeing of 2-4”. Good locations have a seeing under 2”, while professional observatories are at sites that may achieve a seeing of around 1”.
The appropriate pixel-scale for imaging, given the seeing, follows the Nyquist criterion, which states that the sampling rate should be at least twice the highest frequency of the signal. For astrophotography this would mean a pixel-scale of seeing/2. However, since we’re capturing round stars on a square pixel grid, a more accepted rule of thumb is seeing/3. A smaller pixel-scale than that is considered over-sampled; a larger one, under-sampled.
The dilemma
Most locations have a seeing of 2-4”, which translates to target pixel-scales of 0.67 to 1.3 arcsec/px. A 1000mm telescope with a 3.8-micron pixel camera (0.78 arcsec/px) is properly sampled. But for a 400mm telescope at 2" seeing you would need 1.3-micron pixels, while a 2500mm telescope would require 8.1-micron pixels. However, with regular CMOS cameras we don’t have the luxury of picking our pixel size. With few exceptions, most modern cooled deep-sky cameras have 3.8-micron pixels. So either the pixels are too big (for wide-field astrographs) or too small (for long focal length SCTs/CDKs). In other words, a short FL astrograph is often under-sampled and a long FL telescope is often over-sampled. What to do?
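Inverting the pixel-scale formula gives the pixel size a given telescope would need for a given seeing. A small sketch, assuming the seeing/3 rule of thumb from above (`required_pixel_size` is an illustrative name, not an established function):

```python
def required_pixel_size(seeing_arcsec: float, focal_length_mm: float,
                        divisor: float = 3.0) -> float:
    """Pixel size in microns that samples the seeing at seeing/divisor arcsec/px."""
    target_scale = seeing_arcsec / divisor      # desired arcsec per pixel
    return target_scale * focal_length_mm / 206.265

# The two examples from the text, both at 2" seeing:
print(round(required_pixel_size(2.0, 400), 1))   # 1.3 micron for a 400mm scope
print(round(required_pixel_size(2.0, 2500), 1))  # 8.1 micron for a 2500mm scope
```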
For under-sampled images, drizzling can help; it’s a method that is easily applied during post-processing. For over-sampled images, binning can be used: combining 2x2 pixels into one is known as Bin2 and doubles the pixel-scale. So the dilemma is, should one bin high-resolution files to get to the proper pixel-scale? This blog will explore that issue.
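Software binning itself is a simple averaging operation. A minimal NumPy sketch of Bin2 (averaging; actual tools may sum or average depending on settings):

```python
import numpy as np

def bin2(img: np.ndarray) -> np.ndarray:
    """2x2 software binning: average each 2x2 block into one pixel,
    halving the resolution and doubling the pixel-scale."""
    h, w = img.shape
    h, w = h - h % 2, w - w % 2                  # crop any odd edge row/column
    return img[:h, :w].reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

frame = np.arange(16, dtype=float).reshape(4, 4)
print(bin2(frame).shape)  # (2, 2)
```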
Remote Observatory setup
The remote observatory setup that I started using at the beginning of this year consists of a Planewave CDK14 telescope with a Moravian C3-61000 Pro camera. The focal length of the scope is 2563mm and the pixel size of the camera is 3.78 micron, so the pixel-scale of the system is 0.3 arcsec/px. The conditions in Spain are generally good but not amazing. On very good nights the seeing may come down to 1.8”, but in all fairness, on average the seeing is probably more in the range of 2-2.5”. Even on the best nights, Nyquist would say I need a 0.6 arcsec/px pixel-scale. So I am at least two times oversampled, and on average probably more like three times oversampled. How bad is that?
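The arithmetic behind those factors can be sketched as a ratio of the target pixel-scale (seeing/3, per the rule of thumb above) to the system's actual pixel-scale (`oversampling_factor` is a hypothetical helper name for illustration):

```python
def oversampling_factor(system_scale: float, seeing_arcsec: float,
                        divisor: float = 3.0) -> float:
    """How many times finer the system samples than the seeing requires."""
    target_scale = seeing_arcsec / divisor       # desired arcsec per pixel
    return target_scale / system_scale

# CDK14 + C3-61000 at 0.3 arcsec/px:
print(round(oversampling_factor(0.3, 1.8), 1))  # 2.0x on the best nights
print(round(oversampling_factor(0.3, 2.5), 1))  # 2.8x on a typical night
```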
Comparing 2018 with 2024
Let’s start by comparing the properly sampled image from 2018, captured with the Takahashi TOA-130 refractor and ASI1600MM camera, with the image from 2024, captured with the Planewave CDK14 reflector and C3-61000 camera. We look here at the Ha-channels of the Crescent Nebula. Interestingly, the unprocessed images look much more similar than expected. Sure, the CDK14 image has more depth and finesse to it, but the differences are subtle. The differences emerge far more during processing. A popular AI-based tool for deconvolution is BlurXTerminator (BXT), and in the below example this tool was applied to both. Now the two images look markedly different. While the TOA-130 has sharpened up nicely, it has done so in a contrasty, somewhat harsh way. The CDK14 image, on the other hand, sharpened up with much higher quality, showing a lot of depth, nuance and finesse. Apparently a tool like BXT benefits from the large amount of data available in a high-resolution image.
Binning
So the over-sampled image responds very well to BXT processing. If we correct the over-sampling by binning, would we lose some of that quality in the BXT results? Let’s try it and find out. The same data is now binned using bin2 (0.6 arcsec/px) and bin3 (0.9 arcsec/px). For a proper comparison, binning took place on the individual calibrated files, before registration and stacking. The bin3 image has a pixel-scale comparable to the TOA-130.
Let's first compare the stars. Honestly, there isn't much difference among the four images. They all show nice round points of light. While the images with larger pixel-scales may look a bit blocky if you look closely, the differences are minimal under normal viewing conditions.
Larger structures in the nebula show different results. The bin3 image is clearly better than the TOA-130 image, which is expected given other factors like telescope, camera quality and atmospheric conditions. The differences between the bin1, bin2 and bin3 images are smaller, with bin3 being the least impressive. Like the TOA-130 image, bin3 conveys detail more through contrast than through genuinely fine structure. Bin2 is clearly better than bin3. And the distinction between bin2 and bin1 is subtle but present. While a compressed image may not fully capture it, the bin1 image does offer finer details and smoother tonal transitions.
Oversampling stars vs. structures
In discussions of under- and oversampling, stars illustrate the theory well, as they are the smallest structures in astro-images. Stars align closely with the one-dimensional case for which the Nyquist criterion was developed, though the seeing must be divided by 3 rather than 2 to avoid a star landing on a single pixel. Notably, while star shapes hold up well under binning, larger structures appear to benefit from un-binned images. For complex nebulous forms, dividing by 4 seems to be even more effective than dividing by 3. Although the differences in unprocessed images are minor, modern AI-based processing tools can apparently leverage the extra detail in un-binned, oversampled images. So while the differences are slight, some oversampling appears beneficial.
Further processing
If one processing step, such as deconvolution, already shows better quality of the over-sampled image, how would that be if we continue to process the images? Let’s go a bit further and consider the below example. Here we only look at a comparison of the three binning-levels of the CDK14 data. But for processing we now have applied BXT, stretching (Histogram Transformation using standard ScreenTransfer parameters) and noise reduction using the AI-based tool NoiseXTerminator (NXT). And let’s zoom in a bit further to get the best impression of any differences.
The stars clearly get a bit more ‘blocky’ in the binned examples, but at normal viewing distance this is probably negligible. In the larger structural elements, however, there’s a lot more difference. The Bin1 result is the clear winner here, with a silky smooth texture, beautiful tonal transitions and subtle details throughout. Bear in mind that these are extreme close-ups of only 0.3% of the originally captured image. But it demonstrates that the over-sampled image produces better results than the ‘properly’ sampled images. It appears that over-sampled images give modern AI-based processing tools more information to work with. And this benefit carries through the whole pipeline, with a demonstrably better result for the over-sampled image versus the binned, properly sampled images.
Conclusions
Matching camera pixel size to telescope focal length may be ideal, but it is challenging in practice. Most modern cameras use 3.8-micron pixels, limiting the options. Depending on the focal length, you might be over- or under-sampled. With under-sampling, drizzling helps to achieve maximum resolution. With over-sampling, the main issue is increased storage and processing requirements, which modern computers can typically handle. But over-sampled, higher-resolution images do improve image quality — perhaps not so much in star quality, but particularly in nebula details. It looks like modern AI-based processing tools can do a better job utilising the extra data for improved results.
Even though my system is over-sampled at 0.3 arcsec/px, I won’t bin the data. The burden of dealing with larger data volumes is easy to accept when the image quality is better. The difference may not always be large, but it gives peace of mind that every last bit of resolution has been leveraged without compromise.