Removing Colour artefacts from OSC images
Astrophotography can be done with either monochrome or One-Shot-Colour (OSC) cameras. The general consensus is probably that, for maximum image quality, a monochrome camera is the way to go. But modern OSC cameras hold their own very well and offer advantages of their own. An OSC camera is lighter and more compact than a monochrome camera with a big filter wheel attached, which helps when trying to down-size a rig. And for ‘fast’-moving objects, such as comets, OSC images make the processing a whole lot easier. In late 2021, I added an OSC camera to my observatory: the QHY268C, an APS-C sized OSC camera. When processing the images of different targets, two colour artefacts showed up that I had not really seen in monochrome images: colour specks and colour aberrations. This blog describes what they are and how to get rid of them.
Colour Specks
When I processed my first OSC images, one thing I noticed was that coloured specks were often visible in the final stack. They are the sort of specks one would expect from a non-calibrated image, so my original assumption was that proper dark calibration would take care of them. Unfortunately, calibration did not solve the problem. Some specks definitely disappeared or got weaker, but in the final integrated image specks would still be visible. They were not abundant, and literally just a few pixels in size, but because of their intense red, green or blue colours they were a bit of an annoyance. Later I learned that others with a similar sensor had noticed the same. There are different ways to go about this, including PixelMath expressions, the CloneStamp tool, masked desaturation, etc. But most of these methods are quite labour-intensive and/or do not solve the problem completely.
The solution was given in one of Adam Block’s videos, where he experienced a similar problem. It is actually very simple and involves the CosmeticCorrection (CC) process. The option we need is called ‘Use Auto detect’: tick ‘Hot Sigma’ and choose a value; 3 usually works well. Applied to the calibrated (but not yet debayered) image, CC looks for individual pixels whose brightness is more than three sigma above that of the surrounding pixels and brings them back to the value of those surroundings.
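For those who like to see the idea in code, below is a minimal Python sketch of this kind of sigma-based hot-pixel rejection on an undebayered (CFA) frame, using numpy and scipy. It is only an illustration of the principle, not PixInsight’s actual implementation: the function name, the global MAD-based sigma estimate, and the assumption of a Bayer mosaic (same-colour neighbours two pixels apart) are all mine.

```python
# Illustrative sketch of a CosmeticCorrection-style hot-pixel fix on an
# undebayered (CFA) frame. Assumes a Bayer mosaic, so same-colour
# neighbours sit two pixels apart. Not PixInsight's actual algorithm.
import numpy as np
from scipy.ndimage import median_filter

def correct_hot_pixels(cfa: np.ndarray, hot_sigma: float = 3.0) -> np.ndarray:
    """Replace pixels more than hot_sigma above their same-colour
    neighbourhood median with that median (cf. CC's 'Hot Sigma')."""
    # Footprint of the eight same-colour neighbours in a Bayer mosaic:
    # offsets of +/-2 pixels in x and/or y.
    fp = np.zeros((5, 5), dtype=bool)
    fp[[0, 0, 0, 2, 2, 4, 4, 4], [0, 2, 4, 0, 4, 0, 2, 4]] = True
    local_med = median_filter(cfa, footprint=fp)
    residual = cfa - local_med
    # Robust global sigma estimate from the median absolute deviation
    # (a simplification of per-pixel noise estimation).
    sigma = 1.4826 * np.median(np.abs(residual))
    hot = residual > hot_sigma * sigma
    out = cfa.copy()
    out[hot] = local_med[hot]
    return out
```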
As the comparison images above show, the original OSC image contains some intense colour specks. The three specks in this close-up were only marginally reduced by regular calibration, but running CosmeticCorrection as described above dealt with the problem completely. The nice thing is that this correction can be fully automated in PixInsight’s WeightedBatchPreProcessing (WBPP) script. When you select one of the light frames in the Calibration tab, the right-hand panel shows a section called CosmeticCorrection. To enable it, first go back to the CosmeticCorrection dialog and set the Hot Sigma as described. Then drag the little icon in the bottom-left onto the desktop; this creates an instance of CC on the desktop, which you can rename if so desired. Back in WBPP’s CosmeticCorrection section, open the dropdown under ‘Template’: the CC icon(s) on your desktop show up. Select the one you just created and click ‘Apply to all light frames’. This runs CC after calibration but before debayering. Ever since I learned this technique, I have applied it to all my OSC images and have never seen the colour specks again.
Colour Aberrations
Colour aberration, or colour fringing, is the phenomenon where different wavelengths of light do not behave the same way in the optical path. The optical path for this purpose is not just the telescope, but also the column of atmosphere between telescope and target. Such aberrations become visible when otherwise white-ish objects show a colour pattern at or around their edges. A typical pattern is a white star with a red hue on one side and a blue hue on the other, but aberrations can also show up as halos around stars. These aberrations can have quite different origins, so it is important to understand the differences between them.
Lateral chromatic aberration is the phenomenon that different wavelengths of light each have a slightly different magnification factor in the optics used. Often the center of the image is not much affected, but the aberration gets worse towards the corners of the image.
Longitudinal chromatic aberration is caused by the fact that different wavelengths have different focus points. It shows, for example, as coloured halos around brighter stars, and is fairly uniform across the frame.
Atmospheric dispersion is the phenomenon that different wavelengths of light are refracted by the earth’s atmosphere at different rates. It shows as stars with one red-ish side and one blue-ish side. The effect is homogeneous across the frame and increases at lower altitudes of the object, when the light travels through more atmosphere.
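To get a feel for how quickly the effect grows as a target sinks lower, here is a tiny back-of-the-envelope Python sketch. To first order, differential atmospheric refraction scales with the tangent of the zenith distance; the numbers below are unitless and only show the relative trend, not actual arc-second values.

```python
# Rough illustration: atmospheric dispersion grows roughly with
# tan(zenith distance), i.e. it increases quickly at low altitude.
# Output values are relative, not physical units.
import math

for altitude_deg in (80, 60, 40, 20):
    zenith_distance = math.radians(90 - altitude_deg)
    print(f"altitude {altitude_deg:2d} deg -> "
          f"relative dispersion {math.tan(zenith_distance):.2f}")
```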
Lateral and longitudinal chromatic aberration (CA) are quality parameters of refractor telescopes. Good-quality optics, special types of glass and optical design choices can all help eliminate these aberrations. A telescope that has these aberrations particularly well controlled is usually referred to as an APO refractor. But keep in mind that there is no objective definition of how much (or how little) CA makes a scope ‘APO’; marketing definitely plays a role here as well.
Atmospheric dispersion can be optically corrected using a special type of corrector. These correctors are often used for planetary imaging, but not so much for deep-sky imaging. The good news is that atmospheric dispersion can be removed quite easily in post-processing.
Removing the aberration
The reason that atmospheric dispersion shows up in OSC images and not (so much) in monochrome images is that, in the image alignment step of a monochrome workflow, the three main colours (red, green and blue) are registered separately, so any mis-alignment of blue and red relative to green is corrected in that process. In an OSC image, all colours are embedded together in the same frames, so this correction does not happen automatically. The solution, therefore, is to separate the colour channels and align them individually.
The WeightedBatchPreProcessing (WBPP) script in PixInsight offers the option for OSC images to separate the colour channels at the Debayer step and put everything back together at the end. This gives a fully automated workflow and, since each frame is treated individually, a very thorough method. The downside, however, is that all subsequent steps now have three times as many frames to process, leading to much longer processing times and much larger amounts of data to store.
Very good results can often also be obtained by first creating the single calibrated and integrated RGB image, then splitting that fully integrated image into its individual colour channels (Image > Extract > Split RGB channels) and aligning them at that stage. At this point you also have an idea of how serious the effect is; if the target was high enough in the sky, the correction may not even be necessary.
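As a rough scripted stand-in for that Split RGB channels step, here is a minimal Python sketch using astropy. The file name and the assumed (3, height, width) layout of the FITS data are hypothetical.

```python
# Split an integrated RGB stack into separate channel images.
# Assumes the FITS data is laid out as (3, height, width).
from astropy.io import fits

rgb = fits.getdata("integration_rgb.fits")
for name, channel in zip("RGB", rgb):
    fits.writeto(f"integration_{name}.fits", channel, overwrite=True)
```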
The next step is to register all three channels with each other. It is best to use green as the reference and align both the blue and the red channel to green. Registration at this point can be done in two different ways. The quickest is StarAlignment: simply select the green file/view as reference and load the blue and red files/views into the ‘Target Images’ pane. Alternatively, you can use DynamicAlignment (DA). This is a bit more involved, but I have found that it can sometimes give slightly better results. One important difference is that StarAlignment has limited ways to correct distortion, whereas DA is much more powerful at detecting all kinds of distortion, provided enough points are used.

For anyone who has not used DA before: it is not difficult at all, but you have to be aware of how it works. When you open the tool, the cursor changes into a box with the number ‘1’ in it; click on your reference image. The cursor then changes into a box with the number ‘2’ in it; click on the image you want to align. Now click on a star in the reference image and check the ‘Selected Samples’ window. The left image represents the star you just selected; don’t worry about a slight mis-click, as the tool automatically centers on the centroid of the star. The right window shows what the software thinks is the equivalent star in the image to be registered. In our application this will always be correct, but if you are aligning two very different images, it may be necessary to point the tool to the correct star by shifting the little box in the image view. Repeat this procedure across the whole image, selecting a good number of stars. After all the alignment points have been selected, click the green check-mark and the image will be aligned.
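For readers who prefer a scripted route, the sketch below mimics the simplest case: a translation-only alignment of the red and blue channels to green, using phase cross-correlation from scikit-image. Atmospheric dispersion is close to a pure shift, so this captures the idea, but unlike StarAlignment or DynamicAlignment it does not handle rotation or field distortion. The file names are hypothetical.

```python
# Align the Red and Blue channels to Green with a sub-pixel,
# translation-only registration (a simplified stand-in for
# PixInsight's StarAlignment / DynamicAlignment).
import numpy as np
from astropy.io import fits
from scipy.ndimage import shift
from skimage.registration import phase_cross_correlation

green = fits.getdata("integration_G.fits").astype(np.float64)
for name in ("R", "B"):
    channel = fits.getdata(f"integration_{name}.fits").astype(np.float64)
    # Estimate the (dy, dx) offset of this channel relative to Green.
    offset, _, _ = phase_cross_correlation(green, channel, upsample_factor=50)
    aligned = shift(channel, offset, order=3, mode="nearest")
    fits.writeto(f"integration_{name}_aligned.fits", aligned, overwrite=True)
```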
Correcting star shapes
In principle, now that the blue and red channels have been aligned to the green channel, a combined RGB image should show little to no colour aberration anymore. Often, though, it still does, especially towards the edges. If there is any kind of star deformation, even the DA tool does not register the images exactly right. For example, a little elongation towards the edges due to some sensor tilt or a small back-spacing deviation can still lead to some aberration at this point. To correct for that, the BlurXTerminator (BXT) tool has a very valuable option called ‘Correct only’. In this mode, the tool does not do any deconvolution or sharpening; it only looks at star shapes and tries to correct them into perfectly round stars. The effectiveness of the tool at correcting star shapes in its current iteration is almost magical.
After aligning the individual channels and correcting any star-shape deviations, the channels can be put back together and will almost certainly show no colour aberration related to atmospheric dispersion anymore. In the current example, the images were first aligned and the star shapes corrected afterwards. I’m not sure it matters much in which order the tools are used: if DA is used for alignment, BXT could probably be applied first as well, while with regular StarAlignment it is perhaps better to run BXT at the end. Most likely the differences are negligible.
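Putting the channels back together can be done with PixInsight’s ChannelCombination process; as a scripted equivalent, here is a minimal sketch (again with hypothetical file names):

```python
# Recombine the aligned channels into a single RGB cube.
import numpy as np
from astropy.io import fits

r = fits.getdata("integration_R_aligned.fits")
g = fits.getdata("integration_G.fits")
b = fits.getdata("integration_B_aligned.fits")
fits.writeto("integration_RGB_corrected.fits",
             np.stack([r, g, b]), overwrite=True)
```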
The effect of atmospheric dispersion may not always be visible, or serious enough to handle separately in processing. But if it does show, a combination of aligning the red and blue channels to green, with either StarAlignment or DynamicAlignment, followed by BXT ‘Correct only’, can pretty much completely eliminate it.