Dark Frames - Who? What? Where?

Everyone aspiring to take astrophotographs will sooner or later run into preprocessing and dark frames. There is a lot of information available around the internet and in various books by noteworthy astrophotographers. After reading through many websites and books and trying to figure out what actually happens during dark frame subtraction of light frames, I decided to do some testing myself. After all, blindly following instructions given by someone else for different equipment does not necessarily yield the best results. Some references even state that dark frames are unnecessary, which can easily lead to confusion when planning your astrophotography session.

In this post I'll be using my shots of M33 (the Triangulum Galaxy, also known as the "Pinwheel Galaxy") as an example. Please note that I am a novice at astrophotography myself, and this post also serves as a log of my ideas and thoughts during processing. So do not take everything as fact; try things out yourself to get to know your equipment and how it behaves.

My equipment for this session was a modded AstroMaster 130 and a Canon EOS 1000D. All light frames and dark frames are 45 s exposures at ISO 1600 (which is why they are quite noisy).

Theory (the boring bit ...)

The main purpose of the dark frame is to negate the effects of dark current in your imager. Dark current can exhibit itself as fixed pattern noise or temporal noise in the images. Fixed pattern noise is always the same; only its intensity changes with respect to the exposure length. Temporal noise is basically random noise, which is different in every frame. Dark frames taken at the shortest possible exposure length are called bias frames; typically bias frames are only used when scaling dark frames of different exposure lengths, and I will not discuss them in this post. Dark frames are subtracted from the light frames, but more on that later in the post.
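
To make the distinction concrete, below is a minimal numpy sketch of the usual signal model; all the numbers are made up for illustration. Each dark frame is a constant bias offset, plus a fixed per-pixel dark current scaled by the exposure time, plus temporal noise that changes from frame to frame.

    import numpy as np

    rng = np.random.default_rng(0)
    H, W = 4, 6          # a tiny toy sensor
    exposure = 45.0      # seconds, matching the 45 s frames in this post

    # Fixed pattern: a per-pixel dark current (identical in every frame);
    # only its contribution scales with the exposure time.
    dark_current = rng.gamma(shape=2.0, scale=0.05, size=(H, W))

    def simulated_dark(t):
        fixed = dark_current * t                  # same pattern every frame
        temporal = rng.normal(0.0, 2.0, (H, W))   # different in every frame
        return 100.0 + fixed + temporal           # 100 = constant bias offset

    d1, d2 = simulated_dark(exposure), simulated_dark(exposure)
    print(np.abs(d1 - d2).mean())   # nonzero: the temporal part never repeats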

In order for dark frame subtraction to be effective, the dark frames need to be taken at the same imager settings and the same exposure time as the light frames. In a perfect world the dark frames would be black regardless of exposure time. Unfortunately this is not the case, at least with my imager. Perfectly black dark frames would mean that the light frames contain no electronics-induced signal and therefore only the signal captured from the target (wouldn't that be wonderful!). Even in a semi-perfect world the dark frames would all be identical except for the intensity of the fixed pattern noise. The main problem arises from temporal noise, which is always different from image to image. If we didn't have temporal noise, one dark frame would be enough to get rid of the unwanted signal from our precious light frames. Each pixel on the sensor receives photons, or, in the case of dark frames, believes it receives photons.
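
This is also why more dark frames help: averaging leaves the fixed pattern untouched but suppresses the temporal part, with the residual noise in an average of N frames dropping roughly as 1/sqrt(N). A quick toy demonstration in numpy:

    import numpy as np

    rng = np.random.default_rng(1)
    sigma = 2.0   # temporal noise per frame, in made-up units

    for n in (1, 4, 16, 64):
        darks = rng.normal(0.0, sigma, size=(n, 200, 200))
        master = darks.mean(axis=0)                  # average combine
        print(f"{n:2d} darks -> residual noise {master.std():.2f}")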

The photons are stored in the pixels (I will leave out the technical details). The stored amount of photons in each pixel is then read out of the sensor, and a value depending on that amount is assigned to each pixel. The value for each individual pixel depends on the available bit depth of the conversion, which gives us the maximum number of values each pixel can have: a 1-bit conversion would give each pixel only black or white, an 8-bit conversion would give 256 levels of grey, and so forth. Grey? What do you mean grey? I have a color DSLR.
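
As a quick worked example of how the bit depth translates into available pixel values (a 12- or 14-bit converter is typical for a DSLR raw file):

    for bits in (1, 8, 12, 14, 16):
        print(f"{bits:2d}-bit conversion -> {2 ** bits} levels per pixel")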

Since you presumably are working with a DSLR, each pixel in the finished image is a combination of 4 sensor pixels, which are all actually greyscale pixels covered by small colored filters (the Bayer pattern). So when working with dark frames, we want to handle each sensor pixel separately in order to correct the possible deviations in the actual pixels. During the conversion of the raw image to a color image, the value for each pixel is calculated from the values of the pixels surrounding it; there are various algorithms for doing this, from simple to really complex. In order for the dark frame subtraction to work later on, the dark frames should be kept in raw format until they are applied to the light frames.
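
As an illustration, here is what the sensor mosaic looks like for an assumed RGGB Bayer layout (the actual layout depends on the camera). Dark subtraction operates directly on this mosaic, photosite by photosite, which is exactly why the frames must stay undebayered until the subtraction is done:

    import numpy as np

    # Assumed RGGB layout; real cameras may start the pattern differently.
    bayer = np.empty((4, 4), dtype="U1")
    bayer[0::2, 0::2] = "R"
    bayer[0::2, 1::2] = "G"
    bayer[1::2, 0::2] = "G"
    bayer[1::2, 1::2] = "B"
    print(bayer)   # each cell is one greyscale photosite behind a color filter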

Basically the dark frames act as a noise map for your light frames, defining the location and strength of the non-signal data accumulated by each pixel on the CMOS/CCD array of the imager. Some image processing software allows you to generate a bad pixel map to get rid of the "hot" and "cold" pixels in the image, hot meaning that the pixel is saturated (or close to it) and cold meaning that it is completely dark. Whether to use bad pixel mapping or dark frame stacking is up to you (but don't do both).
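
For what it's worth, a bad pixel map can be built by thresholding a master dark. The sketch below uses a simulated master dark and an arbitrary 5-sigma cutoff, just to show the idea:

    import numpy as np

    rng = np.random.default_rng(2)
    master_dark = rng.normal(100.0, 2.0, (1000, 1000))   # stand-in for a real master dark
    master_dark[50, 60] = 4000.0                         # plant a fake hot pixel

    mu, sigma = master_dark.mean(), master_dark.std()
    hot = master_dark > mu + 5 * sigma    # saturated or nearly saturated
    cold = master_dark < mu - 5 * sigma   # completely dark
    bad_pixel_map = hot | cold
    print(bad_pixel_map.sum(), "pixels flagged")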

I suggest using an image with various colors to check your software's Bayer conversion algorithm. At least I noticed that, for some mysterious reason, my software mapped the raw pixels to the wrong colors.

Combining

The same is true for dark frames as for lights: combination rules the day. In order to get a smooth outcome, dark frames should be combined. There are a bunch of different combination methods available in dedicated software; typically dark frames are averaged. Since the temporal noise in each frame is different (dead and hot pixels aside), averaging gives a good result. Combination is used to avoid introducing additional noise into the light image, but even one dark frame is usually better than none at all. Combination should be performed without any alignment. I have also tried standard deviation stacking for dark frames, but the result has not been as good as with average combining.

The image below shows two separate dark frames and the result of average combining 30 dark frames. Even though dark frames are expected to be black (no light reaches the sensor), it is easy to see that the noise induces a fairly constant signal level throughout the image. This same level, as an average, is also injected into your light frames, which is why we need to get rid of that signal in order to increase contrast in the images.
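
In code, the average combine itself is essentially a one-liner over a stack of undebayered frames. A minimal sketch, using the rawpy library to read the Bayer data from the Canon raws; the filenames are hypothetical:

    import numpy as np
    import rawpy   # or any raw loader that exposes the undebayered sensor data

    paths = [f"dark_{i:02d}.cr2" for i in range(30)]   # hypothetical filenames

    frames = []
    for p in paths:
        with rawpy.imread(p) as raw:
            frames.append(raw.raw_image.astype(np.float32))   # raw Bayer mosaic

    master_dark = np.mean(frames, axis=0)   # no alignment: sensor defects never move
    np.save("master_dark.npy", master_dark)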

[Image: two individual dark frames and the result of average combining 30 dark frames]

Applying

The combined "Master Dark Frame" is subtracted from each light frame before the light frames are debayered. The image series below depicts the difference between the images after subtracting the dark frames from the light frames. The top row is an area from the top right of a single image (hence the extreme coma, which we won't worry about for now) without any stretching, i.e. as the image looks originally. The bottom row is strongly stretched to bring out the noise in the images. The stretching has been done to the same values for each image in order to bring out the difference. As you can see, subtracting even one dark frame already brings the background sky level closer to what we expect it to be. Adding more dark frames to the Master Dark Frame evens out the noise further and gives a more even background and better contrast.
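
A minimal sketch of this subtraction step, continuing from the combining sketch above (filenames again hypothetical). Note that the frames are still undebayered at this point; demosaicing happens only afterwards:

    import numpy as np
    import rawpy

    master_dark = np.load("master_dark.npy")   # from the combining step

    for p in [f"light_{i:02d}.cr2" for i in range(20)]:   # hypothetical filenames
        with rawpy.imread(p) as raw:
            light = raw.raw_image.astype(np.float32)
        calibrated = np.clip(light - master_dark, 0, None)   # clamp negative residuals
        np.save(p.replace(".cr2", "_calibrated.npy"), calibrated)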

Summary

  • Take the dark frames in conjunction with the light frames so that the sensor temperature is equal (or close to it)
  • Prevent light from reaching the sensor
  • Use the same ISO speed as for the lights
  • Use the same exposure time as for the lights
  • Preferably take more than one dark frame 
  • Do not debayer the frames (dark or light)
  • Average combine the dark frames without aligning --> Master Dark Frame
  • Subtract the combined Master Dark Frame from each light frame

