[size=175]Image Processing Software for Astrophotography[/size]
[list][*]Advanced astrophotographers will achieve markedly better results thanks to pervasive noise Tracking and data mining algorithms.[/*][*]Beginners will appreciate the ease of use and signal preservation safeguards.[/*][*]A novel engine does away with the signal path limitations and poor user experience of traditional astrophotography software.
[/*][/list][url=http://startools.org/introducing-startools][size=175]Introducing StarTools[/size][/url]

StarTools is an image post-processing application built from the ground up exclusively for modern hardware and modern astrophotographers.

By tracking signal and noise evolution during processing, it lets you effortlessly accomplish hitherto "impossible" feats, like deconvolution of a heavily processed image or autonomous denoising without local supports or masks.

Having per-pixel knowledge of signal and noise levels at all times, StarTools effortlessly produces results that have no equal in terms of fidelity, while being far easier to use than traditional astrophotography software.

We believe the onus should be on the software to understand the user's intentions, rather than putting the onus on the user to understand the software.

This belief has seen StarTools become the tool of choice for many thousands of enthusiasts, schools and institutions.
[url=http://startools.org/modules][size=175]Module features and documentation[/size][/url]

StarTools comprises several modules with deep, state-of-the-art functionality that rivals (and often improves on) that of other software packages.


Don't be fooled by StarTools' simple interface - you are forgiven if, at first glance, you get the impression StarTools offers only the basics. Nothing could be further from the truth!

StarTools goes deep. Very deep. It's just not 'in your face' about it and you can still get great results without delving into the depths of its capabilities. It's up to you.

If you're a seasoned photographer looking to get more out of your data, StarTools will allow you to visibly gain the edge with novel, brute-force techniques and data mining routines that have only just become viable on modern 64-bit multi-core CPUs with today's larger RAM and storage capacities.

If you're a beginner, StarTools will assist you by making it easy to achieve great results out-of-the box, while you get to know the exciting field of astrophotography better.

Whatever your situation, skills, equipment and prior experience, you'll find that working with StarTools is quite different from most software you've worked with. And in astrophotography, that tends to be a [i]good[/i] thing!


^ [i]Example of the main screen's interface.[/i]

Navigation within StarTools generally takes place between the main screen and the different modules. StarTools' navigation was written to provide a fast, predictable and consistent work flow.

There are no windows that overlap, obscure or clutter the screen. Where possible, feedback and responsiveness will be immediate. Many modules in StarTools offer on-the-spot background processing, yielding quick final results for evaluation and further tweaking.

In some modules a preview area can be specified in order to get a better idea of how settings would modify the image in a particular area, saving the user from waiting for the whole image to be re-calculated.

In both the main screen and the different modules, a toolbar is found at the very top, with buttons specific to the active module. In the case of the main screen, this toolbar contains buttons for opening an image, saving an image, undoing/redoing the last operation, invoking the mask editor, switching Tracking mode on/off, restoring the image to a particular state, and opening an 'about' dialog.

Exclusive to the main screen, the buttons that activate the different modules reside on the left hand side. Note that the modules will only activate once an image has been loaded, with the exception of the 'LRGB' module. Note also that some modules may remain unavailable, depending on whether Tracking mode is engaged.

Consistent throughout StarTools, a set of zoom control buttons is found in the top right corner, along with a zoom percentage indicator.

Panning controls ('scrollbar style') are found below and to the right of the image, as appropriate, depending on whether the image at its current zoom level fits in the application window.

Common to most modules is a 'Before/After' button, situated next to the zoom controls, which toggles between the original and processed version of an image for easy comparison.

All modules come with a 'Help' button in the toolbar, which explains, in brief, the purpose of the module. Furthermore, all settings and parameters come with their own individual 'Help' buttons, to the left of the parameter control. These help buttons explain, again in brief, the nature of the parameter or setting.
[url=http://startools.org/modules/introduction/interface/zooming--panning-and-scaling][size=125]Zooming, panning and scaling[/size][/url]

^ StarTools' astrophotography-optimised scaling algorithm can highlight latent pattern issues. It was also designed to show constant noise levels regardless of zoom level.

Even the way StarTools displays and scales images has been created specifically for astrophotography.

StarTools implements a custom scaling algorithm in its user interface, which makes sure that perceived noise levels stay constant, no matter the zoom level. This way, nasty noise surprises when viewing the image at 100% are avoided.
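StarTools' scaler itself is not public, but a small sketch (Python/NumPy, purely for illustration) shows why the ordinary averaging used by most image viewers misrepresents noise when zooming out:

```python
import numpy as np

# Illustration only (not StarTools' actual algorithm): naive 2x2 mean-binning,
# as used by typical viewers when zooming out, halves the noise standard
# deviation, so an image can look cleaner at 50% zoom than at 100%.
rng = np.random.default_rng(0)
noise = rng.normal(0.0, 1.0, size=(512, 512))

# Downscale by averaging each 2x2 block (what a typical viewer does).
binned = noise.reshape(256, 2, 256, 2).mean(axis=(1, 3))

print(round(noise.std(), 2))   # ~1.0: the noise as seen at 100% zoom
print(round(binned.std(), 2))  # ~0.5: at 50% zoom the noise looks halved
```

Averaging four pixels halves the noise standard deviation, which is exactly the "nasty surprise" a noise-preserving scaler is designed to avoid.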

^ At 100% zoom level a barely distinguishable horizontal pattern can indeed be seen.

Cleverer still, StarTools' scaling algorithm can highlight latent and faint patterns (often indicating stacking problems or acquisition errors) by intentionally causing an aliasing pattern at different zoom levels in the presence of such patterns.

[url=http://startools.org/modules/introduction/interface/changing-parameters][size=125]Changing parameters in StarTools[/size][/url]

^ An example of a levelsetter control in StarTools

The parameters in the different modules are typically controlled by one of two types of controls;
[list=1][*]A level setter, which allows the user to quickly set the value of a parameter within a certain range[/*][*]An item selector, which allows the user to switch between different modes.[/*][/list]

^ An example of a selector control in StarTools

Setting the value represented in a level setter control is accomplished by clicking the '+' and '-' buttons to increment or decrement the value respectively. Alternatively, you can click anywhere in the area between the '-' and '+' buttons to set a value quickly.

 Switching items in the item selector is accomplished by clicking the arrows at either end of the item description. Note that the arrows may disappear as the first or last item in a set of items is reached. Alternatively the user may click on the label area of the item selector to see the full range of items which may then be selected from a pop-over menu.

^ Tracking begins as soon as you load your data.

'Tracking' data mining plays a very important role in StarTools, and understanding it is key to achieving superior results.

As soon as you load any data, StarTools will start Tracking the evolution of every pixel in your image, constantly keeping track of things like noise estimates, the parameters you use and other statistics.

Tracking makes workflows much less linear and allows StarTools' engine to "time travel" between different versions of the data, inserting modifications or consulting the data at different points in time as needed ('change the past for a new present and future'). It's the primary reason why there is no difference between linear and non-linear data in StarTools, and the reason why you can do things in StarTools that would otherwise be nonsensical (like deconvolution after stretching your data). If you're not familiar with Tracking and what it means for your images, signal fidelity and simplification of the workflow & UI, please do read up on it!

Tracking how you process your data also allows the noise reduction routines in StarTools to achieve superior results. By the time you get to your end result, the Tracking feature will have data-mined and pinpointed exactly where (and how much) visible noise grain exists in your image. It therefore 'knows' exactly how much noise reduction to apply in each area of your image.

Noise reduction is applied at the very end, as you switch Tracking off, because doing it at the last possible moment gives StarTools the longest possible time to build and refine its knowledge of where the noise is in your image. This is different from other software, which allows you to reduce noise at any stage; such software does not track signal evolution and its noise component.

Tracking how you processed your data also allows the Color module to calculate and reverse how the stretching of the luminance information has distorted the color information (such as hue and saturation) in your image, without having to resort to 'hacks'. Thanks to this capability, colour calibration is also best done at the end, just before switching Tracking off. This too is different from other software, which requires you to do your colour calibration before any stretching, since it cannot deal with colour correction after the signal has been non-linearly transformed the way StarTools can.
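For illustration, here is a classic ratio-based sketch (Python/NumPy) of preserving colour through a luminance stretch. It is an explanatory assumption, not necessarily how the Color module works internally:

```python
import numpy as np

def stretch_preserving_color(rgb, stretch):
    # Stretch only the luminance, then rescale all three channels by the
    # same per-pixel factor, so channel ratios (and thus hue/saturation)
    # survive the non-linear stretch. Illustrative sketch only; StarTools'
    # Color module may work quite differently.
    lum = rgb.mean(axis=2)                        # simple luminance proxy
    ratio = stretch(lum) / np.maximum(lum, 1e-6)  # per-pixel scale factor
    return np.clip(rgb * ratio[..., None], 0.0, 1.0)

# Example: an arcsinh stretch applied to a single faint pixel.
img = np.array([[[0.02, 0.04, 0.06]]])
out = stretch_preserving_color(img, lambda l: np.arcsinh(100 * l) / np.arcsinh(100))
# The pixel ends up much brighter, but its red:green:blue ratio is still 1:2:3.
```

Because every channel is multiplied by the same factor, the stretched pixel keeps the channel ratios of the original, which is the essence of stretch-proof colour calibration.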

The knowledge that Tracking gathers is used in many other ways in StarTools. The nice thing about Tracking, however, is that it is very unobtrusive; it actually helps you get better results from your data in less time, by homing in on parameters in the various modules that it thinks are good defaults, given what it has learnt about your data.
[url=http://startools.org/modules/introduction/quick-start][size=125]Quick Start Tutorial: a quick generic work flow[/size][/url]

^ Giving StarTools virgin data is of the utmost importance. For example, if you are using DeepSkyStacker, make sure 'RGB Channels Background Calibration' and 'Per Channel Background Calibration' are set to 'No'.

Getting to grips with new software can be daunting, but StarTools was designed to make this as painless as possible. This quick, generic work flow will get you started.
[size=175]Step 1[/size]

Open an image. Processing in StarTools is easiest and will yield vastly better results if the data is as "virgin" as possible, meaning unstretched, not colour balanced, not noise reduced and not deconvolved. Best results are achieved with data that is as close to what the camera recorded (e.g. simple photon counts) as possible.

Do not use any software that may have meddled with the data, such as RAW converters or any software that came with your camera. Make sure that any stacking software you use is set up to perform as little processing on the data as possible. For example, if you use [url=http://deepskystacker.free.fr/]Deep Sky Stacker[/url], make sure that 'RGB Channels Background Calibration' and 'Per Channel Background Calibration' are set to 'No'.

Upon opening an image, the Tracking dialog will open, asking you about the characteristics of the data. Choose the option that best matches the data being imported.

^ In the presence of problems in your data that need fixing, AutoDev will show you exactly what they are. Here we can see stacking artefacts, some vignetting towards the corners and a 'dirty' yellow/brown bias caused by light pollution.[size=175]Step 2[/size]

Launch AutoDev to help inspect the data. Chances are that the image looks terrible, which is - believe it or not - the point. In the presence of problems in the data, AutoDev will show these problems until they are dealt with. Because StarTools constantly tries to make sense of your data, StarTools is very sensitive to artefacts, meaning anything that is not real celestial detail (such as stacking artefacts, dust donuts, gradients, terrestrial scenery, etc.). Just 'Keep' the result. StarTools, thanks to Tracking, will allow us to redo the stretch later on.

At this point, things to look out for are;
[list][*]Stacking artefacts close to the borders of the image. These are dealt with in the Crop or Lens modules.[/*][*]Bias or gradients (such as light pollution or skyglow). These are dealt with in the Wipe module.[/*][*]Oversampling (meaning the finest detail, such as small stars, being "smeared out" over multiple pixels). This is dealt with in the Bin module.[/*][*]Coma or elongated stars towards one or more corners of the image. These can be ameliorated using the Lens module.[/*][/list][size=175]Step 3[/size]

Fix the issues that AutoDev has brought to your attention;
[list=1][*]Ameliorate coma using the Lens module.[/*][*]Crop any remaining stacking artefacts.[/*][*]Bin the image until each pixel describes one unit of real detail.[/*][*]Wipe gradients and bias away. Be very mindful of any dark anomalies - bump up the Dark Anomaly filter if dealing with small ones (such as dark pixels) or mask big ones out using the Mask editor. Use the 'Temporary AutoDev' feature to get a better idea of how Wipe is doing.[/*][/list]

^ Using AutoDev ('redo') again after fixing the initial problems that AutoDev showed us before; stacking artifacts and light pollution were removed.[size=175]Step 4[/size]

Once all issues are fixed, launch AutoDev again and tell it to 'redo' the stretch. If all is well, AutoDev will now create a histogram stretch that is optimised for the "real" object(s) in your clean data. If your data is very noisy, it is possible AutoDev will optimise for the noise, mistaking it for real detail. In this case you can tell it to Ignore Fine detail.

If your object(s) reside on an otherwise uninteresting or "empty" background, you can tell AutoDev where the interesting bits of your image are by clicking & dragging a Region Of Interest.

Don't worry about the colouring just yet - focus on getting the detail out of your data first.

[size=175]Step 5[/size]

Season your image to taste. Apply some deconvolution with the Decon module, dig out detail with the Wavelet Sharpen ('Sharp') module, enhance contrast with the Contrast module and fix any dynamic range issues with the HDR module.

There are many ways to enhance detail to taste and much depends on what you feel is most important to bring out in your image.

^ The image after deconvolution (Decon), wavelet sharpening (Sharp), local dynamic range optimisation (HDR) and color calibration (Color).[size=175]Step 6[/size]

Launch the Color module.

 See if StarTools comes up with a good colour balance all by itself. A good colour balance shows a good range of all star temperatures, from red, orange and yellow through to white and blue. HII areas will tend to look purplish/pink, while galaxy cores tend to look yellow and their outer rims tend to look bluer.

Green is an uncommon colour in outer space (though there are notable exceptions, such as areas that are strong in OIII, like the core of M42). If you see green dominance, you may want to reduce the green bias. If you think you have a good colour balance, but still see some dominant green in your image, you can remove the last bit of green using the 'Cap Green' function.
[size=175]Step 7[/size]

Switch Tracking off and apply noise reduction. You will now see what all the fuss is about, as StarTools seems to know exactly where the noise exists in your image and snuffs it out. The main parameters to tweak are 'Smoothness', 'Brightness Detail Loss' and 'Color Detail Loss'.
[size=175]Step 8[/size]

Pour yourself your favourite beverage and pat yourself on the back for a job well done!

^ 200% zoom with the right part of the image denoised by Tracking-supported denoising, and no noise reduction applied to the left part of the image.[url=http://startools.org/modules/mask][size=150]Masks[/size][/url]

^ Masking is an integral part of working with StarTools.

 The Mask feature is an integral part of StarTools. Many modules use a mask to operate on specific pixels and parts of the image, leaving other parts intact. 

Importantly, besides restricting operations to certain parts of the image, masks allow the many modules in StarTools to perform much more sophisticated operations.

You may have noticed that when you launch a module that can apply a mask, the pixels that are set in the mask flash three times in green. This is to remind you which parts of the image will be affected by the module and which will not. If you have just loaded an image, all pixels are set in the mask, so every pixel will be processed by default; in that case, the whole image will flash green three times.

 Green coloured pixels in the mask are considered 'on'. That is to say, they will be altered/used by whatever processing is carried out by the module you chose. 'Off' pixels (shown in their original colour) will not be altered or used by the active module. Again, please note that, by default all pixels in the whole image are marked 'on' (they will all appear green).

For example, an 'on' pixel (green coloured) in the Sharp module will be sharpened, in the Wipe module it will be sampled for gradient modelling, in Synth it will be scanned for being part of a star, in Heal it will be removed and healed, in Layer it will be layered on top of the background image, etc.
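The on/off convention can be sketched in a few lines (Python/NumPy, for illustration only; the `module` callable is a hypothetical stand-in):

```python
import numpy as np

def apply_with_mask(image, mask, module):
    # 'module' stands in for any per-image operation (Sharp, Wipe, ...).
    processed = module(image)
    # 'On' pixels take the processed value; 'off' pixels pass through.
    return np.where(mask, processed, image)

img = np.array([[0.1, 0.5], [0.3, 0.9]])
mask = np.array([[True, False], [False, True]])
out = apply_with_mask(img, mask, lambda im: im * 2.0)  # toy 'module'
# Only the two 'on' pixels are doubled; the others keep their values.
```

Real modules do more than blend two images, but the principle is the same: 'off' pixels are neither altered nor considered.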

 To recap;
[list][*]If a pixel in the mask is 'on' (coloured green), it is fed to the module for processing.[/*][*]If a pixel in the mask is 'off' (shown in its original colour), the module leaves it untouched.[/*][/list][url=http://startools.org/modules/mask/usage][size=125]Usage[/size][/url]

 The Mask Editor is accessible from the main screen, as well as from the different modules that are able to apply a mask. The button to launch the Mask Editor is labelled 'Mask'. When launching the Mask Editor from a module, pressing the 'Keep' or 'Cancel' buttons will return StarTools to the module you pressed the 'Mask' button in.

As with the different modules in StarTools, the 'Keep' and 'Cancel' buttons work as expected; 'Keep' will keep the edited Mask and return, while 'Cancel' will revert to the Mask as it was before it was edited and return.

As indicated by the 'Click on the image to edit mask' message below the image, clicking on the image allows you to create or modify a mask. What actually happens when you click depends on the selected 'Brush mode'. While some of the 'Brush modes' seem complex in their workings, they are quite intuitive to use.

 Apart from different brush modes to set/unset pixels in the mask, various other functions exist to make editing and creating a Mask even easier;
[list][*] The 'Save' button allows you to save the current mask to a standard TIFF file that shows 'on' pixels in pure white and 'off' pixels in pure black.  [/*][*] The 'Open' button allows you to import a Mask that was previously saved by using the 'Save' button. Note that the image that is being opened to become the new Mask, needs to have the same dimensions as the image the Mask is intended for. Loading an image that has values between black and white will designate any shades of gray closest to white as 'on', and any shades of gray closest to black as 'off'.
[/*][*] The 'Auto' button is a very powerful feature that allows you to automatically isolate features.[/*][*] The 'Clear' button turns off all green pixels (i.e. it deselects all pixels in the image).  [/*][*] The 'Invert' button turns on all pixels that are off, and turns off all pixels that were on.[/*][*]The 'Shrink' button turns off all the green pixels that have a non-green neighbour, effectively 'shrinking' any selected regions. [/*][*]The 'Grow' button turns on any non-green pixel that has a green neighbour, effectively 'growing' any selected regions. [/*][*]The 'Undo' button allows you to undo the last operation that was performed. [/*][/list]

[b]NOTE: To quickly turn on all pixels, click the 'clear' button, then the 'invert' button.[/b]
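For the curious, 'Grow', 'Shrink' and 'Invert' behave like simple morphological operations on a boolean mask. A hedged sketch (Python/NumPy; the exact connectivity StarTools uses is an assumption):

```python
import numpy as np

def grow(mask):
    # Turn on any pixel with an 'on' neighbour (8-connected here), like the
    # 'Grow' button. StarTools' exact connectivity is an assumption.
    # (np.roll wraps at the edges, which is fine for this centred example.)
    out = mask.copy()
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            out |= np.roll(np.roll(mask, dy, axis=0), dx, axis=1)
    return out

def shrink(mask):
    # 'Shrink' is the dual operation: erode by growing the complement.
    return ~grow(~mask)

mask = np.zeros((5, 5), dtype=bool)
mask[2, 2] = True
grown = grow(mask)        # the single 'on' pixel becomes a 3x3 block
restored = shrink(grown)  # one shrink undoes one grow in this example
```

'Invert' is simply `~mask`, which is why 'Clear' followed by 'Invert' turns every pixel on.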
[url=http://startools.org/modules/mask/usage/brush-modes][size=125]Brush modes[/size][/url]

^ 10 different brush modes are at your disposal.

Different 'Brush modes' help in quickly selecting (and de-selecting) features in the image.

For example, while in 'Flood fill lighter pixels' mode, try clicking next to a bright star or feature to select it. Click anywhere on a clump of 'on' (green) pixels to toggle the whole clump off again.
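The flood-fill brushes amount to a standard flood fill with a brightness condition. A sketch (Python/NumPy; the 4-connectivity and exact comparison are assumptions for illustration):

```python
import numpy as np
from collections import deque

def flood_fill_lighter(image, mask, y, x):
    # Starting from the clicked pixel, select connected pixels that are at
    # least as bright as it. Illustrative only; StarTools' brush may differ.
    threshold = image[y, x]
    todo = deque([(y, x)])
    while todo:
        cy, cx = todo.popleft()
        if mask[cy, cx] or image[cy, cx] < threshold:
            continue  # already selected, or darker than the clicked pixel
        mask[cy, cx] = True
        for ny, nx in ((cy - 1, cx), (cy + 1, cx), (cy, cx - 1), (cy, cx + 1)):
            if 0 <= ny < image.shape[0] and 0 <= nx < image.shape[1]:
                todo.append((ny, nx))
    return mask

img = np.array([[0.1, 0.8, 0.1],
                [0.1, 0.9, 0.1],
                [0.1, 0.7, 0.1]])
# 'Click' the faintest pixel of the bright column: the whole column is selected.
sel = flood_fill_lighter(img, np.zeros(img.shape, dtype=bool), 2, 1)
```

'Flood fill darker pixels' is the mirror image: the fill stops when it reaches a pixel lighter than the one clicked.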

The mask editor has 10 'Brush modes';
[list][*][b]Flood fill lighter pixels[/b]; use it to quickly select an adjacent area that is lighter than the clicked pixel (for example a star or a galaxy). Specifically, clicking a non-green pixel will, starting from the clicked pixel, recursively fill the image with green pixels until either all neighbouring pixels of a particular pixel are already filled (on/green), or the pixel under evaluation is darker than the pixel originally clicked. Clicking on a green pixel will, starting from the clicked pixel, recursively turn off any green pixels until it can no longer find any green neighbouring pixels.[/*][*][b]Flood fill darker pixels[/b]; use it to quickly select an adjacent area that is darker than the clicked pixel (for example a dust lane). Specifically, clicking a non-green pixel will, starting from the clicked pixel, recursively fill the image with green pixels until either all neighbouring pixels of a particular pixel are already filled (on/green), or the pixel under evaluation is lighter than the pixel originally clicked. Clicking on a green pixel will, starting from the clicked pixel, recursively turn off any green pixels until it can no longer find any on/green neighbouring pixels.[/*][*][b]Single pixel toggle[/b]; clicking a non-green pixel will turn it green, and clicking a green pixel will turn it non-green. It is a simple toggle operation for single pixels.[/*][*][b]Single pixel off (freehand)[/b]; clicking, or dragging while holding the mouse button down, will turn off pixels. This mode acts like a single pixel "eraser".[/*][*][b]Similar color[/b]; use it to quickly select an adjacent area that is similar in color.[/*][*][b]Similar brightness[/b]; use it to quickly select an adjacent area that is similar in brightness.[/*][*][b]Line toggle (click & drag)[/b]; use it to draw a line from the start point (where the mouse button was first pressed) to the end point (where the mouse button was released). This mode is particularly useful for tracing and selecting satellite trails, for example for healing out using the Heal module.[/*][*][b]Lasso[/b]; toggles all the pixels confined by a convex shape that you can draw in this mode (click and drag). Use it to quickly select or deselect circular areas by drawing their outline.[/*][*][b]Grow blob[/b]; grows any contiguous area of adjacent pixels by expanding their borders into the nearest neighbouring pixel. Use it to quickly grow an area (for example a star core) without disturbing the rest of the mask.[/*][*][b]Shrink blob[/b]; shrinks any contiguous area of adjacent pixels by withdrawing their borders into the nearest neighbouring pixel that is not part of a border. Use it to quickly shrink an area without disturbing the rest of the mask.[/*][/list][url=http://startools.org/modules/mask/usage/auto][size=125]The Auto Feature[/size][/url]

^ The Auto Mask Generator is indispensable when, for example, dealing with star masks, as required by many of the modules in StarTools.

 The powerful 'Auto' function quickly and autonomously isolates features of interest such as stars, noise, hot or dead pixels, etc.

For example, isolating [i]just[/i] the stars in an image is a necessity for obtaining useful results from the 'Decon' and 'Magic' modules.

The type of features to be isolated is controlled by the 'Selection Mode' parameter;
[list][*][b]Light features + highlight > threshold[/b]; a combination of two selection algorithms. One is the simpler 'Highlight > threshold' mode, which selects any pixel whose brightness is brighter than a certain percentage of the maximum value (see the 'Threshold' parameter below). The other selection algorithm is 'Light features', which selects high frequency components in an image (such as stars, gas knots and nebula edges), up to a certain size (see 'Max feature size' below) and depending on a certain sensitivity (see 'Filter sensitivity' below). This mode is particularly effective for selecting stars. Note that if the 'Threshold' parameter is kept at 100%, this mode produces results that are identical to the 'Light features' mode.[/*][*][b]Light features[/b]; selects high frequency components in an image (such as stars, gas knots and nebula edges), up to a certain size (see 'Max feature size') and depending on a certain sensitivity (see 'Filter sensitivity').[/*][*][b]Highlight > threshold[/b]; selects any pixel whose brightness is brighter than a certain percentage of the maximum (i.e. pure white) value. If you find this mode does not select bright stars with white cores that well, open the 'Levels' module and set the 'Normalization' a few pixels higher. This should make light features marginally brighter and dark features marginally darker.[/*][*][b]Dead pixels color/mono < threshold[/b]; selects dark high frequency components in an image (such as star edges, halos introduced by over-sharpening, nebula edges and dead pixels), up to a certain size (see 'Max feature size' below), depending on a certain sensitivity (see 'Filter sensitivity' below) and whose brightness is darker than a certain percentage of the maximum value (see the 'Threshold' parameter below). It then further narrows down the selection by looking at which pixels are likely the result of CCD defects (dead pixels). Two versions are available, one for color images, the other for mono images.[/*][*][b]Hot pixels color/mono > threshold[/b]; selects high frequency components in an image up to a certain size (see 'Max feature size' below) and depending on a certain sensitivity (see 'Filter sensitivity' below). It then further narrows down the selection by looking at which pixels are likely the result of CCD defects or cosmic rays (also known as 'hot' pixels). The 'Threshold' parameter controls how bright hot pixels need to be before they are potentially tagged as 'hot'. Note that a 'Threshold' of less than 100% needs to be specified for this mode to have any effect. Two versions are available, one for color images, the other for mono images.[/*][*][b]Noise Fine[/b]; selects all pixels that are likely affected by significant amounts of noise. Please note that other parameters such as 'Threshold', 'Max feature size', 'Filter sensitivity' and 'Exclude color' have no effect in this mode.[/*][*][b]Noise[/b]; selects all pixels that are likely affected by significant amounts of noise. This algorithm is more aggressive in its noise detection and tagging than 'Noise Fine'. Please note that other parameters such as 'Threshold', 'Max feature size', 'Filter sensitivity' and 'Exclude color' have no effect in this mode.[/*][*][b]Dust & scratches[/b]; selects small specks of dust and scratches as found on old photographs. Only the 'Threshold' parameter is used, and a very low value for it is needed.[/*][*][b]Edges > Threshold[/b]; selects all pixels that are likely to belong to the edge of a feature. Use the 'Threshold' parameter to set sensitivity, where lower values make the edge detector more sensitive.[/*][*][b]Horizontal artifacts[/b]; selects horizontal anomalies in the image. Use 'Max feature size' and 'Filter sensitivity' to throttle the aggressiveness with which the detector detects the anomalies.[/*][*][b]Vertical artifacts[/b]; selects vertical anomalies in the image. Use 'Max feature size' and 'Filter sensitivity' to throttle the aggressiveness with which the detector detects the anomalies.[/*][*][b]Radius[/b]; selects a circle, starting from the centre of the image going outwards. The 'Threshold' parameter defines the radius of the circle, where 100.00 covers the whole image.[/*][/list]
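Two of the simpler modes translate almost directly into code. The sketch below (Python/NumPy) is illustrative only; the real implementations are more sophisticated:

```python
import numpy as np

def highlight_above_threshold(image, threshold_pct):
    # 'Highlight > threshold': pixels brighter than a percentage of the
    # maximum (pure white) value. Illustration of the documented behaviour.
    return image > (threshold_pct / 100.0) * image.max()

def radius_mask(shape, threshold_pct):
    # 'Radius': a centred circle whose size is set by 'Threshold';
    # 100.00 covers the whole image, as described above.
    h, w = shape
    yy, xx = np.mgrid[0:h, 0:w]
    dist = np.hypot(yy - (h - 1) / 2.0, xx - (w - 1) / 2.0)
    return dist <= (threshold_pct / 100.0) * np.hypot(h / 2.0, w / 2.0)

img = np.array([[0.1, 0.9], [0.5, 1.0]])
m = highlight_above_threshold(img, 80)  # selects only the 0.9 and 1.0 pixels
```

The feature-based modes ('Light features', 'Noise', etc.) additionally analyse spatial frequency and local statistics, which is well beyond a one-line threshold.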

Some of the selection algorithms are controlled by additional parameters;
[list][*][b]Exclude color[/b]; tells the selection algorithms not to evaluate specific colour channels when looking for features. This is particularly useful if you have a predominantly red, purple and blue nebula with white stars in the foreground and, say, you want to select only the stars. By setting 'Exclude color' to 'Purple (red + blue)', you are able to tell the selection algorithms to leave features in the nebula alone (since these features are most prominent in the red and blue channels). This greatly reduces the number of false positives.[/*][*][b]Max feature size[/b]; specifies the largest size of any feature the algorithm should expect. If you find that stars are not correctly detected and only their outlines show up, you may want to increase this value. Conversely, if you find that large features are being inappropriately tagged and your stars are small (for example in wide field images), you may reduce this value to reduce false positives.[/*][*][b]Filter sensitivity[/b]; specifies how sensitive the selection algorithms should be to local brightness variations. A lower value signifies a more aggressive setting, leading to more features and pixels being tagged.[/*][*][b]Threshold[/b]; specifies a percentage of full brightness (i.e. pure white) below, or above, which a selection algorithm should detect features.[/*][/list]

Finally, the 'Source' parameter selects the source data the Auto mask generator should use. Thanks to StarTools' Tracking functionality which gives every module the capability to go "back in time", the Auto mask generator can use either the original 'Linear' data (perfect for getting at the brightest star cores) or the data as you see it right now.


^ [i]Top: traditional Digital Development curve; bottom: AutoDev. Notice more detail visible in the shadows, while not compromising on detail in the midtones or blowing out stars.[/i]

In StarTools, Histogram Transformation Curves are considered obsolete. AutoDev uses image analysis to achieve better results in a more intuitive way.

When data is acquired, it is recorded in a linear form, corresponding to raw photon counts. To make this data suitable for human consumption, stretching it non-linearly is required.
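As a concrete illustration, the widely used arcsinh stretch (not necessarily what StarTools uses) shows how a non-linear curve lifts faint signal while compressing highlights:

```python
import numpy as np

def asinh_stretch(linear, strength=1000.0):
    # Map linear values in [0, 1] through an arcsinh curve: faint signal is
    # lifted strongly while highlights are compressed gently. One common
    # non-linear stretch, shown for illustration only.
    return np.arcsinh(strength * linear) / np.arcsinh(strength)

vals = asinh_stretch(np.array([0.001, 0.5]))
# The faint pixel (0.001) is lifted to ~0.12, over a hundredfold boost,
# while the bright pixel (0.5) only moves to ~0.91.
```

Any such global curve is a compromise, which is precisely the problem AutoDev's image analysis is designed to solve.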

^ Not a bug, but a feature! Don't let a first result like this scare you. AutoDev is doing you a favor by showing you exactly what is wrong with your data. Here we can see heavy light pollution, gradients and stacking artifacts that need taking care of before we can go any further.

Historically, simple algorithms were used to emulate the non-linear response of photographic paper by modelling its non-linear transformation curve. Later, in the 1990s, because dynamic range in outer space varies greatly, "levels and curves" tools allowed imagers to create custom histogram transformation curves that better matched the object being imaged, so that the most detail became visible in the stretched image.

Creating these custom curves was a highly laborious and subjective process and, unfortunately, in many software packages it still is today. The result is almost always sub-optimal dynamic range allocation, losing detail in the shadows (leaving recoverable detail unstretched), shrouding interesting detail in the midtones (by not allocating it enough dynamic range) or blowing out stars (by failing to leave enough dynamic range for the stellar profiles).

StarTools' AutoDev module, however, uses image analysis to find the optimal custom curve for the characteristics of the data. By actively looking for detail in the image, AutoDev autonomously creates a custom histogram curve that best allocates the available dynamic range to the scene, taking into account all aspects and detail. As a consequence, the need for local HDR manipulation is minimised.
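AutoDev's analysis is proprietary, but plain histogram equalization (sketched below in Python/NumPy) illustrates the general idea of letting the image content, rather than a hand-drawn curve, dictate how dynamic range is allocated:

```python
import numpy as np

def equalize(image, bins=1024):
    # Content-driven global stretch: map each intensity to its cumulative
    # frequency, so densely populated intensity ranges receive more of the
    # output dynamic range. Illustration only; AutoDev's detail analysis
    # is more sophisticated than a plain CDF.
    hist, edges = np.histogram(image, bins=bins, range=(0.0, 1.0))
    cdf = np.cumsum(hist).astype(float)
    cdf /= cdf[-1]
    return np.interp(image, edges[1:], cdf)

rng = np.random.default_rng(2)
img = rng.beta(2.0, 50.0, size=(64, 64))  # mostly faint pixels, like a raw stack
eq = equalize(img)
# The crowded faint end of the histogram now occupies most of the output range.
```

The key property both approaches share: the stretch is derived from the pixel distribution itself, not from a curve drawn by hand.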

AutoDev is, in fact, so good at its job that it is also one of the most important tools in StarTools for initial data inspection; using AutoDev as one of the first modules on your data will see it bring out problems in the data, such as stacking artefacts, gradients, bias, dust donuts, etc. Upon removal and/or mitigation of these problems, AutoDev may then be used to stretch the cleaned up data.

^ Great allocation of dynamic range by AutoDev after taking care of the stacking artifacts, gradients and light pollution using the Wipe module.

AutoDev has a lot of smarts behind it. It analyses a Region of Interest ("RoI") - by default the whole image - so that it can find the optimum histogram transformation curve based on what it sees. The 'Develop' module, by comparison, is simpler: it mimics photographic film development, which doesn't actually take into account [i]what[/i] is in the image.

Understanding AutoDev is pretty simple really; its job is to look at what's in your image and to make sure as much of it as possible is visible. The problem with a histogram transformation curve (aka 'global stretch') is that it affects all pixels in the image. So what works in one area (bringing out detail in the background) may not necessarily work in another (for example, it may make a medium-brightness DSO core harder to see). Stretching the image is therefore always a compromise. AutoDev finds the best compromise global curve, given what detail is visible in your image and your preferences. Fortunately, we have other tools, like the Contrast and HDR modules, to 'rescue' [i]all[/i] detail by optimising local dynamic range on top of global dynamic range.

AutoDev's detail detection is [i]also[/i] very adept at finding artefacts - things in your image that are [b]not[/b] real detail but require attention. That's why AutoDev is also extremely useful to launch as the first thing after loading an image, to see what issues - if any - need addressing before proceeding. If there are any, AutoDev is guaranteed to show them to you.

After fixing such issues, we can start using AutoDev's skills for showing the remaining (this time [i]real celestial[/i]) detail in the image.

If most of the image consists of background with just a small object of interest, AutoDev will, by default, weight the background more heavily (since it covers a much larger part of the image than the object); given what it has to work with, that is the best compromise. If the background is noisy, AutoDev will start digging out the noise, mistaking it for fine detail. If this behaviour is undesirable, there are a couple of things you can do in AutoDev:
[list=1][*]Change the '[b]Ignore Detail <[/b]' parameter, so that AutoDev no longer detects fine detail (such as noise grain).[/*][*]Simply tell AutoDev what it should focus on instead by specifying a RoI, optionally still regarding the area outside the RoI just a little bit ('[b]Outside RoI Influence[/b]').[/*][/list]

You'll find that, as you include more background around the object, AutoDev, as expected, starts to optimise more and more for the background and less for the object; it's doing its job very well!

So, to use the RoI effectively, give it a 'sample' of the important bit of the image. This can be a whole object, or just a slice of the object that is a good representation of what's going on in it in terms of detail; for example, a slice of a galaxy from the core, through the dust lanes, to the faint outer arms.

There is no shame in trying a few different RoIs in order to find one you're happy with. Whatever the case, it certainly beats pulling histogram curves, both in results and objectivity (you've got a dedicated algorithm/assistant watching over your shoulder!).
[url=http://startools.org/modules/band][size=150]Band: Banding Reduction[/size][/url]

^ [i]Banding in a $7 web cam.[/i]

The Band module reduces horizontal and vertical banding/striping, often caused by read noise.

Using the Band module is quite straightforward; simply specify the orientation of the banding ("Horizontal" or "Vertical") and click 'Do'. An 'Algorithm' parameter switches between two subtly different algorithms that attempt to reduce banding. If the default algorithm ('Algorithm 1') does not produce satisfactory results, 'Algorithm 2' may yield better results.
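For intuition, a common way to attack horizontal banding is to remove each row's brightness offset relative to the image as a whole. The sketch below is purely illustrative; StarTools' two algorithms are not published and may work quite differently.

```python
import numpy as np

def reduce_horizontal_banding(img):
    """Subtract each row's median offset relative to the global median.
    A generic banding-reduction approach (illustrative only, not
    necessarily what the Band module's algorithms do)."""
    row_medians = np.median(img, axis=1, keepdims=True)
    return img - (row_medians - np.median(img))

# A flat field of 100 where every other row is offset by +5 (banding):
img = np.full((4, 6), 100.0)
img[1::2] += 5.0
clean = reduce_horizontal_banding(img)   # banding removed, field is flat again
```

For vertical banding, the same idea applies along the other axis (`axis=0`).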
[url=http://startools.org/modules/bin][size=150]Bin: Trade Resolution for Noise Reduction[/size][/url]

^ 400% zoomed crop of an image. Left: scaled down to 35% of its original size using nearest neighbor sampling (retaining noise). Right: binned to 2.83x2.83 (binned down to 35% of its original size). A significant amount of noise reduction has occurred. Further deconvolution is now an option. Notice real structural detail is not compromised, but any non-structural detail (noise) has been removed.

The Bin module puts you in control over the trade-off between resolution, resolved detail and noise.

With today's multi-megapixel imaging equipment and high-density CCDs, oversampling is a common occurrence; seeing conditions only allow so much detail to be resolved with a given setup, and beyond that limit it is impossible to pick up fine detail. Once detail no longer fits in a single pixel, but instead gets "smeared out" over multiple pixels due to atmospheric conditions (resulting in a blur), binning may turn this otherwise useless blur into noise reduction. Binning your data may make an otherwise noisy and unusable data set usable again, at the expense of 'useless' resolution.

The Bin module was created to provide a freely scalable alternative to the fixed 2×2 (4x reduction in resolution) or 4×4 (16x reduction in resolution) software binning modes commonly found in other software packages and modern consumer digital cameras and DSLRs (also known as 'Low Light Mode'). As opposed to these other binning solutions, StarTools' Bin module allows you to bin your data (and gain noise reduction) by exactly the amount you want; if your data is seeing-limited (blurred due to adverse seeing conditions), you are free to bin your data up to exactly that limit, and you are not forced by a fixed 2×2 or 4×4 mode to go beyond it.

Similarly, deconvolution (and subsequent recovery of detail that was lost due to atmospheric conditions) may not be a viable proposition due to the noisiness of an initial image. Binning may make deconvolution an option again. The StarTools Bin module allows you to determine the ratio with which you use your oversampled data for binning and deconvolution, achieving a result that is finely tuned to your data and the imaging circumstances of the night(s).

Core to StarTools' fractional binning algorithm is a custom built anti-aliasing filter that has been carefully designed to not introduce any ringing (overshoot) and, hence, to not introduce any artefacts when subsequent deconvolution is used on the binned data.

^ StarTools' Bin module has a simple interface with just one parameter.

The Bin module is operated with just a single parameter, which controls the amount of binning performed on the data. The new resolution is displayed ('New Image Size X x Y'), as well as the single-axis scale reduction, the signal-to-noise ratio improvement and the increased bit depth of the new image.
[size=175]When to bin?[/size]

[url=http://en.wikipedia.org/wiki/Data_binning]Data binning is a data pre-processing technique used to reduce the effects of minor observation errors.[/url] Many astrophotographers are familiar with the virtues of [b]hardware[/b] binning, which pools the value of 4 (or more) CCD pixels before the final value is read. Because reading introduces noise by itself, pooling the value of 4 or more pixels reduces this 'read noise' by a factor of 4 as well (one read is now sufficient, instead of having to do 4). Of course, by pooling 4 pixels, the final resolution is also reduced by a factor of 4. There are many, many factors that influence hardware binning, and [url=http://www.starrywonders.com/binning.html]Steve Cannistra has done a wonderful write-up on the subject on his starrywonders.com website[/url]. It also appears that the merits of hardware binning are heavily dependent on the instrument and the chip used.

Most OSCs (One-Shot-Color cameras) and DSLRs do not offer any sort of hardware binning in colour, due to the presence of a [url=http://en.wikipedia.org/wiki/Bayer_filter]Bayer matrix[/url]; binning adjacent pixels makes no sense, as they alternate in the colour that they pick up. The best we can do in that case is create a grayscale blend out of them. So hardware binning is out of the question for these instruments.

So why does StarTools offer software binning? Firstly, because it allows us to trade resolution for noise reduction. By grouping multiple pixels into one, a more accurate 'super pixel' is created that pools multiple measurements. Note that we are free to use any statistical reduction method we want. Take, for example, this 2 by 2 patch of pixels:

7 7
3 7

A 'super pixel' that uses simple averaging yields (7 + 7 + 3 + 7) / 4 = 6. If we suppose the '3' is an anomalous value due to noise and '7' is correct, then we can see how the other 3 readings 'pull up' the average value to 6; pretty darn close to 7.

We could use a different statistical reduction method (for example taking the median of the 4 values) which would yield 7, etc. The important thing is that grouping values like this tends to filter out outliers and make your super pixel value more precise.
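The worked example above, expressed in NumPy:

```python
import numpy as np

patch = np.array([[7, 7],
                  [3, 7]])   # the 2x2 patch from the text; '3' is a noise outlier

mean_pixel = patch.mean()        # (7 + 7 + 3 + 7) / 4 = 6.0, pulled toward 7
median_pixel = np.median(patch)  # 7.0 - the median discards the outlier entirely
```

Either reduction produces a 'super pixel' that is closer to the true value of 7 than the noisy reading of 3.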
[size=175]Binning and the loss of resolution[/size]

But what about the downside of losing resolution? That super high resolution [i]may[/i] have actually been going to waste! If, for example, your CCD can resolve detail at 0.5 arcsecs per pixel, but your seeing is at best 2.0 arcsecs, then real detail gets smeared over a 4×4 pixel area; you effectively have 16 times more pixels than you need to record one unit of real resolvable celestial detail. Your image will be "oversampled", meaning that you have allocated more resolution than the signal will ever require. When that happens, you can zoom into your data and you will notice that all fine detail looks blurry and smeared out over multiple pixels. And with the latest DSLRs having sensors that count 20 million pixels and up, you can bet that most of this resolution will be going to waste at even the most moderate magnification. Sensor resolution may be going up, but the atmosphere's resolution will forever remain the same - buying a higher resolution instrument will do nothing for the detail in your data in that case! This is also the reason why professional CCDs typically have much lower resolution; manufacturers would rather use the surface area of the chip for coarser but deeper, more precise CCD wells ('pixels') than squeeze in a lot of very imprecise (noisy) CCD wells (it has to be said the latter is a slight oversimplification of the various factors that determine photon collection, but it tends to hold).
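The oversampling arithmetic in that example, assuming seeing is the only limiting factor (a simplification):

```python
# Pixel scale vs seeing: how oversampled is this setup?
pixel_scale = 0.5   # arcsec per pixel the sensor samples at
seeing = 2.0        # arcsec of real resolvable detail under this sky

linear_oversampling = seeing / pixel_scale    # 4x per axis
pixels_per_detail = linear_oversampling ** 2  # 16 pixels per unit of real detail
```

In other words, one unit of real celestial detail is smeared over a 4×4 block, so binning up to that factor costs no real detail.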
[size=175]Binning to undo the effects of debayering interpolation[/size]

There is one other reason to bin OSC and DSLR data to at least 25% of its original resolution; the presence of a Bayer matrix means that (assuming an RGGB matrix), after applying a [url=http://en.wikipedia.org/wiki/Demosaicing]debayering (aka 'demosaicing') algorithm[/url], 75% of all red pixels, 50% of all green pixels, and 75% of all blue pixels are completely made up (interpolated)!

Granted, your 16MP camera may have a native resolution of 16 million pixels, but it has to divide these 16 million pixels between the red, green and blue channels! This is another very good reason why you might not want to keep your image at native resolution. Binning to 25% of native resolution ensures that each pixel corresponds to one real recorded pixel in the red channel, one real recorded pixel in the blue channel, and two real recorded pixels in the green channel (the latter yielding a 50% noise reduction in the green channel).
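A conceptual sketch of that idea in NumPy, binning a raw RGGB mosaic into 2×2 'super pixels' so each output pixel is built only from real samples (one R, one B, and the average of two G). This operates on the undebayered mosaic for clarity; it is an illustration of the principle, not how the Bin module itself is implemented.

```python
import numpy as np

def bin2x2_rggb(raw):
    """Bin an undebayered RGGB frame to 25% resolution: each output pixel
    takes the one real R sample, the one real B sample, and the average
    of the two real G samples from its 2x2 Bayer cell. Illustrative only."""
    r = raw[0::2, 0::2]                            # top-left of each cell
    g = (raw[0::2, 1::2] + raw[1::2, 0::2]) / 2.0  # the two green samples
    b = raw[1::2, 1::2]                            # bottom-right of each cell
    return np.dstack([r, g, b])

raw = np.arange(16, dtype=float).reshape(4, 4)  # toy 4x4 RGGB mosaic
rgb = bin2x2_rggb(raw)                          # -> 2x2x3 colour image
```

No pixel in the result is interpolated; every value traces back to photons that were actually recorded.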

There are, however, instances where the interpolation can be undone if enough frames are available (through sub-pixel dithering) to have exposed all sub-pixels of the Bayer matrix to real data in the scene ([url=https://en.wikipedia.org/wiki/Drizzle_%28image_processing%29]drizzling[/url]).

[size=175]Fractional binning[/size]

StarTools' binning algorithm is a bit special in that it allows you to apply 'fractional' binning; you are not stuck with pre-determined factors (e.g. 2×2, 3×3 or 4×4). You can bin by exactly the amount that achieves a single unit of celestial detail in a single pixel. To see where that limit is, simply keep reducing resolution until no blurriness can be detected when zooming into the image. Fine detail (not noise!) should look crisp. However, you may decide to leave a little bit of blurriness to see if you can bring out more detail using deconvolution.
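The noise arithmetic behind binning is easy to verify: averaging an n×n block improves the signal-to-noise ratio by roughly a factor of n for uncorrelated noise. The demo below uses a plain integer 2×2 bin for simplicity; StarTools' fractional binning and its custom anti-aliasing filter are not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(0)

# A pure-noise frame: binning n x n pixels should cut the noise level by
# about n (the standard error of the mean of n*n samples).
noise = rng.normal(0.0, 1.0, size=(400, 400))
binned = noise.reshape(200, 2, 200, 2).mean(axis=(1, 3))  # 2x2 software bin

gain = noise.std() / binned.std()   # ~2.0 for a 2x2 bin
```

This factor-of-n improvement is the SNR gain the Bin module reports as you increase the binning amount.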
[url=http://startools.org/modules/color][size=150]Color: Advanced Color Correction and Manipulation[/size][/url]

^ [i]Left: traditional processing, Right: StarTools color constancy showing star temperatures evenly until well into the core.[/i]

Thanks to StarTools' Tracking feature the Color module provides you with unparalleled flexibility when it comes to colour presentation in your image.

Whereas other software without Tracking data mining destroys colour and colour saturation in bright parts of the image as the data gets stretched, StarTools allows you to retain colour and saturation throughout the image with its 'Color Constancy' feature. This ability allows you to display all colours in the scene as if it were evenly illuminated, meaning that even very bright cores of galaxies and nebulae retain the same colour throughout, irrespective of their local brightness, or indeed acquisition methods and parameters.

This ability is important in scientific representation of your data, as it allows the viewer to compare similar objects or areas like-for-like, since colour in outer space very often correlates with chemical signatures or temperature.

^ Top: traditional processing, Bottom: StarTools color constancy showing true color of the core, regardless of brightness. (image acquisition by Jim Misti)

The same is true for star temperatures across the image, even in bright, dense star clusters. This mode allows the viewer of your image to objectively compare different parts and objects in the image without suffering from reduced saturation in bright areas. It allows the viewer to explore the universe that you present in full colour, adding another dimension of detail, irrespective of the exposure time and subsequent stretching of the data.

For example, StarTools enables you to keep M42's colour constant throughout, even in its bright core. No fiddling with different exposure times, masked stretching or saturation curves needed. You are able to show M31's true colours instead of a milky white, or resolve star temperatures to well within a globular cluster's bright core. All that said, if you're a fan of the traditional 'handicapped' way of colour processing in other software, then StarTools can emulate this type of processing as well.

The Color module's abilities don't stop there, however. It is also capable of emulating a range of complex LRGB color compositing methods that have been invented over the years. And it does it at the click of a button! Even if you acquired data with an OSC or DSLR, you will still be able to use these compositing methods; the Color module will generate synthetic luminance from your RGB on the fly and re-composite the image in your desired compositing style.

The Color module allows for various ways to calibrate the image, including star field sampling, G2V star sampling, galaxy sampling and - unique to StarTools - the MaxRGB calibration view. The latter allows for objective colour calibration, even on poorly calibrated screens.

Aside from Color calibration (thanks to Tracking data mining carried out on a linear version of your data, no matter whether you have stretched it or not), the Color module comes with a number of ways to control colour saturation in your image. A green removal algorithm rounds out the feature set.


The Color module is very powerful - offering capabilities surpassing most other software - yet it is simple to use.

The primary goal that the Color module was designed to accomplish, is achieving a good colour balance that accurately describes the colour ratios that were recorded. In accomplishing that goal, the Color module goes further than other software by offering a way to negate the adverse effects of non-linear dynamic range manipulations on the data (thanks to Tracking data mining). In simple terms, this means that colouring can be reproduced (and compared!) in a consistent manner regardless of how bright or dim a part of the scene is shown.
[url=http://startools.org/modules/color/usage/launching-the-color-module][size=125]Launching the Color module[/size][/url]

^ If a full mask is not set, the Color module allows you to set it now, as colour balancing is typically applied to the full image (requiring a full mask).

Upon launch, the Color module blinks the mask three times in the familiar way. If a full mask is not set, the Color module allows you to set it now, as colour balancing is typically applied to the full image (requiring a full mask).

^ StarTools tends to come up with a reasonable colour balance by default.

In addition to blinking the mask, the Color module also analyses the image and sets the Red Bias Reduce, Green Bias Reduce and Blue Bias Reduce factors to a value which it deems the most appropriate for your image. This behaviour is identical to manually clicking the 'Sample' button.
[url=http://startools.org/modules/color/usage/setting-a-colour-balance][size=125]Setting a colour balance[/size][/url]

^ The Red, Green and Blue Bias controls.

The Red Bias Reduce, Green Bias Reduce and Blue Bias Reduce factors are the most important settings in the Color module. They directly determine the colour balance in your image. Their operation is intuitive; too much red in your image? Pump up the 'Red Bias Reduce' value. Too little red in your image? Reduce the 'Red Bias Reduce' value.

If you'd rather operate on these values in terms of Bias [i]Increase[/i], then simply switch the 'Bias Slider Mode' setting to 'Sliders Increase Color Bias'.

Switching between these two modes, you can see that, for example, a Red Bias Reduce of 8.00 is the same as a Green and Blue Bias [i]Increase[/i] of 8.00. It makes intuitive sense when you think about it - a relative decrease in red makes blue and green more prevalent, and vice versa.
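Modelling the bias controls as simple per-channel ratios (an illustrative model, not StarTools' internal math) shows why the two modes are equivalent:

```python
# Channel weights for an image with a strong red cast:
r, g, b = 8.0, 1.0, 1.0

reduced = (r / 8.0, g, b)          # 'Red Bias Reduce' = 8.00
increased = (r, g * 8.0, b * 8.0)  # 'Green + Blue Bias Increase' = 8.00

# Both operations leave the same relative colour ratios r:g:b,
# which is all that matters for colour balance.
ratio_reduced = [c / max(reduced) for c in reduced]
ratio_increased = [c / max(increased) for c in increased]
```

Only the ratios between channels matter; the overall scale is absorbed by the luminance stretch.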
[url=http://startools.org/modules/color/usage/how-to-determine-a-good-color-balance][size=125]Color balancing techniques[/size][/url]

Now that we know how to change the colour balance, how do we know what to actually set it to?

There are a great number of tools and techniques in StarTools that let you home in on a good colour balance. Before delving into them, it is highly recommended to switch 'Style' to 'Scientific (Color Constancy)' during colour balancing, even if that is not your preferred style for rendering the end result. This is because the Color Constancy feature makes it much easier to colour balance by eye, due to its ability to show continuous, constant colour throughout the image. Once a satisfactory colour balance is achieved, feel free to switch to any alternative style of colour rendering.
[url=http://startools.org/modules/color/usage/how-to-determine-a-good-color-balance/white-reference-by-clicking-pixel][size=125]White reference by clicking a pixel[/size][/url]

If you know that a particular pixel or area in your image is supposed to be a shade of neutral white or gray, simply clicking on it is sufficient to let StarTools compute the right Red, Green and Blue bias settings to make that pixel appear neutral. This technique is particularly useful if you have a star of spectral type G2V (sun-like) in your image. The reasoning is that the sun is the perfect daylight white reference, and so any similar star elsewhere in the galaxy should be too.
[url=http://startools.org/modules/color/usage/how-to-determine-a-good-color-balance/white-reference-by-mask-sampling][size=125]White point reference by mask sampling[/size][/url]

^ [i]We can calibrate against a big enough population of non-associated foreground stars, by putting them in a mask, clicking 'Sample' in the Color module and applying the found bias values to the whole image again.[/i]

Upon launch, or upon clicking the Sample button, the Color module samples whatever mask is set (note that the mask also ensures the Color module only applies any changes to the masked-in pixels!) and sets the Red, Green and Blue bias settings accordingly.

We can use this same behaviour to sample larger parts of the image that we know should be white. This method mostly exploits the fact that stars come in all sorts of sizes and temperatures (and thus colours!) and that this distribution is completely random. Therefore if we sample a large enough population, we should find the average star to be somewhere in the middle. Our sun is a very average star and is the white balance that we're after. Therefore, if we sample a large enough number of pixels containing a large enough number of stars, we should find a good colour balance.
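A minimal model of this sampling step: average the masked-in pixels per channel and derive multipliers that make that average neutral. The `sample_bias` helper is hypothetical, an illustration of the principle rather than StarTools' exact computation.

```python
import numpy as np

def sample_bias(rgb, mask):
    """Given masked-in pixels whose average should be neutral white/grey,
    return per-channel multipliers that neutralise them.
    (Hypothetical helper - an illustrative model of 'Sample'.)"""
    means = np.array([rgb[..., c][mask].mean() for c in range(3)])
    return means.max() / means   # boost weaker channels up to the strongest

# Toy frame with a red cast; the masked region is our star-field sample.
rgb = np.full((8, 8, 3), [1.2, 1.0, 0.9])
mask = np.zeros((8, 8), dtype=bool)
mask[2:6, 2:6] = True

bias = sample_bias(rgb, mask)
balanced = rgb * bias   # bias values applied to the whole image
```

After applying the derived bias to the whole frame, the sampled population averages out to neutral, which is exactly the premise of the star-sampling technique.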

^ A reasonably good color balance achieved by putting all stars in a mask using the Auto feature and sampling them.

We can accomplish that in two ways; we either sample all stars (but only stars!) in a wide enough field, or we sample a whole galaxy that happens to be in the image (note that the galaxy must be of a certain type to be a good candidate and reasonably close - preferably a barred spiral galaxy much like our own Milky Way).

Whichever you choose, we need to create a mask, so we launch the Mask editor. Here we can use the Auto feature to select a suitable selection of stars, or we can use the Flood Fill Brighter or Lassoo tool to select a galaxy. Once selected, return to the Color module and click Sample. StarTools will now determine the correct Red, Green and Blue bias to match the white reference pixels in the mask so that they come out neutral.

To apply the new colour balance to the whole image, launch the Mask editor once more, click Clear, then click Invert to select the whole image. Upon returning to the Color module, the whole image will be balanced by the Red, Green and Blue bias values we determined earlier with just the white reference pixels selected.
[url=http://startools.org/modules/color/usage/how-to-determine-a-good-color-balance/maxrgb-mode][size=125]White balancing in MaxRGB mode[/size][/url]

^ Major green channel dominance in the core points to color imbalance in that area.

StarTools comes with a unique colour balancing aid called MaxRGB. This mode is exceptionally useful when trying to colour balance by eye, particularly if the user suffers from colour blindness or uses a screen that is not colour calibrated very well.

^ Reducing the green bias has removed green dominance in the core, leaving only spurious/random green dominance due to noise.

The MaxRGB aid allows you to view which channel is dominant per-pixel. If a pixel is mostly red, that pixel is shown red, if a pixel is mostly green, that pixel is shown green, and if a pixel is mostly blue, that pixel is shown blue.

^ Switching from MaxRGB mode to Normal mode confirms the image still looks good.

By cross referencing the normal image with the MaxRGB image, it is possible to find deficiencies in the colour balance. For example, the colour green is very rarely dominant in space (with the exception of highly dominant OIII emission areas in, for example the Trapezium in M42).

Therefore, if we see large areas of green, we know that we have too much green in our image and we should adjust the bias accordingly. Similarly, if we have too much red or blue in our image, the MaxRGB mode will show many more red than blue pixels (or vice versa) in areas that should show an even amount (for example, the background). Again, we then know we should adjust red or blue accordingly.
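The MaxRGB view itself is easy to model: colour each pixel purely by its dominant channel. This is a minimal sketch of the display aid, not StarTools' implementation.

```python
import numpy as np

def max_rgb_view(rgb):
    """Render each pixel as pure red, green or blue depending on which
    channel dominates - a minimal model of the MaxRGB display aid."""
    dominant = rgb.argmax(axis=-1)    # 0 = R, 1 = G, 2 = B, per pixel
    out = np.zeros_like(rgb)
    for c in range(3):
        out[..., c] = (dominant == c)
    return out

# One red-dominant and one green-dominant pixel:
rgb = np.array([[[0.9, 0.5, 0.4],
                 [0.3, 0.6, 0.5]]])
view = max_rgb_view(rgb)
```

Even subtle dominance becomes glaringly obvious in this view, which is what makes it usable on poorly calibrated screens or by colour-blind users.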
[url=http://startools.org/modules/color/usage/how-to-determine-a-good-color-balance/known-features-and-processes][size=125]White balancing by known features and processes[/size][/url]

^ M101 exhibiting a nice yellow core, bluer outer regions, red/brown dust lanes and purple HII knots, while the foreground stars show a good distribution of color temperatures from red to orange, yellow, white to blue.

StarTools' Color Constancy feature makes it much easier to see colours and spot processes, interactions, emissions and chemical composition in objects. In fact, the Color Constancy feature makes colouring comparable between different exposure lengths and different gear. This allows the user to start spotting colours repeating in different features of comparable objects. Such features are, for example, the yellow cores of galaxies (due to the relative over-representation of older stars as a result of gas depletion), the bluer outer rims of galaxies (due to the relative over-representation of bright blue young stars as a result of the abundance of gas) and the pink/purplish HII area 'blobs' in their discs. Red/brown (white light filtered by dust) dust lanes complement a typical galaxy's rendering.

Similarly, HII areas in our own galaxy (e.g. most nebulae), while in StarTools' Color Constancy Style mode, display the exact same colour signature found in the galaxies; a pink/purple as a result of predominantly deep red Hydrogen-alpha emissions, mixed with much weaker blue/green Hydrogen-beta and Oxygen-III emissions and (more dominantly) reflected blue light from the bright young blue giants that are often born in these areas and shape the gas around them.

Dusty areas where the bright blue giants have 'boiled away' the Hydrogen through radiation pressure (for example the Pleiades) reflect the blue star light of any surviving stars, becoming distinctly blue reflection nebulae. Sometimes gradients can be spotted where (gas-rich) purple gives way to (gas-poor) blue (for example the Rosette core) as this process is caught in the act.

Diffraction spikes, while artefacts, also can be of great help when calibrating colours; the "rainbow" patterns (though skewed by the dominant colour of the star whose light is being diffracted) should show a nice continuum of colouring.

Finally, star temperatures, in a wide enough field, should be evenly distributed; the numbers of red, orange, yellow, white and blue stars should be roughly equal. If any of these colours are missing or over-represented, we know the colour balance is off.
[url=http://startools.org/modules/color/usage/how-to-determine-a-good-color-balance/colour-balancing-light-pollution-filter][size=125]Colour balancing of data that was filtered by a light pollution filter[/size][/url]

Colour balancing of data that was filtered by a light pollution filter is fundamentally impossible; narrow (or wider) bands of the spectrum are missing and no amount of colour balancing is going to bring them back and achieve proper colouring. A typical filtered data set will show a distinct lack in yellow and some green when properly colour balanced. It's by no means the end of the world - it's just something to be mindful of.

Correct colouring may be achieved, however, by shooting deep luminance data with the light pollution filter in place, while shooting colour data without the filter, after which both are processed separately and finally combined. Colour data is much more forgiving in terms of quality of signal and noise; the human eye is much more sensitive to noise in the luminance data than it is in the colour data. By making clever use of that fact and performing some trivial light pollution removal in Wipe, the best of both worlds can be achieved.
[url=http://startools.org/modules/color/usage/tweaking-your-colours][size=125]Tweaking your colors[/size][/url]

^ Increasing saturation makes colours more vivid, while increasing the Dark Saturation response parameter introduces more colour in the shadows. 

Once you have achieved a color balance you are happy with, the StarTools Color module offers a great number of ways to change the presentation of your colours.

The parameter with the biggest impact is the 'Style' parameter. StarTools is renowned for its Color Constancy feature, rendering colours in objects regardless of how the luminance data was stretched, the reasoning being that colours in outer space don't magically change depending on how we stretch our image. Other software sadly lets the user stretch the colour information along with the luminance information, warping, distorting and destroying hue and saturation in the process. The 'Scientific (Color Constancy)' setting for Style undoes these distortions using Tracking information, arriving at the colours as recorded.

To emulate the way other software renders colours, two other settings are available for the Style parameter: 'Artistic, Detail Aware' and 'Artistic, Not Detail Aware'. The former still uses some Tracking information to better recover colours in areas whose dynamic range was optimised locally, while the latter does not compensate for any distortions whatsoever.
[size=175]LRGB Method Emulation[/size]

The LRGB Method Emulation allows you to emulate a number of colour compositing methods that have been invented over the years. Even if you acquired data with an OSC or DSLR, you will still be able to use these compositing methods; the Color module will generate synthetic luminance from your RGB on the fly and re-composite the image in your desired compositing style.

The difference in colouring can be subtle or more pronounced. Much depends on the data and the method chosen.

'Straight CIELab Luminance Retention' manipulates all colours in a psychovisually optimal way in CIELab space, introducing colour without affecting apparent brightness.

^ A more 'handicapped' way of showing colours is also available, emulating the way other software distorts and destroys hues and saturation along with stretching the luminance data.

'RGB Ratio, CIELab Luminance Retention' uses a [url=http://www.allthesky.com/articles/colorpreserve.html]method first proposed by Till Credner of the Max-Planck-Institut[/url] and subsequently [url=http://darkhorizons.emissionline.com/NewLRGB.htm]rediscovered by Paul Kanevsky[/url], using RGB ratios multiplied by luminance in order to better preserve star colour. Luminance retention in CIELab color space is applied afterwards.

'50/50 Layering, CIELab Luminance Retention' uses a [url=http://www.robgendlerastropics.com/LRGB.html]method proposed by Robert Gendler[/url], where luminance is layered on top of the colour information with a 50% opacity. Luminance retention in CIELab color space is applied afterwards. The inherent loss of 50% in saturation is compensated for, for your convenience, in order to allow for easier comparison with other methods.

'RGB Ratio' uses a [url=http://www.allthesky.com/articles/colorpreserve.html]method first proposed by Till Credner of the Max-Planck-Institut[/url] and subsequently [url=http://darkhorizons.emissionline.com/NewLRGB.htm]rediscovered by Paul Kanevsky[/url], using RGB ratios multiplied by luminance in order to better preserve star colour. No further luminance retention is attempted.

'50/50 Layering' uses a [url=http://www.robgendlerastropics.com/LRGB.html]method proposed by Robert Gendler[/url], where luminance is layered on top of the colour information with a 50% opacity. No further luminance retention is attempted. The inherent loss of 50% in saturation is compensated for, for your convenience, in order to allow for easier comparison with other methods.
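The core of the 'RGB Ratio' approach can be sketched as follows; this is the Credner/Kanevsky idea in miniature, with the optional CIELab luminance-retention step omitted.

```python
import numpy as np

def rgb_ratio_composite(lum, rgb):
    """'RGB Ratio'-style compositing: multiply luminance by per-pixel
    colour ratios so hue is preserved while luminance carries the detail.
    A sketch of the published idea, not StarTools' implementation."""
    maxc = rgb.max(axis=-1, keepdims=True)
    ratios = rgb / np.maximum(maxc, 1e-12)  # hue as ratios in [0, 1]
    return lum[..., None] * ratios

lum = np.array([[0.8]])               # bright luminance pixel
rgb = np.array([[[0.2, 0.1, 0.05]]])  # faint but distinctly red colour data
out = rgb_ratio_composite(lum, rgb)   # red hue preserved at full brightness
```

Notice that the output keeps the 4:2:1 red-to-blue ratio of the colour data while taking its brightness from the luminance channel; a straight multiplication of stretched channels would have washed the hue out instead.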

Note that the LRGB Method Emulation feature is only available when Tracking is engaged.


The 'Saturation' parameter allows colours to be rendered more or less vividly, whereby 'Bright Saturation' and 'Dark Saturation' control how much colour and saturation is introduced in the highlights and shadows respectively. It is important to note that introducing colour in the shadows may exacerbate colour noise, though Tracking will make sure any such noise exacerbation is recorded and dealt with during the final denoising stage.
[size=175]Cap Green[/size]

The 'Cap Green' parameter, finally, removes spurious green pixels if needed, reasoning that green-dominant colours are rare in outer space and must therefore be caused by noise. Use of this feature should be considered a last resort if colour balancing does not yield adequate results and the green noise is severe. Thanks to Tracking's data mining, the final denoising stage should already have pinpointed the green channel noise and be able to mitigate it adequately.
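For illustration, the green-capping idea can be sketched as follows. This follows the common 'SCNR'-style approach of capping green at the mean of red and blue; the function is hypothetical and not StarTools' exact algorithm:

```python
import numpy as np

def cap_green(rgb, amount=1.0):
    """Suppress green-dominant pixels, assuming green is rarely a
    genuine dominant colour in deep-sky images (a sketch of the
    common 'SCNR'-style approach)."""
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    # Cap green at the mean of red and blue; 'amount' blends the cap in.
    g_cap = np.minimum(g, 0.5 * (r + b))
    out = rgb.copy()
    out[..., 1] = (1.0 - amount) * g + amount * g_cap
    return out
```

Pixels that are not green-dominant are left untouched, so genuine OIII/teal hues (which carry substantial blue) survive.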

[url=http://startools.org/modules/contrast][size=150]Contrast: Local Contrast Optimization[/size][/url]

^ Top: globally stretched data without further local dynamic range optimisation. Bottom: Large to medium scale local dynamic range optimisation with the Contrast Module.

The Contrast module optimizes local dynamic range allocation, resulting in better contrast, reducing glare and bringing out faint detail.

It operates on medium to large areas and is especially effective for enhancing contrast in nebulae, globular clusters and galaxy cores.

^ We will use this Hydrogen-alpha dataset of Melotte 15, acquired by Jim Misti, to demonstrate the Contrast module.

The Contrast module has some parameters in common with the Wipe module. In some ways it is similar, though not the same.

Just like the Wipe module, the Contrast module is sensitive to "dark anomalies"; pixels not of celestial origin that are darker than the real celestial background.

^ A lower 'Aggressiveness' setting tends to yield less stark images, by being less aggressive with local dynamic range optimisation.

So, just like the Wipe module, if dark anomalies are present, we need to make sure that any such anomalies are mitigated before Contrast sees them, either by removing them (cropping them out) or instructing the Contrast module to ignore them (increasing the '[b]Dark anomaly filter[/b]' parameter).

Once any dark anomalies are taken care of, a suitable '[b]Aggressiveness[/b]' value needs to be chosen. The '[b]Aggressiveness[/b]' parameter controls how 'local' the dynamic range optimisation is allowed to be. You will find that, all else being equal, a higher '[b]Aggressiveness[/b]' value will yield an image with areas of starker contrast. More generally, changing the '[b]Aggressiveness[/b]' value will see the Contrast module make quite different decisions on what and where to optimise. The rule of thumb is that a higher '[b]Aggressiveness[/b]' value gives smaller, 'busier' areas priority over larger, more 'tranquil' areas.

^ A higher 'Aggressiveness' setting tends to yield more stark images, by being more aggressive with local dynamic range optimisation.

Similar to the Wipe module, the '[b]Precision[/b]' parameter can be used to increase the precision when dealing with highly detailed wide-fields with a lot of undulating detail, combined with high '[b]Aggressiveness[/b]' values.

The '[b]Dark anomaly headroom[/b]' parameter controls how heavily the Contrast module "squashes" the dynamic range of larger scale features it deems "unnecessary". De-allocating dynamic range used to describe larger features and re-allocating it to interesting local features necessarily reduces, hence "squashes", the larger features' dynamic range. Very low settings may appear to clip the image (though this is not the case). For those familiar with music production, the Contrast module is very much akin to a compressor, but for images instead of audio.

The '[b]Compensate gamma[/b]' feature attempts to apply a non-linear curve that makes the result just as bright as the source (input) image. This option may be desirable if the image has become too dark.
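The brightness-matching idea behind such an option can be sketched as a search for the gamma exponent that restores the source image's mean brightness. This is an assumed, simplified model (bisection on a plain power curve), not the actual StarTools curve:

```python
import numpy as np

def compensate_gamma(image, target_mean, iters=50):
    """Find a gamma exponent that restores the mean brightness of
    'image' (values in [0,1]) to 'target_mean'. Mean brightness is
    monotonic in the exponent, so bisection converges."""
    lo, hi = 0.1, 10.0                    # search range for the exponent
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if np.mean(np.clip(image, 0, 1) ** mid) > target_mean:
            lo = mid                      # still too bright: raise exponent
        else:
            hi = mid                      # too dark: lower exponent
    gamma = 0.5 * (lo + hi)
    return image ** gamma, gamma
```

For example, a uniformly 0.25-bright image is restored to a mean of 0.5 by a gamma of 0.5, since 0.25^0.5 = 0.5.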

Finally, the '[b]Expose dark areas[/b]' option can help expose detail in the shadows by normalizing the dynamic range locally; making sure that the full dynamic range is used at all times. This option may generate artefacts at high '[b]Aggressiveness[/b]' settings, which may be mitigated in some instances by increasing the '[b]Precision[/b]' parameter.

[url=http://startools.org/modules/decon][size=150]Deconvolution: Detail Recovery from Seeing-Limited Data[/size][/url]

^ Top: before deconvolution. Bottom: after deconvolution. (200% zoom)

StarTools' Deconvolution module allows for recovering detail in seeing-limited data sets that were affected by atmospheric turbulence.

The Deconvolution algorithm in StarTools is so fast that previewing and experimentation to find the right parameters can be done in near-real-time.

^ Combined with StarTools' unique Tracking feature, Decon is able to perform mathematically correct deconvolution, even after heavy stretching and processing. (300% zoom)

The Deconvolution module incorporates a new regularization algorithm that automatically finds the optimum balance between noise and detail and puts you in control of this trade-off in an intuitive way.

A novel de-ringing algorithm ensures stars are protected from the Gibbs phenomenon (also known as the 'panda eye' effect), while still being able to coalesce singularities, such as overexposed white star cores, into point lights. You have full control over your de-ringing mask during the operation (not just before it).

Creating a suitable de-ringing mask that works very well in most cases is done in a single click!
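For background, the classic algorithm this family of tools builds on can be sketched as plain Richardson-Lucy deconvolution. The sketch below (using SciPy for the convolutions) is a textbook version with none of the regularization or de-ringing described above:

```python
import numpy as np
from scipy.signal import fftconvolve

def richardson_lucy(observed, psf, iterations=30):
    """Textbook Richardson-Lucy deconvolution; a minimal sketch of the
    family of algorithms a deconvolution module builds on."""
    eps = 1e-12
    # Start from a flat estimate carrying the image's mean brightness.
    estimate = np.full_like(observed, observed.mean())
    psf_flip = psf[::-1, ::-1]   # mirrored PSF for the correction step
    for _ in range(iterations):
        blurred = fftconvolve(estimate, psf, mode='same')
        ratio = observed / np.maximum(blurred, eps)
        # Multiplicative update pushes flux back toward point sources.
        estimate = estimate * fftconvolve(ratio, psf_flip, mode='same')
    return estimate
```

Without regularization, each extra iteration amplifies noise along with detail, which is exactly the trade-off a regularization algorithm has to manage.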
[url=http://startools.org/modules/denoise][size=150]De-Noise: Detail Aware Wavelet-based Noise Reduction[/size][/url]


The De-Noise module offers detail-aware, astro-specific noise reduction, which, paired with StarTools' Tracking feature, yields results that have no equal.

Whereas generic noise reduction routines and plug-ins for terrestrial photography are often optimised to detect and enhance geometric patterns and structures in the face of random noise, the De-Noise module is optimised for the opposite: preserving patterns and structures that are non-geometric in nature in the face of random noise (as well as read noise).

When used in conjunction with StarTools' 'Tracking' feature, which data mines every decision and the per-pixel noise evolution during processing, the results De-Noise delivers autonomously are unparalleled. The extremely targeted noise reduction provided in this case can only be approximated in other software by spending many hours creating a noise mask by hand.

^ The final image before entering the denoising stage. Large local dynamic range variations were intentionally introduced using the Contrast module to decouple luminance and noise levels (e.g. one is no longer a predictor for the other), in order to demonstrate the noise evolution Tracking capabilities of StarTools.

Denoising starts when switching Tracking off. It is therefore generally the last step, and for good reason. Being the last step, Tracking has had the longest possible time to track and analyse noise propagation. It therefore has the best and most accurate statistics available and can therefore achieve the best results on your behalf.

The first stage of noise reduction involves selecting one of 3 subtly different noise reduction algorithms, and helping StarTools establish a visual baseline for the noise grain. To establish this baseline, increase the '[b]Grain size[/b]' parameter until no noise grain of any size can be seen any longer. StarTools will use this baseline to more intelligently redistribute the energy that is taken out of the various bands during the wavelet denoising in the second stage. Note that this parameter is still available for modification in the second stage, though there it lacks the visual aid presented here.

^ Noise reduction is performed at the end of your processing by switching Tracking off.

After clicking 'Next', the wavelet scale extraction starts, upon which, after a short while, the second interactive noise reduction stage interface is presented.

The base algorithm that performs noise removal is an enhanced wavelet denoiser, meaning that it is able to remove features (such as noise) based on their size. Noise grain caused by shot noise - the bulk of the noise astrophotographers deal with - exists on all size levels, becoming less noticeable as the size increases. Therefore, much like the Sharp module, a number of scale sizes are available to tweak, allowing the denoiser to be more or less aggressive when removing features deemed noise grain at different sizes.
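The size-based ('wavelet scale') decomposition that such a denoiser rests on can be sketched with the well-known 'à trous' scheme. This is a generic textbook sketch, not StarTools' enhanced denoiser; a denoiser would then attenuate the small-scale layers before summing everything back:

```python
import numpy as np

def atrous_scales(image, levels=4):
    """Split an image into wavelet scales with the 'a trous' scheme,
    the common basis for size-based feature removal. Returns a list of
    detail layers plus the large-scale residual, which sum back to the
    original image exactly."""
    scales = []
    current = image.astype(float)
    for level in range(levels):
        step = 2 ** level                 # smoothing widens each level
        smoothed = current.copy()
        for axis in (0, 1):
            # Simple [1, 2, 1]/4 smoothing with 'holes' of size 'step'.
            smoothed = (np.roll(smoothed, step, axis=axis)
                        + 2 * smoothed
                        + np.roll(smoothed, -step, axis=axis)) / 4.0
        scales.append(current - smoothed)  # detail at this feature size
        current = smoothed
    scales.append(current)                 # large-scale residual
    return scales
```

Because the layers sum back to the original, scaling a given layer down by some factor removes exactly that fraction of the features at that size.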

^ The first stage of the noise reduction procedure provides StarTools with a visual calibration with regards to the upper range of noise grain visibility, as well as a selection of 3 different noise reduction algorithms.

Some astrophotographers prefer to leave in a little noise at the lowest scale(s) to avoid an overly smooth image, though the algorithm in StarTools already tends to avoid oversmoothing due to its correlation feature.

The parameters that govern global noise reduction response (rather than per-feature-size) are '[b]Brightness/Color detail loss[/b]' and '[b]Smoothness[/b]'.

^ The second and final stage of the noise reduction process lets you fine-tune all aspects of the inherent trade-off between noise reduction and detail loss.

'[b]Brightness/Color detail loss[/b]' specifies a measure of acceptable detail loss in order to reduce noise. In color images, the '[b]Color detail loss[/b]' parameter works solely on any color noise, while the '[b]Brightness detail loss[/b]' parameter works on the detail itself, but not its colors.

The '[b]Smoothness[/b]' parameter determines how much (or little) the denoiser should take notice of any inter-scale detail correlation. Detail correlation is higher in areas that look 'busy' such as galaxy or nebula cores or shock waves, whereas detail correlation is low in areas that are 'tranquil' such as opaque homogenous gas clouds. Increasing '[b]Smoothness[/b]' progressively ignores such correlation, allowing for more aggressive noise reduction in areas of higher correlation.

^ A 200% enlarged crop of the image before (left) and after (right) the Tracking-driven denoising stage. No masks were used, while noise reduction has kept perfect lock-step with perceived grain despite large local dynamic range variations.

'[b]Scale correlation[/b]' specifies how deep the denoiser should look for detail that may be correlated across scales. Most data can withstand deep correlation, however some types of data may exhibit an artificially introduced correlation. This can be the case with data that;
[list][*]has been drizzled with insufficient frames[/*][*]originates from a sensor with a color filter array (for example an OSC or DSLR) where insufficient frames were stacked[/*][*]was not sufficiently dithered between sub-frame acquisitions[/*][*]has any other type of recurring embedded pattern, visible or latent[/*][/list]

Noise in such cases will not exhibit a Poisson distribution (i.e. it no longer resembles shot noise) and will exhibit correlation in the form of clumps or streaks. Such data may require a shallower '[b]Scale correlation[/b]' value. More generally, such types of noise/artefacts are beyond the scope of the De-Noise module's capabilities and should be corrected during acquisition and pre-processing, rather than at the post-processing stage.


[url=http://startools.org/modules/develop][size=150]Develop: Stretching with Photographic Film Emulation[/size][/url]

^ Top: linear image, Bottom: image developed by photographic film curve using Develop module by 'homing in'. Notice the lack of star bloat, courtesy of the automatic black and white point detection.

The Develop module was created from the ground up as an alternative to the classic Digital Development algorithm, which attempts to emulate classic film response when first developing a raw stacked image.

It effectively functions as a digital 'dark room' where your prized raw signal is developed and readied for further processing.

Automated black and white point detection ensures your signal never clips, while making histogram checking a thing of the past. A semi-automated 'homing in' feature attempts to find the optimal settings that bring out as much detail as possible, while still adhering to the Digital Development curve.
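The underlying Digital Development idea is a hyperbolic curve that lifts faint signal while compressing highlights. Below is a minimal sketch; the parameter name and normalisation are illustrative, and the Develop module's actual curve and auto black/white point logic are its own:

```python
import numpy as np

def ddp_stretch(linear, background):
    """Classic Digital Development-style curve: y = x / (x + b),
    normalised so that 1.0 still maps to 1.0. 'background' sets the
    knee; pixels near it receive the strongest boost."""
    x = np.clip(linear, 0.0, 1.0)
    y = x / (x + background)
    return y * (1.0 + background)   # normalize so white stays white
```

With a background of 0.05, a faint pixel at 0.05 is lifted to about 0.53, while the white point is untouched.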

The Develop module, along with the AutoDev, HDR and Contrast modules, is part of StarTools' automated stretching solution, doing away with endless curve tweaking and histogram checking; leaving the guesswork to the computer means attaining superior results.

[url=http://startools.org/modules/filter][size=150]Filter: Feature Manipulation by Colour[/size][/url]

^ Top Left: Source, Top Right: Hydrogen-alpha enhanced, Bottom Left: Core light emissions (yellow) reject, Bottom Right: Dust lane pass

The Filter module allows you to modify features in the image by their colour, simply by clicking on them. It's as close to a post-capture colour filter wheel as you can get.

^ Left: original image, Right: Fringe killer applied

Filter can be used to bring out detail of a specific colour (such as faint Ha, Hb, OIII or S2 details), remove artefacts (such as halos, chromatic aberration) or isolate specific features. It functions as an interactive colour filter.

The Filter module is the result of the observation that many interesting features and objects in outer space have distinct colors, owing to their chemical make-up and associated emission lines. Thanks to the Color Constancy feature in the Color module, colours still tend to correlate well with the original emission lines and features, despite any wideband RGB filtering and compositing. The Filter module was written to capitalise on this observation and allow for intuitive detail enhancement by simply clicking on different parts of the image with a specific colour.
[url=http://startools.org/modules/flux][size=150]Flux: Automated Astronomical Feature Recognition and Manipulation[/size][/url]

^ Flux sharpening by self similarity.

The Fractal Flux module allows for fully automated analysis and subsequent processing of astronomical images of DSOs.

The one-of-a-kind algorithm pin-points features in the image by looking for natural recurring fractal patterns that make up a DSO, such as gas flows and filaments. Once the algorithm has determined where these features are, it is then able to modify or augment them.

^ Flux Sharpening by self-similarity feature detection. Only areas that are deemed recurring detail are sharpened.

Knowing which features probably represent real DSO detail, the Fractal Flux module is an effective de-noiser, sharpener (even for noisy images) and detail augmenter.

Detail augmentation through flux prediction can plausibly predict missing detail in seeing-limited data, introducing detail into an image that was not actually recorded but whose presence in the DSO can be inferred from its surroundings and gas flow characteristics. The detail introduced can be regarded as an educated guess.

It doesn't stop there however – the Fractal Flux module can use any output from any other module as input for the flux to modulate. You can use, for example, the Fractal Flux module to automatically modulate between a non-deconvolved and deconvolved copy of your image – the Fractal Flux module will know where to apply the deconvolved data and where to refrain from using it.
[url=http://startools.org/modules/hdr][size=150]HDR: Automated Local Dynamic Range Optimization[/size][/url]

^ Top Left: Source. Top Right: 'Tame' algorithm taming the bright core. Bottom Left: 'Equalize' algorithm taming bright core and lifting detail in shadows for a uniformly lit image. Bottom Right: 'Reveal' algorithm recovering dark structures within low dynamic range areas.

The HDR module optimises local dynamic range, in order to bring out the maximum amount of detail hidden in your data.

An HDR optimisation tool is a virtual necessity in astrophotography, owing to the huge brightness differences of the objects that exist in space.

As opposed to other approaches (for example wavelet-based ones), StarTools' HDR module enhances dynamic range allocation locally (not just globally) and takes into account psycho-visual theory (i.e. the way human vision perceives and processes detail). The result is an artefact-free, natural looking image with real detail, free of the problems of other approaches, such as looking 'flat', looking too busy, or blowing out highlights such as stars.

3 subtly different algorithms are available to address common dynamic range challenges;
[list=1][*]The 'Equalize' algorithm lifts faint detail and tames distracting glare and bright cores.[/*][*]The 'Optimize' algorithm uses dynamic range manipulation to enhance psycho-visual acuity without modifying actual detail.[/*][*]The 'Reveal' algorithm digs deep into bright DSO cores, extracts any detail it can find and re-embeds the detail in a corrected (less bright) super structure.[/*][/list]

The results are absolutely impeccable and you'll wonder why you ever bothered with confusing, sub-optimal wavelet layers, or blunt tools like global shadow, midtone and highlight manipulation.
[url=http://startools.org/modules/heal][size=150]Heal: Unwanted Feature Removal[/size][/url]

^ Removal of stars is an effective way to draw attention to the underlying nebulosity, or to process nebulosity separately from the stars.

The Heal module was created to provide a means of substituting unwanted pixels in a neutral way.

^ The Heal module's algorithm is similar to that found in expensive photo editing packages.

Cases in which healing pixels may be desirable include the removal of stars, hot pixels, dead pixels, satellite trails and even dust donuts.

The Heal module incorporates a content-aware algorithm that is able to synthesise extremely plausible substitution pixels, even for large areas. The algorithm is very similar to that found in expensive photo editing packages, however it has been specifically optimised for astrophotography purposes.
[url=http://startools.org/modules/layer][size=150]Layer: Versatile Pixel Workbench[/size][/url]

^ The Layer module allows you to chain, mask, layer and apply countless operations and filters.

The Layer module is an extremely flexible pixel workbench for advanced image manipulation and pixel math, complementing StarTools' other modules.

It was created to provide you with a nearly unlimited arsenal of implicit functionality by combining, chaining and modulating different versions of the same image in new ways.

Features like selective layering, automated luminance masking, and a vast array of filters (including Gaussian, Median, Mean of Median, Offset, [url=http://staff.polito.it/amelia.sparavigna/Astronomical-astrofractool-web.htm]Fractional Differentiation[/url] and many, many more) allow you to emulate complex algorithms such as SMI (Screen Mask Invert), PIP (Power of Inverse Pixels), star rounding, halo reduction, chromatic aberration removal, HDR integration, local histogram optimization or equalization, many types of noise reduction and much, much more.
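The kinds of primitives such recipes are built from are simple per-pixel operations. Below is a generic sketch of two of them, a 'screen' blend and a masked composite (illustrative code, not StarTools internals):

```python
import numpy as np

def screen_blend(base, layer):
    """'Screen' blend mode: result = 1 - (1 - a) * (1 - b).
    Brightens without ever exceeding 1.0 for inputs in [0, 1]."""
    return 1.0 - (1.0 - base) * (1.0 - layer)

def luminance_mask(image):
    """Use the image's own brightness as a per-pixel opacity mask."""
    return np.clip(image, 0.0, 1.0)

def masked_blend(base, layer, mask):
    """Composite 'layer' over 'base' with per-pixel opacity 'mask';
    the kind of primitive that chains into larger recipes."""
    return base * (1.0 - mask) + layer * mask
```

Chaining such primitives (e.g. a blurred, inverted copy screened over the original, modulated by a luminance mask) is how compound techniques like SMI are assembled.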
[url=http://startools.org/modules/lens][size=150]Lens: Distortion Correction and Field Flattening[/size][/url]

^ Top: source image (courtesy of Marc Aragnou), notice star elongation towards corners. Bottom: Lens corrected image (without auto crop to show curvature).

The Lens module was created to digitally correct for lens distortions and some types of chromatic aberration in the more affordable lens systems, mirror systems and eyepieces.

One of the many uses of this module is to digitally emulate some aspects of a field flattener for those who are imaging without a physical field flattener.

While a hardware solution to this type of aberration is always preferable, the Lens module can achieve very good results in cases where the distortion can be well modeled.
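Distortion correction of this kind is typically built on a radial model. Below is a minimal one-term sketch; a real correction would fit more coefficients and resample with interpolation, and the function here is purely illustrative:

```python
import numpy as np

def radial_distort_coords(xy, center, k1):
    """Map pixel coordinates through a one-term radial distortion
    model r' = r * (1 + k1 * r^2). Sampling the source image at these
    coordinates (with interpolation) undoes barrel/pincushion
    distortion of the opposite sign."""
    d = xy - center                              # offset from optical axis
    r2 = np.sum(d * d, axis=-1, keepdims=True)   # squared radius
    return center + d * (1.0 + k1 * r2)
```

With k1 = 0 the mapping is the identity; positive k1 pushes points outward with the cube of their radius, which is why elongation grows toward the corners.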
[url=http://startools.org/modules/life][size=150]Life: Global Light Diffraction Remodeling of Large Scale Structures[/size][/url]

^ The Life module's Isolate preset at work, 'pushing back' a busy star field and refocusing attention on the nebulosity.

The Life module brings back 'life' into an image by remodelling uniform light diffraction, helping larger scale structures such as nebulae and galaxies stand out and (re)take center stage.

Throughout the various processing stages, light diffraction (a subtle 'glow' of very bright objects due to lens or mirror diffraction) may be distorted and suppressed through the various ways dynamic range is manipulated, sometimes leaving an image 'flat' and 'lifeless'. The Life module attempts to restore light diffraction uniformly throughout a processed image, imparting a natural sense of depth and ambiance to an image that was otherwise lost. In many ways it's the anti-HDR module.

The Life module may additionally be used locally by means of a mask. In this case the Life module can be used to isolate objects in an image and lift them from an otherwise noisy background. By having the Life module augment an object's super-structure, faint objects that were otherwise unsalvageable can be made to stand out from the background.
[url=http://startools.org/modules/magic][size=150]Magic: Star Appearance Manipulation[/size][/url]

^ Top: source. Middle: 'tighten' algorithm, note more defined stars. Bottom: 'shrink' algorithm, note smaller stars.

The Magic module allows you to modify the appearance of stars in your image; you can shrink stars, tighten them, or improve their colour.
[url=http://startools.org/modules/repair][size=150]Repair: Star Rounding and Repair[/size][/url]

^ The Repair module's "Warp" algorithm uses the original pixels from the image to reverse-warp stars back into shape.

The Repair module attempts to detect and automatically repair stars that have been affected by optical or guiding aberrations.

Repair is useful to correct the appearance of stars which have been adversely affected by guiding errors, incorrect polar alignment, coma, collimation issues or mirror defects such as astigmatism.

^ The Repair module's "Redistribute" algorithm uses the original pixels from the image and recalculates their appearance and position as if they originated from a point light source.

The Repair module allows for the correction of more complex aberrations than the much less sophisticated 'offset filter & darken layer' method, whilst retaining the star's exact appearance and color.

The Repair module comes with two different algorithms. The 'Warp' algorithm uses all pixels that make up a star and warps them into a circular shape. This algorithm is very effective on stars that are oval or otherwise have a convex shape. The 'Redistribution' algorithm uses all pixels that make up a star and redistributes them in such a way that the original star is reconstructed. This algorithm is very effective on stars that are concave and cannot be repaired using the 'Warp' algorithm.
[url=http://startools.org/modules/sharp][size=150]Sharp: Wavelet-based Detail Aware Structural Detail Sharpening[/size][/url]

^ Left: source. Middle: bias towards larger scale structures. Right: bias towards smaller scale structures.

StarTools' Detail-aware Wavelet Sharpening allows you to bring out faint structural detail in your images.

Other Wavelet Sharpening implementations can often drown out other fine detail because of different frequency ranges competing for the modification of the same pixel - in those implementations, the different scales (bands) interfere with each other and are not aware of the sort of detail you are trying to bring out.

Uniquely, StarTools' Wavelet Sharpening gives you control over how detail enhancements across different scales interact. Apart from traditional parameters like controlling the strength of the detail enhancement per band, StarTools allows you to be the arbiter when two scales (bands) are competing to enhance detail in their band for the same pixel.

As with all modules in StarTools, the Wavelet Sharpening module will never allow you to clip your data, always yielding useful results no matter how outrageous the values you choose, while availing of the Tracking feature's data mining. The latter makes sure that, contrary to other implementations, only detail that has sufficient signal is emphasised, while noise grain propagation is kept to a minimum.

Using StarTools' Auto Mask Generator, stars are automatically left alone. And, best of all, the complete algorithm is so fast that results are calculated in virtually real-time, while the interface couldn't be more user friendly.
[url=http://startools.org/modules/synth][size=150]Synth: Star Resynthesis and Augmentation[/size][/url]

^ Diffraction patterns are not painted on; they can be quite subtle.

The Synth module generates physically correct diffraction and diffusion of starlight, based on a virtual telescope model.

Besides correcting and enhancing starlight, it may even be 'abused' for aesthetic purposes to endow stars with diffraction spikes where they originally had none. Other tools on the market today simply approximate the visual likeness of such star spikes and 'paint' them on.
[url=http://startools.org/modules/wipe][size=150]Wipe: Light Pollution, Vignetting and Gradient Removal[/size][/url]

^ The Wipe module detects, models and removes any source of unwanted light bias.

The Wipe module detects, models and removes any source of unwanted light bias.

^ 2 sources of unwanted light; a gradient starting at the upper right corner, and light pollution in the form of the typical yellow/brown light. Also visible is vignetting, as seen in the darkening of the corners. ​Image courtesy of Charles Kuehne.

The Wipe module's main purpose is to eliminate unwanted light in an image and establish a neutral background.

Unwanted light may come in the form of gradients, colour cast or light pollution.
[list][*]Gradients are usually prevalent as gradual increases (or decreases) of background light levels from one corner of the image to another. Sources may include the Moon or a nearby street light.[/*][*]Colour casts are a tint of a particular colour which, contrary to a gradient, affects the whole image evenly.[/*][*]Light pollution is the presence of a persistent haze of (often) coloured light, caused by urban street lighting.[/*][/list]

Other issues that the Wipe module may ameliorate are vignetting and amp glow;

[list][*] Vignetting manifests itself as the gradual darkening of the image towards the corners and may be caused by a number of things. [/*][*] Amp glow is caused by circuitry heating up in close proximity to the CCD, causing localised heightened thermal noise (typically at the edges). On some older DSLRs and Compact Digital Cameras, amp glow often manifests itself as a patch of purple fog near the edge of the image.[/*][/list]

Strictly speaking, vignetting is not an additive light source, and the correct course of action is to apply flat frames during sub-frame calibration. That said, reasonable results can be achieved using Wipe's "vignetting" preset.

Note that while part of Wipe's job description is 'establishing a neutral background', this doesn't necessarily mean the background is colourless. It simply means that the colour channels are now bias-free; colour calibration of the channels by the Color module is still required.
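A crude sketch of what 'modelling and removing a light bias' can look like: estimate the background from low percentiles in large cells (so stars don't skew the estimate) and subtract the model. This toy version is illustrative only; Wipe's actual modelling is far more sophisticated and, as noted, never clips your data:

```python
import numpy as np

def wipe_gradient(image, cell=32):
    """Estimate unwanted light bias as a coarse, cell-wise background
    model and subtract it, re-pedestalling so values stay non-negative.
    A deliberately simple sketch of gradient removal."""
    h, w = image.shape
    model = np.zeros_like(image, dtype=float)
    for y in range(0, h, cell):
        for x in range(0, w, cell):
            block = image[y:y + cell, x:x + cell]
            # A low percentile approximates the local true background,
            # largely ignoring stars and other bright features.
            model[y:y + cell, x:x + cell] = np.percentile(block, 10)
    corrected = image - model
    return corrected - corrected.min()   # non-negative pedestal, no clipping
```

Note how a single anomalously dark pixel in a cell would drag the percentile estimate down and leave a local remnant, which is exactly why dark anomalies must be dealt with first (see below).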

[url=http://startools.org/modules/wipe/usage/preparing-data][size=125]Preparing data for the Wipe module[/size][/url]

^ Leaving stacking artifacts in will cause Wipe to interpret the anomalous data as true background, causing it to back off near the location of the artifacts.

It is of the utmost importance that Wipe is given the best artefact-free, linear data you can muster.

Because Wipe tries to find the true (darkest) background level, any pixel reading that is mistakenly darker than the true background in your image (for example due to dead pixels on the CCD, or a dust speck on the sensor) will cause Wipe to acquire wrong readings for the background. When this happens, Wipe can be seen to "back off" around the area where the anomalous data was detected, resulting in localised patches where gradient (or light pollution) remnants remain. These can often look like halos. Often dark anomalous data can be found at the very centre of such a halo or remnant.

^ Halo around a simulated dust speck dark anomaly.

The reason Wipe backs off is that Wipe (as is the case with most modules in StarTools) refuses to clip your data. Instead Wipe allocates the dynamic range that the dark anomaly needs to display its 'features'. Of course, we don't care about the 'features' of an anomaly and would be happy for Wipe to clip the anomaly if it means the rest of the image will look correct.

^ Masking out the dust speck in order to make Wipe ignore that location.

Fortunately, there are various ways to help Wipe avoid anomalous data;
[list][*]A '[b]Dark anomaly filter[/b]' parameter can be set to filter out smaller dark anomalies, such as dead pixels or small clusters of dead pixels, before passing on the image to Wipe for analysis.[/*][*]Larger dark anomalies (such as dust specks on the sensor) can be excluded from analysis simply by creating a mask that excludes that particular area (for example by "drawing" a "gap" in the mask using the Lasso tool in the Mask editor).[/*][*]Stacking artefacts can be cropped using the Crop module.[/*][/list]

^ The result of making Wipe ignore the anomalous data. No halo-like remnant is left.

Bright anomalies (such as satellite trails or hot pixels) do not affect Wipe.

[url=http://startools.org/modules/wipe/usage/operating-the-wipe-module][size=125]Operating the Wipe module[/size][/url]

Once any dark anomalies in the data have successfully been dealt with, operating the Wipe module is fairly straightforward.

By default, a setting is selected that performs well in the presence of moderate gradients, colour casts or bias levels.

If the gradient is found to undulate more strongly, a higher '[b]Aggressiveness[/b]' setting may be appropriate. When using a higher '[b]Aggressiveness[/b]', be mindful of Wipe not 'wiping' away any medium to larger scale nebulosity. To Wipe, larger scale nebulosity and a strongly undulating gradient can look like the same thing!

If you're worried about Wipe removing any larger scale nebulosity, you can protect this nebulosity by masking it out, so that Wipe doesn't sample it.

Because Wipe's impact on the dynamic range in the image is typically very high, a (new) stretch of the data is almost always appropriate, so that the freed-up dynamic range that used to be occupied by the gradients and/or light pollution can be put to good use to show detail. Therefore, a global re-stretch using the AutoDev or Develop module is almost always required.

Having to 'Keep' the result and switch to 'AutoDev' or 'Develop' just to see the result is a bit tedious. Therefore, switching on the courtesy '[b]Temporary AutoDev[/b]' operation allows you to preview the result.

[url=http://startools.org/modules/wipe/usage/advanced-parameters][size=125]Advanced parameters[/size][/url]

A number of controls for advanced use and special cases are available.

The '[b]Corner aggressiveness[/b]' parameter lets the user specify a different aggressiveness value for the corners of the image. This can be useful if gradients become stronger in just the corners, and can help ameliorate vignetting. The '[b]Drop off point[/b]' determines how far from the center of the image the '[b]Corner aggressiveness[/b]' starts taking over from the main '[b]Aggressiveness[/b]' parameter. At 100% for the '[b]Drop off point[/b]', no effect is visible (i.e. only the main '[b]Aggressiveness[/b]' parameter is used), since the '[b]Corner aggressiveness[/b]' only comes into effect 100% of the way between the center of the image and the corners.

 The '[b]Precision[/b]' parameter can help when dealing with rapidly changing (e.g. undulating) gradients combined with high '[b]Aggressiveness[/b]' values.

The '[b]Mode[/b]' parameter selects which aspect of the image should be corrected by Wipe;
[list][*][b]Correct color and brightness[/b]; removes both colour and brightness bias across the image.
[/*][*][b]Correct color only[/b]; removes color casts but does not impact brightness bias.
[/*][*][b]Correct brightness only[/b]; retains color but corrects brightness bias. This mode is useful when processing narrowband data, or data that was not acquired on earth (for example Hubble Space Telescope data).[/*][/list][url=http://startools.org/tracking][size=175]Tracking[/size][/url]

^ Traditional image processing software flow. Each algorithm/filter only has access to data as it was generated by the step immediately preceding it and only outputs data once to the step coming after it.

StarTools' pervasive "Tracking" data mining feature is responsible for its markedly improved results compared to traditional software.

As opposed to all other astronomical image processing software packages currently on the market, StarTools takes a completely different approach to processing. Rather than consisting of many granular algorithms, steps and filters that are carried out in sequence, StarTools is akin to a living, breathing organism; everything is connected and has its place, forming a whole that is greater than the sum of its parts.

Your data remains in a superposition of states, being simultaneously linear and non-linear, deconvolved and not deconvolved, colour calibrated and not colour calibrated, and so on. This allows StarTools to consult the data in its most suitable, unadulterated state for the task at hand.

^ StarTools' interconnected processing. Each algorithm has access to the output of every other algorithm, independent of sequence, and each algorithm can send output retroactively (e.g. feed back) to any of the algorithms that were used at any time to get to the result you are viewing at the moment.
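As a toy sketch of this 'superposition' idea (the class name and the simple gamma stretch are illustrative assumptions, not StarTools internals), an image could be held in its linear and stretched states at once, with a change to one state propagating to the other:

```python
import numpy as np

class TrackedImage:
    """Toy illustration of keeping an image in multiple states at once;
    each module consults the state best suited to its algorithm."""

    def __init__(self, linear, gamma=0.25):
        self.gamma = gamma
        self.linear = linear                      # unstretched signal
        self.stretched = np.power(linear, gamma)  # display-ready signal

    def update_linear(self, new_linear):
        # A change to the linear state propagates to every other state.
        self.linear = new_linear
        self.stretched = np.power(new_linear, self.gamma)
```

A module needing linear data (e.g. deconvolution) would read `.linear`, while a display routine reads `.stretched`; neither has to know, or care, what the other did.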

Meanwhile StarTools observes how you stretch your signal and meticulously keeps track of visible noise propagation, levels and processing sequences throughout your processing session. It does all of this in the background, and without bothering you.

By data mining these two facets of image processing, StarTools is able to accomplish something quite remarkable; it can effortlessly track and calculate cause and effect when making changes to any of the different states. It's like time travel; with StarTools you can change steps you took in the past to affect how the image looks in the present and future.

So, what does this mean for your image processing?

[size=150]"Impossible" operations made possible[/size]

Firstly, it means that you are no longer beholden to the sequence in which you processed your image. Mathematically correct deconvolution after stretching (e.g. deconvolution of non-linear data)? No problem! StarTools knows how to reverse your stretching, apply deconvolution and reapply your stretching. Correct linear colour calibration of heavily processed data? No problem! StarTools will know how to completely negate the adverse effects of luminance manipulation and recover true colours.

It's like magic, except it isn't; it all boils down to StarTools simply being smarter with your hard won data.

^ No local supports, luminance masks or other crutches needed; Tracking helps StarTools' modules to achieve better results autonomously.

You don't need to worry any longer about the correct sequence of operations on your data. The notion of linear versus non-linear data has been abstracted away completely. If you're a beginner who never even knew about 'old' software's requirement to keep data linear for select operations, you no longer have to worry about it. If you are an image processing veteran, you can now do things that would otherwise be impossible, all without having to bother with sub-optimal crutches like screen stretches and the like.
[size=150]Old familiar tools with new tricks[/size]

The Tracking data mining feature breathes new life into old tools like deconvolution, wavelet sharpening and noise reduction. By infusing such algorithms with per-pixel accurate statistics about detail, noise and historical pixel values, these tools have gained a completely new dimension when it comes to signal preservation and noise suppression.
[size=150]Noise reduction effectiveness that has no equal[/size]

Noise reduction in StarTools is arguably the Tracking feature's [i]pièce de résistance[/i]. By postponing noise reduction to the last possible moment (so that Tracking has had the longest opportunity to mine the data's evolution and your actions), StarTools is able to provide noise reduction that is unsurpassed in its accuracy and effectiveness, all without the need for local supports or luminance masks. These sub-optimal crutches have simply been made obsolete by the availability of something better; true, objective and accurate statistics on each and every pixel in the image.
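The idea of mask-free, statistics-driven noise reduction can be caricatured in a few lines (the `noise_map` input and the simple blend are assumptions for illustration; StarTools' actual algorithms are far more advanced):

```python
import numpy as np

def tracked_denoise(img, noise_map, strength=1.0):
    """Blend each pixel with a local average, weighted by a tracked
    per-pixel noise estimate: noisier pixels receive more smoothing,
    clean pixels are left (almost) untouched."""
    # 3x3 box blur via shifted copies (edges wrap, fine for a sketch).
    smooth = sum(np.roll(np.roll(img, dy, 0), dx, 1)
                 for dy in (-1, 0, 1) for dx in (-1, 0, 1)) / 9.0
    w = np.clip(strength * noise_map, 0.0, 1.0)  # per-pixel blend weight
    return (1.0 - w) * img + w * smooth
```

Because the weight comes from objective per-pixel statistics rather than a hand-painted mask, no local supports are needed.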
[size=150]Promotion of "closure" and protection against "overcooking"[/size]

Fidelity and signal preservation are helped immensely by avoiding compounding rounding errors and (user-induced) "overcooking" of images. Since StarTools always selects the correct superposition state of your data, depending on the requirements of the algorithm at hand, the input (source) is always the cleanest, most unadulterated version of your data possible. Results are predictable and almost always useful; no longer is the result solely dependent on what the previous algorithm or filter generated as its output. Tracking helps promote a feeling of 'closure' and prevents endless cycles of applying filter upon filter.
[size=150]You can stop repeating yourself; faster processing with less guesswork[/size]

Finally, thanks to Tracking's data mining, all modules "talk" to each other and are aware of what all the other modules have done to your signal. 'Old' software effectively has you specifying the same constraints, thresholds, regularization amounts, kernel sizes and local supports over and over again, because the individual algorithms have no idea what you specified in the previous algorithm (which invariably needs similar input), nor does such 'old' software have any idea about the characteristics of your data or what an appropriate baseline would be for an algorithm's settings. Not so in StarTools; instead of asking you to 'pick a number', where possible StarTools asks you to pick a [i]deviation[/i] from what it thinks is a good baseline (based on prior input and Tracking data mining statistics). The result is that most modules in StarTools tend to come up with usable and reasonable default results that merely need tweaking to achieve your artistic vision for your data.

In fact, this behaviour greatly reduces the number of clicks and parameter settings you need to make. It eliminates guesswork for parameters that can be safely, reliably and objectively derived from the data or prior input without additional human (i.e. your) intervention. Not only is Tracking data mining a great help, it's also a great time saver. The time spent processing an image in StarTools is measured in minutes, not hours.
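The 'pick a deviation' idea boils down to something like this (a hypothetical function, for illustration only):

```python
def resolve_parameter(baseline, deviation=0.0):
    """Illustrative 'deviation from baseline' parameter model: the
    module derives `baseline` from tracked statistics and prior input;
    the user only nudges it, e.g. deviation=+0.25 means 25% stronger."""
    return baseline * (1.0 + deviation)
```

Leaving the deviation at zero accepts the data-derived default, which is why most modules produce a usable result before you touch anything.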