Annual Meeting of the Deep Sky Section, 2007 March 3


High-Definition Imaging of Planetary Nebulae

Mr Tasselli opened by describing his instrumentation: an 8" Intes Micro M809 Cassegrain of working focal length 1,400-2,000 mm, fitted with a Starlight Xpress SXVF-H9 CCD array, which yielded a resolution of 0.65-1" per pixel. For filters, he used a True Tech SupaSlim filter wheel with RGB, Hα and OIII filters.
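The quoted pixel scale follows from the standard plate-scale relation. A minimal sketch, assuming the SXVF-H9's nominal 6.45 µm pixel size (a figure not stated in the talk):

```python
def pixel_scale_arcsec(pixel_size_um: float, focal_length_mm: float) -> float:
    """Plate scale in arcseconds per pixel: 206.265 * pixel size (um) / focal length (mm)."""
    return 206.265 * pixel_size_um / focal_length_mm

# Across the quoted 1,400-2,000 mm working focal length range:
scale_long = pixel_scale_arcsec(6.45, 2000)   # ~0.67"/pixel
scale_short = pixel_scale_arcsec(6.45, 1400)  # ~0.95"/pixel
```

These values agree with the 0.65-1"/pixel range the speaker quoted.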

The term 'high-definition imaging' could be defined in many ways; presently, the speaker would consider work to qualify as such when the full width half maximum (FWHM) of the point spread function (PSF) of the unprocessed frames was 2" or less and the spatial sampling of the CCD pixel array was fine enough to satisfy the Nyquist criterion – a necessary requirement for subsequent deconvolution.
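The sampling condition can be stated as a simple check: the Nyquist criterion requires at least two pixels across the FWHM of the point spread function. A sketch under that reading (function name is illustrative):

```python
def satisfies_nyquist(fwhm_arcsec: float, pixel_scale_arcsec: float) -> bool:
    """True if the PSF is sampled by at least two pixels across its FWHM,
    the minimum needed for deconvolution to recover spatial detail."""
    return pixel_scale_arcsec <= fwhm_arcsec / 2.0

satisfies_nyquist(2.0, 0.9)   # 2" seeing at 0.9"/pixel: adequately sampled
satisfies_nyquist(2.0, 1.2)   # 2" seeing at 1.2"/pixel: undersampled
```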

Outlining why he was attracted to imaging planetary nebulae (PNs), he explained that they comprised some of the brightest deep sky objects – many were brighter than Uranus and Neptune – and this made them quite easy targets. They were usually compact, fitting readily into single CCD frames. Their edges were usually sharp, unlike those of galaxies, whose extremities usually faded more gradually into the sky background; this meant that thorough flat-fielding was not so vital when imaging them. They showed a plethora of emission lines, and so responded well to the use of a wide range of narrow-band filters – of great help when observing from sites with significant light pollution. Put together, these considerations meant that expensive equipment was generally not required.

The array of objects on offer were themselves a very pleasing crop, showing clear contrast and colour variations, and amongst them a great diversity of shape and form. Surprisingly, they seemed to be rather under-appreciated objects: many had never been imaged by either amateurs or professionals before, let alone at high resolution.

Outlining what was required, the speaker identified good seeing as the single greatest prerequisite. Matters which were under the observer's control, those of equipment choice, were surely secondary. A telescope with good mechanical stability was a great help, but only to minimise the time spent tweaking instrumentation rather than taking images. Its optics needed to be good, but once again, nothing special. Good focusing was needed, but no special hardware was needed to achieve it. A dew shield was, however, strongly recommended – dewed up optics were easy to miss, and it could be infuriating to later find many tens of minutes of exposures so ruined.

The thermal stability of one's observatory needed consideration. In the winter, when daily temperature variations were usually only 3-4°, this was less important, but in the summer, when they could reach 20°, it was much more so. The speaker's garden observatory had no roof – a set-up with a great strength here: there were no walls around it to retain heat and produce thermal currents. Even so, he still found the telescope's own heat retention to be sufficient to render an internal fan necessary in summer. One drawback of this set-up, however, was that it needed to be polar aligned anew every night; it could take 1-2 hours to achieve 0.2"/minute tracking accuracy. He remarked, though, that even fixed plinth mounts could require similar treatment for their first few years, as their foundations shifted in the rain.

In good seeing, he aimed to focus to within 0.1" accuracy, which generally took around 30 minutes. In the winter, he usually found that after one focussing session at the start of each night, a single quick check in the early hours sufficed. In the summer, however, 3-4 full re-focussing sessions were often required as the temperature fell through the night.

Mr Tasselli always worked by taking large numbers of short exposures, no longer than 30 seconds each. He found seeing conditions often varied greatly from one 30-second period to the next; by taking many hundreds or thousands of such exposures and discarding all but the best, he could build up a long exposure whilst "freezing" the best seeing. This also minimised the effect of tracking errors upon the final image; the speaker found that auto-guiding was unnecessary with this approach.
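The speaker selected his best exposures visually; purely for illustration, an automated variant of this "lucky imaging" selection might rank frames by a measured PSF FWHM and keep only the sharpest fraction (all names hypothetical):

```python
def select_best_frames(fwhms, keep_fraction=0.1):
    """Return indices of the sharpest frames (smallest measured FWHM),
    best first, keeping roughly the given fraction of all frames."""
    order = sorted(range(len(fwhms)), key=lambda i: fwhms[i])
    n_keep = max(1, int(len(fwhms) * keep_fraction))
    return order[:n_keep]

# Keep the best half of four frames with these measured FWHMs (arcsec):
select_best_frames([2.5, 1.8, 3.0, 1.5], keep_fraction=0.5)  # -> [3, 1]
```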

He worked only in the best seeing conditions, which usually accompanied anti-cyclonic weather in the summer months; high-pressure systems tended to yield a steady atmosphere. January and February could also bring similar conditions, though they had not in the winter just passed. The speaker avoided observing objects at low altitudes, where atmospheric distortions were greatest; as a rule of thumb, he restricted himself to altitudes >65°.

Having obtained raw data, he used four post-processing software packages: the standard Starlight Xpress image capture programme, the powerful free Iris image processing suite for the bulk of his image enhancement, Adobe Photoshop to perform final retouching, and finally Neat Image to reduce noise by filtering out the graininess of his CCD array.

After visually selecting his best exposures, he applied standard processing techniques to them: dark subtraction and flat-fielding. He then averaged them with sigma rejection – a technique good for filtering out especially noisy images, those with cosmic ray hits, etc. This yielded low-noise images, but the results were still typically quite blurry, due both to tracking errors and seeing conditions. To rectify this, he used three contrast enhancement tools: unsharp masking, Richardson-Lucy Deconvolution, and – somewhat stronger – Van Cittert Deconvolution. To achieve dark backgrounds, and to bring out contrast among the upper luminance contours, the speaker also applied some histogram modification, using the Digital Development Process (DDP) pioneered by Kunihiko Okano and Photoshop's non-linear curve modification facility.
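The sigma-rejection average discards, for each pixel, any values lying too far from the mean across the stack of frames before averaging the remainder. A minimal single-pass sketch for one pixel (real implementations such as Iris's iterate and work over whole frames):

```python
from statistics import mean, pstdev

def sigma_clipped_mean(values, kappa=1.5):
    """Average one pixel's values across a stack of frames, rejecting
    outliers (cosmic ray hits, noisy frames) more than kappa standard
    deviations from the mean. Single pass, for illustration only."""
    m, s = mean(values), pstdev(values)
    kept = [v for v in values if s == 0 or abs(v - m) <= kappa * s]
    return mean(kept) if kept else m

# A cosmic ray hit in one frame is rejected before averaging:
sigma_clipped_mean([10, 11, 9, 10, 100])  # -> 10.0
```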

He applied these in a fairly consistent pipeline. First he made a luminance image, to which he applied DDP to get the right distribution of grey levels across the image. He then filtered out any pixels with negative luminances – a prerequisite for deconvolution – before applying 4-5 iterations of Richardson-Lucy Deconvolution, saving the result as a "low-resolution" frame. He then applied a further 3-5 iterations of stronger Van Cittert Deconvolution, used a weak low-pass filter to remove some of the resulting noise, and saved a second "high-resolution" frame. In Photoshop, he blended the high-luminance levels of the latter image with the lower-luminance levels of the first. In the result, the fainter regions of the frame – typically the sweeping extremities of objects, dominated by lower spatial frequencies – were only mildly sharpened, to minimise noise, whilst brighter regions were sharpened quite strongly to bring out fine structure.
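The final blend step might be sketched as a per-pixel weighted mix of the two frames, with bright pixels taken from the strongly sharpened "high-resolution" frame and faint pixels from the mildly sharpened "low-resolution" one. This is an illustrative reading, not the speaker's exact Photoshop procedure; the threshold and softness parameters are hypothetical:

```python
def blend_by_luminance(low_res, high_res, threshold=0.5, softness=0.2):
    """Blend two sharpened versions of an image pixel by pixel: the
    weight of the strongly sharpened frame ramps from 0 to 1 as
    luminance crosses `threshold`, over a transition of width `softness`."""
    out = []
    for lo, hi in zip(low_res, high_res):
        w = min(1.0, max(0.0, (lo - threshold) / softness))
        out.append((1 - w) * lo + w * hi)
    return out

# A faint pixel keeps the mild sharpening; a bright one takes the strong:
blend_by_luminance([0.1, 0.9], [0.2, 1.0])  # -> [0.1, 1.0]
```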

The speaker closed by showing a number of examples of his work – the Cat's Eye Nebula (NGC 6543), the Blinking Eye Nebula (NGC 6826), the Cheeseburger Nebula (NGC 7026), the Blue Snowball (NGC 7662) and the Ring Nebula (M57) – in each case comparing the amount of detail resolved in his images with results from the Hubble Space Telescope (HST); he remarked that his images compared not unfavourably with early HST results.

Following the applause, Dr Moore introduced the morning's second speaker, Mr Paul Clark.



