Getting Started - Long Exposure Astrophotography

Copyright © 2013 by Allan Hall

10 9 8 7 6 5 4 3 2 1

All rights reserved. No part of this publication may be reproduced, distributed, or transmitted in any
form or by any means, including photocopying, recording, or other electronic or mechanical
methods, without the prior written permission of the publisher, except in the case of brief quotations
embodied in critical reviews and certain other noncommercial uses permitted by copyright law. For
permission requests, write to the publisher, addressed “Attention: Permissions Coordinator,” at the
address below.

Allan Hall
1614 Woodland Lane
Huntsville, TX 77340
www.allans-stuff.com/leap/

Although the author and publisher have made every effort to ensure that the information in this book
was correct at press time, the author and publisher do not assume and hereby disclaim any liability to
any party for any loss, damage, or disruption caused by errors or omissions, whether such errors or
omissions result from negligence, accident, or any other cause.

Any trademarks, service marks, product names or named features are assumed to be the property of
their respective owners, and are used only for reference. There is no implied endorsement if we use
one of these terms.

“Adobe”, “Adobe Photoshop”, and “Adobe Photoshop Lightroom” are either registered trademarks or
trademarks of Adobe Systems Incorporated in the United States and/or other countries.

Front cover background: Rosette Nebula, Copyright Allan Hall


Front cover smaller images left: M13, Copyright Allan Hall
Front cover smaller images center: Helix Nebula, Copyright Allan Hall
Front cover smaller images right: M74, Copyright Allan Hall
Back cover: Copyright Allan Hall
Section title page images: Copyright Allan Hall
Acknowledgements:
The following persons/companies have graciously agreed to allow reprints
of their screens in this publication.
Adobe product screenshot(s) reprinted with permission from Adobe Systems Incorporated.
Star charts printed from AstroPlanner V2, used with permission, Paul Rodman, Author
AstroPlanner & AstroAid screenshots used with permission, Paul Rodman, Author
Stellarium screenshots used with permission, Alexander Wolf, Developer
Star Walk screenshots used with permission, Olga Shtaub, Vito Technology Inc.
Clear Sky Chart screenshot used with permission, Attilla Danko, Author
Images Plus screenshots used with permission, Mike Unsold, Author
EQMOD screenshots used with permission, Chris Shillito, Author
C2A screenshots used with permission, Philippe Deverchere, Author
TheSkyX screenshots used with permission, Daniel Bisque, VP Software Bisque, Inc.
Deep Sky Stacker screenshots used with permission, Luc Coiffier, Author
Deep Sky Planner screenshots used with permission, Phyllis Lang, Owner Knightware LLC
SkyTools screenshots used with permission, Greg Crinklaw, Skyhound.com
FITS Liberator screenshots used with permission, Lars Christensen, Project Executive
Images from New Mexico Skies observatory used with permission, Lynn Rice, Co-Owner
DSLRShutter/PHD screenshots used with permission, Craig Stark, Author
Alignmaster screenshots used with permission, Matthias Garzarolli, Author
RSpec screenshots used with permission, Tom Field, Author
Orion Telescopes logos and brand names appear with permission, Mary Caballo, Orion Inc.
SkySafari screenshots used with permission, Tim DeBenedictis, Owner
PixInsight screenshots used with permission, Juan Conejero, Principal Developer, Pleiades
Astrophoto, S.L.
PHDLab screenshots used with permission, John Wainwright, Author
NASA HD App screenshots used with permission, NASA App Team
BackyardEOS screenshots used with permission, Guylain Rochon, Author
For Big...
Table of Contents
Section 1. The basics
1.0 Introduction
1.1 About this book
1.2 Your budget & realistic expectations
1.3 The mount
1.4 The telescope
1.5 Basic setup
1.6 Guide scope & guiding
1.7 The camera
1.8 Other important equipment
1.9 Acquiring images
Section 2. Up and running
2.1 Camera control software
2.2 Mount control/Planetarium software
2.3 Tablet software
2.4 My setup procedure
2.5 Exposure considerations
2.6 Post processing overview
2.7 Finding targets, session planning
2.8 Astrophotography with camera lenses
2.9 Brand specific considerations
2.10 Diagnosing image errors
Section 3. Making your head hurt
3.1 Shooting mono to get color
3.2 Stacking images
3.3 Stretching images
3.4 Image acquisition tricks and tips
3.5 High Dynamic Range acquisition & processing
3.6 Tuning PHD for best guiding results
3.7 Creating a custom light box for flats
3.8 Remote observatories
3.9 Being different: Spectroscopy
3.10 Closing notes
3.11 Where to go from here
Section 4. Top 25 targets to start with
Section 5. Glossary
My primary imaging scope aligning on a target
1.0 Introduction

As a photographer and someone who was infatuated with all things space
related it seemed natural that I would put the two together. After all, I grew
up in the great space era spanning from the Apollo programs through
Skylab, Mir, the shuttles and now with the ISS and private space carriers.

My father was quite an active private pilot and I learned to fly before I
could see over the dash. The skies are beautiful over the countryside at ten
thousand feet, and this got me wanting to see more of it.

My first experience with real astronomy was with a horribly under-mounted
EQ reflector that I managed to buy one year after working more holiday
hours than any sane person ever should. This was back in the pre-internet
days, in a small town; for someone who had to work three solid days over
Thanksgiving just to afford a $300 telescope, there were not a lot of
places to turn for help. And so I hated that scope, not because it didn’t
work, but because I could never learn to use it. Fortunately, it still survives
and gets used with a good friend and his three small children.

It wasn’t until later in life that I finally acquired the means and time to
actually pursue amateur astronomy as anything more than occasionally
looking up at the sky with wonder and frustration. Finally I thought I could
do something with this! What’s more, I had the photography equipment that
should allow me to not only look at objects, but capture them. Then I
realized something horrible: the information on astrophotography I could
find was sparse, outdated, and incomplete from a beginner’s point of view.

I read several books, searched the Internet and pored over online forums.
Nowhere was there a really good starter’s guide on serious
astrophotography. Sure, there were some good books, but they mainly
focused on a brief overview of the ideas coupled with tutorials on some specific
piece of software that I had never heard of and couldn’t find anyone
actually using. What I wanted, what I really needed, was something that
covered pretty much all the theory and most of the practical application all
in one place.

Still, I pushed ahead, determined to make a go of it. It was at this time that
an astronomy club of which I was a member asked me to speak at their
beginner’s meeting. I was flabbergasted. Why would anyone in their right
mind want someone who has been doing this all of four months to speak at
their meeting? Then it hit me: because I had only been doing this four
months, I had a unique perspective.

The problem was I hadn’t really thought about it, but I still gave it my best
shot. I cobbled together the foundation of what would eventually become
this book, approximately nineteen pages of notes and scribbles of how I
made things work. Unfortunately I had little time to prepare what I would
say, and spent most of that time working on the paper and some prints to
mount and show, so my speaking left a lot to be desired. I recorded the
whole thing so I could go back and laugh at myself, which I still do.

This book is an attempt to rectify all the problems I found when I started
down the path.
Before we begin I feel it is only right to thank a few people. First and
foremost is of course my wife Sue Ann. Without her understanding and
support none of this would have been possible. Mike Prokosch for helping
me at the local observatory. Don Taylor for answering a never-ending
stream of questions at the local observatory all night long, for more nights
than he probably cares to remember. The Sam Houston State University
Physics Department for sharing their fine facility with the local amateur
astronomy organizations. The sales and service team at Orion Telescopes
for helping me get the exactly correct setup for what I wanted to do and for
helping me iron out the kinks. Tom Field from R-Spec for his unbelievable
support in spectroscopy. George Marsden from the North Houston
Astronomy Club for asking me to speak at the Novice Session which started
all this writing mess (by the way George, my wife knows you are to blame
for all the time I spent writing this, and for making her read it over, and
over, and over. Don’t open any packages from her.) Also thanks to
George and the NHAC leadership for helping me get NHAC members to
read the manuscript. The people who reviewed and helped me check the
book for errors, including:

Don Taylor, Dan Davidson, Tom Field, Rory Glasgow, David Tomlin,
Aaron Clevenson, David Lambert, Todd Sullivan, Mary Moore, and James
Billings.

And far too many more people to mention. Thank you all.
1.1 About this book

When I decided to write this book I wanted something written from the
point of view of the beginner, answering what seems like the stupid
questions right up front. I wanted to start with the very basics to get you up
to where you can participate quickly, then expose you to some of the more
advanced techniques to give you room to grow. Hopefully this book will
also help answer the “whys” and “hows” that I never seemed to
understand.

I have written this book in short, digestible sections in a logical progression
from what you need to know before buying anything, to the basics of how
to set it up and use it, through more advanced methods. I hope that it will
give you enough information on enough topics to get you going and keep
you interested.

This book also contains a wealth of pictures, graphics and charts to show
you, instead of just tell you, what it is we are talking about.

I have not tried to be the best at everything; there are, for example, better
books on image processing with specific applications. This book is more
about overviews and understanding what and why you want to do things
rather than exactly how to do it in a specific application. In some areas I
have tried to show you how to do something in more than one program so
you learn the ideas and not just how to work one program.

This book could go on indefinitely, or at least I can’t see an end. What is
here is where I just decided enough was enough. My goal was to give you
enough information to keep you going and get you over the beginner’s
hump, to where you could not only ask intelligent questions but understand
the answers once you got them. A base you could draw on to start to figure
out the more complex issues on your own.

So what exactly is long exposure astrophotography (AP for short)?
Normally this means exposures longer than thirty seconds of a very
distant, dim object such as a nebula (a brightly colored gas cloud in
space). All those really cool Hubble telescope images are in this category.

Why should I take long exposures instead of short exposures? Simply put,
short exposure AP work is a field all its own and can create some
spectacular images. Unfortunately it cannot image the same extremely faint
objects that long exposure can, even if you stack hundreds of images
together (more on stacking later).

Let’s start by scaring the heck out of you: long exposure astrophotography
typically starts with at least $2000 (assuming you already have a camera)
and can have you out in the freezing cold dead of night for 8+ hours
dripping wet from dew to get one little picture that some people won’t
believe you took anyway. If that freaks you out this may not be the hobby
for you.

Now that is not to say that you cannot do any astrophotography unless you
have lots of money because it completely depends on what you want to get
out of it. This book is written with an aim at the middle of the road, so to
speak. I assume you have enough money to buy a setup that will track a
target accurately for ten minutes. I do not assume you have enough money
to buy a $10,000 research grade mount. Even if you are starting with a poor
starving student’s bank account balance, you can still get a lot of
information from this book to help you understand your limitations, what
you can do, and where you may want to go in the future.

Still with me? Then let’s move on!

Let’s take a moment to point out that most of what is in this book is what
worked for me. Asking lots of questions and reading lots of books is a great
start, but eventually you need to get out there and test things to see how
they work for you. What I am sharing is mostly what I got out and tried.

Astrophotography and other forms of photography have much in common.
Both can be done by anyone with enough money for a body/lens or
body/telescope combination, and both can be improved with the right skills
and right equipment.
With either type of photography you have to ask yourself what you want to
accomplish before you just jump in. There are many different levels and
tons of different specializations you can master. The bulk of people I hear
talking about wanting to do astrophotography want to do a few things
including the moon, Jupiter, Saturn, the Orion nebula, the Horsehead
nebula, and a few other galaxies and nebulas to show their friends and
impress their family. This does not require a huge investment and can to
some degree be done with just a few hundred dollars. The key is
information.

One of the fantastic things about this hobby is how far you can push it.
Once you get the bright Messier objects done there are the Caldwell objects,
then the Herschel objects, then thousands of NGC targets. From there you
have tons of other lists: splitting binary stars, stellar spectroscopy,
measuring variable stars and the list just goes on and on. While all this is
going on you are constantly refining and improving your capturing and
processing abilities, and then on to re-imaging targets to make the images
even better.

All of this can span many lifetimes, so it is very unlikely you will ever run
out of things to do.

Every astrophotography image on and in this book was taken by me, with
the equipment recommended by and shown in this book. The cameras,
scopes, mounts and accessories you see pictured throughout the pages are
mostly what I use.

NEVER EVER EVER POINT ANYTHING TOWARDS THE SUN


WITHOUT THE CORRECT PROTECTION. NOT YOUR EYES,
NOT BINOCULARS, AND CERTAINLY NOT YOUR
TELESCOPE. FAILING TO HEED THIS WARNING COULD
LEAD TO PERMANENT BLINDNESS, EQUIPMENT
DAMAGE, OR FIRES (AND FIRES CAN LEAD TO PROPERTY
DAMAGE, BODILY INJURY, AND EVEN DEATH). NO NO NO,
BAD BAD BAD

Don’t forget to visit the website for this book where you can download
example files, watch videos, and participate in discussions about the book
and astrophotography in general.

http://www.allans-stuff.com/leap/
1.2 Your budget & realistic expectations

This is the most contentious area of AP. Some people will tell you that you
need a $10,000 budget to really do AP, some will say $5,000, and some
claim that $500 will be more than enough. The problem here is they are all
right!

You can in fact get started in AP with an old shoe box, some duct tape and a
piece of film. This is called a pinhole camera and was invented around 1850
by a Scottish scientist named Sir David Brewster (note that the physics of
the pinhole camera were well known since the 5th century BC, but there was
no film). Yes, you can take images of the moon and stars with this, and
people have indeed done so.

I am going to go out on a limb here and say that is probably not what you
had in mind, which is fine, but that is really all you need to be able to “do
AP”. Anything you spend above that gets you better images and/or less work
to get them, so the question now becomes how much better do you want to
get?

If your answer is that you want to take snapshots of the moon, maybe Saturn
and Jupiter, possibly the Orion nebula and the Andromeda Galaxy then you
can use your smartphone/tablet, or point & shoot digital camera to shoot
through the eyepiece of any telescope out there, including the cheapy $25
mall specials.

Again, that is probably not what you had in mind if you have this book. To
help you get an idea of where you can go with what budgets I have compiled
a chart you will see shortly. The further to the right on the chart you go, the
better quality images you can take (and larger number of targets you can
capture). Unfortunately this chart does not take into consideration cameras,
laptops, processing software, blankets, batteries, etc.

Before we really look at the chart keep in mind these suggestions are purely
from an “I want to do AP” point of view and do not take anything visual into
consideration at all.

Let’s assume you have a reasonable DSLR (reasonable meaning $500+,
Canon or Nikon preferred), a reasonable laptop (something made in the last
four years) and at least a few hundred dollars for software, blankets, dew
prevention items, batteries, etc. If you don’t have all of that, add somewhere
between $1,000 and $1,500 to the prices on the chart for the basics.

The chart is color coded by the capabilities of each type of setup. The
blacks are the worst choices for AP: they will suffer from field rotation,
they have very small apertures so will require tons of frames to get a
reasonable image, and will have a hard time supporting your camera.

The dark gray with white text will have the same problems as the black
except they will hold the camera a little better and have larger apertures so
your image count might be less than 100 per target.

The medium gray can start to get you into some pretty good exposures, a
minute or more if you are careful instead of the thirty second max with the
blacks and dark grays. You also can now illuminate most of the sensor in
your camera since you are using a 2” camera adapter instead of the 1.25” of
the darker colors; we will discuss what all this means a little later.

The light gray packages can now get you into where you can image
thousands of targets, and get some pretty dang good images. With some
patience, luck, and skill you can really crank out some nice stuff.

White? What can I say, with that and a nice CCD you can rival anyone else
out there short of the Hubble.
Figure 1 Astrophotography package selections based on price.

People usually start screaming about the $5,300 price tag at the bottom of
the white tier: the description of the white section above is what they want
to do, but at the price tag of the black section. Sorry, in this field cost is
directly proportional to the quality you can achieve and the targets you can
capture.

Realistically, however, as long as you are willing to work hard, learn a lot,
and spend countless hours in the field a light gray kit will get you images
that are very impressive. All the images on my website and throughout this
book were shot with a kit that is at the top of the light gray section to give
you an idea.

If, on the other hand, your interest in AP is a fascination more than a serious
interest and you are content with mainly viewing the heavens and snapping a
few shots to share with friends, a black kit may suit you just fine.

The chart is a starting place; it is not supposed to be the end-all be-all. It was
put together to show you the different tiers and what a reasonable suggested
package would be for that amount of money. If I had a camera, a laptop,
Adobe Photoshop®, $500 or so in my pocket in addition to the prices on the
chart for miscellaneous expenses, then the chart fairly accurately shows what
I would personally look at spending my money on.

One question I face when explaining short exposure versus long exposure:
you can get the same amount of total exposure time by taking one hundred
1-minute images as by taking ten 10-minute images, so why do long
exposure work at all?

A Salvation Army bell ringer stands outside a store for five hours and
collects $600 in donations. If you do the math this works out to an average
of just over $0.03 per second. If that same bell ringer set up for only one
second, do you think he would get $0.03? No? It’s the same idea here. You
have to be there long enough to collect some photons to stack in the first
place; this is where long exposure AP comes in.
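The arithmetic behind this can be sketched with a toy noise model. This is purely my illustration, not from the book: the signal rate and read noise figures below are made-up numbers, but they show the principle that every frame you take pays a fixed read-noise "toll", so fewer, longer frames beat many short ones even when the total exposure time is identical.

```python
import math

# Toy camera noise model (assumption: photon shot noise + per-frame read
# noise only; dark current and sky glow are ignored for simplicity).
def stacked_snr(n_frames, exposure_s, rate_e_per_s=0.1, read_noise_e=8.0):
    """Signal-to-noise ratio of n_frames stacked exposures of a faint target."""
    signal = rate_e_per_s * exposure_s             # electrons per frame
    noise = math.sqrt(signal + read_noise_e ** 2)  # per-frame noise
    return math.sqrt(n_frames) * signal / noise    # stacking gains sqrt(N)

# Same 100 minutes of total exposure, split two ways:
short_frames = stacked_snr(100, 60)  # one hundred 1-minute frames, SNR ~7.2
long_frames = stacked_snr(10, 600)   # ten 10-minute frames, SNR ~17.0
```

Under these assumed numbers the ten long frames come out with more than twice the signal-to-noise of the hundred short ones, which is the whole argument for long exposure work on faint targets.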
1.3 The mount

The single most important thing in AP is the mount. This is also where you
will spend the single biggest block of your money to start with. If the mount is
not sufficiently solid and accurate, your images will look like blurry
little blobs. Do you want to take pictures of blurry little blobs? I didn’t think
so.

There are two types of mounts for telescopes, Alt/Az (short for Altitude
Azimuth, below left), and EQ (short for Equatorial, below right). Alt/Az
mounts move up and down, and left and right. Unfortunately the stars appear
to rotate around the earth rather than move up/down and left/right. If you use an
Alt/Az mount for long exposures the object you have in the center of your
eyepiece will stay in the center, and everything will rotate around the center.
This is called field rotation.
Figure 2 Alt/Az mount on the left, EQ mount on the right.

Think of it this way. Point your camera at the center of a windmill. Now
watch as the center of the windmill stays in the center of the frame, but the
blades rotate around in a circle. With an Alt/Az mount you would only be
able to take the picture by moving the camera up, down, left or right; you
could not rotate it, so the outside of the windmill would always be blurry from
motion. With this type of mount you cannot do long exposure
astrophotography.

Alt/Az mounts are typically found on scopes used primarily for visual use
such as Dobsonians and many other reflectors. Figure 3 shows an illustration
of what happens when you try to do long exposure AP with an Alt/Az mount.
Figure 3 How field rotation works

In figure 3 you see how the constellation of Cassiopeia rotates over time
around the north celestial pole, roughly located at the star Polaris. While
Polaris seems to always stay right in the center, the constellation not only
circles around it, but rotates so that it is always pointing the same direction in
relation to Polaris.

In fact, if you stood looking at Polaris all night long you would
actually be able to watch Cassiopeia make this trip.

What this means for astrophotography: if you were shooting an image of the
center star in Cassiopeia with an Alt/Az mount, that star would always remain
dead center of your image, but the other stars in the constellation would rotate
around it.
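For the mathematically curious, the speed of field rotation on an Alt/Az mount is often estimated with a standard approximation. The formula and the example numbers below are my own illustration (an assumption, not from this book): the rate depends on your latitude and on where the target sits in the sky.

```python
import math

SIDEREAL_RATE_DEG_PER_HR = 15.04  # apparent rotation rate of the sky

def field_rotation_deg_per_hr(latitude_deg, azimuth_deg, altitude_deg):
    """Commonly used approximation of Alt/Az field rotation rate."""
    lat, az, alt = (math.radians(v) for v in
                    (latitude_deg, azimuth_deg, altitude_deg))
    return SIDEREAL_RATE_DEG_PER_HR * math.cos(lat) * math.cos(az) / math.cos(alt)

# Hypothetical example: latitude 30 N, target due south (azimuth 180)
# at 45 degrees altitude.
rate = abs(field_rotation_deg_per_hr(30, 180, 45))  # roughly 18 deg/hour
blur_px = 1000 * math.radians(rate / 60)            # drift of a star 1000 px
                                                    # from center in 1 minute
```

Under these assumed numbers a star near the edge of the frame smears by several pixels in a single one-minute exposure, which is exactly the streaking described above.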

Let’s look at some actual images to see what this looks like.
Figure 4 Illustration of what happens using an Alt/Az mount for long exposure AP.

In this image note the stars are very streaked on the outer edges and get
sharper as you move towards the center. You can also see that things in the
image are not as bright as they are in the next image because the light gets
smeared instead of concentrated in one point. An Alt/Az mount will take
images similar to this at long exposures depending on where you are shooting
in the sky.

This brings us to the Equatorial mount. On an EQ mount the scope rotates on
two axes. This rotation allows it not only to keep the target centered in the
frame, but to keep the entire frame correct by rotating the camera with the
scope as the stars rotate. Again with the windmill, you can now rotate the
camera at the same speed as the blades of the windmill spin, so the image is
nice and sharp and everything is stopped as shown in the next figure.
Figure 5 The same image as in the previous figure but on an EQ mount.

The ideal mount for AP work starts with the HEQ-5 such as the Orion Sirius.
Anything smaller will most likely not drive your AP scope accurately enough
to get you what you want. (An exception to this is widefield which can be shot
with just a camera and lens on a mount, or a very small refractor telescope.)
The next step up from the HEQ-5 is the EQ-6 like the Orion Atlas. The only
real difference in these two is the amount of weight you can put on them, and
of course how much they weigh. Other manufacturers make comparable
mounts but as of 01/01/2013 you can expect to pay roughly $1200+ for a
starter AP mount. That is the mount alone, no scope, no camera, no adapters,
nothing.

Just to give you the whole picture, mounts can get quite expensive. In my
opinion the top of the line commercial mounts are the Paramount MX and ME
II from Software Bisque and run $9,000 and $12,750 respectively. These also
do not come with a tripod or pier for those prices! On the other side, the
Paramount ME II will support up to a 240lb load compared to the Atlas’ 40lbs.
Of course they have tons more features than just increased payload but you do
pay for them.

Check out their website at:

www.bisque.com

for lots more information if you are looking for a top of the line mount, or if
you are just curious what one will do.

Figure 6 A wedge for a fork mounted 8” SCT.

There are devices called EQ wedges which can convert some forked Alt/Az
mounts into something that works much like an EQ mount. A forked mount is
a mount where the telescope does not attach directly to the top of the mount
but instead has one or more arms that attach from the mount head to the
telescope tube. This may be sort of a solution if you already have a good deal
invested in an Alt/Az mount. There are, however, several issues with this. The
bearings that connect the fork to the optical tube are typically not designed to
carry the weight of a heavier camera and/or guidescope. The distance from the
mount axis to the telescope is greater on a fork mount, which makes it less
stable than an EQ mount. Next, the scope is pretty much integrated with the
fork mount, making upgrading the scope or mount practically impossible. In
some cases however, the scope can be taken off the forks and placed on an EQ
mount.

In addition, weird as it may seem, the fork Alt/Az mount even without a
$400-$800 wedge can be more expensive than about the same setup with an
EQ mount. Case in point would be the Celestron C8 S-GT XLT on an EQ
mount for $1529 while the Celestron CPC 800 XLT mounted on an Alt/Az is
$1999. It also weighs 7lbs more even without the wedge!

Now this is certainly not meant to imply you can’t use a wedge. If you already
have a fork mount scope such as the very popular 8” SCT you could make it
work, and work fairly well. You could also spend the money you would spend
on a wedge as a down payment on a nice EQ mount which will serve you far
better in the long run.

Alt/Az mounts differ in design from EQ mounts in several noticeable ways
and can be easily identified. If there is a bar hanging down with weights on it
to balance the payload (more on this later) and the bottom of the scope
attaches to the top of the mount, that would be an EQ mount. If there are one
or two arms that connect to the sides of the scope, that would be an Alt/Az
mount.

Recently there has been a new type of mount released, a combination mount.
These include the Meade LX-80 and the Orion Atlas Pro. They are both an
EQ and an Alt/Az in one. These are great if you use one rig for both visual
and imaging and regularly switch between the two. The downside is of course
they weigh a lot and are so far fairly unproven designs for AP work.

There are lots of things to consider when buying a mount. Most of them, such
as polar scopes and payload maximums, will be discussed a little later on in
the book.

Lower end mounts used for widefield include the EQ-5 mounts such as the
Celestron CG-5 and Orion’s SkyView Pro. These and the ones above them are
all GoTo mounts (the SkyView Pro mount is also available motorized without
GoTo, or with no motors at all), meaning they not only track targets but also
have the capability to automatically find targets either on their own or when
hooked up to computers. This can be extremely valuable for finding very faint
targets, not to mention speed and convenience.

If you plan on using just your camera or a very small and light refractor (less
than about 8lbs) you can use a smaller mount such as the Celestron CG-4,
EQ-4 or the Orion AstroView and just add tracking motors.

A quick note: the terms CG-? and EQ-? are largely interchangeable. CG
denotes a Celestron mount, whereas EQ is the generic term.

Moving even further down, you can mount just a camera with a lens on
something like an EQ-3 mount, or maybe even an EQ-2. I have even heard of
reasonable successes on an EQ-1 although I have heard of a lot of difficulties
with this mount as well.

If you are serious about even camera-only widefield AP, I would suggest at
least an EQ-2 and preferably an Orion AstroView.

Your choice of scope will also help determine your mount. You need a mount
that can easily hold the weight and resistance of your scope. Mounts are rated
for a maximum load capacity. For example, a mount with a 30lb capacity can
accurately drive up to 30lbs of payload, not including the weight of the
counterweights. Most people say that for AP work you should never exceed
75% of the mount’s rated capacity. I disagree with such a broad statement and
get a little more technical with it.

Be warned: do not go by the maximum load rating alone, as these ratings are
sometimes incorrect. You also should consider the software you will use to
drive the mount if you will be using a computer. I have had experience with
an Orion Sirius mount (rated at 30lbs) and a Celestron CG-5 (rated at 35lbs)
and in my experience would take the Sirius mount every time. It is more
stable at any load, quieter, interfaces with my software much better, and is far
easier to align and use out of the box. If you get the chance to use both before
you buy one it will greatly help you decide on what works best for you.

Choosing your mount all starts by adding up the weight of your main scope,
guide scope (if you intend on using a separate scope for guiding), guide
camera, dew heaters, finders, filters, field flattener or coma corrector, and
camera.

To figure the maximum load for your mount you should start with 50% of the
rated maximum, then add and subtract based on factors such as how large the
load is, how long it is, etc. For example, a short refractor weighing 75% of the
maximum load will actually provide better results than a large Newtonian at
50%. Why? Think of it this way: which is easier to carry around, a 15lb bag of
dog food or a 10lb 4’x8’ sheet of ¼” plywood? The plywood is huge, turns
into a kite in the slightest breeze, and because of its size makes it very hard to
control quickly and accurately. Even though the dog food is heavier, it is much
easier to control and wind does not affect it.

So I say up to 75% of the maximum payload for refractors, and up to 50% for
reflectors, as a general rule. This means with a payload (including scope,
camera, adapters, guidescope, etc) of 20lbs you can use an HEQ-5 rated at
30lbs for a refractor, or if you want to use a reflector you need to use an EQ-6
rated at 40lbs. One note is that these weight ratings do NOT usually include
the counterweights. So if a mount can hold a maximum of 20lbs of load, that
load is in addition to any counterweights needed to balance that 20lbs.
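The rule of thumb above can be written down as a quick calculator. This is just a sketch of the guideline in the text (the 75% and 50% factors come from the paragraph above; the example payload and mount ratings are illustrative):

```python
# Rule of thumb from the text: plan for up to 75% of a mount's rated
# capacity with a refractor, up to 50% with a reflector
# (counterweights excluded from the rating in both cases).
FACTORS = {"refractor": 0.75, "reflector": 0.50}

def usable_payload_lbs(rated_capacity_lbs, scope_type):
    """Practical AP payload limit for a mount of the given rated capacity."""
    return rated_capacity_lbs * FACTORS[scope_type]

payload = 20  # lbs: scope + camera + guidescope + adapters (illustrative)

heq5_refractor_ok = usable_payload_lbs(30, "refractor") >= payload  # 22.5 lbs usable
heq5_reflector_ok = usable_payload_lbs(30, "reflector") >= payload  # only 15 lbs usable
eq6_reflector_ok = usable_payload_lbs(40, "reflector") >= payload   # 20 lbs usable
```

This reproduces the worked example from the text: a 20lb payload fits an HEQ-5 (rated 30lbs) as a refractor but needs an EQ-6 (rated 40lbs) as a reflector.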

Here is a piece of advice: be leery of reviews for any mount that scores excessively high. For example, I read a review that was very detailed and well thought out regarding a CGEM someone had purchased (a fine mount) which scored a perfect 10 on tracking. I find this interesting because if you compare it to the Paramount ME II with the optional RA and DEC on-axis encoders and tripod, for a total price of around $21,000 (the CGEM is around a $1,500 mount with a tripod), you will find that:

The Paramount is belt driven so has no perceptible backlash, whereas the CGEM’s gears do (as do all geared mounts). The Paramount corrects for flexing OTAs (Optical Telescope Assembly, the tube) with TPoint software, the CGEM does not. The Paramount compensates for differential refraction, the CGEM does not. With the optional encoders the Paramount essentially has no periodic error, the CGEM does.

I am not putting down the CGEM; it is a fantastic mount for the money. My point here is that there is no perfect mount, so any review where one feature scores perfect is not a review you want to rely on. Even the Paramount with all the optional features and software doesn’t claim to be perfect!

To be fair, it was an excellent review, inflated scores aside.


Now let’s compare some mounts side by side...

Figure 7 Mount comparison by payload class.

I should first point out that mounts with an asterisk are not GoTo mounts; they are tracking only.

Next, there will undoubtedly be a lot of concern over my decision to place the
Celestron CG-5 in the 20-29lbs class when I have seen it rated for up to 35lbs
online, even on Celestron’s own website:

http://www.celestron.com/c3/support3/index.php?_m=knowledgebase&_a=viewarticle&nav=0&kbarticleid=2400

Honestly, I don’t know what its weight rating really is. What I do know is the
CG-5 is based on the EQ-5 mount, just like the Orion SkyView Pro and the
Skywatcher EQ5 Synscan, both of which are rated right at 20lbs. I put all
mounts based on the same basic design (in this case an EQ-5) in the same
weight class.

You, of course, can do things completely differently.

So now that you know which mounts are reasonably close to which other
mounts, what manufacturer do you choose? That is an extremely personal
choice although there are some considerations.
The Celestron CG-5 is the least expensive GoTo mount capable of supporting
a reasonable imaging payload and is a well-respected mount.

Celestron mounts come with a two year warranty, as do the iOptron IEQ30 & IEQ45; most others are one year.

Celestron was the only manufacturer that had an “all-star polar alignment” to let you do a polar alignment even when Polaris is not visible. Orion and Skywatcher mounts upgraded to the Synscan 3.32 firmware now have the same capability.

I believe the Orion and Skywatcher mounts are the only ones that interface with the fantastic mount control software EQMOD, and as you will see later in the book, I just don’t know what I would do without it.

Orion is very well known for excellent customer service should something go wrong. Unfortunately I have had to test this claim, and my service was excellent.

Skywatcher mounts are, as far as I can tell, only available outside the US,
specifically in Canada, Australia and Europe.

iOptron claims to have the highest payload to weight ratio, meaning that their
mount with a payload of 30lbs weighs less than anyone else’s.

Takahashi and Losmandy are both top tier mounts that I know very little about
except they are highly respected by serious astrophotographers and generally
quite expensive.

Keeping up with new technology means we now see mounts such as the AZ EQ6-GT from Skywatcher and the Atlas Pro from Orion. Both mounts meet or exceed the payload ratings of their lower brethren (the Skywatcher EQ6 and Orion Atlas respectively) while weighing less, and of course both are compatible with EQMOD. In addition they add a feature similar to Celestron’s all-star polar alignment and then go a bit further with dual axis encoders.

What are dual axis encoders, and why do you care? Once you align your telescope for the evening, you can loosen the clutches, swap scopes, rebalance, dance a jig, whatever you want, and then reengage the clutches and the scope knows exactly where it is. There is no need to realign the scope.
These are the first mounts outside of a professional observatory I have ever
seen do that internally from the factory. Talk about a game changer!

Another newcomer to the fray is the Celestron Advanced VX mount, announced January 7th, 2013. I have not been able to see one of these yet, but from the images it looks like an improved version of their CG-5 mount (no more coffee grinder?). In fact, if you look closely, the general shape of the mount, the placement and shape of the motor covers, and the use of an external cable from the RA to the DEC motor housing look oddly like the Orion SkyView Pro (SVP) layout, which just happens to be priced within $50 of this new mount as well. There are some notable differences: the SVP uses an external electronic controller between the hand controller and motors (designed as a bolt-on kit, since the SVP is available as an all-manual mount as well as full GoTo), and the Celestron mount uses 2” tripod legs instead of the SVP’s 1.75” (not much of a worry for AP, since none of the legs should be extended anyway, which adds a lot of stability).

Meade has a couple of new mounts in the LX-80 and LX-800, both of which
met with quite a few issues according to reports. They have since released an
updated LX-850 (announced January 9th, 2013) to replace the LX-800 which
hopefully has these issues fixed. I have heard very little about updates to the
LX-80 mounts but note that the LX-80 looks like the same basic platform as
the Orion Atlas Pro and Skywatcher AZ EQ6-GT so I imagine these issues
are, or will be soon resolved.

The last thing I want to touch on about EQ mounts is a service called hypertuning. Most mounts we have touched on are mass produced and as such
have certain “issues” that inhibit them from performing as well as they could.
Hypertuning service is available from vendors such as:

www.deepspaceproducts.com

This can improve the pointing accuracy and tracking capabilities of your mount. No, it will not increase your load capacity or do away with the need to guide. It can, however, turn an average-performing mount into an excellent one. Check out the provided website for more details.

A common issue I am seeing these days is people who want to buy a Dobsonian and then put it on an EQ mount. There are two basic scenarios for
this: the first is buying a large mount like an Atlas or larger and using scope
rings just like a large Newtonian. First off, the Dobsonian was never designed
to be mounted like this so the stresses could be too much and it could literally
fall apart. Next, you still have the issue of not being able to get the scope to
focus correctly. Lastly, this is a huge scope that will vibrate like a tuning fork,
and then if the wind blows at all, well you can imagine how bad that would
be.

The second scenario is to put the Dobsonian on what is called an EQ platform.


The first problem here is that these platforms are never anywhere near as
accurate as a real EQ mount. All of the “example photos” I have seen from the
platform manufacturers are either very short exposures (30 seconds) or a few
300 second exposures with probably 75% of the exposures thrown out. Their
quality is never what I am looking for.

The next problem with the platform is that they usually have a very small area
of movement. My EQ mounts can cover over ½ of the visible sky before
requiring a meridian flip, some of these platforms can only run on 15% of the
visible sky, period.

Once you do your homework and see what the specs on these platforms are, you will see they are for people who have a lot invested in their Dobsonian telescope and might want to dabble in AP. No serious astrophotographer I have ever met would consider using one for serious AP work.
1.4 The telescope

Next up is the scope. There are two basic types of scopes: reflector and
refractor. You can use either for AP work and the choice is usually a
personal one although most AP people I know use refractors.

Figure 8 Diagram of a refractor telescope.

Refractors can be good for AP work because they have no central obstruction, do not suffer from coma (an optical aberration in reflectors), do
not usually need to be collimated (have the mirrors aligned), require
virtually no cool down time, offer low wind resistance, have a higher Strehl
ratio by nature (practical optical quality in the real world as opposed to
theoretical), and a smaller area for their mass (this makes it easier for the
mount to drive them). The down side for a refractor is that inch for inch,
they are the most expensive type of telescope. They also generally have a
shorter focal length than SCTs (Schmidt Cassegrain Telescope) and longer
than imaging Newtonians, which is neither good nor bad, but is a
consideration. The figure above shows a refracting telescope. Note that the
light just passes straight through; this maximizes the amount of light
gathered per millimeter of aperture and minimizes problems inherent with
bouncing light all over the place as mentioned below in the section on
reflector telescopes.

For a more in depth discussion of optics and specifics of scopes, see:

www.brayebrookobservatory.org/BrayObsWebSite/HOMEPAGE/forum/choosing_a_first_telescope.pdf

Figure 9 On the left, a refracting telescope with a smaller refracting telescope mounted on top
as a guidescope. On the right, a reflecting Newtonian telescope with a refractor guidescope on it
as well. Both scopes are on EQ mounts.

Figure 10 Diagram of a Newtonian reflector telescope.

Reflectors can be good for AP work because they can easily offer longer
focal lengths or faster focal ratios and are less expensive per inch. Some can
also be much more compact than a refractor. Note that in the Newtonian
design, which is the most common reflector type, the light has to bounce off
of two mirrors that must be precisely aligned. The process of alignment is
called “collimation”, and needs to be done frequently. Also, the end of the
tube through which light enters the Newtonian design is open to the air,
which allows in dust, dirt, spiders, dew and other things we don’t want to
talk about.

I once heard a story of a scope pointed straight up when a bird landed on the spider vanes and ... relieved himself. Don’t be “that guy”. Keep your Newtonian covered!

Figure 11 Diagrams of a Schmidt Cassegrain telescope and a Maksutov Cassegrain telescope.

There are also hybrid scopes such as Maksutov-Cassegrains (MCT) and Schmidt-Cassegrains (SCT) which combine some of the qualities of both a refractor (being sealed and sometimes having lenses) and a reflector
(multiple mirrors). These typically have the advantage of being much
smaller than Newtonians, and are sealed against dust. They have the
disadvantages of being slow scopes (higher focal ratios), having to be
frequently collimated and having long cool down times (longer than
Newtonians since they are sealed).
Figure 12 An 8” SCT on a fork mount.

The fact that they are slow scopes can be mitigated by either using a focal
reducer (which can also be used on a refractor as well) or using a setup
called HyperStar which mounts the camera in front of the scope. Be careful
with HyperStar though, it massively increases your field of view (reduces
apparent magnification) and the camera can become quite an obstruction
reducing the amount of gathered light. The HyperStar setup can also be
quite expensive if your scope was not designed for it from the factory.
Another downside of HyperStar is that it reduces your focal ratio to about f2, which can be extremely challenging to focus as your depth of focus (the amount of area that is in focus) is very small.

All commercially available reflector types have a central obstruction; this is where the secondary mirror sits in the light path and blocks part of the incoming light. In some telescopes this obstruction can block as much as 34% or more of the light. Normally, the central obstruction does not manifest itself in the image as seen through the eyepiece (or on a camera), but it does reduce the telescope’s ability to gather light as well as decrease contrast in the image.
Figure 13 Telescope type properties.

The next question is always, which one is best? That is much like asking
what kind of vehicle is best. For some people, a pickup is better because
they are constantly hauling things. Others may need a minivan to put all the
kids in. Still others may need a small, light car for fuel economy. The chart
above is meant to give you general guidelines for your selection, but
remember, there are exceptions to every rule.

The old adage of “bigger is always better” or “aperture is king” is only part of the story in visual astronomy and is even less applicable in AP. My 4” refractor is regularly imaging the same targets as people who use an 8” SCT, and doing just as well or better. The key is getting a scope with the right combination of features that works well for you.

If I was planning on doing very widefield objects, a small 80mm refractor would be excellent. For very small objects something like an 8” SCT would be a great choice. Planetary might do very well with an MCT. Since I shoot mostly larger objects including nebulae, galaxies and clusters, I went with a refractor. Actually, I went with five refractors!

Once you pick the type of scope, there are other factors to consider. For
refractors you should pick an ED APO scope for imaging (ED is Extra-low
Dispersion, APO is Apochromatic, these make sure that all wavelengths of
light converge at exactly the same place preventing the blue/violet “glow”
you see on many bright objects through a telescope). You can even go one
step further and get an ED APO triplet which adds another glass lens to do
an even better job reducing those halos.

Telescopes are designed to focus at a specific distance that starts at the front of the telescope and is measured to the center of the eyepiece, which usually falls inside, or just behind, the focuser by a couple of millimeters. Cameras, on the other hand, have their sensors set back from the front of the camera by up to 40mm, and when coupled with the adapters needed to mount them on the focuser, the sensor can sit 60mm or more behind the end of the focuser. This is much further back than the plane of focus will allow, and hence the camera cannot be brought into focus.

For reflectors you need to make sure they are designed for astrophotography. The reason is that many reflectors (including most Dobsonian mounted Newtonians) cannot focus with a camera attached and need to have the primary mirror moved forward or, in some cases, a specialized low profile focuser installed. Moving the mirror forward is not at all an easy task, will certainly void your warranty, and should not be attempted except by a professional or very advanced amateur. Astrographs are a type of reflector telescope specifically made to be able to focus with a camera attached.

An interesting note is that when using a reflector you will need a coma corrector, and if you use the Baader Multi Purpose Coma Corrector (MPCC) it can reduce your back focus by 10mm. This may in fact be a solution for some Newtonians that will not quite come to focus.

The next thing to consider is the scope’s f-ratio (focal ratio). Scopes are usually listed as, for example, a 110mm f7 scope, which means that the opening on the front is 110mm (the aperture) and that it has a focal ratio of f7. We can calculate the focal length with this formula:

FocalLength = Aperture X FocalRatio

So 110 x 7 = 770mm. This is important because the lower the f-ratio, the shorter the exposures you can use; this is called a faster scope. So an f5 scope is faster than an f7 scope, which in turn is faster than an f10 or f12 scope. This also means a faster scope has a shorter focal length and less magnification (actually a larger field of view) than a slower scope of the same aperture.

Camera lenses and telescopes are measured the same way, but camera lenses usually have multiple f-stops, or apertures. If, for example, you had a 50mm focal length lens at f5.6, your effective aperture would be 50 = x * 5.6; using a little algebra we move things around to x = 50/5.6, so x is about 9mm of effective aperture.
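The arithmetic above is easy to sanity-check; here is a minimal sketch (the helper names are made up for illustration):

```python
def focal_length(aperture_mm, focal_ratio):
    # FocalLength = Aperture x FocalRatio
    return aperture_mm * focal_ratio

def effective_aperture(focal_length_mm, f_stop):
    # Rearranged: Aperture = FocalLength / FocalRatio
    return focal_length_mm / f_stop

print(focal_length(110, 7))                   # 770 (mm, for a 110mm f7 scope)
print(round(effective_aperture(50, 5.6), 1))  # 8.9 (mm, for a 50mm lens at f5.6)
```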

Other factors come into play with focal ratio: binning 2x2 (more on that later) effectively halves the focal ratio as far as exposure is concerned, without changing the field of view.

The focal length of the scope gives you the field of view. This matters
because, when combined with the size of your sensor in your camera, it tells
you how large a target will be in your pictures. A good way to see this is to
download Stellarium (Free, PC/Mac/Linux) from:

www.stellarium.org

Stellarium is a nice little planetarium program where you can input your
scope and camera specifications in the Oculars section, and take a look at
some targets to see how big they will look in your camera. We take a closer
look in section 2.2 including using the Oculars plugin.
Contrary to popular belief, you do not want maximum magnification for all
targets. My 110mm f7 scope with a Nikon D7000 almost perfectly frames
the Rosette nebula, but is far too much magnification for the North America
Nebula and far too little for shooting planets such as Mars, Jupiter and
Saturn. This of course all comes back to the fact that there is no perfect
scope for every use.
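If you want a quick estimate of framing without opening Stellarium, the field of view follows from the focal length and the sensor size. A small sketch (the 23.6 x 15.6mm dimensions are the approximate published APS-C sensor size of the D7000):

```python
import math

def fov_degrees(focal_length_mm, sensor_mm):
    """Field of view along one sensor dimension, in degrees."""
    return math.degrees(2 * math.atan(sensor_mm / (2 * focal_length_mm)))

# 110mm f7 scope (770mm focal length) with an APS-C sensor of
# roughly 23.6 x 15.6mm (approximate D7000 dimensions):
print(round(fov_degrees(770, 23.6), 2))  # ~1.76 degrees wide
print(round(fov_degrees(770, 15.6), 2))  # ~1.16 degrees tall
```

A field about 1.8 x 1.2 degrees is consistent with the Rosette Nebula (roughly 1.3 degrees across) nearly filling the frame, as described above.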

Here is where focal reducers and barlows come into play. A focal reducer
can reduce the focal length of your scope, which also reduces the expected
exposure times. Most focal reducers come in .5x, .6x and .8x ranges.
Barlows increase your focal length and increase the expected exposure
times in 1.5x, 2x, 3x and higher ranges. You can plug these into Stellarium
too to see the effects it will have on the image size of your targets. While
adding anything like this degrades image quality to some degree, I would
highly recommend you stay away from anything above a 2x barlow or
anything below a .65x focal reducer as they degrade image quality pretty
substantially.

The figure on the next page gives you a good idea of how both a focal reducer and a barlow work. The caveat with a focal reducer is that if you reduce too much, the light cone cannot completely cover the camera sensor. The barlow, on the other hand, with too much magnification stretches the image out more than the resolution of the telescope can handle and causes a blurry image.

Figure 14 The effects of a focal reducer.


Note in the image above that even though the amount of collected light
remains the same, its concentration on the camera sensor increases as you
use a smaller focal reducer, thereby changing your required exposure. In the
last example on the right, you can even see that this focal reducer does not
cover the entire sensor which will cause severe vignetting. You can reverse
this process to see how a barlow works.
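The effect on focal length and exposure can be sketched in a few lines. The squared-factor exposure rule below is a rough approximation for extended targets, all else being equal, and the helper names are my own:

```python
def with_optic(native_focal_mm, factor):
    """Effective focal length with a reducer (factor < 1) or barlow (> 1)."""
    return native_focal_mm * factor

def exposure_scale(factor):
    # For extended targets, required exposure scales roughly with the
    # square of the focal ratio, so a 0.8x reducer needs ~0.64x the time
    # and a 2x barlow ~4x the time (a rule of thumb, not an exact law).
    return factor ** 2

print(round(with_optic(770, 0.8), 1))   # 616.0 (mm with a 0.8x reducer)
print(round(exposure_scale(0.8), 2))    # 0.64
print(exposure_scale(2))                # 4
```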

One other important consideration with your choice of scopes is the focuser.
Many starter scopes come with a 1.25” focuser. This is a problem because if
you hook up a camera such as a DSLR to this you are likely to get some
vignetting of the image. Vignetting is where the center is nice and bright but
the outside edges are darker, especially the corners. It is much better to get a
2” focuser right off the bat. Many scopes designed just for astrophotography
have 2.5”, 3” and even 4” focusers to ensure this is not a problem. Since I
shoot a crop sized DSLR sensor, 2” is more than enough. You can expect some very slight vignetting using a DSLR with a 2” focuser, but nothing a couple of flat frames will not fix quite easily (more on these later). However a 1.25” focuser will lose too much light to fix well with flat frames.
Larger focusers also provide more stability to the image train with lots of
things like cameras, field flatteners, filters, etc attached.

Figure 15 Rack and pinion focuser on the left, Crayford on the right.

While we are discussing focusers there are two basic designs: rack and
pinion and Crayford. While there are advanced rack and pinion designs, for
all intents and purposes at this level you want the smooth adjustability of a
dual speed Crayford. Crayford focusers use very smooth bearings instead of
just geared teeth and allow very slight adjustments with the dual speed
feature. This makes it possible to really get the focusing exact which is very
important in AP.

For scopes you can expect to pay anywhere from $500 up just for the optical tube. My refractor is an Orion 110mm f7 ED APO Premium, which I believed to be the best bang for my buck when I bought it for about $1000.

While we are talking about different types of scopes, let’s talk scope
materials. There are generally two materials used in scope tubes: aluminum
and carbon fiber. Aluminum is cheaper, carbon is lighter and does not
expand/contract with temperature changes like metals do.

So why is this important? Since carbon weighs less, that means your mount
does not have to work as hard to keep it accurate and it also means you
could put more other equipment on board before reaching your maximum
payload. In addition, since it does not expand and contract with temperature
changes your focus will not tend to shift during the night like it can with
aluminum.

What’s that, you say? Scopes change focus? Yep! Metal expands when heated and contracts when cooled. Aluminum, for example, has a linear expansion coefficient of about 0.0000128 per degree Fahrenheit (0.000023 per degree Celsius), and you use this formula to calculate the change:

ChangeLength = InitialLength X LinearExpansionCoefficient X TemperatureDrop

where the change in length and the initial length are in inches and the temperature drop is in degrees Fahrenheit. So if you have a 900mm (about 35.4in) scope tube and the temperature drops from 60F to 30F during your imaging session, the tube will shrink by about 0.014 inches. This may not seem like much, but for focusing it certainly can be. The down side, of course, is that carbon fiber is more expensive.
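A minimal sketch of that calculation, using aluminum’s linear expansion coefficient of roughly 0.0000128 per degree Fahrenheit (equivalent to 0.000023 per degree Celsius):

```python
ALUMINUM_ALPHA_PER_F = 0.0000128  # linear expansion, per degree Fahrenheit

def contraction_inches(length_in, temp_drop_f, alpha=ALUMINUM_ALPHA_PER_F):
    # ChangeLength = InitialLength x Coefficient x TemperatureDrop
    return length_in * alpha * temp_drop_f

# A 900mm (~35.4in) aluminum tube cooling from 60F to 30F:
print(round(contraction_inches(900 / 25.4, 60 - 30), 4))  # ~0.0136 inches
```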
1.5 Basic setup

EQ mounts typically come in several pieces that need to be assembled. Those pieces are the tripod, the mount, and counterweights. To start with
we will need to level the tripod. Place the tripod on a flat level surface
without the legs extended. Now place a level on the top flat portion and
work at getting that piece as level as possible by adjusting the leg
extensions. I use a two axis level like this one:

Figure 16 Dual axis level being used to level the tripod

One word of caution: some mounts have a level built into them, and many of these are quite inaccurate. Before you rely on a built-in level, be absolutely sure it is correct by checking it against other levels.

You can use a single straight level but be sure you level on at least two axes
if not three. Once the mount is level make sure all the leg extensions are
fully locked and place the mount without counterweights on the tripod and
secure it. To make things much easier, use your altitude adjustment screws to set the mount to the lowest altitude you can while still allowing the mount to rotate freely with the counterweight bar fully extended.

Let’s look at EQ mounts, in particular, the Orion Sirius EQ-G (as shown).
The first thing we need to understand is the controls for declination and
right ascension. Note in the following image there are two light colored
levers:

Figure 17 Declination (top left) and right ascension (lower right) clutch levers.

The upper left lever is the declination; the lower right lever is the right
ascension. An easy way to remember what you are moving is that if the
weights move, you are moving right ascension, if the weights do not move,
you are moving declination.
The first thing we need to know is where our ‘home’ position is. Let’s start
by releasing the right ascension lock and rotating the mount head so that the
counterweight shaft is straight out to the left side as seen from behind. Now
place the level on the shaft and make sure it is completely level, then lock
the right ascension lock. Double check the counterweight shaft to make sure
it is still level.

Figure 18 Making sure the counterweight bar is level.

Next you need to find the right ascension clock ring on the rear of the
mount and unlock it so it freely rotates. For the northern hemisphere, set
this ring to 6 on the top scale of the ring if it has more than one scale. Now
unlock the right ascension lock and rotate the mount head until it reads 0,
then relock it. This is your RA home position. Now we need to mark this
with a scribe, tape, paint, magic marker, whatever. You need to have a line
that goes from the mount head to the rest of the mount so that you can
return to the RA home position easily and quickly. I used paint like this:

Figure 19 Line marking the right ascension home position.

With the mount in the locked RA home position it is time to find your
declination home position. Release your declination clutch and rotate the
head until it is roughly at 90 degrees. Place the level in the slot where your
dovetail would go facing left and right as seen from behind your mount like
this:
Figure 20 Making sure the declination axis is level.

Once the level shows level, release the lock on your declination clock ring
and set it to 0 degrees. Now rotate the mount head until it reads 90 degrees
paying particular attention to which side the locks for your scope dovetail
are on (on mine, the locks are on the right so I rotated the head until the
declination clock ring read 90 degrees and the locking screws for the
dovetail were on the right) and lock the declination clutch. Now you need to
make a line marking this as your declination home position.
Figure 21 Marking the declination home position.

The next problem we run into is our polar scope. On rare occasions the polar scope is aligned correctly; more often than not we have to adjust it. You can do this indoors on a rainy day. Place the scope as far from one wall as you can get it (down a hallway is great), but leave plenty of room behind it, because that is where you will be working.
Figure 22 Location of the polar scope on an EQ mount.

Still working without counterweights or telescope, make sure that your counterweight bar is fully extended and that the declination is turned to 0
degrees. Remove any covers for the polar scope and note the three allen
head screws around the circumference of the polar scope as shown in the
next image. Now look through the polar scope and make sure you can see a
wall, not a ceiling. If you cannot see a wall, you can prop the rear legs of
the tripod up on books or something to raise the rear enough to point the
polar scope at a wall. Now get a piece of paper and put a large (about the
size of a pea) dot in the center. You need to have the scope in the RA home
position and place the piece of paper on the wall in such a way that the dot
is right in the center of the polar scope crosshairs.

Unlock the right ascension clutch and slowly rotate the mount head until the
counterweight shaft is straight up in the air. If your polar scope is perfectly
lined up you will still see the dot right in the center of the crosshairs. If it is
not, you need to adjust the three screws until you bring it back HALF THE
DISTANCE to the center. Do this process slowly, carefully, and you may
never have to do it again.

DO NOT LOOSEN THE SCREWS MORE THAN 1/2 TURN AT A TIME OR THE GLASS FOR THE POLAR SCOPE WILL FALL OUT AND IT IS TIME CONSUMING TO GET IT BACK IN.
Figure 23 One of the three polar scope adjustment allen head screws.

Assuming that you made an adjustment, move the paper on the wall until the dot is again in the crosshairs and rotate the mount head back to home. It should still be right in the middle of the crosshairs. If not, repeat the process until you can rotate the mount head anywhere you like and the dot stays right in the center.

I will tell you I have never gotten this perfect, meaning no movement of the dot at all, but at no time does the dot stop touching the exact center of the crosshairs; there is just a slight wobble. Now make sure all three of the allen screws are snug, but do not over tighten them. They need to be tight enough to stop them from coming loose, but no more.

Now we will use the altitude bolts to dial in your approximate latitude. For example, my latitude is 30.48, so in theory I should set my altitude to 30½ or so. Unfortunately the stickers that mark this scale are rarely placed correctly, so all we are worried about is getting it in the ballpark. We will fine tune this later.

The next thing on our list is adjusting the telescope for cone error. We need
to start by understanding what cone error is. Let’s think of something silly
to get an understanding. In your mind (not in reality!) picture your scope
with the dovetail mounted except you put a 6” block of wood under the
front scope ring attachment to the dovetail. Yes, that’s right, the rear of the
scope will be a little lower than it is now but the front of the scope will be
pointing way up in the sky. Now imagine you rotate the mount on the RA
axis. This is cone error to the extreme! The scope will never be pointed
where it should be, and if you align on one star, when you slew to another
star it will be way off.

So how do we test for cone error? Simple! With the scope in the home position, release the RA clutch and rotate the scope until it is at the side and level with the counterweights. Now pick either a star or a faraway object like the top of a telephone pole, and using only the declination axis to move the nose of the scope up/down and the azimuth bolts on your mount to move the scope left/right, center the object in the middle of your eyepiece (this works MUCH better if you have an eyepiece with crosshairs in it). Once you have it centered, lock your azimuth bolts (by making sure both are tight) and lock
your declination clutch. Now release your RA clutch and rotate the scope
180 degrees so that it is on the other side of the mount level with the
counterweights and lock the RA clutch. Using ONLY your declination
adjustment find the point again. If you can line it up in the center of the
crosshairs using only declination then you have no cone error. If not, you
need to either use your cone error adjustment screws on either end of the
dovetail bar or install shims between your scope tube ring and the dovetail
bar. Just like when working with our polar scope you only want to correct
HALF the error. If you have an error make your adjustments and then
repeat the process until you have no more cone error.

Now you need to balance the scope. There are three axes you need to
balance on an EQ mount, the first is shown here:
Figure 24 Balancing the scope's declination axis.

In the image above I have turned the scope on its side by loosening the right
ascension release then locking it at 90 degrees. This will result in the scope
being on one side of the mount and the weights on the other side. I can then
loosen the declination release and pivot one end of the scope up and down
to see if it is balanced. To balance it, I can either loosen the scope ring
knobs and slide the scope towards the lighter end and recheck, or carefully
slide the entire assembly by loosening the dovetail clamps and sliding the
dovetail on the mount head (this is also how you would balance a SCT or
MCT that has no scope rings). Once finished, I tighten the scope ring knobs
or dovetail clamps back down and move to the front of the scope.

When using the dovetail method be sure not to loosen the clamp on the
dovetail while the scope is tilted in the position shown in these images as it
could fall out of the mount onto the ground and do a lot of damage to the
scope and your feet.
Figure 25 Balancing the scope's right ascension axis.

Now we need to balance the second axis just like we did the first one. We
start in the same position as shown on the previous page, then loosen the
right ascension lock and pivot the scope and weight up and down. You can
slide the weight left and right (as it appears in the figure above) until it
balances the scope. Lock everything down and return the scope to its home
position.
Figure 26 Balancing the nose of a reflector.

The third axis is just pointing the nose of the scope straight up in the air and
making sure that the nose does not tip one direction or the other. This is
primarily to see if you have too much weight strapped to one side or the
other such as on a Newtonian astrograph with the camera hanging off one
side. This is a critical step for Newtonians that many people miss and
failure to complete this setup can cause severe guiding problems.

In order to take long exposures, even with a great mount, you need to align
it correctly. EQ scopes need to be polar aligned, or aligned with the north
celestial pole (or south celestial pole as the case may be). This pole is very
close to the star Polaris in the northern sky for the northern hemisphere.
In fact, if you are doing visual work you can just point the mount towards
Polaris and be done. Unfortunately, with AP you need to be a little more accurate.

Polar alignment means two things: first, moving the azimuth bolts so the scope
moves left and right until it is perfectly in line with Polaris; second,
moving the altitude bolts to raise and lower the scope until that, too, is
perfectly in line with where Polaris needs to be in your polar scope.

Obviously your azimuth moves the mount left and right until you get it
lined up right with Polaris since you probably will not place the tripod
down exactly facing celestial north, but why exactly are you changing the
elevation if Polaris is more or less fixed in the sky? Good question! It is
because Polaris appears at different points in the sky depending on where
you are on the earth. Use this illustration to help you with the concept:
Figure 27 Different points of view on earth result in apparent changes to Polaris’ position in
space.

As you can see from the image above, a person near the North Pole will see
Polaris directly overhead while someone near the equator will see it very
low in the sky.

See the next two figures for examples of images shot with incorrect
alignments. Note that since it is an alignment problem the first figure,
which is the top left corner of the image, and the second figure, which is the
lower left, show the exact same problem: elongated stars in the same
direction.

Figure 28 Results of bad alignment

For this, you need to have the scope outside at night with a clear view of the
star Polaris (for the northern hemisphere). Again, get your mount level, then
point your mount roughly north. You will need a free piece of software
called Polar FinderScope (PC) by MyAstroImages.com. Download this and
enter your longitude and look at the charts it gives you. This will show you
where Polaris should be in your polar scope. Rotate your mount on the RA
axis which will rotate your polar scope until the little circle on the larger
circle is where Polar FinderScope says Polaris should be (in the next
figures, Polaris is in the upper right at about 45 degrees), then use your
altitude and azimuth bolts to put Polaris right in that little circle.

Double check where the constellations in the polar scope are versus where
they really are in the sky to make sure they match.
Figure 29 Illustration of Polar Finder software showing Polaris’ position (left) and the
matching view through a polar scope (right).

You are now polar aligned! Here we are at a crossroads. You can use your
hand controller to do an alignment or you can use other software. I will
assume you are going to use your hand controller for now and we will touch
on other methods a little later in the software section. Use your hand
controller to do a three star alignment now by selecting it on your hand
controller’s menu if it does not automatically come up after entering your
date, time and location information.

Just in case I have some readers south of the equator I am including a nice
little graphic that shows where to align your scope “down there”:
Figure 30 Illustration of the south celestial pole.

Keep in mind that for visual you could probably just point your scope right
at Sigma Octantis just like we can up north at Polaris, but you need to be
much more accurate for AP work. Unfortunately, Sigma Octantis is far
dimmer than Polaris so you really need to use the pointers.

We have now pretty well covered the basics of setting up an EQ mount, but
what about if we want to use a fork mount scope on an EQ wedge?

A wedge works by tilting the mount at an angle equal to our latitude, just
like the latitude/declination/altitude adjustments for our EQ mount. I should
mention here that not all fork mount GoTo telescopes will work with a
wedge; the on-board computer may not have an “equatorial mode” which is
required for alignment.

Figure 31 SCT telescope mounted on a wedge.

Just like on the EQ we have a declination or altitude scale, which is shown
here in the next image:
Figure 32 Declination setting of a wedge.

This can be adjusted with a knob or screw usually found on the rear of the
wedge:
Figure 33 Declination set screw.

Once we have our declination set to where we think it should be we can
check it by setting both the declination and right ascension of the scope to
the home positions and see if Polaris (or Sigma Octantis for the southern
hemisphere) shows up in the viewfinder. If not, we can fine tune it until it
shows up roughly where it should in the viewfinder.

You may note that we talked a lot about balancing EQ mounted setups but
haven’t mentioned it with fork mounted setups yet. Yes, they do make
weight kits for fork mounts but they are aftermarket only. I would look to:

www.scopestuff.com

for my SCT weight kits as they have a nice selection and very reasonable
prices. Balancing the scope is a lot like doing it on an EQ mount; you just
can’t rotate the scope to make up for off-axis weights like you can with a
Newtonian.

There are two ways to perfect our polar alignment and we will start with the
old school method of drift alignment. It seems that this method has
frightened a lot of people because they think it is overly complex, but it is
actually pretty simple albeit time consuming. Let’s get started.

Once the telescope is set up, aligned with the polar scope (if we have one)
and aligned with the computer or hand controller we need to find a star near
the meridian (the line that runs from north to south just overhead), north of
the celestial equator (the line that runs from east to west directly over the
earth’s equator). The star should be roughly 65 degrees or so in height and
no brighter than Polaris (magnitude 2) and no dimmer than around
magnitude 4.

At this point we need an illuminated reticle eyepiece that will provide
100-150x or more, which we can compute with the following formula:

Magnification = TelescopeFocalLength / EyepieceFocalLength

If the eyepiece gives you insufficient magnification you can use a Barlow.
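
As a concrete illustration, the formula above can be sketched in a few lines of Python. The function name and the example focal lengths are mine, not from the book:

```python
# Hypothetical helper illustrating the formula above:
# Magnification = TelescopeFocalLength / EyepieceFocalLength.
# An optional Barlow multiplier is included since the text mentions it.

def magnification(telescope_fl_mm, eyepiece_fl_mm, barlow=1.0):
    """Return the magnification for a telescope/eyepiece pair."""
    return (telescope_fl_mm * barlow) / eyepiece_fl_mm

# Example: a 1000mm scope with a 12.5mm illuminated reticle eyepiece.
print(magnification(1000, 12.5))       # 80x -- below the 100-150x target
print(magnification(1000, 12.5, 2.0))  # 160x with a 2x Barlow
```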

Now center the star in the eyepiece:


Figure 34 Centering a star for drift alignment.

Our next objective is to move the telescope using the hand controller one
direction and then another making sure that when we move the telescope
the star moves along one of the lines in our eyepiece exactly. Rotate the
eyepiece until this is done and then lock the eyepiece down. Make sure the
star moves exactly down the line all the way to the end.

Now put the star directly in the center of the eyepiece and let it track for
about five minutes.

First we want to worry about north/south drift and ignore any east/west
drift. As the star drifts away from center (assuming you do not have perfect
alignment) we need to adjust the azimuth knobs on the base of the mount. It
is worth noting that if you are using a Newtonian style reflector you need to
reverse these directions. If the star moves up (north) then adjust the azimuth
knobs so that the star moves down and then use the hand controller to
recenter the star and repeat the test until there is no up/down (north/south)
drift.
Now we need to select a second star somewhere between 20-30 degrees
high in the east about the same distance and direction from the celestial
equator as our last star.

We repeat the steps to make sure that the star moves directly down the lines
of our reticle when we move the scope north/south and east/west just like
before, rotating the eyepiece as needed.

Using the hand controller recenter the star exactly in the center and let it
drift until you clearly see it move off the line or for five minutes.

If the star moves up, adjust your altitude bolts to move it back down. If the
star moves down, adjust the altitude bolts to move it back up.

Recenter the star using the hand controller and try again until you have
adjusted the altitude bolts to keep the star right in the center.

The second method of perfecting our polar alignment is a better, faster and
easier way to do it. Unfortunately this will cost you roughly what you
would spend on a delivery pizza but it is well worth it.

You need a piece of software called AlignMaster (PC, $19) available from:

www.alignmaster.de

The developer offers a free 30-day trial so you can try before you buy. Download and
install the software and use it to perform your precise polar alignment. No,
seriously, this will save you a ton of time and get you just as accurate as
drift alignment. Unlike drift alignment it also tells you exactly how close
you are to perfect alignment.
Figure 35 AlignMaster main screen.

To use AlignMaster, all you do is input your exact longitude, latitude and
time offset, then click the ASCOM setup button and select your telescope
driver. Click Next and select a pair of stars to do the alignment, then
click Next again. Now you see a button that says Goto; click that to slew to the
first star and then use your hand controller/gamepad to center the star in the
exact center of your eyepiece/camera. Click the Next button and then click
the Goto button to slew to the second star. Again, use your hand
controller/gamepad to center the star in your eyepiece/camera and click
Next again. Now it will tell you how far out of alignment you are and
present you with a Next button. Click that Next button and a pop-up box
comes up saying “Align AZ?”; click Yes. You will see the scope slew a tiny
bit. Using only the azimuth adjustments on the scope base, move the star
back to center. Click Next and you will see another pop-up box saying
“Align ALT?”; click Yes. Now the scope slews a little and you need
to use only your altitude adjustments to recenter the star. At this point,
when you click Next it will ask you if you want to redo the procedure to
check your result; this is highly recommended and only takes a minute.

All done!
The last thing you may need to worry about when setting up a mount is
PEC, or Periodic Error Correction. Inside your mount are gears. These
gears, the gear train, and the balance of the mount (the actual mount, not
what you put on it) all contribute to the errors caused by the worm gear,
which form an eccentric circle, or wave-like pattern. This means the mount does not
track perfectly, but has a predictable oscillation for which you can correct to
get more accurate tracking/guiding.

What happens is that you use your guiding software to record a log file of
guiding errors and the actions the software took to make the corrections.
This log file is then put into another piece of software that reads the log
file, strips out things that are unrelated to the worm gear error, and creates
another file with just those errors. Lastly, either your guiding software or
mount control software reads this file and corrects automatically for these
errors, leaving the guiding system free to counteract other errors.
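
The idea of isolating the worm gear's repeating error from a guiding log can be sketched as follows. This is a simplified illustration with hypothetical names, not the actual PEC software (real tools also phase-align the log to the worm's index position):

```python
# Simplified sketch (not the book's software): fold the guiding-error log
# on the worm-gear period and average each phase bin. Errors that repeat
# with the worm period survive; everything else averages toward zero.

def periodic_error(samples, samples_per_worm_turn):
    """Return the average error at each phase of the worm cycle."""
    totals = [0.0] * samples_per_worm_turn
    counts = [0] * samples_per_worm_turn
    for i, err in enumerate(samples):
        phase = i % samples_per_worm_turn
        totals[phase] += err
        counts[phase] += 1
    return [t / c for t, c in zip(totals, counts) if c]

# Two worm turns of made-up data: a repeating wave plus one random glitch.
log = [0.5, 1.0, 0.5, 0.0, 0.5, 1.0, 0.9, 0.0]
print(periodic_error(log, 4))  # the glitch in the third bin is averaged down
```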

There has been some debate on whether using PEC is necessary when you
are guiding, and I can certainly understand both sides of the argument. I
believe it can be worth it because in a setup similar to mine EQMOD
(mount control software) will be correcting for the PEC error without the
guiding software ever knowing it happened, which leaves the guiding
software free to spend its time worrying about other things.

If you have a reflector telescope you may need to collimate it. Collimation
is the process of making sure all the light gathered by your
scope is correctly focused in your eyepiece or on your imager. Let’s start
with an overview on a Newtonian:
Figure 36 Laser collimation diagram on a Newtonian telescope tube.

As you can see above, the objective here is to make sure that the laser beam
from a laser collimator, once shone into the focuser, comes exactly back into
the focuser, which means the mirrors are lined up.

The first step in laser collimating your Newtonian is to make sure the laser
is on, inserted and centered in the eyepiece holder of the focuser and then to
make sure it is shining in the center of the circle in the middle of your
mirror (or on the central dot). In order to do this, you need to look down the
telescope tube.

START BY PLACING A PIECE OF PAPER IN FRONT
OF THE SCOPE AND LOOKING AT THE PAPER
BEFORE LOOKING DOWN THE SCOPE TUBE. IF
THE LASER HITS YOU OR SOMEONE ELSE IN THE
EYE IT CAN CAUSE PERMANENT BLINDNESS!

Once you are sure it is safe to do so, look down the end of the tube and you
should see something similar to this:
Figure 37 Laser shining in central circle during Newtonian collimation.

You can see in the previous image that the laser is shining almost perfectly
in the center of the circle marked on the mirror. If it is not, you need to
adjust the secondary using these screws:
Figure 38 Secondary collimation screws.

When adjusting the secondary mirror you need to move the screws in very
small motions, about a quarter turn at a time. When you tighten one, loosen
the other two. Do this slowly and methodically until you get the laser dot
right in the center of the circle or dot on the primary mirror.

Once the secondary is aligned you need to look at the laser collimator as it
has a target on it and you should hopefully now see a laser dot on that target
like this:
Figure 39 Collimator target showing returning laser location.

In the above image we can see the laser is right on the center dot. If it is not,
we need to adjust the primary mirror cell on the back of the scope. Be sure
that the target you see in the above image is pointed towards the back of the
scope so you can see it while adjusting the primary mirror cell:
Figure 40 Rear view of the primary mirror cell and adjustment screws.

Just like we did with the secondary adjustments we need to tighten one
screw about a quarter of a turn and then loosen the other two, check the
collimator target and continue adjusting if necessary.

SCT collimation is usually done a little differently with what is called a star
test. The basic theory is that you get a fairly bright star in the center of your
field of view and then defocus it (intentionally make it blurry). Then you
look at the pattern it makes. For example, look at the following figure; the
scope shows improved collimation from left to right, with the rightmost
image being the best:
Figure 41 SCT collimation examples.

What you are aiming for is for the central dark area to be round and
exactly in the center of the round outer circle. See how the first image looks
more like a teardrop than a circle; the second is starting to look circular, but
the center portion is not circular or centered; the third image is fairly close,
but the central circle is still too far to the left and is not really round. The
fourth is the best centered and has the best circular pattern in it.

Since I do not have an SCT and rarely use one, these were the best images I
could come up with, but if you use a reasonably high power eyepiece when
doing this you can actually get very nice ring patterns from the outer edge
towards the center.

In order to make the adjustments, you do exactly what you did on the
Newtonian when you adjusted the secondary. The front of the SCT has
three screws (which may be covered by a cap) that you adjust just like the
Newtonian, ¼ turn at a time, screwing one in and two out at a time.
Figure 42 SCT Collimation screws on the rear of the secondary mirror.
1.6 Guide scope & guiding

Next up is guiding. Long exposure work requires that the telescope follow
the stars exactly. If it does not, you get odd star shapes instead of round
ones, or streaks, or other strange artifacts. For this we use an
autoguider which watches a star and sends minute corrections to the mount
computer to make sure it is dead on accurate. One thing to note is that the
larger the field of view a telescope has (smaller f-ratio, less magnification)
the easier and more forgiving the guiding is.

Let’s start with why telescopes do not track accurately. The first reason is
that they are based on mass produced mechanical gears which are far from
perfect. When the gears are ever so slightly out of round, mismatched, or not
meshed perfectly, this can cause periodic errors, which we touched on in the
previous section.

Of course you may not have absolutely perfect polar alignment which keeps
the mount from tracking correctly, and we covered that in the previous
section as well.

Lastly we have the atmosphere itself. As the telescope tracks a target across
the sky the amount of atmosphere you are shooting through changes, and
this changes the amount of refraction and therefore changes the apparent
location of the target (discussed in detail in section 3.4).

Even if you could adjust out all the periodic error, and get a perfect polar
alignment, you could not correct for the atmospheric effects and so we must
guide.

Now we need an understanding of guiding accuracy as it is one place I have
seen a lot of misconceptions.

To find the pixel accuracy of a given setup for guiding you use this formula:
Resolution = (206.265 X PixelSize) / FocalLength

So in my case, for my main scope (110mm aperture, 770mm focal length), it works out to:

(206.265 X 4.67) / 770 = 1.25 ArcSec/Pix

And for my guidescope (80mm, 400mm FL):

(206.265 X 5.2) / 400 = 2.68 ArcSec/Pix

This means that when my guider is off 1 pixel, it moves the mount 2.68 arc-
seconds. This in turn moves my main camera about 2.1 pixels (2.68 / 1.25).
My guiding is pretty accurate with all the tweaks I have done so being off
more than a pixel is rare, which means my main scope normally moves about
2 pixels or less at any given time.

Decreasing the focal length of my guidescope (for example using the Orion
Mini Guider) would increase the difference to 5.17 pixels and cause a huge
loss of detail and much larger stars. The trick is to keep the error difference
down to a couple of pixels if you can and to balance that with the amount of
light gathered by the diameter of your guidescope.

This formula is highly dependent on your camera, telescope, guidescope,
and guidecamera so what works for me may not for you. Be sure to run the
formula and keep the difference between your guidescope and main scope to
a minimum.
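
Running these numbers by hand gets old quickly, so here is the same arithmetic as a small Python sketch. The function name is mine; the pixel sizes and focal lengths are the example figures from the text:

```python
# Sketch of the resolution formula from the text:
# Resolution = (206.265 X PixelSize) / FocalLength, in arc-sec per pixel,
# with pixel size in microns and focal length in millimeters.

def arcsec_per_pixel(pixel_size_um, focal_length_mm):
    return (206.265 * pixel_size_um) / focal_length_mm

main = arcsec_per_pixel(4.67, 770)  # main scope: about 1.25 arc-sec/pixel
guide = arcsec_per_pixel(5.2, 400)  # guidescope: about 2.68 arc-sec/pixel

# How many main-camera pixels does a one-pixel guider error move you?
# Keeping this ratio around 2 or less is the goal discussed in the text.
print(round(guide / main, 2))
```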

When we talk about the image moving on the camera sensor by a certain
number of pixels, keep in mind that this could be in multiple directions. For
instance, a star that should be one pixel on the sensor, but that can move one
pixel in any direction, can in effect become a nine-pixel blob.
Figure 43 How a one pixel movement can cover nine pixels.

Look at the image above, note how starting at the center and moving one
square in any direction and then returning to the center you can over time
move to all nine boxes.

Now think about being able to move two pixels (squares) in any direction,
that would be a five by five block of squares or twenty five squares in all.

The fewer pixels you move, the sharper the image will be; it is
that simple.
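
The blob arithmetic above generalizes easily: a star that can wander up to n pixels in any direction can smear across a (2n + 1) by (2n + 1) block. A tiny sketch of my own, not from the book:

```python
# How many pixels can a star smear across if guiding lets it wander
# up to max_drift pixels in any direction from center?

def blur_blob_pixels(max_drift):
    side = 2 * max_drift + 1  # drift left + center + drift right
    return side * side

print(blur_blob_pixels(1))  # 9  -- the three-by-three case in the figure
print(blur_blob_pixels(2))  # 25 -- the five-by-five case described above
```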

Next we come to guide cameras, which have a lot to do with all of this
because it is the pixel size of the guidecamera that helps dictate the focal
length of our guidescope and its accuracy. A good general purpose
guidecamera is the Orion Starshoot Autoguider and indeed it is one of the
most popular cameras out there. Another choice is the Orion G3
monochrome as it would provide more possible guide targets since it is
cooled and sees less noise. Some of the more popular choices are listed here:
Figure 44 Comparison of guide camera models.

Again, this is only a sampling and there are other alternatives such as a
modified webcam. I have no problem with cheap homemade items like the
homemade webcam, as long as the parts + reasonable labor do not exceed
what I would have spent on an actual guidecamera and provide equal results.
In this situation I do not believe it does provide equal results so I run the
Orion SSAG.

Figure 45 Orion StarShoot autoguider camera.

At this point you are probably thinking that since accuracy is so important,
you need to get as long a focal length guidescope as possible, couple that with
the smallest pixel size and highest resolution guide camera possible, and
use that. Yes and no.
It, like everything else, is a tradeoff. A longer focal length guidescope means
fewer possible stars for guiding because you are limiting the field of view.
To correct that you would need a larger aperture guide scope, which means
more weight. More weight means the whole setup is harder to guide. Oy
vey!

My opinion is that you should get a guide scope and guide camera
combination that gets you close to two pixels or less of movement on the
main scope when the guide scope moves one pixel. The logic behind this is
simply due to how much motion it takes to look noticeably blurry on an
image sensor with 12-18MP of density, which is what I use. This should be
reduced closer to 1:1 on lower resolution cameras, or could be increased on
higher resolution models. The result is maximum light gathering, minimal
weight addition, and sharp images.

There are actually two ways to mount an autoguider camera. You can use an
“off axis” mount which basically splits the light from the scope into two
paths, one for the camera and another for the guider. I prefer the second
method which mounts a second telescope onto the main scope for the guider.
Look at the next figure to see what this setup looks like. This allows me to
have a wider field of view for the guider so I have more guide stars, and also
allows me to do whatever I want to the optical path without having to worry
about the guiding.
Figure 46 The Orion Awesome Autoguider Package mounted on top of a refractor.

Orion sells the SSAG in two packages like the ones I recommend: the mini
autoguider package and the awesome autoguider package (shown above).
The difference is the scope size. I run the awesome package so I get the
larger 80mm guide scope which has a longer focal length and is therefore
more accurate. These two guider packages run $349 and $399 respectively.
If you choose to go with an off axis solution, that runs about $409.

Many times people ask if you can use other cameras for guide cameras such
as a webcam or DSLR. Webcams certainly can be used and are far cheaper
than an actual guide camera although they do not have features such as built
in ST4 ports or binning, and have reduced sensitivity, etc. My personal
opinion is that if you are even semi-serious about AP you should not use a
webcam.

As for using a DSLR, that is a really bad idea. Leaving the shutter open and
the sensor active in live view mode for hours on end seems like a great way
to burn out your camera sensor quickly. Most cameras have a “thermal
shutdown” after so many minutes to help prevent this, which of course
means after a few minutes of guiding the camera shuts down and you lose
guiding. Next, when a camera is in live view mode, there is a slight delay in
what appears on the screen. This ruins any chance of accuracy in
autoguiding. Also, there are no drivers that I know of which enable the use
of DSLR cameras with guiding programs such as PHD or Guide Dog so you
are out of luck anyway.

We now come to guiding software and the de-facto standard, PHD by Stark
Labs (freeware, PC/Mac/Linux). Since it is the standard most people seem to
use, it is free, it works with just about any camera you would ever want to
use, and is supported by almost every piece of software that interfaces with
guiding, I saw no reason to explore other options. Download it, install it, use
it.

If for some weird reason PHD does not work well for you a couple of other
options are Guide Dog and Meta Guide which are both free as well.
Figure 47 PHD main screen.

Guiding software, such as PHD, includes tools for improving guiding. For
example, PHD can display a graph that shows how much the mount is
moving and in what direction (see the next figure), which is handy for
providing feedback when tweaking the guiding settings. As a general rule
you want the two numbers on the far lower left (Osc-Index and RMS) to be
as low as possible to a point and the graph to be as smooth as possible.
Figure 48 A graph showing the guiding characteristics of a mount using PHD Guiding.

At this point we are not going to discuss all the details of guiding—that is
presented later in the book. I just wanted you to know you can tweak it if
you need to.
1.7 The camera

The next concern is cameras. If you are just starting out I recommend you
use a DSLR such as the Canon Rebels or Nikon D5100/7000. If you already
have a DSLR, use it, even if it is a Pentax, Sony, Fuji, whatever. You will
need software to control the camera and Canon seems to have the widest
array of that essential software. You can, however, use pretty much any of
them. I, for example, use a Nikon D7000 and it does a fantastic job and I
have many software options for it. My images are a reasonable example of
what you can accomplish with Nikon equipment.

Let’s clear up some issues before we get in too deep. Can you use a point
and shoot (camera without a removable lens)? Sure, but your results will
suffer greatly. All that extra glass that was never meant to focus through an
eyepiece or another objective degrades the image horribly.

How about one of the newfangled mirrorless cameras (MLCs)? Sure,
assuming you can find a T-Ring adapter for that camera, software that will
run it and download images from it, a camera that can shoot RAW and
software that can decode the RAW it shoots. Oh, and unless it is a Micro
4/3 your sensor will be so tiny that its sensitivity and noise generation will
be horrible.

I have a few suggestions for choosing a DSLR. Get one that is at least
10MP, one that you feel comfortable using (feel it in your hand, check out
the controls, play with it and see what you do/don’t like about each one,
especially if it will be used for daytime photos as well), and silly as it may
seem, you might get one that is at least partially weather sealed (the Nikon
D7000 and Canon 7D are).

Why weather sealing? One word: dew. If you choose the D7000 or
something comparable, expect to pay $900-$1500 for the body alone. Now
some people will scream and say you can get a much cheaper body that will
do just fine, and it will, but it is also possible that the dew forms on it, the
scope rotates, and the runoff seeps into the camera shorting it out. Is this
likely? Probably not. I just prefer not to take that chance.

What is dew? The amount of water vapor the air can hold is
primarily dependent on the temperature. Warmer air holds
more water vapor than colder air. As the outside temperature
falls, objects such as metal and glass can actually radiate heat
faster than the surrounding air and thus become cooler than the
air around them. When this happens the air in contact with the
object is colder than the ambient temperature and releases its
water vapor which collects on the object.
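
To put a number on the sidebar's explanation, the temperature at which dew forms (the dew point) can be estimated with the Magnus approximation. This is standard meteorology, not something from this book, and the coefficients are one commonly published parameter set:

```python
# Magnus approximation for dew point (a standard meteorological formula,
# not from the book). When your scope or camera radiates down to this
# temperature, dew starts to condense on it.
import math

def dew_point_c(temp_c, relative_humidity_pct):
    """Approximate dew point in Celsius for the given air temperature
    and relative humidity (Magnus coefficients b=17.62, c=243.12)."""
    b, c = 17.62, 243.12
    gamma = math.log(relative_humidity_pct / 100.0) + (b * temp_c) / (c + temp_c)
    return (c * gamma) / (b - gamma)

# A mild evening: 15 C air at 80% humidity dews at roughly 11-12 C.
print(round(dew_point_c(15.0, 80.0), 1))
```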

You can, of course, wrap a towel around the camera to absorb the dew but
this also can trap heat in your camera which is the mortal enemy of your
images.

Another important feature is called LiveView. This is the ability to view a
video image of what the camera sees in real time, either on the screen on
the back of the camera or in a program on a connected computer. This can
make initial focusing much easier.

Next is the ability to shoot in RAW. Not to worry, virtually all DSLRs have
this capability. A RAW file is pretty much what it sounds like, the raw data
from the camera’s sensor is written to a file with no adjustments, no
corrections, and above all, no compression.

JPEGs (or JPGs) that your camera creates have all of those applied:
adjustments, corrections and compression. They can technically work, but
because they are manipulated in camera and compressed they will always
give far inferior results to RAW images.

One thing you notice I did not mention with cameras is their maximum
ISO. Having higher ISO performance is good, but not because you want to
use it. My D7000 captures outstanding daylight or nighttime images at
ISO1600, and quite good images at ISO3200. ISO6400 is usable in a pinch.
The problem is, you will be stretching the image so you need the dynamic
range (dynamic range and more is discussed in detail a little later) offered
by the lower ISOs which is why the vast majority of my imaging is at
ISO800. If I need more light I just extend the exposure time.

What is ISO? Your camera’s ISO setting is just an
amplification of the amount of light it has already captured
during the exposure. Increasing the ISO does not capture
additional light. In fact, it increases the noise almost as much
as it increases the apparent light it captures.

Eventually you will hear of people “modding” their DSLRs for better
response to red. Let me first dispel a myth about DSLRs. Some people say
that they are completely insensitive to deep red wavelengths such as
Hydrogen Alpha (Ha for short). This is absolutely not true. (See the next
figure.)

Ha emits light at 656nm (Ha is the primary emission type from Hydrogen,
the most common element in the Universe, and a primary component in
most nebulae) in the visible spectrum and a standard DSLR will capture it
just as well as anything else in the visible spectrum. The problem comes
because there are some very dim Ha nebulas which are just too dim to do
easily with a standard DSLR. You can then “mod” your DSLR and have the
UV/IR filter removed which has the benefit of increasing the red sensitivity
substantially.

Keep in mind that when you do this, it massively increases the red in your
images, making everything very, very red (including things that should not
be red) and totally useless for normal photography without additional filters
or heavily tweaking the custom white balance feature of your camera. Both
Nikon and Canon cameras can be modded. I am currently still using an
unmodded camera.
Figure 49 NGC2244 shot using a Hydrogen Alpha filter and an unmodded DSLR with the same
exposure settings as would be used without the filter.

You could also consider a dedicated CCD imaging camera built just for
astrophotography. The down side here is that you can’t use this kind of
camera for anything but astrophotography. The advantages are that they are
usually internally cooled which can dramatically reduce noise, they are
more sensitive to the full spectrum of light so there is no need for
“modding”, they can have filter wheels built in, and they can have guiders
built in. They can also get very expensive in a big hurry. A basic high
resolution CCD for astrophotography can start at $1500 with no filters and
no guider built in.

When looking at a CCD instead of a DSLR you need to consider many
things. First, older cameras that may seem like a good deal on auction
websites may not be. The older models used parallel ports to transfer
data, and no current software supports them on any computer made in the
past ten years. Modern cameras all use USB.

There are many differences between cameras, starting with whether you
want to shoot mono or color. Assuming both cameras are 8MP, the mono
camera has the higher effective resolution (we discuss why a little later
when we talk about the Bayer matrix, but suffice it to say mono is
approximately four times the resolution). Mono is more of a pain because
if you want color images you have to shoot at least three sets of images
(red, blue and green) and combine them later. Most people shoot four
sets: red, blue, green and luminance (more on that later too).

Mono cameras can also end up costing more, even at the same sticker price.
Why? Because to create a color image with a mono camera you will need to
purchase colored filters, and these are much more expensive than you might
think.

CCDs, however, are cooled, which means far less noise and greater detail
with the same or less work than DSLRs. Remember, CCDs were made to do
what you want; DSLRs were not. Even the Canon 60Da, which some say is
“for astrophotography”, will never approach what a CCD can do.

This is once again a choice of what kind of quality you want versus how
much you want to spend and how much time you want to put into acquiring
images.

Now selecting a CCD gets a little more involved. Since CCDs are typically
for more advanced imagers it stands to reason you want the best match for
your equipment you can get. To get that, we need to decide several things.

The seeing conditions where you will image can help us find the right pixel
size. If for example you regularly image from a location which has seeing
as poor as 4 arc-sec then you don’t need a camera with photosites (what the
actual sensors in the camera that record each individual pixel are called) as
small as if you shoot from a location with 2 arc-sec seeing.

Now of course you need to measure your seeing conditions and this is done
using the Full-Width Half-Maximum method (FWHM for short) measured
in arc-sec. Many programs have the capability to do this including Images
Plus. If this is your first camera get someone who shoots at the same
location or close by to record the average for you and use that number. If
you cannot get this information I suggest you use the number 2.5 arc-sec.

Once you have the FWHM number you can use that with this formula:

Resolution = (206.265 x PixelSize) / FocalLength

Where the resolution of your scope is in arc-sec, the pixel size of your
camera is in microns, and the focal length of your scope is in millimeters.

Let’s take my primary imaging setup for example and plug in those values
(Nikon D7000, Orion Premium 110mm f7 ED APO):

(206.265 x 4.78) / 770 = 1.28 arc-sec

Now compare the 1.28 arc-sec I computed from the calculation to the
suggested FWHM seeing number of 2.5 arc-sec and you can see that my
setup clearly exceeds the maximum resolution of anything I will shoot from
my primary imaging location.

So is that it? Not hardly. According to the Nyquist theorem, you should
sample at double the peak frequency. Translated for our use this means we
need to take an image (sample) at twice the resolution the seeing allows,
which in arc-sec means half the FWHM value. Half of 2.5 arc-sec is 1.25
arc-sec, so it just so happens that my computed maximum resolution above
is almost exactly right for imaging from a location with a FWHM seeing of
2.5!

Realistically this means if your seeing conditions have a 2.5 FWHM you
should look for a camera whose pixel size coupled with your focal length
telescope provides a resolution somewhere between 2.5 and 1.28 arc-sec. A
higher number and you lose detail in your images, a lower number and you
are working too hard for something you are unlikely to ever get.
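The arithmetic above is easy to wrap in a quick sanity check. This is a sketch: the function names are mine, and the Nikon D7000 pixel pitch of roughly 4.78 microns is my assumption; the formula and the seeing numbers come from the text.

```python
def image_scale(pixel_size_um, focal_length_mm):
    """Image scale in arc-sec per pixel: (206.265 x PixelSize) / FocalLength."""
    return 206.265 * pixel_size_um / focal_length_mm

def well_matched(scale_arcsec, seeing_fwhm_arcsec):
    """Rule of thumb from the text: the scale should fall between half the
    seeing FWHM (the Nyquist limit) and the seeing FWHM itself."""
    return seeing_fwhm_arcsec / 2 <= scale_arcsec <= seeing_fwhm_arcsec

# Nikon D7000 (~4.78 micron pixels, my assumption) on a 770mm scope:
scale = image_scale(4.78, 770)
print(round(scale, 2))            # ~1.28 arc-sec per pixel
print(well_matched(scale, 2.5))   # True: between 1.25 and 2.5
```

Swap in your own pixel size and focal length to see whether a camera and scope pairing over- or under-samples your seeing.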

So why not just get the highest resolution (lowest result from the formula)
you can and be done with it? You certainly can, but more resolution means
larger files, longer stacking times, more critical guiding, and the photosites
are prone to be less sensitive because as a general rule the smaller they are
the less sensitive they are to incoming light.

The next thing to look for with a CCD is how it is cooled. There are two
general families here, one where it cools x degrees below ambient all the
time, and one where you can set the cooling to x degrees. The latter is
preferred so that you can match darks to lights, maintain a constant
temperature all night as the ambient temperature falls, and match lights
from different nights together.

Now we come to dynamic range. As we will discover later on in the book,
the dynamic range is extremely important in astrophotography. Higher
dynamic ranges mean we can pull more detail out of our images. To
calculate the dynamic range we need to use this formula:
dynamic ranges mean we can pull more detail out of our images. To
calculate the dynamic range we need to use this formula:

DynamicRange = FullWellCapacity / ReadNoise

Let’s try this math with an ATIK 383L+ CCD camera:

25,000/7 = 3571

So in our example the camera has a dynamic range of 3571:1. I consider
this an average good result. Some CCDs will be higher and more expensive,
some will be lower and less expensive.
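The same calculation in code, as a minimal sketch; the conversion to photographic stops via log2 is a standard extra, not from the text.

```python
import math

def dynamic_range(full_well_e, read_noise_e):
    """Dynamic range as a ratio: full well capacity / read noise (electrons)."""
    return full_well_e / read_noise_e

# ATIK 383L+ figures quoted above:
ratio = dynamic_range(25000, 7)
print(f"{round(ratio)}:1")         # 3571:1
print(round(math.log2(ratio), 1))  # ~11.8 stops of dynamic range
```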

Next up is binning which is the ability of the camera to take a set of pixels
and make a super pixel out of them. For example, a 2x2 binning means you
take a square that is two pixels high by two pixels wide (four total pixels)
and combine them. As you would expect this reduces your resolution
(divides by four), but you might not suspect it also increases the camera’s
sensitivity (multiplies by four) and increases the SNR (Signal to Noise
Ratio, discussed in detail in section 2.5) by having only one read noise
instead of four.

Binning 3x3 takes nine pixels (three high by three wide, nine total) and
combines them into one super pixel. Binning 1x1 does nothing.

I should also mention that you can bin in software as well, and it will
increase your sensitivity and decrease your resolution just as if it was done
in hardware. The difference is in read noise. A hardware solution will have
only one read noise per super pixel, while a software solution using 2x2
binning will have four read noises per super pixel. This means hardware
binning boosts your SNR as well, while software binning does not. So
although you do gain sensitivity from binning in software, I suggest you
do it in hardware instead.
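A minimal sketch of software 2x2 binning (pure Python for clarity; real capture software works on the raw sensor data and, as noted above, cannot undo the four separate read noises):

```python
def bin2x2(frame):
    """Sum each 2x2 block of pixels into one super pixel: resolution drops
    by 4x while the collected signal per output pixel rises by about 4x."""
    h = len(frame) - len(frame) % 2        # trim odd rows/columns
    w = len(frame[0]) - len(frame[0]) % 2
    return [[frame[r][c] + frame[r][c + 1] + frame[r + 1][c] + frame[r + 1][c + 1]
             for c in range(0, w, 2)]
            for r in range(0, h, 2)]

frame = [[0, 1, 2, 3],
         [4, 5, 6, 7],
         [8, 9, 10, 11],
         [12, 13, 14, 15]]
print(bin2x2(frame))   # [[10, 18], [42, 50]]
```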
Up next is a CCD comparison chart. I want to be very clear here when I say
this chart does not contain all the information you need to make an
informed decision on which CCD to purchase. It also only contains a small
handful of the CCDs out there you can choose from. The chart is to give
you a starting point and to visually show you that some things that seem
trivial can really make a huge price difference.

As an example, look at the SBIG ST-8300M and the SBIG STT-1603ME.
Note that the 8300 has a much higher resolution, smaller pixel size, weighs
less, has better anti-blooming and is still cheaper than the 1603. Why is
that? Take a little look at the Full Well Capacity, 25,500 versus 100,000.
The 1603 has almost four times the well depth! As a general rule this means
it can capture a wider dynamic range than the 8300 can (the 1603 has a read
noise of about 15, so using our previous formula the 1603 would have twice
the dynamic range of the 8300).

Many cameras in the following chart also have options for built in guiding
cameras, built in filter wheels and more. Be sure when you are comparing
models in catalogs and online that both have the same additional options.
Figure 50 CCD comparison chart.
Regardless of the type of camera you choose to use you will need to
connect it to your scope at some point. Most CCD cameras have a “snout”
that fits into your scope just as an eyepiece would. As a general rule, you
never attach a camera to a diagonal unless there is a specific need to do so.

For a DSLR you will need a T-Ring, which attaches to your camera, and an
adapter which connects your T-Ring to the scope. This adapter may be a
prime focus adapter or a universal adapter. The difference is that a prime
focus adapter allows a clear path from the telescope directly onto the
sensor of your camera, while the universal typically allows you to put an
eyepiece inside the adapter to accomplish eyepiece projection.

Figure 51 Camera, T-Ring and 2” prime focus adapter.

For DSLR work the vast majority of people use the standard prime focus
adapter. This adapter has a snout on one end that slips into the focuser of
your scope while the other end threads onto the T-Ring. The T-Ring then
attaches to your camera just like a lens would. Be careful, the T-Ring will
often go on in several different positions, only one of which will lock in
place. Make absolutely sure your T-Ring is fully locked on to the camera
before you let go of it.

I have found only one reason for doing eyepiece projection. That would be
if I were to use a telescope which was not designed for AP (such as a
standard Newtonian or a Coronado PST). Such a telescope may not achieve
focus with a prime focus adapter but might be able to using eyepiece
projection.

The downside to eyepiece projection is that the resulting image is
obviously inferior to what a prime focus system would produce. The
eyepiece was never designed to project an image onto a camera sensor. It
will however sometimes work, especially with something in the 20-26mm
eyepiece range.
Note that with these adapters you will be forced into using a physically very
small eyepiece, usually a Plossl design. Use a very nice Tele-Vue Plossl
($120 for a 25mm) or at least an Orion HighLight Plossl ($60 for a 25mm)
for best results.

Figure 52 Camera, 25mm Plossl eyepiece and eyepiece projection adapter.

Another little trick for scopes that will not come into focus is to use a
Barlow. This will reduce the field of view quite a bit but it can indeed get
images when nothing else will work. Again, when imaging, optical quality
is very important: a $125 Tele-Vue 2x barlow will far outperform a
standard $45 2x barlow.

If none of this appeals to you and you cannot get your camera into focus
you can try a low profile prime focus adapter, or replace your focuser with a
low profile focuser.
The last thing I want to touch on, but by no means the least important, is
how the camera’s sensor relates to the field of view.

Larger sensors are like larger numbers on your eyepieces, they show a
larger area of the sky. Smaller sensors are like smaller numbers on your
eyepieces, showing a smaller area of the sky. For example, using a DSLR
with a crop sensor (such as an APS-C size, most low to midrange DSLRs)
you are “zoomed in” to your target more than if you use a full frame sensor
(found in most high end DSLRs).

Smaller fields of view, more magnification, more zoomed in, whatever you
want to call it can be useful for smaller targets but make larger targets much
more difficult to image. What I suggest is that you use a free program called
Stellarium and the plugin that comes with it called Oculars. This will allow
you to input your telescope focal length and sensor size to see what
different targets would look like with that particular field of view. I have an
introduction to Stellarium including the Oculars plugin in section 2.2.

Using a couple of example cameras we have already talked about, the ATIK
383L+ has a sensor size of 17.6mm x 13.53 mm while the Nikon D7000 has
a sensor size of 23.6mm x 15.6mm. As you can see the Nikon’s sensor is
larger, this would result in a larger field of view (sees more of the sky, less
magnification) than the ATIK camera provides.
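To put numbers on this, the standard small-angle field of view formula (not given in the text) is FOV = sensor size / focal length, in radians. A sketch, assuming the same 770mm focal length used in the earlier example:

```python
import math

def fov_arcmin(sensor_mm, focal_length_mm):
    """Field of view along one sensor axis, in arc-minutes (small-angle approx)."""
    return math.degrees(sensor_mm / focal_length_mm) * 60

# Long axis of each sensor on a 770mm focal length scope:
print(round(fov_arcmin(23.6, 770), 1))   # Nikon D7000: ~105.4 arc-min
print(round(fov_arcmin(17.6, 770), 1))   # ATIK 383L+: ~78.6 arc-min
```

The larger Nikon sensor sees a noticeably wider swath of sky on the same scope, exactly as described above.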

Let’s assume you have a DSLR or are going to buy a DSLR and have
decided to get it modded. Or maybe you want to buy one which has already
been modded. Where do you go? Who is reliable? Who does good work?

www.spencerscamera.com

These guys convert Canon, Nikon, Olympus, Panasonic, Pentax, Sony and
possibly other cameras. In addition, they have already modded cameras
available for purchase from all the same manufacturers. They also provide
image sensor cooling systems (no huge cooler box to cool these DSLRs!).
Can you guess who modified the Nikon DSLRs currently used on board the
International Space Station? Spencer’s!
I like choices, and if I was not going to use Spencer’s Camera for my
modification there is only one other service I personally would even
consider, and that is LifePixel at:

www.lifepixel.com

They modify Nikon, Canon, Fujifilm, Olympus, Panasonic and Sony
cameras.
1.8 Other important equipment

Depending on where you live you might be like me down here in Texas
where you have a real problem with dew. I have come home from the dark
site and literally used five or six towels to wipe off my equipment before
taking it inside. When I lift the lid to one of my equipment cases at the dark
site, water runs off and splashes on the ground (not drips, not trickles,
literally splashes). So unless you just want to image for an hour or two and
then pack it up, you need dew control.

I use a four port, dual channel dew controller with four dew strips (heated
strips with Velcro that warm the air around my optics to keep them dry).
One strip goes near the front of the optics on my main and guide scopes,
one on my finder scope, and one right before my main scope’s focusing
tube. Without these it would be more trouble to set up and tear down than it
would be worth imaging for an hour or two. Expect to pay $100-$300 for
this setup.
Figure 53 Dew strip placement.

Now you may think, scope, mount, guiding, camera and dew, that should do
it! Not even close.

Next we need at least one computer to run the shutter and guide the scope
(the autoguider plugs into a computer to run software that does the actual
guiding). Most imagers use a laptop or netbook for this. I used to use two,
one netbook for shutter and guiding, and one laptop for image transfers
(images show up complete with histograms on the screen immediately after
taking each frame), remote control of the scope and session planning.

There is at least one guiding solution that does not require a computer
which is made by Celestron, the NexGuide Autoguider. I have heard both
good and bad, but more bad than good about its guiding capabilities. It is
nowhere near as flexible as a SSAG in conjunction with PHD and EQMOD
(more on that later) but it is portable and self-contained.

Once you have a laptop or netbook, in order to preserve your night vision
you can get red rubylith sheets off of eBay and cut them to fit over the
screen. I actually mounted the rubylith under the bezel so it appears factory
installed.

Why red lights? Your eyes use two types of cells to give you
vision, rods and cones. Cones detect the full spectrum and see
both color and brightness. Rods are very sensitive to brightness
but not color and are essentially blind to red light. Rods can
take up to 30 minutes to “adapt” to the darkness after even a
split second of bright light. Since they are essentially blind to
red light, you use a red filter over screens and lights so that
they stay adapted to lower light levels in the rest of the
spectrum, thereby preserving your night vision.

Figure 54 My two computers at a dark site covered with rubylith.

I have since switched to using ND9 (3 stop neutral density) film used for
professional lighting. It is cheap, about six bucks a sheet, and preserves
most of the color in your images. You can get this at just about any
professional stage lighting supplier such as Adorama and B&H Photo. It
still mounts under the bezel just fine with a very small piece of double sided
tape in each of the upper left and upper right corners to keep it from
moving.
You will also need a red flashlight (or torch for our friends across the pond)
to be able to find things and walk around without harming your night
vision.

Now we need to be able to focus accurately. I start by pointing the scope
at a bright star, something like Vega or Rigel. Then using the live view on the
camera zoomed all the way in I make the star as small as possible using my
focusing knobs and then lock the focus. Once that is done I place a
Bahtinov focusing mask over the front of the scope and shoot a four second
exposure at ISO800 to make sure my focus is perfect, adjusting the focuser
forward and backward in small increments until the central line is centered
in the cross as in the lower example of the next image. If I am using a
narrowband filter such as Ha, I double the exposure time to eight seconds.

Figure 55 Focusing with a Bahtinov mask, out of focus on top, in focus on bottom.
Figure 56 A typical Bahtinov Focusing Mask that fits over the end of the telescope.

You can create your own Bahtinov mask by visiting a Bahtinov mask
generator such as:

www.astrojargon.net/MaskGen.aspx

You can then print out the mask, overlay that printed paper onto something
more solid such as plastic, cardboard, etc and cut that out. Personally, I
prefer to purchase my mask and I got a nice thick hard plastic one that is
adjustable so I can use it on several different but similar sized scopes for
less than the price of a large pizza. Plus it is 100% dew proof, which is
important where I am.

I have tried the Bahtinov, the Hartmann and focusing assistance software
which measures FWHM and other metrics. In my opinion, the Bahtinov is
the fastest, easiest, and most precise method for DSLR imaging when
coupled with live view.

Another method of focusing which might be a little easier for larger
aperture scopes is called a Hartmann mask and consists of a mask that fits
in front of your scope just like the Bahtinov mask except it only has two
holes in it.

Figure 57 Hartmann mask.

These provide an image that splits the objects in the field of view into
doubles, the object being to bring them together to form one single image as
shown in the next figure.

Figure 58 Focusing with a Hartmann mask.

The advantage to a Hartmann mask is that it is very easy to make yourself
at any aperture. The disadvantages are that it presents a darker image than
the Bahtinov mask and that in my opinion makes it more difficult to achieve
perfect focus with one.

If you want to make your own Hartmann mask you can find a generator for
them here:

www.billyard-ink.com/Hartmann.shtml

As long as we are talking about round stars, the spacing of the camera, type
of telescope and options used can really cause some strange problems. For
example, a refractor typically needs what is called a field flattener to make
sure the stars at the outer edges are just as round as in the center. When you
do not have one, you can get images with the corners looking like the stars
in the next images. Note that the stars elongate in different directions
between the two images. That is because the image on the left is the top left
of the frame, the image on the right is the bottom left of the frame, and they
both elongate towards the center of the image.

Field flatteners typically need to be designed for a specific F-ratio, and are
typically noted for use for a range of F-ratios (e.g., f/5 - f/7). Even if your
scope is in this range, the field flattener may require a T-thread spacer
between the field flattener and the camera. This spacing is critical to the
performance of the device.

Figure 59 Elongated stars caused by not having a field flattener on a refracting telescope, upper
left and lower left corners of the same image.
When shopping for a field flattener or anything else that goes in front of
the camera, be sure to consider whether it needs filter threads. Even though I
shoot with mostly Orion equipment I opted for a different manufacturer’s
field flattener because the Orion model did not have 2” filter threads which
at the time I needed. I have since moved to a four filter wheel so it really
doesn’t matter anymore.

Filters are an important part of imaging unless you live in the middle of the
ocean where there is no light pollution and are shooting one shot color. At
the very minimum I use a 2” light pollution filter by Baader, their “Moon
and Skyglow filter”. A little later on you will get to see a before and after
series showing the effects but for now just trust me, you need one. Filters
are like most other things in life: you get what you pay for. Cheap ones
typically have fewer anti-reflective coatings, so you get some interesting
reflections that are a bugger to get rid of. They also have less
consistency between the filters in a set. What this means is that the red
filter in a set may let in more light than the blue and less than the
green. This can be a pain to always compensate for.

That doesn’t mean you have to spend $450 on a single 2” Astronomik 6nm
H-Alpha filter although it is indeed a fantastic filter. A nice Baader
Planetarium 2” 7nm H-Alpha filter for $277 will do just fine. A 2” color
(LRGB) set from Baader will cost you about $570 as opposed to those ultra
cheap $50 sets you see advertised on some websites. Spend the money
once, save the pain and anguish.

So what filters will you need? If you are shooting one shot color either with
a DSLR or CCD you will need a light pollution filter. I recommend the
Baader Moon and Skyglow. If you are shooting a monochrome camera you
will need either a four filter set called a LRGB (Luminance, red, blue,
green) and/or a four filter narrowband set consisting of
clear/UV/Luminance, H-Alpha, SII and O3. When choosing a narrowband
set unless you have a specific need try for something in the 6nm-8nm range
(this is the bandwidth that the filter allows in, higher is more light but less
definition, lower is less light but more definition).
Light pollution filters work by restricting the types of light that enter the
camera. They take the best known types of streetlights, lighting for signs,
office lights, etc and find out at what wavelengths that light is emitted. Then
they coat a piece of optical glass with special coatings that restrict light at
only those wavelengths allowing all other wavelengths to pass
unobstructed.

Narrowband filters are a little different: they only pass light at specific
wavelengths. For example, an H-Alpha filter only passes light at 656nm,
while an O3 filter is centered around 501nm and an SII filter at about
672nm.

These very narrow slices of the spectrum can produce some very detailed
beautiful images. In addition, they can do all this from very light polluted
skies and pretty much in full moonlight!
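A toy check of how a narrowband passband works. The helper function and the roughly 589nm sodium streetlight line are my additions for illustration; the emission-line wavelengths come from the text.

```python
def passes(center_nm, bandwidth_nm, wavelength_nm):
    """True if a wavelength falls inside a filter's passband; bandwidth_nm is
    the full width the filter lets through (e.g. 7 for a 7nm Ha filter)."""
    return abs(wavelength_nm - center_nm) <= bandwidth_nm / 2

# A 7nm H-Alpha filter passes the 656nm Ha line...
print(passes(656, 7, 656))   # True
# ...but rejects a ~589nm sodium streetlight line (typical light pollution):
print(passes(656, 7, 589))   # False
```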

Filters come in several different sizes and styles. You can get standard
screw-on filters for 1.25” and 2” threads (these go in some filter wheels, on
the nose of some prime focus adapters, and on the front of some field
flatteners and/or focal reducers), clip in filters for Canon cameras that
actually clip inside the camera body, and specialized filters for some CCD
filter wheels. Try to stay away from the 1.25” screw on type unless your
camera specifies them as these are too small and will cause vignetting with
virtually all DSLRs and many CCDs.

Another very important question is how will you provide power to your
camera, laptop and scope while you are out in the field? I am very fortunate
in that I image from an observatory which has A/C power so all my
equipment is plugged directly into A/C with the exception of my DSLRs
which run on their own batteries, and I carry two fully charged batteries for
each camera.

If, however, I go chasing transits and eclipses like I did in 2012 I will need
portable power and for that I use large battery packs.
Figure 60 Battery packs, Orion Pro17 on the left, Schumacher XP2260 on the right.

One is a 17Ah (amp hour) Orion battery pack and the other is a Schumacher
Electric XP2260 22Ah I purchased at Wal-Mart. Both cost about the same.
The Orion pack was purchased with my scope and was bought for the
warranty, and because should anything go horribly wrong it would be very
difficult for tech support to blame the power to the scope for anything that
happened.

What are the amp hours a battery is rated in? Amp hours is
simply a rating that states how much power a battery can
provide over a given amount of time. For example a 17Ah
(amp hour) battery could provide 17 amps for one hour, 1 amp
for seventeen hours, or 8.5 amps for two hours.

Something you should be aware of with these types of battery packs is they
need to be kept charged. If you let them drain too far down they may not
recharge with their normal chargers. What I personally have had to do is
disassemble the unit, remove the battery, and charge it on a high power
battery charging unit I own for cars, boats and small engine machines like
lawn mowers. It does 12v and 6v using a variety of charging options. Use
this method at your own risk! This will also void your warranty.
For my DSLRs I have lots of choices, from using multiple regular batteries
and just swapping them out in the middle of the night, to running off AC
power with something like the Nikon EP-5B power supply which replaces
the battery with an AC adapter, or getting an extended runtime Nikon EN-
EL15A battery which is almost twice the size as a standard battery for that
camera.

So how much power will you need? How large a battery pack? That isn’t an
easy question to answer. Let’s start with the mount and go from there. If
you have an Orion Sirius mount and run it off the supplied 12v cigarette
adapter, that uses 2 amps. A 17Ah battery pack like the Orion Pro17 should
run for about (17/2=8.5) eight hours depending on the temperature and if
the pack has a full charge, and how much you slew, etc.

I use a DC to AC inverter for my laptop; plugged into a wall the laptop
uses 2.5A of power. You lose quite a bit in conversion so figure 5A of use.
If that were the only thing plugged into the 17Ah pack it should last three
hours.

Dew strips can pull quite a bit and I use four which pulls about 2A. Again,
if they were the only things I run, then about eight hours.

Now we add all this up, 2A+5A+2A = 9A, less than two hours on the 17Ah
pack on a good day with warm temperatures. Realistically? About an hour.
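The battery arithmetic above is easy to wrap in a helper. A sketch; the load figures are the rough ones from the text.

```python
def runtime_hours(capacity_ah, loads_amps):
    """Best-case runtime: amp-hour capacity divided by total current draw."""
    return capacity_ah / sum(loads_amps.values())

# The 17Ah pack running mount, laptop (via inverter) and dew strips:
loads = {"mount": 2.0, "laptop": 5.0, "dew strips": 2.0}
print(round(runtime_hours(17, loads), 1))   # ~1.9 hours on paper
```

As the text warns, cold temperatures, pack age and conversion losses mean the real figure is closer to an hour.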

One hidden thing, or at least something many people forget about is that
their guide camera probably draws power from USB which means it draws
laptop power, which drains your laptop battery much faster than if you were
just sitting at your desk. Be prepared for that.

If I really wanted to shoot all night off batteries I would have three
battery packs, all 22Ah, or at least the 17Ah and 22Ah I have plus another
22Ah, using the two large ones for dew control and the laptop and the
smaller one for the scope.

One word of warning here, many battery units have a hard time supplying
more than 2A to an outlet, be it the cigarette lighter type or the AC outlet
type. For example when using my larger laptop (2.5A AC) even the larger
22Ah pack has a hard time running it for very long before it starts
screaming, probably a thermal warning. The only time I am really using it
in the field on battery at the moment is doing solar (the eclipse and transit
of 2012) so both times I could run the car, use a DC to AC inverter in the
car, run an extension cord over to where I was and plug the laptop in there.

You could of course also use a generator but I would not advise showing up
to a star party with one, your noisy motor will not be welcome. If you are
going to use a generator you may want to consider something like the small
Honda generators, which are very quiet (for a gas engine, that is).

GAS AND DIESEL GENERATORS EXHAUST CARBON
MONOXIDE, WHICH IS A POISONOUS GAS. BE SURE
THE GENERATOR IS NOT IN CLOSE PROXIMITY
AND IS IN A WELL VENTILATED AREA. NEVER USE
A GENERATOR INSIDE A BUILDING OR VEHICLE!
CARBON MONOXIDE POISONING CAN BE FATAL!

You may already have a laptop you plan on using for all of this, or you may
be thinking of buying one. Either way you may run into a problem: the
serial port. While the industry is definitely moving towards USB for
telescope mount control, not all mounts are USB yet. For those you will
need a 9 pin serial port to run them.

Figure 61 DB9 serial port on the back of a laptop.

If your laptop does not have a 9 pin serial port (most newer ones don’t) then
you have two choices. The best is if your laptop has a mini-PCIe port where
you can get a serial card with an actual honest to goodness real serial port
on it. Older laptops may have a Cardbus or even a PCMCIA port, either of
which can accept serial cards. While this is not the least expensive route, it
will be the easiest and most trouble free.

The next solution is a USB to serial adapter which will also work and is
generally cheaper but be careful as the cheaper ones may not be compatible
and give you fits. The most common ones I have seen work are based on
the Prolific chipset. Do be careful and buy these from a reputable dealer
with a good return policy as there are many counterfeit versions of these
cables.

Other little things to consider are USB extension cables, powered USB hubs
if you do not have enough ports in your computer, AC adapters, extension
cords, power strips and in the case of dew prevention, AC to DC inverters.

If you decide to use a Newtonian, SCT or MCT you may want to consider a
focusing motor. This allows you to press a button on a remote to move the
focus in and out and allows for very fine adjustments, even on a rack and
pinion style focuser. The reason you might want one is that when you focus
these types of scopes the image your camera sees (or the view in your
eyepiece) shakes like an earthquake and it can take a little while for it to
settle back down. While this is not that big of a deal when you are doing
visual, when trying to get nice sharp focus before an imaging run this can
make you want to break something.

With refractors and a good dual speed Crayford focuser this really isn’t an
issue. The motion settles almost instantly unless your mount is substantially
overloaded.

Focusing motors also come in computer controlled versions which your
image capture program may be able to control. Some programs such as
Images Plus can even adjust the focus throughout the night automatically to
make sure all your images are nice and sharp. Note that focusing motors are
not available for all focusers so you may have to replace your focuser with
a different model to use this feature. This might be something you want to
investigate before you purchase a scope.
1.9 Acquiring images

Before you try to acquire images it is very important that you learn how to
set up the camera. As cameras vary considerably between manufacturers,
models and versions, what follows will be very generalized.

First, most cameras have a top dial with several shooting modes on it. One
of those is an M for Manual which is where you need to set the dial for AP.

Figure 62 Top dial set to Manual exposure, bottom dial set to Single frame drive mode.

Next we need to make sure the advance mode (or drive mode) is set to
single frame. Most cameras have the ability to take one image when you
depress the shutter button and hold it (single frame), take many images
slowly when you hold the shutter button down (slow), and take many
images very quickly when you hold the shutter button down (fast). We need
the single frame setting as shown above for this camera. Your setting may
be in a menu or a different dial.

Settings for white balance, auto focus, focusing mode, etc should all be
ignored as we are shooting without an auto focus lens and in RAW.

Your camera may have a setting for long exposure noise reduction, make
sure this is off.

Another interesting option is that some cameras can shoot in 12bit mode or
14bit mode. Always select the highest bit mode you have available. (We
will discuss exactly why a little later.)

Last but certainly not least, if your camera has a flash be sure to turn it off.
If you fire off the flash at the dark site, you will not be making any new
friends!

What some people forget at this point is that almost nothing you take a
picture of will look the way you want it to. You need to do some processing.
For that we need to stack the images first. Stack you say? Stack what? Glad
you asked!

Astrophotography is not one image. Back in the film days it could be, but even then you could stack multiple images in the darkroom to make one image. To see what stacking and processing can do to an image, see the next two figures. There are four types of images we can use to stack; in order of importance they are...
Figure 63 A single light frame, unstacked and unstretched.
Figure 64 The same image as the previous figure but stacked, stretched, cropped and processed.

Light frames. These are normal pictures just like you would take in the daytime. Each one we try to get as close to the final product as possible as we are taking them. The more lights you take, the more detail you can get out of an image, to a point; once you pass 20 or so the returns diminish quickly. The additional detail is gained by increasing the signal to noise ratio of the image, which grows with the square root of the number of frames: four frames double the SNR of a single frame, and it takes sixteen to double it again, so each extra frame contributes less than the one before. For very faint objects I have taken eight or more hours’ worth of light frames to stack together (96 or more 300 second images).

This very general rule assumes that you have already determined the correct length of time to expose each image in the first place (more on that later).
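As an aside, the diminishing returns can be sketched numerically. Stacking N light frames improves the signal-to-noise ratio by roughly the square root of N; this is a generic illustration of that rule, not tied to any particular capture program:

```python
import math

def stack_snr_gain(frames):
    """Relative SNR of a stack of `frames` light frames versus a single frame.

    Random noise averages down as the square root of the number of frames,
    so each additional frame helps less than the one before it.
    """
    return math.sqrt(frames)

for n in (1, 2, 4, 8, 16, 32):
    print(f"{n:3d} frames -> SNR x{stack_snr_gain(n):.2f}")
```

Four frames double the SNR of a single frame; getting the next doubling takes sixteen, which is why most targets stop rewarding you much past 20 or so frames.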
Dark frames. These are taken using the same ISO and exposure time as above, at the same temperature (this is important), except they are taken with the lens cap, scope cap, or body cap on so that the frame should (ideally) be completely black. The same rules for number of images apply as above. We take these images so that they show us what dark current the camera generates at a given temperature, exposure time, and ISO. We will later use the computer to digitally “subtract” the unwanted “noise” which the dark frames reveal.

According to a fellow astrophotographer who has a Chemistry PhD, plastic lens caps can allow IR light to pass through and skew the dark frames on a modded camera (or possibly a CCD). While I do not hold a PhD in anything, and I certainly understand that certain plastics are IR transparent (the smoked window on the front of your TV remote is an example), I am skeptical that the heavy black plastic in my Nikon lens caps is IR transparent.

Bias frames. These frames are taken at the same temperature and the same ISO, but at the fastest shutter speed your camera can manage, with the lens cap on. This shows us the electrical noise the camera's readout generates so that we can later digitally subtract it from the light frames. The same rules for number of images apply as above.

Flat frames. These are taken with even lighting over the front of the scope. An example: during daylight, drape a white t-shirt over the front of the scope, then adjust the camera to take a correctly exposed image through the scope. The same rules for number of images apply as above. These are used to correct vignetting and other optical defects in your optical train such as dust. It is important to note that you must take a flat at the exact same focus with the exact same equipment (filters, etc.) as was used for the lights or it will be worthless.

Technically, darks already contain bias information, so bias frames are not really required unless your processing software deals with bias frames separately; and if you have no vignetting or other optical defects you can skip flats as well (although this is unlikely). Lights and darks, however, are pretty much required.
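To make the roles of these frame types concrete, here is a minimal sketch of the arithmetic stacking software performs: subtract the master dark from each light, and divide by a normalized master flat (itself bias-corrected). The tiny arrays below are made-up stand-ins for real frames; real programs do this per pixel across whole stacks.

```python
import numpy as np

# Toy stand-ins for real calibration frames (real ones come from the camera).
light = np.array([[1200., 1150.], [1100., 1300.]])  # raw light frame
dark  = np.array([[100., 100.], [100., 100.]])      # thermal + bias signal
bias  = np.array([[50., 50.], [50., 50.]])          # read-out signal only
flat  = np.array([[2050., 1950.], [1850., 2150.]])  # evenly lit frame

# The master flat is bias-subtracted, then normalized to a mean of 1 so that
# dividing by it corrects vignetting without changing overall brightness.
flat_corrected = flat - bias
flat_norm = flat_corrected / flat_corrected.mean()

# Dark subtraction removes the dark current (the dark already contains bias).
calibrated = (light - dark) / flat_norm
print(calibrated)
```

Note that the dark is subtracted as-is, while the flat only corrects relative brightness across the frame — which is why the flat must be taken with the same focus and filters as the lights.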
I keep what is called a “dark library”, which is a collection of dark frames shot at various ISOs, exposures and temperatures. Basically, I save all my darks and reuse them whenever I find another target that needs the same darks. Since the characteristics of your camera can change over time I recommend removing any darks over six months old from your library, but you may want to keep them around in case you want to reprocess lights that were shot before your darks “expired”.

Once you have these images you can use a program such as Deep Sky
Stacker (freeware, PC) to combine them into a single image which you can
then process using photo-processing software like Photoshop. Deep Sky
Stacker is available from:

deepskystacker.free.fr

Now, after reading all of that, people still ask: how many should I take? The short answer is to start with 20 on bright objects; that way you are covered for 16 plus a few extra in case some do not turn out as well as you expected. Using this number you will get the “best bang for the buck”, or “most return for your investment”, or whatever you want to call it. Once you have worked with the target and 16 light frames, you will start to know which targets may need “more light” and you can extend the number of frames from there.

One suggestion that keeps popping up is taking a certain number of images and duplicating them instead of taking more. Unfortunately, that just doesn’t work: each new frame contributes fresh data that helps remove noise and increase the detail in the object, while a duplicate adds nothing new. Think of it this way: you are looking at the moon on a night when a very thin layer of clouds completely covers the sky. Every now and then you get to see a little piece of the moon. If you take a picture each time you glimpse a small piece of the moon, and then stitch the pieces together, you could eventually build an entire image of the moon without the clouds! This is a pretty good analogy of what you are doing when taking multiple lights, darks, biases, and flats.
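Why duplicating frames fails can be shown with a toy average. The noise values below are invented for illustration; the point is that averaging independent frames cancels noise while averaging copies of one frame changes nothing:

```python
import statistics

# Illustrative noise for 16 frames of the same "true" pixel (value 100).
true_value = 100.0
noise = [8, -3, 5, -7, 2, -9, 4, -1, 6, -5, 3, -2, 7, -8, 1, -4]
frames = [true_value + n for n in noise]

# Averaging 16 independent frames cancels most of the noise...
stacked_independent = statistics.mean(frames)   # 99.8125, close to 100

# ...but averaging 16 copies of ONE frame leaves its noise untouched.
stacked_duplicates = statistics.mean([frames[0]] * 16)  # 108.0, same as frame 1
```

The duplicated stack is exactly as noisy as the single frame it was copied from, which is why only genuinely new exposures improve the result.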
Exposure time is something that a lot of people have trouble understanding. There are no magic exposure settings for any particular type of object. Some things like M42 need a combination of images at different exposures, which can range from ISO800 for 15 seconds to ISO800 for 200 seconds. Most of your targets, however, will use one setting for all the images (unless you decide to shoot with different filters and combine them).

The way I typically choose exposures for nebulae and galaxies is to start with settings something like ISO800 for 300 seconds and look at the resultant image. There are two particular things I look at (besides framing, of course), starting with the histogram shown in the next figure. Note that here I am working with a fairly bright image; histograms are pretty worthless with an unstretched image of a very faint object.

Figure 65 Typical histogram of a single light frame.

What I am looking for in the histogram is clear separation from its left-hand side. This tells me that I have clear definition in my image. Next I look at the background. It should be dark gray. The background turning lighter means you are capturing more sky fog, which is light pollution.

In the next two figures note that in the first one not only is the background
darker, but there is much more contrast between the nebula and the
background than in the second one (these are unstacked, unstretched images
so you have to look hard). You want to get as much detail out of the nebula
as possible which requires as much exposure as possible, but keep in mind
in the final image you want the black of space to be dark (I prefer almost
black) and there to be clear separation between that and the nebula.

So, to recap: start off with something like ISO800 and 300 seconds, then look at the histogram. If the hump is touching the left side, or is right next to it, increase your exposure. If the hump has clear separation, check the sky fog; if it is too light, reduce the exposure to balance it out.
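That recap can even be expressed as a rough rule in code. The 5% and 25% cut-offs below are invented for illustration only; in the field this remains a judgment call made by eye:

```python
import numpy as np

def exposure_advice(pixels, bit_depth=14, low_frac=0.05, high_frac=0.25):
    """Crude histogram check: where does the most-populated bin sit?

    The thresholds (5% and 25% of full scale) are arbitrary illustrations,
    not values from any capture program.
    """
    full_scale = 2 ** bit_depth - 1
    hist, edges = np.histogram(pixels, bins=256, range=(0, full_scale))
    peak = edges[np.argmax(hist)]  # left edge of the most-populated bin
    if peak < low_frac * full_scale:
        return "increase exposure: hump is against the left side"
    if peak > high_frac * full_scale:
        return "reduce exposure: sky fog is getting too bright"
    return "exposure looks reasonable"
```

A frame whose pixel hump hugs the left edge gets "increase exposure"; one whose hump has drifted well to the right gets flagged for sky fog.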

These guidelines are to give you something to start with. Only your
experience and trial and error will get you where you need to be to create
great images.

Figure 66 A well exposed light frame.


Figure 67 An overexposed light frame.

When dealing with fainter objects you may not see anything on your
histogram that isn’t buried in the far left side and no amount of exposure
will separate the data from the background. This is where stacking a large
number of images together will help increase the SNR and bring out the
faint objects.

On very faint objects such as the Witch Head, Iris, and Spider & the Fly
Nebulae, etc, you may need to stretch the initial raw images right there to
make sure you are on target and framed where you want to be. This is
where it is handy to have Photoshop or another image editing package on
the laptop in the field so you can do a quick stretch before you spend hours
imaging something that isn’t in your frame.

When choosing an image editing program it is very important that you choose one which will work with 16-bit (or higher) images such as .TIF files. I believe one popular free program, GIMP, will not work with 16-bit files, so it is not suitable. I also believe that the cheaper version of Photoshop, called Elements, will not completely work with 16-bit files without a ton of “tweaking”. I hate to say it, but you really should consider the full Photoshop CS package for this at about $600 (or their “cloud” version, which is basically renting it for under $20 a month), and/or consider dedicated astronomical image processing software.

Why 16-bit files? Because you are going to “stretch” the image to make
things more visible. Stretching requires a lot of latitude in the image colors
or you get banding among other issues. Each of the colors Red, Blue and
Green in an 8-bit image has 256 shades of that color to work with. So for
example there are 256 shades of red, 256 shades of blue, and 256 shades of
green that can be blended together to make one of 16.7 million colors.

Figure 68 32-bit image stretched as a 32-bit file.


Figure 69 8-bit image stretched as an 8-bit file.

In comparison, a 16-bit image has 65,536 shades of each color for a total of over 281 trillion colors! This is important because a faint nebula might be so dim in your image that, with black at 0,0,0, its red is only 100,0,0 out of 65535,65535,65535. If that same image were 8-bit, the value would have to round down to 0,0,0 out of 255,255,255, since 100 out of 65,535 scales to less than half a step out of 255. This would have the effect of erasing any data you captured! We will cover this in much more detail later in the book.
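The rounding loss is easy to demonstrate with a couple of lines of arithmetic; the 50x "stretch" factor here is just an arbitrary illustration:

```python
# A faint nebula pixel in a 16-bit image: red = 100 out of 65535.
faint_16bit = 100

# Converting to 8 bits scales the range 0..65535 down to 0..255.
faint_8bit = round(faint_16bit * 255 / 65535)
print(faint_8bit)  # 0 -- the faint signal has been rounded away entirely

# The same value survives a 16-bit "stretch" (here, a simple 50x multiply).
stretched_16bit = min(faint_16bit * 50, 65535)
print(stretched_16bit)  # 5000 -- now easily visible
```

Once the value has been rounded to zero in an 8-bit file, no amount of stretching can bring it back, which is exactly what the two stretched figures above show.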

What you see in the two previous images is the exact same image, the first
one saved as a 32-bit file and then stretched as a 32-bit file. The second
image was a 32-bit image that was saved as an 8-bit TIF and then stretched
as an 8-bit image. Both were stretched in the same software, both started as
the same file, and both were stretched the same.

One interesting thing that always seems to pop up on forums is the use of
light pollution filters. Do they work? What are the pros and cons? Let’s start
with do they work....

Figure 70 Lagoon nebula without light pollution filter.


Figure 71 Lagoon nebula with light pollution filter.

The previous two images were taken minutes apart, same scope, same
settings, same everything, except the second image was taken with a light
pollution filter. So do they work? You betcha! Unless I am shooting
narrowband I always have a light pollution filter on my setup.

Of course, if I lived where I had no light pollution (the middle of the ocean,
the middle of the desert in West Texas) then I probably wouldn’t use it as it
does block some light I might want. Some versions also tend to make the
blue/violet halos around bright stars worse (my previous light pollution
filter was much worse at this than the new Baader Moon & Skyglow filter I
currently use).

Now that we have added a filter while shooting a one shot color image we
need to think about the white balance. This only applies to DSLRs or other
standard digital cameras, not CCDs. The software (such as Deep Sky
Stacker) you are using to stack your images may need a custom white
balance in order to maintain accurate colors. This is not as much an issue
with more advanced software such as PixInsight.
Setting a camera’s custom white balance is different for each camera model,
but I will give you the instructions for my D7000.

With the camera mounted to the scope in daylight and the light pollution
filter installed, start by setting your exposure mode (top left dial) to M, A, S
or P. Place a white sheet of paper far enough from the objective of the
telescope so that the page just takes up the entire frame of view. Make sure
the camera will take a correctly exposed image. Now press and hold the
WB button and rotate the back command dial (by your thumb) until you see
“PRE”. Let go of the WB button and immediately press it and hold it again
until you see “PRE” start flashing. Quickly release the WB button and take
a picture.

You now have a custom white balance and can use that setting by holding
the WB button in and rotating the back command dial until the top LCD
reads WB and PRE on the bottom and d-0 on the top.

Next up are focal reducers (see Figure 14). These are often combined with field flatteners, although both items can be purchased separately. The focal reducer does two things: it increases the field of view (reduces the magnification) of your scope, and it makes the system optically faster, concentrating the same light onto a smaller area of the sensor. This means you can use shorter exposures, but it decreases the apparent size of the object in your frame. Where this comes in really handy is with larger targets like the North America Nebula, which would normally be impossible to fit completely in the frame of all but the shortest focal length scopes, and when you use a really slow scope such as an f10 SCT.

One thing to keep in mind, which we touched on previously in the scope section, is thermal expansion and contraction. All scopes, especially metal scope tubes, expand as the temperature increases and contract as it decreases. This can change your focus throughout the imaging session. For this reason, you should make it a rule to recheck focus every time the temperature changes by 20 degrees or more.

Now we have covered the most important items, although there is still a lot
to consider, including AC adapters, USB cables, barlows, eyepieces for
visual, and much more.
Here is a fairly complete list of my basic imaging kit:

Orion Premium 110mm f7 ED APO refractor (main scope)


Orion 80mm shorttube guidescope (scope for guiding)
Rings for guidescope (attaches the guidescope to the rail below)
Rail for guidescope (attach the rings above to the main scope)
Orion Star Shooter Auto Guider (the actual guider)
Celestron Laser Pointerfinder (finderscope)
Orion EZ Finder Deluxe (finderscope)
Dew Not 2 channel 4 port controller box (to control the strips)
4 - Dew Not heater strips for both scopes and red dot finder (to prevent
dew)
AC to DC converter 6A for dew system (to power the dew system from
AC)
HoTech field flattener 2" (to make the stars on the edge of the frame round)
Orion 4-2” manual filter wheel
Baader Moon & Skyglow filter 2" (to remove light pollution)
Baader 2” 7nm Ha filter
Baader 2” 8.5nm OIII filter
Baader 2” 8nm SII filter
Orion 2" Nikon T-Ring (attaches to the camera)
Orion 2" 10mm & 5mm T-Ring spacers (provides extra spacing between
camera and field flattener)
Nikon D7000 body
Orion Sirius EQ Go-To mount and tripod
AC adapter for mount (not included with mount purchase)
Extra 11lb counterweight for Sirius mount (to bring everything into
balance)
Case for optical tube (for transportation)
Laptop to control guiding, mount and shutter
Three USB cables for guiding, tether and shutter
Shoestring Astronomy USB controller for Nikon (allows for shutter
computer control)
2x 3-element Orion APO barlow (to enlarge objects)
2" Extension tube (needed to focus when using barlow)
Orion Stratus 5mm 2" eyepiece
Orion Stratus 8mm 2" eyepiece
Orion Stratus 13mm 2" eyepiece
Orion Stratus 24mm 2" eyepiece
Orion 2" Dielectric 90 degree diagonal
Orion 24mm illuminated crosshair eyepiece
Bahtinov Mask (for focusing)
Nikon Action Extreme 10x50 binoculars (for keeping an eye on seeing
conditions)
Portable table (for laptop, etc)
Portable chair (for my rear end)
AstroPlanner for planning imaging sessions
Power strip and extension cord
Red Flashlights (2 in case one dies)
Two cases to carry all the accessories in
One case for laptop, log book, pens, etc
Rags for wiping off dew
Clear paper sleeves to keep dew off target lists
The imaging train on my primary imaging scope
2.1 Camera control software

There are several types of camera control software out there with a wide range
of functions and prices. Below is a chart showing some of them and their
functions:

Figure 72 Camera control software comparison.

Image download means it can download images from the camera and display them on the screen as they are shot. This is important because it allows you to see what is on your images without having to look at the camera's tiny LCD screen, which is hard to do not only because it is so small and low resolution, but also because it will destroy your night vision.

Live view means you can see what the camera is looking at live while in live
view mode. Useful for coarse focus adjustments and sometimes to make sure
your target is where you want it to be.

Focus assist means the software has routines to help you perfect your
focusing. I don’t really use this. I use a focusing mask which I have already
discussed.

Auto stretch is a function where the image's histogram is automatically stretched so you can see faint targets much better. This can really help with very faint objects.

Dither is the capability to shift the pointing a few pixels between shots so that fixed-pattern noise lands on different pixels, increasing the signal to noise ratio after stacking.
Processing means the same software can both control the camera and do post
processing of the image. Items with a * mean partial processing is possible.
We will discuss stacking and processing later.

Canon DSLR, Nikon DSLR and CCD means it can control those types of
cameras.

Last but certainly not least is price.

These are certainly not your only choices for camera control but are a pretty
good sampling. All of the packages in the chart are capable of exposures of
less than, or greater than 30 seconds with the correct hardware.

With Canon cameras you typically will have one USB cable that runs from your laptop to your camera for control and download of images. Nikon cameras use separate cables for data transfer and control. This provides slightly faster performance than the Canon's single-cable interface, since the camera can receive control signals and transfer images simultaneously.

Speaking of Nikon, to do longer than 30 seconds you will need a special shutter release cable with either an IR connector at one end or a GPS plug. The difference is in the camera. Most very low end Nikons do not have a GPS port so must use the IR remote. Higher end/newer cameras (D90/D7000) have the GPS port and can use a cable that connects there. You can get the cables from Shoestring Astronomy. Be sure you get one that's long enough.

Figure 73 DSLR Shutter.
DSLRShutter is a free application from Stark Labs which does one thing,
control the shutter of your DSLR. It isn’t fancy, but it gets the job done. It
provides support for both Nikon (with DSUSB or DSUSB2, these are the
Shoestring Astronomy adapters) and Canon, on both Windows and Mac.

Our first commercial application is Images Plus (IP), my current capture program. Mike Unsold actually sells two programs called Images Plus: one is this one, the image capture portion, and the other is the image processing portion. They are available separately or together in one package.

I like IP because it is fast, easy to use, and works quite well. Support for both the DSUSB and DSUSB2 is built in for Nikons, as is support for Canons and a whole host of CCD cameras. IP can control not only your camera but your focuser and filter wheels, and provides dither support for multiple guiding applications. Of course price is always important, and the price for Images Plus is comparable to other offerings that support fewer camera manufacturers.

Figure 74 Images Plus camera control


You can queue up a whole series of different exposures and it will run through
them in sequence. During the first year that I used this program I was
constantly finding new applications for it. I anticipate that it will fulfill my
imaging needs for the foreseeable future.

When you want to connect to your camera simply plug it in. On the top menu
select Camera, and then your camera type. That will display the following
box:

Figure 75 Images Plus camera connection dialog.

You can see that I have already clicked the Connect button and that the camera is connected. When connected to a scope, of course, it will not show the focus mode as Auto and will not give you the lens info or focal length.

A little further down you can click the Select button to select a directory to
store your images when it automatically downloads them. Note that I am
storing mine by the date I start the imaging run.
Just below that is the checkbox you need to check if you want to download the
images automatically to the computer from the camera.

One interesting piece of information here is that the more images you have on
the memory card in your camera, the longer it takes to connect to the camera.
I would suggest always starting with an empty card so it will connect almost
instantly.

The next tab is where we can set quite a few camera settings:

Figure 76 Images Plus DSLR control settings screen.

Here we make sure that Long Exp. NR is set to off, and change the Shutter Speed to 30 (not 1/30), White Balance to Auto, and ISO to 3200 or higher. Aperture will not be settable with a scope attached. Exposure Comp is set to 0.0. Image Quality must be set to RAW+JPEG, but you can choose the JPEG quality (I use Normal). Lastly, for Jpeg Compress I use Size priority, since I am more concerned with the speed at which it displays than with quality; I will only be using the JPG as a preview.

The reasoning behind the shutter speed and ISO is for live view alignment and
focusing. You need a very bright live view image so you can see the alignment
stars and focusing spikes.

The next screen has some more custom settings for the camera as shown here:

Figure 77 Images Plus DSLR control custom settings screen.

The only things we have to check here are High ISO NR, which should be off, and the Color Space, which should be sRGB. High ISO NR is noise reduction, which we will do a better job of with stacking and dark frame subtraction. For the color space I use sRGB because it is pretty much an industry standard for photographers and print houses, so there really isn't a need to use anything else.
The next tab is focusing and gives you this screen:

Figure 78 Images Plus DSLR control focus screen.

The fast and easy explanation of this screen is that when you set it to loop and FWHM, you want the Current Frame to equal the Current Best and be as low as you can get it. Of course the full explanation is a wee bit more complicated.

Full Width Half Maximum is basically a measurement of how concentrated a star's light is. As the star becomes sharper (more in focus), the light is focused into a smaller area, which has a brighter center than if the star were out of focus with that same amount of light spread over a larger area.

As the focus improves on the star, the number for FWHM drops. The lower the number (which varies based on seeing conditions, brightness of the star, characteristics of the camera, etc.), the more "in focus" the star is.
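To make the idea concrete, here is a generic sketch of how a FWHM measurement can work on a one-dimensional star profile (this is not the routine Images Plus actually uses; it just illustrates the concept):

```python
import numpy as np

def fwhm_1d(profile, background=0.0):
    """Full Width at Half Maximum of a 1-D star profile, in pixels.

    Finds the peak above background, then measures the width of the
    profile at half that peak, interpolating between samples.
    """
    p = np.asarray(profile, dtype=float) - background
    half = p.max() / 2.0
    above = np.where(p >= half)[0]
    left, right = above[0], above[-1]

    # Linearly interpolate the exact half-maximum crossing on each side.
    def crossing(i, j):
        return i + (half - p[i]) / (p[j] - p[i]) * (j - i)

    x_left = crossing(left - 1, left) if left > 0 else float(left)
    x_right = crossing(right + 1, right) if right < len(p) - 1 else float(right)
    return x_right - x_left

# A sharper (more focused) star has a narrower profile and a lower FWHM.
sharp = [0, 1, 10, 100, 10, 1, 0]
soft  = [0, 20, 60, 100, 60, 20, 0]
print(fwhm_1d(sharp) < fwhm_1d(soft))  # True
```

Both example stars have the same peak brightness, but the focused one concentrates its light into fewer pixels, so its width at half maximum is smaller.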
Figure 79 Images Plus DSLR control Live View screen.

I use the live view screen for both alignment and initial focusing adjustments.
Keep in mind that once you click the Start button you have a limited amount
of time (several minutes) before the thermal shutdown kicks in and turns it
off.

Once I slew to my first alignment star I click the Start button, then click the
checkbox labeled Display Cross-Hair and then center the star right in the
center of the crosshairs. It really isn’t necessary but you can click Zoom to
zoom in on the star and get even more accuracy. You have four levels of
zoom.

When focusing I make sure to use all four of the zooms so that I get as precise
a focus as possible. Make sure you click the Stop button once you have the
star aligned or focused before you slew to the next alignment star or target to
let the sensor cool down.
Next we have the Capture tab which displays this screen:

Figure 80 Images Plus DSLR control capture screen.

The capture tab is where you can set up a sequence of short exposures (I use it
for less than 4 seconds or so). This is very useful for doing solar or moon
imaging. On Nikon cameras this uses the standard USB cable so no DSUSB
adapter is needed since you are not using bulb exposures.

Now we come to the workhorse tab, Bulb Capture, and it displays this screen:
Figure 81 Images Plus DSLR control bulb capture screen.

Here you can set up sequences of long exposures. On a Nikon you must select
the Control Type from the drop down box before you can do anything, then
click the Reset button. Once you do, all the other buttons become clickable.

One important button here is the Dither Setup button. Click this and then you can turn on dithering and select your guiding program (I select PHD). When using PHD, make sure you have the PHD server enabled in PHD's settings.

While not as pretty as some other camera control apps I find Images Plus to be
extremely versatile and very efficient. Very little space is wasted looking at
controls you don’t need so more of the screen can be used for the display of
the last captured image, important information, and histogram if you choose to
display it.
A newcomer to the field (at least new to multiple camera support) is the Backyard family from Guylain Rochon, which includes the BackyardEOS program for Canon and the in-development BackyardNIK for Nikon, scheduled for release around the third quarter of 2013.

Backyard is the cheapest commercial package but is still an excellent value


especially for beginners to AP.

Figure 82 BackyardEOS capture screen.

The main capture screen is a little more cluttered than IP, but provides more
information such as battery percentage, temperature, humidity and dew point.
Support for DSUSB is built in and accessible right from this screen under the
“Cable support” drop down.

One thing I really like about the Backyard program is the way it displays the
progress of your current imaging session. As your capture session progresses
you can clearly see how far along you are and how far you have to go,
complete with an estimated finish time on the right hand side of the screen.
This makes it easy to be ready for switching targets, meridian flips or packing
up to get home.
Overall, I liked the Backyard programs, but there are some downsides.

There is no support for CCD cameras so it is probably not a program you can
grow with should you decide to get into more advanced AP work.

Backyard currently has no support for motorized filter wheels and only provides FWHM focusing routines.

Since so much information is provided on the screen, you have a smaller area to display your current exposure, which really is a problem for me. Even though I do not wear glasses and use a 17” laptop, I still find myself squinting and straining to see details of targets out at the dark site. Sure, you can use the zoom function (as you can in other apps such as the previously discussed IP), but I find having the image as large as possible really helps me in the field.

Lastly, it only supports guiding through PHD, which really isn't that bad since PHD is by far the most widely used guiding app, but it still is a consideration.

Figure 83 BackyardEOS progress center closeup.

Guylain has certainly provided a lot of functionality and ease of use for your
money with the Backyard series and I will certainly be keeping an eye on
future versions.
2.2 Mount control/Planetarium software

This is a critical component to making your evening efficient and enjoyable.


The first piece of software I recommend is EQMOD. This is a little free suite of software that allows other software to communicate with your mount, and even lets you slew your telescope with a wireless gamepad if you choose. Unfortunately, I believe EQMOD only works with Synscan-controlled mounts, which include Orion and Skywatcher. You can check compatibility with other mounts here:

eq-mod.sourceforge.net

EQMOD runs on a platform called ASCOM, which is a freely downloadable piece of software that just about every manufacturer of astronomy gear has drivers for. This allows all of the equipment to talk together, including mounts, guide cameras, CCD cameras, focuser controls, dome controls, and a myriad of software, all at the same time.

Figure 84 Illustration of how everything communicates through EQMOD and ASCOM.

EQMOD will allow you to interface planetarium, planning, and other software with your telescope. For example, software such as TheSkyX, Starry Night, Red Shift, and Stellarium allow you to look up targets and slew to them. It is also useful for aligning your telescope.

EQMOD also features "pulse guiding", which alleviates the need to connect
a cable from the ST-4 port on your guide camera to your mount. One really
neat thing here is the ability to control something akin to a gain control for
the guide signal making it move the mount more, or less, depending on
what you want. This comes in very handy in some situations where your
guiding software takes forever to calibrate.

Figure 85 EQMOD main screen with configuration open.

Another really nice feature of EQMOD is PEC training. In the previous section on mounts we discussed what PEC is and how software is used to find, strip out, and finally make PEC adjustments. EQMOD's PECPrep program can provide the last two of those: it reads and filters the log file from guide software such as PHD, then outputs a correction file that EQMOD can use directly to provide the corrections.

My setup allows me to connect to the scope with TheSkyX, and instead of using the hand controller to do alignments I do it all through software. I pick any three targets (it could be two, ten, whatever) in the sky and tell it to slew to the first one. I then use my wireless gamepad to center the object in my camera, and then in TheSkyX I click on Sync. I repeat this two more times and then I have a very good alignment. One advantage this has is that I can sync on ANY three objects: planets, the moon, stars, whatever. I am not limited to objects in the hand controller's alignment database. One trick is to pick objects rather far apart and make sure two are on opposite sides of the meridian.

One last feature, which I at first thought was just plain ridiculous and geeky, is that with EQMOD you can have your mount talk to you. That's right, it can get sassy (not really). Now I can't believe I ever used a mount that would not tell me "slewing to target" and "slew complete". When you sync it for alignment you don't have to worry about whether it took or not, because it will tell you "sync". Want to change rates with your gamepad without looking at the screen? Press the rate change button and hear "rate one", "rate two". How cool is that?

Planetarium software is included here too, since it is used primarily to drive the scope and show me where I am pointed and what I am looking at.
Figure 86 TheSkyX main interface.

TheSkyX from Software Bisque is the software of choice for most serious
astrophotographers I know, and once I tried it, I understood why. It is slick,
fast, and has more targets listed than I knew existed. It can load the latest
satellite tracking data, comets, asteroids, iridium flares, conjunctions and far
more.

When I am doing spectroscopy I use TheSkyX to find all the stars of a particular type up in the sky and sort the list by magnitude so I can shoot the brightest one in the sky. It takes seconds. I tried most of the other offerings; the observatory I shoot at even has an 8” Takahashi Newtonian run by the free software Cartes du Ciel, and yet I was happy to spend my money on this package. That same observatory, however, does run TheSky 6 (TheSkyX’s previous version) for its 16” SCT on a Paramount.
Figure 87 TheSkyX observing list buttons.

By clicking on the Observing List tab over on the far left of the screen you
will see the panel on the left change to the Observing List panel. Here you
can just click “What’s Up?” if you have already configured your list, or if
you just want the default. The real power starts when you click the
“Manage Observing List” button to get this:

Figure 88 TheSkyX manage observing list screen.

As you can see in the above image you can have your “what’s up?” refined
to a specific type of object. You can even get more in depth by setting an
array of filters:
Figure 89 TheSkyX advanced query tab.

This is where things get really interesting. You can select an attribute on the
left, then double click it to get parameters such as the following:

Figure 90 TheSkyX attribute filter window.


From here you can tell it what you want the attribute to be: equal to a value, less than a value, containing a value, starting with a value, and so on. You can of course set several filter attributes. For example, I could ask for all stars of magnitude 10 or better, spectral class starting with W (Wolf-Rayet stars; this will match WN, WNE, WNL, WC, etc.), more than 30 degrees above the horizon, with a transit before midnight, and much more. I hope you can see how powerful this is.
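If you like to see an idea in code, here is a rough sketch in Python of what a multi-attribute query like that boils down to. The star list, field names and values are invented for the example; this is not TheSkyX’s actual data model.

```python
# Hypothetical sketch of a multi-attribute catalog filter: all stars of
# magnitude 10 or better, spectral class starting with "W", more than
# 30 degrees above the horizon. The data below is purely illustrative.
stars = [
    {"name": "WR 104", "mag": 13.5, "spectral": "WC", "alt": 42.0},
    {"name": "Gamma Velorum", "mag": 1.8, "spectral": "WC", "alt": 12.0},
    {"name": "EZ CMa", "mag": 6.9, "spectral": "WN", "alt": 55.0},
    {"name": "Vega", "mag": 0.0, "spectral": "A0", "alt": 80.0},
]

matches = [
    s for s in stars
    if s["mag"] <= 10                  # magnitude 10 or better (lower is brighter)
    and s["spectral"].startswith("W")  # Wolf-Rayet classes: WN, WC, ...
    and s["alt"] > 30                  # more than 30 degrees up
]

# Sort by magnitude so the brightest candidate comes first.
matches.sort(key=lambda s: s["mag"])
print([s["name"] for s in matches])
```

Each added condition narrows the list, exactly as each filter attribute does in the Manage Observing List screen.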

Once you have an object centered on the screen, TheSkyX presents a ton of information in the left panel.

Figure 91 TheSkyX object details panel.

Here you see pretty much everything you need to know at a glance, including the rise, transit and set times, alternate catalog IDs, what constellation it is in, etc. I find this very useful when I am looking up targets at the dark site because either I did not have time to come up with a target list, I am just doing visual for fun, or I am looking to see what is around a target I am already shooting.

If you click on the Log tab as seen over on the left, you can quickly enter
information about your viewing or imaging of this target including ratings,
seeing, and general notes. You can even have multiple observers so they can
each leave logs on the same target.
Of course no tour of TheSkyX would be complete without showing you
some of the artwork included in the program for select targets. For example,
here is what the screen looks like when I center M43 with my scope and
imager set up:

Figure 92 TheSkyX showing artwork for M43 including field of view indicator for my scope
and imager.

The professional package also allows you to overlay your own photos
directly over the sky chart, and with its built in precision automated
astrometry (plate solving) it will line up perfectly.

One of the more popular packages is Stellarium. Stellarium is a great package to get your feet wet with. Since it is a free download, very pretty and fairly small, it is a great place to start. It has some telescope control and several useful plugins, such as the Oculars plugin that shows you what certain targets will look like with a given eyepiece, camera, barlow, etc. Unfortunately the target selection seems rather limited, and images of targets look even more limited.
Figure 93 Stellarium main user interface.

Let’s take a closer look at Stellarium. The images I show are windowed
view, but by default it comes up in full screen mode.

The first thing we need to do after installing Stellarium is to tell it where we are. You can do this by moving the mouse over to the left of the screen and down just a little below center until the side menu slides in from the left side, as you see to the left of this text.

This is an important menu to remember because of that little question mark at the bottom, which brings up the help screen that contains keyboard shortcuts and a link to the online documentation should you need it.

We want to click on the very top icon, the one that looks like a star. This
will bring up the configuration window for our location.
Figure 94 Stellarium side menu

Figure 95 Stellarium location screen.

From here you can search for nearby cities, click on the world map, or
directly enter your latitude and longitude. When you finish be sure to check
the checkbox to save this as your default location. Once you are done here
you can close this window.

Now go back to the menu on the left side as we previously saw. Click on
the icon of the wrench, the fifth icon down. This brings up the general
configuration window. On the top of this window you have six icons. You
want to click on the one on the far right named Plugins.

Over on the left you will see a list of plugins, click on Oculars. Now in the
bottom right corner of that screen you will see a button that says configure,
click that. Now at the top of the Oculars configuration you can click on
Eyepieces. You should see something similar to this:

Figure 96 Stellarium Oculars configuration screen.

There may be more configured in my screenshots than you have. That’s fine; go ahead and click on Add and start putting in your eyepieces. Once you have some in, click on Sensors and put in your camera, then finally click on Telescopes and enter the information for your telescope.

Once you are done with the configuration close all the configuration screens
and go back to the main interface.

You use the bottom ribbon for general controls which you access by moving
the mouse to the lower left side of the bottom of the screen. The menu will
then slide up:

Figure 97 Bottom menu bar in Stellarium.

From here you can do a variety of things including turning on and off
constellation boundaries, turning on and off both equatorial and
altitude/azimuth grids, turning on and off planets/nebulas, pausing the
rotation, etc.

The big feature here is the Oculars view. Move the mouse to the left side of
the screen to get the left menu out, then click on the magnifying glass icon,
fourth one down. You should then see the find box:

Figure 98 Stellarium find box.

Now we need to type in an object we know is in the sky right now. You can try M43; if it is not up, choose another Messier object and continue. Let’s assume it is. Type M43 into the box and press the enter key. This centers our view on M43 and puts some relevant information in the upper left of the screen.

Now move the mouse to the bottom of the screen to pull up the bottom menu and click on the Oculars view icon, the seventh one from the right, which looks like a circle in a square. That should bring up something similar to this:

Figure 99 Stellarium Ocular view for M43.

So why did I say something similar, and why might your view look substantially different from mine? The answer is simple: you are probably using a different telescope, different camera, or a different eyepiece.

To switch between the different things you configured you press Alt-O
which brings up a window like this one:
Figure 100 Stellarium ocular choice pop-up.

This concludes our very short walkthrough of Stellarium.

Next up is a package called C2A, which stands for Computer Aided Astronomy and is a very capable program. Even better, it is completely free. If I wanted to get by on the cheap, this would be my choice.

While not as polished and user friendly as TheSkyX or Stellarium, it is far better equipped with features than Stellarium. It contains many object catalogs such as Messier, NGC, IC and PGC. Unfortunately it lacks the depth of catalogs supported by TheSkyX and AstroPlanner, so while I consider it an excellent program for beginner to intermediate use, advanced users may want something with a little more depth.
Figure 101 C2A main user interface.

If you get semi-serious about imaging you will probably want more. The
top of the line is TheSkyX Professional at $329, next down is TheSkyX
Serious Astronomy Edition (SAE) at $144. From there your next best bet
would be one of the Starry Night versions ($79-$249), then C2A, and
finally Stellarium. Now you may have noticed that I rank TheSkyX SAE
above Starry Night’s most expensive version, and indeed I ‘upgraded’ from
Starry Night Pro Plus 6 to TheSkyX SAE. TheSkyX is just that good. I fully
plan on going to the pro version once other things settle down. It is a great
package that will do everything you ever wanted and then some.
2.3 Tablet software

Tablets, specifically iPads, are so prevalent in today’s world that I just could
not leave them out. Too many times I have enjoyed both visual and AP with
the help of my iPad. From navigating the night sky, to seeing the moons of
Saturn orbit the planet in a planetarium, to just watching videos while my
scope imaged, the tablet has become a staple when I go out.

Why a tablet? One thing a laptop cannot do well is be held up against the sky and move in real time as you move it across the sky. Tablets excel at this. Until you have tried it, you have no idea how cool it is to hold a moving sky chart up to the sky and move it around to see what is in that area of the sky. This is absolutely wonderful for visual exploring with binoculars or a telescope, and it can help you plan targets that are in the same area you are already shooting.
Figure 102 Star Walk app for iPad.

Like desktop planetarium/star charting software, there are several options for this function on tablets. To start with, there are programs such as Star Walk ($4.99, iPad version), a planetarium and planning guide that lists all of the Messier objects and some others. The database is small but the graphics are rich, and for the money this is my choice of app for the casual user. It also has a wonderful feature called Sky Live which shows you the rise/set times of the Sun, Moon, Venus, Mars, Jupiter and Saturn at a glance, as well as their angle in the sky and the phases of the moon. Here is the Sky Live screen:
Figure 103 Star Walk’s Sky Live screen.

More information on Star Walk can be found at their website:

vitotechnology.com
Figure 104 SkySafari app for iPad.

Next up on the list is SkySafari ($2.99, iPad/iPhone/Android). This is the entry level version from Southern Stars and is a very capable program. While not as “pretty” as Star Walk, it is very easy to get down to business with, and with over twice as many objects in its catalog as Star Walk, it means business.

One step up is SkySafari Plus ($14.99, iPad/iPhone/Android). This app lists 2.5 million stars, offers telescope control functions, and includes a list of 31,000 space objects of special interest. It is just as fast and easy as the standard SkySafari, but contains tons more information.

From there we can jump to SkySafari Pro ($39.99, iPad/iPhone/Android), which boasts one of the largest stellar databases of any planetarium program for any platform (including PC/Mac!). This is the app you will eventually wind up with if you are serious, and it is the app I reach for 90% of the time in the field.
My favorite feature of SkySafari (other than its massive object database) is
the search feature. Oddly enough, not for searching:

Figure 105 SkySafari’s search listing.

Once you click on the little magnifying glass in the lower left of the
SkySafari main screen you are presented with this screen. Here of course
you can start typing what you want to search for in the search box at the
top. But that isn’t the cool part.

Notice the items listed below the search bar. Here you can tap on Satellites
for example and you will see a massive list of satellites with the ones that
are currently visible highlighted. You can do the same for Asteroids,
Planets, Comets, whatever. Of course you can tap on Tonight’s Best for a
listing of the best objects to view in the sky as well. I love this feature!

Want to control your telescope with your iPad? No problem! Southern Stars also makes SkyFi, a wireless telescope controller which they license to Orion Telescopes as the StarSeek Wi-Fi module. Not into wireless? Try their SkyWire USB product.

For more information on SkySafari and other products by Southern Stars, see their website:

www.southernstars.com

If you are less interested in having a star catalog but would like to explore
our solar system, watch space related videos and keep up on the latest in the
space program, the NASA HD app is a wonderful little app, and it’s free! It
also has a detailed satellite tracker.

Figure 106 NASA HD app.

More information on the NASA HD app can be found on their website:

www.nasa.gov/centers/ames/iphone/

Other useful apps are things like ICSC Clear Sky Chart to help you know
when you can go out imaging, Moon Globe for finding features on the
moon, and GoSkyWatch planetarium, all free.

I do not own an Android based tablet so I cannot really comment much on them, except to say that several software developers such as Southern Stars develop for both the iOS and Android markets, so I assume either will do.

Another nifty app is AstroAid by Paul Rodman, the author of AstroPlanner, which is discussed elsewhere in this book.

Figure 107 AstroAid main screen.

This little app will let you put in your scope/lens information, your
eyepiece/imager information, and then get a realistic approximation of the
view for that configuration. That can be very handy.
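At the heart of any field of view estimate like AstroAid’s is simple trigonometry. Here is a sketch; the sensor dimensions (the D7000’s APS-C sensor, roughly 23.6 x 15.6 mm) and the 800 mm focal length are just example numbers, not tied to any particular app’s internals.

```python
import math

def fov_arcmin(sensor_mm: float, focal_length_mm: float) -> float:
    """Angular field of view, in arcminutes, for one sensor dimension
    placed at the focal plane of a scope with the given focal length."""
    return math.degrees(2 * math.atan(sensor_mm / (2 * focal_length_mm))) * 60

# Example: an APS-C sensor behind an 800 mm telescope.
w = fov_arcmin(23.6, 800)
h = fov_arcmin(15.6, 800)
print(f"{w:.1f} x {h:.1f} arcminutes")
```

Comparing that rectangle against a target’s listed angular size tells you at a glance whether it will fit in the frame.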
AstroAid unfortunately does not have a dedicated website yet, but you can
still find information on it at the iTunes page for it:

itunes.apple.com/us/app/astroaid/id541448395?mt=8

Scope Help can show you what your polar scope should be set to, give you your GPS coordinates and altitude, provide you with a compass and level, perform field of view calculations, and more. This is an excellent free app for your iPad.

More information on Scope Help can be found on their website:

apps.myhillside.com
2.4 My setup procedure

Sometimes it is difficult to know what to do and what order to do it in, so I thought I would offer this step by step process. Feel free to use it, modify it, and abuse it. This is a constantly evolving system.

1) Set up and level tripod pointing north
2) Install mount and tighten it to tripod base
3) Install telescope to mount
4) Attach dew heaters/plug them in
5) Attach power cables, data cables and hand controller
6) Attach filter wheel and field flattener
7) Attach DSLR
8) Attach cables to camera
9) Balance scope
10) Polar align scope then set to home position
11) Set up table for laptop
12) Plug in cables to laptop
13) Plug in all power cables
14) Turn on dew heaters, mount, laptop
15) Set up folding chair
16) Input date/time/location information into hand controller, then
switch to PC connect mode
17) Launch PHD Guiding software and connect to guidecamera and
mount, this launches EQMOD
18) Set PHD to 2 second frames and take darks
19) Uncover guidescope and telescope
20) Reset wireless gamepad and test connection
21) Turn on DSLR
22) Launch Images Plus and connect to camera
23) Launch TheSkyX and connect to telescope
24) Turn on Orion EZ Finder Deluxe
25) Pick three objects to align on by looking at the sky and finding three
easy objects, the third should be a very bright star to end on
26) Tell TheSkyX to slew to first target
27) Turn on Live view in Images Plus and click the Display Crosshairs
checkbox
28) Use the wireless gamepad to slew until first target is exactly
centered in crosshairs
29) Tell TheSkyX to sync on that target
30) Tell TheSkyX to slew to second target
31) Use the wireless gamepad to slew until second target is exactly
centered in crosshairs
32) Tell TheSkyX to sync on that target
33) Tell TheSkyX to slew to third target
34) Use the wireless gamepad to slew until third target is exactly
centered in crosshairs
35) Tell TheSkyX to sync on that target
36) Launch AlignMaster and go through alignment process
37) Switch to EQMOD interface and clear alignment data
38) Repeat steps 25-35 once
39) Turn off Orion EZ Finder Deluxe
40) Place Bahtinov mask on telescope
41) Use live view at maximum zoom to focus
42) Turn off live view and take a 4 second exposure at ISO800 to verify
focus (double that when using narrowband filters), prefix filename
with “focus-”
43) Tell TheSkyX to slew to first target of the evening
44) Fire off a 30 second exposure at ISO 3200 to verify target placement
and calculate first exposure, prefix filename with “targetname-”
where targetname is something like M15 for Messier 15
45) Test exposures until I get the right one
46) Set exposure properties and number of exposures for time allotted,
including stopping for meridian flips where required
47) Once exposures are complete, install flat frame light source and start
exposing for flats, prefixing filenames with “targetname-flat-”
48) Remove flat frame light source and start exposing for bias frames,
prefixing filenames with “targetname-bias-”
49) Tell TheSkyX to slew to the second target of the evening...
Note there are no darks, as I generally take them at home: in the refrigerator during the winter, in the house in spring and fall, and on the back deck in the summer. I use the same dark frames for multiple targets as long as the temperature, ISO and exposure duration are the same or very close.

If you would like to see what the telescope setup procedure looks like (at
least the daytime setup portion) you can visit this book’s website at:

www.allans-stuff.com/leap

and watch the video “Setting up for astrophotography”.


2.5 Exposure considerations

The first thing you have to understand is image bit depth and dynamic
range.

There is no better way to understand these concepts than graphics, so let’s look at some and see what is going on. I want you to understand that these illustrations are simplified for this discussion, to make the ideas easy to grasp.

A full color image is made by combining red, green and blue channels into one image. Each channel is represented by a scale that shows its bit depth. The image below we will call a 6 color grayscale because it shows 6 shades of gray.

Figure 108 6 color grayscale with percentage of saturation at the top.

Now for the sake of conversation let’s say that the numbers at the top
represent the percentage of light that has come into each pixel of your
camera. When the number of photons hitting the pixel reaches the
maximum of that pixel’s ability to absorb photons (this is called the well
depth) then it is at 100%. Let us further assume that your exposure has the
pixel we are talking about saturated at 15%. In the image above 15% is
clearly in the left hand solid black region so 0% up to 19% is all one color,
black. You might as well not have even opened the shutter.

Figure 109 12 color grayscale with percentage of saturation at the top.

In this image if we talk about the same pixel receiving the same amount of
light, 15% of saturation, we are clearly in the second section which is very
dark gray. When you stretch the image (more on stretching later) and
increase the contrast you can begin to pull out some separation between the
first and second sections, or very dark gray and black.

As you can see, the higher the bit depth (number of colors), the more likely
you are to be able to differentiate between two different shades, and the
more likely you are to pull out your target from the background. It should
be clear that you want the highest bit depth you can get.
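The quantization idea behind those two grayscale figures can be sketched in a few lines. This is a simplification for illustration: it just asks which equal-width gray band a given saturation level falls into for a given number of shades.

```python
def shade(saturation: float, levels: int) -> int:
    """Which of `levels` equal-width gray bands a saturation (0-1) falls into.
    Band 0 is pure black; band levels-1 is pure white."""
    return min(int(saturation * levels), levels - 1)

# The same pixel, saturated at 15%, on scales of increasing bit depth:
print(shade(0.15, 6))      # 6-shade scale: band 0, indistinguishable from black
print(shade(0.15, 12))     # 12-shade scale: band 1, separable from black
print(shade(0.15, 2**14))  # 14-bit scale: plenty of room to stretch
```

The more levels the scale has, the further the 15% pixel lands from band 0, and the more room you have to stretch it away from the background.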

Bit depth is measured a little differently than just counting the number of colors: 1-bit is 2 possible colors or shades of a color, 2-bit is 4, 3-bit is 8, 4-bit is 16, and so on.

Dynamic range is what the camera manufacturers call the bit depth. Since a DSLR uses what is called a Bayer matrix, where a group of pixels is combined to produce one color (one red pixel, one blue, and two green), we are talking about each pixel’s ability to respond to a specific range of brightnesses in that particular color.

Why this concerns us is that dynamic range is reduced as ISO is increased, which causes a problem with exposures. When you increase ISO, sensor gain noise increases, but as you increase exposure time, thermal noise increases. Danged if you do, danged if you don’t. But then there is dynamic range to consider. This leaves you with the best option: use the longest exposures you possibly can with a low ISO to increase your dynamic range, and ignore thermal noise, which will be largely filtered out with darks.

Here is a chart that shows how the ISO affects the Dynamic Range (DR)
and Signal to Noise Ratio (SNR) of a Nikon D7000’s sensor:

Figure 110 ISO chart for Nikon D7000.

Now as we can see, at ISO 100 each pixel in the camera has 14 bits per color component (R, G, and B). Each bit has two options (on=1 or off=0), so this means there are 2 to the power of 14 (16,384) possible shades for each color component. Each final pixel is a combination of three colors, so 16,384 x 16,384 x 16,384 is about 4.4 trillion colors. Now take that down to the lowest number of 6.75 bits and do the same math: 2 to the power of 6.75 is about 107, and 107 x 107 x 107 is about 1,225,000 colors.
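That arithmetic is easy to check in a couple of lines of Python:

```python
# Shades per channel is 2**bits; total colors is that cubed, one factor
# each for the R, G and B components.
def total_colors(bits_per_channel: float) -> float:
    shades = 2 ** bits_per_channel
    return shades ** 3

print(f"{total_colors(14):,.0f}")    # ISO 100, 14 bits: ~4.4 trillion colors
print(f"{total_colors(6.75):,.0f}")  # high ISO, 6.75 bits: ~1.2 million colors
```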

This clearly shows that the higher the ISO, the fewer colors you have,
which we already showed makes it much more difficult to stretch detail out
of the image.

So should you always shoot at ISO 100? No. You need to strike a balance between realistic exposure lengths and maximum dynamic range. First figure out how long an exposure your mount can deliver without any issues and with at least a 90% keeper rate, and then figure out how long you want to be out there.

If you look at the previous chart you will also notice the last row which
shows SNR, or Signal to Noise Ratio. This becomes very important because
we are trying to pull a weak signal out of a universe of noise and this is
another item that helps.

Electronic devices all generate noise. Video cameras, still cameras such as
DSLRs, and audio playback devices all have a SNR value. As a general rule
the higher the SNR the better.

In audio, for example, the SNR tells you what the softest sound is above the level of hiss. I am sure you have noticed that a cheap radio turned way up, but without actually playing anything (like a paused CD player), will emit a lot of hiss from the speakers. The better the equipment in this example, the less hiss you will hear at a given amplification.

The same holds true with cameras. The higher the SNR, the closer to black an object can be and still be recorded over the internal noise (think hiss) generated by the camera. This can be critical for capturing those really faint objects. The SNR only applies to the difference between absolute black and the next darkest shade of gray up the scale.

SNR is measured in dB, or decibels. Every increase or decrease of 3dB doubles or halves the signal. Going back to audio for a second, if you have a radio playing at 20dB and crank it up to 23dB, you just doubled the signal power.
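The decibel-to-ratio conversion behind that rule of thumb can be written out directly (3 dB is not exactly a factor of two, but very nearly so):

```python
def db_to_power_ratio(db: float) -> float:
    """Convert a decibel difference to a linear power ratio."""
    return 10 ** (db / 10)

# A 3 dB increase is (very nearly) a doubling of signal power:
print(db_to_power_ratio(3))   # ~1.995
print(db_to_power_ratio(10))  # exactly 10x
```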

This too is a balancing act of choosing short enough exposures so that you
can keep 90% or better of what you shoot while simultaneously making the
exposure long enough to keep the ISO as low as you can reasonably go.

For most objects I have found that ISO 800 on the D7000 is fairly
acceptable. This gives you about five billion colors total with a 32dB SNR.
I will say however that as I am getting better at stretching images and
coaxing detail out of them I am starting to shoot a lot more at ISO 400 for
about forty billion colors, or eight times the number of colors along with a
2dB gain.

Using the ideas in Part 1 of my series, on bright objects you need to shoot to get the histogram on the left hand side of the graph, but with clear separation from the left edge. This provides a good signal while minimizing any skyglow from light pollution. Then adjust your ISO to the lowest setting you are willing to put up with that still gives you reliably good images. Unfortunately, as you start imaging harder (read: fainter) objects, you have to learn to do some educated guessing based on your experience, which will vary wildly with your equipment, knowledge, capabilities, software, temperatures and, of course, light pollution levels.

Now I have said a couple of times that you should shoot until the black of
space starts to turn gray (skyfog) and then hold off there. Why?

There are two types of noise: shot noise, which is the noise you get from the length of time the shutter is open, and read noise, which is a function of the camera recording the image.

To help you grasp this concept, here is a rather oversimplified way to think of it: you get one point of noise for each minute of exposure you have, regardless of whether it is 100 one-minute exposures or 1 one-hundred-minute exposure, so we start off with 100 points. This is our shot noise.

Next, every time the shutter button is pressed you get one point of read noise. So if you use 1 one-hundred-minute exposure you get 1 point of read noise for a total of 101 noise points. On the other hand, if you use 100 one-minute exposures you get 100 read noise points for a total of 200 noise points.
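That toy bookkeeping, written out as code:

```python
# The deliberately oversimplified noise model from the text: one point of
# shot noise per minute of total exposure, one point of read noise per frame.
def noise_points(frames: int, minutes_per_frame: float) -> float:
    shot = frames * minutes_per_frame  # grows with total time, however it is split
    read = frames                      # grows with every shutter press
    return shot + read

print(noise_points(1, 100))   # 1 x 100 min  -> 101 points
print(noise_points(100, 1))   # 100 x 1 min  -> 200 points
```

Same total exposure time, almost twice the noise when it is chopped into many short frames; that is the whole argument for long subexposures.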

This is why we want to use the longest exposure possible given the amount
of light pollution in the air coupled with the capabilities of our mount.

So let’s put it all together and see where we need to be. In a perfect world
we want to shoot a target as close to zenith as possible and let’s say that
target is close enough to the celestial equator that it stays up for eight or
more hours a night. Knowing all this we choose to shoot two hours before
and two hours after it reaches the meridian for a total of four hours.

Now we need to have multiple frames to put together for averaging out bad
pixels, aircraft trails, and general noise so we want 20 frames as we
discussed earlier.

Knowing the length of time we have and the number of frames we want we
take four hours (240 minutes) and divide that by 20 frames for a total of 12
minutes for each frame. We need some time between frames for the image
to download and the guiding to dither so we reduce that number to 11
minutes. In the middle of all of this we need to perform a meridian flip and
get back on target, so we reduce it again to 10 minutes to give us a little
more time.
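The planning arithmetic from the last few paragraphs can be sketched as follows; the one-minute margins mirror the reductions described in the text, and are starting points rather than rules.

```python
# Divide the available imaging window by the number of frames wanted,
# then shave off overhead for download/dither and the meridian flip.
session_minutes = 4 * 60  # two hours either side of the meridian
frames_wanted = 20

per_frame = session_minutes / frames_wanted  # 240 / 20 = 12 minutes
per_frame -= 1                               # margin for download and dithering
per_frame -= 1                               # margin for the meridian flip
print(f"{per_frame:.0f} minute exposures")
```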

Next, assuming our mount has no problem guiding for 10 minutes we need
to find out what the skyfog will do on a 10 minute exposure. If it does not
turn the black of space to medium or light gray in our image then we go
with 10 minutes. If it does, then we lower the ISO until the background is
dark gray.

This method should give you an excellent starting place to maximize your
SNR, maximize your dynamic range, minimize your noise and give you the
best results in stretching.

If you read around enough you will see claims that it is all about the ISO.
Others claim it is all about the exposure length. Still more claim that if you
stack enough frames together you can solve any problem. I hope you see
now it is a combination of factors.
Still, at this point you may not be sure what the benefits of stacking are, so
instead of yapping incessantly at you I will show you! What you are about
to see are three images of Messier 78, a fairly dim and diffuse nebula. All
three images are derived from the same light frames, all three are processed
the same in the same software one right after another. The only difference is
how they were stacked:

Figure 111 M78 stacked with 71 lights, 25 darks and 25 bias.


Figure 112 M78 stacked with 10 lights, 25 darks and 25 bias.
Figure 113 M78 stretched single light frame, no darks, no bias, no stacking.

None of these are by any means finished images, but they do illustrate the
need for multiple lights and stacking quite well. Note the massive increase
in noise and reduction in detail between the first and second images.
Between the second and third, there is indeed a dramatic difference.

Total time on this target was almost six hours in the first image, fifty
minutes in the second, and five minutes in the third.

The huge difference between the first two is strictly a function of time on
target and quantity of frames. Bias and darks on both the images were
exactly the same so there was no difference there. Now you see the reason
we shoot and stack more frames!

Getting back to why we shoot longer exposures versus shorter ones with the same amount of total time on target, look at this image:

Figure 114 M78 stacked with 20 lights (150sec each), 25 darks and 25 bias.

If you look back two figures you will see where M78 was stacked with 10 lights, 25 darks and 25 bias files; that was done with ten 300-second lights. The image above was done with twenty 150-second lights, for the exact same total amount of time. Indeed they are similar. Now take a really close look.
Figure 115 Fewer long exposures on the left, more short exposures on the right, same total
exposure time.

When you enlarge them and look closely you can clearly see there is more
noise and less detail on the image shot with a larger number of short
exposures. You would expect more detail when you shoot more frames, and
indeed the stars seem a little sharper on the right. Unfortunately the
increased read noise, coupled with shorter exposures not capturing as much
detail, canceled out any advantage you may have gained by shooting twice
as many frames.

Just like the rest of the examples, these were processed the same way, in the
same program, to as close to the same specifications as possible.
2.6 Post processing overview

Once you have all your images you need to process them to make one final
image. While this topic could easily fill multiple full blown books my goal
here is just to give you a general overview that we will build on later.

The first thing you have to do with the images is stack them. Very basically, what this does is take a specific pixel in each image and see how it changes, or doesn’t change, from one image to the next. If the values for this pixel do not change, or change very little, then the value is left alone. If the value changes drastically in one image, such as when a satellite passes through the frame, then that image’s value is ignored and the average value from the rest of the images is used instead. If the value changes randomly from image to image, it is considered noise and either rejected or averaged out. Remember, this is a very basic overview. There are many types of stacking which do wildly different things, but the general idea remains the same.
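Here is a minimal, pure-Python sketch of that outlier-rejection idea for a single pixel position across a stack of frames. Real stacking software uses far more sophisticated, often iterative, per-pixel algorithms; this just shows the principle.

```python
from statistics import median, pstdev

def sigma_clip(values, kappa=2.0):
    """Average one pixel's samples across the stack, rejecting any sample
    more than kappa standard deviations from the median (a crude filter
    for satellite trails, aircraft and hot pixels)."""
    med = median(values)
    spread = pstdev(values)
    kept = [v for v in values if abs(v - med) <= kappa * spread] or values
    return sum(kept) / len(kept)

# One pixel across five light frames; frame 3 catches a satellite streak.
samples = [100, 101, 5000, 99, 100]
print(sigma_clip(samples))  # the 5000 is rejected; result stays near 100
```

With more frames in the stack, the statistics get better and the software can separate signal from noise more reliably, which is exactly why we shoot so many lights.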

The effect of this is to remove noise and increase the signal to noise ratio, helping you pull a faint object out of an even fainter background. As we discussed earlier, this becomes easier (up to a point) the more frames the software has to look at in order to figure out what is noise and what is signal.

Virtually everyone I know started stacking with a program called Deep Sky
Stacker (DSS for short), partially because it is an excellent starter program for
this use, and partially because it is free.

DSS allows you to take all your lights, darks, bias and flats and combine them
into one image. I will note here that DSS is NOT very good at anything but
stacking, so do not do any stretching, saturation boosting, etc. with it. A
great thing about DSS is that you can get pretty reasonable results on most
targets by using the suggested stacking parameters, because the developers
have done a good job with their recommendations. Once you are done here you
can save the output as a 16-bit TIF file for further processing.

Now here comes a problem. We need to take this 16-bit TIF and stretch it in a
graphics program, and a 16-bit editing package generally means a commercial
application. Photoshop (PC/Mac, $699) of course will do this without a
problem; Photoshop Elements (PC/Mac, $99) supposedly can do it with enough
plugins and tweaking. I have heard that Corel Paintshop Pro (PC, $40) can
handle 16-bit images but I have not tried it. Gimp (the popular freeware
image editing software for Windows/Mac/Linux) is not yet suitable, as the
only stable releases are 8-bit (you don’t want to mess with the unstable 16-
and 32-bit releases). I have found that the majority of people who are fairly
serious about this wind up using Photoshop or a dedicated AP processing
application. I know, that’s a lot of money, but what are you going to do?

Dedicated astronomy post processing (PP for short) applications such as
Images Plus (PC, $179), PixInsight (PC/Mac/Linux, over $225) or MaximDL
(PC, starts at $299) can do your stacking, stretching and many other
functions all in one application. One advantage here is that some PP apps do
the stacking and stretching on 24-bit or even 32-bit data, so they can reveal
more detail than working in Photoshop!

Other features that dedicated AP PP apps typically provide include gradient
removal for images shot into light domes or near light pollution,
deconvolution to really bring out details in galaxies and nebulas, background
neutralization to give nice, even, neutral-colored backgrounds, and color
calibration so that your stars have some semblance of color and targets don’t
come out some weird color. This, of course, is just the tip of the iceberg;
there is so much more that can be done in a dedicated AP PP app.

Eventually, if you are really serious about AP, you will have some form of
dedicated PP application such as Images Plus alongside Photoshop. You may
even have something like Adobe Photoshop Lightroom (PC/Mac, $149) to organize
your images, make them easier to export to web/Photobucket/blogs/etc., and
make cropping for specific sizes and ratios the easiest thing you have ever
seen.

A typical workflow might consist of stacking, stretching, sharpening and
noise reduction in Images Plus, then copying that image into Lightroom. There
it is cropped, the clarity is boosted slightly, perhaps with a little
saturation boost, and then you click Edit In Photoshop. Once in Photoshop,
any final tweaks are performed and the image is saved back into Lightroom.
Then it is exported to whatever final destinations you desire (for example,
right-click on the image, click Export, click Photobucket, select the
gallery, then click Export).

When you first start out you will undoubtedly stack your images, give them a
quick stretch, boost the heck out of the saturation to get more color, then run
some form of noise reduction software. All this will take you about 30
minutes or so. As you progress you will add steps, stacking will take longer as
you get more data, and you will find yourself spending more time in PP than
in capturing the image to start with! (Ah how we learn.) When I first started I
could shoot 10 targets in a night, sleep until 1pm and have all 10 targets
posted on my website by 3pm. Oh how those images stunk. Now I shoot one,
maybe two targets in a full night, then spend days processing them, and they
are a little better, heh.

Let’s assume that you want to start out with Photoshop and DSS and not use a
dedicated AP PP app. You will probably need a little help with gradient
removal, noise reduction and other astronomy-specific issues. The answer to
your prayers is a package called Astronomy Tools Action Set ($21.95,
Windows/Mac), which contains tools for all this and much more. While I do not
feel this package comes anywhere close to the ability of a dedicated AP PP
app, it does indeed bridge the gap, and I still find myself using it
occasionally for final touchups. I have also heard of other such plugins but
am not familiar with them. You can get these tools at:

http://www.prodigitalsoftware.com/

Another tool you may need to become familiar with if you are going to work
primarily in Photoshop with a CCD is FITS Liberator (free, PC/Mac). Most
CCD cameras save their images into files with an extension of .FIT.
Photoshop cannot open these directly, so we use FITS Liberator. You can
download the software from:

http://www.spacetelescope.org/projects/fits_liberator/
Figure 116 Screenshot of FITS Liberator.

FITS Liberator can be used for a lot of things, including an initial stretch
of the image. Using this method you could do an initial stretch and then edit
the image further in a program that cannot handle a 16-bit or higher image.
Once you are done stretching, or if like me you do not use it for stretching,
you can click on Save File and save the FIT file to a TIF, which can then be
brought into Photoshop or another editing package.

Keep in mind that if you use a dedicated astrophotography processing package
such as Images Plus or PixInsight, you will not need to convert from FITS to
TIF, as these programs handle FITS files natively.
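If you ever want to script this kind of conversion yourself, the FITS layout is simple enough that a bare-bones reader fits in a page: 80-character ASCII header cards packed into 2880-byte blocks, followed by big-endian pixel data. The sketch below handles only a plain single-HDU image and ignores BZERO/BSCALE, extensions and most of the standard; it is an illustration of the format, not a replacement for FITS Liberator:

```python
import numpy as np

def read_simple_fits(path):
    """Minimal reader for a plain single-HDU FITS image (a sketch;
    ignores BZERO/BSCALE, extensions and most of the standard)."""
    header = {}
    with open(path, "rb") as f:
        done = False
        while not done:
            block = f.read(2880)              # headers come in 2880-byte blocks
            for i in range(0, 2880, 80):      # each card is 80 ASCII characters
                card = block[i:i + 80].decode("ascii")
                if card[:8].strip() == "END":
                    done = True
                    break
                if card[8:10] == "= ":        # keyword = value cards only
                    header[card[:8].strip()] = card[10:].split("/")[0].strip()
        bitpix = int(header["BITPIX"])
        width, height = int(header["NAXIS1"]), int(header["NAXIS2"])
        dtype = {8: ">u1", 16: ">i2", 32: ">i4", -32: ">f4", -64: ">f8"}[bitpix]
        raw = f.read(abs(bitpix) // 8 * width * height)   # big-endian pixels
        return np.frombuffer(raw, dtype=dtype).reshape(height, width).astype(np.float64)
```

From there you can stretch the numpy array however you like and hand it to any library that writes 16-bit TIFs.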
Figure 117 A sample workflow I might use on a simple target.

In the previous figure you can see a typical workflow for a simple target.
Workflows can get vastly more complicated as the targets get dimmer and more
complex. They also get more complex when you shoot monochrome, as you need to
integrate not only lights, darks, bias and flats, but also red, green, blue
and luminance if shooting for color, or Ha, SII, OIII and luminance if
shooting the narrowband Hubble palette.

The workflow example is not meant to be the end-all solution, but a good
starting place to give you an idea of the steps involved in most basic image
processing workflows. If you start with this and then add new things as you
learn them, you will be well on your way to creating some very nice images.

Here is one you probably didn’t see coming: calibrating your monitor. Every
monitor shows colors a little differently, and each shows colors differently
under different room lighting. So how do we know what we see on the screen
is what is really there? We calibrate our monitors.

There are two basic ways to calibrate a monitor: the wrong way, which is to
use our own eyes to try to match colors to some chart or gradient, and the
right way, which involves a piece of hardware and software.

Since everyone’s eyes are a little different, and the ambient light makes a huge
difference, using hardware instead of our eyes makes sure you get an accurate,
unbiased result.

As a photographer, this hits very close to home for me. I need to make sure
that what I see on the screen after hours of adjustment is exactly what I
will see on the print when it comes back from the printer. Hardware
calibration is how I do that.

I use a Spyder Pro from Pantone. This consists of a piece of software which
displays colors on my monitor and a device that sits on the front of my display
to record the results. It then creates a color profile that loads every time I boot
the computer so that my colors stay accurate. This is calibrated to the sRGB
profile (basically a chart that tells what every color should look like) which is
the same profile that my lab (and most professional photo labs) uses.

This doesn’t guarantee that everyone else will see the same thing on their
computer that I do, because their monitors are probably not calibrated. At
least I know mine is right.
2.7 Finding targets, session planning

Now that you have the basic knowledge and tools to capture and process
images, how do you find targets? How do you know when you can shoot which
target? Excellent questions! To borrow a phrase from Apple, there’s an app
for that!

For my session planning I use a program called AstroPlanner ($45, PC/Mac).
This software allows me to set a date, select tons of different catalogs to
search, search for specific object types, specific rise/transit/set times,
and specific magnitudes (or anything my scope can see, after telling it my
telescope type) from specific locations, then load all the results (from a
few to many thousands) and sort them by pretty much anything you could think
of, up to three sort keys per list at one time.

Figure 118 Screenshot of AstroPlanner main interface.


It can show me where in the sky each object is, download images of all the
objects in the list so I can actually "see" what I am looking for, and print
out everything from short little lists to detailed finder charts with
multiple levels of detail for multiple angles of view. Heck, I can even tell
it to slew my telescope to the targets!

You should not have a problem finding what you want, because AstroPlanner
has access to 132 catalogs containing some 824,000,000 objects, and these
are constantly updated (about eight updates in 2012).

Paul Rodman, the author, also runs a Yahoo Groups support group that is
pretty active and great for support. There are also several scripts that work
with AstroPlanner to expand its functionality.

One of my favorite features of AstroPlanner is right on the main screen:

Figure 119 Close-up of the ribbon in AstroPlanner.

In this one little place on the ribbon you can see the sun rise and set
times, moon rise and set times, the current phase of the moon, and the
current altitude of the moon; note also the little arrows on the bottom right
that show you where your highlighted target is in the sky. This makes it
really nice for finding targets at a specific time, as you can see right
where the target is and, from that, where it is going. This is an awesome
feature for scheduling targets.

AstroPlanner also has a nice field of view system included which uses the
images it downloads of targets. Here is an example of M41, the Little
Beehive Cluster:
Figure 120 AstroPlanner field of view screen, borders enhanced.

I should point out that for printing in this book I have enhanced the
rectangular borders on the screenshot above, as they just did not stand out
well enough for printing. They are in exactly the same place; they are just
wider and brighter.

This screen is great for seeing how your target will line up on your sensor.
The above image is for my Nikon D7000 on my 110mm f/7 scope. AstroPlanner
also has a really nice finder chart printing facility which is very
configurable:

Figure 121 Astroplanner finder charts.
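The field-of-view math behind a screen like the one in Figure 120 is simple: the angle covered by one sensor dimension is roughly the sensor size divided by the focal length (the small-angle approximation), converted to arcminutes. A quick sketch; the D7000 sensor dimensions (about 23.6 x 15.6 mm) and the 770mm focal length (a 110mm aperture at f/7) are my own illustrative numbers:

```python
import math

def fov_arcmin(sensor_mm, focal_length_mm):
    """Approximate field of view in arcminutes for one sensor dimension."""
    return math.degrees(sensor_mm / focal_length_mm) * 60

# Nikon D7000 (~23.6 x 15.6 mm sensor) on a 110mm f/7 scope (770mm focal length)
print(fov_arcmin(23.6, 770))   # ~105 arcmin wide
print(fov_arcmin(15.6, 770))   # ~70 arcmin tall
```

So this setup frames a patch of sky roughly 1.75 by 1.15 degrees, which is why an open cluster like M41 fits comfortably on the sensor.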

Now, while I don’t expect you to really read the chart above, you can see it
contains a naked-eye chart (completely configurable) in the upper left; a
finderscope chart to the right of that, so you know what to look for in your
finder (completely configurable for your particular finder); an
eyepiece/camera chart so you know what you should see in the eyepiece or
camera (again completely configurable, and note it is an actual picture of
the target, not a representation); and lastly a chart of helpful
information.

I could go on and on about the features of AstroPlanner, but the intent here
is just to show you the features that are the most helpful in your AP
endeavors.

Regardless of which of these apps you try for your session planning, the
features I showed for AstroPlanner are the ones I find the most helpful, and
the ones you should make sure whatever software you decide to use actually
has.

Figure 122 Deep Sky Planner main objects search screen.


Another package that does roughly the same thing is Deep-Sky Planner
($65, PC), which has a smaller catalog at 1.25 million objects but tighter
integration with planetarium programs.

Knightware says they have the most up-to-date information of the programs
I mention, and a unique compilation of Arp data.

Figure 123 Skytools main screen.

The most expensive package I have seen is SkyTools ($39-$179, Windows),
which incorporates many objects, primarily 500+ million stars down to 20th
magnitude, along with many features aimed at visual observers, such as
difficulty ratings for splitting doubles. It even includes some exposure
calculations for AP, although I would personally take those with a grain of
salt. Not because I think they are necessarily wrong, but because seeing
conditions and light pollution levels vary so much that I can’t believe you
could just calculate an exposure and go.

Don’t confuse these apps with features in smaller apps that show you what
targets are up tonight. Some of these smaller versions only include targets
that can be seen visually with a small scope, some of them are limited to
Messier objects only, and some do not allow any searching or sorting. Most
of them will not print out detailed finder charts.

I am sure there are many more planning apps, maybe even some that are
free. I have tried to provide information on ones I have actually seen people
use instead of just blasting out a list of software no one actually uses for
anything.

Since I typically will spend an entire night, from sunset to sunrise, imaging,
I tend to plan far in advance. Before a new month arrives I will typically
use AstroPlanner and TheSkyX to locate targets I want to image, then I
make an Excel spreadsheet containing several "target packages" which are
all night sessions planned as to what time I will start imaging what target,
when I will switch targets, times for meridian flips, etc. In addition I include
alternate targets and fun small targets if I have a short amount of time to do
something while waiting for something else.

This gives me roughly four full nights of targets, plus many alternates in
case I cannot shoot the primaries for whatever reason (I have had nights
where everything to the east was covered in clouds but the west was as clear
as could be), already printed out, inside a clear sleeve (dew kills paper,
you know) and in my laptop bag ready to go. When a clear night good for
imaging arrives, I do not waste time making lists or hunting for targets; I
just grab and go!

I mentioned clear sleeves or page protectors to keep the dew off the paper
and this took me a while to think of. I keep those on a clipboard with what I
have heard called a wax pencil, grease pencil or china marker which writes
on the clear sleeve even through water and is easy to wipe off with a rag.
This is really helpful for making quick notes, marking off targets, etc.

You will find that when doing serious long exposure AP work, planning your
sessions becomes crucial. You need to be aware of meridian flips, plan
enough time to reacquire and recenter your target after a flip, plan time to
shoot flats immediately after taking lights of a specific target, and much
more. It will take you a while to get everything ironed out, but once you
do, you can spend your time waiting for a run to finish watching TV on your
tablet, doing visual work with a second scope (what? I failed to mention you
should have a second scope? LOL!), or chatting with other people out
imaging with you.

With these apps you will have access to hundreds or thousands of targets
throughout the year, so how do you decide what to shoot? You could just
pick the largest on a given night, or the brightest, or throw darts. Many
people, myself included, started shooting the Messier list first, then went on
to the Caldwell list, then expanded from there.

The Messier list is a list of 110 objects, all visible from the northern
hemisphere, that were cataloged by French astronomer Charles Messier in the
1700s. While he was looking for comets he noticed certain “fuzzy” objects
that kept getting in his way and could be mistaken for comets, so he
cataloged them. With more modern telescopes we discovered that they are
some of the most beautiful objects in the sky, and some of the most easily
viewed.

The Caldwell list was compiled in 1995 by Sir Patrick Moore (middle name
Caldwell) and consists of 109 objects across both the northern and southern
hemispheres. Some of these objects can be a bit more challenging than the
Messier objects, so this is a logical next step.

From there the sky is the limit, pun intended. You could try the Herschel
400 or take off entirely in your own direction with tens of thousands of
targets in the NGC catalog.
2.8 Astrophotography with camera lenses

Once in a while you might want to capture a larger expanse of the sky than
your telescope will allow and it occurs to you that you could just bolt your
camera and lens to your mount or telescope and take long exposures that
way. Absolutely!

Figure 124 Camera with telephoto lens mounted directly to the mount.

Many of the same things you learned about taking long exposures through
your telescope remain exactly the same; the one big difference is that with
a camera lens you can set an f-stop, whereas on your telescope the focal
ratio is not adjustable. Many people will tell you to open the lens up to
its maximum opening, or smallest f-stop number (for some lenses this could
be f/1.8, for others f/2.8, still others f/4 or f/5.6). These people are
obviously not photographers! A photographer would never tell you to do that
unless you have no choice. The only times a photographer uses the maximum
aperture on a lens are when they want maximum separation from the background
(small aperture numbers allow a very narrow depth of field, with very little
in focus) or when the light is so low they have no other way to stop action,
as in night sports.

The problem here is that the further you open a lens up towards its
maximum the less focused the image becomes. Any lens with a maximum
aperture of f/1.8 will be “softer” at f/1.8 than at f/4. It is the maximum of
the lens that is the concern, not the actual aperture number. So a lens with a
maximum aperture of f/1.8 will be far sharper at f/4 than a lens whose
maximum aperture is f/4 shot at f/4. Make sense?

There is another side to this, of course. Once you pass f/11 or so with most
lenses you start to get what is called diffraction, which again starts to
blur the image. Lenses are typically sharpest right in the middle of their
range. So if we assume that a lens has the marked aperture settings of 2.8,
4, 5.6, 8, 11, and 16, either f/5.6 or f/8 would be the sharpest the lens
can be. Generally speaking, opening up to f/4 on this lens would not be too
bad, so you could use it if you like.
Figure 125 Example of diffraction in photography.

If you look very closely at the image above you will note that as the f
number (f-stop) increases (aperture decreases) the center line (“making such
a”) gets slightly sharper between f/5 and f/8, stays in sharp focus at f/16,
then starts getting blurry at f/32. This is the effect of diffraction. It also
illustrates that at f/5 only a very small portion of the line is in focus (the
bottom of the “ma” is obviously blurry) even though this lens will open up
to f/2.8 (although not at this very small distance). With this lens, you get the
most in focus, and the sharpest focus for your subject, at f/11 or f/16. The
range on this particular lens is f/2.8 through f/32.
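You can put a rough number on diffraction with the standard Airy disk approximation: the diffraction spot diameter at the sensor is about 2.44 times the wavelength of light times the f-number. A small sketch (assuming green light at 550 nm; the function name is my own):

```python
def airy_disk_diameter_um(f_number, wavelength_nm=550):
    """Approximate diffraction (Airy disk) spot diameter in microns
    at the sensor: ~2.44 * wavelength * f-number."""
    return 2.44 * (wavelength_nm / 1000.0) * f_number

for f in (4, 8, 16, 32):
    print(f"f/{f}: ~{airy_disk_diameter_um(f):.1f} microns")
```

At f/8 the spot is about 10.7 microns, on the order of a couple of DSLR pixels; by f/32 it has grown to roughly 43 microns, which is why stopping down that far visibly softens the image.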

So why should you care about sharpness? Simple: if you don’t care about
sharpness, why are you worried about focusing? That is what focusing is; it
brings the image together to make it sharper, more defined. Unfocused stars
are no longer sharp little points of light, just as they would be when shot
with a lens at its maximum aperture. Stars are so far away they appear as
points of light; knocking them out of focus or shooting at an incorrect
aperture makes them swell up like blowfish.

Another thing to remember when using your camera and lens on your
telescope is to make sure the entire assembly is balanced together. I have
seen people balance their scope and get all set up, then bolt a DSLR and
huge lens to the top and then wonder why the mount isn’t tracking correctly.
Well DUH! You just added two pounds to it!

Next we have the issue of focusing. Many camera lenses will actually let
you focus past infinity. Since infinity is... well... infinite, you can see where
this is a bit of a conundrum. The trick is to disable autofocus and use your
live view to zoom in as far as you can (do NOT zoom your lens if using a
zoom lens; leave it where you want it to take the pictures, since lens zoom
can change focus!) and focus by making the stars as small as possible. Once
this is done, make sure you do not touch the focusing ring on your lens, and
then start shooting.

I want to touch on something else a non-photographer would probably
overlook. Large lenses attached to camera bodies should not be mounted on a
telescope, or a mount, by the tripod thread on the body. The problem is that
most cameras today have plastic outer shells, including where the tripod
mounts, and maybe even at the lens mount. When you mount a large, heavy lens
and then support only the camera body from its tripod socket, you are just
asking for something to break. Most large lenses come with a tripod collar
that has its own socket for mounting onto a tripod. Use that instead of the
one on the camera.

Many scopes have adapters for the scope rings or other places to mount a
camera by using a ¼″x20 screw. This is the easiest way to mount a camera
for widefield work as long as two things are true:

First, your camera and lens should not exceed the maximum payload of the
mount when added to the rest of the equipment it is being bolted to.

Second, you should make sure that your lens does not “see” the telescope
since it will have a fairly wide field of view. It stinks to have shot a whole
series of images only to find out that the top of the telescope is in the
bottom of all your images!

When using your camera mounted on your scope you can still use guiding
since the main scope is being run by your guiding setup. But what if you
just want to mount your camera directly to the mount?

If you are using a wide angle lens, something less than 50mm, then you can
get excellent results with just a good polar alignment for exposures up to a
couple of minutes; guiding may not be necessary. The longer the lens, the
more necessary guiding becomes.

One trick is to get adapters that allow you to mount two dovetail bars (the
bars your telescope is bolted to that connect it to your mount) side by
side. Such an adapter has a single dovetail that goes into the mount, then a
T-shaped device, then two dovetails or dovetail clamps side by side to mount
scopes or cameras on.

With this setup it is possible to mount your camera and lens to one side,
then your guidescope to the other side and guide the mount with that. This
allows you to use any length lens you may have with excellent accuracy.

Many lenses allow manual focusing with a focusing ring on the lens itself,
and for us this can be a problem: a slight touch of the lens while imaging
(for example, when a cable passes over it or your jacket brushes it) can
cause it to lose focus. The same thing can happen with the zoom ring on zoom
lenses. The simplest solution is a small piece of gaffer’s tape or painter’s
tape (either will leave little to no residue and is easy to remove) to stop
the rings from rotating once you have them set where you want them.

Also on zoom lenses, tilting the lens (like when your mount rotates) can
cause it to shift the zoom. This is especially true on inexpensive zoom
lenses. Professional lenses are usually far better designed to prevent this.
Again, the little strip of tape solves this issue.

Take a close look at the image at the very front of this section and you will
notice that I have the camera and lens bolted to a dovetail which is then
inserted into the dovetail clamp of the mount. This is how I normally shoot
with a camera. Since this is a fairly large lens I have the bolt through the
dovetail bolted into the lens’ tripod collar.

There is, however, another interesting way to mount your camera to your
scope setup:

Figure 126 Using an extended dovetail for camera mounting.

Look closely at the image above. Note that under the telescope the dovetail
bar extends out substantially farther than it needs to. There are two reasons
for this.

The first reason is so I could mount a camera body upside down right below
the scope. This was mainly used before I acquired a second GoTo mount, so I
could shoot the same target wide field and through the scope at the same
time. It still comes in handy on those occasions when I don’t feel like
dragging out the second mount, or when the second mount is doing something
else and I need a third camera.

The second reason was found quite by accident: this extension makes a
wonderful handhold when mounting and dismounting the scope.

I bought this dovetail from Orion. It is their extended rail, and unlike
many rails it is a single solid piece of aluminum drilled for the scope
rings, with a couple of extra holes for things like a camera mount, so it is
extremely rigid.
2.9 Brand specific considerations

Brands in many things are not something to worry about, but there are some
specific considerations as to brands of some items in AP.

There are many different brands of DSLRs, including Canon, Minolta, Nikon,
Olympus, Pentax, Sony and others. You will see a lot of people using
Canons, followed closely by Nikons. Canons clearly have the most software
support for astrophotography, and you will hear many people say you should
only consider Canons for this reason. Well, do you base your car buying
decisions on what make of car your auto parts store has the most parts for?
Do you even have a clue as to what make that is? I thought not.

You can shoot with any brand of camera you happen to own. There are,
however, some considerations for different manufacturers.

The first one that jumps out is Sony. Some users have reported what is
called amp glow in some models. This means the camera picks up light from
what looks like the imaging sensor itself glowing. It usually starts in one
corner and quickly spreads over the entire image area, and this kind of
gradient can be very difficult to remove. Amp glow can be minimized by
taking shorter exposures and making sure you take darks. Amp glow is not
exclusively a one-manufacturer issue; it can and has appeared on many
cameras, especially older models.

Fortunately, seeing if your camera suffers from amp glow is very easy: put
the lens cap on, set the ISO to 1600 or so, and fire off a ten minute
exposure. If the result is almost perfectly black across the whole image,
there is no amp glow.

Nikon cameras have an issue in that unless you purchase a special adapter
you are limited to 30-second exposures. Shoestring Astronomy has both USB
(preferred) and IR (for lower end cameras) links that allow bulb capability
for exposures of any length. If you are purchasing the camera for AP work,
or AP in addition to regular photos, I would suggest a D5000 or higher
(D5100, D5200, etc.), a D90 or a D7000. The D300/D300s, D600/D700/D800 and
D3/D4 series all work as well but are far more expensive with no real gains
for AP work. Older Nikon bodies are not really recommended, as their light
sensitivity coupled with other processing factors makes them less than
ideal, like most older cameras from other manufacturers.

Older Nikon cameras suffer from an issue that people have termed “eating
stars”. What happens is that when the image is taken, the camera attempts
to remove some thermal noise. This can result in very faint stars being
misidentified as thermal noise and removed.

Some interesting information on this topic can be found at:

http://www.astrosurf.com/~buil/nikon_test/test.htm

After reading the information on the Nikon problem, you probably think
you should avoid them and go Canon instead. First, the Nikon problem does
not apply to current cameras (I have never seen the issue with a D90,
D7000, D5100 or D5200). Second, this is only an issue with extremely faint
stars that are difficult to see in the final image. For more information on
these two points see this link:

http://forums.dpreview.com/forums/post/37071846

Third, although the article clearly states that Nikon RAW files are not
really raw, as we will now see, neither are Canon’s. Canon edits the data
before it gets to the RAW file format, which can make accurate dark
subtraction very difficult. More on this subject can be found at:

http://www.stark-labs.com/craig/resources/Articles-&-Reviews/CanonLinearity.pdf
As for other manufacturers, I have heard nothing negative. When considering
a particular camera, check that the processing software you want to use can
handle its raw image format, that a T-ring for the camera body is available,
and that camera control software like Images Plus or MaximDL supports
controlling exposures for it. Then I would check with other people using
that brand and model of camera to see if there are any known issues.

If I was purchasing a camera now and had both regular photography and AP
in mind I would start by looking at something like the Canon 60D and the
Nikon D5200 and play with both of them, go through the menus, take a few
shots and see which one “called my name”. Either one would be excellent
for both tasks so the main concern is how well you will like using it and
how comfortable it is for you. Then I might look at a step up and compare
the Canon 7D and Nikon D7000 for the weather sealing and other features
to see if they might be something I wanted.

Now for me, and me alone, I would pick the Nikon over the Canon any day
of the week and twice on Sundays. This is a personal decision which is why
I suggest you go play with several models and see what you like. My brain
just does not mesh well with Canon ergonomics or their menu structure.
This is not to say there is anything “wrong” with Canons, or “better” with
Nikons, although.... Really this is a Ford versus Chevy kind of thing.
Getting what works well for you is far more important than what name is
stamped on the front of your equipment.

If I was purchasing a camera specifically for AP and nothing else, and had
nothing to start with, honestly I would probably go with a CCD instead of a
DSLR. There are some great little CCD cameras out there, like the Orion G3
series in either color or monochrome, which you can get for $499. For that
price you get a cooled camera, which can greatly decrease noise in your
images; it is already “modified,” so it has great response to Ha and SII;
and it is smaller and lighter than a DSLR.

Computers are divided up into two basic brands, Apple (usually referred to
as Mac for the Macintosh name) and PC (in this context running Microsoft
Windows). The primary difference here is that although Apple makes a fine
operating system and fine hardware, it is not the best solution for AP.

Can you use a Mac? You absolutely can. It will take more work, have far
fewer choices and cause some gray hair but it can be made to work.

To my knowledge there is no free stacking program for the Mac such as Deep
Sky Stacker, but commercial alternatives such as PixInsight are readily
available. EQMOD is only available for Windows, so you may have to use Boot
Camp or a virtual PC solution to run it. Controlling your camera may prove
difficult, as the two top recommended camera control options, Images Plus
and MaximDL, are Windows only.

Then of course if you did manage to get some of them to run under a virtual
PC, getting them to communicate with the other Mac software running
would be next to impossible.

Unless you just want to run a Mac out in the field to say you did it, you
may be better off buying an inexpensive used PC for the field and doing the
rest of your processing at home on your primary Mac. Currently, around $250
will get you a nice laptop on some auction sites, complete with a built-in
serial port, that would be perfect for AP field use.

Linux, although not really a brand per se, is even worse than the Mac when
it comes to AP software, and I have the same recommendation: just purchase
an inexpensive Windows machine for AP unless you like suffering. If you
insist on self-inflicted pain, I have heard there has been some limited
success with Wine, a compatibility layer that allows some Windows programs
to run under alternate operating systems.

Let’s assume you decide to buy a used laptop for use in the field. What
brand should you consider? I would highly recommend a Dell Latitude, as
they are built very tough and can easily be had with a built-in serial port. In
addition, Dell sells so many laptops that parts are easy to get should
something need to be replaced.

One last myth I want to dispel, which really isn’t so much about brands but
still seems to fit here: just because two things look similar, and may even be
made in the same plant by the same people, that does not make them the
same.

For example, let’s take two eyepieces, an Orion Stratus 24mm and a Baader
Hyperion 24mm. They look the same, the specs are the same, and they are
similarly priced. In fact, if you take the names off of them they are very hard
to tell apart. Does that mean they are the same? Absolutely not!

I should first point out that for all I know, they may be identical in every
way except for the names on them. I am also not saying either of the brands
is doing anything wrong or using sub-standard components. I have and use
products from both manufacturers and am using them as examples only
because they happen to have eyepieces that look very similar. I have
nothing but respect for both Baader and Orion and their products.

With that being said, one company may have specified better glass in the
lenses, blackening of the lens edges to improve contrast, better grade metal,
better grade rubber in the eyecups, tighter tolerances, waterproofing, and
who knows what else.

My day job is in IT, and one of the things we do is build custom computers
for our clients. In one day my shop could build five different computers
that had nothing in common at all but the case they were in. Same builders,
same case, same office, same bench, same day, yet not one single component
inside the cases would be the same. From the outside, however, they would
look, and indeed be, identical.

For all we know, as certain units failed quality control, those rejected by one
company were sold to the other at a discount. Don’t think that doesn’t
happen in this day and age, where price seems to mean everything; I assure
you it can and does.

The moral here is to never judge a book by its cover. If you see two items
that look the same, and maybe were even made in the same factory in China,
but have substantially different prices, odds are there is a reason one is
cheaper than the other. When in doubt, it is better to go with the brand name
you know and trust (see, I told you it would fit in here) than to wind up with
something sub-standard and spend days and weeks trying to figure out
where that reflection is coming from, why it never seems to lock into focus,
or why the threads never screw on right.
2.10 Diagnosing image errors

At some point you will probably run into a situation where images do not
come out like you expect, and you need some way of figuring out what went
wrong.

The first type of problem is images with stars that are not round. We need to
break the raw, unprocessed image into five sections as shown here and pick
apart each section to see what is going wrong.

Figure 127 The five points to use diagnosing out of round stars.

Using these five boxes we can find out exactly what is wrong here; let’s start
with the center box:
Figure 128 The center section of our image.

Looking closely, what we see here are generally round stars. They are
slightly elongated, which, when it appears in the center of the image, usually
means our polar alignment is off; that isn’t the issue we are after at the
moment, although it is something we need to fix.

Let’s take a look at the top left square and see how it compares to this center
square.
Figure 129 Top left corner of our problem image.

Now we begin to see a serious problem: the stars here are not just a little out
of round, they are badly deformed! So how do we figure out what the issue is?
We look at the bottom left image:
Figure 130 Bottom left of our problem image.

We see here the exact same problem, except in a different direction. The stars
seem to be blurred (or fishtailed) towards the center of the frame. Checking
the other two corners confirms this suspicion: they too blur towards the
center.

This problem is caused by our field not being flat, a fairly common issue with
refractors that are not using a field flattener, or where the field flattener is
not spaced the correct distance from the imaging sensor. To see where the
spacers go, let’s take a look at the image that opened section 2:
Figure 131 Field flattener spacer placement.

Spacers for field flatteners normally come in 5mm, 15mm, and 30mm widths
and can be combined until you reach the correct spacing. Different field
flatteners attach differently and can use different types of spacers, so use
this as a reference only.

The above image shows my imaging train, which includes custom
manufactured adapters not commercially available (the silver aluminum
piece).

You will need to add and subtract spacers, taking test frames, until the stars
in the corners are as round as you can get them.

What if the stars are blurred in the opposite direction (fishtailing away from
the center)? Then you are probably looking at coma, which is common in
reflector telescopes and can be mostly corrected using a coma corrector such
as the Baader MPCC.
As we previously mentioned, if stars are not round in the center of the frame
we have a tracking error, most likely because our polar alignment is not as
good as it could be. This can be greatly improved by using the software we
mentioned earlier, AlignMaster. It could also be caused by poor balancing, so
be sure to double check your balance.

Figure 132 Comparison of focus issues.

Now we come to “bloated” stars, or stars that seem fuzzy and larger than they
should be. Above, you see two images of the same area of Messier 8, the
Lagoon Nebula. The image on the left has larger, fuzzier stars because it was
not focused correctly. You may also notice that the nebula portion of the
image is much blurrier on the left as well. One last point about these images:
the crops presented are very close to the center, and looking closely we can
see that the image on the right has better polar alignment than the one on the
left, as its stars are closer to perfectly round.

Both images are massively enlarged to show the detail and comprise 1/48th of
the entire image.

While we are here, note the blue (medium gray as shown in the book) halo
around the stars in the image on the right. This is much harder to see in the
full sized image, which is 48 times larger, but in some cases you may notice
it in your own images. It is chromatic aberration, caused by the telescope not
bringing all the colors to focus at the same point, and can be mostly
eliminated by using an APO refractor (or a well collimated reflector). Since I
use a doublet APO refractor, the images are not corrected as well as a triplet
APO could achieve, so we still see some of this, especially in massively
enlarged images.

Stars can do other things as well, for example, if all the stars in your image
look similar to this:

Figure 133 Example star suffering from flexure.

Then you may have a problem with flexure. Flexure occurs when parts of
your imaging setup flex as the scope rotates, and it normally gets worse with
longer exposures.

Let’s take a look at an example:

Figure 134 An example of flexure.


Gravity always pulls straight down, so in the image on the left the guidescope
is being pulled towards the main scope directly below it, whereas on the right
the guidescope is not being pulled towards the main scope because the main
scope is no longer directly underneath it. If the guidescope and all its
accessories (such as the guide camera, extensions, cables, etc.) are not
extremely secure, this can introduce flex. Keep in mind that flex can also
happen as the scope tilts onto its side, and depending on where the flex is,
your stars may be streaked vertically, horizontally, or diagonally. Just
remember that a solid streak is different from the fishtailed spread of the flat
field problem we saw earlier, and that when you are suffering from flex, all
stars in the image will be streaked the same way, regardless of where they
are in the frame.

Figure 135 An example of a star suffering from an internal mount issue.

Now we have something completely different: a star that looks like a “v”,
though it could present as a “z” or even an “s”. The thing to remember here is
that the star streak seems to change directions. This is most often caused by
an internal problem with the mount, such as a worn gear or bearing. I have
also heard of issues with cables dragging or binding (for example, the USB
cable for your imaging or guide camera), which can cause the mount to hang
as it rotates.

I would first check anything that could cause the mount to bind (such as the
cables mentioned above), then make sure the RA and DEC locks are engaged
properly. If the problem continues on all images over a period of time (once
or twice, or just one evening, could simply be wind), check your backlash. If
none of that works, I would look into having the mount serviced.

The next issue you might face is stray light entering the frame, and I happen
to have one image with two examples of it.

Figure 136 Light rays from the left side.

This image is the very top of the Witch Head Nebula, IC2118. Note the
streaks or rays coming in from the left side of the image. Since this target is
very faint, I stretched it hard in this example and that made the light rays jump
out even more.

The next issue, on the other side of the same image, is harder to see
(especially in black and white):
Figure 137 Color bands from right to left.

If you look very closely you may see what looks like “(((((” in varying
shades of gray (they are actually colors: blue, green, and red) running
through most of the image from right to left. This is the prismatic effect of a
reflection.

Looking for the cause, I found a very bright light off in the distance pointed
right at my scope. The banding was so bad I saw it immediately with a quick
stretch in PixInsight before I ran the sequence. I started covering everything I
could think of until the quick stretch showed no sign of it. I believe light was
reflecting up into my filter wheel through the opening where you manually
turn the wheel, so I covered that with electrical tape.

Unfortunately, after the final stacking and stretching it was still there, just
much harder to see, so I cannot be sure whether I covered everything with
tape as well as I should have, or whether the light was coming from Rigel
(the star illuminating IC2118, which is indeed in the correct direction) and
whatever I did only softened the reflections. Either way, it seems light was
entering the imaging train from the left side and reflecting off something on
the right side. Keep in mind that when you are heavily stretching an image,
even the tiniest amount of stray light can really affect it.

Figure 138 An example of vignetting.

Vignetting is the darkening of the edges of an image and is caused by
restricting the light path too much for the camera sensor you are using. For
example, using an APS-C (crop sensor) DSLR with a 1.25″ camera adapter
can cause some pretty bad vignetting; using a full frame DSLR with that
same 1.25″ adapter can cause horrible vignetting like that shown above. The
image on the left has severe vignetting; the image on the right has no
vignetting to speak of.

For the most part this can be solved using flat frames, as we discussed
earlier. In severe cases flat frames will help, but the edges will still suffer a
wide variety of issues once they are stretched as hard as the center region,
since they did not receive as much exposure.
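If you are curious what flat frames actually do mathematically, the core of it is a per-pixel division. Here is a minimal sketch in Python with made-up numbers (real calibration also subtracts dark and bias frames, which I am skipping here):

```python
import numpy as np

# Toy one-dimensional "image": the edges received only half as much
# light as the center because of vignetting.
light = np.array([50., 80., 100., 100., 80., 50.])

# The flat frame, shot against an evenly lit target, records the
# same falloff pattern.
flat = np.array([0.5, 0.8, 1.0, 1.0, 0.8, 0.5]) * 100

# Divide each pixel by the normalized flat: pixels that received
# less light get boosted proportionally, evening out the field.
corrected = light / (flat / flat.mean())
print(corrected)  # every pixel now has the same value
```

Notice that in severe vignetting the edge pixels are being multiplied by a large factor, which is exactly why their noise gets amplified when you stretch.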

The trick with vignetting is to never let it get this bad. Use a 2″ or better light
path with DSLRs and never use a 1.25″ filter.

You may notice that some CCDs have built-in filter wheels with very small
filters in them; this is because the built-in filter wheels sit extremely close to
the imaging sensor, so the filters can be much smaller.
Images Plus 5.0 loaded with the cover image from this book
3.1 Shooting mono to get color

I should probably mention that as of right now I do not, and never have, used
a mono camera other than my guide camera. So how the heck do I know what
I am talking about? Simple: the exact same principles apply to shooting one-
shot color and splitting the colors out, only to recombine them later. Why
would anyone do that? Patience, Grasshopper!

Every camera records in monochrome, yes, even your DSLR, and yes, even a
film camera!

Figure 139 A Bayer matrix.

Above you see a representation of the Bayer matrix used in DSLR CMOS
sensors and in one-shot color CCDs. The gray squares are the actual sensors
in the camera, called photosites; each colored square (marked with an R for
red, G for green, or B for blue) is a filter on top of a photosite. These are
combined to form a single color in the camera, so the output you see is in
full color. Every four-photosite square (one red, one blue, and two green,
because the human eye is most sensitive to green in normal daylight) is
combined using a complex math formula to create one colored area with four
pixels of detail.
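To make the four-to-one grouping concrete, here is a toy sketch in Python. The numbers are invented, and the simple averaging below is only an illustration; real cameras use far more sophisticated interpolation:

```python
import numpy as np

# A tiny 4x4 RGGB Bayer mosaic; each number is one photosite's raw
# brightness reading (0-255).
mosaic = np.array([
    [200,  50, 180,  60],   # R G R G
    [ 40,  90,  55,  95],   # G B G B
    [190,  52, 170,  58],   # R G R G
    [ 45,  88,  50,  92],   # G B G B
], dtype=float)

r = mosaic[0::2, 0::2]                              # red photosites
g = (mosaic[0::2, 1::2] + mosaic[1::2, 0::2]) / 2   # average of the two greens
b = mosaic[1::2, 1::2]                              # blue photosites

# Each 2x2 group of photosites yields one full-color pixel.
color = np.dstack([r, g, b])
print(color.shape)  # (2, 2, 3): a 4x4 mosaic became a 2x2 color image
```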

How is color film actually monochrome? Color film is actually
three layers sandwiched together. Each layer is treated to respond
to one color of light only: red, green, or blue. By mixing these
three together you can get a full color image, just as a standard
printer today mixes CMYK inks (cyan, magenta, yellow, and
black).

You can manually do the exact same thing with a monochrome camera and
three colored filters:
Figure 140 NGC2244 red, blue, green channels and the final combined image.

The four images above are the three color channels and then the final
combined image. This is how monochrome imagers create color images, and
of course, how your camera works. This is called RGB (easy!).

If you want to try this yourself you can download the original high resolution
channel files seen above from this book’s website at:

www.allans-stuff.com/leap

They are listed as the NGC2244 RGB channels download.

So how do we combine the three black and white images into a single color image?
Figure 141 Channels tab shown in Photoshop.

On the same palette as your Layers in Photoshop you will see a tab called
Channels. This is where you can split the red, green, and blue into separate
images, or combine them into one color image. It is also how you can adjust
one specific color without touching the others.
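If you prefer to see the idea outside Photoshop, the combine step is nothing more than stacking three grayscale arrays into one color array. Here is a tiny sketch in Python (the arrays are synthetic stand-ins; in practice you would load the downloaded channel files with an image library):

```python
import numpy as np

# Synthetic stand-ins for three monochrome exposures of the same field.
h, w = 4, 6
red   = np.full((h, w), 120, dtype=np.uint8)
green = np.full((h, w),  80, dtype=np.uint8)
blue  = np.full((h, w), 200, dtype=np.uint8)

# Stacking the three grayscale frames along a third axis produces one
# color image, which is exactly what merging channels does.
rgb = np.dstack([red, green, blue])
print(rgb.shape)  # (4, 6, 3)
```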

You do not have to shoot red, green, and blue filters to use this function. This
is also how people shoot “narrowband” using Ha, SII, and OIII filters, among
others. They use a monochrome camera (or, in my hard-headed case, a
DSLR) and shoot one set through the Ha filter, one set through the SII, and
another set through the OIII, then combine them on the green, red, and blue
channels respectively (for “Hubble palette” images). You can mix and match:
shoot one channel through a regular colored filter, another through a
narrowband filter, and a third through no filter at all, then combine them.
You can even combine MORE than three colors by adding new channels!
While there are no rules, I suggest you start with standard RGB and/or
Hubble palette narrowband to get a feel for things and then move on.
Figure 142 The moon shot in narrowband and put together in Hubble palette.

Above is an example I shot just for this. (Yes, it stinks. Leave me alone; it is
just a demonstration piece.) It consists of one frame shot through a Ha filter
assigned to the green channel, one shot through an OIII filter assigned to the
blue channel, and one shot through a SII filter assigned to the red channel.
All frames were shot with a DSLR using 2″ Baader Narrowband Imaging
filters in an Orion 2″ 4-filter wheel.

What I wanted to show with this image is that it comes out slightly different
from a normal moon image, seen next:
Figure 143 Standard RGB image shot with a DSLR, no significant processing.

Note that the large impact crater on the right, and the streams of debris
radiating out from it, are much more defined in the Hubble palette image
than in this one. Different assignments of colors, and different filters shot for
each color, can produce dramatically different results.

To process the Hubble palette image I opened each of the frames I shot in
Ha, SII, and OIII in Photoshop. I went to the Ha image, selected the entire
image, copied it, and created a new image. In the new image I went to the
Channels tab and deselected the blue and red channels; then, with only the
green channel active, I made sure it was selected and pasted my Ha image
into that channel. I repeated the process for the other two channels. Next, I
manually aligned the images and opened the histogram. Lastly, I used the
Levels adjustment to balance the histogram between the three colors as
much as possible.
All this becomes important to understanding how an image is put together and
how we manipulate it.

Another issue between monochrome and color cameras is resolution. If you
take a look back at the first figure in section 3.1 you will notice that each
photosite, or pixel, records one color. To make a real color image, we just
learned, you need three colors: red, green, and blue. So how does that relate
to the resolution of the camera?

Color cameras, whether CCD, DSLR, or point & shoot, all work the same
way. The camera takes a square of one red pixel, one blue pixel, and two
green pixels and creates one color value from them. Basically, this turns your
10MP camera into a 2.5MP camera (10 divided by 4) when it comes to color,
yet it retains the 10MP luminosity. Stripping away the techno-babble, this
means your image has the black and white resolution of 10MP (luminosity)
but the color resolution of 2.5MP. Said another way, it takes a 2.5MP color
image and overlays that color (not the detail, just the colors) on top of a
10MP image.

I know this is a hard concept to visualize, so let’s try one more analogy. Take
two images, one 2.5MP and one 10MP. Convert the 10MP image to
grayscale (sometimes called black and white, though it actually contains all
the gray shades as well) and print both at the same size: the 2.5MP in full
color on tracing paper, the 10MP in monochrome on regular paper. Now
overlay the 10MP print with the 2.5MP print and look at the result. The
edges in the 2.5MP image will be very jagged compared to the 10MP, so the
color will not line up quite right along the edges. This causes some blurring,
and your objects will not be nearly as sharp and well defined.

Enough with analogies, let’s see what that looks like:


Figure 144 Example simulating identical megapixel monochrome and color cameras.

Now, I realize you will be seeing both images above in black and white, but
the image on the left is a 300 pixel wide crop of NGC2244 in monochrome,
while the image on the right is a 75 pixel color crop stretched over the 300
pixel monochrome image at 50% opacity.

This fairly accurately simulates the difference between two cameras, one
monochrome and one color, with the same megapixel sensor. Notice how
much sharper and clearer the monochrome image is.

So what the heck does this mean? Simply stated this means that a
monochrome camera will always have better detail than a color camera if they
are both rated at the same number of pixels or resolution.

It also means that we can split out the luminosity channel from a color
camera, stretch the color channels and luminosity channels separately and
then put them all back together again to achieve images with different
characteristics including better detail.
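As a rough sketch of that last idea, here is the arithmetic for a single pixel in Python. The Rec. 601 luminosity weights and the square-root stretch are just illustrative choices; Photoshop and PixInsight each have their own ways of doing this:

```python
import numpy as np

# One pixel of color data (R, G, B) on a 0-1 scale.
rgb = np.array([0.20, 0.40, 0.10])

# Split out the luminosity (Rec. 601 weights, one common convention).
lum = 0.299 * rgb[0] + 0.587 * rgb[1] + 0.114 * rgb[2]

# Stretch only the luminosity (a simple square-root stretch here)...
lum_stretched = lum ** 0.5

# ...then recombine by scaling the color to the new brightness,
# leaving the ratios between R, G, and B (the hue) untouched.
rgb_out = rgb * (lum_stretched / lum)
print(rgb_out)  # brighter pixel, same color balance
```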
3.2 Stacking images

If you would like to follow along with the stacking and stretching examples
you can download Deep Sky Stacker from:

deepskystacker.free.fr

or get a trial license for PixInsight from:

www.pixinsight.com

and then download the original raw light, dark and bias files from this
book’s website at:

www.allans-stuff.com/leap

I will assume at this point you have some lights and darks at least, and
maybe some flats and bias as well or have downloaded the sample images
from the website.

Keep in mind that as new versions of the software are released, these
instructions may not match up exactly with what you see. It should however
be close enough to get you where you need to go.

Let’s start off with DSS, which is by far the most used software for stacking
images. Although this software is absolutely fantastic for what you pay for it,
there may come a time when you outgrow what DSS has to offer. That is
when you start to look at programs such as Images Plus or PixInsight.
Once you have DSS downloaded and installed, run it and you should see
the following screen:

Figure 145 The main Deep Sky Stacker window.

Look in the upper left and you will see “Open picture files...” which are the
lights and under that are the other image types we discussed earlier, darks,
flats, dark flats and offset/bias.

Load each type by clicking on the name of the type of file which will bring
up a standard file open dialog box like the following:
Figure 146 DSS file open dialog.

Here you can browse to where your files are and select the first file, then
shift-click on the last file of the set to highlight all files in the sequence.
Once all the files are loaded the screen will look like this:

Figure 147 DSS loaded and ready for stacking.


Figure 148 DSS Raw/FITS DDP Settings dialog.

Now click on the Raw/FITS DDP Settings... on the left under Options. On
the RAW Files tab make sure Adaptive Homogeneity-Directed (AHD)
Interpolation is checked, both white balance settings are unchecked, and Set
the black point to 0 is checked. On the FITS (FITS is an astronomical RAW
file format) Files tab check the first box, select your camera model if
available or try Generic RGGB, again make sure AHD is the selected
transformation.

Now click Apply and then OK to return to the main DSS window. Click the
Check All button, then the Recommended Settings button, and follow its
suggestions, clicking OK when finished. Then start the stacking by clicking
Stack checked pictures..., which is in red, and finally click OK.
Now you will see the following screen as it begins to stack your images:

Figure 149 DSS running a stack.

Once the stacking is complete you see this screen:

Figure 150 Screen shown when DSS is finished stacking.

You now have a file called Autosave.tif in the directory where your lights
were. Do not panic: the image you see on the screen has been stacked but not
stretched. We are only part of the way there. You can load this file into
Photoshop for initial levels stretching, but be warned, this is a 32bit TIF file,
so most options in Photoshop will not be available. This image will give you
the most detail on the first pass of stretching.

Now, if you do not have Photoshop and can’t load the 32bit image, you can
click Save picture to file... in DSS to save the file as a 16bit TIF (actually an
8bit padded TIF), and you will probably want to use some compression,
either ZIP or LZW (ZIP is preferred).

I do want to make one thing clear at this point. At the bottom of DSS you
will see three tabs: RGB/K Levels, Luminance, and Saturation. Do not use
any of these; just save the file (or use the Autosave.tif file) and move on.

Now that we have done our generic stacking I want to back up and go into a
little more detail about the stacking process.

On the Stacking Parameters page are several different types of stacking;
those are:

Average. This is where the “mean” is derived by taking the value of the
same pixel from each image being stacked, adding those values together, and
dividing the sum by the number of images. The problem is that a plane or
satellite passing through your frame leaves a bright white trail, which will
skew the mean for those pixels.

Median. This computation takes the value of the same pixel from each
image to be stacked and puts those values in an ordered list from least to
greatest, and then picks the middle value. This does not tend to suffer from
satellite or aircraft streaks like average does, but then again is not very
mathematically correct either as it just grabs whatever the middle value
happens to be.

Kappa-Sigma Clipping. This first takes the mean (discussed above) and the
sigma, or standard deviation (the amount a pixel’s value varies from the
mean), then compares each pixel’s value against the mean. Pixels that fall
more than kappa standard deviations from the mean are rejected.

Median Kappa-Sigma Clipping. This is basically the same as the above,
except that instead of rejecting pixels outside the standard deviation, it
replaces them with the median value.

Auto Adaptive Weighted Average. This calculates an average for all pixels
based on the mean average and the standard deviation with no rejection.
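To make the rejection idea concrete, here is a simplified single-pass sketch in Python (DSS iterates this and lets you tune the kappa threshold; the 1.5 below is just chosen to suit this tiny five-frame example):

```python
import numpy as np

def kappa_sigma_stack(frames, kappa=1.5):
    """Average the same pixel across frames, rejecting values that fall
    more than kappa standard deviations from the mean (for example, a
    satellite trail that brightened that pixel in one frame)."""
    stack = np.stack(frames).astype(float)
    mean = stack.mean(axis=0)
    sigma = stack.std(axis=0)
    keep = np.abs(stack - mean) <= kappa * sigma
    # Recompute the average using only the surviving values per pixel.
    return (stack * keep).sum(axis=0) / keep.sum(axis=0)

# Four frames of a single pixel reading about 100, plus one frame
# where a satellite streak pushed the reading to 255.
frames = [np.array([100.]), np.array([101.]), np.array([99.]),
          np.array([100.]), np.array([255.])]
print(kappa_sigma_stack(frames))  # [100.]: the streak was rejected
```

A plain average of those five frames would come out at 131, noticeably brightened by the streak; the clipped version throws the outlier away first.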

I generally use Kappa-Sigma Clipping or Auto Adaptive Weighted Average,
depending on the target. I would love to be able to tell you which one for
which target, but unfortunately I have not yet found a rhyme or reason as to
why one works better than the other. I tend to prefer the rejection of Kappa-
Sigma in most cases.

On the Cosmetic tab you will see options for detecting and cleaning both hot
and cold pixels. I leave these off, as they can do some really weird things to
images; your hot pixels should be removed with dark and bias frames
instead.

Now if you are not overly happy with the image that DSS created you can
click on the left hand menu to go back to the screen shown above and look
at the bottom to see this:

Figure 151 Deep Sky Stacker images window.

Looking here we can see columns such as Score, which shows how DSS has
rated the image. The specific numbers don’t really matter; what does matter
is that higher scores are better, and if you have an image that stacks weird
you can usually find the problem because one of your light frames will
probably have a dramatically lower score than the rest. You can delete that
image from your list and stack again for better results.

What can cause the scores to be lower? When a cloud moves into your field
of view, when a front moves in or out and the temperature changes, or when
the angle to your target changes. Note in the listing above all the scores
increase and then decrease as the frame number increases. If you punch in
the date, time and target (M78 in this case) you will see that I started
shooting when M78 was on one side of the meridian, and then flipped to the
other side and continued shooting.

Another problem I have run into is that images with different ISOs or
exposure times somehow end up in the stack; these too can be removed (and
those values can be checked before stacking, whereas scores cannot).

Now I am going to shift gears and move from the extremely popular, free
Deep Sky Stacker to something a little less well known and more expensive:
PixInsight.
Figure 152 The PixInsight main screen.

Right away you can see this is a much more involved application. Unlike
DSS, PixInsight is a full blown and very capable image processing suite
that does far more than just stack images. Don’t let that Process Console
fool you; it isn’t that hard to use.... Well, maybe a little.
Figure 153 PixInsight menu screen.

Start by clicking on the left where you see Process Explorer, then click the
little triangle next to <Scripts>, then click the triangle next to Batch
Processing, and finally double click on Batch Preprocessing to get to this
screen:

Figure 154 PixInsight BatchPreprocessing script.


Here you have a series of tabs for each file type on the top left. You can add
images to these tabs by clicking on the buttons along the bottom left with
labels such as “+ Add Bias”.

Note that I have Winsorized Sigma Clipping selected with a combination
method of Average.

Figure 155 Close up of the options section of the previous screenshot.

If you are using one-shot color images, such as those from a DSLR, make
sure the CFA images checkbox is checked or your images will come out
monochrome.

Make sure all the “Use master...” boxes on the center right are not checked
unless you are using single master files created previously. If you don’t
know what I am talking about, just assume you are not.

You must select a “master” light file that PixInsight calls the Registration
Reference Image on the lower right. You select this by double clicking on
one of the light frames in the list on the Lights tab.

Lastly, select an Output Directory by clicking on the downward facing arrow
to the right of the Output Directory field in the options section shown above.
Once all this has been selected, click Run on the bottom right and sit back
for a while, as this is a little slower than DSS.

You may receive an error when you click Run if you did not include bias
and flat frames, but you can safely ignore the warning for now.

Figure 156 PixInsight stacking images.

Off and running it goes! While we wait, let’s discuss things a little. Part of
the power of this program is that everything that actually works on the
images is built for speed and flexibility, and therefore much of it runs in a
command line window.

One confusing point with PixInsight is that once it is done with the stacking
you come right back to the BatchPreprocessing screen with nothing really
telling you it has completed, but it has. Click the Exit button in the bottom
right. It will ask if you are sure; say yes.
To see your handiwork, click File and then Open, browse to the directory
you told it to use as an output directory, and inside you will see, among
other things, a folder named “master”. Open that, and you should see a file
named something like “light-BINNING_1.fit”; open that file.

On the screen you will see three files open with that filename. One has the
term “rejection_low” appended to the end of the name and another has
“rejection_high”. Close those two. The remaining file is your stacked
image.

Now that you have stacked images using one of the most popular freeware
stacking programs and one of the most powerful image processing suites,
what did you notice? Did you notice that it was basically the same thing?
Sure there were some differences but in both you loaded the lights where
they go, the darks where they go, and so on. The terminology was the same.

Even though you have probably never used Nebulosity or Images Plus, I
would be willing to bet you could stack images in them as well now that
you have the basic understanding of how it works.

Of course the big question I always get asked at this point is “if they both
do the same thing, why would anyone pay for one when the other is free?”
Excellent question! The answer is that in my tests, PixInsight produces far
superior results. In my images I can process one in DSS and look at it, then
say, ehhh, it’s OK. When I process the same images in PixInsight I look at
the result and say WOW! Your mileage may vary.

That does not mean you should immediately start using PixInsight or
another high end program. It is perfectly acceptable, and perfectly normal,
to start off using DSS. In fact, it took me a year to outgrow DSS which gave
me plenty of time to try demos of many other programs until I found the
one that worked best for me.

Starting out, your image capture and processing skills will be so poor it
really won’t matter what you start with. That’s okay, you’re learning and
you will improve with time and practice. The only real advantage starting
with something like PixInsight has over DSS from the very start is you tend
to learn that program better since it is what you started with. Good and bad
to everything, isn’t there?
3.3 Stretching images

You have heard me mention stretching several times. This needs a little
explanation, and what follows is a very rough example. There are two ways
to “stretch” an image: levels and curves. Let’s start with levels. In an image
editing application you will usually have something called levels; in other
apps you use a histogram tool. Either way there are usually three pointers
near the bottom of the histogram that represent shadows, midtones and
highlights, as shown below:

Figure 157 Levels window in Photoshop.

What we want to do to “stretch” the image is move the left most pointer (on
the Input Levels histogram) which is the one for the shadows, as far to the
right as we can without “clipping” any data. What this means is to move
that slider to the right until just before you get to where the line in the
histogram moves up for the spike.
Figure 158 Adjusted levels for shadows.

Next we want to move the second, center pointer to the left until we get
more detail out of the image. Be careful not to move it too far or the
background will get too bright. Once you are happy with this click on OK.
Keep doing this back and forth until you get the best balance of object
detail, dark background and least objectionable noise levels. It is preferable
to run this once on a layer then copy that to a new layer and go again. That
way if you overdo it without noticing it you can go back to the previous
layer.

Do not touch the right slider which is the highlight slider unless you have
to. You certainly can play around with it to see how it affects your image
but be careful as this will blow out stars and other bright areas of the image
as well as make it impossible to get any color in your stars.
Figure 159 Adjusted levels for midtones.
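Numerically, a levels adjustment is just a remap of the pixel range followed by a midtone (gamma) tweak. Here is a minimal sketch in Python with NumPy; the function name and parameters are my own illustration, not Photoshop's API:

```python
import numpy as np

def levels_stretch(img, black, white, gamma=1.0):
    """Photoshop-style levels on a float image scaled 0..1.

    black/white are the input shadow and highlight points;
    gamma > 1 brightens the midtones, gamma < 1 darkens them."""
    out = (img - black) / (white - black)  # remap chosen range to 0..1
    out = np.clip(out, 0.0, 1.0)           # anything outside is clipped
    return out ** (1.0 / gamma)            # midtone adjustment

# Move the black point up slightly and lift the midtones, as in the text
pixels = np.linspace(0.0, 1.0, 5)
print(levels_stretch(pixels, black=0.05, white=0.95, gamma=1.5))
```

The warning about not clipping data is the `np.clip` line: anything below the black point is silently discarded, which is why you stop the shadow slider just short of the histogram spike.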

The second method of stretching involves curves. This is a little more
complex and should be done in very small increments or it can generate a
lot of noise and artifacts in your image. Normally I would do this in
Photoshop and create a new layer every time I wanted to apply a curves
adjustment just like I did for levels.

Basically you will see something similar to the next screen (in Photoshop
create a new layer by copying the background, then select Image,
Adjustments, Curves):
Figure 160 Photoshop curves adjustment window.

Hold down the Ctrl key on the keyboard and click on the darkest part of the
background sky you can find well away from any of your target. This is
your black point. While still holding the Ctrl key down click on the
brightest white star you can find. This is your white point. Now still holding
the Ctrl key down click on part of the nebula or galaxy you are trying to
enhance. Release the control key and press the Up arrow key on the
keyboard several times. What you want is to get a small, but obvious
increase in brightness/clarity of the section you are working on. Once you
get this small adjustment, close the curves window by clicking OK, copy
that layer to a new layer, and repeat the process. Using small steps provides
the best results because you can reevaluate what area you want to stretch
and get better detail. You can also realize when you mess up and just delete
that layer without undoing everything you worked on.

You can also use the brightest part of a nebula or galaxy as your “white
point” instead of the brightest part of a star to not “blow out” the
nebula/galaxy core, but be careful to watch what it does as you stretch.
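A curves adjustment generalizes levels: instead of three sliders you map input brightness to output brightness through arbitrary control points, like the anchors you Ctrl-click into Photoshop's curve. A rough sketch using piecewise-linear interpolation (real curve tools fit a smooth spline through the points, but the idea is the same):

```python
import numpy as np

def curves_stretch(img, points):
    """Map pixel values through (input, output) control points in 0..1."""
    xs, ys = zip(*sorted(points))    # np.interp needs ascending inputs
    return np.interp(img, xs, ys)

# Pin the black and white points, gently lift the faint mid-range
pixels = np.linspace(0.0, 1.0, 5)
print(curves_stretch(pixels, [(0.0, 0.0), (0.2, 0.3), (1.0, 1.0)]))
```

The Ctrl-clicked black point, white point and nebula point from the text correspond to the three control points here; pressing the Up arrow a few times is equivalent to nudging the middle point's output value upward.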

I typically would use a combination of levels and curves adjustments to
each image to maximize my results. Rarely did only one method provide
superior results to a combination approach.

If we pick up where we left off with PixInsight in the last section we will
have a stacked image open and on the screen ready to be stretched. For this
I am going to use an image of Messier 78 and let’s see what happens. Here
is the image right now:

Figure 161 Stacked image loaded in PixInsight.

There is not a lot to look at but that’s OK, it will get there. Over on the left
click the Process Explorer tab and then the triangle next to <All Processes>.
Double click Automatic BackgroundExtractor to see this screen:
Figure 162 PixInsight Automatic Background Extractor settings.

Click the double arrows to the right of Target Image Correction to open the
list of settings. Select Subtraction in the Correction type dropdown list.
Then check the boxes next to “Discard background model” and “Replace
target image”.

This little part of PixInsight will help remove background issues such as
light pollution so that our stretching comes out better. It is really not
required to be able to stretch the image but I get so much better results after
doing this I thought I would leave it in the example.
Once you have all the settings as above, click the square in the bottom left
of the screen to run it. It will take a few seconds to run and after finishing it
will update the image on the screen.

At this point it may not look like it really did anything and that is fine, just
keep going. Here is my example image of M78:

Figure 163 PixInsight after Automatic Background Extraction.

If you look really close you can see a few stars and that’s about it, so let’s
stretch it. Go ahead and close the Automatic Background Extraction
window.

Click on the Process Explorer tab on the far left, then go to <All Processes>
and finally, double click on Histogram Transformation and you will see this
screen:
Figure 164 PixInsight Histogram Transformation screen.

Click "<No View Selected>" and change it to the view you want to work on
(in my case, light_BINNING_1). Now, at the very bottom left of the window,
click the third icon from the left. This opens the real time preview and
should look like this:
Figure 165 PixInsight histogram transformation real time preview.

Unlike when we worked with levels in many other apps, here we can see in
real time what will happen to the image when we stretch it.

Look at the window on the right of your screen, then look at the lower black
screen. This is the histogram. At this point you probably don’t even see how
everything is bunched up on the far left of the histogram so look close. See
it? Good, let’s stretch it out now.

Click and hold on the far right arrow, then drag it towards the left until it is
about 1/3rd of the way from the left towards the right. You should see some
of the stars brighten and in my image I can just barely start to see some
nebulosity.

Now here is a little trick: roll your center mouse wheel down slowly and
note that it starts to zoom in. You want to zoom until once again the far
right marker is pretty much all the way on the right and the far left marker
is still on the far left.

At this point you want to grab the far left arrow and drag it towards the
right until just before the line in the histogram starts to rise.

Keep zooming in and dragging the far left arrow towards the right until it is
just to the left of where the hump in the histogram begins. Then drag the
center arrow to the left until it is just to the right of where the hump in the
histogram ends, like this:

Figure 166 PixInsight after an initial histogram stretch.


Figure 167 Close up of the histogram in the previous image.

Once you think you have the stretch as good as you can get it click on the
little square in the lower left of the Histogram Transformation window to
apply it to your image. Until you click on that square you are only working
with the real time preview. Once you click the square to apply it your real
time preview will go nuts because it is applying the current stretch to the
image that you just applied stretch to, so you need to reset the preview. I
know that sounded weird but trust me, keep going.

To reset the preview click the strange looking X on the far bottom right of
the Histogram Transformation screen and then click the 1 with a square
around it in the center between the two histogram black screens. Now you
can try stretching it again to see if you can coax more out of it.

There are a ton more features in PixInsight, and of course in Photoshop,
but this book is far too short to cover them all.

You have now stretched images in both Photoshop and PixInsight, both
excellent programs for this. Generally I do all my initial stretching in
PixInsight because I love the real time preview and it does a fantastic job.

Once I am done with that I typically remove the green color cast and do a
little noise reduction, both in PixInsight. Then I move on to Lightroom and
Photoshop for final touchups and cropping.
Whatever program you choose to use for your stretching, you should now
be comfortable enough to at least get started and get reasonable results.
Stretching is always a balancing act. Less stretching gives you a less noisy
image but reveals less of the target. More stretching gives you more of the
target, but more noise as well.
3.4 Image acquisition tips and tricks

Let’s start with the most basic of all image acquisition tips, when you take
the image.

There are several factors to consider when you start to think of exactly what
time you want to image a target, the first being when the sun sets or rises.
I’ll bet you didn’t know the sun sets four times every night! Ok, not really,
but there are four sunset times! On July 23rd, 2012, in Huntsville, Texas, the
National Weather Service said that sunset was at 8:21pm CDT. This is plain
sunset, when the sun’s disk actually drops to the horizon. Next, at 8:48pm,
came Civil sunset, when the sun is six degrees below the horizon; this is
nowhere near dark enough to image. Then comes Nautical sunset (twelve
degrees below the horizon), which came at 9:21pm on the day in question;
generally you can start your imaging run then if your target is in the east.
Lastly is Astronomical sunset (eighteen degrees below the horizon), which
happened at 9:55pm on our example day, and it was then as dark as it would
get. Note there is about an hour and a half between “sunset” and real
darkness so plan accordingly.

Just as you would expect, the same happens in reverse at sunrise. First
comes astronomical sunrise, followed by nautical, then civil, then actual. If
your target is in the west, you can usually still image until nautical when the
sky will start to turn obviously blue in your images.
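These four “sunsets” are defined by how far the sun sits below the horizon, which is why any planetarium program can compute them for your location. The depression angles below are the standard definitions; the print-out is just for illustration:

```python
# Standard solar depression angles for each stage of dusk (degrees below horizon)
TWILIGHT = {
    "sunset": 0,         # sun's disk reaches the horizon
    "civil": 6,          # still far too bright to image
    "nautical": 12,      # east-facing targets become workable
    "astronomical": 18,  # as dark as the sky will get
}

for stage, depression in TWILIGHT.items():
    print(f"{stage:>12}: sun {depression} degrees below the horizon")
```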

The next thing we need to concern ourselves with is where in the sky our
target is:
Figure 168 Illustration of distance through atmosphere.

Looking at the figure above you can clearly see that the lower in the sky
(towards the horizon), the more atmosphere there is to shoot through to get
to your target. The distance through space doesn’t matter because all the
dust, water vapor, temperature currents and contaminants that can scatter
light are all within our atmosphere. The closer to zenith (directly overhead)
you get, the better your images. Earlier there was a list of images scored by
DSS. As the target gets closer to zenith from rising in the east, the scores
get progressively better.
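Astronomers quantify “how much atmosphere” as airmass, with 1.0 straight up at the zenith. A quick sketch using the simple plane-parallel approximation, which is fine above roughly 20 degrees altitude but wildly overestimates right at the horizon:

```python
import math

def airmass(altitude_deg):
    """Plane-parallel approximation: 1 / sin(altitude above horizon)."""
    return 1.0 / math.sin(math.radians(altitude_deg))

# Twice the atmosphere at 30 degrees altitude as at the zenith
for alt in (90, 60, 45, 30, 20):
    print(f"altitude {alt:2d} deg -> airmass {airmass(alt):.2f}")
```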

Now you have seen how you shot through less atmosphere when your target
was higher in the sky, but why does that matter? Let’s start with refraction.
Refraction is how light is bent as it goes through different materials; in this
case it is traveling from pretty much a vacuum into our atmosphere. The
atmosphere of the earth is made up of a lot of different gases and vapors,
each of which has a different “refractive index” (the amount of refraction
that happens when light enters and exits a substance). Normal air at 0C has
a refractive index of about 1.000293, while water at 20C is 1.333 and a
vacuum has a refractive index of exactly 1.
Since the bulk of the atmosphere around the earth is within the first ten
miles, and we know the refractive index of air at sea level at 0C is about
1.000293, even a tiny angular deviation matters over that distance.

Now if we do the math for a right triangle with one side being 52,800 ft. (10
miles) and a deviation angle of just 0.01 degrees, we see that the light will
have moved over nine feet!
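That displacement is just trigonometry: lateral shift equals path length times the tangent of the deviation angle. The angles below are illustrative picks, not measured values, but they show how fast even a tiny bend adds up over ten miles:

```python
import math

def displacement_ft(path_ft, deviation_deg):
    """Lateral shift of a light ray after traveling path_ft at a small angle."""
    return path_ft * math.tan(math.radians(deviation_deg))

TEN_MILES_FT = 52_800
for deg in (0.001, 0.01, 0.1):
    print(f"{deg:5} deg over 10 miles -> {displacement_ft(TEN_MILES_FT, deg):7.2f} ft")
```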

Ok, so all the math in the world seems a little over our heads, how can we
get a really good grasp on this? With a simple little science experiment!

WEAR THE CORRECT EYE PROTECTION BEFORE
ATTEMPTING ANY EXPERIMENTATION INVOLVING
A LASER. THE LASER COULD SLIP, MALFUNCTION,
OR REFLECT OFF OF WATER OR OTHER
REFLECTIVE SURFACES AND CAN CAUSE
PERMANENT BLINDNESS TO YOU OR OTHERS IN
THE AREA SHOULD IT HIT SOMEONE IN THE EYE.
Figure 169 Setup for our experiment.

Here we have a tall glass cylinder filled with 2 cups of water, and then we
shine a laser through the water and mark where the beam hits the paper
under the cylinder. Without moving anything we add 4 more cups of water
into the cylinder and compare the results.
Figure 170 How the distance of travel through water changes the refraction of light.

Here you can clearly see the beam has moved over to the left probably a
good quarter inch. If you look closely you can see in the right image a small
black dot just to the right of where the central laser beam hits the paper.
This dot was where the laser hit the paper in the first part of the experiment.

But this is water and air. How does that relate to a vacuum and air? Just like
there is a refractive index difference between a vacuum and air, there is a
similar difference between air and water. In addition, there is water vapor in
the air, and a whole lot more air than the 6 cups of water we used.

What else can we learn from this experiment? How about diffraction?
Diffraction is how light is dispersed, or spread out, as it travels through a
medium. Let’s look at these images again, a little closer this time:
Figure 171 Diffraction of light increased.

Now when we look closely we notice that the inner white part of the light is
smaller in the second image, while the dimmer halo around it is both larger
and more prominent.

What does this tell us? It says that the more water the laser travels through,
the less concentrated the beam is in the center, or the more spread out the
light is.

So now we have seen that the further light travels through a medium, the
more it is refracted and diffracted. What else?
How about Rayleigh scattering? Rayleigh scattering is the effect that turns
the sky blue, and changes it to reds and yellows at sunset. Basically as light
travels through a medium and bumps into molecules along the way,
different wavelengths of light are reflected differently. Shorter wavelengths
(blue and violet) are scattered more because the molecules are very small
and so affect those wavelengths more.
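The wavelength dependence behind this is strong: Rayleigh scattering goes as one over the fourth power of the wavelength, a standard result. A quick back-of-the-envelope comparison of blue versus red light:

```python
# Rayleigh scattering intensity is proportional to 1 / wavelength**4
blue_nm, red_nm = 450, 700   # rough wavelengths for blue and red light
ratio = (red_nm / blue_nm) ** 4
print(f"Blue (~{blue_nm} nm) scatters about {ratio:.1f}x more than red (~{red_nm} nm)")
```

That factor of roughly six is why the daytime sky is blue overhead, and why the long horizontal path at sunset strips the blue out entirely, leaving the reds and yellows.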

A good way to think about this is that the blue light is completely scattered
in the first 10 miles of heavy atmosphere, which is about all we have. When
the sun is directly overhead you are within those 10 miles of atmosphere
and so you can see the scattered blue light, making the sky blue.

Once the sun moves to the horizon it could be shining through three times
the amount of atmosphere as it was previously to get to you. By that time
all you see is the yellows and reds because all the blue was scattered away
20 miles ago.

This is why when you have a beautiful blue sky above you and you look
near the horizons you will notice it is less blue than directly overhead; this
is the effect of more atmosphere.

Now comes pollution (man made, volcanic, fires, etc). Think of pollution
like fog as it has the same effect. No matter how dense the fog, you can
usually see your hand directly in front of your face with no problems. Move
your hand away and it gets progressively worse. The greater the distance,
and the more fog, the harder it is to see.

Pollution is all around the planet spread throughout the atmosphere. Just
like everything else, the more atmosphere you have to shoot through, the
more pollution you have to shoot through and the harder it is to see your
target.

Lastly, light pollution is the light from street lights, houses, signs etc. This
light does the same thing as the light from the sun during daylight, some of
it scatters into the atmosphere. This is what causes us to see “light domes”,
the glow in the sky around densely populated areas. The lower in the sky
you shoot, the more of this scattered light your camera will pick up. The
closer to cities you shoot, the more of this your camera will pick up. This is
why astrophotographers travel to dark sites to shoot.

It should now be obvious that the lower towards the horizon you shoot, the
more problems you will have with your images. Does this mean you should
only shoot directly overhead? No. But you should consider shooting when
your target is as high in the sky as your available time allows.
Figure 172 Stellarium showing visual representation of seeing conditions.
Looking at the previous figure you should clearly see that as light pollution
decreases from the left to the right, the brightness of the sky (amount of sky
fog) decreases, and the number of stars visible increases. You can also see
that the sky is brighter and has fewer visible stars near the horizon.

After worrying about the sun setting and the angle of the target, we have the
temperature. First, lower temperatures are better because the camera will
run cooler and therefore produce less noise. Software noise removal tools
will work better on images with less noise.

Secondly is temperature stability. As the night progresses and cools off, the
noise levels in the camera drop. While this sounds great, the problem
becomes that if you are shooting one target all night, the images from the
start of the night will have more noise than the images you shoot later in the
night. If these are radically different it can wreak havoc on your stacking
program trying to figure out what the heck happened!

The trick here is to stagger your targets if you want the best possible results.
Assuming you wanted four hours of lights on each target, the first night you
could shoot target #1 for the first two hours, then target #2 for four hours,
and then the rest of the night on target #3. The second night you shoot
target #1 for the first two hours (completing its four hours), then target #4
for four hours, and the rest of the night on target #5. This allows you to
stack images of relatively the same temperature for each target set.

Now this temperature issue really is not a problem in the winter when you
are dealing with temps in the 30s and below (Fahrenheit) as the noise levels
will not appreciably drop below those temps.

Another temperature issue when you want to get a little extreme and
squeeze the absolute maximum out of your images is the stability of the
temperature of the camera sensor. Just like a stable outside temperature, we
want the sensor of our camera to stay at the same temperature from the start
to the end of an imaging run. This is why sometimes I will shoot a frame or
two which I fully plan on discarding just to warm up the sensor to an
operating temperature that will be relatively the same for the rest of the
imaging run. Delays between frames will not substantially alter the
temperature of the sensor and instead could cause issues with the first part
of each exposure being cooler and responding differently than the rest of the
exposure.

Once you have all the atmospheric and equipment variables you can put
everything together and come up with your limiting magnitude, or the
maximum magnitude of the target you can see/image. This was put into a
nice set of nine classes by John Bortle in 2001 and is called the Bortle scale.
Those classes are:

Class 1(Black): Excellent, naked eye limiting magnitude of 7.6-8.0, M33 is
a naked eye object with structure.

Class 2(Gray): Very good, naked eye limiting magnitude of 7.1-7.5, M33
can be seen clearly but not really structured.

Class 3(Blue): Good rural skies, naked eye limiting magnitude of 6.6-7.0,
M33 can be seen with averted vision.

Class 4(Green & Yellow): Not bad suburban skies, naked eye limiting
magnitude of 6.1-6.5, M33 visible only very high in the sky and very
difficult even with averted vision.

Class 5(Orange): Typical suburban skies, naked eye limiting magnitude of
5.6-6.0, Milky way very weak or invisible near the horizon and weak
overhead.

Class 6(Red): Suburban near large city, naked eye limiting magnitude 5.1-
5.5, Milky way is all but invisible except at zenith. Skies appear light gray.

Class 7(Red): Urban or full moon, naked eye limiting magnitude 4.6-5.0,
Entire sky is light gray, no milky way visible at all.

Class 8 & 9(White): Medium/large cities, naked eye limiting magnitude up
to 4.5, sky is very light gray, white or orange, you can read a book or
newspaper without additional light, the only visible Messier object is M45.
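The scale above condenses into a simple lookup. Here it is as code, with abbreviated descriptions; each magnitude is the upper bound of that class's range from the list:

```python
# Bortle class -> (short description, best naked-eye limiting magnitude)
BORTLE = {
    1: ("excellent dark site", 8.0),
    2: ("very good", 7.5),
    3: ("good rural", 7.0),
    4: ("suburban/rural transition", 6.5),
    5: ("typical suburban", 6.0),
    6: ("bright suburban", 5.5),
    7: ("urban or full moon", 5.0),
    8: ("city", 4.5),
    9: ("inner city", 4.5),
}

for cls, (desc, mag) in BORTLE.items():
    print(f"Class {cls}: {desc:26} limiting magnitude ~{mag}")
```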

Check out http://www.lightpollution.it/dmsp/ to see where you are on the
scale just factoring in light pollution.
Now we can deal with dithering. This is the process where your camera
control software tells your guiding software to shift the mount, and with it
the image on the sensor, a couple of pixels between each frame. The
reasoning behind this is that not all pixels
in your camera are created equally. Some may be more sensitive than
others, some may be less sensitive, some may be hot pixels, and some may
not work at all. Dithering solves all of these problems and since it moves
the image around on the Bayer matrix you can get more accurate definition
on the edges of objects like nebulae.

In Images Plus for example, when my Nikon is hooked up I can go to the
Bulb Exposure tab and click the Dithering button. This pops up a box that
allows me to select PHD as my guiding application and from that point on
every time a frame is taken with Images Plus it will tell PHD to move a few
pixels. When this happens Images Plus will tell you “PHD Dithering” at the
bottom and may seem unresponsive as it waits for PHD to tell it that it is
ready with the new coordinates.

One thing to mention here is for this to work you need to go to the Tools
menu in PHD and turn on PHD Server. Without this your camera control
software cannot talk to PHD.
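For the curious, here is roughly what that conversation looks like on the wire. PHD2, the current successor to PHD, exposes a small JSON-RPC server on TCP port 4400; camera control software sends it a dither request and waits for guiding to settle. The method and parameter names below follow PHD2's published server API, but treat the details as assumptions to verify against your own version:

```python
import json
import socket

def dither_request(amount_px=3.0, ra_only=False, settle_px=1.5,
                   settle_time=8, timeout=40, req_id=1):
    """Build a PHD2-style JSON-RPC 'dither' request (names per PHD2's
    documented server API; verify against your PHD2 version)."""
    return json.dumps({
        "method": "dither",
        "params": [amount_px, ra_only,
                   {"pixels": settle_px, "time": settle_time, "timeout": timeout}],
        "id": req_id,
    })

def send_dither(host="localhost", port=4400):
    """Send one dither command to a running PHD2 instance (default port 4400)."""
    with socket.create_connection((host, port), timeout=5) as s:
        s.sendall((dither_request() + "\r\n").encode())
        return s.makefile().readline()  # JSON-RPC reply; settle events follow

print(dither_request())
```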

Dithering, of course, will not work at all unless your camera control
software can talk to your guiding application. If you plan on imaging with
something like a remote shutter release or intervalometer (sometimes called
an interval meter, a device that allows you to time exposures without the
use of a computer) then you cannot dither, among other things.

If you haven’t ever seen a Clear Sky Chart, you need to become familiar
with it. This one little chart can help you figure out not only when to image,
but what you can image successfully! Visit www.cleardarksky.com to see a
chart of your location:
Figure 173 Clear Sky Chart for my location.

There are colored squares for each hour next to each of several indicators.
Cloud cover, darkness, wind, humidity and temperature should be pretty
obvious. Each square is ranked from worst (white) to best (really dark
blue). What we need to really look at are the values for these other items.

Transparency is a measurement of the amount of water vapor in the air. This
is measured from the ground all the way to space so just because the
humidity is low where you are, does not mean there is not a lot of water
vapor at 10,000 feet! This indicator is not overly important if you are
shooting planets, globular clusters, or open clusters.

Seeing is temperature differentials in the atmosphere and their associated
turbulence. This can affect high magnification imaging such as planets. It
does not affect long exposure work like faint nebulas nearly as much.

Just in case you missed it, darkness is exactly that. When the moon is up,
darkness goes to heck. The more full the moon, the worse it is.

This chart is available at the web site previously mentioned, as a Windows
gadget, and as an iPhone/iPad app. It may be available in other forms as
well.
3.5 High Dynamic Range images

HDR, as High Dynamic Range is known, has been a well-used friend of
photography for a long time. Even back in the days when prints were made
by hand from negatives in the darkroom one at a time, HDR was in use. It is
not some newfangled toy invented in the digital age.

Basically what HDR does is take a target like the Orion Nebula and allow you
to take exposures which vary substantially, exposing one set for a certain part
of the nebula which is very bright, another set for a part that is not quite as
bright, and another set for a part that is very dim, and then combine them in a
single image so that everything is well exposed.

The reason for this is latitude, or dynamic range. Think of it this way. Place a
newspaper on the front of your car’s bumper at night just under the headlight.
Now turn on your bright lights and stand in front of them and try to read the
newspaper. Your eye cannot adjust for both the extremely bright headlight
beam and the very dim newspaper print all at the same time. The same holds
true for some targets in the sky except the camera does not have enough
dynamic range to capture everything well exposed at once.

Figure 174 Four different exposures captured for HDR.

In the four images above, note that as the exposure increases from left to right,
(15 seconds, 45 seconds, 90 seconds and 180 seconds) the amount of detail
you can see on the outside increases, but the central core gets so bright it
washes out the detail.

The solution seems simple. Take the small central region of the image on the
left, copy that over the same region on the next image over, then repeat the
process until you have a “stack” of images with the best exposed parts all
combined. Unfortunately when you do that, the parts never blend together
correctly without a lot of manual blending and feathering. This was the way
people used to do it before we had powerful computers, digital images and
awesome software, but boy was it time consuming and painful!

The fix for this is to create an HDR image as shown here:

Figure 175 An HDR composite image of the Orion Nebula.

Note that both the detail in the central core and the wispy outer dust lanes are
present in one image. This image was a combination of 10 images at 5
seconds, 10 images at 10 seconds, 10 images at 15 seconds, 10 images at 30
seconds, 10 images at 45 seconds, 10 images at 60 seconds, 10 images at 90
seconds, 10 images at 120 seconds and 10 images at 180 seconds. Ok, I
probably went a little overboard!

When I first created this image I thought it was “overcooked”, it was just
too... well... artificial looking. I still think that a little, but when I looked at the
detail in the images from the Hubble and realized that the level of detail is
actually pretty accurate, that changed my opinion on this image somewhat.
Take a look at the Hubble images and this one yourself and see what you
think.

Anyway, I combined each set with a set of darks for that exposure, stacked
each set one at a time in DSS, stretched each set approximately the same, just
enough to see what data there was, then combined those images in a program
called HDR Efex Pro ($99.95, PC/Mac).

You can obtain it here:

www.niksoftware.com

Once the images were combined I loaded the output into Photoshop and did
the last stretch and the result is what you see above.

I wasn’t sure how the nebula would react to HDR and that is why I shot so
many different exposures. Normally you only need one set of exposures for
each part of an image you want properly exposed and then one set right in the
middle of the two exposures. For example, assume you were shooting a sunset
on the beach and you wanted both the bright sunset and darker beach to show
up. You would expose one image for the sunset, one for the beach, and one
right in the middle of those two exposures.

Doing more than is required doesn’t really hurt, in fact it can preserve more
detail, but generally it is a minor amount of increased detail in exchange for a
lot more processing time, storage space, and pain in the rear.
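Modern HDR software automates that blending by weighting every pixel of every exposure rather than cutting and pasting regions. Below is a toy sketch of the general idea; this is a generic exposure-fusion scheme of my own for illustration, not the algorithm any particular program uses:

```python
import numpy as np

def hdr_blend(exposures, times):
    """Naive exposure fusion: pixels near mid-gray get the most weight,
    blown-out or underexposed pixels get almost none, and each frame is
    normalized by its exposure time before the weighted average."""
    acc = np.zeros_like(exposures[0], dtype=float)
    wsum = np.zeros_like(acc)
    for img, t in zip(exposures, times):
        w = 1.0 - np.abs(img - 0.5) * 2.0   # hat weight, peak at mid-gray
        w = np.clip(w, 1e-4, None)          # avoid divide-by-zero
        acc += w * (img / t)                # scale to a common "radiance"
        wsum += w
    return acc / wsum

# Three fake "exposures" of the same scene at 30, 60 and 120 seconds;
# the brightest pixel clips in the longer frames, yet the blend recovers it
radiance = np.array([0.002, 0.005, 0.02])
frames = [np.clip(radiance * t, 0, 1) for t in (30, 60, 120)]
print(hdr_blend(frames, (30, 60, 120)))
```

The weighting is why the blown-out core in the long frames does no harm: those pixels carry almost no weight, so the core comes entirely from the short exposures, exactly as in the Orion example.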

There are many different HDR applications out there and some of them are
free. I would highly suggest you get and use one of these instead of doing it
manually. Photoshop CS5/6 has HDR capability built in (although I am not a
big fan) so if you have one of those, you can already do some HDR.
Here is another example set to show you what HDR can do. The first is a set
of 120 second images stacked:

Figure 176 Set of 120 second exposures stacked.

As you can see, 120 seconds blows out the central region pretty badly and just
barely starts to show some of the detail in the outer reaches of the nebula. This
is a problem.

Now I will combine a stacked 30 second image, a stacked 60 second image,
and a stacked 120 second image in the free HDR package, Picturenaut 3 (Free,
PC), and then do a little stretching in Photoshop. You can download
Picturenaut from:

www.hdrlabs.com/picturenaut
Figure 177 HDR image from combining multiple exposures.

Right off the bat you can see the central core is no longer blown out and there
is detail there. In addition you can see far more of the nebula although not
quite as bright in some areas.

No, there is nowhere near as much detail as in the first HDR image you saw a
couple of pages ago, but then this was five minutes with a free program and
another five minutes in Photoshop with only three exposures. Not bad, eh?

With some more exposures and some harder work in Photoshop this could be
a really nice image. It could also be better with a better HDR program such as
HDR EFEX Pro, but then again that costs $99.95 and this program was free,
which shows you can indeed do quite a bit for less.
3.6 Tuning for best guiding results

If you use your mount a lot, and I hope you do, it will eventually develop
problems, and one of the most common is worm gear play. That is usually
fairly easy to fix and I am going to show you how on my Orion Sirius
mount but first there is something I need to say:

OPENING OR ADJUSTING ANY PIECE OF
HARDWARE BEYOND WHAT THE OWNER’S
MANUAL SPECIFIES WILL VOID YOUR WARRANTY
AND COULD CAUSE DAMAGE TO YOUR
HARDWARE OR YOUR PERSON. USE THE
INFORMATION PROVIDED HERE AT YOUR OWN
RISK.

Even though I am going to show you how to do this on an HEQ-5/Orion
Sirius mount, the ideas are the same for any mount. The screws and covers
may be in a different location and you may need to do an internet search or
call the manufacturer to locate them, but rest assured that the principles are
the same.

The first thing here is we need to know if there is any slack, and if so,
where it is. If you can grab the end of your scope and move it from side to
side and there is noticeable slop, then that is a declination problem. If you
can grab the counterweight bar and move it from side to side, that is a right
ascension problem.

Before doing any of this I recommend you remove your scope and any/all
weights. That will make it far easier to move things around and far less
likely you will tear something up.
Figure 178 Location of three declination retaining bolts.

Let’s start by adjusting the declination axis. First, find the three silver 4mm
allen head bolts. We need to loosen them just enough so that the worm gear
carrier will move when we move the 2mm allen set screws we will see in a
minute.
Figure 179 Location of one of the declination set screws.

There are two set screws that push the worm gear one direction or the other.
One is right above the power panel as shown here.
Figure 180 Location of second declination set screw.

The trick here is that once you loosen the three 4mm retaining bolts, you need to make sure neither of the 2mm set screws is loose; twist them in using almost no pressure at all to check this. You can then loosen one a quarter of a turn and tighten the other a quarter of a turn, tighten the three retaining bolts snugly, and check to see whether the play has gotten better or worse.

Continue on this path until you just barely remove the play from the declination axis. Do not overtighten! Once you are satisfied, plug in the mount and use the hand controller to move the mount in declination. If you hear a stripping sound, STOP! You have overtightened and will need to go back and loosen things up a little.
Figure 181 Location of the three right ascension retaining bolts.

For the right ascension you do the exact same thing; the screws are just in different locations. Start on the bottom of the mount and you will see the same pattern of three 4mm silver retaining bolts.

In the image above I have rotated the mount so that the counterweight bar is
pointing up and to the left from the rear of the mount. Here you can clearly
see the three 4mm retaining bolts you need to loosen and you can also see
the allen wrench I have inserted into the first 2mm set screw.

The second set screw is a little harder to find, and indeed I had to hunt for a
while before I found a document online that showed me where it was
hiding:

Figure 182 Location of the second right ascension set screw.

Yep, it is behind the power panel. There are two pretty obvious screws
holding the power panel to the mount head in the same direction as the
arrow in the above image. Be careful here and do not pull the power panel
out any more than is required to get the allen wrench in.

Again we do the same thing as last time, tighten one set screw one quarter
turn, loosen the opposite set screw, tighten the three retaining bolts and see
if it is better or worse. Repeat as needed.

And again we plug in the mount and this time we run the right ascension
axis and make sure it is smooth, does not grind, does not bind, etc.

As a last check, after all screws and covers are tight and you have run
through full swings of both declination and right ascension using the hand
controller, load your scope and weights back on. Balance everything and
run through a set of full swings in both declination and right ascension
again just to make sure things don’t get loose, bind or slip when there is a
load on the mount.

While we are working on the mount it never hurts to check the lube. The
right side of the mount from the back has six Phillips head screws and if we
remove them we get to the side drive gears:

KEEP FINGERS AND CLOTHING AWAY FROM


MOVING GEARS!
Figure 183 Side drive gear assembly.

You can add a little white lithium grease (do not use just any grease! Use
only high quality grease like white lithium). Of course you do not want too
much grease as it will get into places where it doesn’t need to be, leach out
the side, etc. I put a little on like above and then run the gears back and
forth using the hand controller, wipe off excess and put more where needed.

Now that our mount is tuned up a little, we need to tune the guiding. While
PHD is the most popular software for guiding in my experience and it is
what I use, it stands to reason once you understand how PHD works and
how to tweak it these same ideas could be applied to any guiding package.

PHD actually stands for “Push Here Dummy” and does a fantastic job
without any real user intervention. For what it costs, and what it
accomplishes, this is probably the best value in astronomy software out
there. That doesn’t mean we can’t make it better.

Your first consideration when guiding is the balance of the scope. While
having the scope slightly rear (towards the camera) and east heavy can help
with keeping the gears meshed and preventing bouncing, it should be a
minimal amount. Anything else can result in wildly erratic guiding.

As we have already covered earlier, your guidescope and camera should be of the correct focal length and pixel size so that minor movements off center of the guide star do not result in being massively off target on your main imaging scope. For example, using the SSAG on the miniguider scope with a longer focal length imaging scope can and will cause guiding issues.
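One way to check this is to compare the image scales of the two setups using the standard formula (206.265 × pixel size in microns ÷ focal length in mm). The focal lengths and pixel sizes below are invented for illustration, not specs of the SSAG or any particular scope:

```python
# Image scale in arcseconds per pixel:
#   scale = 206.265 * pixel_size_um / focal_length_mm
def image_scale(pixel_size_um, focal_length_mm):
    return 206.265 * pixel_size_um / focal_length_mm

# Hypothetical short guidescope vs. long imaging scope
guide_scale = image_scale(5.2, 162)      # ~6.62 arcsec/pixel
imaging_scale = image_scale(5.4, 1200)   # ~0.93 arcsec/pixel

# Each 1-pixel error the guider cannot see smears this many imaging pixels:
print(round(guide_scale / imaging_scale, 1))  # 7.1
```

A ratio that high means sub-pixel guiding errors become multi-pixel trails on the imaging camera, which is exactly the mismatch warned about above; a longer guidescope focal length or smaller guide camera pixels would shrink it.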

Correct polar alignment is also critical for good guiding. The better the
mount can follow the RA movement of the stars without guiding, the more
accurate the guiding will be.

Camera exposure (for the guide camera) is not really an important factor as
far as guiding is concerned as long as you understand that shorter exposures
allow for more rapid response from the guiding setup but will not allow for
guiding on fainter stars. Longer exposures give you more stars to guide on
but have a longer delay between possible guide adjustments. I generally use
the default of 1 second in PHD unless I am in an area where I cannot get
many stars to guide on, or when shooting narrowband on a night with so
much moonlight that it washes out the fainter stars. This can also be
addressed under configuration options by binning the exposure 2x2 which
increases exposure sensitivity while lowering resolution. DO NOT use
binning when also using a long focal length imaging scope and a shorter
focal length guidescope as this will only make your existing problems much
worse.

If you click on Tools and then Enable Graph, you can see a graph of the corrections PHD is making. There are two numbers over on the left; for my setup I try to keep the OSC-Index around 0.30 and the RMS under 0.20. This typically gives me good solid guiding.

OSC-Index is the odds that the next RA correction will be in the opposite direction of the last RA correction. Perfect with no periodic error would be .50; perfect with periodic error would be .30. Much below .30 and you should increase your RA aggressiveness and hysteresis. The opposite adjustments are indicated if your OSC-Index is too high, for example .80.

RMS is the Root Mean Square: the average distance of the star in pixels from the baseline on the graph in RA over the period of time currently displayed on the graph. Lower is always better.
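PHD computes both numbers for you, but the definitions above are easy to sketch. Here is a small Python illustration using an invented log of RA corrections, not real PHD data:

```python
import math

def osc_index(corrections):
    # Fraction of consecutive RA corrections that reverse direction
    flips = sum(1 for a, b in zip(corrections, corrections[1:]) if a * b < 0)
    return flips / (len(corrections) - 1)

def rms(offsets):
    # Root mean square of the star's RA offsets from the baseline, in pixels
    return math.sqrt(sum(x * x for x in offsets) / len(offsets))

log = [0.3, 0.2, -0.25, 0.1, 0.15, -0.1, -0.2]  # made-up values, pixels
print(round(osc_index(log), 2), round(rms(log), 2))  # 0.5 0.2
```

Values in this neighborhood match the targets mentioned above; a long run of corrections all in the same direction would drive the OSC-Index down and suggest raising aggressiveness and hysteresis.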

Now one thing you should know is that guiding performance will change
depending on where you are pointing in the sky. A setup that guides
extremely well at zenith may not do as well near the horizon and vice versa.
A good polar alignment, excellent balance and the correct
guidescope/camera combination can help mitigate this but cannot totally
eliminate it.

Another factor with PHD guiding, if you are using the EQMOD ASCOM system (as I am), is the ASCOM PulseGuide Settings. I have mine typically set from .5 to .6 for both the RA and DEC rates. When PHD calibrates you will find that the higher the rates set in EQMOD, the faster it tends to calibrate, using fewer steps for each direction. In theory you want somewhere around 7-15 steps in each direction so that calibration gets where it wants fast enough but moves slowly enough for small corrections. The reason I give a range of steps is that I rarely get the same number of steps every time in every direction, so a little play in the numbers is required.

One tweak I have found useful is the Star Mass Tolerance under configuration. When this is set to 1.00, changes in the apparent mass of the star (due to fluctuating seeing conditions or transparency) will not cause an issue. If it is lowered (the default, I believe, is .5), you will get an alert every time the mass appears to change, which gets pretty annoying.
Figure 184 Example PHD graph.

In this example graph I wanted you to note that it has both vertical and
horizontal lines. The center horizontal line is 0 movement. Each horizontal
line above and below that line is 1 pixel movement. In this graph you can
see that the declination moved approximately 1 pixel three times whereas
the RA only did that once (far left). If your ratio between your imaging
scope and guidescope is good then one pixel movement is not bad at all and
should produce nice round stars.

The example above shows good ability to correct without moving the scope
too much in any direction; however, note how many corrections there are.

There is no simple way to adjust PHD for every scope/mount/guider application. What I suggest is that you have the graph open as above and
slowly adjust each setting until you see your graph get better or worse,
being sure to allow plenty of time between adjustments. Once you get the
least amount of corrections (a perfectly flat line would be perfect, and that
will never happen) write these settings down and use them as a baseline for
future adjustments.

One problem I have heard about is a distinctly choppy pattern in your Dec movement. Under Dec Algorithm you can switch between Resist Switching and Lowpass Filter to see if that solves your issue.

When setting up PHD for guiding there are a couple of rules. First, never
have more than one star in or touching the green guiding box. Second,
never get the green box too close to the edge of the screen. Keep it in the
central 75-80% of the screen.

When adjusting the settings in PHD it is important to have an idea as to what each setting is doing:

RA agr is the RA aggressiveness and it will apply this number’s percentage to the corrections. So if the box shows 100, it will apply 100% of the correction to the RA. Common sense says use 100 here, but in practice a slightly lower number can work better by not being quite so jerky.
RA hys is the RA hysteresis and it applies this percentage of correction
from the previous correction to the upcoming correction. This is nice if
your mount tends to have the same error continuously such as with
periodic error. Generally speaking keep this number low.
Mn mo is minimum motion, the amount a star is allowed to move before any corrective action is taken. If you set this to .5, no correction is made unless the star moves more than half a pixel; at .05, corrections are made whenever the star moves more than 1/20th of a pixel. Both values are extreme: high values like .5 let the star drift too far before being corrected, causing large jerking corrections, while low values like .05 cause lots of small corrections and oscillations. Something in the .15-.25 range is where you need to be.
Mx ra is the maximum RA correction duration the software can make
at a time. This is measured in milliseconds. It has been speculated that
sometimes setting this value very high, such as well over 1000ms, can
help with issues on certain setups like Celestron mounts. It seems that
on occasion the mount will not make smaller corrections because of a
design flaw in the mount. Unfortunately this also allows the image to
blur so it really is not a fix.
Mx dec is maximum dec correction duration and is basically the same
as above but for declination.
The drop down box on the far right is the declination guide mode and
is usually set to auto. You can turn it to off if your polar alignment is
excellent which can eliminate unnecessary corrections.
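To see how these settings interact, here is a toy sketch of a single RA correction in the spirit of PHD's hysteresis algorithm. This is a simplification for intuition only, not PHD's actual source code, and the ms_per_px calibration number is invented:

```python
def ra_pulse(offset_px, last_offset_px,
             agr=0.9,        # "RA agr" as a fraction (90%)
             hys=0.1,        # "RA hys" as a fraction (10%)
             mn_mo=0.2,      # "Mn mo" in pixels
             mx_ra=1000,     # "Mx ra" in milliseconds
             ms_per_px=150): # invented calibration: pulse ms per pixel of error
    # Below the minimum-motion threshold, do nothing
    if abs(offset_px) < mn_mo:
        return 0.0
    # Hysteresis blends in a fraction of the previous offset
    blended = (1 - hys) * offset_px + hys * last_offset_px
    pulse_ms = agr * blended * ms_per_px
    # Clamp to the maximum pulse duration
    return max(-mx_ra, min(mx_ra, pulse_ms))

print(ra_pulse(0.1, 0.4))         # 0.0 -- ignored, below Mn mo
print(round(ra_pulse(0.5, 0.3)))  # ~65 ms pulse
```

The tradeoffs described above fall out directly: raising agr scales every pulse up, raising hys smooths repeated errors such as periodic error, and mn_mo sets the dead zone inside which the mount is left alone.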
This should be enough to get you started tweaking PHD or your choice of
guiding software. Remember the basics are the most important: balance,
imaging scope to guidescope ratios, polar alignment, and being under the
mount’s maximum weight. No amount of “tweaking” can repair the damage
ignoring these fundamentals can do.

The settings above and a few more can be found in the settings dialog box
accessed from the main PHD screen by clicking on the brain icon when
PHD is not guiding:

Figure 185 PHD Advanced setup screen.

While PHD provides some excellent information on what your setup is doing, there is a new way to get even more information to help you fine tune your setup: PHDLab (freeware, PC/Mac), which is available here:

www.countingoldphotons.com
In addition to downloading that software you will need the Python runtime
which runs programs written in the Python programming language. If you
have a Mac, it should already be installed. If you run a PC, you will need to
download the 32bit version 2.7.x where the x is the latest version in the 2.7
series from:

www.python.org/download

Be sure you do not download newer major versions of Python such as the 3.3 series; these will not work. Also note that even if you are on 64bit Windows, the author says to download the 32bit Python runtime.
Figure 186 PHDLab main screen with a PHD log file loaded and displayed.

Unfortunately I know almost nothing about this software, at least nothing I am confident enough to share with you. What I do know (and the reason I am including it) is that this has the potential to revolutionize the way we keep an eye on, and tweak, our guiding session. The amount of information here is impressive.
Figure 187 PHDLab guide box screen.
3.7 Create a custom light box for flats

Since starting to work on my post processing I have noticed the need to start shooting flats. The problem is, you must shoot flats without moving the camera, scope, focus, anything. This means they have to be shot on site, right before or right after shooting your lights.
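As a reminder of why the flats matter: calibration divides each light frame by a normalized master flat, cancelling the vignetting and dust shadows recorded under the same optical setup. A minimal numpy sketch with invented toy values:

```python
import numpy as np

falloff = np.linspace(1.0, 0.7, 4)         # simulated vignetting across the frame
light = 1000.0 * np.tile(falloff, (4, 1))  # light frame dimmed by the falloff
flat = 20000.0 * np.tile(falloff, (4, 1))  # flat records the same falloff

master_flat = flat / flat.mean()           # normalize so the mean is 1.0
calibrated = light / master_flat           # the falloff divides out

print(np.allclose(calibrated, calibrated[0, 0]))  # True: field is now uniform
```

This is also why the camera, scope and focus cannot move between lights and flats: the division only cancels artifacts that appear identically in both.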

There are a couple of problems with that. All the designs I have seen are
large boxes. I don’t want to carry around a large box to the dark site and
besides, something that large might disturb the dust bunnies and mess up
the whole idea of flats. Next problem is if anyone else is there, flicking on a
light could get me shot (this is Texas!). So what do I do?

First thing I do is come up with a list of what it needs to be able to do, so here goes:

1) It must be easily portable, small and light. Anything heavy can mess up
the scope’s setup.
2) It must be reasonably accurate. The light must be uniform in
illumination.
3) It must be reasonably inexpensive, the EL panels I have been looking at
run about $100 for just the light; let’s keep it under that.
4) It must be usable when other imagers are right next to me, no light leaks.
5) It must be serviceable, meaning I can repair it, replace things, etc.

What I built probably won’t work for you, and the idea isn’t for me to build
one for you, but to give you enough know-how and ideas to build one
yourself that will work equally well for you.

Off to Home Depot I went! I know they thought I was some terrorist getting bomb supplies; I walked up and down every aisle grabbing weird items, putting others back, fitting together things that were completely unrelated. Boy did I get some weird looks! After about an hour I left with this:
Figure 188 Supplies bought at Home Depot.

These are two 4″ PVC sewer pipe connectors, a 6″ flashing connector for (I
think) a stove exhaust vent, a can of PVC glue, two translucent lids, a 6″
plastic floor drain grill, and a bag of bolts.

Next stop, Radio Shack!


Figure 189 Supplies from Radio Shack.

Here I found 4 white 3v LEDs, 4 LED mounts, a rocker switch, a 4 AA battery holder, and a project box. Next stop, Wal-Mart!
Figure 190 Supplies from Wal-Mart.

Left to right we have a box of male and female electrical connectors, some
Styrofoam plates, glue and some Velcro.

Now it’s time to start working on stuff. The first thing I needed was a proof of concept. For this I put things together, mounted it on the scope with just some clear tape, and used one of those battery powered lights you press down on the top to turn on. That gave me my first flat:
Figure 191 Flat test.

This clearly shows I need flats. This image has been stretched and desaturated; it was brown (I used an incandescent bulb). Next was to test out the batteries and LEDs:

Figure 192 Testing on a breadboard.


Good! Now I know I can get them all lit up. Let’s mount the battery pack to
the top of the project box with hot glue:

Figure 193 Mounting the battery pack.
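One wiring note the photos don't show: if you copy a build like this, each LED generally needs a current-limiting resistor. The figures below (6 V pack, ~3 V forward drop, 20 mA) are typical textbook values for white LEDs, not measurements from this particular build:

```python
def led_resistor_ohms(v_supply, v_forward, i_amps):
    # Ohm's law across the resistor: R = (Vsupply - Vf) / I
    return (v_supply - v_forward) / i_amps

# 4 x AA pack ~6 V, white LED ~3 V forward drop, 20 mA target current
print(round(led_resistor_ohms(6.0, 3.0, 0.020)))  # 150 -> use the nearest standard value
```

Undersizing or omitting the resistor shortens LED life and makes the brightness drift as the batteries sag, which matters when you want repeatable flats.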


Figure 194 Drilling holes for the LEDs.

Now we drill holes in the project box, four large ones for the LED mounts
and two (well, four now because I goobered!) smaller ones to bolt the
project box to the drain grill.
Figure 195 Mounting the box.

Now we bolt the box on the grate and install the LEDs.
Figure 196 Inside view.

I open the package of Velcro and take the fuzzy strip and run it around the
inside of the 6″ metal connector, on the opposite end of where it will mount
to the drain grill as this will protect the paint on the outside of the dew
shield. Next we glue the two 4″ sewer pipe couplers together and mount
them inside the 6″ metal connector.
Figure 197 Outside view.

Above is the outside view.


Figure 198 First diffuser installed.

After a little wiring, we cut the foam paper plates into two circles for
diffusers. Here is the first one installed.

Figure 199 Light through only the first diffuser.


So I turn on the lights and there is a problem: the light is nowhere near even
enough to take a flat.

But I am not as stupid as I look! I had actually planned on this, so I install the second diffuser four or so inches in front of the first diffuser and I get this:

Figure 200 Light through both diffusers.

HA! Nice even illumination! Let’s put it on the scope:


Figure 201 Installed on the nose of the scope.

And take a flat to see how it works:


Figure 202 Flat file taken with the new light box.

So a little information:

Size = 7.5″ diameter x 10″ tall/long
Weight = 2lbs 4oz with batteries
Cost = $60 buying everything except a little wire and solder
Time to construct = About 3 hours
Exposure for 40% saturation on histogram = ISO800 1/60th sec

Now since the light goes down the inside of the sewer pipe couplers, to leak
out it would have to come back up the outside of the couplers, curve around
the end of the scope, and go down the metal coupler on the outside past the
Velcro. I don’t see much light doing that.

I have now used this for several months and the only thing I would change
is I need to come up with some kind of latch for the batteries in the holder
because if it bounces around enough in the back of your car the batteries
can come loose.

Hopefully this has given you some ideas.


3.8 Remote observatories

Remote observatories are just that: observatories in remote areas that typically have excellent locations (high in the mountains in the middle of nowhere) and excellent telescopes (including 32″ research grade telescopes), and are remotely operated through the internet for a fee.

So why in the world would you consider using one of these services?

First off, some locations are outside your hemisphere which allows you to
shoot objects you could not otherwise shoot. For example, if you are in the
northern hemisphere there are several remote observatories with locations in
Australia which will allow you to shoot any targets in the southern hemisphere
you like.

Secondly, ease of use. All you do is pull up your web browser, log in, input
your target information, and let the data flow in. No messing with cables and
no worrying about a camera failing.
Figure 203 An array of camera lenses at New Mexico Skies observatory.
Figure 204 The RC Optical Systems 24” telescope at one of the New Mexico Skies observatories.

Thirdly you would have access to equipment that most of us could not afford.
I know I can’t afford over $50,000 for a single scope! This translates into not
just high end scopes and mounts, but CCDs and filters as well. Many of these
scopes are suitable for serious scientific research and are actually rented by
professional scientists.

Lastly, because you may want to see what you can do with a remote setup
before purchasing your own. Most of the companies will give you tons of
specifications on exactly what you are renting time on. You can use this
information to help you decide on what you want to buy. Many of these
services have “lower end” scopes (read that as scopes many of us might be
able to actually afford) so it is an excellent way to see what they are capable
of.
So how much do they cost? That is a rather convoluted question to answer so
let me give you a few options from one company. Their prices currently range
from approximately $53/hour for a one shot color Takahashi 106mm to
$208/hour for a monochrome PlaneWave 510mm assuming there is no
moonlight. Billing is normally by the minute and prices can vary depending
on how much time you purchase in advance.
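Because billing is per minute, estimating a session cost is simple pro-rating. The hourly rates come from the text above; the 90-minute session length is an invented example:

```python
def session_cost(rate_per_hour, minutes):
    # Per-minute billing: pro-rate the hourly rate
    return rate_per_hour * minutes / 60.0

print(session_cost(53, 90))   # 79.5  -> 90 min on the Takahashi 106mm
print(session_cost(208, 90))  # 312.0 -> 90 min on the 510mm
```

Prepaid-block discounts and cloud/equipment-failure refund policies change the effective rate, which is why it pays to read each observatory's terms.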

Be sure to take a look at all the features and payment schedules before
selecting a remote observatory. Some may require monthly payments while
others may not. Some may have only high end scientific grade equipment
while others may have more moderate equipment available for use. Some may
have policies that say you will not be billed for time if it is cloudy, if they
have equipment problems, or for some other reason your target did not get
captured correctly.

Another feature of some is the ability to manage groups. In other words, one
person can purchase credits for imaging time and then distribute those credits
and manage other users within their group. Think of it this way. An astronomy
club president can use some of the club’s dues to purchase a certain number of
credits per month, which the group’s members can then use whenever they
want for whatever they want.

Another way to look at it: a research department head can purchase a certain number of credits and then create users under his account who can use a portion of those credits as he assigns research projects to people on his team.

Does this sound interesting? Want to try it cheap, or maybe even free? Several
companies will give you a huge discount or even a little free time on their
system for being a new customer. Want more? How about a totally free
system? Give Bradford Robotic Telescope a try at:

www.telescope.org

and see what they can do for you. Keep in mind that since this service is free,
you are likely to have to fight over time and don’t get refunds if something
goes wrong (hard to refund $0 now isn’t it?). They also tend to service school
requests first so as an individual you may have to wait a while.

If you decide you need more capabilities, better access and more service, you
can try New Mexico Skies at:

www.nmskies.com

and Sierra Stars Observatory at:

www.sierrastars.com
3.9 Something different: Spectroscopy

Sometimes it is fun doing something completely different than most of the people out there. I like being different, the odd man out, learning things that are unique. This is what drove me to spectroscopy.

Spectroscopy is the study of the spectrum of light emitted, reflected or absorbed by stellar objects. It can tell us many things, such as the composition of a star or the rate of expansion of the shell of a supernova.

The reason it is included in this book is it can be a fun way to extend your AP
and you already have almost everything you need to get started imaging
stellar spectra, so why not play?

To start with, you need two things. First, a way to break light into its
components and you can do this with a grating filter.

Figure 205 The Star Analyser 100 1.25” grating filter.

Second, you need software to process the images you get. I recommend you
get both from:
www.rspec-astro.com

Tom Field over at RSpec is a great guy to deal with and will be happy to set
you up with the Star Analyser 100 grating and his fantastic software RSpec.

Now you can certainly start out with just the grating and use free spectroscopy software such as Visual Spec (which I am sure is excellent software, though I could never get it to do what I wanted; I am a little dense). I went from Visual Spec to RSpec because RSpec was infinitely easier for me to start using, and the support for RSpec, directly through Tom and through his Yahoo Groups support group, was beyond fantastic.

Let’s start with what you can get from this and then see how to do it.

Figure 206 Standard spectrographs of common star types taken with a DSLR.

In the image above you can see spectrographs of the seven standard star types,
O, B, A, F, G, K and M (Oh Be A Fine Girl, Kiss Me) in addition to three
others, C for carbon stars, S for stars that are giants or supergiants that are not
quite carbon stars yet, and W for Wolf-Rayet showing the howling winds of
its exposed helium shell.

So how do you know what is what?


Figure 207 Spectrograph showing the hydrogen Balmer absorption lines of Mizar.

In this image we see three labeled vertical light colored lines in the graph at the top (see the arrows), and their corresponding black “gaps” at the bottom. These are the hydrogen Balmer absorption lines.

In 1885 Johann Balmer introduced the Balmer formula, which shows which wavelengths of light are absorbed by hydrogen when light passes through it. We call these the hydrogen Balmer absorption lines. They are used as signposts to calibrate the spectrum we captured.
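Balmer's formula makes those signpost wavelengths easy to compute. Using the standard form λ = B·m²/(m² − 4) with B ≈ 3645.6 Å:

```python
B = 3645.6  # Balmer's empirical constant, in angstroms

def balmer_line(m):
    # Wavelength of the hydrogen line for upper energy level m (m >= 3)
    return B * m**2 / (m**2 - 4)

for name, m in [("H-alpha", 3), ("H-beta", 4), ("H-gamma", 5)]:
    print(name, round(balmer_line(m), 1))
```

The results land within an angstrom or so of the measured Hα (~6563 Å), Hβ (~4861 Å) and Hγ (~4340 Å) lines, which is what makes them such reliable calibration signposts.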

So how do we go about this process?

The first thing we do is set up for imaging just like we always do, with a twist.
We add the Star Analyser 100 grating filter into the imaging train, in my case
on the end of an adapter such as this:

DO NOT TOUCH A SPECTROSCOPY FILTER OR GRATING EXCEPT AROUND THE EDGES. OIL FROM YOUR HANDS CAN CONTAMINATE THE SPECTRUM GENERATED.
Figure 208 Image train used for 1.25” filters such as the Star Analyser.

The far right end of the adapter in the center of the image is threaded for 1.25”
filters and this is where I screw the filter on.

Focus the assembly until you get a visible spectrum. Then we need to line up the spectrum so that it appears level across the frame. You can use the spacers that come with the filter, or leave the filter unscrewed just a little until it is directly horizontal in the frame. We also want both our star and our spectrum as close to centered as we can get them, to make up for any vignetting that may occur due to using a 1.25” filter and adapter.

We level the image because, if you remember, the Bayer matrix is made up of squares, so you get a higher resolution if the image of a line (the spectrum line) is either directly horizontal or directly vertical, as it does not have to “stair step” pixels.

In the next image you can see the stars on the left as the small dots of light.
The brighter one on top is our target, the other one just happens to be there
and we are ignoring it.
Figure 209 Spectrum focused and leveled.

Once you have it level and locked in you can start the fine focusing. Zooming in with live view really helps at this point. You need to focus on the spectrum; just completely ignore the star. What you are trying to do is get the absorption lines to be as clear and distinct as possible. Once this is done, lock in the focus.

As far as exposure goes you need to play around with it. You want the image
as bright as possible while maintaining the distinct features in the absorption
lines. Too much exposure and it causes things to glow and the absorption lines
become indistinct. Too little exposure and it is too hard to see the spectrum.
You need to find the best balance.

Remember that this is AP as well, so you can do the same thing stacking
images here that you do in standard AP to bring out more detail in the image.

Once you have a well-defined spectrum to work with like this:


Figure 210 Spectrum showing well defined hydrogen balmer lines.

you can start processing the images. I have added the arrow points to show
you where the balmer lines are.

What follows is my processing method. It is not necessarily correct, accurate, or even recommended. It assumes you have at least glanced at the manual
for RSpec and are reasonably familiar with the interface. If you would like to
follow along with the processing you can obtain a trial version of RSpec from:

www.rspec-astro.com

and then get the Mizar spectroscopy processing example from this book’s
website at:

www.allans-stuff.com/leap

I find RSpec works faster if I convert and rotate the image in Photoshop
before beginning. Your mileage may vary.

Open the image, do any rotating necessary and move the lines to right above
and right below the spectra making sure not to touch the spectra or star
(between the arrows in the image below).
Figure 211 Adjusting for spectra.

I typically also check the Subtract background box you see in the image
(bottom center) and leave the defaults.

Now we need to calibrate the image:


Figure 212 Initial calibration of the spectra.

I already know that my camera, with my adapter, has an Angstrom/Pixel of 3.311, so I click in the top box (arrow 1), then click on the very top of the first really tall straight up and down line (this is the star, arrow 2), then make sure the next box down is set to 0 (arrow 3), make sure Use One Point Alignment is checked (arrow 4) and that my Angstrom/Pixel is set (arrow 5). Click Apply.

Before we go any further, you are probably wondering how I know my angstrom/pixel. This number can vary with camera pixel size, pixel density, sensor size, grating lines/mm, grating distance from sensor, and telescope focal length. Wow! So how do you figure out what it is?

The trick is to calibrate a known star against a known correct spectrum. In other words, you find out where the hydrogen alpha (Hα) line is on Sirius, then you see where that line is on your image. Your line will be some number of pixels from the zero point (arrow 2 above) and we use that to calculate your angstrom/pixel. So let’s say your image shows the Hα line is 13,000 pixels away from the zero point and we know that Sirius has a Hα line at 6562 angstroms. 6562 / 13,000 ≈ 0.50 angstroms/pixel.
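That calibration is a one-line division: the known wavelength over the measured pixel offset. The pixel offset below is invented, chosen so the result lands near the 3.311 Å/pixel figure used earlier:

```python
H_ALPHA = 6562.8  # rest wavelength of hydrogen-alpha, in angstroms

def dispersion(line_angstroms, pixels_from_zero):
    # Angstroms per pixel = known wavelength / measured pixel distance
    return line_angstroms / pixels_from_zero

# Hypothetical measurement: H-alpha found 1982 pixels from the zero point
print(round(dispersion(H_ALPHA, 1982), 3))  # 3.311
```

Once you have this number for your camera, adapter and scope, it stays valid until you change any part of that optical chain.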

So how do we know where the Hα line is for Sirius to start with? You
download the book “Spectroscopic Atlas for Amateur Astronomers” by
Richard Walker from:

www.ursusmajor.ch/downloads/spectroscopic-atlas-3_0-english.pdf

Walker’s book has a ton of information, including detailed spectra of many stars, and if you are even mildly interested in spectroscopy you should get this free book. I went so far as to print it out, take it to Office Depot and have it bound with hard plastic covers.

OK, so how do I know where the Hα line is on the image from my camera? It
should be one of the first real dips in the line on the right hand side of the
spectra. You can play around with different angstrom/pixels until you get one
where the dips for Hα, Hβ and Hγ all line up. This will be your correct
angstrom/pixel number.

Let’s move on...

Now click Reference -> Edit Points to trim the spectra to between 4200 and
6700 angstroms (this is the range my camera is reasonably sensitive to).
Figure 213 Cropping the spectra.

Now we save the profile as Mizar-Start by clicking on the little blue floppy
disk icon. Click on Reference -> Edit Points to smooth the curve:
Figure 214 Smoothing the initial spectra.
Figure 215 Close-up of line smoothing.

Note that I smooth it enough to get out the roughness but not so much as to
remove the general large characteristics of the spectra. In the image above the
jagged line is the main profile while the smooth line is the reference line we
are smoothing. Save the reference as Mizar-A by clicking on the blue floppy
disk at the top and then clicking Reference on this screen:

Figure 216 Save box when multiple profiles are on the grid.

Click Reference -> Close Reference Series. Click the folder icon just to the
left of the blue floppy to load the standard reference file for this star type:
Figure 217 Loading the standard reference for Mizar.

In this case I loaded the a2v.dat file for Mizar since it is an A2V type star.
Note that some star types will not be represented in data files here. You can
normally use something close. For example, if I had not had an A2V type
reference file, I could have used the A2i file and been reasonably close. Now
we need to trim the profile to between 4200 and 6700 angstroms just like we
did last time using the Reference -> Edit Points function which leaves us
looking at this:
Figure 218 Profile for a2v type stars loaded and cropped for 4200-6700 angstroms.

We now use the Reference -> Edit Points function to smooth the line:
Figure 219 Crop of part of the smoothed a2v reference profile.

Note again that I smoothed out all the roughness but left the general profile features intact.

Save this reference as Mizar-B.

Use Reference -> Close Reference Series and then click on the open folder
and open Mizar-A as the main profile. Click on Reference -> Open Reference
Series and open Mizar-B as the reference.
Figure 220 Mizar-A and Mizar-B profiles loaded.

Click on Reference -> Math on 2 Series and use the settings in the next image,
click on Calculate, then Move To Profile.
Figure 221 Using Math on 2 Series function.

You can close the Math window, click Reference -> Close Reference Series.
Save this profile as Mizar-C. Open Mizar-Start as the main profile and open
Mizar-C as the reference.
Figure 222 Mizar-Start and Mizar-C both loaded on the grid.

Click Reference -> Math on 2 Series and do the math again with the same
settings.
Figure 223 Math calculations on Mizar-Start and Mizar-C.

Click Move to Profile and then Close the math window. Click Reference ->
Close Reference Series and then Reference -> Edit Points and smooth the line.

Figure 224 Final line smoothing.

Pay close attention here. Note in the image above that it is just a curved
line; I am not preserving anything other than the general slope. This is
critical! There should be no bumps or dips at all in the smoothed reference
line. It should just be a smooth line running from roughly the same start and
end positions as the profile on the grid.

Now we do Reference -> Math on 2 Series again and see this:

Figure 225 Final Math on 2 Series.

Click Move to Profile and Close the math window. If there is a reference
profile open, close it. You can now click on the Appearance button and create
reference lines and add text to your profile:
Figure 226 Completed profile for Mizar.
Notice that my reference lines are almost an exact match for where the
absorption lines should be. Some of the error I am sure is mine, some is
because it was a pretty poor night when I shot the image, and some of it is
probably variance in my camera’s response to the light. Using a DSLR is not
the ideal way to go, but it can be done and it is a lot of fun!
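For the curious, the chain of Math on 2 Series operations above amounts to a pair of point-by-point divisions. Here is a minimal sketch, assuming the Math function is dividing the profiles; the arrays are made-up toy values on a shared wavelength grid, not real spectra:

```python
import numpy as np

# Toy spectra on a shared wavelength grid (illustrative values only).
observed_smooth = np.array([1.00, 0.90, 0.80, 0.85, 0.95])    # Mizar-A
library_reference = np.array([1.00, 0.95, 0.90, 0.90, 1.00])  # Mizar-B (a2v.dat)
observed_raw = np.array([1.02, 0.88, 0.79, 0.86, 0.97])       # Mizar-Start

# First division: smoothed observation / library reference leaves only the
# instrument signature (camera + optics response), i.e. Mizar-C.
instrument_response = observed_smooth / library_reference

# Second division: raw observation / instrument response removes that
# signature from the data, leaving the corrected profile.
corrected = observed_raw / instrument_response

print(corrected)
```

The point of the two-step dance is that the instrument response is computed once from a star of known type and can then be divided out of the raw data.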

I would like to leave you with this: Spectroscopy is new to me, so take this
processing sequence with a truck load of salt. It seems to work pretty well for
me so I am happy at this point. I am not going to go more in depth until I can
get a monochrome CCD and do it right. What I can tell you for sure is that it
is very thought- and process-intensive, but that is half the fun! Seeing spectra you
imaged line up with the absorption lines shot by professionals is pretty neat,
particularly when you are learning about spectra, absorption lines and stars in
the process.
3.10 Closing notes

Astrophotography can be an extremely rewarding experience. It can also
consume a lot of time, effort, money and patience. I certainly do not want to
talk anyone out of such a great and sometimes life-altering pursuit, but at the
same time I would feel remiss if I did not at least warn you of the downsides.

I have heard horror stories of people spending thousands of dollars on
equipment only to find out the amount of time needed far exceeded what
they had available, and then selling everything at a huge loss. On the other
side I have seen people beat their heads into the wall for hours, days, weeks
and months because they refused to spend money to get where they wanted
to be.

With all that being said, the time I have spent in astronomy has been some
of the most rewarding times of my life, and I hope will continue to be. I
have queued up eight hours of imaging on the computer only to see clouds
roll in an hour later and trash my images. Instead of getting mad, I turned
off the show I was watching on my iPad and just watched the beauty of the
clouds rolling through. It was almost a religious experience. I have spent
half an hour scraping ice off equipment and my windshield so I could go
home at six in the morning, snickering at the whole situation. I have
stopped at the gate leaving the observatory, leaned up against the rear of my
car after locking the gate behind me and just smiled as I watched the sun
peek over the trees.

I feel that the pros far outweigh the cons, but I also realize many other
people would disagree. This is an endeavor that needs to be thought about
long and hard before you get hip deep into something you won’t enjoy, or
before you pass up what could be one of the greatest passions of your life. I
can only hope this book has helped you make your own decision, and if you
decided to join the rest of the AP lunatics, it gave you a direction to proceed
with your own private lunacy.
I also want to urge you to come by my website at:

www.allans-stuff.com/leap

Check out the videos, join the forum and tell me what you think of the
book, good or bad. If you have suggestions or comments, I would love to
hear them.
3.11 Where to go from here

Oh boy, are there a lot of places you can go, so here are some suggestions:

Astronomy equipment:
Orion Telescopes - www.telescope.com - 1-800-447-1001
Agena Astro - www.agenaastro.com - 1-562-215-4473
Oceanside Telescope - www.optcorp.com - 1-800-483-6287
Shoestring Astronomy - www.shoestringastronomy.com
Astromart (used) - www.astromart.com
ScopeStuff - www.scopestuff.com
Hayneedle - www.telescopes.com

Online forums:
Astronomy Magazine - www.astronomy.com
Stargazers Lounge - www.stargazerslounge.com
Cloudy Nights - www.cloudynights.com
Ice In Space - www.iceinspace.com
CosmoQuest - www.cosmoquest.org/forum
Astromart - www.astromart.com/forums/
Telescope Junkies - www.telescopejunkies.com
Allan’s AP Forums - www.allans-stuff.com/forum/

Specializations:
Spectroscopy - www.rspec-astro.com
Radio Astronomy - www.radio-astronomy.com
Photometry, etc. - www.citizensky.org

Camera control software:
DSLRShutter - www.stark-labs.com
Images Plus - www.mlunsold.com
MaxIm DL - www.cyanogen.com
BackyardEOS/NIK - www.backyardeos.com

Image processing software:
Images Plus - www.mlunsold.com
PixInsight - www.pixinsight.com
Nebulosity - www.stark-labs.com
MaxIm DL - www.cyanogen.com

Mount control software/Planetarium software:
TheSkyX - www.bisque.com
Starry Night - www.starrynight.com
Stellarium - www.stellarium.org
Cartes du Ciel - www.ap-i.net
C2A - www.astrosurf.com

Session planning software:
Astroplanner - www.astroplanner.net
Skytools - www.skyhound.com
Deep Sky Planner - www.knightware.biz

DSLR modification service:
Spencer’s Camera - www.spencerscamera.com - 801-367-7569
Life Pixel - www.lifepixel.com - 1-800-610-1710

Remote observatories:
Sierra Stars - www.sierrastars.com
New Mexico Skies - www.nmskies.com
Bradford Robotic - www.telescope.org
Supernova 2012aw captured within days of its appearance
4.0 Top 25 targets to start with

When I started AP there were plenty of lists: Messier lists, Caldwell lists,
Herschel lists and more. The problem is that not all targets are created equal.
Here I have compiled a list of my favorite medium to large sized targets to get
you started. They are some of the easier targets, with a few more difficult ones
thrown in to help get you ready for the really difficult ones later.

Each image is accompanied by a small star chart printed from AstroPlanner to
give you an idea of where it is in the sky. To read the charts, you will need to
know the abbreviations for the constellations. They are:
Messier 8 – Lagoon Nebula
Messier 11 – Wild Duck Cluster
Messier 13 – Great Globular Cluster in Hercules
Messier 16 – Eagle Nebula
Messier 17 – Omega Nebula
Messier 20 – Trifid Nebula
Messier 27 – Dumbbell Nebula
Messier 31 – Andromeda Galaxy
Messier 33 – Triangulum Galaxy
Messier 42 – Orion Nebula
Messier 45 – The Pleiades
Messier 81/Messier 82 – Bode’s Galaxy and the Cigar Galaxy
Messier 78 – Diffuse nebula
Messier 101 – Pinwheel Galaxy
Caldwell 4 – The Iris Nebula
Caldwell 11 – Bubble Nebula
Caldwell 19 – Cocoon Nebula
Caldwell 20 – North America Nebula
Caldwell 31 – Flaming Star Nebula
Caldwell 33 – Eastern Veil Nebula
Caldwell 34 – Western Veil Nebula/Witch’s Broom
Caldwell 49 – Rosette Nebula
Caldwell 63 – Helix Nebula
New General Catalog 7030 – Flaming horse nebula
New General Catalog 1499 – California Nebula
Streaking stars and the glow of light pollution at the SHSU observatory
5.0 Glossary

A/D converter (ADC) - Analog to digital converter. A camera sensor records light as an analog signal which the A/D converter then converts into digital information.

Achromat – A type of refractor typically with two lens elements to correct for chromatic aberrations. This type of scope is not well suited for astrophotography.

Afocal - A means of taking an image through an eyepiece of a telescope without removing the lens from the camera.

Alt/Az - Altitude Azimuth, a type of telescope mount that moves up and down, left and right, as opposed to the smooth rolling motion of an EQ mount which accurately tracks the motion of the stars around the earth.

Amp glow - Some cameras show glowing on a long exposure image. This
usually manifests itself in the corners of the image first and then can spread
towards the center. A moderate amount of this can be removed using dark
frames. Severe cases cannot be corrected.

Aperture - In telescopes, the diameter of the opening at the front of a telescope, usually measured in millimeters. Can also be measured in inches for larger scopes. In camera lenses there is a diaphragm inside the lens that controls the aperture, which is sometimes referred to as an F-Stop.

Apochromatic (APO) – A type of refractor extremely well adjusted to remove most or all chromatic aberrations. Can have two, three, or more lens elements. Higher end versions almost always have three or more elements. Excellent for astrophotography uses.

Arc Minute – The sky spans 360 degrees as it goes all the way around us. One arc minute is 1/60th of one of those degrees.

Arc Second – Equal to 1/60th of an arc minute.
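The arithmetic behind these two units in code form:

```python
# Degrees, arc minutes, arc seconds: each step is a factor of 60.
ARCMIN_PER_DEG = 60
ARCSEC_PER_ARCMIN = 60

arcsec_per_degree = ARCMIN_PER_DEG * ARCSEC_PER_ARCMIN
print(arcsec_per_degree)      # 3600 arc seconds per degree
print(1 / arcsec_per_degree)  # one arc second expressed in degrees
```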

Artifacts - Errors or unwanted signals in the image.

ASCOM - Abbreviation for AStronomy Common Object Model, a standard in the astronomy equipment industry for the control interface design of astronomical equipment such as mounts, focusers, motorized domes, etc.

Astrograph - A type of Newtonian telescope that is designed specifically for astrophotography.

Astrometry – Extremely precise measuring of objects like comets and asteroids.

Astrophotography - Photography of objects in the sky.

Autoguider - A camera and associated equipment used to increase the accuracy of the mount in tracking the stars.

Audio Video Interleave (AVI) – A wrapper for computer video files, can
contain a variety of different formats, typically video for Windows formats,
and has a file extension of .AVI.

Back Focus – The distance needed to be able to attach a camera onto a telescope focuser and still bring the image projected onto that camera’s sensor into focus.

Backlash – Unwanted spacing between gear assemblies, usually resulting in some “play” or “slop” with the device. This is normally used to describe issues with a mount but can be applied to anything with gears.

Baffles – Ridges running around the inside of the light path in a telescope
to prevent the scatter of light inside the telescope and provide an image with
greater contrast.

Bahtinov mask - A mask or cover that goes in front of a telescope with a specific pattern of slits designed to provide easy focusing of point light sources such as stars.
Barlow - An optical device that increases the magnification or reduces the
field of view, depending on how you look at it. This trades some image
quality and light for more magnification. These plug into the optical train
just before the eyepiece.

Bayer matrix - In color one shot cameras (any camera that produces a
single color image in one exposure) the pixels are grouped in groups of
four, one red, one blue and two green. These are combined to generate the
color information for that area of the image. The matrix is the array of
colored filters over the pixels that accomplish this.

BFA - Bayer Filter Array, see Bayer matrix above.

Bias frame - An image taken with the highest shutter speed possible on a
given camera at the same ISO and temperature of the light frames. This is
used to subtract the camera’s electrical signal present in every frame it takes
from the final image.

Binning – A process of combining multiple pixels in order to boost sensor sensitivity at the expense of resolution. For example, 1x1 binning means each pixel counts as one pixel and is in effect not binned; 2x2 binning would take a square of 4 pixels and combine them into one “super pixel”.
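The 2x2 "super pixel" idea can be sketched with a small array. This is a minimal NumPy illustration, not any camera's actual driver code:

```python
import numpy as np

# 2x2 binning: combine each 2x2 block of pixels into one "super pixel"
# by summing the four values (a NumPy sketch, not camera driver code).
def bin2x2(img):
    h, w = img.shape
    img = img[: h // 2 * 2, : w // 2 * 2]  # drop an odd row/column if any
    return img.reshape(h // 2, 2, w // 2, 2).sum(axis=(1, 3))

img = np.arange(16).reshape(4, 4)
print(bin2x2(img))  # four super pixels from sixteen pixels
```

Each output value carries the light of four input pixels, which is exactly the sensitivity-for-resolution trade described above.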

Binos - Short for binoculars.

Bino-Viewer - A device that allows attaching two eyepieces to a standard telescope so you may view objects in stereo.

Bit - A single bit can be either on or off, representing either 0 or 1. Computers use this as the basic language of everything they do.

Bit depth - A measurement of something like the number of colors an image can contain, using base two math. For example, a 1 bit scale can contain two possible values, a 2 bit scale can contain 4, a 4 bit scale can contain 16, and an 8 bit scale can contain 256.
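The base-two math in code form:

```python
# An n-bit scale can represent 2**n distinct values.
levels = {bits: 2 ** bits for bits in (1, 2, 4, 8, 12, 14, 16)}
for bits, count in levels.items():
    print(f"{bits}-bit scale -> {count} possible values")
```

The 12-, 14- and 16-bit entries are the depths you will most often meet in DSLR RAW and astronomy CCD files.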

Black point - An area of an image that represents absolute black.


Blooming - In a camera, once a pixel has received as much light as it can
handle, the voltage can spill over into adjacent pixels causing them to be
brighter than they should.

Bortle scale – Astronomer John Bortle developed a scale of nine levels which represents the “true darkness” of a site, or the amount of light pollution present.

Bulb exposure - DSLRs and other cameras can be used in this mode. As
long as the shutter release button is held down, the shutter is open and the
camera is exposing the image.

CCD - Short for Charge-Coupled Device, a type of sensor used in digital cameras. In astrophotography it is usually used as a reference to a camera designed and used specifically for astrophotography as opposed to a digital SLR or other multi-use digital camera.

Celestial equator - An imaginary line which is basically the equator of the earth projected up into the sky.

Center mark – A dot placed exactly in the center of the primary mirror of a
Newtonian to aid in collimation.

Chromatic aberration – When light passes through the optical path it is split into its component colors and then rejoined at the focal point. When this is not done exactly perfectly you can see some “glowing” around bright objects, usually blue or violet, sometimes referred to as “fringing”.

Clip - Clipping an image means you have cut off one end or the other of the
image’s ability to record data. Clipping the highlights for example means
that area of the image is pure white and cannot contain any detail. Clipping
the darks means that part of the image is pure black and contains no detail.

CMOS - Complementary Metal Oxide Semiconductor. In astrophotography, a type of sensor in a camera.

Collimation - The act of aligning the optical components of a telescope to make sure all parts of an image combine correctly into one sharp image.
Coma - An optical defect normally present in reflector telescopes that can
cause point light sources such as stars to appear to be out of round,
presenting like they have the tail of a comet.

Coma corrector - An optical device for reflector telescopes to correct for coma aberrations.

Convolution – A mathematical method of multiplying arrays of numbers to get a third array of numbers. Used in image processing to stretch or resize images.

Corrector plate – The lens on the front of an SCT type telescope that
corrects for the spherical aberration created by the spherical mirrors used in
that design.

Counterweight – A weight, usually on an equatorial mount, used to balance the weight of the telescope and associated hardware.

Crayford focuser – A telescope focuser that uses smooth bearings and rollers as opposed to the gears used in the rack and pinion style. They usually come in dual speed (coarse and fine adjustments) and can have adjustable tension.

CRW/CR2 - Canon’s RAW image format.

Dark frame - An image taken at the same ISO, shutter speed and
temperature as the light frames but with the lens cap/scope cap on, or the
shutter closed. This is used to detect the thermal signature of the camera’s
sensor at these settings so they can be subtracted from your final image.

Dead pixel - Opposite of a hot pixel, a pixel that is stuck in the off position
and registers as black regardless of the amount of light applied.

Declination (DEC) – Celestial coordinate measured north and south of the celestial equator, from +90 degrees at the north to -90 degrees at the south, zero being the celestial equator.

Deconvolution - A method of image enhancement that corrects for the bad effects of convolution. This can substantially increase fine details in an image.

Dew heater - Usually a strip that heats up and is wrapped around a telescope near the optics. This warms the optics and prevents dew from forming.

Dew shield - A device attached to the end of a telescope, like a hollow extension of the telescope tube. It delays the objective from collecting dew and reduces the intake of extraneous light.

Diagonal - A device that has a mirror inside and reflects the image at a 45
degree or 90 degree angle for easier viewing. One side goes into the
focuser, the other end holds an eyepiece.

Diffraction - As light passes through a telescope it passes through openings; as light gets near the edges of these openings it is diffracted. This causes stars to appear larger than they actually should.

Diffraction limited – Term used primarily by telescope manufacturers meaning that the telescope should perform so that any defect seen will be due to the physical characteristics of light and not optical problems with the telescope.

Dispersion – Cause of chromatic aberrations. The prism effect, where white light is spread out into its spectrum.

Dobsonian - A type of telescope mount, but usually used as a reference to the entire telescope assembly. These are usually larger Newtonians mounted onto a base that sits on the ground and moves as an alt/az. Like regular Newtonians these are not well suited to astrophotography due to not having enough backfocus.

Doublet – A refractor telescope with two objective lenses.

Dovetail - A metal rail that attaches to the bottom of the telescope, usually
by rings that clamp into the telescope tubes or bolts into the bottom of the
telescope, which can then be quickly and easily attached to the mount’s
clamp. Popular dovetail types include Vixen and Losmandy.
DSLR - Digital Single Lens Reflex camera. A type of camera where the
user actually looks at the same image that will be recorded on the sensor by
means of a mirror and prism that reflects the light from the lens through an
eyepiece. When the shutter is opened to take the picture the mirror swings
out of the way, the eyepiece goes black as it is no longer receiving the
reflected image, and the sensor is exposed.

DSS – Short for Deep Sky Stacker, very popular free program generally
used by beginning astrophotographers for stacking images.

Dynamic range - The range from brightest to darkest that a camera can
record.

ED – Extra low Dispersion, optical glass corrected for chromatic aberration.

EQ/Equatorial Mount - A type of mount specifically designed to track the stars as they travel around the earth, compensating perfectly for their arc in the sky.

Ephemeris – Detailed information about planets, their moons, comets and asteroids.

Eyepiece - An optical device that focuses the light exiting a telescope tube
in such a way that you can view it with your eye. These typically contain
many lens elements in a round cylinder that is inserted into the focuser. The
eyepiece can be made to magnify or reduce the image size.

Eyepiece projection - A method of taking a photograph through the eyepiece of a telescope without a lens on your camera. This uses a specific adapter. This can come in handy on telescopes that cannot reach focus using a prime focus adapter.

F-Stop - When using a camera with its lens installed, the aperture is
adjustable and is commonly referred to as the F-Stop.

Field flattener - An optical device used primarily on refractors to make sure that the image arrives at the camera sensor perfectly flat. This prevents elliptical stars in the corners of the images while the stars in the center may be perfectly round.

Field of view - Commonly represented as FOV. The area of the sky that you
can see at one time. Longer focal lengths (more magnification) generally
show smaller areas of the sky and hence a smaller field of view. Eyepieces
with smaller numbers cause the same effect.

Field rotation – The blurring of an image from the rotation of the sky. It happens when you use an Alt/Az mount to take long exposures, since the Alt/Az mount does not rotate the camera like an EQ mount does.

Filters - Typically a piece of glass (or Mylar in some solar filters) that alters
the light coming through the telescope before the eyepiece or camera. These
are used for removing light pollution, enhancing certain colors, shooting
color images with a monochrome camera and many other tasks.

Finder - A small telescope or other pointing device that helps you quickly
orient your telescope towards a particular target. Similar to a gun sight.

Firmware - The software a device uses to tell it what to do. For example,
your GoTo telescope software in the hand controller is called its firmware
and can be updated on many devices.

FITS format - A file format designated by .FIT (such as .TIF, .GIF or .JPG) specifically designed for scientific purposes. Like RAW or TIF files this stores raw data that does not degrade from repeated editing as do formats such as .GIF or .JPG.

Flats/Flat frame - An image taken with even illumination over the front of
the telescope and exposed to present a neutral gray image. This must be
taken with the exact same setup as your light frames (same focus setting,
same filters, etc) and is used to remove vignetting.

Focal length - The length of a line following where the light travels
through a telescope, this is important for calculating parameters such as the
FOV and magnification.
Focal plane – An inferred plane at the point where the image from the
telescope comes to focus. A camera’s sensor is mounted so that it is at the
focal plane.

Focal ratio (FR) – The focal length divided by the aperture of the primary
objective of the telescope.
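That definition is a one-line calculation:

```python
# Focal ratio = focal length / aperture (both in the same units).
def focal_ratio(focal_length_mm, aperture_mm):
    return focal_length_mm / aperture_mm

print(focal_ratio(800, 200))  # an 800 mm scope with a 200 mm aperture is f/4
```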

Focal reducer - An optical device which reduces the effective focal length
and increases the field of view of a telescope, seemingly reducing the
magnification. This is usually mounted into the focuser before any
eyepieces or cameras.

Focuser - A piece of equipment mounted on the telescope where the light exits. Eyepieces, diagonals, barlows and cameras are mounted into the focuser. Its job is to move the eyepiece/camera/etc. back and forth until the light comes into focus at a specific point (your eye or the camera sensor).

FOV – See field of view.

Frames per second (FPS) – The number of image frames captured per
second by the device, used in video capture devices.

Full well capacity - A measurement of the total amount of light a photosite can store before saturation occurs.

FWHM – Full Width Half Maximum. The measurement of the apparent angular size of a star, usually used to get the size as small as possible in an image, which represents the best possible focus.

Gain - This is a multiplication of the incoming signal. For example, if one photon enters a camera and hits the sensor, setting the gain to 2x will cause the digital signal sent from the camera sensor to say that two photons hit the sensor. Increasing the ISO of a digital camera is increasing the gain.

German equatorial mount (GEM) – Another name for the equatorial mount.
GoTo – A telescope that when properly aligned can point to a celestial
object automatically when selected from a catalog or menu.

GPS – Global positioning system, a device or feature used to determine your exact location on the planet.

Grayscale - An image recorded in black, white, and variations of gray with no color information.

Guiding - The act of following a star or other object using either manual
corrections (as was the case back before GoTo and tracking mounts) or
automatically using guiding equipment such as an autoguider.

Hand controller (HC) – The handheld device used to control your telescope’s mount.

HDR - High Dynamic Range. You can use different exposures on different
images and sandwich them together to show an image that has too much
dynamic range to be captured in one single exposure. M42 is a prime
example of a target that needs HDR processing: if you expose correctly for
the faint dust lanes on the outer areas, the central core is blown out or
clipped; if you expose for the central core, the outer dust lanes are clipped
into blackness and cannot be seen.

Highlights - Areas of maximum brightness in an image.

Histogram - A graph that shows how an image is exposed. In a normal grayscale histogram the left side is absolute black, the right side is absolute white, and there is usually a hump in the graph display somewhere near the center showing the exposure of that image. Color works the same way but shows the intensity of the red, blue and green color channels.

Hot pixel - Opposite of a dead pixel. A pixel that shows exposure information even when shot in complete darkness.

Illuminated reticle eyepiece – An eyepiece with an illuminated crosshair or other centering marker used for precise centering of targets in the field of view.
ISO - International Organization for Standardization; used to measure the “speed” of film, or the sensitivity of a sensor in a digital camera. As ISO increases, less light is required to “expose” a given image. This also reduces the signal to noise ratio, increases noise, and reduces the bit depth possible in the image.

JPG/JPEG - Joint Photographic Experts Group. A file format denoted by .JPG (such as .TIF or .GIF) that is very common in digital cameras. Using this format should be avoided because it uses a lossy compression format to reduce file size. This results in huge losses of information and makes it virtually impossible to process well for astronomical uses.

Light frame - A standard picture. Every regular picture you have taken
with a regular camera of birthdays, friends and family are all what we call
light frames. These are the frames you work with that contain your image
data.

Light pollution – Stray light from street lights, signs, windows, etc. that shines or is reflected up into the air. It is scattered by contaminants and humidity in the air and creates a glow effect around cities, making it difficult to see outside the atmosphere.

Light year – The distance light travels in a year through a vacuum, approximately 5.87 trillion miles.
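You can verify that figure with quick arithmetic: the speed of light in miles per second times the number of seconds in a (Julian) year:

```python
# Distance light travels in one Julian year, in miles.
SPEED_OF_LIGHT_MI_S = 186_282.397       # miles per second
SECONDS_PER_YEAR = 365.25 * 24 * 3600   # Julian year in seconds

light_year_miles = SPEED_OF_LIGHT_MI_S * SECONDS_PER_YEAR
print(f"{light_year_miles:.3e} miles")  # on the order of 5.88 trillion
```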

Limiting magnitude – The measurement of the dimmest star you can see at
zenith which takes into consideration all parameters such as light pollution,
weather conditions and optical devices used (if any).

Lossless compression - Certain file formats such as PSD and TIF employ
compression methods that preserve 100% of the data while decreasing the
file size.

Lossy compression - Formats such as .GIF and .JPG use lossy compression
which throws away data that it does not think is needed to display the
image.
LRGB - When shooting a monochrome camera and creating a color image
you need to shoot at least one image with a red filter, one image with a
green filter and one image with a blue filter and then combine them together
into one color image. The L in LRGB stands for luminance and is used to
increase detail in an image. The Luminance frame is the detail frame and
can be shot in very high resolution, then the color can be shot at lower
resolutions and combined with the luminance to create a high resolution
color image. You can use this idea to increase your ability to stretch images
as well.

Luminance – The recording of brightness or intensity of light. Typically this is the high resolution/detailed portion of an image.

Magnitude - A measurement of the brightness of an object. A difference of one magnitude is a factor of approximately 2.5 in brightness. The lower the number on the scale, the brighter the object.
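The exact relationship is that five magnitudes correspond to a factor of exactly 100 in brightness, so one magnitude is 100^(1/5), about 2.512. In code:

```python
# Five magnitudes = exactly a factor of 100 in brightness,
# so one magnitude is 100 ** (1 / 5), about 2.512.
def brightness_ratio(m1, m2):
    """How many times brighter a magnitude-m1 object is than a magnitude-m2 one."""
    return 100 ** ((m2 - m1) / 5)

print(brightness_ratio(1.0, 6.0))  # exactly 100.0
print(brightness_ratio(1.0, 2.0))  # about 2.512
```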

Maksutov Cassegrain telescope – See MCT below.

Maksutov Newtonian – Similar to a Maksutov Cassegrain except they are designed in a Newtonian configuration with the focuser near the front of the scope.

MCT - Maksutov Cassegrain Telescope, a type of telescope that has a sealed front end which is actually a corrector lens called a meniscus, two mirrors, and has its eyepiece in the rear.

Megapixel - Roughly one million pixels.

Meridian - An imaginary line dividing the west and east halves of the sky
running from the north celestial pole to the south celestial pole.

Meridian flip - On equatorial mounts you need to change the orientation of the scope once it tracks to the meridian. This “flips” the scope around to point the other direction at roughly the same spot on the meridian. Going past the meridian without flipping can cause the scope to run into the mount, cables to come loose, and many other really bad things.
Micron – One millionth of a meter or 0.001mm.

Mirror cell – The frame that holds the primary mirror assembly.

Mirror lock (DSLR) – Some cameras have the ability to lock the mirror in
the up position to minimize camera vibration when the shutter is tripped.
This can be very useful shooting brighter objects like the moon but is
ignored in long exposure work as the amount of time the camera is
vibrating due to the mirror slamming open is miniscule compared to the
overall exposure time.

Mirror lock (SCT) – Some SCT type telescopes have the ability to lock the
mirror once the image is in focus to prevent the mirror from “flopping” or
moving as the orientation of the telescope changes.

Monochrome – Technically means one color, meaning either black or white. “Monochrome” cameras are actually grayscale in that they produce black, white and many different shades of gray.

Mosaic - The act of shooting multiple images in a grid pattern and stitching
them together to allow you to shoot a larger field of view than you could
normally.

Mount - The mount is the geared (and sometimes motorized) device that is
typically attached to the top of a tripod and then has the telescope attached
to it. It is the mount that allows you to point the telescope at different
objects without moving the tripod, and (when motorized) tracks objects
across the sky.

Narrowband - Using special filters you can capture the emissions from
certain gasses such as hydrogen alpha, sulfur and oxygen. These can be
used much like LRGB imaging to create faux color images of high
resolution. This method can also overcome all but the worst light pollution
situations and can even allow you to shoot on nights with a full moon to
some degree.

Near Earth Object (NEO) – An object such as a comet or asteroid which will pass in close proximity to earth.
Newtonian - A type of reflector telescope that has two mirrors in a hollow
tube. The front of the telescope is open to the elements and the back is
sealed. The eyepiece is near the front of the scope. These are usually not
suitable for astrophotography unless they are designed as an “astrograph” as
they will not bring a camera to focus without modifications or the use of a
Barlow.

North celestial pole (NCP) – The point in space very close to Polaris
where a line drawn from the exact southern to northern poles would extend
into space with the earth revolving around that line.

Nyquist theory - States that when converting frequencies, the sampling rate
should be 2x the highest frequency to get an accurate conversion and
preserve all the data.
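One common astrophotography application of this idea is choosing a pixel scale around half the seeing. The sketch below uses the standard pixel-scale formula (206.265 × pixel size in microns / focal length in mm); the formula and example numbers are illustrative additions, not from the text above:

```python
# Nyquist in practice: sample at roughly twice the finest detail the
# seeing allows. Pixel scale uses the standard formula
# 206.265 * pixel_size_um / focal_length_mm (arcsec per pixel).
def pixel_scale(pixel_size_um, focal_length_mm):
    """Image scale in arc seconds per pixel."""
    return 206.265 * pixel_size_um / focal_length_mm

seeing_arcsec = 4.0
target_scale = seeing_arcsec / 2      # Nyquist-sampled target, arcsec/pixel
print(pixel_scale(5.2, 536))          # 5.2 um pixels at 536 mm: ~2 arcsec/px
```

If the seeing blurs stars to about 4 arc seconds, a camera/scope combination near 2 arc seconds per pixel captures everything the sky is delivering without wasting resolution.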

Objective lens – Also called the primary objective, the large front lens of a
refractor telescope.

Off axis guider (OAG) – A method of mounting a guide camera so that it shares most of the same optical path as the imager, picking off a small amount of light, usually from a mirror mounted in the light path.

One shot color (OSC) - Any camera that creates a color image from a
single exposure.

Opposition – When a planet is closest to the earth and is directly on the other side of earth from the sun.

Optical train - Anything that is directly in the path of light from the stars to your eye or camera sensor is considered “in the optical train”. It could also be called the optical path.

Optical tube assembly (OTA) – Also referred to as the OTA, this is the
main tube of the telescope not including any mount, pedestal, pier or tripod.

Parfocal – Applies to both eyepieces and filters and means that if you
exchange one filter (or eyepiece) for another, you will remain in nearly
perfect focus. Not all filter sets or eyepiece sets are parfocal.

Periodic error (PE) - Errors in the manufacturing of the gears and drive assembly in an EQ telescope mount result in repeating errors in the tracking of the mount. These can be corrected with software that contains PEC code.

PEC - Periodic Error Correction. Software that corrects for periodic error.

Photometry – The measurement of the apparent magnitude of objects such as comets, asteroids and stars.

Photon - For the purposes of discussion in this book, a photon is a single particle of light.

Photosite - The technical name for the tiny part of the sensor in a digital
camera sensor that when exposed to light records a signal. Typically called
a pixel.

Piggyback - Mounting a camera with a lens on a telescope in such a way that it is not shooting through the telescope but is instead just using it as a tracking mount.

Pixel - A single dot in an image.

Pixel size - The physical size of a photosite on the sensor of a camera, measured in microns.

Plate solve – Refers to Plate Solution, or finding the absolute position and
motion of an object. Some applications such as TheSkyX Professional offer
a plate solve feature where it can look at your image and tell you exactly
what is in the frame.

Point light source - Stars are considered point light sources because they are so far away that, regardless of magnification, they will always appear as a single point of light.

Polar alignment – Aligning the “polar axis” of an equatorial mount to either the northern or southern celestial pole so that the mount can track celestial objects precisely.

Polar scope - A small telescope usually built into the mount which allows
for precise pointing of the mount’s right ascension axis to the north or south
celestial pole.

Prime focus - Attaching a camera without a lens in such a way that the
image from the telescope is directly projected onto the sensor of the camera.

Quantum efficiency (QE) - A measurement of the percentage of photons hitting a photosite that are actually detected.

Rack and pinion focuser – A less expensive and typically less accurate
style of focuser.

RAW - A RAW file is a file that contains the relatively unaltered, unmodified data directly from the camera’s sensor.

Rayleigh scattering – The scattering of different wavelengths of light by the molecules in the atmosphere.

Resolving power – The resolving power of the telescope in arc-seconds = 4.56 / (inches of aperture). Note that this does not take into consideration obstructions such as secondary mirrors.
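
The formula can be applied directly; a quick sketch (the 8-inch aperture is just an example value):

```python
def resolving_power_arcsec(aperture_inches):
    """Resolving power in arc-seconds = 4.56 / aperture in inches,
    per the formula above; ignores central obstructions."""
    return 4.56 / aperture_inches

# An 8-inch aperture resolves down to about 0.57 arc-seconds:
print(round(resolving_power_arcsec(8), 2))  # 0.57
```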

Reticle - Crosshairs or other markings that allow you to precisely center a target in your field of view. Sometimes included inside eyepieces and finder scopes.

Red dot finder - A type of finder that uses an illuminated red dot as a
reticle.

Refractor - A type of telescope that has an objective lens on the front end
and an eyepiece or camera at the other. Light passes straight through
without being reflected unless a diagonal is used.

RGB - Red, Green, Blue. One shot color cameras shoot everything as a
combination of these three primary colors. When shooting monochrome
images and wanting to end up with a color image, you shoot at least one
frame with a red filter, one with a green, and one with a blue and then
combine them to create a full color image.

Right ascension (RA) – Celestial coordinate measured from west to east in hours, minutes and seconds. As the earth turns, 15 degrees of arc pass each hour.
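
Since 15 degrees pass per hour, an RA coordinate converts to degrees of arc as follows (a small sketch; the example coordinate is arbitrary):

```python
def ra_to_degrees(hours, minutes, seconds):
    """Convert right ascension in (h, m, s) to degrees of arc:
    the earth turns 15 degrees per hour of RA."""
    return (hours + minutes / 60.0 + seconds / 3600.0) * 15.0

# Example: RA 6h 00m 00s is a quarter turn around the sky, 90 degrees:
print(ra_to_degrees(6, 0, 0))  # 90.0
```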

Saturation – The point at which you cannot record any more data. This
may refer to the full well capacity of a CCD camera or the maximum value
a pixel can store.

Schmidt Cassegrain Telescope (SCT) - A type of reflector that has a sealed front and two mirrors, and has its eyepiece in the rear of the scope.

Seeing - A measurement of the conditions of the atmosphere as it relates to being able to view or image an astronomical object. An easy method to determine the seeing conditions is to look for stars twinkling; the more they twinkle, the worse the seeing.

Sidereal rate – One sidereal day is 23 hours, 56 minutes and 4 seconds, which is why the stars are never in exactly the same place at exactly the same time every night and seem to “advance” across the night sky all year long. This is the rate at which your telescope must track to remain aligned with your target.
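
The roughly four-minute difference between the solar and sidereal day, and the resulting tracking rate, can be computed directly (a minimal sketch):

```python
solar_day_s = 24 * 3600                    # 86400 seconds
sidereal_day_s = 23 * 3600 + 56 * 60 + 4   # 86164 seconds

# Stars return to the same position ~236 s (about 3m56s) earlier each night:
drift_s = solar_day_s - sidereal_day_s
print(drift_s)  # 236

# Sidereal tracking rate in arc-seconds of sky per second of time:
rate = 360 * 3600 / sidereal_day_s
print(round(rate, 3))  # 15.041
```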

Signal to noise ratio (SNR) - The ratio of signal (what you are trying to
capture in the image) to noise (electrical signals inherent to the camera
generating the image). The higher the SNR, the easier it is to stretch an
image and bring out the detail of your target.

Slew – The process of the telescope moving from one target to another.

South celestial pole (SCP) – The point in space, very close to Sigma Octantis, where a line drawn from the earth’s exact northern pole through its southern pole would extend into space; the earth rotates around that line.

Spider vanes - Small strips of metal or plastic in the front of a Newtonian telescope which support the secondary mirror in the optical path.

Stacking - Taking several images and combining them in such a way as to
increase the signal that you want to keep while reducing the noise levels
that you do not.
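
The benefit of stacking can be seen numerically: averaging N frames whose noise is uncorrelated shrinks the noise by roughly the square root of N. A toy simulation (all numbers invented for illustration) sketches this:

```python
import random
import statistics

# Toy model: each frame is a fixed signal plus Gaussian noise.
random.seed(1)
signal, noise_sigma, n_frames = 100.0, 10.0, 100

def stacked_frame():
    """Average n_frames noisy frames into one stacked result."""
    frames = [signal + random.gauss(0, noise_sigma) for _ in range(n_frames)]
    return statistics.mean(frames)

# The scatter of many stacked results is roughly
# noise_sigma / sqrt(n_frames) = 10 / 10 = ~1.0, a 10x noise reduction.
scatter = statistics.stdev(stacked_frame() for _ in range(300))
print(round(scatter, 1))
```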

Strehl ratio – Gives a ratio as compared to a theoretically perfect optical system. For example, a Strehl ratio of 0.90 is 90% as good as a theoretically perfect optical system.

Stretching - Taking an image and manipulating the data so that details that were too dark to see become bright enough to be visible, through compression of the grayscale or color scale.

T-Ring – Connects to your camera as though it were a lens and converts to a standard T-Thread, which can be screwed directly to some focusers or to a separate snout that slides into a focuser.

Thermo Electric Cooler (TEC) – Electric cooling device used with some
CCD and DSLR cameras.

TIF - A file type (like .GIF and .JPG) for storing image files. TIFs are excellent because they are a lossless format. They are, however, far larger than JPGs or GIFs.

Tracking - The ability to follow an object as it appears to travel across the sky.

TSX - Abbreviation for TheSkyX, a planetarium, telescope control and planning application for amateur and professional use from Software Bisque Inc.

United States Naval Observatory (USNO) – The standard for timekeeping in the United States.

Vignetting - The effect of the edges of an image being darker than the
center due to obstructions or optical imperfections.

Well depth - A measurement of the total amount of light a photosite can store before saturation occurs.

White point - A part of an image that represents pure white.

Zenith – The point directly overhead.

Ziegenfield effect - A total nonsense phrase I made up just to see if you would actually read it.
