Presented here is a description of a program for time-series photometry based on the techniques of optimal extraction. The program is fed a list of images to reduce, then the user indicates the program star, up to ten comparison stars, and a PSF model star. PPX then finds stars in each image, determines object matching from one image to the next (shift, scale, and rotation) using similar triangles, then uses the image mapping coefficients to locate and measure, using optimal weighting, the brightness of each comparison and target star in each image. An output table containing the resulting photometric measurements along with image mapping coefficients and PSF shape parameters is written to disk as a CSV file in preparation for further processing in Microsoft Excel and plotting using gnuplot. PPX is written in the IDL language.
Introduction
Since the mid-1980s astronomers have employed the techniques of optimal weighting to reduce spectroscopic data from 2-dimensional CCD detectors, assuring the maximum signal-to-noise ratio (S/N) in the resulting 1-dimensional spectral traces (see Horne, K. PASP 98:609-619, June 1986). The program described here, “PPX”, uses the same techniques but in two dimensions (described in Naylor, T., MNRAS 296:339-346, 1998) to maximize the S/N for brightness estimates of faint stars in CCD images. In comparison with the aperture photometry software used by amateur astronomers the technique has the potential to produce significantly improved S/N, particularly in the sky-noise-limited cases so typical of home-bound amateur observatories. PPX was conceived, written, and tested here at the Beverly Hills Observatory (BHO) over the past year or so, during which time many “tweaks” and improvements have been made. This article describes its use, demonstrating its strengths and weaknesses.
Some Background
One of the two telescopes here at Beverly Hills Observatory (BHO) is used almost exclusively for time-series differential photometry of cataclysmic variable stars, mostly in cooperation with the Center for Backyard Astrophysics (CBA). As the observatory is located only four miles from the center of downtown Baltimore, MD, I am always limited by severe light pollution. One way to mitigate the additional noise a bright sky contributes to aperture photometry measurements is to keep the central or “star” aperture as small as practical. It is well known, and can be easily demonstrated, that the optimal aperture diameter for differential aperture photometry is something like 1.4 times the full-width at half-maximum (FWHM) of the point spread function (PSF) in an image (see Figure 1). In differential photometry we’re not trying to measure the absolute flux of the star, only its flux relative to other stars in the same image. The assumption is made that the shape of the PSF is invariant within a single image, so an aperture of any given size encompasses the same percentage of light for any and all stars within that image. As we’ll see later, that is a somewhat dangerous assumption!
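As a back-of-the-envelope check of that factor (this is my derivation, assuming a circular Gaussian PSF of width σ and purely sky-limited noise): the signal enclosed within radius R grows as 1 − exp(−R²/2σ²), while the sky noise grows as the square root of the aperture area, i.e. as R, so

    S/N(R) ∝ [1 − exp(−R²/2σ²)] / R

Setting the derivative to zero gives exp(t/2) = 1 + t with t = (R/σ)², whose numerical solution is t ≈ 2.51, or R ≈ 1.59σ. The optimal diameter is then 2R ≈ 3.17σ ≈ 1.35 × FWHM (using FWHM = 2.355σ), in line with the factor quoted above.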
Basically there is a trade-off when it comes to choosing an aperture size. If one uses a larger aperture the constraints on positional accuracy (the centroid of the star’s X,Y position in the frame) are relaxed, but more low signal-to-noise (S/N) pixels in the outer part of the PSF are included. If one uses a very small aperture it is essential that the centroid of each measured star be as accurate as possible so that the measurement aperture includes exactly the same part of each star’s PSF. In the small-aperture case one must also be very careful as to how flux in pixels only partially included inside the aperture is determined. In the end these two considerations lead to optimal aperture sizes which are somewhat larger than what Figure 1 might suggest.
In practice what I’ve done is to reduce a night’s data several times, each time varying the size of the photometry aperture until the standard deviation, or scatter, in the differential light curve for two comparison stars is minimized. Since the optimal aperture is also mildly a function of the brightness of the stars being measured, there just isn’t a one-size-fits-all solution.
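In IDL the scan looks something like the sketch below; measure_dmag is a hypothetical stand-in for whatever photometry routine produces the Comp2 − Comp1 differential magnitudes for a given aperture radius:

    ; Scan trial aperture radii and keep the one that minimizes the
    ; scatter in the comparison-star differential magnitudes.
    ; (measure_dmag is hypothetical; radii are in units of the FWHM.)
    radii   = 0.8 + 0.2*findgen(12)
    scatter = fltarr(n_elements(radii))
    for i = 0, n_elements(radii)-1 do begin
       dmag = measure_dmag(image_list, radii[i])   ; one DMag per image
       scatter[i] = stddev(dmag)
    endfor
    best = min(scatter, ibest)
    print, 'Best aperture radius: ', radii[ibest], ' x FWHM'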
MIRA
My solution to all the considerations discussed above was to seek out a photometry software package that had excellent sky determination, properly handled partial-pixel integration, and had the very best image centroid determination possible. Packages that I tried included Maxim DL, AIP4WIN, MPO Canopus, and DAOPHOT (I had IRAF running under Cygwin, and that has an older implementation of DAOPHOT). The clear winner, however, was MIRA from Mirametrics.
The biggest advantage MIRA has over any of the other software is its precision centroiding methods and careful partial-pixel integration around the edges of the star measurement aperture. Of the other packages only DAOPHOT allows parameters (like the aperture size) to change from one image to the next, and that is really important for keeping the measurement aperture as small as possible to maximize signal-to-noise (again, Figure 1). Using the MiraPro scripting interface I was able to assemble a differential photometry reduction “pipeline” (written in Lua) that drew upon MIRA’s various classes and methods. The result was significantly improved S/N and reduced photometric scatter, as determined by performing differential photometry on pairs of standard stars.
Optimal Extraction and IDL
About five years ago I finally decided to plunk down the necessary chunk of cash to purchase a license to run the IDL programming language on my home computer. Virtually all of the programming I’d done while at the Space Telescope Science Institute (STScI) had been done in IDL, so it was nice to have a familiar language to work in. I wanted to automate some of the tasks in MIRA which had become tedious, particularly when reducing data sets that might include over 1000 images for a given evening. As mentioned, the aperture photometry in MIRA was very precise, but the user interface (UI) lacked some capabilities I really needed, like the ability to follow a set of stars through a large image stack without losing the stars if their positions changed by more than a few dozen pixels, or through a meridian flip where the images are rotated by 180 degrees. MIRA also had a really nice source extraction and photometry package called MExtractor which I wanted to make greater use of. Unfortunately MExtractor is a stand-alone program that only works inside the UI – so there were no “hooks” or exposed classes and methods I could use to try to automate the input/output, and certainly no way to force it to follow a particular set of stars throughout an image sequence. It just finds all the stars, measures them, and outputs the results.
So the first thing I wrote in IDL was a similar-triangles star-matching routine which was able to match stars in one image with the same stars in another image despite the second image being shifted, magnified, and rotated relative to the first. Called “MATCH_STARS”, it reads the MExtractor output files and outputs the image mapping coefficients. I then wrote a Lua script which read the MATCH_STARS output, used the image mapping coefficients to determine star positions in each image, then called upon MIRA’s photometry and centroiding tasks to perform the actual photometry using image-specific aperture sizes and shapes (elliptical apertures). The resulting photometry was significantly better than that from any of the other software I had tried.
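The core idea is easy to sketch in IDL. This is a toy version (not the production MATCH_STARS, and the names are mine): each triangle of stars is reduced to a pair of side ratios, which are invariant under shift, rotation, and uniform scaling, and matching triangles then “vote” for the star-to-star pairings they imply.

    function tri_canon, x, y, t
       ; Side lengths opposite each vertex of triangle t = [i,j,k];
       ; ordering vertices by opposite-side length makes the vertex
       ; correspondence between two similar triangles explicit.
       opp = [ sqrt((x[t[1]]-x[t[2]])^2 + (y[t[1]]-y[t[2]])^2), $
               sqrt((x[t[2]]-x[t[0]])^2 + (y[t[2]]-y[t[0]])^2), $
               sqrt((x[t[0]]-x[t[1]])^2 + (y[t[0]]-y[t[1]])^2) ]
       ord = reverse(sort(opp))             ; longest opposite side first
       s = opp[ord]
       return, {inv: [s[1]/s[0], s[2]/s[0]], v: t[ord]}
    end

    pro match_sketch, x1, y1, x2, y2, pairs, tol=tol
       ; For speed the inputs should hold only the ~10 brightest,
       ; well-separated stars; near-degenerate (thin) triangles will
       ; produce some false votes, which the voting absorbs.
       if n_elements(tol) eq 0 then tol = 0.01
       n1 = n_elements(x1)  &  n2 = n_elements(x2)
       votes = lonarr(n1, n2)
       for a=0,n1-3 do for b=a+1,n1-2 do for c=b+1,n1-1 do begin
          t1 = tri_canon(x1, y1, [a,b,c])
          for p=0,n2-3 do for q=p+1,n2-2 do for r=q+1,n2-1 do begin
             t2 = tri_canon(x2, y2, [p,q,r])
             if max(abs(t1.inv - t2.inv)) lt tol then $
                for m=0,2 do votes[t1.v[m], t2.v[m]] += 1
          endfor
       endfor
       pairs = lonarr(2, n1)                ; best column in each row wins
       for i=0,n1-1 do begin
          mx = max(votes[i,*], j)
          pairs[*,i] = [i, j]
       endfor
    end

With the pairings in hand, a least-squares fit of a shift/rotation/scale transform to the matched positions yields the image mapping coefficients.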
At about that same time I read Tim Naylor’s paper about optimal extraction or variance-weighted aperture photometry. MATCH_STARS thus became the first major chunk of code for what would become PPX.
PPX: How It All Works
There are many steps to the procedure, almost all of which happen “under the hood”. This is a synopsis of each step of the process without going too far into the details. Because I wanted to run this program on my laptop “in the field”, and because licenses for IDL are so expensive, I had to “deploy” the program using the IDL Virtual Machine. The IDL VM allows IDL programs to run on machines which have no IDL development license. The catch is that you then have no access to IDL’s command line, so everything has to be done using widget interfaces.
1) Program Startup – Input Parameters
When the user clicks on the desktop icon the first widget he/she encounters is a dialog that allows input of a number of program input parameters. The default values, shown in Figure 2, are ones that seem to work well with my particular telescope/camera combination.
2) Indicate the “Stars of Interest”
The “stars of interest”, or SOI, include the PSF model star, the variable star, and up to ten comparison stars. The user indicates all of the images to be measured and the first image from the list is loaded into a star selection GUI, shown below. There the user indicates each star with the image cursor. The user can also adjust the inner and outer sky measurement annuli. If the sky annulus for any one of the stars includes other objects that might bias the sky estimation the user can indicate another nearby region to use for the sky determination for that object.
3) Find
Once the user exits the star selection GUI the program runs without further input. The first thing it does is find all of the stars in each image: an IDL version of DAOPHOT’s “FIND” task is used to detect and determine approximate locations for all the stars in each image, and DAOPHOT’s “APER” task provides approximate fluxes and robust sky estimates.
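The two calls (from the IDL Astronomy User’s Library) look roughly like this; the numerical values are illustrative, not the PPX defaults:

    ; Detect stars, then get quick fluxes and robust sky values.
    hmin = 100.0                       ; detection threshold, ADU above sky
    fwhm = 4.0                         ; expected stellar FWHM, pixels
    find, image, x, y, flux, sharp, rnd, hmin, fwhm, $
          [-1.0, 1.0], [0.2, 1.0]      ; roundness and sharpness limits
    phpadu = 1.5                       ; camera gain, e-/ADU
    aper, image, x, y, mags, errap, sky, skyerr, phpadu, 1.4*fwhm, $
          [4.0, 6.0]*fwhm, [0, 60000], /SILENT   ; sky annulus, bad-pixel range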
4) Filter Star Arrays
The stars found in each succeeding image will be used to determine image mapping coefficients that transform star positions from the first image into the “space” of each succeeding image. The MATCH_STARS routine, mentioned earlier, performs the necessary calculations. But before the star arrays are sent to MATCH_STARS they are cleaned of objects that might lead MATCH_STARS astray, such as close doubles, faint stars, saturated stars, and stars too close to the image edges.
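A sketch of that pruning might look like the following (peak, sat_level, and flux_min are hypothetical names, and the thresholds are illustrative):

    ; Keep only well-exposed, isolated stars away from the frame edges.
    dims = size(image, /dimensions)
    edge = 2.0*fwhm
    keep = where(x gt edge and x lt dims[0]-1-edge and $
                 y gt edge and y lt dims[1]-1-edge and $
                 peak lt 0.8*sat_level and flux gt flux_min, nkeep)
    good = replicate(1B, nkeep)
    for i = 0, nkeep-1 do begin                    ; drop close doubles
       d = sqrt((x[keep]-x[keep[i]])^2 + (y[keep]-y[keep[i]])^2)
       if total(d lt 3.0*fwhm) gt 1 then good[i] = 0B   ; count includes self
    endfor
    keep = keep[where(good)]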
5) Improve x,y Positions
Throughout PPX liberal use is made of Craig Markwardt’s “MPFIT2DPEAK” routine to fit 2-D elliptical Gaussian functions to the stars. The first use is to improve the positions determined by DAOPHOT FIND. It proceeds in two steps: the first has the sky value fixed to that found by DAOPHOT APER. The second iteration uses the output of the first, along with the same sky value (the fit is really not likely to improve on that), to fix all parameters except xc and yc. These then become the positions fed to MATCH_STARS.
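A condensed sketch of how that two-pass fit can be set up, assuming PARINFO is forwarded to MPFIT as in Markwardt’s documentation (sub is a small subarray around the star, xg and yg its column and row coordinate vectors, x0 and y0 the FIND position, and skyval the APER sky):

    ; Pass 1: fit everything except the sky, which is held at the APER
    ; value; parameter order is [sky, peak, axis1, axis2, xc, yc, tilt].
    est = [skyval, max(sub)-skyval, fwhm/2.355, fwhm/2.355, x0, y0, 0.0]
    parinfo = replicate({fixed: 0}, 7)
    parinfo[0].fixed = 1
    fit1 = mpfit2dpeak(sub, a1, xg, yg, /tilt, estimates=est, parinfo=parinfo)
    ; Pass 2: freeze everything except the centroid (a[4]=xc, a[5]=yc).
    parinfo.fixed = 1
    parinfo[4].fixed = 0  &  parinfo[5].fixed = 0
    fit2 = mpfit2dpeak(sub, a2, xg, yg, /tilt, estimates=a1, parinfo=parinfo)
    xc = a2[4]  &  yc = a2[5]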
6) MATCH_STARS – Then Determine PSF Shape
MATCH_STARS then determines the image mapping coefficients. With the exception of the first (or template) image we don’t know where the PSF model star is located, so the position coefficients are used to locate it in each image. Another two-step fitting process using MPFIT2DPEAK is employed, this time using the output from the first step to build a weighting mask for the second iteration, where all parameters except the PSF shape parameters (axis1, axis2, and tilt) are kept fixed.
7) Refine Location of Stars in the Template Image
Once the image mapping parameters are determined for all of the images, those parameters are used to map the positions of the Stars of Interest (SOI) from each image back into the space of the template image; basically, the shift/rotation matrix is inverted for this purpose. The mean of the median half of the x,y positions so mapped becomes the refined position for each SOI in the template image. Once this is complete there is no further fitting – the weighting masks are placed at the refined template-image locations, mapped into each subsequent image using the image mapping coefficients without further refinement. The idea is to avoid chasing bright pixels (position errors are biased towards brighter pixels), which would bias the estimated magnitudes of the fainter stars towards brighter values.
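For one star in one image, the back-mapping and the “mean of the median half” estimate can be sketched as follows (m00..m11 and tx, ty are hypothetical names for the mapping coefficients; xp, yp is the measured position):

    ; Invert the 2x2 scale/rotation part of the mapping explicitly.
    det = m00*m11 - m01*m10
    x0  = ( m11*(xp - tx) - m01*(yp - ty)) / det
    y0  = (-m10*(xp - tx) + m00*(yp - ty)) / det
    ; Collect x0 over all n images, then average the central half.
    s = x0_all[sort(x0_all)]
    n = n_elements(s)
    x_refined = mean(s[n/4 : n/4 + n/2 - 1])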
8) Build The Weighting Mask
This is the heart of Naylor’s technique. First, a 2-D Gaussian is fitted to the image of the variable star. There are seven parameters that describe the 2-D elliptical Gaussian:
- constant background
- peak level
- axis 1 size
- axis 2 size
- x-position
- y-position
- axis 1 tilt
For the fit to the variable star only the peak level is allowed to vary; all other parameters are kept fixed. The sky was determined by the DAOPHOT APER routine, the PSF shape parameters (axis1, axis2, and tilt) come from the fit to that image’s PSF model star, and the x,y position comes from the refined SOI positions mapped into the space of the image under consideration.
The Gaussian fitted to the variable star as described above is then used, along with the known variance of the sky in the vicinity of the variable, to calculate a 2-D variance mask. The total “volume” of the fitted Gaussian is normalized to equal 1.0, and the normalizing factor is recorded for use in the following step.
Next the program goes through the list of stars-of-interest, for each one building a model PSF based on the fit parameters found for that image’s PSF model star, but sampled on the x,y grid so as to reflect the x,y location of that particular star-of-interest. That PSF is also normalized to 1.0, then multiplied by the normalizing factor for the variable star’s PSF (see previous paragraph) and combined with the variable star’s sky variance to construct a star-specific 2-D variance “image”.
Finally the weighting mask is determined. First the normalized PSF for the star-of-interest is divided by its variance mask. The resulting quantity is then divided by the sum, over all pixels, of the square of the variable star’s PSF divided by its own variance mask. This results in a 2-D weighting mask centered on the location of the star-of-interest. The weighting mask is multiplied, pixel-by-pixel, by the actual image values of a subarray centered on the star-of-interest to determine the optimally-weighted flux measurement.
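In code the arithmetic just described is compact. In the sketch below psf is the normalized model PSF for the star-of-interest, psf_var and fnorm are the normalized variable-star Gaussian and its normalizing factor, sky_var the sky variance, and sub and skyval the image subarray and its sky level; the explicit gain term follows Naylor (1998) and may be handled differently inside PPX:

    ; Per-pixel variance models: sky variance plus a scaled Poisson term.
    var   = sky_var + (fnorm * psf) / gain
    var_v = sky_var + (fnorm * psf_var) / gain
    ; Optimal weight mask and the weighted flux estimate.
    w    = (psf / var) / total(psf_var^2 / var_v)
    flux = total(w * (sub - skyval))
    ; The same masks give the measurement error directly (next paragraph).
    err  = sqrt(total(w^2 * var))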
One of the really nice features of this technique is that, because the model PSFs are all normalized, each pixel’s intensity in the PSF model reflects the proportion of the total flux that should fall within that pixel. So each pixel can be used as an independent estimate of the total flux, and the weighting mask guarantees that we combine those estimates in a way that maximizes the resulting S/N. It also allows a simple and direct determination of the actual errors in the measurement.
9) Truncating the Weight Mask
Theoretically the weighting mask does not go to zero until R=infinity. But from a practical standpoint the product of the weighting mask times the image values becomes really small after about R=2.0 FWHM. So the final step in PPX is to apply a soft aperture that sets pixels beyond a user-defined radius to zero. In practice I’ve found that setting this parameter (called R_CUT) to about 2.8 or so seems to maximize the resulting S/N.
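Applying the cut is a one-liner once a radial-distance grid exists (DIST_CIRCLE is from the IDL Astronomy User’s Library; I’m assuming here that R_CUT is in the same FWHM units used in the discussion above):

    ; Zero the weight mask beyond R_CUT, then redo the weighted sum.
    dist_circle, rgrid, size(w, /dimensions), xc, yc
    w    = w * (rgrid le r_cut * fwhm)
    flux = total(w * (sub - skyval))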
Results
I’ve re-reduced numerous nights’ data and the improvement using PPX is clear. To quantify the differences I determined the standard deviation of DMag, where:
DMag = Mag_Comp2 – Mag_Comp1
Figure 4 shows some data from 2010 for the CV CR Bootis. One of the comparison stars used was very near the brightness of the variable star itself, which was quite faint at the time. Specifically, the peak of the PSF for the fainter comparison star was typically 600 ADU above a 6000 ADU sky level, a classic case of sky-noise-limited measurements. The standard deviations of DMag in this case were 0.051 for PPX and 0.065 for Maxim DL. That difference is a bit larger than I’d expect; Naylor shows in his paper that one should expect about an 11% improvement in S/N over straight aperture photometry. I suspect part of the difference is due to the fact that Maxim re-centers its photometry apertures for each star in each image. Especially with faint stars, that leads to the apertures chasing bright pixels caused by sky noise; given the small number of “real” star counts, a single errantly bright pixel can make a big difference. With PPX, once the “robust” positions of the stars in the template image are determined, their positions, following application of the image-to-image mapping coefficients, are not changed. Figure 5 shows the differences in the x-position of comparison stars 1 and 2 throughout the image stack. The scatter in the Maxim data is obviously much larger.
When PPX Goes Bad
The single most important assumption in the optimal extraction method is that the shape of the PSF is invariant across the image. But the fact that I use a standard Schmidt-Cassegrain telescope (SCT) means that assumption is most decidedly untrue in my case. Figure 6 is a plot showing the variation of measured FWHM as a function of x position for a sample image. The focal surface of an SCT is strongly curved, and even over the relatively small area of the CCD chip I use (about 10mm across) the difference is evident. Other optical aberrations, like coma, are also present, and grow with distance from the optical axis.
But it gets worse. Like most mass-produced SCTs, the Celestron 14-inch ’scopes suffer from “mirror flop”. These telescopes are focused by moving the primary mirror along the central baffle tube, and there is always a little “slop” between the collar that rides along the baffle tube and the tube itself. Hence when the telescope is moved from one area of the sky to another the mirror can “flop” a small but annoyingly significant amount, thus changing the optical collimation. The biggest problem occurs when “flipping” the telescope from one side of the mounting pier to the other – as happens when using a German equatorial mount at the time of meridian crossing. Even if one is careful to match the field of view both pre- and post-meridian crossing, the various comparison stars will fall at different locations along the blue curve shown in Figure 6. And that’s assuming the optical alignment, and thus the “minimum” location of the curve, do not move. Finally, focus changes can change the shape of the curve as the zone of best focus migrates from a central “point” of best focus (hopefully near chip center) to circular zones of best focus. When the ambient temperature is changing rapidly this becomes a problem – one which could be mitigated using a temperature-compensating focuser which, sadly, I don’t have. Figure 8 is a case in point. The red curve is the differential magnitude between two comparison stars as reduced through PPX, while the blue curve is from data reduced using the Mira pipeline. Since Mira uses unweighted aperture photometry it is less affected by changes in the PSF shape.
Another way in which PPX occasionally fails is during the star-matching process. The star-matching function was originally written as a stand-alone program that took as input the results from a run of Mira’s MExtractor plug-in. Within the context of PPX, MATCH_STARS does not have sufficient error-trapping code installed, and the various switches and parameters included in the function call are not available to be adjusted by the PPX user. Currently I have hard-coded the parameters which seem to work best for my particular setup. Still, the program can crash if the star-match function is not able to find enough matched triangles to determine a solution. This can happen when the areal overlap between images is insufficient and there are not enough stars in common to create enough similar triangles. If there was a “jump” in the guiding such that the star images are doubled, the program will find two closely-spaced objects for each star, which can confuse the matching algorithm. Finally, in cases of poor guiding the PSF images might be elongated beyond what the initial DAOPHOT FIND parameters (particularly the “round” limits) will allow the program to detect. All of these cases can cause the star-match function to crash hard, taking PPX with it. I’m currently updating that section of the code, not so much for my own use but in case others decide to use PPX. Once completed there will be proper error trapping, and appropriate messages will be written to the output log file. I’m also going to add the star-matching switches and parameters to the startup parameters dialog.
Conclusions
PPX, using optimal variance-weighted extraction, has the capability to significantly increase the S/N of star brightness estimates in two-dimensional CCD images as compared to normal unweighted aperture photometry programs. Together with other program features, such as the robust location of the “stars-of-interest” in the template image and the decision not to re-center apertures during the extraction process (and thus possibly chase hot pixels), a reduction in the standard deviation of differential magnitudes of comparison stars of as much as 25% compared to aperture photometry routines has been achieved. Some improvements are needed before the program is ready for anything beyond personal use, including better error trapping and user access to the parameters used by the star-matching algorithm. On the hardware side, here at Beverly Hills Observatory improvements could be realized by installing a temperature-compensating focuser and by replacing the current CGE mount with one which allows tracking past the meridian, so that the images remain at the same location on the chip. This second item is being addressed with the acquisition of a previously-owned Astro-Physics 1200gto mount.