A crude first pass at the reduction of UNSWIRF data can be done at the telescope using a mixture of IRAF routines. A package of IRAF programs has been compiled by Stuart Ryder (with enhancements by Brad Cavanagh at the University of Victoria and by Michael Burton at UNSW) to streamline the final processing of UNSWIRF data. These programs have been put together into an add-on package for IRAF that the user can easily set up themselves as follows:
gunzip unswiraf.tar.gz
tar xvf unswiraf.tar
rm unswiraf.tar
set unswirfdir = "home$UNSWIRF/"
task $unswirf = "unswirfdir$unswirf.cl"
reset helpdb = (envget ("helpdb") // ",unswirfdir$helpdb.mip")
cl> unswirf

You should receive a welcome message and a list of the new tasks.
un> cd UNSWIRF
un> xc unswfit.x -lllsq

In the unlikely event that the compilation fails, try checking that the LD_LIBRARY_PATH environment variable is defined.
un> cd unswirfdir
un> softools
so> mkhelpdb root.hd helpdb.mip
so> bye
un> cd

which recompiles the help library, and should report something like "Total of 19 help modules in 3 packages, file size 3904 bytes". Then, typing "help unswirf" (or just help) will bring up a short summary of the available tasks and their functions.
The data are now recorded at the telescope in FITS format, so the step described below should not be necessary unless you still happen to have some unreduced data stored in Figaro format.
[Since the raw data files are recorded onto disk in .sdf format, the easiest way to back up your data at the telescope is to make a tar file on the Sun Exabyte/DAT drives of the raw data area (e.g., cd /vaxinst/ccd_1/961118; tar cvf /dev/nrst5 *). If you resurrect this tar file on one of the data disks, then you can use this Unix shell script to convert a sequence of .sdf files to FITS files in your IRAF directory; it is best to run it while in the directory where your data is located. Within IRAF, type files *.fits > night1.in; copy night1.in night1.out. Use your favourite editor to globally remove all occurrences of ".fits " from night1.out. Finally, read all the files into IRAF using rfits @night1.in 1 @night1.out make+ oldiraf-. If your site has the Starlink CONVERT utility available, the conversion can be done directly from Figaro to IRAF format using this Unix shell script.]
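If the conversion script referred to above is not to hand, a minimal csh sketch along the following lines should do the same job, assuming your site has the Starlink CONVERT package (and hence its ndf2fits task) installed; the details are illustrative only:

#!/bin/csh
# Illustrative sketch only: convert each .sdf file in the current
# directory to FITS with the Starlink CONVERT task ndf2fits
# (assumes CONVERT has already been set up in this shell).
foreach f (*.sdf)
    set root = $f:r
    ndf2fits in=$root out=$root.fits
end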
Although data collected with IRIS using Method 4 (NDRs) are automatically corrected for non-linearity and dark current, we follow the Figaro convention of using the IRISLIN task to convert from ADU to e-. At the same time, we extract the etalon z setting from the header comment line (if present; otherwise it will be searched for in the image title string), and put this in the keyword Z_ETALON. The task that does both of these is called unswlin, and can generally be used to pre-process a whole night's data using unswlin 18nov*.imh (it checks each image to see if it has already been "linearised", and if so, skips to the next image).
Following the night's observing, you will probably have made a series of dome flatfields with varying filter and etalon z, preceded or followed by a matching set of lamp-off exposures. Two tasks are available for this purpose, unswflats and unswflat2. Both carry out the dark subtraction and produce a flatfield with the name Flat_filter_zzeta, where filter is the blocking filter used, and zeta is the etalon z setting.
unswflats is for when there is a sequence of lamp-on frames and lamp-off frames. Simply enter the image root name, the first image number with the lamp ON, and the first image number with the lamp OFF. unswflats will figure out the order of things from there. Note that unswflats deletes the lamp-on and lamp-off images after creating each flatfield, as they are unlikely to be needed again.
unswflat2 is for when there is a single lamp-off frame. It takes as input the first lamp-on frame, the last lamp-on frame, and the number of the lamp-off frame. It does not delete the lamp-on and lamp-off images afterwards!
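For illustration, a typical unswflats session might run as follows (the root name and image numbers here are invented, and the exact prompt wording may differ slightly):

un> unswflats
Image root name: 18nov
First image number with lamp ON: 101
First image number with lamp OFF: 113

unswflat2 takes the analogous inputs: the first and last lamp-on numbers, followed by the single lamp-off number.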
Again, I assume that you have a series of consecutive exposures of a spectroscopic standard star from this list over a range of z settings, repeated with the telescope offset by ~30". Input is the root name for the data, the first and last numbers in the standard sequence (these should all have been taken in pairs), the name of the output text file storing the results, an indication of whether the telescope moved first (alt=t) or the etalon (alt=e) in the sequence, and a list of radii (up to 7 values) for the photometry apertures. Only the first radius in this list is actually used in calculating sensitivities, but the photometry from the others can be inspected to help ascertain whether the right aperture size was chosen.
Regardless of whether the etalon spacing or telescope pointing was changed first, the task unswphot will carry out this photometry and compute the sensitivities automatically.
In good conditions, expect a few per cent variation (1-3%) between measurements at the same FPZ setting, but larger variations (5-10%) between different etalon settings (though these tend to vary in a systematic manner). Only the value for the continuum setting is used later (in unswcal), but it is useful to check the consistency of the calibration, and whether the variation in ADUs with FPZ setting mirrors the variation in the scaling factor determined with unswcube4/5.
unswphot assumes the star name has been recorded as BSxyz; a list of the stars whose photometry is recorded, together with their wavelengths, can be determined from inspection of unswphot.cl. It is relatively simple to add additional standard stars to this list by following the format of the existing entries.
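By way of illustration, a call to unswphot might look something like the following (every name and value here is invented, and the actual prompts may be worded differently; consult the task's parameter prompts for the real interface):

un> unswphot
Image root name: 18nov
First image number in standard sequence: 130
Last image number in standard sequence: 145
Output text file: bs4039.dat
Telescope or etalon moved first (alt=t/e)?: t
Aperture radii (pixels): 4,6,8,10,12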
Only the most luminous sources will have adequate signal-to-noise for the line width to be treated as a free parameter in the cube fitting. Most sources will therefore require some sensible constraint on the line width; since the emission lines in most objects will be unresolved even by UNSWIRF (except perhaps for outflow sources), etalon scans of an arc line can be used to map both the variation of line width and of peak position over the etalon area. The latter can be used to correct for variations in the peak position (as measured by the cube fitting) which are intrinsic to the etalon.
The cubes can be generated using the docube task (when there is a single arc-off frame) or the doscube task (when arc-on and arc-off frames were taken in pairs).
Process the cube with unswfit cube cube fwhmimage- func="lorentzian" guess="parabola" fwhm=15 fwhmfix- dz=2, where cube is the name of the IRAF cube file. This will produce an integrated intensity map cubeI, a line width map cubeW, and a "velocity" map cubeV. To make use of the width and velocity maps in later processing, you will need to rotate them by +90 degrees (unlearn rotate; rotate cubeW cubeW 90; rotate cubeV cubeV 90), and then some judicious use of the imedit task may be in order, to patch over any spurious pixels. At a pinch, you may be able to use arc line maps from previous observing runs, but since the width will depend on how good the parallelism was, you should use arc line maps from the same day, or at least the same run.
Now to some real data! The basic steps that must be carried out are: form a datacube from the pre-processed images (unswcube, or unswproc/unswmerge); extract and blank the moment maps (unswblank); flux-calibrate the intensity map (unswcal); and correct the velocity map (unswvel or unswvel2).
There are three versions of the program, unswcube, unswcube4 and unswcube5, to choose from (unswcube2 and unswcube3 are now defunct):
unswcube is the original program. Scaling is done by a single fixed number, input on request, applied between the continuum frame and all the line frames. Standard bad pixel and mask files are used (unswirfdir$unswirf.bad and UNSWIRFrmask) and no smoothing is performed. Note that the bad pixel list corresponds to images which have already been rotated 90 degrees anticlockwise, whereas the mask has the same orientation as the raw data (sorry, an historic accident in the programming!). After processing the images and sorting them in order of increasing z, the continuum image will be displayed in the image display window, and you will be asked to mark the location of at least one continuum object in this frame using the space bar. This will be used to register all the frames to a common coordinate system, so these objects must be present in ALL the frames, otherwise the alignment will fail. If there are no suitable field stars, you may still be able to use a continuum object with superimposed emission (e.g., an AGN), but be aware that the peak of the emission may change spatial position with z. Check the output of the imalign task to see that the shifts are all small. It would also pay to examine each plane of the cube (use the unswdisp task) for proper alignment and artifacts.
Note that the input images will not actually be altered, so that you can re-run unswcube if necessary, without having to reload and pre-process the raw data.
unswcube4 and unswcube5 allow (i) the scaling factors to be individually chosen (placed in a text file with one line for each line FPZ setting; each line setting is scaled to the continuum FPZ setting), (ii) the sigma of the Gaussian smoothing in pixels to be chosen (set it to 0.001 if you don't want any smoothing; you can estimate the sigma by running imexam on a sky-subtracted frame and using the "j" and "k" keystrokes for Gaussian fits), and (iii) the names of the bad-pixel and mask files to be specified. The number of object frames needs to be explicitly set as well.
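For illustration, a scaling-factor file for a scan with five line FPZ settings might contain (these particular values are invented, though as noted below real ones typically lie in the range 0.9-1.1):

1.02
0.98
1.01
0.95
1.04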
unswcube5 is the same as unswcube4, except that after cleaning the pixels with fixpix it brings up imedit to allow interactive cleaning of any bad pixels that remain. Since these pixels cannot be recorded, you don't want to do this step more than once! Hence unswcube4 is typically run several times, iterating on the values of the scaling parameters, first with a low smoothing factor, then with a smoothing factor equal to the seeing. When the best values for these have been found (from inspection of the cube, using unswdisp2), it is time to run unswcube5 with these parameters. Warning: keep track of the scaling factors as you vary them; it's very easy to forget which way you have moved them! You should be able to get down to just 1-2% changes in these factors before they are optimised. Typically they will lie in the range 0.9-1.1, but can fall outside this, particularly in the case of bad weather!
The locations of bad pixels to be patched over are recorded in the file unswirfdir$unswirf.bad; since the number and location of bad pixels generally changes with time, you should feel free to update this file yourself (note that these pixel locations correspond to images which have already been rotated through 90 degrees). You should carefully examine the output datacube for weird artifacts (e.g., "bullseyes") that come from the sub-pixel image interpolation while aligning all the images; these are most likely due to unlisted "hot" pixels which, once identified in the raw data, can be appended to the bad pixel list. The mask (unswirfdir$UNSWIRFrmask) also cannot be assumed to remain constant, and should be determined for each observing run.
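For reference, IRAF fixpix-style bad pixel files give one rectangular region per line as x1 x2 y1 y2; a single hot pixel at, say, (57, 212) in the rotated frame would therefore be appended to unswirfdir$unswirf.bad as the (hypothetical) line:

57 57 212 212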
In addition to the cube, a continuum image is also output, named with the cube name plus the suffix cont.
Improvements to these tasks, particularly to deal with the position of the stars in sky frames moving differentially with respect to the source frames, would be appreciated!
If you have repeat sequences of your object and sky, or you obtained images at extra z settings afterwards, then this process needs to be broken down into two steps: first the basic image processing, then image alignment and averaging to form a master datacube. The task unswproc will do the former, creating new processed versions of each image with the prefix ip (as with unswcube, it was felt best to leave the input data frames unaltered). Next, run the task unswmerge on these new files to create the final cube of averaged and sorted images. You will need to give it a text file containing the names of images to be averaged together (i.e., those with the same z), e.g.,
ip19nov0268 ip19nov0278
ip19nov0264 ip19nov0274
ip19nov0265 ip19nov0275
ip19nov0220 ip19nov0238
ip19nov0221 ip19nov0239
ip19nov0222 ip19nov0240
ip19nov0223 ip19nov0241
ip19nov0224 ip19nov0242
ip19nov0225 ip19nov0243
ip19nov0226 ip19nov0244
ip19nov0227 ip19nov0245
ip19nov0266 ip19nov0276
ip19nov0228 ip19nov0246
ip19nov0277 ip19nov0267

If you have three repeat sequences, then this file will have three columns; if you only obtained one sequence, then it will have only one column, and the frames will still be sorted in order of increasing z, but no averaging will be done. Note that although the order of the image pairs/trios is not important (they will be sorted in order of increasing z before the cube is made), the first image name(s) in the file should be the continuum frame. Also note that the images need not be contiguous in z; in the example above, two complete scans from z=413 to 463, followed by z=477, were done (runs 220-228 and 238-246); later it was realised that extra z settings were required (z=400, 407, 470, 484, and 550), and each of these was done twice (in runs 264-268 and 274-278). Provided the z increment is constant between all frames to be merged, with no gaps, then unswmerge can handle it.
Should the image centroiding on any of the continuum objects fail to converge for some reason (images with large offsets, say > 10 pixels relative to the continuum frame, may need to be pre-registered using the imshift task to bring them into closer alignment), then unswmerge will crash, and you will need to investigate the source of the problem, tidy up the intermediate images, and start again. Note that the images produced by unswproc ARE overwritten after being registered, so to avoid shifting images twice, you may need to rebuild these with unswproc.
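Such a pre-registration might look like the following, where the image name and shifts are invented; measure the actual offsets first (e.g., with imexam or imcentroid):

un> imshift ip19nov0272 ip19nov0272 -11.3 2.6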
The penultimate step in the data reduction is to extract the "moments" of the datacube (integrated line intensity, position of the line peak in the etalon z/wavelength/velocity direction, and line width). In practice, the line profile through any particular spatial pixel is usually too noisy to allow the line width to be a free parameter, and so the instrumental profile width, as mapped by the arc line scans, is assumed instead. Having already set the base of the line profile to zero (by subtracting off the continuum), this leaves only the position and the height of the (Lorentzian) profile as the remaining free parameters in the fitting.
The fitting routine will generate values for the intensity and "velocity" at each pixel, no matter how unlikely the value. Thus, it makes sense to blank out (in this case, set to zero) all pixels in the vignetted corner regions of these images (when the wide-field optics have been used). In addition, the user is given the opportunity to interactively blank out pixels in both the intensity and velocity maps which either fail to exceed some intensity threshold, or do not yield a sensible velocity (i.e., the fitted position of the peak should lie within the z range scanned, assuming the observations did adequately span the line of interest).
The task that controls all this is called unswblank. You will be prompted for the name of the cube, the name of the arc line width map (type no if you don't feel you need one; don't forget to have rotated the width frame from unswfit if you do!), and the z-increment between images. The unswfit routine (written by Michael Ashley) is then called to do the Lorentzian fitting, and produce the raw moment maps. The integrated intensity image will then be displayed in the image display window. If using SAOimage, you can place this image in one of the "blink" buffers (select "Scale", then "Blink" with one of the mouse buttons, to associate the image with that button); with Ximtool, the image is in buffer 4, and all later images will be in buffer 1. Move the cursor over the sky regions, and estimate the amplitude of the noise level. Enter this number (probably 10000-15000) as the first estimate of the threshold to apply. You are also asked for a high threshold; enter a number just above the largest source data value, as this helps catch any hot pixels which might have been missed. An appropriate blanking mask of 1's and 0's will be constructed on the basis of the above criteria, and displayed. Now, by blinking this mask with the raw intensity image, you can assess whether the threshold is too high or too low to separate "real" emission from noisy data. Enter "n" and another threshold level, and keep on iterating until you find a mask you are happy with (or a best compromise); otherwise, enter "y", and the blanked maps will be produced: a blanked intensity map cubeIb and a blanked "velocity" map cubeVb.
Note that cubeI and cubeV are also output; these are the equivalents of cubeIb and cubeVb had no blanking been done.
Sometimes, particularly with low surface brightness sources, you may come away a bit disappointed with the final fitted result, in which the emission seems even less well-defined than in one or two of the on-line frames. This is usually a case of more not necessarily being better: the emission may be confined to two or three frames, and the bulk of the frames are blank. What tends to happen is that unswfit picks up on only one peak in each spectrum, and when most of the spectrum is noise, that peak may not be the actual emission, and is normally blanked. In cases like this, it would be worth trying to fit a cube with fewer frames (but not less than 2!). For example, if the only emission apparent in a cube of 6 frames is in frames 4 and 5, then sub-section this cube into a new one (imcopy cube[*,*,3:6] cube2), and run this smaller cube through unswblank again.
The blanked intensity maps are still in units of electrons; unswcal converts these into calibrated fluxes. It divides by the exposure time (read from the header) and then applies the sensitivity calculated by unswphot (typically you use some average value, though strictly speaking you should apply the continuum frame level, as all line frames are scaled to this). The task unswcal will do both of these steps, producing a calibrated image cubeIc, in units of 10^-16 erg cm^-2 s^-1. The input file name is cubeIb, and the scaling factor is given in 10^-16 erg cm^-2 s^-1 per e-/s.
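In effect, the calibration amounts to two image arithmetic operations, which you could reproduce by hand with imarith if ever needed (the exposure time of 120 s and sensitivity of 0.85 here are invented for illustration):

un> imarith cubeIb / 120. tmp      # divide by exposure time in seconds
un> imarith tmp * 0.85 cubeIc      # apply sensitivity (10^-16 erg cm^-2 s^-1 per e-/s)
un> imdelete tmp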
Similarly, a corrected velocity map can be produced using the task unswvel or unswvel2. These subtract the arc line "velocity" image, before converting from z units to km/s. The input parameters are the velocity frame output from unswblank (cubeVb) and the reference velocity image obtained from unswfit (don't forget to have rotated it!). Converting to an absolute velocity scale is difficult, and can only be done if the same emission line has also been observed in a source whose peak velocity is known. The program initially produces relative velocities. unswvel applies a constant offset to all pixels. unswvel2 determines an absolute velocity by taking as input the velocity appropriate to the first plane in the cube produced by unswcube, the FPZ number of that plane, and the spacing in FPZ units between planes. All pixels whose velocity can't be determined (e.g., due to no flux) have their zero-point set to -1000 km/s to avoid velocity values near zero. Beware of this when displaying the frame!
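Since the -1000 km/s blank value will otherwise dominate the greyscale, one option is to display the corrected map with explicit cut levels; assuming the output velocity map is called cubeVc (by analogy with cubeIc; check the actual name produced by unswvel), something like:

un> display cubeVc 1 zscale- zrange- z1=-300 z2=300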