|Co-authors|Data pipeline team|
|Date Created|March 23, 2004|
|Date Updated|September 10, 2008|
|Maintained by|Gopakumar P|
Document Scope: This document addresses the pipeline
software requirements for the TAUVEX mission.
TAUVEX is a suite of three imagers which
will fly on GSAT-4.
The telescope will point toward the sky at a fixed orientation to the
Earth-satellite axis and will thus scan a fixed line of celestial
latitude with a period of 24 hours. Each individual photon is read and
will be reconstructed into an image of the sky after processing on the ground.
- UV Software Overview
- Intermediate Data File Formats
- Spacecraft Telemetry Interface
- TAUVEX Database Design
- TAUVEX Data Reception
Data from the instrument require considerable processing before they
can be used for science analysis, but because of the volume of data,
this reduction has to be done by an automated pipeline. The output data
products must be those most useful to the user community.
- The software must run on a
variety of architectures, and so is
written in Java (1.6.x).
- Where possible, we use code
developed by other projects.
- The code must be modular so
that individual parts may be
substituted as required.
- Each logical step will be a
separate module with a standard data
product which can be verified or used as a base for further analysis.
- The pipeline system will be
run from a script or command file.
- The program operation will
be governed by a parameter file.
- Each data file will be fully
self-documenting. The history of
that file will be contained in the data header.
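As an illustration of the parameter-file requirement above, a minimal sketch of how a module might load its governing parameters using java.util.Properties. The class name, method names, and key names here are hypothetical, not taken from the TAUVEX design:

```java
import java.io.FileReader;
import java.io.IOException;
import java.util.Properties;

// Hypothetical sketch: loading the parameter file that governs a
// pipeline run. Key names (e.g. "telescope.id") are illustrative.
public class PipelineParams {
    private final Properties props = new Properties();

    public void load(String path) throws IOException {
        FileReader r = new FileReader(path);
        try {
            props.load(r);
        } finally {
            r.close();
        }
    }

    // Fall back to a default so a missing key does not stop the run.
    public String get(String key, String def) {
        return props.getProperty(key, def);
    }
}
```

Plain key=value files keep the parameter handling portable across the architectures the pipeline must support.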
- Level 0 Data: Raw data from spacecraft.
- Level 1 Data: Data processed to remove filler.
- Level 1a Data: Level 1 data converted to FITS binary
tables. Created by UVS_create_level1a
- Level 1b Data: Level 1a data with the x and y
pixel information of each event. Created by UVS_calc_xy
- Level 1c Data: Level 1b data with data
calibration information. Created by UVS_apply_cal
- Level 1d Data: Level 1c data with flat field
information. Created by UVS_flat_field
- Level 1e Data: Level 1d data with geometric
distortion information. Created by UVS_geom_corr
- Level 1f Data: Level 1e data with R.A. and decl.
information for all events. Created by UVS_calc_radec
- Level 1g Data: Level 1f data after data registration.
Created by UVS_register_data
- Level 2 Data: Processed data from a single scan of the sky.
- Level 3 Data: Data from multiple scans.
- Level 4 Data: Derived science products.
There are four major stages in the pipeline flow, each of which is
further expanded below.
Data produced by the instrument will be sent through the spacecraft
telemetry system to the Master Control Facility at Hassan and will be
processed into Level 0 data. The pipeline will begin with the Level 0 data.
A number of checks will be done by the software to ensure that there is
nothing grossly wrong with the data. These checks may include tracking
the voltages, temperatures, total counts, etc. If a potentially serious
condition arises, an operator will be notified and processing will be suspended.
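A minimal sketch of such a sanity check, assuming each monitored housekeeping quantity is compared against a limit range. The limit values and method names below are illustrative placeholders, not mission values:

```java
// Hypothetical sketch of the housekeeping checks described above:
// each monitored quantity is tested against a limit range, and any
// violation is flagged so an operator can be notified.
public class HousekeepingCheck {
    public static boolean inRange(double value, double lo, double hi) {
        return value >= lo && value <= hi;
    }

    // Returns true when all monitored values pass. The limits here
    // are illustrative placeholders only.
    public static boolean checkFrame(double voltageV, double tempC, long totalCount) {
        return inRange(voltageV, 26.0, 30.0)
            && inRange(tempC, -10.0, 40.0)
            && totalCount >= 0;
    }
}
```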
- Correction for Instrumental Effects:
Corrections at the instrumental level include:
- Distortion corrections -
where each event is remapped to a
corrected point based on its location in the field.
- Flat fielding - The
instrument sensitivity depends on the
location on the focal plane.
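As a sketch of the flat-fielding step, assuming the focal-plane sensitivity is available as a 2-D map: each event's weight is divided by the relative sensitivity at its pixel position. The class and array layout are hypothetical:

```java
// Hypothetical sketch of flat fielding: counts are divided by the
// relative sensitivity of the focal-plane position at which they
// were detected. The flat-field map is a plain 2-D array here.
public class FlatField {
    public static double correct(double counts, double[][] flat, int x, int y) {
        double sensitivity = flat[y][x];
        // Guard against dead pixels (zero or negative sensitivity).
        if (sensitivity <= 0.0) {
            return 0.0;
        }
        return counts / sensitivity;
    }
}
```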
We get a time-tagged series of photons from the instrument which we
then convert into a map of the sky. In order to ease the pointing
requirements on the spacecraft, we will correct each photon's spatial
position by shifting and registering to other photons from the same
star. Because of the differing sensitivities of the different
instruments, it may be necessary to use the data from one telescope to
correct the others.
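The registration described above can be sketched as follows: the mean offset between measured star centroids and their reference positions is computed, and every photon in the same time slice is shifted by that offset. The class and method names are hypothetical:

```java
// Hypothetical sketch of registration: photons are shifted by the
// mean offset between measured star centroids and their reference
// positions, aligning photons from the same star across the scan.
public class Register {
    // Mean offset of reference positions from measured centroids.
    public static double[] meanOffset(double[][] measured, double[][] reference) {
        double dx = 0.0, dy = 0.0;
        for (int i = 0; i < measured.length; i++) {
            dx += reference[i][0] - measured[i][0];
            dy += reference[i][1] - measured[i][1];
        }
        return new double[] { dx / measured.length, dy / measured.length };
    }

    // Apply the offset to one photon's (x, y) position.
    public static double[] shift(double[] xy, double dx, double dy) {
        return new double[] { xy[0] + dx, xy[1] + dy };
    }
}
```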
Level 2 Data:
Once the registration is complete, the
data are ready to be made into images from individual scans. These
images, in standard formats (e.g., FITS, VOTable), are readable by
any standard astronomical software package. Information
on each data set will be written to a database which can be
queried online for later access by the user community.
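Image creation amounts to binning the registered, time-tagged events into a 2-D count map for a single scan. A minimal sketch, with hypothetical class and method names:

```java
// Hypothetical sketch of Level 2 image creation: events with
// (x, y) pixel positions are binned into a 2-D count image.
public class MakeImage {
    public static int[][] bin(int[][] events, int width, int height) {
        int[][] image = new int[height][width];
        for (int[] ev : events) {
            int x = ev[0], y = ev[1];
            // Drop events that fall outside the image bounds.
            if (x >= 0 && x < width && y >= 0 && y < height) {
                image[y][x]++;
            }
        }
        return image;
    }
}
```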
Level 3 Processing:
The responsibility of the automated
data pipeline software ends with Level 2 data production.
Higher-order data products will be dealt with in due course,
depending on the availability of manpower and resources.
Level 3 data will be combined images from multiple observations and can contain
additional source information useful for variability studies and light
curve analysis. Derived science products such as point source catalogs and
light curves form the Level 4 data.
Running the Pipeline
The entire pipeline can be run using a shell script (UVS_run_pipeline.sh).
The user needs to supply the filename for the Level 1 (or Level 0) data. However, this
script works only on Unix/Linux; on other platforms the individual modules must be
run separately. These individual steps may be performed in the following order.
Please check the modules section for more information on
each of them.
- UVS_ingest_data: Splits combined data into science and telemetry data files.
- UVS_create_level1a: Converts data into FITS binary tables.
- UVS_calc_xy: Calculates x and y positions of photons.
- UVS_apply_cal: Applies calibration to the data.
- UVS_flat_field: Corrects for variations in the calibration.
- UVS_geom_corr: Corrects for distortion in the system.
- UVS_linearity_corr: Corrects for the non-linear response of the detector.
- UVS_register_data: Corrects for pointing errors.
- UVS_create_image: Creates Level 2 FITS files.
- UVS_extract_ps: Extracts point sources.
- UVS_variability: Writes the time history for each point source.
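On platforms without the shell driver, the modules above must be invoked by hand, in order. The sketch below encodes that order and builds a command line for one module; the "java &lt;module&gt; &lt;input file&gt;" calling convention is an assumption for illustration, not part of the TAUVEX design:

```java
import java.util.Arrays;
import java.util.List;

// Hypothetical sketch of invoking one pipeline module where the
// shell driver is unavailable. The calling convention is assumed.
public class RunModule {
    // The module order given in the list above.
    static final String[] ORDER = {
        "UVS_ingest_data", "UVS_create_level1a", "UVS_calc_xy",
        "UVS_apply_cal", "UVS_flat_field", "UVS_geom_corr",
        "UVS_linearity_corr", "UVS_register_data", "UVS_create_image",
        "UVS_extract_ps", "UVS_variability"
    };

    // Build the command line for a single module run.
    public static List<String> buildCommand(String module, String inputFile) {
        return Arrays.asList("java", module, inputFile);
    }
}
```

Each command could then be handed to java.lang.ProcessBuilder, checking the exit status before moving to the next module.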
The input of the first step (UVS_create_level1a) is obtained from the spacecraft. For
testing, the input file can be created by running modules 1 and 2 given in the Test Routines
section, with inputs from the LEVEL0 INPUTS given in the
Data Pipeline Downloads.
Ingest is the first step in the TAUVEX pipeline and will convert the
spacecraft data into scientifically useful data products.
The pipeline will start with the Level 1 data which is
defined in the document:
The output data is the validated Level 2 data. The format of
Level 1a data is binary FITS tables with multiple extensions. The first
extension contains only a header with all observation-specific
information. Subsequent extensions contain information about the
individual photon hits and are separated by time stamp; i.e., one
extension contains all the photon detections within a 1/8 s time
interval. The headers contain only information specific to that time interval.
Based on the 'SYNC' word, the data are split into separate files for each telescope.
NOTE: The output file name would be <Level1a_data_file>_T<n>.fits, where
n = 1,2,3 for each of the three telescopes.
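Two of the conventions above can be sketched directly: mapping an event time-tag to its 1/8 s extension and forming the per-telescope output file name. The class name and the assumption that extension 0 is the header-only HDU are illustrative:

```java
// Hypothetical sketch of the Level 1a layout conventions above.
public class Level1aLayout {
    // Extension index for an event time-tag in seconds since the
    // start of the observation; extension 0 is assumed to be the
    // header-only HDU, so data extensions start at 1.
    public static int extensionIndex(double timeSec) {
        return 1 + (int) Math.floor(timeSec * 8.0);
    }

    // Output file name per telescope, n = 1, 2, 3.
    public static String outputName(String base, int telescope) {
        return base + "_T" + telescope + ".fits";
    }
}
```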
The UVS_create_level1a module will read TAUVEX Level 1
data and will write out FITS binary tables in the
Level 1a format.
There are several instrumental effects including geometric distortions
and flat fielding that have to be corrected for.
Input: The input data is the validated Level 1a data.
Output: The output data will be a FITS binary file with each extension representing a different frame - basically the same format as the Level 1a data.
Overview: Location of each event may need to be
corrected for instrument scanrate (for scanning instruments),
in-orbit jitter and drifts. This step corrects the positions for
all spacecraft shifts.
The input data is the geometrically and photometrically corrected event data.
The output data will be an image of the sky (Level 2 data).
Level 3 Processing
Level 3 data products are data products derived from the primary data
product - the Level 2 image of the sky. One example would be point
source files with fluxes in multiple bands. Other files will be
determined by the scientific requirements.
Input: The input data is the Level 2 FITS image.
Output: Output data files are in different formats depending on the file.
A suite of automated and manual test routines has to be developed for
the pipeline software so that we can easily check the performance of
the pipeline routines.
<sky_simulation_output> <start_log_file> <end_log_file> <level0_data_file> <telemetry_data_file>
This module reads data in the TAUVEX Level 0
format and strips out the
data and the log files. The output data will be written in the Level 1 format.
This module reads the FITS binary table data from Levels 1b to 1f and writes it out.
FITS Library: nom.tam.fits, Version 1.0.
- Read_level1a_hdr: Reads FITS binary table header into structure.
- Read_level1a_data: Reads FITS binary table.
- Write_level1a_hdr: Writes header structure into FITS binary table header.
- Write_level1a_data: Writes FITS binary table.
- Read_level2_hdr: Reads FITS image header into structure.
- Read_level2_data: Reads FITS image data.
- Write_level2_hdr: Writes header structure into FITS image extension.
- Write_level2_data: Writes data into FITS file.
- Update_hdr: Updates specific header keywords.
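As a stand-in for the Update_hdr behaviour listed above, and for the requirement that each file be self-documenting, a sketch in which keywords live in an ordered map and every update also appends a HISTORY line. A real implementation would operate on nom.tam.fits header objects; the class and method names here are hypothetical:

```java
import java.util.ArrayList;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

// Hypothetical stand-in for Update_hdr: keyword updates are recorded
// in an ordered map, and a HISTORY entry is kept per update so the
// file's processing history stays in its header.
public class HeaderStore {
    private final Map<String, String> keywords = new LinkedHashMap<String, String>();
    private final List<String> history = new ArrayList<String>();

    public void update(String key, String value, String module) {
        keywords.put(key, value);
        history.add(module + ": set " + key + " = " + value);
    }

    public String get(String key) { return keywords.get(key); }
    public List<String> getHistory() { return history; }
}
```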