<&Viz.NoteHead "The PMM USNO-A2.0 Catalogue">

The details listed below are from the files kindly supplied to CDS by Dave Monet at the ftp://ftp.nofs.navy.mil/usnoa site. Please refer to http://ftp.nofs.navy.mil/projects/pmm/ for the most recent details about the PMM products.

A compression technique adapted to the PMM USNO-A2.0 was used for the CDS installation, preserving the direct-access capability on a catalog shrunk to 3.6 Gbytes.



1  Introduction to USNO-A2.0

****************Really Important Stuff**************************************

1) This file is the first level of documentation for the USNO-A2.0 catalog.  It
   discusses the changes between USNO-A2.0 and USNO-A1.0, and familiarity with
   USNO-A1.0 is presumed.  Should this not be the case, please start by reading
   the A1.0 documentation (README.V10 and associated files) before continuing
   with this file.  Questions and comments should be directed to

   Dave Monet
   US Naval Observatory Flagstaff Station
   PO Box 1149  (US Mail Only)
   West Highway 66  (FedEx, UPS, etc.)
   Flagstaff AZ 86001 USA

   Voice: 520-779-5132
   FAX:   520-774-3626
   e-mail:  

   Please understand that the level of support provided will be commensurate
   with the level of effort expended.  I am too busy to do your homework
   for you.  E-mail works better than the phone.

2) If you have been using USNO-A1.0, all you really need to do is swap
   the new versions of the .ACC and .CAT files for the old ones.  If
   you insist on understanding what has changed, you can read the rest
   of the documentation, but the new version is intended to be as
   compatible as possible with the old one.

3) This file is subject to being updated.  We are in the process of moving
   the USNO Flagstaff Station Web site from
       http://www.usno.navy.mil/nofs
   to
       http://www.nofs.navy.mil
   Please be patient during the transition.  This version of the file
   was all that I could prepare in time for the CD-ROM distribution.
   As changes, mistakes, and additions are processed, the new version
   of this file will be available from our Web site.

*********************The Rest Of The Stuff********************************

                    USNO-A V2.0
              A Catalog of Astrometric Standards
                   David Monet a)

	Alan Bird a), Blaise Canzian a), Conard Dahn a), Harry Guetter a),
	Hugh Harris a), Arne Henden b), Stephen Levine a),
	Chris Luginbuhl a), Alice K. B. Monet a), Albert Rhodes a),
	Betty Riepe a), Steve Sell a), Ron Stone a), Fred Vrba a),
	Richard Walker a)

a) U.S. Naval Observatory Flagstaff Station (USNOFS)
b) Universities Space Research Association (USRA) stationed at USNOFS.

============== Abstract =======================

   USNO-A2.0 is a catalog of 526,280,881 stars, and is based on a
   re-reduction of the Precision Measuring Machine (PMM) scans that
   were the basis for the USNO-A1.0 catalog.  The major difference
   between A2.0 and A1.0 is that A1.0 used the Guide Star Catalog
   (Lasker et al. 1986) as its reference frame whereas A2.0 uses the
   ICRF as realized by the USNO ACT catalog (Urban et al. 1997).

   A2.0 presents right ascension and declination (J2000, epoch of the
   mean of the blue and red plate) and the blue and red magnitude
   for each star.  Use of the ACT catalog and of new
   astrometric and photometric reduction algorithms should provide
   improved astrometry (mostly in the reduction of systematic errors)
   and improved photometry (because the brightest stars on each plate
   had B and V magnitudes measured by the Tycho experiment on the Hipparcos
   satellite).  The basic format of the catalog and its compilation is the
   same as for A1.0, and most users should be able to migrate to this
   newer version with minimal effort.

   This file contains a discussion of the differences between A1.0 and
   A2.0, and those points not discussed remain unchanged.  For convenience,
   the documents circulated with the A1.0 catalog are included in this
   distribution.

================= Discussion =========================

1.  REFERENCE FRAME

   USNO-A2.0 has adopted the ICRS as its reference frame, and uses
   the ACT catalog (Urban et al. 1997) for its astrometric reference
   catalog.  The Hipparcos satellite established the ICRS at optical
   wavelengths, but stars in the Hipparcos catalog are saturated on
   deep Schmidt survey plates as are the brighter Tycho catalog stars.
   Fortunately, the fainter Tycho stars have measurable images, so each
   survey plate can be directly tied to the ICRS without an intermediate
   astrometric reference frame.  The proper motions contained in the
   ACT catalog are more accurate than those in the Tycho catalog, so
   the ACT was adopted as the reference catalog.  USNO-A1.0 used the Guide
   Star Catalog v1.1 as its astrometric reference catalog, and the
   availability of the ACT was the driving force behind the compilation
   of USNO-A2.0.

2.  STAR NAMES

   USNO-A2.0 continues the policy established for USNO-A1.0 of not
   assigning an arbitrary name to each object.  Without explicit star
   names, the IAU recommendation is to use the coordinates for the name.
   Since USNO-A2.0 contains a complete astrometric rereduction, the
   coordinates of objects are not the same, so the names for USNO-A1.0
   stars are NOT PRESERVED in USNO-A2.0.  If you need a name for a star,
   you can use either the coordinates or the zone and offset so long
   as you are careful to cite USNO-A2.0 as the source.

   (If anybody has a clever solution to the problem of star names that
   does not waste lots of space or CPU cycles, please let me know.)

3.  PHOTOMETRIC CALIBRATION

   The Tycho catalog provides B and V magnitudes for its stars.  USNO-A2.0
   uses these and Henden's photometric conversion tables between (B,V)
   and (O+E+J+F) to set the bright end of the photometric calibration for
   each plate.  This is an improvement over USNO-A1.0.

   Unfortunately, GSPC-II and other large catalogs of faint photometric
   standards are not available, so the faint end of the photometric
   calibration came from the USNO CCD parallax fields in the North,
   and from the Yale Southern Proper Motion CCD calibration fields
   (van Altena et al. 1998) for fields near the South Galactic pole.
   Hence, the faint photometric calibration of USNO-A2.0 may not be
   any better than for USNO-A1.0.  Sorry.  When better sources of faint
   photometric calibration data become available, new versions of USNO-A
   will be compiled.

   A new algorithm was used to compute the photometric calibration
   (a sketch of the bright-end step appears after this list).

     a) Since there are 300 or more ACT(==Tycho) stars on each plate,
        the O, E, J, and F magnitudes for each star can be predicted
        from B and V.  Given the relatively poor nature of this conversion,
        subtleties of the various photometric systems were ignored.
        Please remember that all Tycho stars are toasted on deep Schmidt
        plates, and we were lucky that PMM could compute decent positions
        and brightnesses for any of them.  Four solutions were done
        (O+E+J+F) which fit an offset for each plate and a common
        slope for all plates.  For example, there were 825 free parameters
        in the solution for the 824 POSS-I O plates, 824 offsets and 1 slope.
        This solution isn't quite as good as fitting individual slopes
        for "good" plates, but is much more stable than fitting individual
        slopes for "bad" plates.

     b) There are 215 POSS fields and 42 SERC/ESO fields with faint
        photometric standards.  Again, the ensemble of plates was
        divided into 4 solutions (O+E+J+F), and the fit allowed an
        offset for each plate but a common value for the linear and the
        quadratic term.  For example, there were 217 free parameters in
        the POSS-I O plate solution, 215 offsets, 1 slope, and 1 quadratic
        term.  Again, this offers stability at the expense of accuracy
        on the "good" plates.

     c) A number of iterative solutions for using the calibrated plates
        to calibrate the rest were tried, and most failed.  Finally,
        a stable solution was found.  For each of the 4 sets of plates,
        the faint zero points were fit as a function of the bright
        zero points.  Using this relationship, the faint zero points for
        all plates were computed.  (We chose to use the fit instead of the
        individual solutions for those plates which had the faint
        photometric standards.)  Note that this relationship provided
        the fifth (and final) parameter for the photometric calibration
        (i. e., bright offset, bright slope, faint offset, faint slope,
        faint quadratic).

        Once the coefficients were known for all plates, the overlap
        zones on adjacent plates were used to smooth the solution over
        the whole sky.  In an iterative scheme, the faint mean error
        for each plate was computed from all stars in common with other
        plates, and then the faint offset was adjusted after all the
        mean errors were computed.  This algorithm converged in 3 or 4
        iterations, and makes the plate-to-plate photometry as uniform
        as possible given the paucity of faint standards.

     d) No vignetting function was used.
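
   The following is a minimal sketch (in Python, not the production
   FORTRAN) of the bright-end step described in item a): one zero-point
   offset per plate plus a single slope common to all plates, solved as
   one linear least-squares problem.  The array names (pmm_mag,
   tycho_mag, plate_id) are illustrative and do not come from the
   USNO-A source code.

      import numpy as np

      def fit_offsets_common_slope(pmm_mag, tycho_mag, plate_id):
          """pmm_mag:   instrumental PMM magnitudes of the ACT/Tycho stars
          tycho_mag: photographic magnitudes predicted from Tycho B and V
          plate_id:  integer plate index (0 .. n_plates-1) for each star
          Returns (per-plate offsets, common slope)."""
          pmm_mag = np.asarray(pmm_mag, float)
          tycho_mag = np.asarray(tycho_mag, float)
          plate_id = np.asarray(plate_id, int)
          n_plates = plate_id.max() + 1

          # Design matrix: one indicator column per plate (the offsets)
          # plus one column holding the instrumental magnitude (the slope).
          A = np.zeros((len(pmm_mag), n_plates + 1))
          A[np.arange(len(pmm_mag)), plate_id] = 1.0
          A[:, -1] = pmm_mag

          coeffs, *_ = np.linalg.lstsq(A, tycho_mag, rcond=None)
          return coeffs[:-1], coeffs[-1]

   For the 824 POSS-I O plates this gives the 825 free parameters quoted
   above (824 offsets and 1 slope).  The faint-end fit of item b) is the
   same idea with one additional shared quadratic term.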

4.  ASTROMETRIC CALIBRATION

   A startling result of the comparison between PMM and ACT is that
   decent astrometry can be done on stars as bright as about 11th magnitude.
   Visually, these images have spikes and ghosts, and are not the sort of
   images commonly associated with the word "astrometry".  Since there
   are 300 or more ACT stars on a single Schmidt plate, each plate
   can be tied directly to the reference catalog without an intermediate
   coordinate system.  This solution includes corrections for systematic
   errors in the focal plane and for magnitude equation, and these
   are discussed below.  It should be emphasized that the raw measures
   are the same for USNO-A2.0 and USNO-A1.0, and the difference is in
   how these are combined to produce the coordinates found in the catalog.

   a) Schmidt telescopes have field-dependent astrometric errors, and
      these must be sensed and removed.  Because there are hundreds of
      reference stars on each plate, the algorithm used was as follows.
      Data from the exposure log are used to do the transformation from
      mean to apparent to observed to tangent plane coordinates using
      the relevant routines from Pat Wallace's SLALIB package.  The
      first set of solutions finds the best cubic solution between the
      PMM measures (corrected for the known Schmidt telescope pin cushion
      distortion) and the predicted positions.  Once an ensemble of these
      solutions have been done, the residuals are accumulated in 5 mm by
      5 mm boxes of position on the plate (see the sketch after this list).
      By combining the residuals
      from hundreds of plates, the systematic pattern can be determined
      with good precision.  The second step is to repeat the cubic fit
      between predicted and observed positions after correcting the
      observed positions using the pattern determined in the first step.
      Examination of the systematic pattern produced by the second
      step indicated that there was a small residual pattern that arose
      from the interdependence of the fixed pattern and the cubic
      polynomial fit.  A third iteration was done, and the resulting
      systematic pattern was consistent with random noise.

      The iterative process of determining the systematic pattern of
      astrometric distortions was done separately for each telescope
      in each color, and intermediate solutions based on zones of
      declination were examined for the effects of gravitational
      deflection.  None were found, so the final patterns were determined
      through the co-addition of all plates taken by a particular telescope
      in a particular color.  Hence, USNO-A2.0 uses 4 specific patterns
      instead of the single mean pattern used for USNO-A1.0.

   b) Inspection of the astrometric residuals from high declination
      fields (where the overlap between plates is large) showed that
      there was a significant radial pattern.  This, and the analysis
      of the residuals from the UJ reductions for the USNO-B catalog,
      suggested that magnitude equation was present.  This is hardly
      a surprise because the images of Tycho stars show spikes, ghosts,
      and other problems whereas the faint stars show relatively clean
      images.  The effect is small to non-existent within a radius of  2.2
      degrees of the center, and then rises to 1.0 arcsecond at  3.0
      degrees and continues to rise into the corners.  The effect is
      more or less the same for the POSS-I O, POSS-I E, and SERC-J plates,
      but a different behavior was seen for the ESO-R plates.  The
      source of this different behavior is not understood, and may
      indicate a software problem associated with the different size of the
      ESO plates (300x300 mm vs 14x14 in).

      The analysis of the UJ plates (like POSS-II J except with a 3 minute
      exposure) shows a similar behavior when the Tycho stars are subdivided
      into bins of <9, 9, 10, 11, and 12 magnitude.  Since the nominal
      difference between UJ and POSS-I is something like 4 magnitudes,
      the effect was assumed to be zero for stars fainter than 15 and
      rises linearly until it becomes the same for all stars brighter
      than 11.  This is an empirical correction, and more work needs to be
      done to verify its behavior.
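
   The sketch below illustrates, under assumed data structures, the
   residual binning of item a): after a cubic plate solution, the (x,y)
   residuals from many plates are accumulated in 5 mm boxes so that the
   telescope's systematic field pattern emerges from the noise.  The
   names (PLATE_SIZE_MM, accumulate) are illustrative only.

      import numpy as np

      PLATE_SIZE_MM = 355.0            # assumed grid extent on the plate
      BIN_MM = 5.0
      nbin = int(np.ceil(PLATE_SIZE_MM / BIN_MM))

      sum_dx = np.zeros((nbin, nbin))
      sum_dy = np.zeros((nbin, nbin))
      count  = np.zeros((nbin, nbin))

      def accumulate(x_mm, y_mm, dx, dy):
          """Add one plate's fit residuals (dx, dy) at plate positions
          (x_mm, y_mm) into the running 5 mm by 5 mm grid."""
          ix = np.clip((np.asarray(x_mm) / BIN_MM).astype(int), 0, nbin - 1)
          iy = np.clip((np.asarray(y_mm) / BIN_MM).astype(int), 0, nbin - 1)
          np.add.at(sum_dx, (iy, ix), dx)
          np.add.at(sum_dy, (iy, ix), dy)
          np.add.at(count,  (iy, ix), 1)

      # After looping accumulate() over hundreds of plates, the mean
      # pattern is sum_dx / np.maximum(count, 1) (and likewise for dy);
      # that map is subtracted from the measures and the cubic fit is
      # repeated, as described in the text.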

5.  NUMERICAL REFOCUS

   The most common mode for the PMM to mis-measure a plate is that it
   does not determine the distance between the camera and the plate
   accurately.  The PMM starts by using the granularity of the emulsion
   as a signal for setting the focus (i.e., minimum background smoothness),
   and then does 15 exposures separated by 0.5 millimeters to compute
   the actual pixels per millimeter.  In many cases, this algorithm is
   not sufficient, and the raw scans have relatively large astrometric
   errors, and show a sawtooth pattern in the residuals.

   Since PMM saves many more data than are contained in this catalog,
   it is possible to refocus the plate after the scan.  To do this, the
   known positions of the ACT stars are fit as a function of the new
   Z distance between the camera and the plate.  Minimization of these
   residuals indicates what the proper focus should have been, and then
   the entire set of raw measures are corrected for this effect.  In
   general, this process tightens the histogram of the number of plates
   as a function of the astrometric error.  The good scans are unaffected
   but the bad scans get better.  This algorithm has been applied to all
   plates used in USNO-A2.0.
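
   A minimal sketch of the refocus idea follows, assuming that a focus (Z)
   error shows up mainly as a small error in the effective plate scale.
   A range of trial scale corrections is scanned and the one that
   minimizes the residuals against the ACT-predicted positions is kept.
   Names are illustrative; the production code is in NEWBIN.TAR
   ./newbin/newz0.

      import numpy as np

      def refocus(x_meas, y_meas, x_ref, y_ref, trial_scales):
          """x_meas, y_meas: measured positions (numpy arrays); x_ref,
          y_ref: positions predicted from the ACT (same units and zero
          point); trial_scales: candidate scale corrections, e.g.
          np.linspace(0.999, 1.001, 201).
          Returns the best correction and its residual RMS."""
          best_scale, best_rms = 1.0, np.inf
          for s in trial_scales:
              dx = s * x_meas - x_ref
              dy = s * y_meas - y_ref
              # judge only the scale term: remove the mean offsets first
              rms = np.sqrt(np.mean((dx - dx.mean())**2 + (dy - dy.mean())**2))
              if rms < best_rms:
                  best_scale, best_rms = s, rms
          return best_scale, best_rms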

6.  EPOCH OF COORDINATES

   In USNO-A1.0, the coordinates were computed from the positions measured
   on the blue plate (O or J), so they were J2000 at the epoch of the
   blue plate.  For USNO-A2.0, we believe that the uncertainties in the
   positions are no longer dominated by systematic errors, so it makes
   sense to average the blue and red positions.  Hence, USNO-A2.0 coordinates
   are J2000 at the epoch of the mean of the blue and red exposure.  For
   POSS-I plates, this difference is trivial because the plates were taken
   on the same night.  For SERC-J and ESO-R, there can be a significant
   epoch difference between the blue and red plate, and stars with small
   proper motions will be affected.  Note that stars with large proper
   motions will be selectively deleted from the SERC-J+ESO-R portion
   of the sky because they will fail the test of blue and red positions
   within a 2 arcsec radius, and that this omission depends on the
   epoch difference of the plates for the individual fields.


7.  MULTIPLE ENTRIES

   We have done our best to remove multiple entries of the same star, but
   they still remain.  The improved astrometric reduction decreased the
   number of stars in the catalog by about 0.8%, but this reduction is
   masked by the increase in the number of stars
   associated with moving the north/south transition from about -33 degrees
   to about -17.5 degrees.  In the north/south overlap zone, double
   entries are generated for stars with large proper motions, because
   they were detected in each survey separately but moved far enough
   to escape the double detection removal algorithm.  There shouldn't
   be too many of these, but they may be obvious because they are
   statistically brighter than the typical catalog entry.

8.  BRIGHT STARS

   Images for stars brighter than about 11th magnitude are so difficult to
   measure that their computed positions may differ from the correct
   position by more than the 2 arcsecond coincidence radius used in the
   reductions.  For really bright stars, all that appears is an ensemble
   of spurious detections associated with diffraction spikes, halos, and
   ghosts.  To make USNO-A2.0 a useful catalog, bright stars were inserted
   into it so that the catalog is a better representation of the optical
   sky.  For many applications, it is better to know that a bright star
   is nearby than it is to insist that the poorly measured objects be
   deleted from the catalog.  In compiling USNO-A2.0, a list of all
   ACT stars that were correlated with PMM detections was kept.  For
   these stars, USNO-A2.0 contains the PMM position, not the ACT position,
   and the flag bit is set to indicate the correlation.  In the compilation
   process, all uncorrelated ACT stars were inserted into the catalog
   using the ACT coordinates.  However, ACT is not complete at the bright
   end because it omits stars with low astrometric quality.  Hence,
   a final pass inserted all Tycho stars that do not appear in the ACT
   catalog at the Tycho position.  According to the documents published
   with the Tycho catalog, every effort was made to make it complete at
   the bright end, even for stars with low astrometric quality.

   Note that one should not use the coordinates of ACT and Tycho stars
   presented in USNO-A2.0 for critical applications.  ACT stars appear
   at the epoch of the plate, but because the proper motions for the
   non-ACT Tycho stars are unreliable, these stars appear at the epoch
   of the Tycho catalog.

9.  PRETTY PICTURES

   The all-sky pretty pictures generated from USNO-A2.0 used an algorithm
   to reduce the over-density of southern stars that arises from the fainter
   limiting magnitudes of the SERC-J and ESO-R plates.  This was done
   by using a random number generator and omitting the star if the
   random number was less than 0.45.  That is to say, the southern
   over-density is not quite a factor of 2 more objects per unit area
   than found from the northern surveys.  Again, all objects are in
   USNO-A2.0 and the over-density was removed to make the pretty pictures.
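
   A minimal sketch of the thinning used only for the pictures (the
   catalog itself keeps every object):

      import random

      def keep_for_picture(is_southern_survey, p_drop=0.45):
          """Return True if this star should be plotted in the all-sky image."""
          if is_southern_survey and random.random() < p_drop:
              return False
          return True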

11.  SOURCE CODE

   As with USNO-A1.0, we have published the source code for all computations
   and for all calibration.  The compilation code is in ALPHA13.TAR in
   the directories ./newbin/procN.  The code for the numerical refocus
   is in NEWBIN.TAR ./newbin/newz0 and for the fixed pattern removal
   in ./newbin/tycho2xtaff.

   The code is published as a service to those who wish to understand
   USNO-A2.0 and not so that we can be ripped off.  Please respect the
   intellectual property rights contained in the source code, and
   do not make us wake up the lawyers.

Enjoy!  If you use USNO-A2.0 for neat stuff, drop me an e-mail.
-Dave


2  Read.pmm (USNO-A1.0 document)

The following text is copied from the UJ1.0 CD-ROM and gives an overview
of the PMM and its programs.  In an attempt to satisfy the serious user
of this catalog, the source code for the PMM is found in the sg1.tar file
somewhere in this CD-ROM set.  This file contains the source code for
all pieces of the executable image as well as the key data files used
to calibrate various pieces of the PMM.  References to code in this section
point to files in sg1.tar.

    Dave Monet is 

The Precision Measuring Machine (PMM) was designed to digitize and reduce
large quantities of photographic data.  It differs from previous designs
in the manner by which the plates are digitized and in that it reduces the
pixel data to produce a catalog in real time.  This section gives an
introduction to the design, hardware, and software of the PMM.  For those
wishing to pursue issues in greater detail, the software used to control
the PMM may be found in the directory exec/c24, and all software used to
acquire and process the image data is found in the other directories
under exec/ (processing begins with exec/misc/f_parse).

High-speed photographic plate digitization has been accomplished using
three different approaches.  Many machines (APS, APM, PDS, etc.) have
a single illumination beam and a single channel detector.  This approach
can offer extremely accurate microdensitometry at slow scanning speeds
(PDS) and has been used by intermediate-speed machines (APS, APM, etc.)
that have produced many useful scientific results.  The second approach is
to use a 1-dimensional array sensor, such as the SuperCOSMOS design.  These
offer much higher scanning rates but suffer from more scattered light than
true microphotometers.  The third approach (PMM) is to use a 2-dimensional
array sensor, such as a CCD.  This offers yet higher throughput at the
expense of more scattered light.  The 1-D and 2-D designs are new enough
that detailed comparisons with single pixel designs have yet to be done.

Of the three designs, only the 2-dimensional array design separates image
acquisition from mechanical motion.  In this approach, the platen is stepped
and stopped, its position is accurately measured, and then a CCD camera takes
a picture of a region of the photographic material.  In this manner, the
transmitted light (plates and films) or reflected light (prints) is digitized
and sent to a computer for processing and analysis.  The mechanical system
is not required to move the platen in a precise direction or speed while
image data are being taken.  Therefore, the mechanical system is much easier
to build and keep operational, and platen sizes can be much larger (a feature
needed to minimize the thermal and mechanical impact of replacing the
photographic materials).


A.  Hardware

The PMM design is conceptually simple.  The mechanical system executes a
step and stop cycle, and then reports its position to the host computer.
A CCD camera takes an exposure of the "footprint" in its field of view,
and the signal is then read, digitized, and passed to the host computer.
Once the image is in the computer, the mechanical motion may be started
and image processing and mechanical motion can occur simultaneously.
In practice, the PMM design is a bit more complicated because it has
two parallel channels for yet higher throughput.  The various subsystems
are the following.

	a) The mechanical system was manufactured by the Anorad Corporation
	of Hauppauge NY to specifications drawn up by USNOFS astronomers
	and their consultant William van Altena.  Its features include
	the following:

		i) 30x40-inch useful measuring area.
		ii) granite components for stability.
		iii) air bearings for removal of friction.
		iv) XY stage position sensed by laser interferometers.
		v) Z and A platforms for above/below stage instruments.
		vi) ball screw motion in X at 4 inch/second maximum speed.
		vii) brushless DC motors in Y at 2 inch/second maximum speed.
		viii) computer control of all motions.
		ix) two laser micrometers mounted on the Z stage to measure
		    distance to photographic materials.
		x) two CCD cameras (discussed below).

	In addition, it has a single channel microphotometer system built
	by Perkin Elmer, but that system is not used for POSS plates.
	It is controlled by a dedicated PC that communicates to
	the outside world by an RS-232 interface.

	The PMM is housed in a Class 100,000 (nominal) clean room and
	the thermal control is a nominal plus or minus one degree
	Fahrenheit.  Actual performance is much better over the 80
	minutes needed to scan a pair of plates.  The temperature is
	usually stable to +/-0.2 degrees and short term tests show
	a repeatability of 0.2 microns over areas the size of POSS
	plates.  Thermal information is recorded during the scan
	and is part of the archive.

	b) The images are acquired and digitized by two CCD cameras made
	by the Kodak Remote Sensing Division (formerly Videk).  Each
	has a format of 1394x1037 and a useful area of 1312x1033 pixels.
	The pixels are squares of 6.8 microns on a side, have no dead
	space between pixels (100% fill factor), and there are no bad
	pixels in the array (Class 0).  A flash analog to digital
	converter is part of the camera, and the image is read and
	digitized with 8-bit resolution at a rate of 100 nanoseconds
	per pixel.

	Printing-Nikkor lenses of 95 millimeter focal length are used to
	focus the sensor on the photographic plate with a magnification
	of 2:1.  The resolution of these lenses exceeds 250 lines per
	millimeter and they have essentially zero geometric and chromatic
	distortion when used at 2:1.  The illuminator consists of a
	photometrically stabilized light source, a circular neutral
	density filter to compensate for the diffuse density of the
	plate, a fiber bundle, and a Köhler illuminator to minimize
	the diffuse component of illumination.  Each camera's light path is
	separate except for the single light source.

	c) Each camera has its own dedicated computer and related peripherals.
	The digital output of the camera is fed to a 100 megabit
	per second optical fiber for transmission to the computer room
	where a matching receiver converts it back into an 8-bit wide,
	10 megabyte per second parallel digital signal.  This signal is
	interfaced to a Silicon Graphics 4D/440S computer using an
	Ironics 3272 Data Transporter attached to the VME bus.  This
	system supports the synchronous transfer of 1.4 megabytes in
	0.14 seconds with an undetectably small error rate.

	The 4D/440S supports DMA from the VME bus into its main memory
	without an additional buffer.  Once in the computer, the PMM
	software (discussed below) does whatever is appropriate, and,
	if the user desires, the pixel data can be transmitted across
	a fiber optically linked SCSI bus to disk or tape drives located
	in the PMM room.  This is particularly convenient for the operator.

	d) A DEC MicroVAX-II computer acts as system synchronizer, and does
	little more than coordinate all steps in the motion and processing.
	This operation is not as trivial as it sounds.

	e) The user interacts with the PMM using any X-window terminal
	by logging into the MicroVAX and starting the control program.
	The control program logs into the Anorad PC and each of the
	processing computers across RS-232 (it is too old to have X).  These
	computers open X-windows on the user's terminal, and all interaction
	with them (including image display) avoids the MicroVAX.  A simple
	interpretive language was written for the MicroVAX, and plates
	are measured by executing sequences of commands.  Sequences
	may be found in exec/c24/seqNNN.pmm.  The sequence for measuring
	4 UJ plates is seq485.pmm.

B.  Plate Measuring Sequence

The sequence for measuring plates is designed to minimize human intervention.
Each of the two platens holds four POSS plates.  While one is being measured,
the other is loaded so that the plates can come to thermal equilibrium.
The measurement sequence consists of the following phases.

	a) The camera is positioned over the middle of the plate and the
	neutral density filter is set to maximum (D=3.0).  A sequence of
	fixed length exposures is made as the density is reduced, and
	the optimum value for the exposure is found.  Due to limitations
	of the camera interface, the exposure time has a granularity of
	one millisecond and must be in the range of 2 to 127 milliseconds.
	Once the optimum neutral density is found, it is kept at that value
	for the entire plate.  Changes in diffuse density are followed by
	changing the exposure time.

	b) The Z-stage is fixed at a nominal value, and the plate pair
	(1/2 or 3/4) is scanned to obtain the distance between the Z
	laser micrometers and the surface of the plate.  The XY stage is
	positioned at a Y value that will later be used for the digitization
	footprints, and then driven at high speed in X.  As the stage moves,
	the micrometer and the Anorad PC are sampled to determine the
	Z distance as a function of X position.  This procedure is repeated
	for the sequence of Y positions, and the 2-dimensional map of Z
	distance is obtained.

	c) The camera is positioned over the middle of the plate, and
	a sequence of exposures is taken at different values of the Z
	coordinate.  For each exposure, a measure of the sky granularity is
	computed, and interpolation is used to find the Z coordinate that
	maximizes the granularity.  This establishes the "best" focus in
	an impersonal manner, and it appears to be stable to plus or
	minus 50 microns in Z.

	d) A sequence of frames is taken of the central area of the plate
	with increments of 1.0 millimeter between each, and the standard
	star finding and centering algorithm is run on each frame.
	After all frames have been taken, the nominal value of the plate
	scale is used to identify unique stars seen on each of several
	frames.  Once the set of measures is isolated, software computes
	a revised estimate of the plate scale.  This revised estimate
	can be considered to measure the difference between the layer of
	the emulsion that reflected the laser micrometer beam on the
	reference plate and on the current plate.

When the plate is scanned, the Z stage is driven to the position appropriate
for each footprint, which is the sum of the "best focus" plus the difference
between the current location and the central location as determined by the
laser micrometers.  After the positions have been measured, a linear
expansion is applied to the pixel coordinates for each star to remove the
difference in the (observed-nominal) plate scale.  At first glance, this
algorithm seems quite complicated, but determination of the plate scale
is critical to the astrometric integrity of the PMM.  To measure to 0.1
arcsecond, the scale must be known to 0.008%, and the following effects
contribute to uncertainties in the plate scale.

	a) No technology better than a laser micrometer was found to
	measure the distance between the Z stage and the plate.  Unfortunately,
	the laser is somewhat sensitive to the reflectance of the surface,
	and the range of diffuse densities encountered during the scanning
	of the UJ plate of about 0.1 to 2.5 causes an uncertainty of where
	the micrometer is measuring.  The only competing technology,
	touch probes, was considered too risky for use with original POSS-I
	and -II plates.

	b) The POSS plates are not flat, and no reasonable plate hold-down
	mechanism was proposed.  This problem is a minor annoyance for
	UJ and POSS-II plates because the typical +/- 200 microns could
	be removed by software.  Unfortunately, the +/-1 millimeter or more
	seen on the POSS-I plates causes the images to be out of focus,
	and a surface following algorithm is required.

Unfortunately, the elaborate focus and scale determination routines developed
to measure POSS-I and POSS-II plates were unreliable for measuring the
UJ plates.  Many UJ plates had diffuse densities so low that the sky and
the noise in the sky were extremely difficult to measure.  To the human
observers, these plates seem as clear as window glass.  Since the UJ exposures
were only 3 minutes, many plates had so few stars in a single footprint
that the scale determination routine got lost.  In either case, the error
induced by a lost algorithm was much larger than the error from simply
measuring the focus on a good plate and using that value for the UJ plate.
The latter was done, so the list in the preceding paragraph must be extended.

	c) The difference in focus between the current plate and that used
	to determine the CCD camera scale is not known.  Note that the PMM
	should follow the current plate properly since that measurement
	is only the difference between the local and central value
	determined by the laser micrometer.  What is not (or only poorly)
	known is the offset at the central location.

C.  Image Analysis Algorithms

The mechanical and camera systems serve only one purpose: to deliver image
data to the processing computers.  The major precept of the PMM design is to
do all image processing and analysis in real time.  It was true when the PMM
was designed, and is still true, that it takes much longer to read or write
an image to storage devices (particularly those for archival storage) than
it does to extract the desired information.  Indeed, the original PMM design
had no mechanism for saving the pixels.  A substantial amount of thought
and work has gone into the design of the image processing algorithms.  This
section gives an overview of the code, and the serious reader is encouraged
to read the source code (located in exec/ and its subdirectories).

When the MicroVAX notifies the computer that the mechanical motion has been
completed, the computer commands one or more exposures to be taken.  The
code is written to take 1, 2, 3, 4, or 8 exposures depending on the value
of GRABNORM.  The routine exec/misc/f_autoexp is used most often because
it takes the exposures, evaluates the sky background, and will re-take
an exposure with a modified exposure time if certain limits are exceeded.
Since the background is variable, this type of autoexposure routine is
necessary.  Note that it does not vary the setting of the neutral density
filter used to illuminate the plate, so it has a limited range over
which it can modify the exposure.

Another problem related to taking an exposure is the presence of holes, tears,
and the area around the sensitometer spots.  Typically, the POSS plate sky
background has a diffuse density larger than two, but where the emulsion is
absent or hidden from the sky, the density can be very close to zero.
These regions cause gross saturation of the CCD camera, and its behavior
becomes extremely non-linear, even to the point of having decreasing
signal level with increased exposure.  To avoid this, the routine
exec/misc/f_toasted takes a very short exposure to test for this condition
before the normal exposure sequence is started.

Flat field processing is done in the traditional manner, using bias and
flat frames taken under controlled circumstances.  The CCD cameras are
quite linear and uniform, and the flat field processing does little more
than take out the non-uniformities in the illuminator.  Pixel data are
converted from unsigned bytes into floating point numbers during the
flat field processing, and all steps in the image analysis and reduction
software are applicable to non-photographic data.

The image processing is divided into a hierarchy based on accuracy,
and there are three levels.  The first, called the blob finder, is charged
with finding areas that need further processing, and doing this with a
relatively coarse accuracy of +/-1 pixel.  The second is invoked to refine
this guess to an accuracy of +/- 0.2 pixel and to provide improved estimators
for the object's size and brightness.  The third step is non-linear least
squares processing, which produces the accurate estimators for image
position, moments, and other image parameters.  Each is discussed in
greater detail in the following paragraphs.

        a) Blob finding:  Many different algorithms have been proposed to
	find blobs in an image.  (I prefer to use "blob" instead of "star"
	since we do not know in advance what sort of an object we have
	found.)  The PMM algorithm was designed for very high speed.
	It is based on the concept that finding an image requires neither
	the spatial resolution nor the intensity resolution required to
	measure accurate image parameters.  The first step of the blob
	finder is to block average the input image by a factor PARMAGNIFY
	which can take on values of 1, 2, or 4, but all experience indicates
	that 4 is acceptable for PMM processing of POSS plates.  (The driver
	for this processing is in exec/pfa124subs/bmark2_N.f where N
	takes on the values of 1, 2, or 4.)  The larger the value of PARMAGNIFY,
	the faster the blob finder will operate.

	With PARMAGNIFY determined, the block average TINY image is computed
	and then subjected to a median filter to produce the SKY image
	of similar size.  The histogram processing of the sky image determines
	the dispersion of the sky, a scalar that will be applied to the
	whole image.  Then, the sky image and the sky dispersion are used
	to generate the DN1P image, an image whose pixel values are 1
	if the TINY image was greater or equal to the SKY pixel plus
	PARSIGMA times the sky dispersion, or 0 if not.  If the DN1P pixel
	is set to 1, the corresponding SKY pixel is set to zero indicating
	that it should not be used to compute local sky values.

	Another picture of reduced size is computed as well.  The
	DN2P pixel is set to 1 if the TINY pixel is greater or equal
	to PARSAT, a number that represents the level at which an image
	is considered to be saturated.  In practice, the number is about
	230 instead of the maximum possible value of 255 that comes from
	the camera A/D converter.

	The logic behind the TINY, SKY, DN1P, and DN2P is the following.
	Most computers take many cycles to compute an IF statement, and
	these tend to negate look-ahead logic needed to make software
	execute quickly.  By making images whose values are 0 or 1,
	additions and multiplications can replace many IF statements,
	and thereby increase the speed of the code.  Our experience is
	that automatic blob finding is very expensive (slow) because of
	the complexity of the algorithm, and our efforts to run it in
	parallel mode were unsuccessful.  Hence, optimization was needed
	in this part of the code to keep its bandwidth high.

	Given TINY, SKY, DN1P, and DN2P, blob finding can begin.  The
	algorithm is based on the concept that we wish to find
	isolated, mostly circular objects.  The algorithm considers a
	circular aperture and computes the area and perimeter based on
	the pixel values in either the DN1P or DN2P image.  The area is
	the number of pixels that meet or exceed the detection
	criterion inside of the aperture, and the perimeter is the
	number of such pixels that cross from inside the aperture to
	outside the aperture.  A detection is triggered when the area
	has a non-zero value and the perimeter is zero.  This means
	that a blob has been isolated.  Once a blob has been detected,
	its location and coarse magnitude are tallied and the pixels in
	DN1P or DN2P are set to zero so that it will not be detected
	again.

	This algorithm can be expedited in a variety of ways.  First,
	the central pixel is tested to see if it is one.  If not, the
	aperture is moved to the next pixel.  This test corresponds to
	the assertion that the night sky is dark, and that a substantial
	number of pixels will fail the detection threshold test.
	Next, explicit logic tests for small blobs.  The logic contained
	in exec/blob/find124_N tests for all radius one and two pixel events,
	and special cases of 4 pixel events.  The routine exec/blob/find3_N
	tests for all possible 3 pixel events.  These cases are worth
	the effort because the apparent stellar luminosity function
	tells us that the vast majority of stars in the catalog will be
	faint (small), and that the processing for small blobs needs
	to be optimized.

	The processing is completed by examining the DN1P or DN2P image
	with progressively larger apertures, until all blobs are
	found or until an unreasonably large aperture is needed, which is
	an indicator either that a very bright object is in the field or
	there is something wrong with the image.  In all cases, blob
	finding has been completed.

	As the blobs are detected, the routine exec/blob/plproc_N attempts
	to divide the blob into sub-blobs if required.  This is not a
	true deconvolution because we have transmission and not intensity.
	This routine is intended to separate almost distinct blobs found
	in the outskirts of other blobs, and does not do a good job
	splitting close double stars.  For the parameters used in UJ1.0,
	the splitter is far too aggressive and tends to break up well
	resolved objects into a series of distinct blobs.  This is an
	area for algorithm development before beginning the scans of the
	deep Survey plates.

Once the list of blobs has been assembled, the TINY, SKY, DN1P, and DN2P
are no longer used.  All further processing refers to the full resolution
DATA image.  In addition, the code shifts from scalar to parallel operation
because it can consider each blob as a separate entity.  Silicon Graphics
implements parallel processing with the DOACROSS compiler directive
for the pfa (Power FORTRAN Accelerator) compiler.  Its function is to
assign the next step of the DO statement to the next available CPU.
This algorithm is quite effective for processing stars because it means
that a big, complicated star will occupy one CPU for a while, but the
other CPUs can continue processing other stars.  Efficiencies between
3.5 and 3.8 were seen on the 4 CPU 4D/440S computer.

	b,c) Coarse and fine analysis are carried out sequentially by
	exec/fsubs/multiproc.  The first step is done by exec/fsubs/proccenscan
	which examines the blob along 8 rays and determines the size and
	center of the blob.  Then, the blob is fit by a circularly
	symmetric function by the routine exec/fsubs/marg and then various
	other image description parameters (moments, gradient, lumpy,
	etc.) are computed and packed into integers.

	The function selected was B + A/(EXP(z)+1) where
	z = c*((x-x0)**2 + (y-y0)**2 - r0**2).  (Perhaps this is more
	familiar when called the Fermi-Dirac distribution function.)
	Because the PMM uses transmitted light,  faint images look
	something like a Gaussian, but bright images have flat tops because
	they are saturated.  Hence, the desired fitting function needs to
	transition between these two extremes in a smooth manner.  A large
	number of numerical experiments were made, and they can be
	summarized by the following points.

		i) The production PMM code takes the sky value from
		the median SKY image rather than letting it be a free
		parameter in the fitting function.  The failure mode
		for many normal and weird objects was found to be
		an unreasonably large value for the sky and a correspondingly
		tiny value for the amplitude.  Fixing the sky forces the
		function to fit the image, and this is much more robust than
		letting the sky be a fit parameter.

		ii) Allowing the function to have different scale lengths
		in X and Y was found to be numerically unstable for too
		many stars.  With 6 free parameters in the exponent, chi
		squared can be minimized by peculiar and bizarre combinations
		that bear little resemblance to physical objects.

		iii) Iteration could be terminated after 3 cycles without
		serious damage.  If the object could be fit by the function,
		convergence is rapid and the parameter estimators at the
		end of the 3rd iteration were arbitrarily close to those
		obtained after many more iterations.  If the object could not
		be fit by the function, the parameters obtained after 3
		iterations were just as weird as those obtained with
		more iterations.

		iv) The best image analysis debugging tool was to subtract
		the fit from the DATA image and display the residuals
		as the PMM is scanning.  This allows the human observer
		to get a good understanding of the types of images that
		are processed correctly, and where the analysis algorithm
		fails.  This mode of operation is not possible on plate
		measuring machines that do not fit the pixel data.

	Therefore, a 5 parameter, circularly symmetric, fixed sky function
	was fit to all detections (a minimal sketch of such a fit appears
	at the end of this section), and the position determined by this
	function is reported as the position of the object.

	Since most other high speed photographic plate measuring machines
	compute image moments, the PMM computes these as well.  Our
	experience is that the image moments are less useful for star/galaxy
	separation than quantities obtained from least squares fitting,
	and the positions determined from the first image moments are
	distinctly less accurate than those determined by the fit.
	In addition, the image gradient, effective size, and a lumpiness
	parameter are also computed since these may assist in star/galaxy
	separation.  All parameters are packed into 13 integers by the
	routine exec/fsubs/marg, and that code should be consulted for
	information concerning the proper decoding of these values.
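
The following is a minimal sketch, using scipy in place of the production
FORTRAN, of the profile fit just described: the circularly symmetric
Fermi-Dirac function B + A/(EXP(z)+1) with
z = c*((x-x0)**2 + (y-y0)**2 - r0**2), the sky level B held fixed from the
median SKY image, and five free parameters (A, c, x0, y0, r0).  The
function and array names are illustrative only.

    import numpy as np
    from scipy.optimize import least_squares

    def fit_blob(data, sky, x0_guess, y0_guess):
        """data: 2-D pixel cutout around the blob; sky: fixed local sky."""
        yy, xx = np.indices(data.shape)

        def model_residuals(p):
            amp, c, x0, y0, r0 = p
            z = c * ((xx - x0)**2 + (yy - y0)**2 - r0**2)
            z = np.clip(z, -50.0, 50.0)          # avoid overflow in exp()
            model = sky + amp / (np.exp(z) + 1.0)
            return (model - data).ravel()

        p0 = [data.max() - sky, 0.5, x0_guess, y0_guess, 2.0]   # rough start
        fit = least_squares(model_residuals, p0)
        return fit.x   # (A, c, x0, y0, r0); (x0, y0) is the reported position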

D. Catalog Products

The distribution of PMM data should begin and end with the distribution of
the raw catalog files.  Unfortunately, cheap recording media are incompatible
with the bulk.  So far, over 440 CD-ROMs are needed to store these data, and
the scanning is not yet done.  Perhaps the digital video disk will make this
problem go away.  Until then, the PMM program will attempt to generate
useful catalogs that contain subsets of the parent database.

   USNO-A:

     These catalogs are intended to be used for astrometric reference.  They
     contain only the position and brightness of objects, and ignore such
     useful parameters as proper motion and star/galaxy classification.  These
     are objects that measured well enough on each of two plates to pass the
     spatial correlation test based on a 2-arcsecond entrance aperture.

     V1.0 contains RA and Dec, and takes its astrometric calibration from
     GSC1.1 and its photometric calibration from the Tycho Input Catalog and
     from USNO CCD photometry.

     V1.1 is derived from V1.0 by using SLALIB to transform RA/Dec to
     Galactic L/B.  The catalog is arranged in zones of B and is sorted on L.
     Because of intermediate storage requirements, the lookup tables between
     V1.1 and the GSC will not be computed.

     V2.0 is planned for late summer of 1997 after ESA releases the Hipparcos
     and Tycho catalogs.  The astrometric calibration will be made with
     respect to Tycho, and Tycho will be used to calibrate the bright end
     of the photometry.  Should STScI release GSPC-II (or significant chunks
     of it), this improved photometric calibration will be included, too.

   USNO-B:

     This catalog will extend USNO-A in several key areas.  It will contain
     star/galaxy separation information and will contain proper motions.
     Note that these quantities will be computed from J/F plate data, so
     USNO-B will be incomplete in the north according to the production
     schedule of POSS-II, and proper motions will be impossible south of
     -42 due to missing second epoch survey data.  Proper motions in the
     -36 and -42 zones can be computed from the Palomar Whiteoak extension.
     In addition, the plan is to use spatial coincidence data from the
     O+J and E+F survey comparisons to supplement the O+E requirement
     needed by USNO-A.  Hence, there should be many more entries, and the
     limiting magnitude for objects with peculiar colors will be much deeper.

   UJ1.3 and beyond:

      The UJ plates (3-minute IIIa-J on POSS-II field centers) provide a
      useful set of astrometric standards at intermediate brightnesses.
      To the extent possible, UJ will be kept current and made available
      to those who request it.

   Pixels:

       The PMM pixel database is approaching 5 TBytes.  Each of the PMM
       detections contains a pointer back to the frame and position of
       the pixel that triggered the detection loop.  Current USNO policy
       is to release the pixel database as soon as there is a reasonable
       way to do so.  Users with a particular urgency can contact Dave
       Monet and make a special request for access, but the logistics of
       searching and retrieving a specific frame from the archive on 8-mm
       tape will preclude all but the most important requests.

3  Read.ast (USNO-A1.0 document)

This is READ.AST, the file with the discussion of the astrometric calibration
of USNO-A.  Please refer to READ.ME for an introduction to the catalog.

Summary:

   The astrometric calibration of USNO-A is based on the Space Telescope
   Science Institute's Guide Star Catalog version 1.1, hereinafter GSC.
   This is a temporary calibration, and it will be replaced with a
   calibration to the European Space Agency's Hipparcos and Tycho catalogs
   as soon as they become available (current estimate is June 1997).
   We believe that a typical astrometric error is about 0.25 arcseconds,
   but for stars a few magnitudes brighter than the plate limit and away
   from the corners, the error may be as small as 0.15 arcseconds.
   Coordinates are computed in the system of J2000 at the epoch of the
   survey blue plate.  Proper motions were neither computed for nor
   applied to the coordinates in this catalog.

   Whenever possible, we have adopted Pat Wallace's SLALIB for computing
   quantities associated with position and angle.  Details about these
   routines and permission to use them should be obtained from the
   author at 

Source Code:

     binary/acrs - projection of ACRS to survey plate coordinates
     binary/ppm  -    "       "  PPM  "    "     "       "   "
     binary/gscgen -  "       "  GSC  "    "     "       "   "
     newbin/tychogen  "       "  Tycho Input Catalog "  " "  " "
     binary/gsctaff -  Taff-o-grams for various surveys
     binary/autogo - fit POSS-I O to projected GSC
     binary/autoge -  "  POSS-I E "   "   "     "
     binary/autogb -  "  SRC-J    "   "   "     "
     binary/autogr -  "  ESO-R    "   "   "     "
     catalog.tar - electronic version of the various plate logs
     binary/ugapX - the various routines that make the catalog

Strategy:

     Using the reference catalog (GSC1.1) and the information contained
     in the plate log (possi.cat and south.cat in catalog.tar), SLALIB
     is used to compute the observed place for each catalog star.
     The PMM coordinates are corrected for the nominal cubic distortion
     of the Schmidt telescope (using SLALIB's SLA_PCD, etc.) and
     compared to the projected catalog.  A best fit using up to cubic
     terms is computed and the residuals are saved.  After doing this for
     a significant number of plates, the residuals are binned according
     to their location on the plate, and an approximation for the
     systematic field distortion of the Schmidt telescope is determined.
     (These are called Taff-o-grams in the code in recognition of Larry
     Taff's demonstration of their significance.)  The fitting procedure
     is repeated, this time including the systematic field distortion
     map, and this fit is adopted for the generation of the catalog.
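
     The per-plate solution can be sketched as a pair of linear
     least-squares fits of full cubic polynomials in (x, y), mapping the
     corrected PMM measures onto the projected catalog positions.  The
     sketch below is illustrative only; the production code is in
     binary/autogo and its siblings.

        import numpy as np

        def cubic_terms(x, y):
            """All monomials x**i * y**j with i + j <= 3 (10 terms)."""
            return np.column_stack([x**i * y**j
                                    for i in range(4)
                                    for j in range(4 - i)])

        def fit_plate(x_pmm, y_pmm, xi_cat, eta_cat):
            """Cubic mapping from pin-cushion-corrected PMM measures to
            projected catalog (tangent-plane) coordinates; returns the
            coefficients for each axis and the fit residuals."""
            A = cubic_terms(np.asarray(x_pmm, float), np.asarray(y_pmm, float))
            cx, *_ = np.linalg.lstsq(A, xi_cat, rcond=None)
            cy, *_ = np.linalg.lstsq(A, eta_cat, rcond=None)
            return cx, cy, xi_cat - A @ cx, eta_cat - A @ cy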

The Individual Plate Solutions:

     For a particular field, the plate log was consulted to get the
     various parameters (date, time, emulsion, etc.) for the plate.
     Unfortunately, there were a substantial number of typographical
     errors in the original versions of these logs, and every effort
     has been made to track down these errors and correct them.  We
     believe that the versions contained in this CD-ROM set are more
     accurate than the ones we started with, and all of the errors
     that we could fix have been fixed.  With the exposure data,
     SLALIB is used to compute the best estimator of where the stars
     should be found.  In order, we used SLA_MAPQK, SLA_AOPQK, and
     SLA_DS2TP to go from catalog to apparent to observed to tangent
     plane coordinates.

     The PMM produces coordinates for each detection in integer hundredths
     of a micron on its focal plane.  Actually, there is a systematic problem
     in the introduction of temperature and pressure into the PMM logic,
     and its version of a micron can be off by as much as one part in
     10^5, but they are sufficiently close to microns for this discussion.
     The coordinates have had the individual platen zero points subtracted,
     and the nominal center of each plate appears at approximately (170,175)
     millimeters.  SLALIB provides a utility for removing the nominal
     pin cushion distortion of a Schmidt telescope, and this correction
     is applied to the raw PMM coordinates.

     With the exception of systematic astrometric errors in the Schmidt
     telescope, the projected catalog and undistorted PMM coordinates
     ought to agree with each other.  The mapping is done using cubic
     polynomials in X and Y, although linear terms are sufficient except
     when doing the full-plate solution.  No sub-plate solutions are used:
     a single fit in X and Y is used to describe the whole plate.  These
     solutions are saved as are the residuals computed for each match between
     the PMM and the reference catalog, and this process is repeated for
     every survey plate.

     When many solutions are available, the residuals are combined
     according to the position of the object on the plate by the
     code in binary/gsctaff.  For USNO-A, a mean distortion pattern
     was computed for each of the three Schmidt telescopes involved.
     However, it is clear from examination of subsets of the data that
     there are significant differences in the shapes of the distortion pattern
     as a function of zenith distance (actually declination but most survey
     plates were taken near enough to the meridian).  In future releases,
     we intend to use zonal versions of this correction.  The residuals
     are binned in a 32x32 grid, and a 2-dimensional smoothing spline is
     used to expand this to a 65x65 grid.  This corresponds to boxes
     about 5 millimeters in size on the plate.

     With the systematic correction determined, the astrometric solution
     is repeated using the same catalog projection but adding the systematic
     correction removal to the pin cushion distortion removal in the
     pre-processing of PMM coordinates before fitting.  Again, a single
     cubic fit in each coordinate is used to describe the entire plate.

Assembling the Catalog:

     Two separate astrometric fits go into each field.  First, the red
     plate is mapped on to the blue plate, and then the blue plate is
     mapped on to the reference catalog.  The code is complicated only
     because of the large number of detections in each field, and the
     importance of applying each fit in the proper order.  This process
     is done in binary/ugap012, and extra software is inserted to verify
     that each step worked properly.  The output of ugap012 is a set of
     rings on the sky that follow from the surveys being taken in rings
     of declination.  Because of the relatively slow response of our
     CD-ROM jukebox that stores the raw catalogs, it takes about a week
     to do this phase of the preparation of USNO-A.

     The rings of various declinations are merged into zones of constant
     width by the code in binary/ugap3.  The zones are examined for
     duplicate detections by the code in binary/ugap4.  This program
     makes a list of all entries to be removed (the TAGs) and saves
     multiple observations of the same object in the sameXXXX.dat file
     for the photometric calibration.  The important routine in ugap4
     is nodup.f which finds the multiple detections.  For USNO-A, the
     radius was taken to be 1 arcsecond.  In the polar regions, the
     xynodup.f routine is used and the double detections are removed in
     coordinates on the tangent plane, and a radius of 15 microns was used.
     Finally, the code in binary/ugap5 removes the TAGged entries and
     produces the final catalog.  This catalog incorporates the astrometric
     calibration, but not the photometric calibration.  Routines to
     check each step appear in binary/ugap3x, binary/ugap4x, and
     binary/ugap5x.  A powerful debugging tool is plotting the entire
     sky because the eye is very sensitive to systematic errors at plate
     boundaries, etc.
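
     A minimal, brute-force sketch of the duplicate search done by nodup.f
     (the production code works zone by zone on sorted entries): any two
     entries within 1 arcsecond are treated as the same object, and all
     but the first are TAGged for removal.  Names are illustrative only.

        import numpy as np

        def find_duplicates(ra_deg, dec_deg, radius_arcsec=1.0):
            """Return the indices of entries to TAG for removal."""
            ra = np.radians(np.asarray(ra_deg, float))
            dec = np.radians(np.asarray(dec_deg, float))
            r = np.radians(radius_arcsec / 3600.0)
            tagged = set()
            for i in range(len(ra)):
                if i in tagged:
                    continue
                # small-angle separation, with the cos(dec) factor on RA
                dra = (ra - ra[i]) * np.cos(dec[i])
                ddec = dec - dec[i]
                close = np.nonzero(np.hypot(dra, ddec) < r)[0]
                tagged.update(int(j) for j in close if j > i)
            return sorted(tagged)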

     Finally, the code in binary/ugap7 applies the photometric calibration,
     and the code in binary/ugap8 projects the catalog in Galactic
     coordinates.  The partition of the catalog files on the various
     CD-ROMs is done in binary/ugap6.

4  Read.pht (USNO-A1.0 document)

This is READ.PHT, the file with a discussion of the photometric calibration
of USNO-A.  Please refer to READ.ME for an introduction to the catalog.

Summary:

   The photometric calibration of USNO-A1.0 is about as poor as one can
   have and still claim that the magnitudes mean something.  The calibration
   process is dominated by the lack of public domain photometric databases.
   In particular, this calibration was done without the final Hipparcos
   and Tycho catalogs, and without the Guide Star Photometric Catalog II.
   We have done the best job we could with the available data, and will
   recalibrate the catalog when significant databases become available.
   We believe that the internal magnitude estimators for stars are probably
   accurate to something like 0.15 magnitudes over the range of 12th to 19th magnitude,
   but that the systematic error arising from the plate-to-plate differences
   is at least 0.25 magnitudes in the North and perhaps as large as
   0.5 magnitudes in the South.  Users who are able to locally recalibrate
   USNO-A photometry are encouraged to do so since that will remove the
   systematic errors and leave only the measuring error.

Source Code:

   Useful places to look for pieces of the calibration are the following:

       newbin/piphot - generation of the USNO CCD parallax program magnitudes
       newbin/reversion - mapping the parallax program to individual plates
       newbin/bc1 - mostly obsolete with the exception of generating a
                    couple of input files for bc2
       newbin/bc2 - calibration of the northern sky
       newbin/bc3 - calibration of the southern sky
       binary/ugap4 - find multiple detections of the same object
       binary/ugap7 - apply the calibration to the raw catalog

Strategy:

   The calibration of USNO-A is divided into the calibration of the northern
   sky and then the calibration of the southern sky.  In each case, the
   first step was to compute the plate-to-plate offsets and convert the
   magnitudes from a specific plate into a system that was valid for all
   plates (called the meta-magnitude system).  The second step was to
   compute the transformation from the meta-magnitude system to pseudo-
   photographic magnitudes computed from CCD photometry and the Tycho
   Input Catalog.

The Northern Calibration:

   Removal of the plate-to-plate differences begins with examination
   of the list of all objects found by the code in ugap4 to be multiple
   detections of the same object.  For details, refer to the code, but it
   is sufficient to summarize this process as finding all objects that
   fall within a 1-arcsecond radius of another object.  All objects
   inside this radius were considered to be the same object, and the code
   in ugap4 selects one for the catalog and saves all objects in the
   SAMExxxx.dat file.  Code in bc1 looks at the SAMExxxx.dat files and
   computes the list of plates that overlap other plates and makes
   intermediate files of all stars that overlap a specific plate.
   Code in bc2 (parfit.f) then iterates a solution that starts at a zero
   offset (constant) or a zero offset and unit slope (linear) for each
   plate and computes the best fit for that plate to all of its neighbors.
   At the end of each iteration, all solutions are updated before the
   start of the next iteration.  Typically, the solution is very close
   to the final value after about 5 iterations, but it was allowed to
   run for 17 iterations so that a stable solution was found for all plates.
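
   A hedged sketch of such an iteration is given below.  It is not
   parfit.f: only the constant-offset case is shown, the `overlaps`
   data structure and all names are assumptions, and the overall zero
   point of the offsets (which the real solution must fix somehow) is
   left undetermined.

      import numpy as np

      def solve_offsets(overlaps, plates, n_iter=17):
          """Iterate constant plate-to-plate magnitude offsets.
          `overlaps` maps (plate_a, plate_b) -> (mag_a, mag_b) arrays of
          the same stars measured on both plates."""
          off = {p: 0.0 for p in plates}
          for _ in range(n_iter):
              new = {}
              for p in plates:
                  diffs = []
                  for (a, b), (ma, mb) in overlaps.items():
                      if a == p:
                          diffs.append((mb + off[b]) - ma)
                      elif b == p:
                          diffs.append((ma + off[a]) - mb)
                  # best constant fit of this plate to all its neighbours
                  new[p] = float(np.mean(np.concatenate(diffs))) if diffs else off[p]
              off = new      # all solutions updated only between iterations
          return off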

   The original plan was to allow a linear solution for each plate, but
   after the difficulties encountered in the Southern solution, the solution
   was done allowing only a constant term.  Visual examination of the
   calibration showed that the two were essentially the same, so the constant
   solution was selected.  The plate-to-plate solutions are found in bc2/calcoef.XX
   files, where XX is the iteration number.  Removal of the plate-to-plate
   offset before application of a transformation between internal and
   external magnitude systems was far more stable than doing the solution
   after such a transformation.  The internal magnitude systems for
   each plate are surprisingly similar.

   Because of the lack of a suitable calibration database, we decided to
   use the B and V magnitudes from the Tycho Input Catalog to calibrate
   the bright end, and to use the V and I CCD photometry done at USNO on
   parallax fields for the faint end.  Henden supplied tables for computing
   the color corrections which he derived from numerical integrations of
   spectrophotometric data and filter response curves.  For Tycho data,
   only stars with B and V were accepted, and the Henden relationships
   were used to compute O(B,V), E(B,V), J(B,V), and F(B,V).  Examination
   of the residuals to the photometric solution indicates that there
   are significant color terms remaining: the O/J solutions show less
   dispersion than the E/F solutions.  To mitigate this problem, Tycho
   stars with B-V less than 0.5 or greater than 1.2 were ignored in the
   final solution.
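
   The selection and the use of the colour tables can be sketched as
   follows.  The Henden relationships themselves are tabulated in
   newbin/piphot and are not reproduced here; the functional form shown
   (photographic magnitude = B plus a colour correction interpolated in
   B-V) is an assumption for illustration only.

      import numpy as np

      def pseudo_photographic(B, V, table_bv, table_corr):
          """Assumed form: m_photographic = B + corr(B-V), with the
          correction interpolated from a (B-V, correction) table."""
          return B + np.interp(B - V, table_bv, table_corr)

      def select_tycho(B, V):
          """Keep only Tycho stars with 0.5 < B-V < 1.2; stars outside
          this range showed the largest remaining colour residuals."""
          bv = np.asarray(B) - np.asarray(V)
          return (bv > 0.5) & (bv < 1.2)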

   The USNO photometric database was complete for V and I, but many stars
   did not have B data.  Because of this, we decided to ignore the B data
   even when it was available, and to base the calibration on the V and I data
   alone.  Dahn supplied a relationship between V-R and V-I, and a crude
   calibration of B-V as a function of V-R was used.  These and the Henden
   tables can be found in newbin/piphot in the various .tbl files.
   Again, this calibration procedure left significant color terms.  The
   E/F calibration shows less dispersion than the O/J calibration.

   With the ensemble of pseudo-photographic magnitudes for standard stars,
   the relationship between the meta-magnitude and the standard magnitude
   systems was determined by newbin/tcapply.  The algorithm attempts to find a
   ridge line between the two systems, and then to fit a smoothing spline
   to it.  This solution is provided to the user (newbin/bc2/tcnodes.?)
   who can examine, correct, and extrapolate it as appropriate.  These
   new nodes (newbin/bc2/tcedit.?) are then fit with the smoothing spline
   and the final lookup tables (newbin/bc2/tclut.?) are produced.  Although
   the blue and red solutions are done by the same code, they are completely
   independent of each other.
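
   A sketch of one such solution (one colour) is given below.  It is not
   tcapply; the median-per-bin ridge estimator, the node spacing, and the
   smoothing factor are all illustrative assumptions, and the manual
   editing of the nodes described above is not shown.

      import numpy as np
      from scipy.interpolate import UnivariateSpline

      def ridge_lookup(meta, std, nodes=np.arange(10.0, 21.0, 0.5)):
          """Estimate a ridge line between meta-magnitudes and standard
          magnitudes, fit a smoothing spline, and tabulate it."""
          centres, ridge = [], []
          for lo, hi in zip(nodes[:-1], nodes[1:]):
              sel = (meta >= lo) & (meta < hi)
              if sel.any():
                  centres.append(0.5 * (lo + hi))
                  ridge.append(np.median(std[sel]))   # simple ridge estimate
          spl = UnivariateSpline(centres, ridge, s=1.0)
          grid = np.arange(nodes[0], nodes[-1], 0.01)
          return grid, spl(grid)   # lookup table: meta -> calibrated mag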

   It is possible for the PMM to produce magnitudes that don't make sense.
   In particular, the total flux can be zero or negative should the estimator
   of local sky contain some sort of contamination.  These fluxes are
   mapped into 50.0 for the case of zero flux, and 50.1 through 75.0 for
   the case of negative flux.  In the latter case, the flux is negated before
   taking the logarithm and 50 is added to the result.  These magnitudes
   are ignored during the calibration process and passed directly from
   the PMM to the final catalog.  At best, they serve as flags that something
   was wrong with a particular image.
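
   In code form the flagging convention reads roughly as below; whether
   the logarithm carries any additional scaling is not spelled out here,
   so the plain log10 is an assumption.

      import numpy as np

      def flag_magnitude(flux):
          """Encode non-physical PMM fluxes as flag magnitudes."""
          if flux > 0:
              return None              # a normal magnitude applies
          if flux == 0:
              return 50.0              # zero flux
          return 50.0 + np.log10(-flux)   # negative flux -> 50.1 .. 75.0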

The Southern Calibration:

   The first step of the southern calibration is the same as that for the
   northern calibration, the removal of the plate-to-plate offsets.
   This is done in newbin/bc3/parfit and makes files soucoef.XX in a
   manner very similar to the northern solution.  However, the first
   solution, which allowed a constant and a slope for each plate, was seen
   to grow quadratically for the red (F) solution but not for the blue (J)
   solution.  This was traced to a small but significant correlation between
   limiting magnitude and declination which drove the numerical instability.
   Solving for only a plate-to-plate offset showed the same instability.
   Therefore, an extra routine (newbin/bc3/damper.f) was inserted to
   remove this term after each iteration.  The blue solution with and
   without this term was examined and found to be essentially the same,
   so we have some confidence that the red solution is reasonable, too.
   The source of this correlation is unknown, and should disappear with
   the inclusion of more calibration data.
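
   As a rough illustration of the damping step (the details of damper.f
   are not given here, so the assumption that the troublesome term is a
   linear trend of the offsets with declination is ours):

      import numpy as np

      def damp(offsets, plate_dec):
          """Remove a linear trend of the per-plate offsets with
          declination after each iteration (the role of bc3/damper.f,
          under the stated assumption)."""
          slope, intercept = np.polyfit(plate_dec, offsets, 1)
          return np.asarray(offsets) - (slope * np.asarray(plate_dec) + intercept)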

   The calibration of the meta-magnitude system in the southern solution
   was made more difficult because there are no USNO parallax fields
   south of -20.  Instead, a boundary condition that the southern and
   northern solutions should agree in the -30 degree zone was used.
   The list of same stars found by binary/ugap4 was used to identify those
   objects with northern and southern magnitudes, and the calibrated
   northern magnitudes were combined with the Tycho Input Catalog pseudo-
   J and F magnitudes to provide the calibrators for the southern
   meta-magnitude system.  Because of all of the difficulties associated
   with the apparently incomplete removal of color terms based on broad
   band photometric indices, the decision was made to ignore differences
   between J and O, and F and E.  This is a crude approximation, but one
   that was forced by the lack of appropriate calibration databases.
   As with the north, the calibration of the meta-magnitude system starts
   with nodes computed from a ridge line, and ends with a lookup table
   computed from nodes supplied by the user.  The code is in newbin/bc3
   and is nsapply.f, nsnodes.?, nsedit.?, and nslut.? in a manner similar
   to the northern solution.

Other Matters:

   The Schmidt telescope vignetting function was ignored.  Indeed, there
   are three such functions, but the lack of a suitable calibration database
   makes it almost impossible to solve for these functions from PMM data.
   The choice of zero vignetting function follows from Henden's analysis
   of the UJ1.0 data in which he could not independently verify the
   Palomar Schmidt vignetting function adopted by the Guide Star Catalog.
   Henden's analysis showed only a marginally significant function, and
   it was substantially smaller than that developed for the GSC.

   The northern calibration must be done first because of the reliance
   of the southern calibration on it.  Both are then copied to binary/ugap7
   where they are applied to the uncalibrated catalog and same files.
   Various other programs verified that the calibration was applied
   properly.

   The distinction between galaxy magnitudes and stellar magnitudes was
   ignored.  This followed from the lack of star/galaxy separation
   information for POSS-I plates.  The reductions being developed for
   USNO-B include star/galaxy separation, but they rely on the improved
   signal to noise ratio offered by the fine grain emulsions.

   Future releases of USNO-A will incorporate improved photometric
   calibration algorithms.  The release of the Tycho catalog in 1997
   will offer a dramatic improvement in the calibration of the bright
   end of the catalog as well as the transition from saturation
   around 12th magnitude.  The release of GSPC-II will provide an important
   calibration database for the intermediate stars, especially in the south,
   but more work is needed to extend the calibration to 20th magnitude
   and beyond.

5  Compressed version at CDS

5.1  The installation of the PMM USNO-A2.0 Catalog at CDS

The original catalog consisted of 24 files, one per 7.5° strip in declination. Each file was expanded into a directory, named N0000...N8230 and S0000...S8230, i.e. following the same conventions as those used for the GSC Catalog.

In each of these directories, there is one file per 30min (i.e. 7.5° at the Equator) in right ascension; the total number of files is therefore 24×48 = 1152 files, with from about 20,000 (near the poles) up to 800,000 objects per file. In each of these files, the range of the coordinates is then restricted to 7.5°, i.e. a maximum value of 2,700,000 when the coordinates are expressed in their original units of 10mas. This final grouping made it possible to reduce one record to 7 or 8 bytes (the mean is close to 7).
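
As a quick check of the arithmetic behind the record size (the actual CDS packing scheme is not described here, so this is only the bit budget implied by the numbers above):

   # 7.5 degrees expressed in units of 10 mas
   span = int(7.5 * 3600 * 100)        # 2,700,000
   bits = span.bit_length()            # 22 bits per coordinate
   # two coordinates need 44 bits, leaving room for the two magnitudes
   # within the 7-8 bytes (56-64 bits) quoted for one record
   print(span, bits, 2 * bits)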

The resulting catalog occupies only 3.6Gbytes, including all transformation and query software; the full 526×10⁶ objects are tested in about 45 minutes (i.e. 5µs per object) on a Sparc-20 (72MHz).

A few benchmarks made on a Sparc-20 (72MHz) give the following average elapsed times (averaged over between 15 and 70 searches) for a search by position on the catalog, keeping the 10 closest stars (the tests were actually performed on USNO-A1.0, which was converted in April 1997 with almost identical software):

=========================================================================
   Search        Tested stars   Time required   Reading time
  Radius (')     per target        (s)         for 1 star (microsec)
-------------------------------------------------------------------------
       2.5           14153        0.09           6.4
      10.0           66394        0.24           3.6
      30.0          201351        0.67           3.3
=========================================================================

Client/server access to the PMM USNO-A2.0 Catalog – as well as to other catalogues – is also available via the findpmm2 program, which is part of the cdsclient package.

François Ochsenbein, <&CDS.home>
E-mail:
(October 1998)

<&Viz.tailmenu /home/cds/httpd/Pages/VizieR/pmm/usno2.htx "index">