***************************************************************************
  CONTENTS
  --------
  1. Obtaining the latest version

  2. Installing HDF
  2.1 Supported Platforms
  2.2 Third Party Software Requirements
  2.3 System Requirements

  2.4 General Configuration/Installation - Unix
  2.4.1 Overview
  2.4.2 Layout of configuration files

  2.4.3 Changing default values(CC,CFLAGS,..) and Setting Options
  2.4.3.1 Changing default values(CC,CFLAGS,..)
  2.4.3.2 Using HDF/MFHDF libraries w/ original netCDF library
  2.4.3.3 Setting other Options

  2.4.4 Running configure
  2.4.5 Dealing with Configure Problems
  2.4.6 Compiling, Testing and Installing

  2.5 Platform-specific Notes
  2.5.1 HPUX(9.03) on PA-RISC
  2.5.2 Solaris(2.5) on Sparc
  2.5.3 Solaris(2.5.1) on INTEL(x86)
  2.5.4 OpenVMS AXP and OpenVMS VAX
  2.5.5 Windows '95 or Windows NT
  2.5.6 PowerPC/68k(far/4i/8d)- Macintosh OS
  2.5.7 Exemplar (HP-UX 9.03)
  2.5.8 SP2 Single node
  2.5.9 T3E Single node
  2.5.10a SGI IRIX 6.x
  2.5.10b SGI IRIX64
  2.5.11 DEC Alpha(Digital Unix v4.0)
  2.5.12 Cray T90

  2.6 Pablo Instrumentation
  2.7 File Cache(Beta release)
  2.8 Installation Location
  2.9 Specifying the System Type
  2.10 Configure Options 

  3. Man pages

  4. Release Notes

  5. Documentation

  6. FAQ

  7. Java Products

  8. HELP

*****************************************************************************

1. Obtaining the latest version
   ============================

    The most recent version of the distribution can be obtained from
    the NCSA ftp archive site at:

     ftp://ftp.ncsa.uiuc.edu/HDF/HDF_Current

2. Installing HDF	
   ==============

    To compile and install the HDF libraries, tests and utilities
    on a system, please follow the instructions below. 

2.1 Supported Platforms
    ===================

    For platform-specific notes, see Section 2.5,
    'Platform-specific Notes'.


  Platform(OS)                    C-Compiler       Fortran-Compiler   
  ------------                    ----------       ---------------- 
  Sun4(SunOS 4.1.4)               GCC 2.6.3        f77 SC1.0            
  Sun4(Solaris 2.5)               CC SC4.0         f77 SC4.0            
  SGI-Indy(IRIX 5.3)              CC 3.19          f77
  SGI-Indy(IRIX v6.2)             CC               f77 6.2
  SGI-Origin(IRIX64 v6.4-n32)     CC 7.20          f77 7.20     
  SGI-Origin(IRIX64 v6.4-64)      CC 7.20          f77 7.20    
  HP9000/735(HP-UX 9.03)          CC A.09.84       f77 09.16  
  HP9000/755(HP-UX B.10.20)       CC A.10.32.03    f77       
  Exemplar(HP-UX A.09.03)         CC 6.5           fc 9.5        
  Cray T90
   CFP  (UNICOS 10.0.0bt u10.13)  CC 5.0.5.0       f90 3.0.2.0  
   IEEE (UNICOS 10.0.0bt d10.25)  CC 5.0.5.0       f90 3.0.2.0  
  Cray C90 (UNICOS 803.2)           -                -        
  IBM SP2 (single node, v4.2.1)   XLC 3.1.4.0      f77 4.1.0.6       
  DEC Alpha/Digital Unix v4.0     CC               f77              
  DEC Alpha/OpenVMS AXP v6.2        -                -  
  DEC Alpha/OpenVMS AXP v7.1        -                -  
  VAX OpenVMS v6.2                  -                - 
  IBM PC - Intel Pentium
       Solarisx86 (2.5.1)         GCC 2.7.2.1        -                
       Linux (elf) (2.0.30)       GCC 2.7.2.1      fort77 (f2c) 2.7.2.1     
       FreeBSD (2.2.1)            GCC 2.7.2        f77 2.7.2.1
  PowerPC(Mac-OS-7.6)               -                -              
  68k-far/4i/8d(Mac-OS-7.6)         -                -           
  Windows NT/95                     -                -             
  T3E (unicosmk 2.0.2.16)         CC 6.0.2.0.5     f90 3.0.2.0


  NOTE:  Platforms listed with compiler information are platforms 
  on which HDF was tested and for which we provide pre-compiled 
  binaries.  If a platform is listed but has a '-' in the C and 
  Fortran fields, then we 'support' the platform even though we have 
  done no testing on it.  We will do our best to answer any questions 
  regarding it.

  A release is scheduled for the VMS, Windows NT/95 and Macintosh 
  platforms for HDF 4.1r2. 
 
2.2  Third Party Software Requirements:
     ==================================

     1. IJPEG distribution release 6b (libjpeg.a). The "official" site
        for this is ftp://ftp.uu.net/graphics/jpeg/jpegsrc.v6b.tar.gz

     2. ZLIB 1.0.4 (libz.a) distribution.

     Both of these distributions are included with this distribution
     in 'hdf/jpeg' and 'hdf/zlib'. The HDF/mfhdf base distribution
     is known to work with these versions only.

2.3 System Requirements
    ===================

    To build HDF from source, you need:

      * an ANSI C compiler. The native ANSI compilers on the above 
        platforms are supported. On platforms where no ANSI compiler
        was available the free GNU ANSI compiler GCC was used.

      * a Fortran 77 compiler (Fortran 90 on Crays) if you want Fortran 
        support. See the table above for platforms where Fortran is 
        supported. You can compile both libraries without Fortran 
        support by setting the Fortran compiler variable 'FC = NONE' 
        in the respective makefile fragment (mh-<os>) found in the 
        top-level 'config' directory: 

                $(toplevel)/config/mh-<os>.

        See below for further details of configuration and installation.
        

2.4 General Configuration/Installation - Unix
    =========================================

    2.4.1 Overview
    --------------        
    This distribution contains two types of 'configure' scripts: the
    Cygnus 'configure' script, and 'configure' scripts created by the
    GNU autoconf package. The Cygnus 'configure' script is used at the
    top level to configure the overall distribution and the
    HDF/MFHDF/IJPEG/ZLIB libraries. The GNU 'configure' scripts ship
    with the netCDF/IJPEG distributions so that those packages can
    configure themselves; they are not used in configuring this
    distribution.
 
    The Cygnus 'configure' script attempts to guess the correct
    platform you are configuring the distribution for by calling the
    shell script 'config.guess'. Based on information obtained from
    the UNIX command 'uname', it outputs a unique string of the form
    CPU-VENDOR-OS, e.g. 'hppa1.1-hp-hpux9.03' for an HP9000/735
    running HP-UX 9.03.
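
    The structure of such a triplet can be sketched in a few lines of
    C. This is illustrative only ('split_triplet' is a hypothetical
    helper, not part of the distribution); 'configure' itself uses the
    config.guess/config.sub shell scripts, not this code:

    ```c
    #include <stdio.h>
    #include <string.h>

    /* Split a CPU-VENDOR-OS configuration triplet, as printed by
     * config.guess, into its three fields.  Illustrative sketch only. */
    static void split_triplet(const char *triplet,
                              char *cpu, char *vendor, char *os)
    {
        char buf[128];
        strncpy(buf, triplet, sizeof(buf) - 1);
        buf[sizeof(buf) - 1] = '\0';
        strcpy(cpu, strtok(buf, "-"));      /* text before first '-'   */
        strcpy(vendor, strtok(NULL, "-"));  /* text before second '-'  */
        strcpy(os, strtok(NULL, ""));       /* remainder, dots included */
    }

    int main(void)
    {
        char cpu[64], vendor[64], os[64];
        split_triplet("hppa1.1-hp-hpux9.03", cpu, vendor, os);
        printf("CPU=%s VENDOR=%s OS=%s\n", cpu, vendor, os);
        return 0;
    }
    ```

    Note that dots are legal inside each field; only the dashes
    separate CPU, vendor, and OS.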

    2.4.2 Layout of configuration files
    -----------------------------------
    The following shows the layout of the files used in the configuration
    of the HDF distribution.

    NOTE: The $(toplevel)/mfhdf/CUSTOMIZE and 
          $(toplevel)/mfhdf/configure(autoconf) files are no longer used 
          in the configuration of the distribution.
  
    $(toplevel)/Makefile.in
                config.guess
                config.sub
                configure (cygnus)
                configure.in (cygnus)
                config/mh-hpux, mh-sun,.....(host makefile fragments)

                man/Makefile.in

                mfhdf/CUSTOMIZE(not used)
                mfhdf/configure(autoconf - not used)
                mfhdf/libsrc/config/netcdf-aix.h,...  -> copied to netcdf.h
                mfhdf/fortran/config/ftest-aix.f,...  -> copied to ftest.f
                mfhdf/fortran/config/jackets-aix.c,.. -> copied to jackets.c
                mfhdf/fortran/config/netcdf-aix.inc,..-> copied to netcdf.inc

                hdf/Makefile.in
                hdf/src/Makefile.in
                hdf/test/Makefile.in
                hdf/util/Makefile.in
                hdf/zlib/Makefile.in
                hdf/pablo/Makefile.in

                hdf/jpeg/configure.in (cygnus)
                hdf/jpeg/Makefile.in
                hdf/jpeg/configure.gnu(autoconf - not used)
                hdf/jpeg/config/mh-hpux, mh-sun,... (host makefile fragments)
                hdf/jpeg/config/jhpux.h, jsun.h,...   -> copied to jconfig.h

                hdf/fmpool/configure, configure.in config.guess, config.sub,
                           Makefile.in (all cygnus)
                hdf/fmpool/config/mh-hpux, mh-sun,...(host makefile fragments)
                hdf/fmpool/config/fmpsolaris.h,...    -> copied to fmpconf.h

    2.4.3 Changing default values(CC,CFLAGS,..) and Setting Options
    ---------------------------------------------------------------
    To change any of the default values or set any of the options, 
    edit the makefile fragment: 

             $(toplevel)/config/mh-<os>

    for your particular operating system. After changing the values, 
    you must re-run the top-level 'configure' script. If you are 
    rebuilding after a previous make, make sure you start from a clean 
    distribution (i.e. run 'make distclean') before re-running 
    'configure'.

      2.4.3.1 Changing default values(CC,CFLAGS,..)
      ********************************************
      To change any of the default values for CC, FC, CFLAGS, FFLAGS, 
      etc., edit the top part of the makefile fragment 
      $(toplevel)/config/mh-<os>. It is also a good idea to look at the 
      other system variables to make sure they are set correctly for 
      your system.

      2.4.3.2 Using HDF/MFHDF libraries w/ original netCDF library
      ************************************************************
      To use the HDF/MFHDF libraries(libdf.a, libmfhdf.a) with the
      original netCDF library(libnetcdf.a) the HDF/MFHDF distribution
      must be compiled with the option '-DHAVE_NETCDF'. This will
      rename the HDF version of the C-interface(ncxxx) of the netCDF API
      to sd_ncxxx to avoid clashing with the original netCDF API from
      libnetcdf.a. Currently there is no support for renaming the 
      netCDF Fortran interface stubs, so the HDF/MFHDF distribution 
      must be compiled without Fortran support. HDF users can still 
      access HDF/netCDF files through the SDxxx interface, but not 
      through the ncxxx interface unless the renamed interface 
      (sd_ncxxx) is used.

      2.4.3.3 Setting other Options
      *****************************
      The makefile fragment must also be modified to enable the features 
      mentioned in sections 2.6 and 2.7 below.

    2.4.4 Running configure
    -----------------------
    To build both of the libraries contained in this directory,
    run the ``configure'' script in $(toplevel), e.g.:

	./configure -v --prefix=/usr/local/hdf

    If you're using `csh' on an old version of System V, you might need 
    to type `sh ./configure -v --prefix=/usr/local/hdf' instead to prevent 
    `csh' from trying to execute `configure' itself.

    This will configure the distribution to install the libraries, utilities,
    include and man files in '/usr/local/hdf/lib','/usr/local/hdf/bin',
    '/usr/local/hdf/include' and '/usr/local/hdf/man' respectively. The
    default 'prefix' is '/usr/local'. It is advisable to use something
    like the above to avoid overwriting say another 'libjpeg.a' that might be
    installed in '/usr/local/lib'. The '-v' option is for verbose output.

    Note that both 'libz.a' and 'libjpeg.a' and their respective
    include files are installed along with the base HDF(libdf.a) 
    and netCDF(libmfhdf.a) libraries.

    If the configure script can't determine your type of computer,
    then it is probably a platform that is no longer supported.
    If you want to be adventurous, see the section 'Dealing with
    Configure Problems' below. Otherwise send email to 
    'hdfhelp@ncsa.uiuc.edu' for further help. 

    2.4.5 Dealing with Configure Problems
    -------------------------------------
    If you want to be adventurous you can try the following.

    Configure calls one of two shell scripts, 'config.guess' or
    'config.sub', depending upon whether a target platform was supplied 
    on the command line to configure. If you don't provide a target on
    the command line, configure calls 'config.guess' to guess what
    platform it is configuring for. The shell script 'config.guess'
    uses the UNIX command 'uname' to figure out the CPU, vendor, and
    OS of the platform. If you do provide a target on the command
    line, configure calls the shell script 'config.sub' to build the
    triplet specifying CPU, vendor, and OS from the full or partial
    target provided.

    If the configure script can't determine your type of computer, give
    it, as an argument, a general name that the computer is commonly 
    referred to by; for instance './configure sun4'.  You can use the 
    script 'config.sub' to test whether a general name is recognized; 
    if it is, config.sub translates it to a triplet specifying CPU, 
    vendor, and OS (e.g. hppa1.1-hp-hpux9.03 for an HP9000/735 running 
    HPUX 9.03).

    If this still fails, all is not lost. All the configure script really
    needs is one of the supported targets mentioned above (except NT).
    If you think your platform is close to one of the platforms
    mentioned in the 'Supported Platforms' section, you can pass 
    configure this target and it will configure the distribution for 
    that target.

    For possible mappings you will need to look inside the shell script
    'config.sub' at the partial-to-full mappings and pick one
    that satisfies the triplet mappings found in 'configure.in' below
    the section '# per-host:'. Note that if you try a mapping and it
    does not work, this means that 'config.sub' needs to be edited to
    provide the proper mapping from your target to a full mapping that
    is supported. 

    There are currently NO instructions for porting the distribution to a 
    new platform.

    2.4.6 Compiling, Testing and Installing
    ---------------------------------------
    To compile the library and utilities type:

        make 

    To find out the available make targets type:

        make help

    To test the libraries and utilities type:

        make test 

    It is a good idea to save the output of the tests and check it 
    later for errors, e.g.: 

        make test >& make.test.out

    To install the libraries, utilities, include files and man pages, type:

        make install

2.5 Platform-specific Notes
    ========================

    2.5.1 HPUX(9.03) on PA-RISC
    ---------------------------
    The distribution has been compiled/tested with the native
    ANSI-C compiler and Fortran compiler. The binary distribution 
    was compiled using the native compilers.

    2.5.2 Solaris(2.5) on Sparc
    ---------------------------
    The distribution has been compiled/tested with the native
    ANSI-C compiler and native fortran compiler. The binary 
    distribution was compiled using the native compilers.

    When compiling your programs on Solaris, you must include the 
    nsl library to resolve calls to the xdr* routines.
    For example,

      cc -Xc -xO2 -o <your program> <your program>.c  \
         -I<path for hdf include directory>\
         -L<path for hdf libraries> -lmfhdf -ldf -ljpeg -lz \
         -L/usr/lib -lnsl


    2.5.3 Solaris(2.5.1) on INTEL(x86)
    --------------------------------

    The distribution has been compiled/tested with GCC 2.7.2.1 with
    *NO* FORTRAN support.

    When compiling your programs on Solaris_x86, you must include the 
    nsl library to resolve calls to the xdr* routines.
    For example,

       gcc -ansi -O -o <your program> <your program>.c \
           -I<path for hdf include directory> \
           -L<path for hdf libraries> -lmfhdf -ldf -ljpeg -lz  \
           -L/usr/lib -lnsl

    2.5.4 OpenVMS AXP on DEC Alpha and OpenVMS on VAX
    -------------------------------------------------
    *** NOT AVAILABLE WITH THIS RELEASE ***
   
    2.5.5 Windows '95 or Windows NT
    -------------------------------
    *** NOT AVAILABLE WITH THIS RELEASE ***

    2.5.6 PowerPC/68k(far/4i/8d)- Macintosh OS
    ------------------------------------------
    *** NOT AVAILABLE WITH THIS RELEASE ***

    The distribution was compiled/tested with Metrowerks CodeWarrior (CW8).
    Only the base libraries {libdf.a, zlib.a, libjpeg.a, libmfhdf.a,
    libxdr.a} were compiled and tested on the PowerPC, without Fortran
    support.

    *NO* Fortran support is included in this distribution.

    Codewarrior Projects can be found with this distribution.
    They have been run through the Macintosh BinHex utility program. 
    You need to compile the libraries before you can compile the test 
    programs 'testhdf', 'xdrtest', 'cdftest', 'hdftest' and  'nctest'.

    2.5.6.1 Special Notes
    *********************
    1. The test programs are SIOUX applications.
    
    2. When testing 'testhdf' in the 'hdf/test' directory, make sure
       that a directory called 'testdir' exists in 'hdf/test'.
       This directory is used in the external element test.

    3. You need at least 8MB of memory to run most of the test programs.

    2.5.6.2 Configuring the Distribution
    ************************************
    A. If you have access to a unix box.

       1. Uncompress and untar the distribution.
       2. Type './configure mac'. This will build the distribution
          for the Macintosh. Specifically it will copy the correct
          files to the correct location. For the Macintosh release
          the two relevant files are the following: 

          mfhdf/libsrc/config/netcdf-mac.h  -> copied to mfhdf/libsrc/netcdf.h
          hdf/jpeg/config/jmac.h            -> copied to hdf/jpeg/jconfig.h

       3. Re-tar this newly configured distribution and download it to
          the Macintosh where any utility that understands the tar format
          can be used to unpack it. If you do not have access to such a 
          utility then you need to copy the files one by one.

    B. If you don't have access to a Unix box and are using the 
       'tar.Z'/'tar.gz' version.
       1. After you have downloaded the distribution to your Macintosh
          you need to find a utility to uncompress and untar the
          distribution.

       2. You will need to copy the following files to the correct locations
          under the new names (netcdf.h, jconfig.h).

          mfhdf/libsrc/config/netcdf-mac.h  -> copied to mfhdf/libsrc/netcdf.h
          hdf/jpeg/config/jmac.h            -> copied to hdf/jpeg/jconfig.h

    C. If you are using the 'sit.hqx' version everything should have
       been configured already. You just need to un-stuff and un-binhex
       the distribution.

    2.5.6.3 Building the Distribution
    *********************************
    The distribution needs to be built in the order specified below.
    Both PowerPC and 68k Codewarrior Projects can be found in the 
    following directories:
 
    $(toplevel)/
                hdf/zlib/zlib.68k-project.hqx
                         zlib.PPC-project.hqx

                hdf/jpeg/jpeglib.68k-project.hqx
                         jpeglib.PPC-project.hqx

                hdf/src/hdflib.68k-project.hqx
                        hdflib.PPC-project.hqx

                hdf/test/testhdf.68k-project.hqx
                         testhdf.PPC-project.hqx

                hdf/test/testdir(need to create this if it does not exist)
    
                mfhdf/xdr/xdrlib.68k-project.hqx
                          xdrlib.PPC-project.hqx
                          xdrtest.68k-project.hqx
                          xdrtest.PPC-project.hqx

                mfhdf/libsrc/mfhdflib.68k-project.hqx
                             mfhdflib.PPC-project.hqx
                             cdftest.68k-project.hqx
                             cdftest.PPC-project.hqx
                             hdftest.68k-project.hqx
                             hdftest.PPC-project.hqx

                mfhdf/nctest/nctest.68k-project.hqx
                             nctest.PPC-project.hqx

    2.5.6.4 Testing the Distribution
    ********************************
    Run the tests in the following order
      
       1. hdf/test/testhdf
          Note:
           When testing 'testhdf' in the 'hdf/test' directory, make sure
           that a directory called 'testdir' exists in 'hdf/test'.
           This directory is used in the external element test.

       2. mfhdf/xdr/xdrtest
           After running this test compare the output to that shown in
           the file mfhdf/xdr/testout.sav

       3. mfhdf/libsrc/hdftest
           After running this test compare the output to that shown in
           the file mfhdf/libsrc/hdfout.sav

       4. mfhdf/libsrc/cdftest
           After running this test compare the output to that shown in
           the file mfhdf/libsrc/testout.sav

       5. mfhdf/nctest/nctest

    2.5.7 Exemplar
    --------------
    HP Exemplar (Convex) machines running version 10.x of HP-UX can
    now only be configured for HP-UX.  If you are running an Exemplar
    with an earlier version of the operating system, you must configure
    the machine as follows:

        ./configure -v --host=c2-convex-bsd

    Otherwise, the machine will be configured for HP-UX.
        

    2.5.8 SP2 Single node
    ----------------------
    HDF has been compiled and tested to run in a single node of the SP2
    system.  You can make the library the same way you would on an AIX 
    system.  To use it in the parallel processing environment on the SP2 
    system, you must run the HDF code in only one designated process, 
    since HDF does not support concurrent access to the same HDF file.


    2.5.9 T3E Single node
    -------------------------
    HDF has been compiled and tested to run in a single node of the
    T3E system.  To use it in the parallel processing environment on
    the T3E system, you must run the HDF code in only one designated
    process since HDF does not support concurrent access to the same
    HDF file.

    Prior to installing on the T3E, the following changes will have
    to be made to ./hdf/test/tsdmmsf.f and ./hdf/test/tsdnmmsf.f:

    Search on '-128'. You will find two instances of this in
    each file.  The second instance looks as follows: 

    C NOTE: If you get a compile error on the "char(-128)" line, substitute
    C       the "char(0)" line.  Its not quite as thorough a test, but...
    C      i8min = char(0)
	  i8min = char(-128)

    Replace the char(-128) line, as directed.


    NOTE: Starting with this release, HDF is compiled with the f90
    compiler, as Cray has phased out cf77.  The f90 compiler issues
    numerous warnings during the compiling of the Fortran API test
    programs.  They can be safely ignored.  One warning is about the
    unsupported DOUBLE PRECISION being replaced by REAL.  That works
    fine for the purpose of the test programs, since the T3E REAL is 8
    bytes in size, the same as DOUBLE PRECISION on other machines.
    Another warning comes from the loader, which complains about many 
    SYMBOLS referenced but not defined.  Those symbols are actually HDF 
    Fortran function names declared in the dffunc.inc file; they are not 
    used in the testing.


    2.5.10a SGI IRIX 6.x
    --------------------------
    IRIX is the traditional SGI 32-bit OS.  Starting in version 6.x,
    it supports two classes of 32-bit compilers, the old 32-bit (-o32)
    and the new 32-bit (-n32).  SGI is phasing out the -o32 compilers;
    continued maintenance is available only for the -n32 class.
    The HDF library configures to use the -n32 class of C and F77 compilers.
    If you want to use different compiler options, you need to edit
    config/mh-irix32 and then run configure.  Consult the section
    of "General Configuration/Installation" for more information.


    2.5.10b SGI IRIX64
    --------------------------
    IRIX64 supports multiple combinations of ABIs (-64, -n32, -o32) and
    instruction sets (-mips2, -mips3, -mips4).  Previous HDF
    library releases hard-coded the MIPS settings by guessing what
    might be the most reasonable combination.  This release no longer
    sets the MIPS option but leaves it up to the local or user's
    default.  Configure still generates -64 code by default on
    an IRIX64 system.  If -n32 code is desired, you may override this
    by specifying 'irix6_32' during the configure step.

    Configure command	    Code produced
    -----------------	    -------------
    ./configure                 -64
    ./configure irix6_32        -n32

    If you want to use different compiler options, you need to edit
    config/mh-irix6 (for just configure) or config/mh-irix32 (for
    configure irix6_32) and then run configure.  Consult the section
    of "General Configuration/Installation" for more information.


    2.5.11 DEC Alpha(Digital Unix v4.0)
    ------------------------------------
    The distribution has been compiled/tested with the native Digital 
    Unix C and FORTRAN compilers.

    During the testing of the library the test 'mfhdf/libsrc/hdftest' 
    will report "Unaligned access ..." messages which can be ignored.

    2.5.12 Cray T90
    ---------------
    Prior to installing on the T90 CFP or T90 IEEE, the following
    changes will have to be made to ./hdf/test/tsdmmsf.f and
    ./hdf/test/tsdnmmsf.f:

    Search on '-128'. You will find two instances of this in
    each file.  The second instance looks as follows: 

    C NOTE: If you get a compile error on the "char(-128)" line, substitute
    C       the "char(0)" line.  Its not quite as thorough a test, but...
    C      i8min = char(0)
          i8min = char(-128)

    Replace the char(-128) line, as directed.


2.6 Pablo Instrumentation
    =====================

    This version of the distribution supports creating an instrumented 
    version of the HDF library (libdf-inst.a). This library, along with
    the Pablo performance data capture libraries, can be used to gather 
    data about I/O behavior and procedure execution times.  

    More detailed documentation on how to use the instrumented version of
    the HDF library with Pablo can be found in the Pablo directory: 

       $(toplevel)/hdf/pablo 

    See the provided '$(toplevel)/hdf/pablo/README.Pablo' and the PostScript 
    file '$(toplevel)/hdf/pablo/Pablo.ps'.

    At this time only an instrumented version of the core HDF library libdf.a 
    can be created. Future versions will have support for the SDxx interface
    found in libmfhdf.a. Current interfaces supported are ANxx, GRxx, DFSDxx,
    DFANxx, DFPxx, DFR8xx, DF24xx, Hxx, Vxx, and VSxx.

    To enable the creation of an instrumented library the following section
    in the makefile fragment($(toplevel)/config/mh-<os>) must be uncommented 
    and set.

    # ------------ Macros for Pablo Instrumentation  --------------------
    # Uncomment the following lines to create a Pablo Instrumentation
    # version of the HDF core library called 'libdf-inst.a'
    # See the documentation in the directory 'hdf/pablo' for further 
    # information about Pablo and what platforms it is supported on
    # before enabling. 
    # You need to set 'PABLO_INCLUDE' to the Pablo distribution 
    # include directory to get the files 'IOTrace.h' and 'IOTrace_SD.h'.
    #PABLO_FLAGS  = -DHAVE_PABLO
    #PABLO_INCLUDE = -I/hdf2/Pablo/Instrument.HP/include

    After setting these values you must re-run the toplevel 'configure' 
    script. Make sure that you start from a clean build (i.e. run 
    'make clean') after re-running the toplevel 'configure' script, and 
    then run 'make'.
    Details on running configure can be found above in the section
    'General Configuration/Installation - Unix'.

2.7 File Cache(Beta release)
    =================================
    This version of the distribution has preliminary support for file caching.

*NOTE*: This version is NOT officially supported on all platforms
        and has not been extensively tested. As such it is provided as is.
        It will be supported officially in a later release.

    The file cache allows the file to be mapped to user memory on 
    a per-page basis, i.e. a memory pool of the file. With regard to 
    the file system, pages can be allocated at the file-system 
    page size or, if the user wants, at some multiple of the 
    file-system page size. This allows fewer pages to be managed 
    while accommodating the user's file-usage pattern.
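
    The trade-off can be sketched numerically. The helper below is
    plain arithmetic illustrating the idea in the paragraph above
    ('pool_pages' is a hypothetical name, not part of the HDF API): a
    pool page size that is a multiple of the file-system page size
    means fewer pages to manage for the same file.

    ```c
    #include <stdio.h>

    /* How many pool pages cover a file of 'filesize' bytes when the
     * pool page size is 'multiple' times the file-system page size?
     * Plain arithmetic illustrating the scheme; not an HDF routine. */
    static long pool_pages(long filesize, long fs_pagesize, long multiple)
    {
        long pagesize = fs_pagesize * multiple;        /* pool page size */
        return (filesize + pagesize - 1) / pagesize;   /* round up       */
    }

    int main(void)
    {
        /* A 100000-byte file with 8192-byte file-system pages: */
        printf("%ld\n", pool_pages(100000L, 8192L, 1L));  /* 13 pages */
        printf("%ld\n", pool_pages(100000L, 8192L, 4L));  /* only 4 pages */
        return 0;
    }
    ```

    Quadrupling the page size here cuts the number of managed pages
    from 13 to 4, at the cost of coarser-grained caching.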

    The current version supports setting the page size and number of 
    pages in the memory pool through user C routines (Fortran will be 
    added in a future release). The defaults are 8192 bytes for the 
    page size and 1 for the number of pages in the pool.

    Routines:(The names may change in the future...)
    -------------------------------------------------
    Hmpset(int pagesize, int maxcache, int flags)
    --------------------------------------------
    o  Set the pagesize and maximum number of pages to cache on the next
       open/create of a file. A pagesize that is a power of 2 is recommended.
       'pagesize' must be greater than MIN_PAGESIZE(512) bytes and 
       'maxcache' must be greater than or equal to 1. Valid values
       for both arguments are required when using this call.

       The values set here only affect the next open/creation of a file and
       do not change a particular file's paging behavior after it has been
       opened or created. This may change in a later release.

       Pass 'MP_PAGEALL' in the 'flags' argument if the whole file is to
       be cached in memory; otherwise pass in zero. When 'MP_PAGEALL' is
       used, the value of 'maxcache' is ignored, but you must still pass
       in a valid value for 'pagesize'. 
 
    Hmpget(int *pagesize, int *maxcache, int flags)
    ----------------------------------------------
    o   Gets the pagesize and maximum number of pages to cache that were
        set for the last open/create of a file. The 'flags' argument is 
        not used.
    
    In this version a new file memory pool is created for every file that 
    is created/opened and cannot be shared. Future versions will allow 
    sharing of the file memory pool with other threads/processes.
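
    A minimal usage sketch of the two routines follows. The stub
    bodies exist only so the sketch compiles standalone; in a real
    program the routines come from libdf.a built with page caching
    enabled, and the int return type and 0/-1 success convention used
    here are assumptions, not documented behavior.

    ```c
    #include <stdio.h>

    /* Stub implementations mimicking the documented defaults
     * (8192-byte pages, 1 page).  Illustrative only; the real
     * Hmpset/Hmpget live in libdf.a. */
    static int cur_pagesize = 8192;
    static int cur_maxcache = 1;

    static int Hmpset(int pagesize, int maxcache, int flags)
    {
        (void)flags;                  /* MP_PAGEALL handling omitted */
        if (pagesize <= 512 || maxcache < 1)
            return -1;                /* both arguments must be valid */
        cur_pagesize = pagesize;
        cur_maxcache = maxcache;
        return 0;
    }

    static int Hmpget(int *pagesize, int *maxcache, int flags)
    {
        (void)flags;                  /* 'flags' is not used */
        *pagesize = cur_pagesize;
        *maxcache = cur_maxcache;
        return 0;
    }

    int main(void)
    {
        int psize, ncache;

        /* Ask for 16K pages, 4 pages cached, on the next open/create. */
        Hmpset(16384, 4, 0);

        Hmpget(&psize, &ncache, 0);
        printf("pagesize=%d maxcache=%d\n", psize, ncache);
        return 0;
    }
    ```

    Remember that the values only take effect on the next open/create
    of a file, as noted above.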

    To enable the creation of a library using page caching the following 
    section in the makefile fragment($(toplevel)/config/mh-<os>) must be 
    uncommented and set.

    # ------------ Macros for Shared Memory File Buffer Pool(fmpool) ------
    # Uncomment the following lines to enable shared memory file buffer pool
    # version of the HDF core library libdf.a. Please read the
    # documentation before enabling this feature.
    #FMPOOL_FLAGS  = -DHAVE_FMPOOL

    After setting these values you must re-run the toplevel 'configure' 
    script. Make sure that you start from a clean build (i.e. run 
    'make clean') after re-running the toplevel 'configure' script, and 
    then run 'make'.
    Details on running configure can be found above in the section
    'General Configuration/Installation - Unix'.

    The file caching version of libdf.a is automatically tested
    when the regular HDF and netCDF tests are run. The page caching
    version has been tested only on a few UNIX platforms and is NOT
    available for the Macintosh, IBM-PC (Windows NT/95) or VMS platforms.

2.8 Installation Location
    =====================

    By default, `make install' will install the HDF/mfhdf files in
    `$(toplevel)/NewHDF/bin', '$(toplevel)/NewHDF/lib', etc.  You may
    then copy the files to the appropriate directories on your system.
    If you prefer, you can specify the directory so that `make install'
    will install the files directly in it.  This is done by giving
    `configure' the option `--prefix=PATH'.

    e.g.  ./configure -v --prefix=/usr/local/hdf

    This will configure the distribution to install the libraries,
    utilities, include and man files in '/usr/local/hdf/lib',
    '/usr/local/hdf/bin', '/usr/local/hdf/include' and
    '/usr/local/hdf/man' respectively.

2.9 Specifying the System Type
    ==========================

    There may be some features `configure' cannot determine
    automatically without knowing the type of host HDF/mfhdf will run
    on.  Usually `configure' can figure that out, but if it prints
    a message saying it cannot guess the host type, give it the
    `--host=TYPE' option.  TYPE can either be a short name for the system
    type, such as `sun4', or a canonical name with three fields:

         CPU-COMPANY-SYSTEM

    e.g. hppa1.1-hp-hpux9.03

    See the file `config.sub' for the possible values of each field.


2.10 Configure Options 
    ==================

    Usage: configure [OPTIONS] [HOST]

    where HOST is something like "sparc-sunos" or "mips-sgi-irix5".

    `configure' recognizes the following options to control how it
    operates. 

    NOTE: not all options are currently supported by this
          distribution. The following are the only ones supported.

    `--help'
         Print a summary of the options to `configure', and exit.

     `--prefix=MYDIR'          install into MYDIR [$(toplevel)/NewHDF]


3. Man pages
   =============

    Man pages can be found in:

         $(toplevel)/man

4. Release notes
   =============
    The files in the sub-directory $(toplevel)/release_notes contain 
    detailed descriptions of the new features and changes in this 
    release. They can be used as supplemental documentation. These 
    files are also available on the NCSA ftp server (ftp.ncsa.uiuc.edu) 
    in:
 
         /HDF/HDF_Current/release_notes/.

5. Documentation
   =============

   The HDF documentation can be found on the NCSA ftp server
   in the directory /HDF/Documentation/.  The
   HDF home page is at:

      http://hdf.ncsa.uiuc.edu/

6. FAQ
   ===
   An FAQ is available on our ftp server, as well as under
   "Information about HDF" on the home page.

7. HDF Java Products
   =================
   The HDF Java Interface and Java HDF Viewer are included as an
   optional part of the HDF source.  These products are built
   separately after the library.  See java-hdf/README.


8. HELP
   ====
   If you have any questions or comments, or would like to be
   added to or removed from our hdfnews email list, contact us
   at:

      hdfhelp@ncsa.uiuc.edu


