Table of Contents
Introduction
Required Compilers and Scripting Languages
o WRF System Software Requirements
o WPS Software Requirements
Required/Optional Libraries to Download
UNIX Environment Settings
Building the WRF System for the NMM Core
o Obtaining and Opening the WRF Package
o How to Configure WRF
o How to Compile WRF for the NMM Core
Building the WRF Preprocessing System
o How to Install the WPS
Introduction
The WRF modeling system software installation is fairly straightforward on the ported
platforms listed below. The model-component portion of the package is mostly self-
contained. The WRF model contains the source code for a Fortran interface to ESMF
and the source for FFTPACK. Contained within the WRF system is the WRFDA
component, which has several external libraries that the user must install (for various
observation types and linear algebra solvers). Similarly, the WPS package, separate from
the WRF source code, has additional external libraries that must be built (in support of
Grib2 processing). The one external package required by all of the systems is the
netCDF library, which is one of the supported I/O API packages. The netCDF libraries
and source code are available from the Unidata homepage at http://www.unidata.ucar.edu
(select DOWNLOADS; registration required).
The WRF model has been successfully ported to a number of Unix-based machines.
WRF developers do not have access to all of them and must rely on outside users and
vendors to supply the required configuration information for the compiler and loader
options. Below is a list of the supported combinations of hardware and software for
WRF.
The WRF model may be built to run on a single-processor machine, a shared-memory
machine (using the OpenMP API), a distributed-memory machine (with the appropriate
MPI libraries), or on a distributed cluster (utilizing both OpenMP and MPI). For WRF-
NMM, it is currently recommended to compile either serially or with distributed
memory. The WPS package also runs on the systems listed above.
The netCDF libraries should be installed either in a directory included in the user's path
or in /usr/local, and the netCDF installation directory (the one containing the include/
subdirectory) should be defined by the environment variable NETCDF.
To execute netCDF commands, such as ncdump and ncgen, /path-to-netcdf/bin may also
need to be added to the user's path.
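For example, one might use the following csh/tcsh settings (the installation path shown is illustrative):

```shell
# csh/tcsh example; substitute the actual netCDF installation path.
setenv NETCDF /usr/local/netcdf
set path = ( $path $NETCDF/bin )
```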
Hint: When compiling the WRF code on a Linux system using the PGI (or Intel, g95, or
gfortran) compiler, make sure the netCDF library has been installed using that same
compiler.
Hint: On NCAR's IBM computer, the netCDF library is installed for both 32-bit and 64-
bit memory usage. The default is the 32-bit version. If you would like to use the
64-bit version, set the following environment variable before you start compilation:
setenv OBJECT_MODE 64
If distributed memory jobs will be run, a version of MPI is required prior to building the
WRF-NMM. A version of mpich for LINUX-PCs can be downloaded from:
http://www-unix.mcs.anl.gov/mpi/mpich
The user may want their system administrator to install the code. To determine whether
MPI is available on your computer system, try:
which mpif90
which mpicc
If both of these executables are found, MPI is probably already available. The MPI lib/,
include/, and bin/ directories need to be included in the user's path.
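For example, if MPI were installed under /usr/local/mpich (an illustrative path), the bin/ directory could be added to the path in csh or tcsh:

```shell
# csh/tcsh example; substitute the actual MPI installation path.
set path = ( /usr/local/mpich/bin $path )
```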
Three libraries are required by the WPS ungrib program for GRIB Edition 2 compression
support. Users are encouraged to engage their system administrator's support for the
installation of these packages so that traditional library and include paths are
maintained. Paths to user-installed compression libraries are handled in the
configure.wps file by the COMPRESSION_LIBS and COMPRESSION_INC variables.
As an alternative to manually editing the COMPRESSION_LIBS and
COMPRESSION_INC variables in the configure.wps file, users may set the
environment variables JASPERLIB and JASPERINC to the directories holding the
JasPer library and include files before configuring the WPS; for example, if the JasPer
libraries were installed in /usr/local/jasper-1.900.1, one might use the following
commands (in csh or tcsh):
setenv JASPERLIB /usr/local/jasper-1.900.1/lib
setenv JASPERINC /usr/local/jasper-1.900.1/include
If the zlib and PNG libraries are not in a standard path that will be checked automatically
by the compiler, the paths to these libraries can be added on to the JasPer environment
variables; for example, if the PNG libraries were installed in /usr/local/libpng-1.2.29 and
the zlib libraries were installed in /usr/local/zlib-1.2.3, one might use
setenv JASPERLIB "${JASPERLIB} -L/usr/local/libpng-1.2.29/lib -L/usr/local/zlib-1.2.3/lib"
setenv JASPERINC "${JASPERINC} -I/usr/local/libpng-1.2.29/include -I/usr/local/zlib-1.2.3/include"
after having previously set JASPERLIB and JASPERINC.
In addition, there are a few WRF-related environment settings. To build the WRF-
NMM core, the environment variable WRF_NMM_CORE must be set. If nesting will be
used, the WRF_NMM_NEST environment variable also needs to be set (see below). A
single domain can still be run even if WRF_NMM_NEST is set to 1. If the WRF-
NMM will be built for the HWRF configuration, the HWRF environment variable also
needs to be set (see below). The remaining settings are not required, but the user may
want to try some of them if difficulties are encountered during the build process.
In C-shell syntax:
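For example, the core-selection settings described above would be (set WRF_NMM_NEST and HWRF only if they apply to your build):

```shell
setenv WRF_NMM_CORE 1    # required to build the WRF-NMM core
setenv WRF_NMM_NEST 1    # only if nesting will be used
setenv HWRF 1            # only if building for the HWRF configuration
```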
Note: Always obtain the latest version of the code if you are not trying to continue a pre-
existing project. WRFV3 is just used as an example here.
Once the tar file is obtained, gunzip and untar the file.
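For example, if the downloaded file is named WRFV3.TAR.gz (the name will vary with the version), the unpacking steps are:

```shell
# For demonstration only, create a small stand-in archive; with a real
# download, skip these two lines and use the actual WRFV3.TAR.gz file.
mkdir -p WRFV3 && echo "WRF source stand-in" > WRFV3/README
tar -cf - WRFV3 | gzip > WRFV3.TAR.gz && rm -r WRFV3

# Unpack the tar file and enter the top-level directory:
gunzip WRFV3.TAR.gz
tar -xf WRFV3.TAR
cd WRFV3
```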
A helpful guide to building WRF using PGI compilers on a 32-bit or 64-bit LINUX
system can be found at:
http://www.pgroup.com/resources/tips.htm#WRF.
To configure WRF, go to the WRF (top) directory (cd WRF) and type:
./configure
You will be given a list of choices for your computer. These choices range from
compiling for a single processor job (serial), to using OpenMP shared-memory (SM) or
distributed-memory (DM) parallelization options for multiple processors.
Note: For WRF-NMM it is recommended at this time to compile either serial or utilizing
distributed memory (DM).
Once an option is selected, a choice of the type of nesting desired (no nesting (0),
basic (1), pre-set moves (2), or vortex following (3)) will be given. For WRF-NMM,
only no nesting or basic nesting is available at this time, unless the HWRF environment
variable is set, in which case (3) will be selected automatically, enabling the HWRF
vortex-following moving nest capability.
Check the configure.wrf file created and edit for compile options/paths, if necessary.
Hint: It is helpful to start with something simple, such as the serial build. If it is
successful, move on to build smpar or dmpar code. Remember to type ./clean -a between
each build.
Hint: If you anticipate generating a netCDF file larger than 2 GB (whether it is a
single- or multi-time-period data file [e.g., model history]), you may set the following
environment variable to activate the large-file support option in netCDF (in c-shell):
setenv WRFIO_NCD_LARGE_FILE_SUPPORT 1
Hint: If you would like to use parallel netCDF (p-netCDF) developed by Argonne
National Lab (http://trac.mcs.anl.gov/projects/parallel-netcdf), you will need to install p-
netCDF separately, and use the environment variable PNETCDF to set the path:
setenv PNETCDF path-to-pnetcdf-library
Hint: Since V3.5, compilation may take a bit longer due to the addition of the CLM4
module. If you do not intend to use the CLM4 land-surface model option, you can
modify your configure.wrf file by removing -DWRF_USE_CLM from
ARCH_LOCAL.
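One way to make that edit (demonstrated here on a sample file, since the exact contents of configure.wrf vary by platform; GNU sed is assumed for the -i option) is:

```shell
# Demonstration on a sample file; in practice, run the sed command on your
# actual configure.wrf. The sample ARCH_LOCAL line is illustrative.
echo 'ARCH_LOCAL = -DNONSTANDARD_SYSTEM_SUBR -DWRF_USE_CLM' > configure.wrf.sample
sed -i 's/ -DWRF_USE_CLM//' configure.wrf.sample
cat configure.wrf.sample
```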
To compile WRF for the NMM dynamic core, the following environment variable must
be set:
setenv WRF_NMM_CORE 1
If nesting will be used, also set:
setenv WRF_NMM_NEST 1
If building for the HWRF configuration, also set:
setenv HWRF 1
Once these environment variables are set, enter the following command:
./compile nmm_real
Typing:
./compile -h
or
./compile
produces a listing of all of the available compile options (only nmm_real is relevant to
the WRF-NMM core).
To remove all object files (except those in external/) and executables, type:
./clean
To remove all built files in ALL directories, as well as configure.wrf, type:
./clean -a
This action is recommended if a mistake is made during the installation process, or if the
Registry.NMM* or configure.wrf files have been edited.
The resulting executables are linked into run/ and test/nmm_real/. The test/nmm_real and
run directories are working directories that can be used for running the model.
Beginning with V3.5, the compression function in netCDF4 is supported. This option
typically reduces the file size by more than 50%. It requires netCDF4 to be installed
with the option --enable-netcdf-4. Before compiling WRF, you will need to set
the environment variable NETCDF4. In a C-shell environment, type:
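Assuming the conventional value of 1 to enable the option:

```shell
setenv NETCDF4 1
```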
More details on the WRF-NMM core, physics options, and running the model can be
found in Chapter 5. WRF-NMM input data must be created using the WPS code (see
Chapter 3).
More details on the functions of the WPS and how to run it can be found in Chapter 3.