User’s Guide
October 2011
Contents
1 Tools 3
1.1 Before we start . . . . . . . . . . . . . . . . . . . . . . . . . . 3
1.2 Preparing an experiment . . . . . . . . . . . . . . . . . . . . . 6
1.3 Launching an experiment . . . . . . . . . . . . . . . . . . . . 8
2 Namelist settings 11
2.1 General principle . . . . . . . . . . . . . . . . . . . . . . . . . 11
2.2 Important atmosphere namelists . . . . . . . . . . . . . . 11
2.2.1 NAERAD . . . . . . . . . . . . . . . . . . . . . . . . . 12
2.2.2 NAMARPHY . . . . . . . . . . . . . . . . . . . . . . . 12
2.2.3 NAMCT0 . . . . . . . . . . . . . . . . . . . . . . . . . 12
2.2.4 NAMCT1 . . . . . . . . . . . . . . . . . . . . . . . . . 13
2.2.5 NAMDIM . . . . . . . . . . . . . . . . . . . . . . . . . 13
2.2.6 NAMDPHY . . . . . . . . . . . . . . . . . . . . . . . . 14
2.2.7 NAMDYN . . . . . . . . . . . . . . . . . . . . . . . . . 14
2.2.8 NAMFPC . . . . . . . . . . . . . . . . . . . . . . . . . 15
2.2.9 NAMGEM . . . . . . . . . . . . . . . . . . . . . . . . 16
2.2.10 NAMMCC . . . . . . . . . . . . . . . . . . . . . . . . 16
2.2.11 NAMPAR0 . . . . . . . . . . . . . . . . . . . . . . . . 17
2.2.12 NAMPAR1 . . . . . . . . . . . . . . . . . . . . . . . . 17
2.2.13 NAMPHY . . . . . . . . . . . . . . . . . . . . . . . . . 17
2.2.14 NAMPHY0 . . . . . . . . . . . . . . . . . . . . . . . . 19
2.2.15 NAMPHY1 . . . . . . . . . . . . . . . . . . . . . . . . 20
2.2.16 NAMPHY2 . . . . . . . . . . . . . . . . . . . . . . . . 20
2.2.17 NAMRAD15 . . . . . . . . . . . . . . . . . . . . . . . 20
2.2.18 NAMRGRI . . . . . . . . . . . . . . . . . . . . . . . . 20
2.2.19 NAMRIP . . . . . . . . . . . . . . . . . . . . . . . . . 21
2.2.20 NAMSCEN . . . . . . . . . . . . . . . . . . . . . . . . 21
2.2.21 NAMTOPH . . . . . . . . . . . . . . . . . . . . . . . . 21
2.2.22 NAMVV1 . . . . . . . . . . . . . . . . . . . . . . . . . 22
2.3 Important SURFEX namelists . . . . . . . . . . . . . . . . . 22
2.3.1 NAM_PGD_GRID . . . . . . . . . . . . . . . . . . . 22
2.3.2 NAMDIM . . . . . . . . . . . . . . . . . . . . . . . . . 22
2.3.3 NAMRGRI . . . . . . . . . . . . . . . . . . . . . . . . 23
2.3.4 NAMGEM . . . . . . . . . . . . . . . . . . . . . . . . 23
2.3.5 NAM_ISBAn . . . . . . . . . . . . . . . . . . . . . . . 23
2.3.6 NAM_SGH_ISBAn . . . . . . . . . . . . . . . . . . . 23
2.3.7 NAM_SEAFLUXn . . . . . . . . . . . . . . . . . . . . 24
2.3.8 NAM_WATERFLUXn . . . . . . . . . . . . . . . . . 24
2.3.9 NAM_DIAG_SURF_ATMn . . . . . . . . . . . . . . 24
2.3.10 NAM_DIAG_SURFn . . . . . . . . . . . . . . . . . . 24
2.3.11 NAM_WRITE_DIAG_SURFn . . . . . . . . . . . . . 25
2.4 Modifying namelists . . . . . . . . . . . . . . . . . . . . . . . 25
4 Post-processing 33
4.1 The diagnostic monitor . . . . . . . . . . . . . . . . . . . . . . 33
4.2 Monthly means . . . . . . . . . . . . . . . . . . . . . . . . . . 34
4.3 Global time series . . . . . . . . . . . . . . . . . . . . . . . . . 38
4.4 Hemispheric time series . . . . . . . . . . . . . . . . . . . . . 38
4.5 Local time series . . . . . . . . . . . . . . . . . . . . . . . . . 40
5 Pre-processing 43
5.1 Rationale . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 43
5.2 Climatology files . . . . . . . . . . . . . . . . . . . . . . . . . 43
5.3 Initial conditions for the atmosphere . . . . . . . . . . . . . . 45
5.4 Initial conditions for the surface . . . . . . . . . . . . . . . . . 46
silos STORAGETEK). The heavy part of the model output is stored there.
This machine has a restricted unix shell and is mostly accessed through ftp.
A particularity of tori, which makes the scripts harder to read than on
the previous computers, is that it contains two families of machines. The
family toritx is based on Linux servers and offers interactive access. A job
submitted to tori is generally sent to it first. One can compile and use ftp
there, but not launch executable files. The family torisx is based on NEC
vector processors with support for parallel computation. The model and
other executable files run on it, but compilation and ftp access are not allowed.
The operating system is a NEC unix (very close to, but not identical with,
Linux). The two families share a permanent ($HOME) and a semi-permanent
($WORKDIR, retained for at least 24 h) disk space. A separate documentation (in
French) on the use of tori is available on the Arpege-climat web site.
To facilitate the preparation of multi-step scripts (typically one step on
toritx, one on torisx and one on toritx), a product named MTOOL has
been made available. It consists of a special syntax (#MTOOL... lines) and a
script pre-processor, which is used as:
mtool_filter.pl job
where job is the name of the script file. This command modifies the script on
toritx and submits it to toritx or torisx according to the lines inside the
script. In order to allow this operation, you must add to your configuration
file (e.g. .profile):
export MTOOL_ROOT=~mrpm631/public/mtool
PATH=${PATH}:$MTOOL_ROOT/bin
export PWD=$(pwd)
A directory corresponding to your job is created, where you can find the
output of your different steps (up to the point where your job crashed, if it
did). This directory is cleaned up after a couple of days. For more details
about MTOOL, a specific documentation (in French) is available on the
Arpege-climat web site.
As far as transfers to or from cougar are concerned, you are firmly encour-
aged to use on tori the commands ftget and ftput, which allow transfers
by ftp without giving the password. It is thus necessary to provide tori
with the password of cougar (and to update it when it changes). For that,
connect to tori and type the command:
ftmotpasse -u user -h cougar-tori
This operation creates on tori an encrypted file .ftuas. There is nothing
similar on sxclimat. Some scripts will work only if you have created a
.netrc file with the syntax:
machine cougar login user password pwd
It is very important, for security reasons, that your password on cougar be
different, because it is vulnerable (not encrypted, unlike in .ftuas). On the
other hand, illegal access to cougar is less harmful, because this machine
allows only a restricted shell. In the worst case, an attacker could corrupt or
destroy your archives or saturate your quota. Note that you will never have
to change this password, and therefore never to edit either .ftuas or .netrc.
On tori and sxclimat, by contrast, you have to change your password regularly.
As version 5.2 is designed for century-long simulations, or for hundreds of 4-
to 6-month-long hindcasts, you cannot avoid installing the automatic launch-
ing system. A (French) documentation of this software is available on the
Arpege-climat web site. Essentially, you must add the following line to your
tori configuration file (.profile):
PATH=~mrga562/relances/procs/procs.v2.2c:${PATH}
You have to change the parameters in the history file of your automatic
launching each time you change NSTEP; the safest way to work is to keep
NSTEP constant. You must also be careful if you do not start on January
1st: in order to keep the script as simple as possible, it does not take into
account the cases where variable MM is above 12.
To use the model, it is therefore first necessary to open an account on the
three machines referred to above. For users outside the domain meteo.fr,
it is also necessary that the network of Météo-France, through the fire-wall
parme, recognize the machine from which the connection is issued: to this
purpose, indicate in the authorization form the IP addresses of the
workstations you wish to use. The request for access authorization must be
renewed every year on your own initiative, and the procedure takes a couple
of weeks.
When all that precedes has been carried out, you are ready to launch the
model. In Chapter 2, you will learn how to prepare a configuration file named
namelist and a surface configuration file named EXSEG1.nam. In Chapter 3
you will edit and launch the script which produces the coupled integration of
the model. In Chapter 4 you will learn how to exploit the experiment you
have run, by preparing files for your favorite graphics software from the
various files created by Arpege-climat during the integration. In Chapter 5
you will learn how to prepare files to run Arpege-climat in a geometry
different from the standard tl127l31r or from the few existing ones, as well
as how to prepare the surface files. In Chapter 6, you will learn a little
more about ocean-atmosphere coupling. Note that the documentation on the
coupling software OASIS, the sea-ice model GELATO and the ocean model
NEMO is available on the web page of the ASTER team. Finally, Chapter 7
will explain how to compile the model from scratch or modify it with your
own routines, and Chapter 8 will teach you how to modify the model to add
extra variables or diagnostics. But first, let us see what you will not learn
in this handbook; then a summary of what is at your disposal for a basic
use of the model is given.
When one deals with user-friendly public software, one starts by downloading
a file in .tar.gz format. Once the file is expanded, one finds a README
file, a doc directory containing files in .ps, .pdf, .html or .tex format,
and other files one does not have to worry about initially. Reading
the README file teaches that it is necessary to modify and then launch a
Configure script, which creates a Makefile automatically. Then one
launches, in one or more steps, the command make, and the software is
installed. One creates an icon on one's desktop, and one is offered a beautiful
graphic interface with menus, buttons and input windows. There is nothing
more to do, except read the documentation if one wants to exploit more
than 10% of the possibilities of the software.
The Arpege-climat model does not follow this scheme. It is addressed to
qualified and tough researchers. First of all, a license agreement with Météo-
France is necessary to install Arpege-climat on a machine external to the
site. The model has been compiled on IBM (at ECMWF). To compile it on
another machine, one needs the help of qualified computer specialists, because
the assistance of the Arpege-climat team is limited to the supply of the source
code (once the agreement is signed) and of standard tools for compilation,
which you will have to adapt to your system. In addition to the library of the
model itself, you need an auxiliary library (xrd), a library of spectral
transforms (tfl), and a message-passing library (mpi/mpe). Although it has
been highly simplified in the present version, this kind of work may take
weeks, and possibly months if you want to implement parallel computations
on an unusual machine.
It follows that the standard use of the Arpege-climat model consists of
running an already existing executable file. Contrary to the models designed
in the 1980s, no option requires recompiling the model: the namelists cover
most configurations. Moreover, an average user can, with a minimum of
assistance from the Arpege-climat team, create a new executable file by
modifying and recompiling some subroutines. You can thus plan to replace
one physical parametrization by another, a work taking between one day and
six months according to the input-outputs of the new parametrization.
However, these operations are not described in detail in this guide.
The model is not reduced to a single a.out and a namelist. You need
initial-condition files for the four components (ocean, sea-ice, atmosphere
and land) of the system, plus initial files for the coupler. Here we focus
on the atmospheric component. In order to update the boundary conditions
each month, you need extra data (e.g. CO2 concentration, aerosols). The
standard geometry is a spectral triangular truncation at wave-number 127
with a reduced linear Gaussian grid and 31 vertical levels (abridged to
tl127l31r), which yields a uniform resolution of about 160 km; a few others
are available from the Arpege-climat team, e.g. a regional version (named
mediash_l31) with 50 km horizontal resolution over Europe and 300 km in
the southern Pacific. In Chapter 5 you will learn how to prepare the
necessary files. However, we do not explain here how to run seasonal
forecasts by preparing initial conditions from ECMWF analyses. This kind
of expertise requires that you fully master the simulation and validation of
the present climate. Nevertheless, for an occasional use, the Arpege-climat
team can provide the necessary files. For example, if you wish to
start the run on the day of your birth with a stretched geometry having your
birthplace for pole, you will be provided within a reasonable time with the
file of initial conditions and the 12 files of boundary conditions.
Finally, we do not describe here how to plot your data. This is a long
operation if you want beautiful results. But in our experience, every user
has a favorite graphical package, which generally requires data in a specific
format (a European standard such as GRIB, or a US-originated standard
such as NetCDF). Arpege uses its own storage format, which is not compatible
with graphical software, so we provide tools to transform elaborated
diagnostics into universal formats (ASCII, IEEE) and leave people free in
their choice. However, if you are a newcomer and lazy, do not re-invent the
wheel: ask your colleague (or the author of this documentation) for ready-to-
use but not-simple-to-customize scripts to get a first look at your data on
your Linux PC. The climatological documentation provides examples of plots
obtained with the GMT software.
Once the passwords are validated (the first password is often an out-of-date
one, to be changed at the first connection) and either encrypted (via .ftuas)
or simply entered (via .netrc), you can use the tools below:
There is one namelists file for the atmospheric setup and one for the surface
configuration. The file of namelists for the atmosphere makes it possible
to carry out simulations with Arpege in various configurations and various
resolutions without having to recompile anything. Each namelist breaks up
into a parameter list. One often says “the namelist” for “the file of
namelists”. There are 137 namelists read by the model in configuration 1
(direct model run), with 323 switches prescribed (amongst more than 2000).
Indeed, all switches have a default value defined in the code before the file
is read, so if switches are missing from the file, the model runs nevertheless.
It is necessary, however, that the 137 namelists be present, even if possibly
empty.
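To illustrate this rule, a namelist file is plain Fortran namelist input; a minimal sketch might look as follows (the group names are taken from this chapter, the single value shown is an example and not a recommended setting, and most of the 137 groups are omitted):

```fortran
 &NAMCT0
   NFRHIS=-6,
 /
 &NAMDYN
 /
 &NAMMCC
 /
```

The empty groups are present but contain no switches, so the defaults defined in the code apply to them.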
There exists a way of circumventing the editing of a namelist in the case
of simple modifications. It consists of passing, as arguments on the line
containing the name of the executable file of the model, key words which take
precedence over the switches of the file. This approach is discouraged for
climate simulations, as it runs counter to the founding principles of Arpege:
if you save the namelist associated with your experiment, you are sure to be
able to restart, extend, or modify this experiment in a controlled way.
However, this mechanism can be found in other configurations (e.g. 927, see
Chapter 5).
You will find hereafter comments on some switches present in the standard
file. For the others, it is necessary to browse the code documentation.
2.2.1 NAERAD
2.2.2 NAMARPHY
2.2.3 NAMCT0
LFBDAP If .T., diagnostics are written to the history file; in any case,
these diagnostics are backed up in other files.
NFRHIS Frequency for archiving history files, in time steps (if > 0) or
hours (if < 0). It is necessary to archive the last time step if one wants
to continue the integration.
NHISTS Choice of specific time steps for saving the history files. The
backup takes place at all multiples of NFRHIS (including 0) when
NHISTS(0)=0, which corresponds to the standard case. If you wish
to save only some occurrences, set NHISTS(0) different from zero: then
only the time steps NFRHIS×NHISTS(1), NFRHIS×NHISTS(2), ...,
NFRHIS×NHISTS(NHISTS(0)) are stored. This mechanism is found
in all frequencies of archiving or printing.
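As a small worked example of this mechanism (the values below are illustrative assumptions, not model defaults): with NFRHIS=6, NHISTS(0)=3 and NHISTS(1..3)=4,8,12, only three history files are written. A shell sketch:

```shell
#!/bin/sh
# Illustrative values, not defaults: archive every NFRHIS=6 steps,
# restricted to the NHISTS(0)=3 occurrences NHISTS(1..3) = 4, 8, 12.
NFRHIS=6
set -- 4 8 12                       # NHISTS(1..NHISTS(0))
saved=""
for n in "$@"; do
  saved="$saved $((NFRHIS * n))"    # time step actually archived
done
echo "archived steps:$saved"        # archived steps: 24 48 72
```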
NSFXHISTS Choice of specific time steps for saving the SURFEX Full-Pos
files, analogous to NHISTS.
NPOSTS Choice of the time steps for saving the Full-Pos files.
2.2.4 NAMCT1
2.2.5 NAMDIM
2.2.6 NAMDPHY
2.2.7 NAMDYN
NITMP Number of iterations for the search for the midpoint in the
semi-Lagrangian scheme.
2.2.8 NAMFPC
CFPFMT Type of output grid for Full-Pos. If GAUSS, fields are stored in
grid-point space; if MODEL, they are stored as spectral coefficients.
2.2.9 NAMGEM
2.2.10 NAMMCC
LMCC01 If .T., the boundary conditions are read from two monthly files.
This attempts to reproduce the behavior of versions 3 and earlier.
2.2.11 NAMPAR0
NOUTPUT Level of verbosity: nothing (0), one processor (1), all processors (2).
2.2.12 NAMPAR1
2.2.13 NAMPHY
2.2.14 NAMPHY0
2.2.15 NAMPHY1
SODELX Dimensionless depth of the layers in the soil for the heat diffusion.
2.2.16 NAMPHY2
2.2.17 NAMRAD15
LERAD6H15 If .T., additional calls are made during the first 6 hours.
2.2.18 NAMRGRI
NRGRI Number of longitudes per latitude circle (starting from the pole).
Necessary if NHTYP=2.
2.2.19 NAMRIP
LASTRF Keeps insolation as for years around 2000; used to prevent any
drift due to the formulation of RET, which is correct only over the
period 1980-2020.
2.2.20 NAMSCEN
2.2.21 NAMTOPH
ETRADI Pressure below which ACRANEB is active; this does not concern
RADINT or RADINT15.
2.2.22 NAMVV1
You will find hereafter comments on some switches present in the standard
file for the Arpege grid (CGRID="GAUSS"). For the others, it is necessary to
browse the Surfex User's guide. The file of namelists for the surface model
is named EXSEG1.nam.
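As a sketch, the corresponding fragment of EXSEG1.nam could look as follows (only the grid choice mentioned above is shown; everything else is omitted):

```fortran
 &NAM_PGD_GRID
   CGRID = "GAUSS",
 /
```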
2.3.1 NAM_PGD_GRID
2.3.2 NAMDIM
2.3.3 NAMRGRI
2.3.4 NAMGEM
2.3.5 NAM_ISBAn
2.3.6 NAM_SGH_ISBAn
2.3.7 NAM_SEAFLUXn
2.3.8 NAM_WATERFLUXn
CWAT_ALB Type of albedo formula used to set the albedo over water:
"TA96" selects the Taylor et al. (1996) formula for the direct albedo of
water, which depends on solar zenith angle.
2.3.9 NAM_DIAG_SURF_ATMn
LFRAC flag to save in the output file the sea, inland water, natural covers
and town fractions
2.3.10 NAM_DIAG_SURFn
LSURF_BUDGET flag to save in the output file the terms of the surface
energy balance
LSURF_VARS flag to save in the output file the surface specific humidity
for each scheme (on the four separate tiles), and on each patch of the
vegetation scheme if it exists.
LCOEF flag to save in the output file the transfer coefficients used in the
computation of the surface energy fluxes, for each scheme (on the four
separate tiles) and aggregated for the whole surface
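Putting the flags of this section together, a hedged sketch of the corresponding EXSEG1.nam fragment could be (the values are examples only, not defaults):

```fortran
 &NAM_DIAG_SURFn
   LSURF_BUDGET = .TRUE.,   ! surface energy balance terms
   LSURF_VARS   = .TRUE.,   ! surface specific humidity per scheme
   LCOEF        = .FALSE.,  ! transfer coefficients
 /
```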
2.3.11 NAM_WRITE_DIAG_SURFn
hours in the machine) are penalized in the queue by a lower priority. So,
according to the model resolution, you should optimize this gathering.
The first part really editable by you starts here: variables specific to the
user are defined. The script is self-explanatory. The name $EXPID is
to be changed every time you create a new experiment and do not want
to overwrite an existing one. The number $IPASS has to be incremented
each time you resubmit your job, if you want to proceed in the calendar
(otherwise you redo exactly the same run). The file substeps contains the
size of the inner loop (12 months); if you attempt to run more passes than
its size, the script exits. Variable $GEOM is very important. It defines the
geometry, i.e. the horizontal and vertical description of your grid. You
will encounter it in the post-processing as well as in the pre-processing. If
you intend to change it from the standard one, you have to create a new
namelist, but also many environment files (see Chapter 5). By default, the
model starts on 1st January 1989 at 00:00 UTC. Changing $YEAR0, $DATE0 and
the namelist (NINDAT) is not sufficient; here again, you need new
environment files.
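A minimal sketch of this user-editable header, with hypothetical example values (the variable names are those discussed above; none of the values are recommended defaults):

```shell
#!/bin/sh
# Hypothetical example values; adapt to your own experiment.
EXPID=MYEXP1        # experiment name: change for every new experiment
IPASS=1             # increment at each resubmission to advance in the calendar
GEOM=tl127l31r      # geometry; a new value needs new namelist + environment files
YEAR0=1989          # default start: 1 January 1989 00:00 UTC
DATE0=19890101      # changing these alone is not sufficient (see text)
echo "experiment $EXPID, pass $IPASS, geometry $GEOM, start $DATE0"
```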
The script then indicates where you will find the initial namelists and the
initial conditions (atmosphere, surface, ocean, sea-ice, river routing and
coupler). In the case of the coupler, there are 3 initial files (atmosphere,
ocean and rivers). In the case of the surface, there are 2 initial files (an
Arpege file and a text file).
If you use your own monthly boundary conditions, you have to change
$BCOND. The five $PATH* variables correspond to directories you must create
on cougar; the script cannot create them for you. If you try to submit
the default script without taking any action, your run will fail because it
cannot archive the files. This is generally the first mistake made by a new
Arpege-climat user (sometimes by an experienced one too) who attempts
to press the button of his new toy before reading the guide. You are
absolutely free to name those five directories according to your fancy, but
remember them: you will need these names in the post-processing scripts.
As long as you use the standard tl127l31r geometry, you will have no
missing file, except the restart if you decide to start at a date different from
1 January 1989. But if you decide to use another geometry, you should read
Chapter 5 first. However, you can easily create two files in a new geometry:
This is perhaps not what you want for your experiment, but the script
will not crash. The other files containing $GEOM in their name must
absolutely be created and properly referenced in the script. For example, if
the file lonlat_$GEOM does not exist, you will run one month, but crash in
the post-processing with an obscure error diagnostic.
At this stage, old Arpege-climat users need to pay attention to a new
feature: a part of the post-processing which was done outside in previous
versions is now done here. By default, you do not archive the bulk of the
model output (the former POST files). Instead, you save four types of products:
The section ends by defining additional unix variables you do not really
need to modify. The current directory is now $PATHWRK, which is a semi-
permanent place (retained for at least 24 hours). In case of problem, you
can visit this directory post mortem, but not after coming back from long
holidays.
The second section recovers the data, using the indications provided in the
preceding section, and modifies them in order to run the model as you want.
Most of it runs on toritx; the last lines run on torisx.
You need to prepare your namelist. It is saved in the file $NAMELIST$IPASS.
If you forget to create the corresponding directory (by default $HOME/namelist),
your experiment will not be able to run more than one month. In order to
respect the Gregorian calendar, the script needs a calendar file (calend_$EXPID)
with the initial date on the first line, and at least as many lines with NSTOP=nday
as months you want to process (nday is the duration of each month, e.g. 31
for January). If you have none, no harm: the script is kind enough to create
one such file for you, starting at $DATE0 and finishing in December 2100. In
principle, you should not bother with the calendar file, unless you want to do
something special. Then the model reads a few variables in the namelist and
calculates the different lengths, in time steps, for the integration duration
and the archiving frequency. Finally, the namelist is updated with the
switches which may have changed from the initial or previous namelist.
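For illustration, a calend_$EXPID file for a run starting on 1 January 1989 might begin as follows. The exact layout of the first line is an assumption; what matters is the initial date followed by one NSTOP=nday line per month:

```
19890101
NSTOP=31
NSTOP=28
NSTOP=31
NSTOP=30
```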
The second ingredient for the model is a set of restarts which contain
the state of the prognostic variables at the beginning of the run and also some
boundary forcings. This file is saved as $RESTART$IPASS for the atmosphere
(do not forget to create its directory). We first need to update the
boundary conditions of the current month from Const.Clim. This is not
sufficient to update the atmosphere restart: five files of aerosols and one
file of ozone are also needed.
Different executable files are recovered:
• nemo
• oasis
• trip
The third section is the model itself (configuration 1). Whether the run
completes properly or an error occurs, the standard output and standard
error files are printed, followed by the NODE file of the first processor (a
large file, as it contains all the prints made in parallel mode). The model
uses mpiexec for distributed-memory parallelism. This is the only parallel
part of the job.
The next lines are devoted to post-processing. Here we are still on torisx,
because we are still in the loop on the months, and we have no access to the
silo. The output files from Arpege and Surfex, one every $NHH hours, are
merged and renamed. Monthly means are calculated (postM), and time series
are extracted for France (postF), the globe (postG) and the northern
hemisphere/Europe (postE). The namelist is updated: NINDAT is advanced by
one month.
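As a sketch of what "advanced by one month" means for a YYYYMM01-style date (a hypothetical illustration, not the script's actual code; note that, unlike the model script, this sketch does handle the case where the month exceeds 12):

```shell
#!/bin/sh
# Advance a date of the form YYYYMM01 by one month, with year rollover.
NINDAT=19891201
YYYY=$(printf '%s' "$NINDAT" | cut -c1-4)
MM=$(printf '%s' "$NINDAT" | cut -c5-6)
MM=$((${MM#0} + 1))                      # strip a leading zero, then increment
if [ "$MM" -gt 12 ]; then MM=1; YYYY=$((YYYY + 1)); fi
NINDAT=$(printf '%04d%02d01' "$YYYY" "$MM")
echo "$NINDAT"                           # 19900101
```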
The loop on the months being finished, we are back on toritx. The sea-ice
restart file is archived (for safety) on cougar. The monthly means are
also archived in the silo. The time series are temporarily saved in $PATHWRK.
Note that the treatment of the global and hemispheric time series includes
an explicit loop on the names of the fields extracted. If the file $LISTPP
has been modified, then the two loops with index $CFLD must be modified
accordingly.
The other restart files are saved for the next launch (atmosphere, surface,
ocean, rivers, coupler).
At the end of the year, i.e. in each launch if NSTEP=12, the time series files
are gathered by tar into annual files which are sent to the silo. Here again
the two loops on $CFLD depend on the content of $LISTPP.
The purpose of the fifth part is to prepare the run for the next month. It
simply consists of renaming the atmosphere namelist; then lrelan launches
the model again for the next year. The last lines erase from tori the old
namelists and the old restarts.
4
Post-processing
1. monthly means
2. global series
3. “hemispheric” series
4. local series
The non-zero value is generally 1, but in columns four and five it may be 2.
This indicates that you wish daily series of daily averages (based on the
four 6-hourly values). Setting 1 means that you wish 6-hourly data (or
whatever frequency you have decided with NHH). Local series are always saved
at the highest frequency, because they generally occupy less space.
At this stage, the model script will create the files corresponding to what you
asked for: one file per month for monthly means, one file per year and per
field for global and hemispheric series, and one file per year for local series.
In the case of Aladin, the global and hemispheric series cover the whole CI
domain (as do the monthly means). You can choose the C domain or a smaller
one for local series.
By default, all variables calculated by Full-Pos are saved as monthly means.
Some variables come from the Surfex post-processing, but they have been
merged with the Arpege ones in the model script. Tables 4.1 and 4.2 list
what is available in a monthly mean file. Such a file is in Arpege format,
which means that you can directly access each field by its name in the first
column. In Table 4.1, the values of NN are 00000 (for 100000 Pa), 92500,
85000, 70000, 60000, 50000, 40000, 30000, 25000, 20000, 15000, 10000, 7000,
5000, 3000 and 1000 Pa; the value LL corresponds to the lowest level of the
model (by default 31).
A certain number of unix scripts are available on sxclimat:
/eac9/deque/V5.2/postprocessing/
First of all, you want to know about the climate of your model. You have run
a multi-year simulation and you want to create a monthly climatology from
your monthly files. The script you need is moychmens.sh. It calculates, for
each calendar month, the time average between years $YEAR1 and $YEAR2
for experiment $EXPID. You must of course edit it to set where the input data
is (on cougar) and where you want to write your results (on sxclimat). You
can also edit fort.4 to select the fields to average (by default, all fields
are processed).
Seasonal means are often preferred to synthesize the model climate. The
script moychsais.sh is very similar to moychmens.sh. It calculates for each
calendar season the average of the three months. Of course, the multi-year
monthly means are assumed to already exist. The variables to edit in the
script are the same as in moychmens.sh.
At this stage, you have your material available, except that it is written in
an unusual format, sometimes in uncommon units, and some important fields
are missing. A first useful script is mmm_arp5a.sh. It calculates, for each
field, the global mean, the minimum and the maximum. The manual editing
of the script is similar to that of the last two scripts. In addition, the
second column of fort.4 contains a multiplier for each field. For example,
mean sea level pressure is multiplied by 0.01 to transform Pa into hPa. For
the temperature fields, one may wonder about a multiplier like 1.0001: this
is a trick to indicate that you want K to become °C. With its multiplier,
precipitation is transformed into mm/day. A certain number of fields are
added, which are simple and useful combinations of the existing ones:
• FLU.EVAPORAT evaporation
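The two-column structure of fort.4 can be sketched as follows. The field names here are purely hypothetical placeholders (use the names found in your own monthly-mean files); only the 0.01 and 1.0001 multipliers come from the text:

```
FIELD.MSLPRESSURE   0.01
FIELD.TEMPERATURE   1.0001
FIELD.PRECIPTOTAL   1.0
```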
The results are in a simple ASCII file named mmm (for mean, minimum,
maximum). Because the wind is rotated and rescaled in the case of a stretched
grid (and, to a lesser extent, with the Aladin grid), you need a file named
$COEFUV. If you use a new geometry, you will have to create such a file. Edit
and use the script:
/eac9/deque/interpolation/coefuv.sh
If you use a new geometry, you have certainly already created the .grid file
you need. It is important that the U- and V-components of vectors appear
in pairs at the end of fort.4. The (even) number of such fields is $NCHV.
To go further than global means, you may want to plot maps of your results.
The script prich_arp5a.sh works in a similar way (multiplication, wind
rescaling, additional variables) but writes the full model grid in ASCII
format. In order to plot the individual fields, you have to unzip, split into
files of $NPT lines, and paste with the lonlat_$GEOM file ($NPT is the number
of lines of lonlat_$GEOM). To create this file from your $GEOM.grid file, use
the filter
/eac9/deque/interpolation/grid2lonlat.pl
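The unzip/split/paste sequence above can be sketched on a tiny synthetic example (3 grid points, 2 fields; in a real run $NPT is the line count of lonlat_$GEOM and the input is the prich_arp5a.sh output, so all file names and values below are stand-ins):

```shell
#!/bin/sh
# Tiny synthetic stand-in for the real files: 3 points, 2 fields.
cd "$(mktemp -d)" || exit 1
NPT=3
printf '0.0 45.0\n1.5 45.0\n3.0 45.0\n' > lonlat_demo          # lon lat pairs
printf '280.1\n281.0\n279.6\n275.2\n276.0\n274.8\n' > fields   # field1 then field2
split -l "$NPT" fields field_                                  # field_aa, field_ab
for f in field_*; do
  paste lonlat_demo "$f" > "$f.xyz"                            # <lon lat value> lines
done
wc -l < field_aa.xyz
```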
Many plotting packages accept <lon,lat,value> text files as input. One
of the best is the (free) GMT:
gmt.soest.hawaii.edu/gmt/doc/html/GMT_Docs/GMT_Docs.html
If you wish to interpolate from the model grid to any type of grid (latitude-
longitude, Lambert, polar stereographic, or a list of cities), please read the
documentation on interpolation in Arpege-climat.
Time series are extracted from the Full-Pos files for 3 types of domains.
The simplest domain is the global one, which contains all model grid points.
Because of the size, you will certainly be limited to a few fields. We propose
a default of five fields which are extensively used in climate analyses,
because observation series over many parts of the globe are available for
comparison. The only difference between the “hemispheric” series and the
global series is that the grid is truncated to the first points. This allows
more fields to be considered. Of course, we avoid selecting the global fields
a second time, and the proposed list corresponds to surface fields often
encountered in regional analyses (e.g. the Prudence database):
You can save model fields on a small domain. This allows to store many fields
at high frequency. The domain is just a list of model grid points in an Ascii
file with lines in the form <longitude, latitude, number of the point in the
model grid>. You can generate such a file with the domaine.sh script. The
data is saved in a single file per year, contrary to the global or hemispheric
time series. So it is important to know the rank of each field for selection.
To read such a file, the external loop is on time and the internal loop is on
fields.
Some fields in this list are already in the global or hemispheric series, but
at a higher frequency. This list has accumulated the requests of impact
studies over the last 15 years, so it should cover many needs. The script
extra_jj_loc.sh extracts a sub-domain (assumed to be included in the local
domain). You can select the fields and apply a multiplier through the fort.4
internal file, in the same way as with the seasonal means.
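The exact fort.4 syntax follows the seasonal-means case. As a stand-alone illustration of applying a multiplier, here is an awk one-liner scaling the value column of a <lon,lat,value> text file; the factor 86400 (converting a flux in kg/m2/s to mm/day) is just an example:

```shell
# Scale the value column of a <lon lat value> text file by a factor,
# e.g. 86400 to convert precipitation from kg/m2/s to mm/day.
printf '0.0 45.0 0.0001\n1.5 45.0 0.0002\n' > in.txt   # demo input
awk -v fac=86400 '{ printf "%s %s %g\n", $1, $2, $3*fac }' in.txt > out.txt
```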
5
Pre-processing
5.1 Rationale
The first step when you decide to change geometry is to create twelve
climatology files. This corresponds to the old 923 configuration, and is now
done by incli.sh. To use this script, you first give a name to your geometry
(variable GEOM) and create a nam_$GEOM file in your working directory.
This namelist file contains information about your geometry (most entries
are common with those of Chapter 2):
NRGRI number of longitudes per latitude circle (for a new NDGL value,
use the script tori:/cnrm/gc/mrga/mrga561/V5.2/rgrid to generate
NRGRI)
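A minimal sketch of creating the nam_$GEOM file; the geometry name and the NRGRI values below are placeholders (the real file holds the full geometry description):

```shell
# Create a minimal nam_$GEOM file. The geometry name and the NRGRI
# values are placeholders; NRGRI belongs to the NAMRGRI namelist.
GEOM=t63demo
cat > nam_$GEOM <<'EOF'
 &NAMRGRI
   NRGRI(1)=18, NRGRI(2)=25, NRGRI(3)=36,
 /
EOF
```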
Aladin, you get a ${GEOM}CIE.grid file which contains the central,
interaction and extension zones. In the case of ASCII or IEEE Aladin files, it is
essential that the name indicates which area (C, CI or CIE) is concerned. FA
files such as nclim4* are self-documented about the area they cover. The
script
script
/eac9/deque/interpolation/redgridald.sh
creates the ${GEOM}CI.grid and ${GEOM}C.grid files from ${GEOM}CIE.grid
1. NAMFPC
2. NAMFPD
3. NAMFPG
If you wish to change only the vertical discretization, the above script does
not work (another mystery of Arpege coding complexity). Instead, use
arp2arp_vert.sh
Note that the vertical interpolation is not as accurate as in the 927
configuration (linear interpolation in pressure, with constant extrapolation,
instead of the APACHE subroutine). If near-surface accuracy is an important
issue, use the 927 configuration twice, with an intermediate horizontal
resolution finer than your geometry. In the case of Aladin you have no
alternative.
• creation of the PGD file with the physiographic fields, which depend on
the geometry of your domain.
• addition to the PGD file of the prognostic fields, which depend on the
geometry of your domain and on the date and time of the start of your run.
The resulting file is called the PREP file in the SURFEX environment.
The PREP file is what the Arpege world calls the initial surface file.
The script initsfxarp will create the new surface initial file. You need to
edit the script to specify:
- the SURFEX namelist (Chapter 2), named OPTIONS.nam by default, if you
want to change the geometry and/or the surface schemes you will use for
your run;
- whether a new PGD file must be created: set the variable makepgd to
“yes” or “no”. If you change the geometry, you will need to create a new
PGD;
- the atmospheric file from which the prognostic fields will be read. The file
is an Arpege restart file;
- the directories where the PGD and PREP files will be kept.
You will end up with the PGD (if makepgd=”yes”) and the PREP files. To
be more precise, two files are created for each set of SURFEX data: a file
with the suffix .fa containing the data, and a file with the suffix .txt
containing a description of the data (type of geometry, date and time of the
prognostic data, the surface schemes used, ...).
Thus, the script will create PGD.fa / PGD.txt (if makepgd=”yes”) and
PREP.fa / PREP.txt.
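The settings to edit in initsfxarp can be summarized as follows. Only makepgd is named above; the other variable names are illustrative stand-ins for what the script asks for, so check the script itself:

```shell
# Hypothetical excerpt of the settings to edit in initsfxarp. Only
# "makepgd" is named in the text; the other names are illustrative.
makepgd="yes"                         # "yes": create a new PGD file
sfx_namelist="OPTIONS.nam"            # SURFEX namelist (see Chapter 2)
atm_restart="restartfile"             # Arpege restart file read for PREP
pgd_dir="$HOME/surfex/pgd"            # where PGD.fa / PGD.txt are kept
prep_dir="$HOME/surfex/prep"          # where PREP.fa / PREP.txt are kept
```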
We invite you to keep the SURFEX schemes used in the examples and
to check the values in the complete Externalized Surface User's Guide
(http://www.cnrm.meteo.fr/surfex).
(ref: Compilation of auxiliary binaries for Surfex, to create the binaries
needed by initsfxarp)
6
Coupling with the ocean
• LCURR activated to take the currents into account in the wind stress
computation
• LGELATO deactivated for the simplest ice model (ice if the SST is below
the freezing temperature).
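The simple rule used when LGELATO is deactivated can be illustrated in one line; the 271.35 K threshold (about -1.8 C, roughly the freezing point of sea water) is an assumption for the demo, not a value taken from the model:

```shell
# Illustration of the "ice if SST below freezing" rule (LGELATO off).
# The 271.35 K threshold is an assumption for this demo.
printf '270.9\n271.4\n275.0\n' > sst.txt     # demo SSTs in Kelvin
awk '{ print $1, ($1 < 271.35 ? "ice" : "open water") }' sst.txt > mask.txt
```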
The namelist update also concerns the ocean namelist (named namelist)
and the coupler namelist (named namcouple). For the ocean, the initial date
(ndate0) is updated, together with the final time step number (nitend) and
the writing frequencies of the restart (nstock) and diagnostic (nwrite) files.
For the coupler, the initial date (_inidate_) is updated together with the
integration time (_duree_).
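These substitutions can be sketched with sed; the patterns, demo file contents and placeholder conventions below are illustrative, and the actual run script may proceed differently:

```shell
# Sketch of the date substitutions in the ocean and coupler namelists.
# The sed patterns and demo file contents are illustrative only.
DATE0=19790101
NITEND=2232
printf ' ndate0 = 0\n nitend = 0\n' > namelist      # demo ocean namelist
printf ' inidate = _inidate_\n' > namcouple         # demo coupler namelist
sed -i "s/ndate0 *= *[0-9]*/ndate0 = $DATE0/" namelist
sed -i "s/nitend *= *[0-9]*/nitend = $NITEND/" namelist
sed -i "s/_inidate_/$DATE0/" namcouple
```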
Just after the atmosphere boundary condition files, the ocean and coupler
ones are added. The former contain coordinates, bathymetry, horizontal
diffusion coefficients, geothermal heating and climatological runoff, while the
latter contain grid, mask and area information, weights for the Gaussian
interpolation from the ocean to the atmosphere, and a field name table.
The coupler and the ocean executable files are provided (named oasis and
opa9 respectively in the script). It is possible to use another resolution for
the ocean, but you will have to recompile the ocean model accordingly, and
changes will also have to be made in the coupler restart and interpolation
files (see next section).
After the run, in addition to the atmosphere restart file, the ocean restart
file together with the coupler restart files are copied in order to be used at
the beginning of the next month.
At the end of the script, the ocean namelist (similarly to the atmosphere
one) is copied in order to be used at the beginning of the next month.
1. apply the recipes of Chapter 5 to deal with the chosen new geometry
2. update the coupler boundary file with the new grid and mask
information (NetCDF format), and modify the namelist namcouple accordingly
1. re-compile the Nemo code with the appropriate compilation key for the
chosen resolution, together with the specific subroutine sources associated
with that resolution
7
Compiling the model source
You are now eager to run Arpege-climat on your own machine. From a
technical point of view, you need to compile the source code and the ancillary
libraries (for example Gribex, for GRIB coding). This chapter describes how
to install the ancillary libraries, and how to install and use GmkPack (a
Météo-France toolbox) to compile the Arpege-climat source code.
You will need the Arpege-climat source code and the ancillary library
packages to compile the model.
GmkPack is a toolbox made of bash scripts, with some perl scripts, for
creating an environment in which to compile source code. Once this
environment is created, GmkPack can perform the compilation, manage object
libraries built from the compilation results, and build executable files from
the object libraries.
From the definition above you may wonder whether GmkPack is a competitor
to make. The answer is yes. It was developed after limitations were found
in the use of make. For instance, make does not distinguish between two
files which have the same name (basename) but a different path (dirname).
The practice of developing complex software like Arpege or Aladin has led
to the development of GmkPack.
While GmkPack is expected to work on any source code, it is especially
designed for Arpege and Aladin; that is why its customization looks Arpege-
Aladin-oriented and why it includes some kinds of plug-ins for source code
which needs particular pre-treatment.
The package gmkpack.6.3.1-eac.tar.gz has been modified to compile the
model. The modifications are minor and concern the list of libraries needed
to create the binary.
arp : this is the core of the model. It contains : adiab/ climate/ dia/
mwave/ ocean/ phys_ec/ transform/ ald_inc/ common/ function/
namelist/ onedvar/ pp_obs/ utility/ c9xx/ control/ kalman/ nmi/
parallel/ setup/ var/ canari/ dfi/ module/ obs_preproc/ phys_dmn/
sinvect/
... and the following directories : sur / tal / tfl / uti / xrd / mpa
• to choose and create (if needed) the directories where the include files
and the libraries will be copied (for example:
/home/user_adm/include and /home/user_adm/lib/gnu)
In all the packages, you will find README files that give you more information
about the compilation and installation of the package.
auxlibs_installer.1.3.tgz
How to proceed?
During the interactive installation you will be asked whether to compile the
libraries with a 32-bit or a 64-bit representation of real numbers. The
commands are:
% cp auxlibs_installer.1.3.tgz /home/user1/install
% cd /home/user1/install
% gzip -d auxlibs_installer.1.3.tgz
% tar xf auxlibs_installer.1.3.tar
% mkdir /home/user1/include
% mkdir -p /home/user1/lib/pgi
% cd auxlibs_installer.1.3
% ./driver_interactive
....
% cd /home/user1/install
lapack_installer.1.2.tgz
If you are working on tori you will not need to install this package. You
can select the automatic or the interactive installation: the package is
installed by running the script driver_automatic or driver_interactive.
How to proceed?
During the interactive installation you will be asked whether to compile the
libraries with a 32-bit or a 64-bit representation of real numbers.
% cp lapack_installer.1.2.tgz /home/user1/install
% cd /home/user1/install
% gzip -d lapack_installer.1.2.tgz
% tar xf lapack_installer.1.2.tar
% cd lapack_installer.1.2
% ./driver_interactive
.....
% cd /home/user1/install
% /bin/rm -rf lapack_installer.1.1
odbdummy.tar.gz
% cp odbdummy.tar.gz /home/user1/install
% cd /home/user1/install
% gzip -d odbdummy.tar.gz
% tar xf odbdummy.tar
% cd odbdummy
psmiledummy-1.tar.gz
The script psmile.make will create and install the libraries libpsmile.MPI1.a
and libpsmiledummy.a. You need to edit the script to choose the compiler
and directories for installation.
How to proceed?
% cp psmiledummy-1.tar.gz /home/user_adm/install
% cd /home/user_adm/install
% gzip -d psmiledummy-1.tar.gz
% tar xf psmiledummy-1.tar
% cd psmile
The script build_gmkpack will install the GmkPack toolbox. By default, the
installation script will use the directory /tmp; you can modify the variable
GMKTMP to change this value.
During the installation, the script allows you to create a configuration file
for the compilation of Arpege-climat. It is also possible to do this
afterwards, or to create other configuration files, using the script gmkfilemaker.
Manual pages are installed, as well as an HTML tutorial.
How to proceed?
Here, we choose to create the configuration file later.
% export GMKTMP=/tmp/tmp.user_adm
% cp gmkpack.6.3.1-eac.tgz /home/user_adm/install
% cd /home/user_adm/install
% gzip -d gmkpack.6.3.1-eac.tgz
% tar xf gmkpack.6.3.1-eac.tar
% cd gmkpack.6.3.1
% ./build_gmkpack
...
The user environment has been modified as specified above (GMKROOT,
PATH, MANPATH).
As configure would do, the script gmkfilemaker gathers information from
the system, asks the user for complementary information, and creates a
configuration file. This command can be run more than once, in particular
to run GmkPack with different compilers.
The files are stored under the directory $GMKPACK_SUPPORT/arch. By default,
GMKPACK_SUPPORT is the directory /home/user_adm/install/gmkpack_support,
where /home/user_adm/install is the GmkPack installation directory. After
running this command, it is recommended to edit the resulting configuration
file and make sure the setup looks correct. It is also an opportunity to
improve the customization of the configuration files.
The files SXF90.TORI.x and PGI.LXEAC7.x are examples of configuration
files: they have been created by running gmkfilemaker and then customized
to run a coupled model on TORI and an uncoupled model on the Linux PC.
The compilation of the libraries needed for the coupling (libmpp_io.a and
libpsmile.MPI1.a) is described in Oasis:
At this stage, all the external libraries and GmkPack are installed, and a
configuration file for compilation has been created. You now need to install
the Arpege-climat source code and the compilation script.
Here you will find a how-to to help you with GmkPack. You are invited to
read through the tutorial
file:///home/user_adm/install/gmkpack/doc/index.html
for more information.
For GmkPack, there are two types of users: the administrator of the packs
and the developer.
• the administrator will create the reference packs. He will choose a di-
rectory to store the packs (ROOTPACK) with public access for developers.
• the developer will create local packs (in his directory HOMEPACK) based
on a reference pack, to test his new source code.
The user will need to modify his user profile file and create the directories
when needed.
• Administrator environment :
• Developer environment :
# Setup for GMKPACK : developer ’fred’
GMKROOT=/home/user_adm/install/gmkpack
ROOTPACK=/home/user_adm/public/packs
GMKTMP=/tmp/tmp.fred
HOMEPACK=$HOME/mypacks
HOMEBIN=$HOME/mypacks/binpack
GMKFILE=SXF90.TORI
PATH=$GMKROOT/util:$PATH
MANPATH=$GMKROOT/man:$MANPATH
export GMKROOT ROOTPACK HOMEPACK HOMEBIN GMKTMP GMKFILE PATH MANPATH
% tar xf /home/user_adm/install/src_arp502_export.01.tar
% ls -l
total 12
drwxr-xr-x 17 user_adm gr1 4096 Aug 21 09:43 ald
drwxr-xr-x 29 user_adm gr1 4096 Aug 21 09:43 arp
drwxr-xr-x 7 user_adm gr1 63 Aug 21 09:43 mpa
drwxr-xr-x 7 user_adm gr1 79 Aug 21 09:43 mse
drwxr-xr-x 7 user_adm gr1 79 Aug 21 09:43 sur
drwxr-xr-x 6 user_adm gr1 65 Aug 21 09:43 tal
drwxr-xr-x 6 user_adm gr1 65 Aug 21 09:43 tfl
drwxr-xr-x 3 user_adm gr1 23 Aug 21 09:43 uti
drwxr-xr-x 18 user_adm gr1 4096 Aug 21 09:44 xrd
The next step is to compile, running the script ics_arpclim. The binary is
created under $ROOTPACK/arp502_main.01.SX8RV20r400.x.pack/bin and
is named ARPCLIM:
on tori: qsub ics_arpclim
Do not forget to change the value of the elapsed time request. Error and
output messages will be kept in the job output file.
on PC: ics_arpclim 1>arpclim.out 2>&1
The file arpclim.out will keep all error and output messages.
The user has setup the GmkPack environment. The reference pack
$ROOTPACK/arp502_main.02.SX8RV20r400.x.pack
has been created and compiled by the administrator.
The administrator has first created
$ROOTPACK/arp502_main.01.SX8RV20r400.x.pack.
Then, he has created
$ROOTPACK/arp502_main.02.SX8RV20r400.x.pack
to insert modified routines on top of arp502_main.01.
You will put your own routines, following the directory structure of Arpege,
in the directory : $HOMEPACK/arp502_test/src/local
For example, if you have modified the routine aplpar.F90
% cd $HOMEPACK/arp502_test
% cp $HOME/work/aplpar.F90 src/local/arp/phys_dmn
The next step is to compile, running the script ics_arpclim. The binary is
created under $HOMEPACK/arp502_test/bin and is named ARPCLIM.
on tori: qsub ics_arpclim
Error and output messages will be kept in the job output file.
on PC: ics_arpclim 1>arpclim.out 2>&1
The file arpclim.out will keep all error and output messages.
• cleanpack : remove all files but source files (run after the command
resetpack)
• scanpack : scan the source code files within a pack (very useful within
$HOMEPACK/src/local)
These binaries are created by GmkPack through the scripts ics_pgd and
ics_prep.
You can create a developer pack based on the administrator pack with the
following command:
The prognostic fields added to the PGD file come from a restart file, read
by the binary PREP. This version of PREP can read ASCII or GRIB files.
You will need to extract the fields from the restart file (using xtrsfc) and
then convert this FA file to a GRIB file (using fa2gb_arp), as is done in the
script initsfxarp.
xtrsfc is created by running the script xtrsfc.sh provided here.
fa2gb_arp is created using the Makefile and source code provided in fa2gb_arp.tar.
The scripts and Makefile provided are set up to run on tori, so the
SURFEX initial file is created on tori.
Provided the license to access the source has been signed, the packages and
files are available on tori:
/cnrm/gc/mrga/mrga561/compil/aux/
auxlibs_installer.1.3.tgz
fa2gb_arp.tar
gmkpack.6.3.1-eac.tar.gz
initsfxarp
lapack_installer.1.2.tg
odbdummy.tar.gz
psmiledummy-1.tar.gz
xtrsfc.sh
/cnrm/gc/mrga/mrga561/compil/arp5.2/
GFORTRAN.LXEAC7.x
SXF90.TORI.x
ics_arpclim
intfb_arp502_export.01.tar.gz
src_arp502_export.01.tar.gz
userpack.SXF90
When looking at the model scripts, you can see several executable files which
are not the model itself. If you want to port the model to another computer
or to modify the executables, you need access to their sources. They are
included in compilation scripts with the same names as the executables. The
scripts are located in:
/cnrm/gc/mrga/mrga561/compil/misc/
The pre-processing scripts are updclig5a, which updates the restart
boundary conditions, and updozo5b, which updates the coefficients for the ozone
parameterization.
For post-processing, you need postM5b, postG5a, postE5a and postF5a.
8
Adding variables to the model
You can read through the document “User's guide to add new GFL variables
or new GFL attributes in ARPEGE/IFS, ALADIN, AROME: cycle 32” by
K. Yessad to learn in detail what adding a new GFL variable fully implies.
Here we will give an illustration with the example “adding the Convective
Vertical Velocity variable”. The GFL variable name will be YCVV. We will
show the steps to follow.
First of all, it is necessary to get familiar with the structures TYPE_GFL_COMP
and TYPE_GFL_NAML. These structures are descriptors and define the
attributes of the GFL. Several attributes of the TYPE_GFL_NAML structure are
also attributes of TYPE_GFL_COMP. You will be able to modify the default
values of the GFL component through the namelist NAMGFL.
You have to set up these attributes when adding a new GFL.
The TYPE_GFL_COMP structure :
! and NL residuals)
INTEGER(KIND=JPIM) :: NCOUPLING ! 1 if field is coupled by Davies relaxation,
! 0 if not, -1 if coupled with reference
! value for coupling REFVALC
REAL(KIND=JPRB) :: REFVALC ! Reference value for coupling, used in case
! NCOUPLING==-1
LOGICAL :: LBIPER ! True if field must be biperiodised inside
! the transforms
! End LAM specific attributes (Arome/Aladin)
CHARACTER(LEN=12) :: CSLINT ! S.L. interpolation "type"
INTEGER(KIND=JPIM) :: MP ! Basic field "pointer"
INTEGER(KIND=JPIM) :: MPL ! zonal derivative "pointer"
INTEGER(KIND=JPIM) :: MPM ! Meridional derivative "pointer"
INTEGER(KIND=JPIM) :: MP9 ! Basic field "pointer" t-dt
INTEGER(KIND=JPIM) :: MP9_PH ! Basic field "pointer" for Physics
INTEGER(KIND=JPIM) :: MP1 ! Basic field "pointer" t+dt
INTEGER(KIND=JPIM) :: MP5 ! Basic field "pointer" trajectory
INTEGER(KIND=JPIM) :: MP5L ! zonal derivative "pointer" trajectory
INTEGER(KIND=JPIM) :: MP5M ! Meridional derivative "pointer" trajectory
INTEGER(KIND=JPIM) :: MPSLP ! Basic field "pointer" physics
INTEGER(KIND=JPIM) :: MPSP ! Basic field "pointer" spectral space
INTEGER(KIND=JPIM) :: MP_SPL ! Basic field "pointer" spline interpolation
INTEGER(KIND=JPIM) :: MP_SL1 ! Basic field "pointer" in SLBUF1
INTEGER(KIND=JPIM) :: MP_SLX ! Basic field "pointer" in SLBUF1 for CPG_PT
INTEGER(KIND=JPIM) :: MPPT ! Physics tendency "pointer"
INTEGER(KIND=JPIM) :: MPPC ! Predictor/corrector auxiliary array "pointer"
TYPE(TYPE_GFL_COMP),POINTER :: PREVIOUS ! Pointer to previously def. field
5. yomfa.F90 : defines the Grib packing options and the list of Field Arpege
Descriptors (FAD)
8.1.2 Step-By-Step
For our new CVV variable, we want to be able to read/write the GFL to a
file and to use/compute it in a convection scheme. No advection or horizontal
diffusion will be applied to this variable. The default setup will not activate
the CVV variable; each time you want to use it, you will specify the
attributes through the namelist NAMGFL:
&NAMGFL
...
YCVV_NL%LGP=.T.,
...
&
• sufa.F90 : set the name of FAD and update the printout of YOMFA
...
USE YOMFA , ONLY : NVGRIB, NBITPG ,NBITCS ,NSTRON ,NPULAP , &
...
& YFAFSP5 ,YFASRC ,YFASDSAT ,YFACVV ,...
...
yom_ygfl.F90 : add YCVV and YCVV_NL and increment the number of
individual GFLs (JPNAMED_GFL)
sudim1.F90 : set the default attributes and read the namelist NAMGFL
Modifications done in sudim1.F90 (you will have to look through the whole
routine):
• in part 1.1.2a:
! 1.1.1 Set implicit default values for items other than GFL attributes
...
! 1.1.2a Set implicit default values for GFL attributes
! (other than "GHG", "GRG", "TRAC", "AERO", "FORCING",
! "EASY DIAG" and "EXT").
YCVV_NL%LGP= .FALSE.
YCVV_NL%LSP= .FALSE.
YCVV_NL%NREQIN= 0
YCVV_NL%REFVALI= ZREFVALI_USELESS
YCVV_NL%LREQOUT= .FALSE.
YCVV_NL%LGPINGP= .FALSE.
YCVV_NL%LTRAJIO= .FALSE.
YCVV_NL%LADV= .FALSE.
...
! * Default for attribute LCDERS,LT9,LT1,LT5
! This value is automatically computed in SUGFL.
YCVV_NL%LPT= .FALSE.
YCVV_NL%LPC= .FALSE.
...
! * Default for attribute CNAME
! This value is automatically computed in SUGFL.
...
! * Default for attribute IGRBCODE
! This value is automatically computed in SUGFL.
...
YCVV_NL%NCOUPLING= 0
...
! * Default for attribute REFVALC
! REFVALC is useful only when NCOUPLING=-1,
! but to avoid unitialised values, REFVALC is set to -999
! in the other cases.
...
YCVV_NL%REFVALC= ZREFVALC_USELESS
YCVV_NL%LSLHD=.FALSE.
YCVV_NL%LVSPLIP=.FALSE.
YCVV_NL%LRSPLINE=.FALSE.
YCVV_NL%LQM=.FALSE.
YCVV_NL%LQMH=.FALSE.
YCVV_NL%LHV=.FALSE.
...
! 1.2.1 Read NAMDIM and NAMGFL
CALL POSNAM(NULNAM,’NAMDIM’)
READ(NULNAM,NAMDIM)
CALL POSNAM(NULNAM,’NAMGFL’)
READ(NULNAM,NAMGFL)
...
• in part 1.3.3, reset the GFL attributes LSP and LGP to .FALSE. for
configurations where there is no GFL at all.
YCVV_NL%LSP= .FALSE.
YCVV_NL%LGP= .FALSE.
• in part 1.9, make sure that the GFL is not defined as both a Grid-Point
and a SPectral variable
IF(YCVV_NL%LGP.AND.YCVV_NL%LSP) THEN
WRITE(NULERR,’(’’BOTH YCVV_NL%LGP AND YCVV_NL%LSP TRUE’’)’)
CALL ABOR1(’ SUDIM1 ’)
ENDIF
sugfl.F90 defines and sets the attributes of YCVV by calling the routine
DEFINE_GFL_COMP.
In an Arpege-climat run, sugfl.F90 is called after sudim1.F90: until
now, only the attributes of YCVV_NL have been updated. The pointers are set
in this routine. You should insert the lines dealing with your GFL at the
same rank in the list of GFLs in the 3 parts of the code: 1.1, 1.2 and 1.3.
YGFL%NUMFLDS=0
...
! All gridpoint fields have to be set up before the spectral ones
! (i.e. part 1.2 SHOULD be done before part 1.3)
! * Simple GFL variables:
! (order should be the same in parts 1.1, 1.2 and 1.3).
INCR=0
...
YSDSAT => YGFLC(JPGFL-INCR)
INCR=INCR+1
YCVV => YGFLC(JPGFL-INCR)
INCR=INCR+1
...
! 1.2 Grid-point GFL.
...
IF(YCVV_NL%LGP) THEN
IGFLPTR=YGFL%NUMFLDS+1
YCVV=>YGFLC(IGFLPTR)
CALL DEFINE_GFL_COMP(YDGFLC=YCVV,CDNAME=YFACVV%CLNAME,KGRIB=NGRB149,&
& LDGP=.TRUE.,KREQIN=YCVV_NL%NREQIN, &
& PREFVALI=YCVV_NL%REFVALI, LDREQOUT=YCVV_NL%LREQOUT,LDERS=LLDERS,&
& LD5=LL5,LDT1=LLT1)
ENDIF
! 1.3 Spectral GFL.
...
IF(YCVV_NL%LSP)THEN
CALL ABOR1(’SUGFL: spectral CVV not coded ’)
ENDIF
sudyn.F90 sets the attributes of the component YCVV after the calculation
of the semi-Lagrangian interpolation (subroutine SUCSLINT):
IF(YCVV%LACTIVE) THEN
CALL SUCSLINT(YCVV_NL%LSLHD,YCVV_NL%LRSPLINE,YCVV_NL%LVSPLIP, &
& YCVV_NL%LHV,YCVV_NL%LQM,YCVV_NL%LQMH,CLSLINT)
! Notice : this field is not supposed to be prognostic !
LLPT = LPC_FULL .AND. LLDIAB .AND. YCVV_NL%LPT.AND.LSLAG
LLPC = LPC_FULL.AND..NOT.LTWOTL.AND.YCVV_NL%LPC
CALL SET_GFL_ATTR(YCVV,LDADV=YCVV_NL%LADV,LDT9=LLT9,LDPHY=LSLPHY, &
& LDPT=LLPT,LDPC=LLPC,KCOUPLING=YCVV_NL%NCOUPLING, &
& CDSLINT=CLSLINT,PREFVALC=YCVV_NL%REFVALC)
ENDIF
Our variable YCVV will be used in the routine accvimpgy.F90. The call tree
is :
CALL CPG(list-of-arguments)
CALL MF_PHYS(list-of-arguments)
CALL APLPAR(list-of-arguments)
CALL ACCVIMPGY(list-of-arguments)
The modifications are linked to the addition of the CVV arguments in these
routines.
GFL is the array (dimensions NPROMA,NFLEVG,YGFL%NDIM,NGPBLKS) holding
the t and t-dt GFL variables.
cpg.F90 is to be changed:
!* 4.4 MF-PHYSICS.
...
CALL MF_PHYS &
& (CDCONF,IBL,IGPCOMP,IST,IEND,IPCT,IGL1,IGL2,IGL3,IGL4,IOFF,ISTGLO, &
& LDRETCFOU,LDWRTCFOU0,LLADNMI,YI%LADV,YL%LADV,YS%LADV,YR%LADV, &
& YG%LADV,YTKE%LADV,YSDSAT%LADV,YCVV%LADV,YEXT(1)%LADV, &
...
& GFL(1,1,YTKE%MP,IBL),GFL(1,1,YA%MP,IBL),GFL(1,1,YSRC%MP,IBL),&
& GFL(1,1,YSDSAT%MP,IBL),GFL(1,1,YCVV%MP,IBL),&
& GFL(1,1,YQVA%MP,IBL),GFL(1,1,YEXT(1)%MP,IBL),&
...
& GFL(1,1,YA%MP9,IBL),GFL(1,1,YSRC%MP9,IBL),&
& GFL(1,1,YSDSAT%MP9,IBL),GFL(1,1,YCVV%MP9,IBL),&
& ZDUMARR,GFL(1,1,YEXT(1)%MP9,IBL),PGMVS(1,YT9%MSP,IBL),&
...
& ’MF_’)
mf_phys.F90 is to be changed:
IF(LCVPGY) THEN
DO JLEV=1,NFLEVG
DO JROF=KST,KEND
PGFLT1(JROF,JLEV,YCVV%MP1)=PCVVT0(JROF,JLEV)
ENDDO
ENDDO
ENDIF
aplpar.F90 is to be changed:
..
REAL(KIND=JPRB) ,INTENT(INOUT) :: PCVV(KLON,KLEV)
...
! 1.- INITIALISATIONS COMPLEMENTAIRES
...
IF(LCVPGY.AND.KSTEP == 0) THEN
DO JLEV=KTDIA,KLEV
DO JLON=KIDIA,KFDIA
PCVV(JLON,JLEV)=0.0_JPRB
ENDDO
ENDDO
ENDIF
...
! 7.3.1 Shallow + Deep convection
CALL ACCVIMPGY ( KIDIA,KFDIA,KLON,NTCVIM,KLEV,&
& PALPH, PAPHIF, PAPRS, PAPRSF, PCP,&
& PDELP,PLH,PLNPR,ZQV,ZQCI,ZQCL,ZQLIS,PQSAT,PQW,&
& PR,PRDELP,PT,PTW, PU, PV,&
& PCPS,PGM,ZSTAB,PTS,&
& PDIFCQ,PDIFCQL,PDIFCQI,PDIFCS, PFCCQL, PFCCQN,&
& ZFHMLTSC,ZFHEVPPC,ZFPEVPPC,&
& PFPLCL,PFPLCN,ZNEBC,ZQLIC,PSTRCU,PSTRCV,&
& ICIS,INLAB,&
& INND,&
& PCVV,&
& ’ACCVIMP’ )
...
When you want to add a new surface variable you have to update at least
the following routines:
• module/surface_fields.F90
• setup/su_surf_flds.F90
The surface variable structures are SP for prognostic surface variables and
SD for diagnostic variables. The SP and SD buffers are split into groups. For
example, the group SB (SOILB) contains soil prognostic quantities for the
different reservoirs (1 or 3 at Météo-France, 4 at ECMWF): T temperature,
Q liquid water content, TL ice water content. The group VP (VCLIP)
contains deep soil diagnostic fields: TPC, the climatological deep layer
temperature, and WPC, the climatological deep layer moisture.
As an illustration, we will follow the setup of a surface variable already
existing in the code: the climatological deep layer moisture.
! Vclip
REAL(KIND=JPRB),ALLOCATABLE :: SD_VP (:,:,:)
TYPE(TYPE_SURF_GEN) :: YSD_VPD
TYPE(TYPE_SFL_VCLIP) :: YSD_VP
The group YSD_VP has 2 variables: WPC and TPC. The group is described
by setting up the attributes of the structure TYPE_SURF_GEN: NUMFLDS
(number of fields in the group), NDIM (field dimension), NLEVS (number of
levels), LMTL (.TRUE. if prognostic field), etc.
The variable WPC is described through the attributes of the structure
TYPE_SURF_MTL_2D: CNAME (Arpege field name), IGRBCODE (Grib parameter
code), MP (field pointer), etc.
IF(ASSOCIATED(YSD_VP%YVP)) DEALLOCATE(YSD_VP%YVP)
ALLOCATE(YSD_VP%YVP(JPMAXSFLDS))
CALL INI_SFLP2(YSD_VPD,YSD_VP%YVP,IVCLIP,.FALSE.,’VCLIP - SD_VP ’)
YSD_VP%YTPC => YSD_VP%YVP(JPMAXSFLDS)
YSD_VP%YWPC => YSD_VP%YVP(JPMAXSFLDS)
IF(LECMWF) THEN
ELSE
IF(LRELAXT) THEN
YSD_VP%YTPC => YSD_VP%YVP(YSD_VPD%IPTR)
CALL SETUP_SFLP2(YSD_VPD,YSD_VP%YTPC,CDNAME=’RELATEMPERATURE ’)
ENDIF
IF(LRELAXW) THEN
YSD_VP%YWPC => YSD_VP%YVP(YSD_VPD%IPTR)
CALL SETUP_SFLP2(YSD_VPD,YSD_VP%YWPC,CDNAME=’RELARESERV.EAU ’)
ENDIF
ENDIF
The surface field “climatological deep layer moisture” is set up only if the
key LRELAXW is set.
INI_SFLP2 and SETUP_SFLP2 print out some attributes at the beginning of
a run (case where LRELAXT=LRELAXW=.TRUE.).
In the columns, -999 is the Grib parameter code (default), 1 and 2 are MP
(the basic field pointer in YV), and 0 is ITRAJ (not in the trajectory).
The data are initialized from an Arpege file (in sugridadm.F90), stored in
SD_VP, and used in the physics (cpwts.F90, called by mf_phys.F90).
Full-Pos allows you to post-process your own fields at the cost of some
modifications in the software. This is done by using the Free-use Upper Air
fields for a 3D field and/or the Free-use Surface fields for a 2D field.
How do you use this mechanism?
Let us assume you want to post-process:
You have to modify the software to fill the arrays PEXTRA and PEXTR2 with
your personal fields. In this example this concerns the routine aplpar.F90.
The namelist of the run should look like:
/
&NAMAFN
TFP_FUA(1)%CLNAME=’FUA_TOT_CC ’, ; VEXTRA(1)
TFP_FUA(2)%CLNAME=’FUA_QLI_WAT ’, ; VEXTRA(2)
TFP_FUA(3)%CLNAME=’FUA_QIC_WAT ’, ; VEXTRA(3)
;
TFP_FUA(1)%LLGP=.TRUE., ; gridpoint field
TFP_FUA(2)%LLGP=.TRUE., ; gridpoint field
TFP_FUA(3)%LLGP=.TRUE., ; gridpoint field
;
TFP_FSU(1)%CLNAME=’FSU_LT_DTH ’, ; VEXTR2(1)
TFP_FSU(2)%CLNAME=’FSU_VL_DTH ’, ; VEXTR2(2)
;
TFP_FSU(1)%LLGP=.TRUE., ; gridpoint field
TFP_FSU(2)%LLGP=.TRUE., ; gridpoint field
/
&NAMDPHY
...
;
NVEXTR=3, ; Nb of 3D Free-use Upper air diagnostic fields
NCEXTR=31, ; Nb of levels in these 3D fields
NVXTR2=2, ; Nb of 2D Free-use Surface diagnostic fields
;
;
NVEXTRDYN=0, ; Nb of extra fields coming from the dynamics
NVXP=0, ; Nb of Extra 3D prognostic fields
NCXP=31, ; Nb of levels in these 3D fields
NVXP2=0, ; Nb of extra 2D prognostic fields
;
/
&NAMFPC
CFP2DF(1)=’SURFPRESSION’,
’MSLPRESSURE’,
’FSU_LT_DTH ’, ; VEXTR2(1)
’FSU_VL_DTH ’, ; VEXTR2(2)
CFP3DF(1)=’GEOPOTENTIEL’,
’TEMPERATURE’,
’VENT_ZONAL’,
’VENT_MERIDIEN’,
’HUMI.SPECIFI’,
’FUA_TOT_CC ’, ; VEXTRA(1)
’FUA_QLI_WAT ’, ; VEXTRA(2)
’FUA_QIC_WAT ’, ; VEXTRA(3)
...
/
Modifications to aplpar.F90 :
!- - - - - - - - - - - -
! FOR 3D EXTRA FIELDS :
!- - - - - - - - - - - -
DO JVEXTRA=1,KFLDX
!
IF (LLTRACE) THEN
WRITE(NULOUT,’(A,I3)’) " > APLPAR : JVEXTRA = ",JVEXTRA
ENDIF
!
IF (JVEXTRA == 1) THEN
DO JLEV=1,KLEVX
DO JLON=KIDIA,KFDIA
PEXTRA(JLON,JLEV,JVEXTRA)=PNEB(JLON,JLEV)
ENDDO
ENDDO
ENDIF
IF (JVEXTRA == 2) THEN
DO JLEV=1,KLEVX
DO JLON=KIDIA,KFDIA
PEXTRA(JLON,JLEV,JVEXTRA)=PQLI(JLON,JLEV)
ENDDO
ENDDO
ENDIF
IF (JVEXTRA == 3) THEN
DO JLEV=1,KLEVX
DO JLON=KIDIA,KFDIA
PEXTRA(JLON,JLEV,JVEXTRA)=PQICE(JLON,JLEV)
ENDDO
ENDDO
ENDIF
IF (JVEXTRA == 4) THEN
DO JLEV=1,KLEVX
DO JLON=KIDIA,KFDIA
PEXTRA(JLON,JLEV,JVEXTRA)=ZNEBS(JLON,JLEV)
ENDDO
ENDDO
ENDIF
IF (JVEXTRA == 5) THEN
DO JLEV=1,KLEVX
DO JLON=KIDIA,KFDIA
PEXTRA(JLON,JLEV,JVEXTRA)=ZNEBC0(JLON,JLEV)
ENDDO
ENDDO
ENDIF
ENDDO
!- - - - - - - - - - - -
! FOR 2D EXTRA FIELDS :
!- - - - - - - - - - - -
DO JVEXTR2=1,KFLDX2
!
IF (LLTRACE) THEN
WRITE(NULOUT,’(A,I3)’) " > APLPAR : JVEXTR2 = ",JVEXTR2
ENDIF
!
IF (JVEXTR2 == 1) THEN
DO JLON=KIDIA,KFDIA
PEXTR2(JLON,JVEXTR2)=ZDTHLT(JLON)
ENDDO
ENDIF
!
IF (JVEXTR2 == 2) THEN
DO JLON=KIDIA,KFDIA
PEXTR2(JLON,JVEXTR2)=ZDTHVL(JLON)
ENDDO
ENDIF
!
ENDDO
The order of the fields in PEXTRA and PEXTR2 is important: you must give
Full-Pos the names of the fields in the same order. As the code stands, to
ask for the stratiform cloudiness ZNEBS you need to request four 3D
diagnostic fields:
/
&NAMAFN
TFP_FUA(1)%CLNAME=’FUA_TOT_CC ’, ; VEXTRA(1)
TFP_FUA(2)%CLNAME=’FUA_QLI_WAT ’, ; VEXTRA(2)
TFP_FUA(3)%CLNAME=’FUA_QIC_WAT ’, ; VEXTRA(3)
TFP_FUA(4)%CLNAME=’FUA_STRAT_C ’, ; VEXTRA(4)
;
TFP_FUA(1)%LLGP=.TRUE., ; gridpoint field
TFP_FUA(2)%LLGP=.TRUE., ; gridpoint field
TFP_FUA(3)%LLGP=.TRUE., ; gridpoint field
TFP_FUA(4)%LLGP=.TRUE., ; gridpoint field
;
...
/
&NAMDPHY
...
;
NVEXTR=4, ; Nb of 3D Free-use Upper air diagnostic fields
NCEXTR=31, ; Nb of levels in these 3D fields
;
NVEXTRDYN=0, ; Nb of extra fields coming from the dynamics
NVXP=0, ; Nb of Extra 3D prognostic fields
NCXP=31, ; Nb of levels in these 3D fields
NVXP2=0, ; Nb of extra 2D prognostic fields
;
/
&NAMFPC
...
CFP3DF(1)=’GEOPOTENTIEL’,
’TEMPERATURE’,
’VENT_ZONAL’,
’VENT_MERIDIEN’,
’HUMI.SPECIFI’,
’FUA_TOT_CC ’, ; VEXTRA(1)
’FUA_QLI_WAT ’, ; VEXTRA(2)
’FUA_QIC_WAT ’, ; VEXTRA(3)
’FUA_STRAT_C ’, ; VEXTRA(4)
...
/
The value of KFLDX is set to NVEXTR, the value of KFLDX2 to NVXTR2,
and the value of KLEVX to NCEXTR.
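Since these counts must stay consistent between NAMAFN and NAMDPHY, a quick cross-check can save a run. The check below is illustrative only: the file name namel is hypothetical and its content is a reduced demo of the namelists above:

```shell
# Illustrative cross-check: the number of TFP_FUA name entries in NAMAFN
# should match NVEXTR in NAMDPHY. "namel" and its content are a demo.
cat > namel <<'EOF'
 &NAMAFN
   TFP_FUA(1)%CLNAME='FUA_TOT_CC  ',
   TFP_FUA(2)%CLNAME='FUA_QLI_WAT ',
   TFP_FUA(3)%CLNAME='FUA_QIC_WAT ',
 /
 &NAMDPHY
   NVEXTR=3,
 /
EOF
nfua=$(grep -c 'TFP_FUA' namel)
nvextr=$(sed -n 's/.*NVEXTR=\([0-9]*\).*/\1/p' namel)
[ "$nfua" -eq "$nvextr" ] && echo "NVEXTR consistent with NAMAFN"
```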