
NUCLEAR ELECTRONICS

& COMPUTING



NEC2011





Proceedings
of the XXIII International Symposium





Joint Institute for Nuclear Research





NUCLEAR ELECTRONICS
& COMPUTING



XXIII International Symposium
Varna, Bulgaria, September 12-19, 2011

Proceedings of the Symposium

NEC2011





XXIII International Symposium
Varna, Bulgaria, 12-19 September 2011




Dubna 2011


The Proceedings of the XXIII International Symposium on Nuclear Electronics &
Computing (NEC'2011) contain the papers presented at NEC'2011, which was held on
12-19 September 2011 in Varna, Bulgaria. The symposium was organized by the Joint Institute
for Nuclear Research (Dubna, Russia), the European Laboratory for Particle Physics (CERN)
(Geneva, Switzerland) and the Institute for Nuclear Research and Nuclear Energy of the
Bulgarian Academy of Sciences (Sofia, Bulgaria). The symposium was devoted to the problems
of detector & nuclear electronics, computer applications for measurement and control in
scientific research, triggering and data acquisition, methods of experimental data analysis,
computing & information systems, computer networks for scientific research and GRID
computing.


General Information

The XXIII International Symposium on Nuclear Electronics and Computing (NEC'2011) was
held on 12-19 September, 2011 in Varna, Bulgaria. The symposium was organized by the
Joint Institute for Nuclear Research (JINR) (Dubna, Russia), European Organization for
Nuclear Research (CERN) (Geneva, Switzerland) and the Institute for Nuclear Research and
Nuclear Energy of the Bulgarian Academy of Sciences (INRNE) (Sofia, Bulgaria). About
100 scientists from 15 countries (Russia, Bulgaria, Switzerland, Czech Republic, Poland,
Belarus, Azerbaijan, Germany, Georgia, France, USA, Italy, Romania, Ukraine and
Kazakhstan) participated in NEC'2011 and presented 61 oral reports and 28 posters.
JINR (Dubna):
V.V. Korenkov (co-chairman), E.A. Tikhonenko (scientific secretary), A. Belova (secretary),
A.G. Dolbilov, N.V. Gorbunov, S.Z. Pokuliak, Y.K. Potrebenikov, A.V. Prikhodko,
V.I. Prikhodko, S.I. Sidortchuk, T.A. Strizh, A.V. Tamonov, N.I. Zhuravlev

CERN (Geneva):
L. Mappelli (co-chairman), T. Kurtyka, P. Hristov

INRNE (Sofia):
I.D. Vankov (co-chairman), L.P. Dimitrov (secretary), S. Piperov, K. Gigov

International Program Committee

O. Abdinov (IoP, Baku), F. Adilova (IM&IT, AS, Tashkent), D. Balabanski (INRNE, Sofia),
A. Belic (IP, Belgrade), I. Bird (CERN, Geneva), J. Cleymans (University of Cape Town),
S. Enkhbat (AEC, Ulan Bator), D. Fursaev (Dubna University), I. Golutvin (JINR, Dubna),
H.F. Hoffmann (ETH, Zurich), V. Ilyin (SINP MSU, Moscow), V. Ivanov (JINR, Dubna),
B. Jones (CERN, Geneva), V. Kantser (ASM, Chisinau), A. Klimentov (BNL, Upton),
M. Korotkov (ISTC, Moscow), V. Sahakyan (IIAP, Yerevan), M. Lokajicek (IP ASCR,
Prague), G. Mitselmakher (UF, Gainesville), S. Newhouse (EGI, Amsterdam), V. Shirikov
(JINR, Dubna), N. Rusakovich (JINR, Dubna), N. Shumeiko (NSECPHEP, Minsk),
N.H. Sweilam (Cairo University), M. Turala (IFJ PAN, Krakow), A. Vaniachine (ANL,
Argonne), G. Zinovjev (ITP, Kiev).

MAIN TOPICS
Detector & Nuclear Electronics;
Accelerator and Experiment Automation Control Systems. Triggering and Data
Acquisition;
Computer Applications for Measurement and Control in Scientific Research;
Methods of Experimental Data Analysis;
Data & Storage Management. Information & Data Base Systems;
GRID & Cloud computing. Computer Networks for Scientific Research;
LHC Computing;
Innovative IT Education: Experience and Trends.

CONTENTS

Creating a distributed computing grid of Azerbaijan for collaborative research
O. Abdinov, P. Aliyeva, A. Bondyakov, A. Ismayilov 8

Fast Hyperon Reconstruction in the CBM
V.P. Akishina, I.O. Vassiliev 10

Remote Control of the Nuclotron magnetic field correctors
V.A. Andreev, V.A. Isadov, A.E. Kirichenko, S.V. Romanov, V.I. Volkov 16

The Status and Perspectives of the JINR 10 Gbps Network Infrastructure
K.N. Angelov, A.E. Gushin, A.G. Dolbilov, V.V. Ivanov, V.V. Korenkov,
L.A. Popov 20

The TOTEM Roman Pot Electronics System
G. Antchev 29

Novel detector systems for nuclear research
D.L. Balabanski 42

Data handling and processing for the ATLAS experiment
D. Barberis 52

Time-of-flight system for controlling the beam composition
P. Batyuk, I. Gnesi, V. Grebenyuk, A. Nikiforov, G. Pontecorvo, F. Tosello 60

Development of the grid-infrastructure for molecular docking problems
R. Bazarov, V. Bruskov, D. Bazarov 64

Grid Activities at the Joint Institute for Nuclear Research
S.D. Belov, P. Dmitrienko, V.V. Galaktionov, N.I. Gromova, I. Kadochnikov,
V.V. Korenkov, N.A. Kutovskiy, V.V. Mitsyn, D.A. Oleynik, A.S. Petrosyan,
I. Sidorova, G.S. Shabratova, T.A. Strizh, E.A. Tikhonenko, V.V. Trofimov,
A.V. Uzhinsky, V.E. Zhiltsov 68

Monitoring for GridNNN project
S. Belov, D. Oleynik, A. Petrosyan 74

A Low-Power 9-bit Pipelined CMOS ADC for the front-end electronics of
the Silicon Tracking System
Yu. Bocharov, V. Butuzov, D. Osipov, A. Simakov, E. Atkin 77

The selection of PMT for TUS project
V. Borejko, A. Chukanov, V. Grebenyuk, S. Porokhvoy, A. Shalyugin, L. Tkachev 86

The possibility to overcome the MAPD noise in scintillator detectors
V. Boreiko, V. Grebenyk, A. Kalinin, A. Timoshenko, L. Tkatchev 90

Prague Tier 2 monitoring progress
J. Chudoba, M. Elias, L. Fiala, J. Horky, T. Kouba, J. Kundrat, M. Lokajicek, J. Svec 94

Detector challenges at the CLIC multi-TeV e+e- collider
D. Dannheim 100

J/ψ -> e+e- reconstruction in Au + Au collision at 25 AGeV in the CBM experiment
O.Yu. Derenovskaya, I.O. Vassiliev 107

Acquisition Module for Nuclear and Mossbauer Spectroscopy
L. Dimitrov, I. Spirov, T. Ruskov 112

Business Processes in the Context of Grid and SOA
V. Dimitrov 115

ATLAS TIER 3 in Georgia
A. Elizbarashvili 122

JINR document server: current status and future plans
I. Filozova, S. Kuniaev, G. Musulmanbekov, R. Semenov, G. Shestakova,
P. Ustenko, T. Zaikina 132

Upgrade of Trigger and Data Acquisition Systems for the LHC Experiments
N. Garelli 138

VO Specific Data Browser for dCache
M. Gavrilenko, I. Gorbunov, V. Korenkov, D. Oleynik, A. Petrosyan, S. Shmatov 145

RDMS CMS data processing and analysis workflow
V. Gavrilov, I. Golutvin, V. Korenkov, E. Tikhonenko, S. Shmatov, V. Zhiltsov,
V. Ilyin, O. Kodolova, L. Levchuk 148

Remote operational center for CMS in JINR
A.O. Golunov, N.V. Gorbunov, V.V. Korenkov, S.V. Shmatov, A.V. Zarubin 154

JINR Free-electron maser for applied research: upgrade of the control system
and power supplies
E.V. Gorbachev, I.I. Golubev, A.F. Kratko, A.K. Kaminsky, A.P. Kozlov, N.I. Lebedev,
E.A. Perelstein, N.V. Pilyar, S.N. Sedykh, T.V. Rukoyatkina, V.V. Tarasov 158

GriNFiC - Romanian Computing Grid for Physics and Related Areas
T. Ivanoaica, M. Ciubancan, S. Constantinescu, M. Dulea 163

Current state and prospects of the IBR-2M instrument control software
A.S. Kirilov 169

Dosimetric Control System for the IBR-2 Reactor
A.S. Kirilov, M.L. Korobchenko, S.V. Kulikov, F.V. Levchanovskiy,
S.M. Murashkevich, T.B. Petukhova 174

CMS computing performance on the GRID during the second year of LHC collisions
P. Kreuzer on behalf of the CMS Offline and Computing Project 179

The Local Monitoring of ITEP GRID site
Y. Lyublev, M. Sokolov 186

Method for extending the working voltage range of high side current sensing circuits,
based on current mirrors, in high-voltage multichannel power supplies
G.M. Mitev, L.P. Dimitrov 191

Early control software development using emulated hardware
P. Petrova 196

Virtual lab: the modeling of physical processes in Monte-Carlo method the
interaction of helium ions and fast neutrons with matter
B. Prmantayeva, I. Tleulessova 200

Big Computing Facilities for Physics Analysis: What Physicists Want
F. Ratnikov 206

CMS Tier-1 Center: serving a running experiment
N. Ratnikova 212

ATLAS Distributed Computing on the way to the automatic site exclusion
J. Schovancova on behalf of the ATLAS Collaboration 219

The free-electron maser RF wave centering and power density measuring
subsystem for biological applications
G.S. Sedykh, S.I. Tutunnikov 223

Emittance measurement wizard at PITZ, release 2
A. Shapovalov 227

Modernization of monitoring and control system of actuators and
object communication system of experimental installation DN2 at 6a channel
of reactor IBR-2M
A.P. Sirotin, V.K. Shirokov, A.S. Kirilov, T.B. Petukhova 236

VME based data acquisition system for ACCULINNA fragment separator
R.S. Slepnev, A.V. Daniel, M.S. Golovkov, V. Chudoba, A.S. Fomichev,
A.V. Gorshkov, V.A. Gorshkov, S.A. Krupko, G. Kaminski, A.S. Martianov,
S.I. Sidorchuk, A.A. Bezbakh 242

Ukrainian Grid Infrastructure. Current state
S. Svistunov 246

GRID-MAS conception: the applications in bioinformatics and telemedicine
A. Tomskova, R. Davronov 253

Techniques for parameters monitoring at Datacenter
M.R.C. Trusca, F. Farcas, C.G. Floare, S. Albert, I. Szabo 259

Solar panels as possible optical detectors for cosmic rays
L. Tsankov, G. Mitev, M. Mitev 264

Managing Distributed Computing Resources with DIRAC
A. Tsaregorodtsev 269

On some specific parameters of PIPS detector
Yu. Tsyganov 278

Automation of the experiments aimed to the synthesis of superheavy elements
Yu. Tsyganov, A. Polyakov, A. Sukhov, V. Subbotin. A. Voinov,
V. Zlokazov, A. Zubareva 281

Calibration of the silicon position-sensitive detectors using the implanted
reaction products
A.A. Voinov, V.K. Utyonkov, V.G. Subbotin, Yu.S. Tsyganov, A.M. Sukhov,
A.N. Polyakov, A.M. Zubareva 286

High performance TDC module with Ethernet interface
V. Zager, A. Krylov 292

Front End Electronics for TPC MPD/NICA
Yu. Zanevsky, A. Bazhazhin, S. Bazylev, S. Chernenko, G. Cheremukhina,
V. Chepurnov, O. Fateev, S. Razin, V. Slepnev, A. Shutov,
S. Vereschagin, V. Zryuev 296

Mathematical Model for the Coherent Scattering of a Particle Beam
on a Partially Ordered Structure
V.B. Zlokazov 300

The distributed subsystem to control parameters of the Nuclotron extracted beam
E.V. Gorbachev, N.I. Lebedev, N.V. Pilyar, S.V. Romanov, T.V. Rukoyatkina,
V.I. Volkov 307


INDEX of REPORTERS 312

Creating a distributed computing grid of Azerbaijan for
collaborative research

O. Abdinov, P. Aliyeva, A. Bondyakov, A. Ismayilov
Institute of Physics, Baku, Azerbaijan

In this article we briefly review the results of deploying a distributed computing system with
Grid architecture, based on the gLite middleware package, at the Institute of Physics of ANAS.
It was created to meet the challenges of distributed data processing for the experiments at the
Large Hadron Collider (LHC) in Geneva, Switzerland.
A number of scientific centers of Azerbaijan, such as BSU and IP, have a long tradition of
high-level cooperation with international research centers in the area of basic research. The
rapid development of network, computer and information technologies in recent years has created
the preconditions for unifying the network and information resources of Europe and Azerbaijan,
aimed at solving specific scientific and applied problems whose successful implementation is
impossible without high-performance computing, new approaches to distributed and parallel
calculations, and the use of large data storage systems.
Creating a Grid infrastructure will significantly improve the effectiveness of
cooperation between research centers of Azerbaijan and Europe. New research groups from
Baku State University, the Institute of Physics of the National Academy of Sciences of
Azerbaijan, the Institute of Information Technology and others will join this cooperation,
which significantly expands the spectrum of scientific and applied research of mutual interest.
In 2009 we started the creation of a computing Grid infrastructure in Azerbaijan and the
work on installing the necessary clusters and application programs on it. In the course of
the project the following results were achieved:
creation of the computing center at the Institute of Physics (Fig. 1), which functions
24/7; the center includes 300 multiprocessing computers based on Intel Xeon processors, data
storage (~140 TB) based on a client-server architecture, 160 cores (blade servers) and 4 UPS units;
setting up a high-speed connection to the Internet by means of optical fiber cables
(with a speed of 25 MB/s) (Fig. 2);
installation of the middleware, completion of test trials, and adjustment of the
uninterrupted functioning of the Grid segment;
preparation of the necessary conditions for other scientific and educational centers
of Azerbaijan to connect to the given Grid segment.

Fig. 1. The computing center in the Institute of Physics of ANAS in Baku











Fig. 2. The network infrastructure of AZRENA (Research and Educational Network in
Azerbaijan)

At the same time, work began on several projects in different organizations: research
institutes and universities. Among the first were the groups and organizations already facing
computing-intensive problems, from research in biology and medicine to the study of the
properties of matter at particle accelerators (experimental nuclear physics and high-energy
physics). All grid work of this type was carried out jointly by a large number of specialists
from various disciplines. As a result:
A local certification center (local CA) was created and is now being tested;
the Azerbaijan grid services will be connected to the EDU VO (JINR) with the local CA;
the local CA will be registered in EUGridPMA;
an agreement with Ukraine to participate in the Medgrid VO (a medical Grid system
for population research in the field of cardiology with an electrocardiogram database);
10 TB of disk space;
research on solid state physics (the charge density and electronic structure of systems
made of electrons and nuclei (molecules and periodic solids) within Density
Functional Theory (DFT), using pseudopotentials and a plane-wave basis);
research on astrophysics (calibration, data analysis, image display, plotting, and a
variety of ancillary tasks on astronomical data).


(Fig. 2 diagram: the AZRENA PSTN backbone connecting BSU, ANAS, AzTU, ASEU, Odlar Yurdu
University, and the Medical, Foreign Languages, Cooperation, Pedagogical, Architecture and
Teachers Universities.)
Fast Hyperon Reconstruction in the CBM

V.P. Akishina 1,2, I.O. Vassiliev 2,3
1 Laboratory of Information Technologies, Joint Institute for Nuclear Research, Dubna, Russia
2 Goethe University, Frankfurt, Germany
3 GSI, Darmstadt, Germany

Introduction
The main goal of the Compressed Baryonic Matter (CBM) Experiment [1,2] is to
study the behavior of nuclear matter in the conditions of high baryonic density in which the
transition to a deconfined quark gluon plasma phase is expected. One of the signatures of this
new state is the enhanced production of strange particles; therefore hyperon reconstruction is
essential for understanding the heavy-ion collision dynamics. The yield of particles carrying
strange quarks is also expected to be sensitive to the fireball evolution.

Ω⁻-hyperon decay reconstruction
The Ω⁻ hyperon consists of 3 strange quarks and is therefore one of the most interesting objects.
Like all other hyperons, the Ω⁻ will be measured in the CBM detector through its decay into
charged hadrons, which are detected in the STS.

Input to the simulation
To study the feasibility of fast hyperon reconstruction in the CBM experiment, a set of
10^4 central Au+Au UrQMD [3] events and a set of 10^4 Ω⁻ -> K⁻Λ decay events were processed
at 25 AGeV using the detector simulation tool CBMROOT with the GEANT3 engine. To investigate
the dependence of the Ω⁻ reconstruction efficiency on the track multiplicity, 10^4 minimum bias
Au+Au UrQMD events and a set of 10^4 Ω⁻ -> K⁻Λ decays embedded into central UrQMD events
were simulated.
A central Au+Au UrQMD event at 25 AGeV contains on average 362 pions, 161 protons,
32 Λs and 13 kaons, which contribute to the background. The realistic STS geometry with
2 MAPS stations at 5 cm and 10 cm and 8 double-sided segmented strip detectors was used as
the tracker. Monte Carlo (MC) identification for protons was used, which can be successfully
replaced by particle identification with the Time-of-Flight (TOF) detector in the real experiment.

Event reconstruction
The reconstruction of Ω⁻ decay events includes several steps. First, fast SIMDized
track finding and fitting [7,8] in the STS is performed using the L1 reconstruction package [4].
A track with at least 4 MC points in the STS stations is considered reconstructable. A
reconstructed track is assigned to an MC particle if at least 70 % of the track hits were caused
by this particle; a reconstructed track is called a ghost if it is not assigned to any particle
according to the 70 % criterion. The track fitting algorithm is based on the Kalman filter [5].
The primary vertex was determined from all reconstructed tracks in the STS, excluding those
coming from a well-detached vertex.
Fast SIMDized Λ-decay reconstruction is already implemented. The average times of the
different reconstruction stages are listed in Table 1. Online Ω⁻-decay reconstruction is under
development.
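The 70 % track-to-MC matching criterion described above can be illustrated by a short sketch
(hypothetical hit-level input, not the actual CBMROOT code):

    from collections import Counter

    def match_track_to_mc(hit_mc_ids):
        """Return the matched MC particle id, or None for a ghost track,
        using the 70 % criterion on the hits of one reconstructed track."""
        counts = Counter(i for i in hit_mc_ids if i is not None)
        if not counts:
            return None
        mc_id, n_hits = counts.most_common(1)[0]
        return mc_id if n_hits / len(hit_mc_ids) >= 0.70 else None

    print(match_track_to_mc([7, 7, 7, 7, 7, 7, 7, 12, None, 7]))  # -> 7 (8 of 10 hits)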
Table 1. Time requirements for reconstruction stages [6]

Finder:  80 ms
Fitter:  1.6 ms
PV:      51.2 ms
Λ:       43.8 ms
Total:   176.6 ms / 16 cores

Detection strategy
To distinguish the signal from the background, a set of cuts on single tracks,
reconstructed Λ candidates and Ω⁻-hyperon parameters was obtained. The cuts were chosen
with respect to the significance by studying the simulated distributions of the cut variables for
signal and background pairs. For each type of particle the significance function, which shows
the feasibility of signal detection against the background fluctuation, was calculated as

    Significance = Signal / sqrt(Signal + Background)
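As an illustration, a cut scan based on this significance definition might look like the
following sketch (toy χ²_prim distributions, not the CBM simulation data):

    import numpy as np

    def best_cut(signal_values, background_values, cut_grid):
        """Scan lower cuts on a discriminating variable (e.g. chi2_prim) and
        return the (cut, significance) pair maximizing S / sqrt(S + B)."""
        best = (None, -1.0)
        for cut in cut_grid:
            s = np.sum(signal_values > cut)       # signal surviving the cut
            b = np.sum(background_values > cut)   # background surviving the cut
            if s + b == 0:
                continue
            significance = s / np.sqrt(s + b)
            if significance > best[1]:
                best = (cut, significance)
        return best

    rng = np.random.default_rng(0)
    signal = rng.exponential(20.0, 10_000)       # secondary tracks: large chi2_prim
    background = rng.exponential(2.0, 100_000)   # primary tracks: small chi2_prim
    print(best_cut(signal, background, np.linspace(0.0, 15.0, 61)))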

Cuts on proton, pion and kaon tracks

In order to reduce the combinatorial background we first of all apply cuts at the
single-track level. The impact parameter of a charged track is defined as the distance between
the primary vertex and the track extrapolation point in the target plane z = z_pv. This value,
measured in units of σ, is called χ²_prim and takes into account the track extrapolation errors,
which depend on the particle momentum. This cut is intended to reduce the amount of primary
particles: χ²_prim is smaller for particles coming from the primary vertex than for signal ones.
The significance for protons, pions and kaons reaches its maximum at χ²_prim cuts of 5.5, 7 and
8 σ respectively (Fig. 1 for the proton), therefore the cuts χ²_prim > χ²_prim,cut were chosen
as optimal for each type of particle.


Fig. 1. Significance for protons vs. the χ²_prim cut

Fig. 2. Significance for Λ vs. the χ²_geo cut
Cuts on Λ-candidate parameters
After the off-vertex tracks are selected, all protons are combined with negatively charged
tracks to create Λ candidates using the KFParticle package [5]. Thus, the second step is aimed
at suppressing the Λ-related background.
The next cut used for Λ candidates is a cut on χ²_geo, which measures, in units of σ,
the distance of closest approach of the pair of tracks, calculated as the distance between the
tracks at the z-position of the secondary vertex. The optimal cut χ²_geo < 3 σ (Fig. 2)
reduces random combinations of tracks.
The next cut, on the z-position of the fitted secondary vertex, also reduces random
combinations. Most Λ decay points are well detached from the target plane, therefore a
z > 5 cm cut was applied. As one can see in Fig. 4, the significance in this case has a wide,
flat maximum with the efficiency varying in a narrow range, therefore 5 cm was chosen as the
cut value in order to keep more signal at almost the same significance (Fig. 3). The χ²_topo
of a Λ candidate is defined as the distance, measured in units of σ, between the primary vertex
and the extrapolation of the reconstructed particle momentum to the interaction vertex in the
target plane. Primary Λs come from the primary vertex, while signal Λs come from the Ω⁻ decay
point; therefore primary Λs have a smaller χ²_topo than the signal daughters, and the cut
χ²_topo > 7.5 σ, for which the significance reaches its maximum (Fig. 4), was applied. This
cut reduces random combinations as well as primary Λs.




Fig. 3. Significance for Λ vs. the z-position cut

Fig. 4. Significance for Λ vs. the χ²_topo cut

Fig. 5. π⁻p candidate invariant mass spectrum

Fig. 6. π⁻p candidate invariant mass spectrum with the cut m_inv = m_pdg ± 6 σ
The obtained Λ-candidate invariant mass distribution has σ = 1.282 MeV/c² (Fig. 5).
The reconstructed mass value of 1.116 ± 0.003 GeV/c² is in good agreement with m_pdg [9]. The
last cut used for Λ candidates is the invariant mass cut m_inv = m_pdg ± 6 σ (Fig. 6).
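For illustration, the invariant mass entering this cut can be formed from the daughter momenta
as in the sketch below (toy four-vectors; the actual analysis uses the KFParticle package):

    import numpy as np

    M_P, M_PI = 0.938272, 0.139570   # proton and pion masses, GeV/c^2

    def inv_mass(p3_a, m_a, p3_b, m_b):
        """Invariant mass of a two-track candidate from the daughter 3-momenta (GeV/c)."""
        p3_a, p3_b = np.asarray(p3_a, float), np.asarray(p3_b, float)
        e = np.sqrt(m_a**2 + p3_a.dot(p3_a)) + np.sqrt(m_b**2 + p3_b.dot(p3_b))
        p3 = p3_a + p3_b
        return np.sqrt(max(e**2 - p3.dot(p3), 0.0))

    # collinear decay of a ~2 GeV/c Lambda into p and pi-
    m = inv_mass([0.0, 0.0, 1.899], M_P, [0.0, 0.0, 0.102], M_PI)
    accept = abs(m - 1.115683) < 6 * 1.282e-3    # m_pdg +/- 6 sigma window
    print(round(m, 4), accept)                   # ~1.1167, True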

Cuts on Ω⁻-candidate parameters
For the last step of the Ω⁻ reconstruction, the remaining Λ candidates are combined with
secondary negatively charged tracks. As was done in the Λ case, these Ω⁻ candidates are
accepted if they have a good-quality geometrically and topologically detached vertex:
χ²_topo < 3 σ, χ²_geo < 3 σ, z > 5 cm.

Reconstructed invariant mass distribution of Ω⁻ candidates
After applying the cuts to 10^4 background central Au+Au UrQMD events, the distribution was
fitted with a fourth-degree polynomial. The shapes of the background and signal were normalized
to 10^8 central Au+Au UrQMD events, and statistical fluctuations were added. The obtained signal
and background invariant mass spectra are shown in Fig. 7. The signal reconstruction efficiency
is 0.5 %, and the obtained signal-to-background ratio is S/B_2σ = 0.38. The reconstructed mass
value of 1.6724 ± 0.005 GeV/c² is in good agreement with the simulated value of 1.67245 GeV/c².


Fig. 7. Reconstructed invariant mass distribution of ΛK⁻ candidates. The Ω⁻ reconstruction
efficiency is 0.55 %, the S/B ratio is 0.4, the reconstructed mass value is 1.672 GeV/c²

Fig. 8. Ω⁻-hyperon reconstruction efficiency vs. track multiplicity

Efficiency analysis
In order to investigate the dependence of the reconstruction efficiency on the track
multiplicity, 10^4 minimum bias Au+Au UrQMD events and a set of 2 x 10^4 Ω⁻ decays embedded
into full UrQMD events were simulated. As a result, the dependence of the Ω⁻-hyperon
reconstruction efficiency on the track multiplicity was obtained (Fig. 8). When the signal
events are reconstructed alone, the reconstruction efficiency after all cuts is ε = 3.1 %. In
the case of central UrQMD Au+Au collisions, ε drops down to 0.5 %. These two values were
obtained with high statistics and are shown as the precise dots in the figure. The average
minimum bias efficiency is 2.4 %. The efficiency drop is caused by clustering in the STS
detector.

Ξ⁻-decay reconstruction
To study the feasibility of Ξ⁻ and Λ reconstruction in the CBM experiment, a set of
10^4 central Au+Au UrQMD events at 4.85 AGeV was simulated. At 4.85 AGeV a central Au+Au
UrQMD event contains on average 7 Λs and 0.034 Ξ⁻. The Ξ⁻ decays to Λπ⁻ with a branching
ratio of 99.9 % and cτ = 4.91 cm. The STS geometry with 8 double-sided segmented strip
detectors was used for tracking. No kaon, pion or proton identification is applied. In order
to reconstruct the Λ -> pπ⁻ decay, the proton mass was assumed for all positively charged
tracks and the pion mass for all negatively charged ones. The combination of a single-track
cut (χ²_prim > 3 σ) and a geometrical vertex cut (χ²_geo < 3 σ) allows one to see a clear Λ
signal (Fig. 9).
The Ξ⁻ event reconstruction includes several steps: fast SIMDized track finding and
fitting [7, 8], where all tracks are found; tracks with χ²_prim > 8 σ and > 5 σ (positively and
negatively charged, respectively) are selected for the Λ search, where the positively charged
tracks are combined with the π⁻ tracks to construct a Λ KFParticle; a good-quality geometrical
vertex (χ²_geo < 3 σ) is required to suppress the combinatorial background.
The invariant mass of the reconstructed pair is compared with the Λ mass value, and only
pairs inside 1.116 ± 6 σ are accepted. This is followed by primary-Λ rejection, where only Λs
with χ²_prim > 5 σ and a z-vertex greater than 6 cm are chosen. The selected Λs are combined
with the secondary π⁻ tracks (χ²_prim > 8 σ) and Ξ⁻ KFParticles are created. A Ξ⁻ KFParticle
is accepted as a Ξ⁻ candidate if it has a good-quality geometrical and topological detached
vertex (χ²_geo < 3 σ, χ²_topo < 3.5 σ) and a z-vertex greater than 2 cm.
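The Ξ⁻ selection chain just described can be summarized in a short sketch (a toy,
dictionary-based representation of the reconstructed quantities; the actual analysis uses the
KFParticle package):

    def select_xi_candidate(lam, pi, chi2_geo_xi, chi2_topo_xi, z_xi):
        """Apply the Xi- selection chain to one (Lambda candidate, pi- track) pair.
        The thresholds follow the values quoted in the text (sigma units, cm)."""
        if abs(lam["m_inv"] - 1.115683) > 6 * lam["sigma_m"]:
            return False                            # outside the Lambda mass window
        if lam["chi2_prim"] <= 5 or lam["z_vertex"] <= 6:
            return False                            # reject primary Lambdas
        if pi["chi2_prim"] <= 8:
            return False                            # keep only secondary pi- tracks
        # geometrical and topological quality of the Xi- decay vertex
        return chi2_geo_xi < 3 and chi2_topo_xi < 3.5 and z_xi > 2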
The resulting invariant mass spectrum is shown in Fig. 10. The signal reconstruction
efficiency is about 5.5 %. The reconstructed mass value of 1.321 ± 0.003 GeV/c² is in good
agreement with the simulated one. The invariant mass resolution is 2.3 MeV/c².
In order to see the background events, a set of so-called "soft" cuts was applied to
the same 10 k central UrQMD events. The resulting invariant mass spectrum is shown in
Fig. 11. The efficiency in this case is 3.4 %, while the S/B ratio drops down to 4.
We have also studied the configuration with the existing Dubna magnet, which gives a
reduced acceptance (by a factor of 2.5). In that case the signal reconstruction efficiency is
about 1.8 %.


Fig. 9. The pπ⁻ invariant mass spectrum. The red line is the signal Gaussian fit, the green
line is the polynomial background

Fig. 10. Reconstructed invariant mass spectrum of Λπ⁻ candidates. 8 Ξ⁻ were reconstructed
with no background

Fig. 11. Reconstructed invariant mass spectrum of Λπ⁻ candidates with the "soft" cuts. The
S/B ratio is about 4, the reconstructed mass value is 1.321 GeV/c²


References

[1] Compressed Baryonic Matter in Laboratory Experiments. The CBM Physics Book,
2011, http://www.gsi.de/forschung/fair_experiments/CBM/PhysicsBook.html
[2] Compressed Baryonic Matter Experiment. Technical Status Report, GSI, Darmstadt,
2005, http://www.gsi.de/onTEAM/dokumente/public/DOC-2005-Feb-447 e.html
[3] M. Bleicher, E. Zabrodin, C. Spieles et al. Relativistic Hadron-Hadron Collisions in the
Ultra-Relativistic Quantum Molecular Dynamics Model (UrQMD). (1999-09-16). In
J. Phys. G 25 1859.
[4] M. Zyzak, I. Kisel. Vertexing status. 14. CBM Collaboration Meeting, October 6-9,
2009, Split, Croatia.
[5] I. Vassiliev. Status Hyperon Reconstruction. 14. CBM Collaboration Meeting, April
4 - 8, 2011, Dresden, Germany.
[6] I. Kisel. Event reconstruction in the CBM experiment. Nucl. Instr. and Meth. A566, 2006,
pp. 85-88.
[7] S. Gorbunov, U. Kebschull, I. Kisel, V. Lindenstruth and W.F.J. Muller. Fast
SIMDized Kalman filter based track fit. Comp. Phys. Comm. 178, 2008, pp. 374-383.
[8] Particle Data Group, http://pdg.web.cern.ch/pdg/
Remote Control of the Nuclotron magnetic field correctors

V.A. Andreev, V.A. Isadov, A.E. Kirichenko, S.V. Romanov,
V.I. Volkov
Joint Institute for Nuclear Research, Dubna, Russia

The Nuclotron is one of the JINR basic facilities. It is intended to produce beams of
charged ions (nuclei), protons and polarized deuterons with energies up to 6 GeV per nucleon.
The orbit is deformed in the real magnetic structure because of various kinds of
perturbations, which negatively affect the functioning of the accelerator complex. The orbit
correction subsystem introduces compensating perturbations of the same kind into the magnetic
structure. It consists of 40 correcting multipole SC magnets (21 horizontal and 19 vertical).
The availability of remote control is one of the key requirements for successful exploitation
of the subsystem during the accelerator runs.
Because of the complexity of the equipment and of the processes involved, the successful
functioning of the Nuclotron depends on an advanced control system. The construction of this
system began almost at the same time as the construction of the accelerator and still continues.
The magnetic field correctors control subsystem is one of the most important parts of the
Nuclotron Control System [1].
At the moment the NICA (Nuclotron-based Ion Collider fAcility) project is being developed
at JINR. The experience gained in constructing the correctors control subsystem can be used in
another part of the NICA complex, the Booster [4].
The organization of a new remote control for the magnetic field corrector power supplies
is necessary for the successful functioning of the subsystem, because its equipment is located
in a dangerous radiation area.

The main tasks of the magnetic field correction subsystem of the NCS are:
1. Setting the current in the magnetic field corrector windings depending on the
value of the guiding field B;
2. Measuring the real value of the current in the corrector windings;
3. Displaying operational information;
4. Receiving, interpreting and executing the operator's commands from the
central or local control panels over the computer network;
5. Sending the current information about the subsystem status to the central or local
control panels over the computer network;
6. Managing the database of corrector statuses;
7. Creating and analyzing archive files.
Specialized power supplies for the correctors were ordered. National Instruments devices
are used for the analog control, and the RS-485 interface is used for the status control [3].

Status control is the interaction between the subsystem server and the power supplies over
RS-485 (a minimal polling sketch is given below). It comprises the following tasks:
- searching for the enabled supplies in a single program thread,
- requesting the status of the enabled supplies,
- editing the supply working parameters (on/off load, polarity change, error reset).
Analog control is the interaction between the subsystem server and the supplies over
analog communication lines.
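The sketch below illustrates such a status-control polling loop over RS-485 (the addresses,
command syntax and framing are assumptions for illustration, not the actual PS140-8 protocol;
the pyserial package is used for the serial bus):

    import serial

    def poll_supplies(port="/dev/ttyUSB0", addresses=range(1, 41)):
        """Search for enabled corrector supplies and request their status
        in a single program thread."""
        statuses = {}
        with serial.Serial(port, baudrate=9600, timeout=0.2) as bus:
            for addr in addresses:
                bus.write(b"#%02dSTAT\r" % addr)   # hypothetical status request frame
                reply = bus.readline()
                if reply:                           # an enabled supply answered
                    statuses[addr] = reply.strip().decode(errors="replace")
        return statuses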
The subsystem can work in two different modes:
Dynamic mode:
The reference signal is written into the WG (wave generator) before each cycle of the
accelerator operation;
The data from the DAQ (data acquisition) board buffer are loaded into PC memory after
the end of the measurement cycle and are used for plotting.
Static mode:
The switching reference signal is written into the WG and processed immediately after
the operator's command to change the current;
The DAQ measures the current waveform in the supplies in cyclic mode; the measurement
period is 4 seconds.

The structure of the subsystem is shown in Fig. 1.

Fig. 1. Structural scheme of the subsystem

The server (control computer and application) interacts with the supplies via the status
and analog control.
The organization of the remote control of the magnetic field for the new corrector power
supplies was an important task for the successful functioning of the correction subsystem.
The corresponding program modules were developed and integrated into the subsystem
server to solve this task. The Client was also developed. It is a Windows application
(the operator's interface) which allows one to generate commands for supply control, to display
status data and to plot the output current versus time.

The main features of the server are:
establishing a connection with one client from the list of allowed computers;
receiving and executing client commands; the commands are executed in the same way
as from the server console;
sending information about the current supply status to the client;
sending waveforms of the supply output current after each measurement cycle of the
subsystem.

Client features (a sketch of the command exchange follows this list):
sending the operator's commands to the server;
receiving and executing commands and data from the server;
giving the operator the opportunity to edit the list of IP addresses (computers) allowed
for the remote connection to the server;
plotting the curves of the measured currents, with zooming and moving of the view
window along the time axis;
saving the curves of the measured currents in binary and graphical (JPEG) formats;
printing the curves of the measured currents on a network or local printer;
an interface similar to the server's one;
online and offline viewing, archiving and printing of the output current waveforms.
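The command exchange between the Client and the Server might be sketched as follows
(a hypothetical line-oriented protocol over TCP; the port number and command syntax are
illustrative only, not the actual Client/Server implementation):

    import socket

    def send_command(server_ip, command, port=5000):
        """Send one operator command to the subsystem server and return its reply."""
        with socket.create_connection((server_ip, port), timeout=2.0) as conn:
            conn.sendall(command.encode() + b"\n")
            return conn.recv(4096).decode(errors="replace")

    # e.g. request the status of corrector supply 7 (hypothetical command syntax):
    # print(send_command("192.168.1.10", "STATUS 7"))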

The client and server interaction is shown in Fig. 2.

Fig. 2. Block diagram of the server receiving and processing a connection

After the client was used in the two latest Nuclotron runs, the accelerator operators made
some proposals for editing the program, and they were taken into account. The main differences
between the old and new versions of the Client are as follows:
1. Status and analog control were united;
2. The number of supported supplies has been increased from 32 to 40;
3. The first four tabs were joined into one, on which all the supply control panels are
arranged (special version). This version is optimized for the full-screen (1920x1080)
mode of the application;
4. The polarity indicator was replaced with an explicit display of its value (+ or -).

Fig. 3. Client window, current version

Fig. 4. The "Measured current waveforms" tab

As a result of the work performed, the software was developed: the Client and the Server
are used for the remote control of the new supplies for the magnetic field correctors of the
Nuclotron and meet all the necessary requirements.
The new corrector control subsystem has been successfully used during the two latest runs
of the Nuclotron at the Joint Institute for Nuclear Research at the end of 2010 and in spring
2011. The subsystem has proved to be an effective instrument for accelerator tuning. There
were no serious failures in operation and no complaints from the operators [2].

References
[1] V. Andreev, V. Vasilishin, V. Volkov. The Nuclotron Control System. Nuclear Electronics
& Computing. XXI International Symposium (NEC'2007). Proc. of the Symposium, Varna,
Bulgaria, Sept. 10-17, 2007. Dubna: JINR, 2008.
[2] CERN Courier, Upgrade of Nuclotron paves the way for NICA, Vol. 51, N. 3, April, 2011, p. 10.
[3] Power supply PS140-8 Technical report.
[4] A.N. Sisakian, A.C. Sorin et al. NICA JINR, 2009.
The Status and Perspectives of the JINR 10 Gbps Network
Infrastructure

K.N. Angelov, A.E. Gushin, A.G. Dolbilov, V.V. Ivanov, V.V. Korenkov,
L.A. Popov
Joint Institute for Nuclear Research, Dubna, Russia

Introduction
The JINR Gigabit network infrastructure is an aggregation of the multifunctional
network equipment and specialized software, which form the basis of the advanced
information and computing JINR infrastructure.

The main goals of the JINR network infrastructure:
creation of the common informational space of the available computing and storage
resources in JINR;
creation of the common information space for all categories of the JINR workers,
providing the possibilities for the data exchange between the JINR divisions, and
between JINR Administration and divisions;
organization and provision of network access to the JINR central resources and to the
resources located in the JINR divisions for the different user groups;
support of the availability of the JINR GRID segment as a significant part of the
Russian Distributed Information Grid (RDIG) structure;
provision of access to Internet resources;
support of the common network services: e-mail (SMTP, IMAP, POP3, WebMail), file
exchange (ftp, scp, sftp, http, https), security (ssh, https, RADIUS and TACACS
authentication), DNS, user accounting, maintenance of the network elements database (IPDB), etc.

The JINR network infrastructure consists of the following components:
The external JINR-Moscow optical telecommunication data channel;
The JINR Local Backbone Network;
The Local Networks of the JINR divisions.
By the JINR Administration mandate, the Laboratory of Information Technologies is
responsible for maintaining the external channel and the JINR optical backbone.
The networks in the JINR divisions are supported by specialists from these divisions.


The State of the JINR external communications (data channels)
The external JINR-Moscow optical telecommunication channel uses 10 Gbps Ethernet
over DWDM, where Ethernet data frames are transmitted as a DWDM (Dense Wavelength Division
Multiplexing) signal.
The most attractive feature of the DWDM technology, compared with traditional optical
technologies which transfer one data signal over a single fiber, is the parallel transfer of
multiple digital signals over a single fiber. To achieve this remarkable property, the digital
signal stream is split into multiple chunks which are sent over a single fiber in multiple
virtual optical channels, called lambdas (λ). The modern state of DWDM allows up to 160 λs.
Each λ has its own frequency, and neighbouring λs do not overlap, which gives the possibility
of simultaneous parallel data transfer.

The OME-6500 and CPL equipment of the Nortel Systems company (Canada), based on the
DWDM technology, was installed in three places in 2009 to form the JINR-Moscow external
channel: in the central telecommunication node in LIT (room 215, building 134), in the
settlement of Radishevo (130 km from Dubna), and at the Moscow Internet Exchange MIX
(Moscow, Butlerova Street).


Fig. 1. The external JINR Moscow optical telecommunication channel

Fig. 1 shows the scheme of the interconnection between the JINR LAN and the MIX NAP
(Moscow Internet Exchange Network Access Point). The optical cable passes through three
intermediate points: Konakovo, Radishevo and Starbeevo, but the active network equipment is
installed only in one of them, Radishevo.
This particular type of equipment provides a capacity of 10 Gbps for each λ. We have
two λs, which means that the JINR telecommunication optical channel has a capacity of 20 Gbps
in total. Currently one λ is in use for the Arena project, and the second is planned to be used
for providing virtual data channels in the JINR ISP project. It is worth noting that with this
type of optical equipment we can have up to 88 λs, with a total channel capacity of 880 Gbps.

The JINR Local Network has direct connection to the following scientific, educational
and public networks:
GEANT European scientific and educational network - 10 Gbps,
RBNet Russian Backbone scientific network - 10 Gbps,
Moscow and St. Petersburg scientific and educational networks - 10 Gbps,
Internet - 10 Mbps.

The intercity data communications
The interconnections between the different networks in the city of Dubna are
organized on the basis of the specialized data exchange node DBN-IX in the JINR central
telecommunication node. This solution allows the Dubna Internet providers to organize their
traffic transfer directly, instead of sending it to the Moscow Internet Exchange and then back
to Dubna to the partner. The JINR Local Network has direct connections to the following
networks: Net-by-Net (the former LAN-Polis) - 1 Gbps, Contact - 1 Gbps, Telecom-MPK - 1 Gbps.
VPN-based remote network access is available to JINR employees from their homes via the
Net-by-Net (the former LAN-Polis), Contact and Telecom-MPK networks. It should be noted that
internetwork peering for the city ISPs is organized.

The State of the JINR Local Network
The JINR Local Network is referred to as a campus-type network (a campus is an
extended student town or a research center with multiple distributed buildings).
The LAN comprises an explicit backbone in the form of three rings reaching the
different parts of the JINR LAN, and access networks. The access networks in JINR are user
networks which serve the scientific and administrative divisions; they are maintained and
developed by the staff of the divisions. A number of L3 switches (Cisco Catalyst
3560-E) are installed in the backbone to serve as the input/output gateways for the corresponding
divisions. These switches, being part of the JINR optical transport backbone, are physically
located in the JINR divisions, but they are under the control of the JINR LIT network operations
center. The access switch ports which face the internal division structures are under the
control of the local division staff. Thus the backbone switches, the optical transport cable,
the equipment of the central telecommunication node and the external telecommunication optical
channel are under the supervision of the LIT NOC.
Since mid-2010 the JINR network uses two data rates.



Fig. 2. The current 10 Gbps JINR LAN backbone

Three laboratories, LIT, LNP and LPHE, use the 10 Gbps data rate (Fig. 2), while the rest
of the divisions use the 1 Gbps data rate. The final activities to build the complete 10 Gbps
backbone will take place in October 2011.
The monitoring of the JINR networking
The JINR network monitoring system uses several protocols: SNMP, NetFlow and
accounting. SNMP (Simple Network Management Protocol) picks up the necessary
network parameters, such as the state of the network device interfaces, the loading
of these interfaces, the activity/inactivity state of the network elements, the bandwidth of the
data communication lines, etc. The NetFlow protocol picks up the traffic data on the basis of
the host IP addresses. Accounting is a simpler network control mechanism, aimed at presenting
the events in real time; it is used for logging the network events synchronized with real time.
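As a simple illustration of what the SNMP data give, the sketch below computes the load of an
interface from two samples of its octet counter (toy numbers, not the NMIS implementation):

    def interface_load(octets_t0, octets_t1, dt_seconds, if_speed_bps, counter_bits=64):
        """Interface utilization (0..1) from two samples of an SNMP octet counter
        (e.g. ifHCInOctets), handling a single counter wrap-around."""
        delta = octets_t1 - octets_t0
        if delta < 0:                       # the counter wrapped between the samples
            delta += 2 ** counter_bits
        return (delta * 8) / (dt_seconds * if_speed_bps)

    # a 10 Gbps backbone port sampled every 300 s:
    print(interface_load(1_250_000_000, 76_250_000_000, 300, 10_000_000_000))  # -> 0.2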
The results of the monitoring are presented through a Web interface called NMIS
(Network Management Information System). The NMIS package was developed by a group of
specialists, is distributed under the terms of the GPL (GNU General Public License) and has
been adapted for use in the JINR networking environment. On the operator's request, the NMIS
system provides:
permanent monitoring of the JINR Network state;
the general network status and the status of the main network elements (routers
and switches) of the JINR Network;
the status and usage of the critical JINR network elements.

The Control (monitoring) of the networking events consists of the following elements
(subsystems):
Data Collection System of the Control Information (SNMP, accounting, and netflow);
The Storage System for the collected data information;
The System of processing and visualization of the stored information.



Fig. 3. The complete structure of JINR LAN

Fig. 3 shows the structural view of the JINR 10 Gbps Ethernet network, which
includes Cisco 7606 edge routers with optical links (upper lines in red) to the JINR ISP in
Moscow and two central core Layer 3 Cisco Catalyst 6509 switches. These two 6509 switches
drive all the Layer 3 backbone Cisco 3560E switches installed in the JINR LAN backbone for
every main division (9 divisions in total). In addition, two flow collectors (a primary one and
a backup) are connected to the Catalyst 6509 switches for the purpose of traffic analysis.



Fig. 4. The example of the IPDB screenshots

Fig. 4 shows the output produced by the IPDB system on the operator's request. The
screenshot presents the data transferred through the JINR network in 2010. The data are
presented as incoming into the LAN and outgoing from the LAN, and all the data are
differentiated by the division they relate to.
In Fig. 5 one can check the interface statuses for a particular node (host), in this case
the JINR LAN authorization system.
The authorization system is one of the most important and critical systems for the network
of any organization; it is one of the crucial elements of the organization's security system.
Authorization and accounting in JINR are organized on the basis of IPDB, which uses the
following protocols: LDAP, Kerberos, RADIUS and TACACS. IPDB is the database of the JINR
network nodes, used for deploying the sophisticated JINR LAN structure through its own web
interface.
The activities supported by IPDB (authorization, authentication and accounting) are:
registration and authorization of the JINR network elements and JINR Network users;
visualization of the JINR network traffic statistics;
maintaining the VPN database of remote users;
holding the user database of the electronic libraries;
working with the user database of the JINR computational cluster.



Fig. 5. The example of the NMIS screenshots


Fig. 6

Fig. 6 presents the detailed information related to a particular node (host) and
information about a user, logged in to the node.
The Status and Statistics on the base of IPDB and NMIS
The most interesting statistical data related to the networking are presented below.
These data are taken from the IPDB and NMIS packages.
Table 1 shows the contents of the IPDB database.
Table 1
Operator (Administrators of the JINR divisions) 43
Users 3530
Network Elements 7050
IP addresses 8495
Remote Access (total/ per month) 1188 / 15
Electronic Libraries (total/ per month) 964 / 20
AFS users 365
IPDB transactions, thousand per year 160
Number of the Scientific Networks 270

Table 2 shows the external traffic incoming to the JINR Network.
Table 2

Year:                     2005  2006  2007  2008  2009  2010  2011
Incoming traffic, TByte:    46    83   242   376   536  1399  1465


Table 3 shows the distribution of the traffic by category for the year 2011.
Table 3. Distribution of the traffic. List of the categories

N  Category                                  Abbreviation  Incoming (IN)  Outgoing (OUT)  % of IN  % of OUT
1  Scientific and educational organizations  SCIENCE        1137.86 Tb     535.22 Tb      89.41 %  77.88 %
2  File exchange (p2p)                       P2P             108.11 Tb     146.31 Tb       8.5 %   21.29 %
3  Web resources                             WEB              18.15 Tb       3.9 Tb        1.43 %   0.57 %
4  Social networks                           SOCIAL_NET        3.22 Tb     134.56 Gb       0.25 %   0.02 %
5  Software                                  SOFTWARE          2.5 Tb      269.3 Gb        0.2 %    0.04 %
6  Multimedia flows                          MM_STREAM         2.18 Tb     251.94 Gb       0.17 %   0.04 %
7  Dubna networks                            DUBNA           621.31 Gb       1.19 Tb       0.05 %   0.17 %
8  Erotic sites                              PORN            331.78 Mb      23.49 Mb       0 %      0 %
   Total:                                                   1272.62 Tb     687.26 Tb     100 %    100 %

Table 4 shows the Top 10 scientific and educational organizations (by subcategory) for the
year 2010.
Table 4

N   Category                                           Abbreviation  Incoming (->JINR)  Outgoing (JINR->)  % of IN  % of OUT
1   Science Park Watergraafsmeer, Amsterdam (NL)       SARA           265.75 Tb          36.57 Tb          23.36 %   6.83 %
2   Nationaal instituut voor subatomaire fysica (NL)   NIKHEF         161.47 Tb           2.59 Tb          14.19 %   0.48 %
3   Fermi National Accelerator Laboratory (US)         FNAL           115.82 Tb          11.19 Tb          10.18 %   2.09 %
4   Istituto Nazionale di Fisica Nucleare (IT)         INFN            77.04 Tb          22.12 Tb           6.77 %   4.13 %
5   European Organization for Nuclear Research (SE)    CERN            60.95 Tb          71.93 Tb           5.36 %  13.44 %
6   Forschungszentrum Karlsruhe (DE)                   KFK-ULTRA       59.47 Tb           2.88 Tb           5.23 %   0.54 %
7   Rutherford Appleton Laboratory (GB)                RAL             56.76 Tb           8.63 Tb           4.99 %   1.61 %
8   Institut National de Physique Nucleaire (FR)       IN2P3           52.18 Tb          17.32 Tb           4.59 %   3.24 %
9   Consorci Institut de Fisica Altes Energies (ES)    PIC             39.33 Tb          22.15 Tb           3.46 %   4.14 %
10  Academia Sinica Computing Centre (TW)              ASCC            20.04 Tb          46.1 Tb            1.76 %   8.61 %
    Total:                                                           1137.84 Tb         535.21 Tb         100 %    100 %

The dynamics of the JINR external data channel capacity is shown in Table 5.
Table 5

Year:                    1999  2001  2003  2006   2009    2010        2011
Channel capacity, Mb/s:     2    30    45   622  10 000   2 x 10 000  2 x 10 000

The dynamics of the JINR LAN data rate is shown in Table 6.
Table 6

Year:              1986   1994  1998  2002   2004    2010    2011
Data rate, Mb/s:   0.512    10   155   100   1 000  10 000  10 000
Summary
The main tasks of the development of the JINR network infrastructure fulfilled in
2010-2011 are:
putting into operation the complete 10 Gbps backbone of the JINR Local Network;
the approach for provisioning external customers with virtual data channels.
The basis of the virtual channel service is the fault-resistant design of the network
structure: hardware, software, the means of on-the-fly diagnostics, and on-line monitoring of
the network health. The second important part of this task is obtaining a specialized license
for offering the virtual channels to corporate users in Dubna.

The long-term perspective of the JINR networking development is the quantitative growth
of the main networking parameter, the data rate. As already mentioned, the aggregate speed of
the external data channel can reach 880 Gbps. The speed of the JINR data backbone will also be
increased, but not as quickly as that of the external data channel. We do not expect that the
workstation interfaces will reach 10 Gbps in the near future.

More attention should be paid to all aspects of the JINR networking in order to satisfy
the demands of the users, who utilize more and more sophisticated applications in their
everyday work. The efforts of the Network Operations Center specialists should be applied to
the following:
supporting the exploration and extension of multimedia applications for the JINR users;
integrating the low-level network monitoring with the GRID monitoring to obtain a fast
and reliable problem-solving mechanism;
improving the reliability, availability and quality of the network, the external data
channels and all the provided network services.

References
[1] A. Tannenbaum. Networks. St. Petersburg, 2003, p. 948 (in Russian).
[2] W. Stallings. Modern Networks. St. Petersburg, 2003, p. 783 (in Russian).
[3] B.A. Bezrukov, A.G. Dolbilov, A.E. Gushin, I.A. Emelin, S.V. Medved, L.A. Popov. JINR
Gigabit Ethernet Backbone. In Annual Report 2003, Laboratory of Information Technologies. Ed.
by Gh. Adam, V.V. Ivanov and T.A. Strizh. JINR, Dubna, 2004, p. 11-14 (in Russian).
[4] K.N. Angelov, B.A. Bezrukov, A.E. Gushin, A.G. Dolbilov, I.A. Emelin, V.V. Ivanov,
S.V. Medved, L.A. Popov. JINR Gigabit network infrastructure, services and security. In
Scientific report 2004-2005, Laboratory of Information Technologies. Ed. by Gh. Adam,
V.V. Ivanov and T.A. Strizh. JINR, Dubna, 2005, p. 15-24. (in Russian)
[5] K.N. Angelov, A.G. Dolbilov, I.A. Emelin, A.E. Gushin, V.V. Ivanov, V.V. Korenkov,
L.A. Popov, V.P. Sheiko, D.A. Vagin. 2006 2007 JINR networking results. In Scientific
report 2006-2007, Laboratory of Information Technologies. Ed. by Gh. Adam, V.V. Ivanov
and T.A. Strizh. JINR, Dubna, 2007, pp. 29-32.
[6] K.N. Angelov, A.G. Dolbilov, I.A. Emelin, A.E. Gushin, V.V. Ivanov, V.V. Korenkov,
L.A. Popov. 2008 2009 JINR networking results. In Scientific report 2008 -2009,
Laboratory of Information Technologies. Ed. by Gh. Adam, V.V. Ivanov, V.V. Korenkov,
T.A. Strizh, P.V. Zrelov. JINR, Dubna, 2009, pp. 17-19.


The TOTEM Roman Pot Electronics System

G. Antchev
On behalf of the TOTEM Collaboration
INRNE-BAS, Sofia, Bulgaria

The TOTEM experiment has three sub-detectors: Roman Pots (RP) with silicon strips, the T1 detector
with Cathode Strip Chambers (CSC) and T2 with Gas Electron Multiplier detectors (GEM). The RP detectors are
located in the straight sections of the LHC tunnel on both sides of the CMS experiment at IP5. The TOTEM RP
Electronics System consists of the following main components: the front-end with the VFAT2 chip mounted on
the RP hybrids for tracking and trigger generation; the on-detector electronics based on the RP Motherboard
(RPMB) for data conversion and transmission, and the counting room electronics with data acquisition and
trigger systems based on the TOTEM Front End Driver (TOTFED). A detailed overview of the TOTEM Roman
Pot Electronics System and its components is presented in this paper.

1. Introduction
TOTEM (Total Cross Section, Elastic Scattering and Diffraction Dissociation
Measurements) [[1]] is an experiment dedicated to the measurement of total cross section,
elastic scattering and diffractive processes at the LHC. The full TOTEM detector is composed
of Roman Pot Stations (RPS), Cathode Strip Chambers T1 (CSC) and Gas Electron
Multipliers T2 (GEM). The T1 and T2 detectors are located on each side of the CMS
interaction point in the forward region, but still within the CMS cavern (Fig. 1).

Fig. 1. Top: The TOTEM forward detectors T1 and T2 embedded in the CMS detector.
Bottom: The LHC beam line and the Roman Pots at 147 m (RP147) and 220 m (RP220) sector 5-6

Two Roman Pot (RP) stations are installed in the straight section of the LHC tunnel on
each side of the interaction point, at 220 m and 147 m. Each RP station consists of two groups
of three RPs separated by a few meters, to obtain a sufficiently large lever arm for
establishing the collinearity of the tracks with the LHC beam prior to generating a level 1
trigger for the corresponding event. Each group contains three Roman Pots so that the beam can
be approached with detector stacks from three different sides (top, bottom and one horizontal
side; the other side is not accessible due to the presence of the second beam pipe). Each RP
contains 10 silicon strip detectors with 512 strips.

2. General design specifications
TOTEM needs to operate both as a standalone experiment and as a sub-detector of
CMS. This requires full compatibility with CMS. The RPs need to participate in the trigger
building with a high degree of flexibility. All this requires the use of several CMS
components. Standardization was necessary across the TOTEM sub-detectors for the
integrated circuits development and the counting room hardware. This leads to similar
systems for all three sub-detectors: they work with the same front end chips but with different
front-end boards compatible with their specific channel segmentation and geometry.

3. System Overview
Fig. 2 shows the TOTEM RP Electronics System Basic Block Diagram.



Fig. 2. The TOTEM RP Electronics System Basic Block Diagram

The TOTEM RP Electronics system is divided physically in two levels: on-detector
electronics and counting room electronics [[2]]. The on-detector electronics are in the tunnel
and are electrically isolated from the counting room via floating power supplies, optical signal
transmission and electrical transmitters with optocouplers. Due to the radiation requirements,
the low voltage power supplies are located in the closest alcove in the tunnel up to 70 m from
the RP. The high voltage power supplies are located in the counting room. The two levels of
the system are more than 200m apart.

4. On-detector Electronics
In the following section, the system will be described in more detail starting from the
on-detector electronics and moving up to the counting room electronics.

4.1. VFAT 2 readout chip
The VFAT2 front end ASIC [[3]] provides tracking and trigger building data. The
VFAT2 chip provides binary tracking data (1 bit per channel and per event). All data
corresponding to a triggered event is transmitted without zero suppression. The ~160 8 bit
registers controlling the VFAT2 chip are programmable through its I2C interface. The
VFAT2 includes a counter on its fast trigger outputs to monitor hit rates. Fig. 3 presents a
photo of the chip.
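As an illustration of this kind of slow control, a register write with read-back over I2C might
look like the sketch below (the chip address and register offset are assumptions, not the actual
VFAT2 register map; the smbus2 package is used for the bus access):

    from smbus2 import SMBus

    VFAT2_I2C_ADDR = 0x20      # assumed 7-bit chip address
    REG_CONTROL0 = 0x00        # assumed register offset

    def write_and_verify(bus_id, reg, value):
        """Write one 8-bit configuration register and read it back to verify."""
        with SMBus(bus_id) as bus:
            bus.write_byte_data(VFAT2_I2C_ADDR, reg, value)
            return bus.read_byte_data(VFAT2_I2C_ADDR, reg) == value

    # e.g. set an assumed RUN bit in control register 0:
    # print(write_and_verify(1, REG_CONTROL0, 0x01))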




Fig. 3. The VFAT 2 chip

4.2. The Silicon Detector Hybrid
The silicon detector hybrid (Fig. 4) carries the detector, 4 VFAT2 readout chips and a
Detector Control Unit (DCU) chip. The hybrid is connected to the outside world by means of
an 80-pin connector linked to a flat cable. The VFAT2 is biased internally, so as to simplify
the design of the hybrid.



Fig. 4. The Silicon Detector Hybrid

Each VFAT2 sends 4 trigger outputs to the motherboard for further coincidence,
resulting in 16 trigger outputs per hybrid (hence 32 LVDS wires, occupying 40 % of the
connector). The connector linking the hybrid to the motherboard also carries the HV and LV
power, the clock and trigger signals, and the PT100 connections for temperature control. The
strips on the detector form a 45 degree angle with the edge close to the beam. Flipping the
detector hybrid and mounting it face-to-face with the next one results in orthogonal strips,
giving the U and V coordinate information. A picture of the detector package is shown in Fig. 5.



Fig. 5. Detector package

All electrical components are mounted on one side (the right looking from the top) to
avoid losing space between the hybrids.

4.3. The Roman Pot Motherboard
The TOTEM Roman Pot Motherboard (RPMB) [[4]] is the interface between the
hybrids with silicon detectors and front end chips in the Roman Pots, and the outside world.
The RPMB is glued into the vacuum flange which separates the vacuum chamber containing the
detector hybrids, and forms the feed-through between vacuum and atmosphere. The hybrids
have a flexible part with an on-board connector for connection to the motherboard. The
motherboard is equipped with connectors to the detector hybrids on one side and a front
panel with connectors to the patch panel on the other side.
The RPMB needs to provide power and control, as well as clock and trigger
information, to the 10 hybrids. It acquires tracking and trigger data from the hybrids,
performs data conversion from electrical to optical format and transfers the data to the next
level of the system. It also collects information such as temperature, pressure and radiation
dose inside the pot. Fig. 6 shows a picture of the RPMB and mezzanines, and Fig. 7 presents
the functional block diagram.



Fig. 6. The picture of the RPMB and mezzanines


Apart from the electrical functionality described in detail below, the design of the
RPMB was constrained by the mechanics and by radiation tolerance.
The RPMB has to fit in the Roman Pot mechanics, connect to 10 hybrids in a
secondary vacuum (the primary vacuum is that of the machine within the beam pipe; the
primary and secondary vacuum are separated by a window about 100 µm thick), and
feed through about 800 signals to and from the outside world. The connections to the outside
are naturally on the end opposite to the hybrids. The maximum width of the feedthrough for
these 800 signals is about 12 cm, and together with the other size limitations this results in a
very challenging 16-layer layout for the RPMB.
The RPMB is also subject to radiation, requiring all components to be radiation
tolerant. In particular, all on-board integrated circuits are full-custom circuits designed in
0.25 µm CMOS technology with special techniques to increase the radiation tolerance
[5, 6].
The board also had to be produced in halogen-free material because of safety
regulations.



Fig. 7. The RPMB Functional Block Diagram

The following are the general building blocks of the motherboard: clock and trigger
distribution circuitry; Gigabit Optical Hybrids (GOH), three for data and two for trigger bit
transfer; two Coincidence Chip mezzanines; LVDS-to-CMOS converters; the Trigger VFAT2
mezzanine; the Communication and Control Unit mezzanine (CCUM); radiation monitor
circuitry and temperature sensors.
1) Power Distribution
The RPMB needs to receive low voltage power at 2.5 V for its own operation, and for
the operation of the hybrids. The power on the hybrids has been carefully separated between
analogue and digital blocks, both powered at 2.5 V.
The silicon detectors need to be biased up to 500 V after irradiation. The RPMB
receives this high voltage supply and distributes it to the detector hybrids. The supply is
separate for all detectors; grouping is done in the counting room. This allows isolating
defective detectors from the rest if needed.
2) The slow control
The slow control system is the same as for the CMS Tracker and ECAL detectors
[7]. A FEC-CCS board in the counting room sends and receives optical control data. A
Digital Opto-Hybrid Module (DOHM) converts these data back to electrical form and
interfaces with the RPMB via two 20-pin 3M high-speed connectors placed on the front panel.
A Communication and Control Unit mezzanine (CCUM) on the RPMB (Fig. 8)
decodes this information and provides 16 I2C interface channels and one 8-bit parallel control
port for use on the RPMB. All integrated circuits including the VFAT2 are controlled using
these I2C interfaces.


Fig. 8. CCUM Mezzanine photo

In addition to the slow control information transmitted over I2C, several sensors
mounted on the RPMB or on the hybrids provide additional information like temperature,
pressure and radiation dose data.
PT100/1000 sensors are used for temperature, and a piezoelectric pressure sensor
measures the pressure inside the pot.
A special small carrier card (RADMON) [8] is used for radiation monitoring on the
RPMB. This carrier is made of a thin (~500 µm) double-sided PCB. It can host up to 5 p-i-n
diodes and five RadFETs mounted inside a proper package. It also includes a temperature
sensor (10 kΩ NTC). A photo of the carrier is shown in Fig. 9.



Fig. 9. RADMON carrier photo

3) Clock and Fast Commands
The FEC-CCS card receives the clock and fast commands in the counting room and
includes them with the slow control data for transmission to the detector over the same channel
as the slow control. On the RPMB, the clock and fast command signals are reconstructed by the
PLL25 chip. The QPLL, a quartz-based PLL, is used to further reduce the clock jitter, as required
for serialization and optical transmission of data. The clock and fast command tree has been
designed to minimize timing spread over all components on the RPMB.


4) Tracking Data transmission
The data sent by the VFAT2 front end chips upon a level 1 trigger signal are converted
from LVDS to CMOS on the RPMB and then presented to the Gigabit Optical Hybrid (GOH)
modules, which serialize and convert the electrical data to optical form for transmission to the Data
Acquisition (DAQ) system in the counting room. Three GOH modules are used to send data
from 40 VFAT2 chips.

5) Trigger Data generation and Transmission
Each VFAT2 front end chip has 8 trigger outputs, 4 of which are used in the Roman
Pots. Every hybrid therefore generates 16 trigger outputs; 5 hybrids have the same
orientation of the silicon strips (U coordinate), and the other 5 have strips oriented at
90 degrees (V coordinate). The trigger signals are put into coincidence using two separate
Coincidence Chips (CC), one for the U and one for the V coordinate. The CC chips are
mounted on the RPMB as mezzanine cards (CC mezzanines), one mezzanine per CC. Fig. 10
shows a photo of the CC mezzanine.
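A simplified software model of the plane coincidence performed in one projection is sketched below; the real Coincidence Chip is a programmable full-custom ASIC with more elaborate logic, so the majority-of-planes condition used here is only an illustrative assumption.

    # Illustrative sketch of one Coincidence Chip (one projection); the real CC logic
    # is programmable, the majority condition below is an assumption.

    def sector_coincidence(plane_words, majority=3):
        """plane_words: five 16-bit words of sector bits (one per hybrid of one projection).
        Returns a 16-bit word with a bit set where at least `majority` planes fired."""
        out = 0
        for sector in range(16):
            hits = sum((word >> sector) & 1 for word in plane_words)
            if hits >= majority:
                out |= 1 << sector
        return out

    # Example: all five planes fired in sector 0, only two planes in sector 3.
    print(bin(sector_coincidence([0b1001, 0b0001, 0b0001, 0b1001, 0b0001])))  # 0b1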



Fig. 10. CC Mezzanine photo

The CC provides 16 outputs (so the number of trigger signals is reduced from 2x80 to
2x16), and these signals have to be transmitted to the counting room.
For these coincidences a full custom chip was developed instead of using a Field
Programmable Gate Array. There are two reasons for this:
- The latency constraints on the generation of the trigger bits (especially from the
Roman Pots) are very severe: after subtraction of cable delays, only about 8-10 bunch
crossings are left for the generation of the trigger signals to be provided to CMS from the
signals generated by the Roman Pot. A full custom chip with dedicated logic can implement
the required coincidence in one clock cycle;
- The CC needs to be placed on the RPMB, or at least near the detectors, and is
therefore subject to radiation. Special design techniques were used to make the CC much
more robust against radiation than a standard FPGA, both with regard to total dose and to
single event effects. The CC mezzanine was designed to carry one Coincidence Chip and two
130-pin input/output connectors.
To transmit trigger bits to the counting room, two ways have been selected: optical
fibres are used for the RP stations at 147 m and, in TOTEM standalone runs, also for the
stations at 220 m. The runs with CMS, on the other hand, are subject to CMS's limited trigger
latency, imposing trigger bit transmission with LVDS signals through fast electrical cables,
because serialization, deserialization and optical transmission in the fibre (~5 ns/m)
take too much time. The electrical transmission over such a long distance requires great care
to preserve signal integrity. This can only be achieved by restoring the LVDS signals to full
levels at regular intervals over the transmission distance. A special integrated circuit was
designed for this purpose: the LVDS repeater chip can treat 16 LVDS channels in parallel and
was designed with a special layout to guarantee radiation tolerance. This chip is mounted on a
small repeater board. A repeater station consists of 12 repeater boards (one for every cable
carrying 16 LVDS signals); the stations are placed at regular intervals of about 70 m.
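To put the quoted figures in perspective, the short sketch below converts the ~5 ns/m fibre propagation delay into LHC bunch crossings (25 ns each) for the 220 m station; the propagation speed assumed for the fast electrical cable is an illustrative value, and the serialization/deserialization time of the optical link is not included.

    # Scale of the transmission delays for trigger bits from the 220 m station.
    distance_m = 220.0
    bx_ns = 25.0                        # LHC bunch-crossing period
    fibre_ns_per_m = 5.0                # quoted in the text for the optical path
    cable_ns_per_m = 4.2                # assumed value for a fast electrical cable

    fibre_bx = distance_m * fibre_ns_per_m / bx_ns    # ~44 bunch crossings
    cable_bx = distance_m * cable_ns_per_m / bx_ns    # ~37 bunch crossings
    print(fibre_bx - cable_bx)   # ~7 bunch crossings saved on propagation alone, before
                                 # the serialization/deserialization overhead of the fibre link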
Since the trigger signals are sent on every clock cycle, some time reference has to be
included in the trigger data stream to facilitate recovering the correspondence between the
event and the transmitted bits. This is achieved through the VFAT2 trigger mezzanine which
is set to receive the fast command bunch crossing 0 (BC0) and generate the corresponding
output (Fig. 11). As a result, the data valid signal of the GOHs is disabled upon reception of the
BC0 signal for the duration of one clock cycle. This can be recognized in the counting room,
and provides the time reference.
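The way the counting room can recover the bunch-crossing alignment from that one-cycle gap can be sketched as follows; the stream representation (a per-clock list of (data_valid, trigger_bits) tuples) is an assumption made only for illustration, not the actual firmware interface.

    # Illustrative recovery of the BC0 time reference from the trigger stream.
    # stream: list of (data_valid, trigger_bits) per clock cycle (assumed representation).

    def bunch_crossing_numbers(stream, orbit_length=3564):
        """Assign a bunch-crossing number to every clock cycle, restarting the
        counter at each gap in data_valid (which marks the BC0 fast command)."""
        bc = None
        numbered = []
        for data_valid, bits in stream:
            if not data_valid:          # one-cycle gap -> this cycle is BC0
                bc = 0
            elif bc is not None:
                bc = (bc + 1) % orbit_length
            numbered.append((bc, bits))
        return numbered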

Fig. 11. Trigger VFAT2 Mezzanine

In addition, the VFAT2 trigger mezzanine records the trigger bits and merges them
upon a level 1 trigger with the tracking data, so that the trigger bits which lead to a triggered
event are recorded with the tracking data from that event.

4.4. Digital Opto Hybrid Module
The control, timing and trigger information per RP station is handled by one TTC ring
which follows the CMS standard. A Digital Opto Hybrid Module (DOHM) receives and sends
the optical information using two Digital Opto Hybrid (DOH) mezzanines. It converts these
optical signals from and to electrical signals for the token ring. A photo of the DOHM module
is presented in Fig. 12.



Fig. 12. DOHM photo

5. Counting Room Electronics
The counting room electronics have been fully standardized across all TOTEM
detectors and the same hardware is used for data readout and trigger signal generation.

5.1. The TOTEM Front End Driver
The TOTEM Front End Driver, the so-called TOTFED, receives and handles trigger
building and tracking data from the TOTEM detectors, and interfaces to the global trigger and
data acquisition systems. The TOTFED is based on the VME64x standard and has
deliberately been kept modular [9].
The TOTEM Front End Driver (TOTFED) functional blocks are shown in Fig. 13.
The general blocks are: Optical Receiver Modules (OptoRX12); CMC Transmitter, based on
the S-Link64 interface; VME64x Interface; USB Interfaces; MAIN and MERGER Controllers
with associated SPY Memory Buffer and Clock distribution circuits.


Fig. 13. TOTFED Functional Block Diagram

The CMS ECAL group has developed the Data Concentrator Card (DCC) [10]. The
board has 72 optical 800 Mbit/s inputs implemented in 6 NGK 12-Channel Receivers.
The input count of this board is very close to the full GOL count for the Roman Pots (72) and
about twice the GOL count of the GEM detectors. However, the data and trigger information
content of the TOTEM GOLs is much higher: the TOTEM data density is such that only
9 optical channels would completely saturate one S-Link64, thus requiring one DCC for every
9 channels, an extremely inefficient solution. A new module has therefore been designed,
reusing the previous development as much as possible.
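The saturation argument can be checked with simple throughput arithmetic; the S-Link64 peak throughput (64 bits at 100 MHz) and the 8b/10b encoding overhead of the optical links are assumptions of this back-of-the-envelope sketch.

    # Back-of-the-envelope check of "9 optical channels saturate one S-Link64".
    gol_line_rate_bps = 800e6          # 800 Mbit/s per optical input (from the text)
    payload_fraction = 0.8             # assumed 8b/10b encoding on the optical link
    slink64_peak_Bps = 64 / 8 * 100e6  # assumed S-Link64 peak: 64 bits at 100 MHz = 800 MB/s

    channels = 9
    payload_Bps = channels * gol_line_rate_bps * payload_fraction / 8
    print(payload_Bps / 1e6, "MB/s vs", slink64_peak_Bps / 1e6, "MB/s peak")  # ~720 vs 800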

1) Design Strategy and Components
The TOTFED has been designed as a modular device. It is built from a set of
mezzanine cards plugged onto a main motherboard known as the VME64x Host Board.
The expensive optical components are mounted solely on mezzanine cards, so that
they can be tested separately and preserved if the motherboard is defective. Motherboards can
be equipped with a fraction of the total number of mezzanines, and some of the mezzanines
can be different depending on the application. For example, the base configuration for the
CMS Preshower application has three OptoRX12 mezzanines associated with a single S-Link64
(and FRL), but it is possible to incorporate a further mezzanine card to aid data suppression
prior to the S-Link64 [12]. For the TOTEM application the incoming data are distributed over
three FRLs. The TOTFED is intended for operation in a VME environment in the experiment but
is also equipped with USB ports to allow standalone operation. This is being implemented on the
basis of previous CMS Preshower work [13]. A photo of the TOTFED is presented in Fig. 14.



Fig. 14. TOTFED photo

2) VME64x Host Board
The VME64x Host Board is in 9U VME64x format. It is a motherboard that accepts
different mezzanine modules. It has PMC connectors for three OptoRX12 modules and three
other sets of PMC connectors for optional use. The MAIN and MERGER functional blocks
are implemented in FPGAs from the Altera Stratix family. Every OptoRX12 has its associated
MAIN controller connected via a 192-bit bus. The MERGER shares part of this bus: 64 bits
from each MAIN controller. The VME64x Interface is implemented in a further FPGA from
the Altera Cyclone family and is configured as a bridge between the VME and Local Bus.
The VME64x Host Board includes the TTCrx [14] ASIC with associated chipset for
receiving the TTC signals and distributing the decoded information across the board. The
TTC signals are provided to the TTCrx optically and/or electrically. For the optical interface
with the TTCrx, an associated optical receiver is used. For the electrical interface, an
additional connector on the back side of the card is used. The same connector is used to
provide an extra flag that signals possible buffer overflow (trigger throttling signal). On top of
every OptoRX12 it is possible to plug in a dedicated CMC Transmitter module, which
connects the TOTFED to the DAQ system. There is also the possibility to connect a fourth
CMC Transmitter module to the VME J2 connector and an additional rear adapter.
A flexible JTAG programming interface is used for reconfiguring all the on-board
FPGAs (in a variety of ways), including those hosted by the mezzanine modules.

3) The OptoRX12 module
The OptoRX12 is a general purpose plug-in module used for the reception of optically
transmitted data in gigabit applications. It is based on a 12-channel optical receiver and an
FPGA from the Altera Stratix GX family with embedded hardware de-serializers qualified for
data rates up to ~3.2 Gbps. The FPGA embedded de-serializers are compatible with the
Gigabit Ethernet protocol/encoding. For the interconnection with the VME64x Host Board,
the module incorporates an electrical interface (using five 64-pin PMC type connectors). The
electrical interface comprises dedicated pins for powering, clocking and configuration via JTAG,
as well as a large number (280) of lines driven from the FPGA's I/O pins. This large number
of lines provides the de-serialized data from all 12 channels in parallel. Although the total
number of interconnections is large, the physical dimensions of the OptoRX12 were kept
relatively small (115 mm x 75 mm), allowing up to three of these modules to be plugged into a
VME64x Host Board (340 mm x 360 mm). Fig. 15 shows a photograph of the module. Details
about the OptoRX12 can be found in [11].
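The width of this parallel interface is consistent with the 192-bit bus to the MAIN controller mentioned in the previous section, as the arithmetic sketch below shows; the 8b/10b encoding and the 40 MHz word clock are assumptions consistent with an 800 Mbit/s Gigabit-Ethernet-style link.

    # Why 12 deserialized channels map naturally onto a 192-bit parallel bus.
    line_rate_bps = 800e6        # per optical channel (from the text)
    encoding_overhead = 10 / 8   # assumed 8b/10b encoding
    word_clock_hz = 40e6         # assumed LHC-synchronous word clock

    payload_bps = line_rate_bps / encoding_overhead          # 640 Mbit/s per channel
    bits_per_word = payload_bps / word_clock_hz              # 16 bits per 25 ns
    print(12 * bits_per_word)                                # 192 parallel data lines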



Fig. 15. The OptoRX12 mezzanine module

5.2. The RP DAQ System
The RP Data Acquisition (DAQ) system is built around the TOTFED unit. The main
objectives of this system are to acquire on-detector data from up to 72 optical links, perform
on-line data formatting and pass the data to the second level of the system. The DAQ is also
used for initialization and calibration of the front-end VFAT2 chips. It is meant to operate in
two modes: stand-alone data taking at up to ~1 kHz, and, at a later stage, data taking
integrated into the CMS DAQ system. In the stand-alone mode the VME64x bus is used to
read out the data from the detectors at ~40 MB/s, while in the CMS-integrated mode the
S-Link64 interface is used at the higher rate of ~200 MB/s. The total number of Si detectors to
be read out is 240, with 122880 channels served by 960 VFAT2 chips.
Four TOTFED units are mounted in one crate together with the Front-End Controller
(FEC) and Trigger and Timing Control (TTCci) units. The photo of the RP DAQ crate is
shown in Fig. 16.


Fig. 16. The RP DAQ crate

The data from the crate are transferred to the TOTEM DAQ cluster. The cluster is a set
of PCs for event building and storage. The required rates and capacities are accommodated by a
medium-scale storage and transfer system based on Fibre Channel and SCSI technology.
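The quoted channel counts and readout rates are mutually consistent, as the estimate below shows; the per-chip overhead for headers and counters is an assumed round number, since the binary payload alone is 1 bit per channel.

    # Consistency check of the RP DAQ numbers quoted above.
    n_vfat2 = 960
    channels = 122880                      # = 960 chips x 128 channels
    overhead_bits_per_chip = 64            # assumed header/counter/CRC overhead per chip

    event_bytes = (channels + n_vfat2 * overhead_bits_per_chip) / 8   # ~23 kB per event
    standalone_rate_hz = 1e3
    print(event_bytes * standalone_rate_hz / 1e6, "MB/s")  # ~23 MB/s, within the ~40 MB/s
                                                           # quoted for VME64x readout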

5.3. The RP Trigger System
As mentioned above, the second function of the VFAT2, triggering, is to
provide programmable fast-OR information based on the region of the sensor that was hit. This can
be used for the creation of a level 1 trigger. The Sector outputs (S-bits) of the VFAT2 give the
result of internal fast OR operations within one clock cycle. These S-bits are in LVDS format.
A Coincidence chip then performs coincidence operations between VFAT sector outputs.
The outputs from the CC chip (still in LVDS format) are then converted to CMOS
levels by dedicated LVDS-to-CMOS converters. Using the GOH, the trigger data is then
serialized and transmitted at 800 Mb/s to the counting room. The VME64x Host Board
equipped with OptoRX12 receivers and additional mezzanines composes the trigger system.
After conversion to electrical signals, the trigger information is analyzed in the on-board FPGAs
and transferred to the local global trigger generator board LONEG. This is a mezzanine
board plugged onto another VME64x Host Board to build the so-called Trigger TOTFED. Since
the same hardware and software are used to read out the Trigger TOTFED, it is easy to
integrate the trigger data into the data stream from the readout TOTFED. A photo of the
Trigger crate is shown in Fig. 17.



Fig. 17. The TOTEM Trigger crate

Inside the FPGA of the Trigger TOTFED board, complex algorithms can be
executed in order to prepare the trigger primitives for the global L1 trigger.
The RP station RP220 is so far away from the counting room that optically transmitted
trigger data would not arrive within the latency allowed by the trigger of CMS. Therefore, in
addition to optical transmission for TOTEM runs, the electrical transmission with LVDS
signals was implemented for common runs with CMS. To maintain the electrical isolation
between the detector and the counting room, optocouplers are used to receive these
electrically transmitted signals. At regular intervals of about 70 m along the total cable length
of 270 m, repeaters based on a custom-designed LVDS repeater chip are inserted to preserve
the electrical signal quality.



6. Summary
The Roman Pot electronics system was built on a modular principle and is based on
the common hardware and software developed within the framework of the TOTEM collaboration.
Using the same hardware in the counting room for tracking data and trigger building opens
possibilities for common firmware developments. The system can be used in stand-alone
mode and also in the CMS experiment. Proton-proton elastic scattering has been
measured by the TOTEM experiment at CERN in special dedicated runs with Roman Pot
detectors, using the electronics system described above. The results are presented in [15].

References
[1] G. Anelli et al. The TOTEM Experiment at the CERN Large Hadron Collider, 2008 JINST
3 S08007, http://iopscience.iop.org/1748-0221/3/08/S08007
[2] W. Snoeys et al. The TOTEM electronics system. TWEPP07, 3-7 Sep. 2007, Prague,
http://cdsweb.cern.ch/record/1089268/files/p205.pdf
[3] P. Aspell et al. VFAT2: A front-end system on chip. TWEPP07, 3-7 Sep. 2007, Prague,
http://cdsweb.cern.ch/record/1069906/files/p292.pdf
[4] G. Antchev. The TOTEM Roman Pot Motherboard, Topical Workshop on Electronics for
Particle Physics, Naxos, Greece, 15 - 19 Sep. 2008,
http://cdsweb.cern.ch/record/1159538/files/p446.pdf
[5] G. Anelli et al. Radiation Tolerant VLSI Circuits in Standard Deep Submicron CMOS
Technologies: practical design aspects. IEEE Transactions on Nuclear Science, V. 46, No. 6,
Dec. 1999, pp. 1690-6, http://cdsweb.cern.ch/record/428128/files/cer-002177437.pdf
[6] F. Faccio et al. Total dose and single event effects (SEE) in a 0.25 µm technology. Workshop on
Electronics for LHC experiments, Rome, Sep. 1998,
http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.22.4359&rep=rep1&type=pdf
[7] F. Drouhin et al. The CERN CMS tracker control system. Nuclear Science Symposium
Conference Record, IEEE, Oct. 2004, V. 2, pp. 1196-1200,
http://ieeexplore.ieee.org/stamp/stamp.jsp?arnumber=01462417
[8] TOTEM Online Radiation Monitoring System EDMS,
https://edms.cern.ch/file/874945/0.8/RadiationMonitoringSystem.pdf
[9] G. Antchev et al. The TOTEM Front End Driver, its Components and Applications in the
TOTEM Experiment. TWEPP07 Prague, 03 - 07 Sep. 2007, pp. 211-214,
http://cdsweb.cern.ch/record/1069713/files/p211.pdf
[10] G. Antchev et al. A VME-Based Readout System for the CMS Preshower Sub-Detector.
IEEE Trans. Nuclear Science, V.54, Issue 3, Part 2, June 2007, pp. 623-628,
http://ieeexplore.ieee.org/stamp/stamp.jsp?arnumber=04237384
[11] S. Reynaud, P. Vichoudis. A multi-channel optical plug-in module for gigabit data
reception. 12th Workshop on electronics for LHC experiments, Sep. 2006,
http://cdsweb.cern.ch/record/1027469/files/p229.pdf
[12] FRL - Fed Readout Link, http://cdsweb.cern.ch/record/594312/files/p274.pdf
[13] P. Vichoudis et al. A flexible stand-alone test bench for facilitating system tests of the CMS
Preshower. 10th Workshop on electronics for LHC and other experiments, 2004,
http://cdsweb.cern.ch/record/814074/files/p127.pdf
[14] J. Troska et al. Implementation of the timing, trigger and control system of the CMS
experiment. IEEE TNS 53, 2006, pp. 834-837,
http://ieeexplore.ieee.org/stamp/stamp.jsp?arnumber=01644949
[15] The TOTEM Collaboration (G. Antchev et al.). Proton-proton elastic scattering at the LHC energy
of 7 TeV, http://cdsweb.cern.ch/record/1361010/files/CERN-PH-EP-2011-101.pdf

Novel detector systems for nuclear research

D.L. Balabanski
Institute for Nuclear Research and Nuclear Energy,
Bulgarian Academy of Sciences, Sofia, Bulgaria

Multi-detector Ge arrays with γ-ray tracking and imaging capabilities, such as AGATA and the
DESPEC spectrometers, which are under construction or in the design phase, are discussed. The response of the
novel LaBr3:Ce scintillators in in-beam experiments is reported, and detector systems using such scintillators,
which are under construction, such as the PARIS spectrometer, are discussed.
Introduction
A key theme in modern nuclear physics is the question: what are the forces which act between the
nucleons in a nucleus, and which arrangements does the nucleus take as a result of the interplay
between them?
In more detail, some of the key issues which will need to be resolved include:
how does nuclear shell structure evolve in exotic neutron-rich nuclei?
how do collectivity and pairing correlations change in neutron-rich nuclei?
what is the structure of the heaviest nuclear systems?
what are the limits of the high-spin domain?
To understand the properties of a nucleus it is not sufficient just to establish the
interactions between its components, it is also necessary to determine the arrangements of the
nucleons, i.e. the structure of the nucleus. So far our knowledge about the structure of the
nuclei has been mostly limited to nuclei close to the valley of stability, or nuclei with a
deficiency of neutrons, which can be produced in fusion-evaporation reactions with stable
beams and stable targets.
To address the basic questions posed above it is necessary to expand nuclear research
to exotic nuclei situated far away from the valley of stability. A major part of the recent
progress in our understanding of the structure of the atomic nuclei is related to the synthesis
and study of new species far away from the β-stability line, the development of new
experimental methods to study them, as well as the development of new, improved theoretical
models of the nucleus.
The accelerators for radioactive ion beams (RIBs), such as the existing REX-ISOLDE
[1], GANIL/SPIRAL [2] and GSI [3] facilities, those under construction such as FAIR [4],
SPIRAL2 [2], and the upgrade of REX-ISOLDE, called HIE-ISOLDE [5], as well as the
future facilities such as EURISOL [6], open the possibility to study the structure of nuclei
with a large excess of neutrons and give access to a whole new range of experiments on exotic
nuclei (which have very different proton-to-neutron ratios compared to stable and near-stable
nuclei). In most of the cases these nuclei will be produced in limited quantities and the
experiments will be carried out in a high-background environment, which is related to the
production process.
The progress in understanding the structure of nuclei is closely related to the advances
of γ-ray detection techniques. The history of in-beam γ-ray spectroscopy is summarized in
Fig. 1. It starts with the first in-beam experiments of Morinaga and Gugelot [7], using NaI(Tl)
scintillation detectors, where they measured the ground-state rotational band in 162Dy as a
direct proof of the collective model of Bohr and Mottelson [8]. The real breakthrough in γ-ray
spectroscopy started with the construction of multidetector arrays of high-purity Ge (HPGe)
detectors. The first generation of such arrays, such as the German spectrometer OSIRIS, the
UK array TESSA or the US array HERA, consisted of 10-20 HPGe detectors and became
operational in the 1980s. These were soon followed by second and third
generation arrays (for a review see Ref. [9]). In the second generation arrays the number of
detectors increased, reaching 250 in the EUROBALL spectrometer [10], and in the third
detectors increased, reaching 250 in the EUROBALL spectrometer [10], and in the third
generation the detectors became segmented and were used in a close geometry as in the case
of the RISING array [11], in order to increase the solid angle coverage of the array.
The development of a fourth generation γ-ray detection system capable of tracking the
location of the energy deposited at every gamma-ray interaction point in a detector is a major
advance of detector technology which will provide an unparalleled level of detection
sensitivity, and will open new avenues for nuclear structure studies. Such instruments are
needed to address the physics questions listed above and to fully exploit the scientific
opportunities at existing and future facilities. See Ref. [12] for the physics opportunities with
such an array.

Fig. 1. Sensitivity of γ-ray spectrometers measured by the fraction of the reaction channel that
can be observed as a function of nuclear spin for some selected nuclear structure phenomena.
The associated timeline and arrays are indicated.
A European collaboration has been established to construct a 4π γ-ray tracking
spectrometer, called AGATA (Advanced GAmma-ray Tracking Array) [13]. The array will
consist of 180 highly-segmented HPGe detectors and will detect γ-rays, tracking their
Compton scattering between different segments of the Ge detectors.
The development of γ-ray detection technologies will continue beyond the γ-ray tracking
implemented in AGATA [13], in the direction of γ-ray imaging techniques. An example of such
a detector system is the DESPEC spectrometer [14], which is in the design phase within the
FAIR project [4].

In addition to the advances in semiconductor detector technologies, novel types of
scintillators have been developed in recent years. The LaBr3:Ce crystals [15] attract special
attention because they combine fast timing properties with a very good energy response.
The AGATA spectrometer
AGATA, the Advanced GAmma Tracking Array [13, 16-19], is a 4π detector consisting of
180 HPGe detectors. Each detector crystal is electrically segmented into 36 segments, giving a total of
more than 6600 electronics channels. The detectors will be assembled into 60 triple cryostats. For
each detector, pulse shape analysis will be performed in order to determine the interaction
positions of the gamma radiation within the crystal to an accuracy of better than 2 mm. Using
the energy and interaction position information, tracking algorithms will reconstruct the paths of
the γ-rays through the detectors. The crystals have a length of 90 mm and a hexaconical shape based
on an 80 mm diameter cylinder. In the complete geometry the inner radius of the sphere will be
23.5 cm and the Ge detectors will cover 82% of 4π. The energy resolution of individual detector
segments is 0.9 to 1.1 keV at 60 keV and 1.9 to 2.1 keV at 1.33 MeV, respectively,
and the cross talk between segments is below 10^-3.


Fig. 2. A computer design image (left) of the AGATA demonstrator with 15 detectors (five
triple modules) and its photograph during operations in Legnaro National Laboratories (right).

In the first phase of the project a system consisting of five triple cryostats containing
15 detectors was built, see Fig. 2. It is called the AGATA demonstrator and was commissioned
at the INFN Legnaro National Laboratories in 2009 and started operations in 2010.
The number of detectors in the array will be increased continually towards the complete array,
which is expected to be available around 2018. AGATA will be operated in a series
of experimental campaigns at accelerator facilities in Europe, moving in 2012 to GSI and in
2013 to GANIL. It will be combined with many different ancillary detectors to study specific
nuclear properties.



Table 1. The predicted performance of the AGATA spectrometer
Multiplicity of γ-rays 1 10 20 30
Efficiency (%) 43.3 33.9 30.5 28.1
Peak to Total ratio (%) 58.2 52.9 50.9 49.1


The expected performance of AGATA (estimated for a stationary point-like source
placed in the centre, for a 1.33 MeV γ-ray) has been simulated using a code based on Geant4
[20,21]. The γ-ray tracking was performed by the program developed by Lopez-Martens et al.
[22]. The predicted efficiency of the array and peak-to-total (P/T) ratio for various γ-ray
multiplicities are given in Table 1.
Such performance for gamma-ray spectroscopy is unprecedented. The high
efficiency and P/T ratios will allow studies of processes with low cross sections; for high
multiplicity, weakly populated gamma-ray cascades, as met in high-spin studies, the
sensitivity of AGATA (and of the competing GRETA spectrometer, which is being built in the
USA) will be several orders of magnitude higher than that of existing gamma-ray arrays, as
illustrated in Fig. 1.
The new technique of γ-ray tracking involves accurately measuring the position and
energy of all the γ-ray interaction points in the detector segments. The position of the first
interaction defines the angle of emission of the γ-ray from the source relative to the detector,
and is particularly important when detecting radiation emitted by a nucleus recoiling after a
reaction, since it determines the extent of the energy spread arising from the Doppler shift. The
angular definition in AGATA, compared with the previous generations of 4π spectrometers
which consisted of single-crystal detectors, such as the Gammasphere or EUROBALL
spectrometers (see Fig. 1), will result in an order of magnitude improvement in spectral
response. Since most of the γ-rays interact more than once within the crystal, the energy and
angle relationship of the Compton scattering formula is used to track the path of a given γ-ray.
The full energy can then be retrieved by summing all the individual deposited energies for this
γ-ray. As a result, very high efficiency can be obtained in such a 4π spectrometer since
there are minimal dead areas.
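The energy-angle relation exploited by the tracking algorithms is the standard Compton scattering formula; the small Python helper below evaluates it for a candidate interaction and is only a didactic sketch, not the actual AGATA tracking code.

    # Compton relation used to test candidate interaction sequences (didactic sketch).
    import math

    ME_C2_KEV = 510.999     # electron rest energy

    def compton_cos_theta(e_incident_kev, e_deposited_kev):
        """Cosine of the scattering angle implied by a deposit e_deposited of a
        photon of energy e_incident; values outside [-1, 1] mark an unphysical
        (hence rejected) interaction sequence."""
        e_scattered = e_incident_kev - e_deposited_kev
        return 1.0 - ME_C2_KEV * (1.0 / e_scattered - 1.0 / e_incident_kev)

    # Example: a 1332.5 keV photon depositing 500 keV in its first interaction.
    print(compton_cos_theta(1332.5, 500.0))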
The realisation of such a system requires the development of highly segmented
germanium detectors, digital electronics, pulse shape analysis to extract energy, time and
position information and tracking algorithms to reconstruct the full interaction. In order to
achieve γ-ray tracking, the germanium detector technologies were pushed beyond the existing
state-of-the-art. The milestones in the research and development phase of AGATA were:
development of a highly-segmented encapsulated HPGe detector,
development of a cryostat to hold a cluster of segmented detectors,
design and development of digital electronics,
development of algorithms for energy, time and position reconstruction,
development of tracking algorithms,
design and manufacture of associated infrastructure,
construction and test of a demonstrator array.
AGATA requires purpose-built digital electronics and an associated data acquisition
system to process the signals from the Ge detectors. The full system has to cope with over
6000 channels, with each detector possibly running at rates of up to 50 kHz. The segmented detectors
provide 37 signals (36 outer contacts and the inner core) from the FET/preamplifiers. The
electronics principle of AGATA is to sample these outputs with fast ADCs, preserving the full
signal information in a clean environment so that accurate energy, time and position can be
extracted. The first stage of the electronics will be a digitiser card, located close to the
detector. The digitiser contains 100 MHz 14-bit ADCs to digitise the signal, and the
information is then transmitted via an optical link to a remote pre-processing card. Such a card
performs digital signal processing that is local to a particular detector, such as energy and
time determination.
The pre-processing cards transmit their outputs to the pulse processing part of the
system, which is envisaged to be a farm of computers. This farm assembles the full data from all
elements of the array, uses PSA algorithms to determine the positions of the interactions,
performs tracking to reconstruct the events and assembles the resulting data for storage. The
whole system shares a global time reference (clock) supplied by a global trigger and
synchronisation control system, which is distributed by a network of optical fibres to the
front-end electronics of each crystal.
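The amount of local processing required becomes obvious from the raw data rates involved; the sketch below simply multiplies out the figures quoted in the text (37 signals per crystal sampled by 100 MHz, 14-bit ADCs) and is only an order-of-magnitude estimate.

    # Order-of-magnitude estimate of the raw digitised data rate of one AGATA crystal.
    signals_per_crystal = 37        # 36 segments + core
    sample_rate_hz = 100e6          # 100 MHz ADCs
    bits_per_sample = 14

    raw_bps = signals_per_crystal * sample_rate_hz * bits_per_sample
    print(raw_bps / 1e9, "Gbit/s per crystal")   # ~52 Gbit/s, which is why energy and time
                                                 # are extracted locally before the data are
                                                 # shipped to the computer farm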
AGATA is based on a radically new technology and will constitute a dramatic advance
in γ-ray detection sensitivity that will enable the discovery of new phenomena, which are only
populated in a tiny fraction of the total reaction cross section or of nuclei that are only
produced with rates of the order of a few per second or less. Its unprecedented angular
resolution will facilitate high-resolution spectroscopy with fast fragmentation beams giving
access to the detailed structure of the most exotic nuclei that can be reached. Finally, the
capability to operate at much higher event rates will allow the array to be operated for
reactions with intense γ-ray backgrounds, which will be essential for the study of, for
example, transuranic nuclei.
The DESPEC spectrometer
The FAIR-NUSTAR facility [4] will provide beams of radioactive ions with
unprecedented intensities with the aim to study the atomic nucleus. The project focuses on
those aspects of nuclear investigations with rare isotope beams which can be uniquely
addressed with high-resolution setups.
The experiments will provide information on the force acting between the nucleons
inside the nucleus, with special emphasis on systems with exotic proton-to-neutron ratios:
both proton-rich and neutron-rich nuclei. In extreme neutron-rich nuclei radical changes in
their structure are expected with the possible disappearance of the classical shell gaps and
magic numbers and the appearance of new ones. The HISPEC/DESPEC experiment [14] of
the FAIR project addresses these questions using radioactive beams delivered by the
energy buncher of the Low Energy Branch (LEB) of the Super Fragment Separator with
energies of 3-150 MeV/u for reaction studies or stopped and implanted beam species for
decay studies.
Decay studies lie at the very frontier of research on exotic nuclei: once the
existence of an isotope has been demonstrated, the next elementary information we seek is how
it decays; even the half-life of a new isotope can tell us a lot about the allowed or forbidden
character of the decay. At the same time, decay spectroscopy often provides primary information
on excited states of nuclei far from stability. A very important aspect of the DESPEC experiment [14]
is the possibility to study the decay properties of isomeric levels in nuclei which survive the
flight time from the moment of production until their arrival at the set-up.
All of the experiments anticipated with the DESPEC detectors involve deep
implantation of the ions in an active stopper prior to the decay. The detector will be highly
pixellated, which allows us to correlate in time and space the signal of the initial pulse from
implantation of the heavy ion with the signal produced in the same detector in the subsequent
beta decay. Neutron and high-resolution γ-ray detectors in a compact arrangement around the
active stopper, in a highly flexible and modular geometry, will be at the heart of this set-up.
One of the proposed germanium detector systems for the upcoming DESPEC array at
the FAIR facility consists of triple modules of electrically segmented planar high-purity
germanium detectors. There, the position sensitivity will be obtained by means of pulse shape
analysis (PSA) of the γ-ray interactions. The possible segmentation patterns for such
detectors are a double-sided strip detector (DSSD) and a detector with a one-sided pixelated
geometry. The number of readout channels considered in both cases is similar:
either 8 + 8 strips for the DSSD, or 16 pixels. It has been found that the higher physical
granularity of the DSSD results in a significantly higher position resolution, as well as in a
somewhat lower probability of merging multiple interaction points [23]. The results of these
simulations were compared to measurements with an existing 25-pixel planar detector [24].
In a competing project at ANL, USA (see e.g. [25]), a spatial resolution of about 800 µm
was achieved using a 92 x 92 x 20 mm Ge DSSD detector with 16 + 16 strips and a strip width
of 5.3 mm.
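The advantage of the strip geometry at equal channel count follows from a simple counting argument; the sketch below only reproduces the combinatorics, since the real position resolution also depends on the PSA performance discussed above.

    # Effective 2-D granularity of the two segmentation options with ~16 readout channels.
    strips_per_side = 8
    dssd_channels = 2 * strips_per_side          # 8 + 8 = 16 channels
    dssd_pseudo_pixels = strips_per_side ** 2    # 64 X-Y crossings

    pixel_channels = 16
    pixel_elements = 16                          # one element per channel

    print(dssd_pseudo_pixels / pixel_elements)   # factor 4 more effective elements
                                                 # for the same number of channels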
The ultimate goal of these projects is the development of Ge DSSD detectors for γ-ray
imaging. This is required by the specific conditions of the future FAIR experiments, but such a
detector would be enormously powerful for many applications. It would be a standalone
Compton Camera, and would allow excellent imaging and tracking. It would have good
position sensitivity with analogue electronics, and exceptional possibilities with digital
electronics. Large area detectors are important for non-nuclear tracking applications, such as
medical positron emission tomography (PET) scanners or Compton Cameras for national
security and for environmental cleanup.
Within the HISPEC/DESPEC project at FAIR a prototype of a planar detector is being
ordered. The Bulgarian group is developing a highly compact cryostat to handle and precisely
position three segmented DSSD Ge detectors, which will be ready in the fall of 2011. The
cryostat will be cooled by a combined Peltier and electric cooling system. Peltier cooling of the
FET/preamplifiers down to -20 °C is foreseen. Each preamplifier consumes up to 150-
200 mW. A schematic sketch of the detector arrangement is shown in Fig. 3. The Ge DSSD
detectors, with dimensions 72 x 72 x 20 mm, are positioned less than 15 mm from each
other. They will be cooled down to below -180 °C by a 3 W electric cooler. At a later stage of
the project the high-voltage power supplies of the detectors and the low-voltage power supplies of
the preamps will be integrated with the cryostat.
Each of the DSSD detectors provides 16 signals from the FET/preamplifiers. The
electronics architecture of the DESPEC array is similar to that of AGATA and is based on
sampling of the signal outputs with fast ADCs to preserve the full signal information in a
clean environment so that accurate energy, time and position can be extracted. The first stage
of the electronics will be a digitiser card, located close to the detector. The digitiser contains
100 MHz 14-bit ADCs to digitise the signal, and then the information is transmitted via an
optical link to a pre-processing card. This card performs digital signal processing that is local
to a particular detector such as energy determination and time. A first prototype of a DESPEC
DSSD Ge detector has been delivered by EG&G Ortec and will be tested in early 2012.




Fig. 3. Schematic layout of a system of three DSSD Ge detectors. The working conditions for
the detector system are listed in the figure.
Novel scintillation detectors
BrilLanCe-350 and BrilLanCe-380, Saint-Gobain Crystals trade names [15] for
LaCl3:Ce and LaBr3:Ce, are being brought to market under exclusive license to the Delft and Bern
Universities. These scintillators are bright (60,000 photons/MeV for LaBr3:Ce) and have a very
linear response, a combination that leads to very good energy resolution (< 3% at 662 keV
and about 6% at 122 keV for LaBr3:Ce). The materials also have fast scintillation decay times
(< 30 ns), which supports counting applications at very high rate. These fast light output
properties also lead to very fast timing (< 300 ps coincidence resolving time on 30 mm long
pieces of LaBr3:Ce). These properties are retained at high temperature with only moderate
light loss (< 10%) at 175 °C in both materials.
The excellent properties of the new scintillation materials open the way for a broad
range of applications in the nuclear sciences. The INRNE nuclear structure group has been involved in
a number of studies with detectors based on the novel BrilLanCe-380 crystals.
Fast-timing measurements with LaBr3:Ce detectors: The measurement of lifetimes
of excited nuclear states is one of the most important topics in nuclear spectroscopy, because
these quantities are the essential ingredient in the determination of the reduced
electromagnetic transition probabilities, quantities that are rather sensitive to details of the
intrinsic structure of these states. The electronic method, based on the direct measurement of
the time decay spectrum of a certain state, relies on the use of fast detectors (with good timing
properties) and appropriate electronics. It has continuously benefited from the development of new
scintillators and is currently applicable to lifetimes down to a few picoseconds. Thus, based
on the use of BaF2 crystals, the capabilities of the delayed coincidence method were
pushed into the low-picosecond range. A variant of the method, invented for use in β-decay
studies, utilizes triple coincidence measurements and is suitable especially for the
investigation of neutron-rich nuclei, where it has been applied in many cases (Refs. [26-28] and
references therein). In this case one of the γ-ray detectors is an HPGe detector, and its good
energy resolution is used to gate so as to select the desired decay branch. The other two
detectors are the timing detectors: a thin scintillator for the β-rays, and a BaF2 scintillator for
the γ-rays.







Fig. 4. Time spectra for a pair of γ-ray transitions feeding and de-exciting the 3/2+ state (top
graph), and a pair of cascade γ-rays (bottom graph). The two time distributions in each case
were obtained by gating on the two transitions as start and stop in both possible ways. The
bottom spectrum corresponds to a prompt coincidence, while the top spectrum shows a
clear shift of the centroid, corresponding to the lifetime of the 367 keV 3/2+ level.
The γ-ray spectroscopy of fusion-evaporation reactions provides opportunities to
investigate many more nuclear states. However, a similar method for delayed
coincidence measurements, in which one of the detectors is an HPGe and the other two are
scintillators (with good timing properties), could not be used due to the large number of γ-rays
in the spectra and the poor energy resolution of the BaF2 detectors. The discovery of new
scintillators, such as LaBr3:Ce, is about to change this situation. In our experiments triple
coincidences are measured with an array containing both HPGe and LaBr3:Ce detectors. The
high energy resolution of the Ge detectors is used to select the desired γ-ray cascade, and the
array of fast LaBr3:Ce detectors to build the delayed coincidence time spectra for selected levels.
The method is illustrated by an in-beam lifetime measurement in 199Tl, following the
197Au(α,2n) reaction [29]. It was performed with beams delivered by the Tandem van de
Graaff accelerator at IFIN-HH, Bucharest. A 5 µm thick Au foil was bombarded by a 24 MeV
α-particle beam with an intensity of around 10 nA. The detection of the γ-rays was performed with
7 HPGe detectors, each with an efficiency of around 50%, five of them placed at 143° and the
other two at 90° with respect to the beam, and five LaBr3:Ce detectors placed at 45° with
respect to the beam. The LaBr3:Ce detectors, delivered by Saint-Gobain Crystals, had crystals
of 2 x 2 (one), 1.5 x 1.5 (two), and 1 x 1 (two), and XP5500B02 photomultipliers.
Time spectra obtained by gating on a pair of gamma rays feeding and de-exciting the
3/2+ 367 keV state in 199Tl and on a pair of fast cascade transitions in 199Tl are shown in Fig. 4. In
each of the cases two time spectra are presented, one obtained by gating on the γ-ray above
the level as the start transition and on the γ-ray below the level as the stop transition, and the
second with inverted roles for the two transitions. The difference between the centroids of the
two peaks is twice the lifetime of the 367 keV 3/2+ state, resulting in a value T1/2 = 47 ± 3 ps.
This measurement shows that with such a setup with 5 scintillator detectors one can easily
measure lifetimes of the order of a few tens of picoseconds.
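A minimal version of the centroid-shift analysis described here is sketched below; the histogram format (lists of bin centres and counts) is an assumption made for illustration, and the factor of two follows the relation quoted in the text, where the centroid difference of the two gated time spectra equals twice the lifetime of the intermediate level.

    # Minimal centroid-shift analysis (assumed histogram format: bin centres + counts).

    def centroid(bin_centres, counts):
        total = sum(counts)
        return sum(t * n for t, n in zip(bin_centres, counts)) / total

    def level_lifetime(bins, counts_start_above, counts_start_below):
        """Half of the centroid difference of the two time spectra obtained by
        swapping the roles of the feeding and de-exciting transitions (see text)."""
        delta_c = abs(centroid(bins, counts_start_above) - centroid(bins, counts_start_below))
        return 0.5 * delta_c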

High-energy measurements with LaBr3:Ce detectors: Test measurements
concerning detector efficiency and energy resolution were performed at the Institute of
Nuclear Research of the Hungarian Academy of Sciences (ATOMKI) [30]. Low energy
γ-rays were studied using 60Co and 152Eu isotope sources, while the high-energy region was
covered by γ-rays emitted in (p,γ) reactions. Protons were accelerated to the resonance
energies (from 441 keV up to 1416.1 keV) by a 5 MV Van de Graaff accelerator and impinged
on different thin evaporated targets: Al, Na2WO4, K2SO4 and LiBO2. The produced γ-rays
had energies from 1.4 MeV up to 17.6 MeV. In order to obtain the internal efficiency of
the detector, the full absorption efficiency of each detector was divided by its solid angle. Due
to the high atomic number, Z, of La, the internal efficiency of the 2 x 2 LaBr3:Ce detector was
found to be 22.6(7)% at 1173 keV, 1.3(2)% at 11.6 MeV and 0.65(12)% at 17.6 MeV
[30]. The measured internal efficiencies are presented in Fig. 5 as a function of
γ-ray energy. The achieved relative energy resolution was found to be the best among
scintillation detectors: for low energy γ-rays (1.3 MeV) it is equal to 2.1%, improving to
1% at 10 MeV and to about 0.7% at 17.6 MeV [30].
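The internal-efficiency definition used here (full-absorption efficiency divided by the geometrical solid-angle fraction) can be written out in a short sketch; the on-axis disc approximation for the solid angle and the example numbers are assumptions for illustration only, not the actual ATOMKI geometry.

    # Internal efficiency = full-absorption efficiency / solid-angle fraction (sketch).
    import math

    def solid_angle_fraction(radius_cm, distance_cm):
        """Fraction of 4*pi subtended by a circular detector face on axis
        (simple disc approximation, assumed geometry)."""
        cos_alpha = distance_cm / math.hypot(distance_cm, radius_cm)
        return 0.5 * (1.0 - cos_alpha)

    def internal_efficiency(full_absorption_eff, radius_cm, distance_cm):
        return full_absorption_eff / solid_angle_fraction(radius_cm, distance_cm)

    # Example with illustrative numbers (not the actual set-up):
    print(internal_efficiency(2.0e-3, radius_cm=2.54, distance_cm=10.0))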

Fig. 5. Internal full absorption efficiency for γ-rays in a LaBr3:Ce 2 x 2 scintillation detector,
in comparison to GEANT4 simulations.
The PARIS spectrometer: The PARIS (acronym for Photon Array for studies with
Radioactive Ion and Stable beams) project [31] is an initiative to develop and build an
innovative high-efficiency γ-calorimeter, principally for use at SPIRAL2 [2]. It is intended to
comprise a double shell of scintillators and to use the novel scintillator material LaBr3:Ce, which
promises a step change in energy and time resolution over what is achievable using
conventional scintillators. The array could be used in a stand-alone mode, in conjunction with
an inner particle detection system, or with high-purity germanium arrays. The PARIS array
will play the role of an energy-spin spectrometer, a calorimeter for high-energy photons and a
medium-resolution detector.
The PARIS project draws on a wide section of the nuclear physics community with a
broad range of physics interests. The primary goal is to use PARIS at the SPIRAL2 facility to
study the properties of hot rotating exotic nuclei produced in fusion-evaporation reactions
by means of the γ-decay of the GDR. In addition, the installation of the array at the secondary
target position of the Super Separator Spectrometer (S3) [32], profiting from the future
LINAG beams of SPIRAL2, is expected to be very promising for γ-ray spectroscopy.

Conclusions
A number of cutting-edge projects in the field of γ-ray spectroscopy were discussed,
such as γ-ray tracking and imaging detectors. The feasibility of in-beam lifetime measurements
by the fast timing method, using triple coincidences in a mixed array of HPGe and
LaBr3:Ce detectors, was demonstrated. The possibility to utilize the LaBr3:Ce detectors for
studies of hot nuclei was also discussed, and their high efficiency for the detection of high-energy
γ-rays was demonstrated. All these are benchmark projects which define the level of
technological development in the field of modern radiation detection and measurement.
Acknowledgements: This work is supported by the Bulgarian National Science Fund, grants
DID-02/16, DRNF-02/5 and BRS-03/07.
References
[1] http://isolde.web.cern.ch/ISOLDE/REX-ISOLDE/index.html
[2] http://www.ganil-spiral2.eu/
[3] http://www.gsi.de/portrait/index.html
[4] http://www.gsi.de/fair/index.html
[5] http://hie-isolde.web.cern.ch/hie-isolde/HIE-ISOLDE_site/Welcome.html
[6] Y. Blumenfeld et al. Nuclear Physics News 19, No. 1, 2009, pp. 22-27.
[7] H. Morinaga and P.C. Gugelot. Nucl. Phys. 46, 210 (1963).
[8] A. Bohr, B.R. Mottelson and J. Rainwater. Nobel Lectures, Physics 1971-1980, Ed. S. Lundqvist
(World Scientific, Singapore, 1992).
[9] P.J. Nolan, F.A. Beck and D.B. Fossan. Ann. Rev. Nucl. Part. Sci. 47, 561 (1994).
[10] Achievements with the EUROBALL Spectrometer 1997-2003, eds. S. Lunardi and W. Korten,
http://euroball.lnl.infn.it
[11] H.J. Wollersheim et al. Nucl. Instr. Meth. A537, 2005, p. 637.
[12] AGATA Physics Case, ed. by. D.L. Balabanski and D. Bucurescu, 2008, http://www-
win.gsi.de/agata/Publications/apc-5r-F.pdf
[13] AGATA Technical Design Report, ed. by J. Simpson, J. Nyberg, W. Korten, 2008, http://www-
win.gsi.de/agata/Publications/TDR_EUJRA.pdf
[14] http://www.gsi.de/fair/experiments/NUSTAR/hispec_e.html
[15] http://www.detectors.saint-gobain.com/Brillance380.aspx
[16] AGATA Technical Proposal, ed. by J. Gerl and W. Korten, 2001, www-w2k.gsi.de/agata and
http://www.agata.org/reports/AGATA-Technical-Proposal.pdf
[17] J. Simpson and R. Krücken. Nuclear Physics News 13 No. 4, 2003, p. 15.
[18] J. Simpson. J. Phys. G: Nucl. Phys. 31, S1801, 2005.
[19] J. Eberth and J. Simpson. Prog. Part. Nucl. Phys. 60, 283, 2008.
[20] E. Farnea et al. LNL Annual Report 2003, LNL-INFN (REP)-202/2004, 160.
[21] E. Farnea, http://agata.pd.infn.it/documents/simulations/comparison.html
[22] A. Lopez-Martens et al. Nucl. Inst. Meth. A533, 2004, p. 454.
[23] A. Khaplanov, B. Cederwall, S. Tashenov. Nucl. Inst. Meth. A592, 2008, p. 325.
[24] A. Khaplanov et al. Nucl. Instr. Meth. A580, 2007, p. 1075; L. Milechina, B. Cederwall. Nucl. Instr.
Meth. A550, 2005, p. 278.
[25] A National Plan for Development of Gamma-Ray Tracking Detectors in Nuclear Science, by The
Gamma-Ray Tracking Coordinating Committee, http://www.pas.rochester.edu/~cline/grtcc/GRTCC-
report.pdf
[26] H. Mach, R.L. Gill and M. Moszyński. Nucl. Instr. Meth. Phys. Res. A280, 1989, p. 49.
[27] M. Moszyński and H. Mach. Nucl. Instr. Meth. Phys. Res. A277, 1989, p. 407.
[28] H. Mach et al. Nucl. Phys. A523, 1991, p. 197.
[29] N. Mărginean et al. Eur. Phys. J. A 46, 2010, p. 329.
[30] M. Ciemała et al. Nucl. Instr. Meth. A608, 2009, p. 76.
[31] http://paris.ifj.edu.pl
[32] A. Drouart et al. Nucl. Inst. Meth. B266, 2008, p. 4162.
Data handling and processing for the ATLAS experiment

D. Barberis
On behalf of the ATLAS Collaboration
University of Genova and INFN, Sezione di Genova, Italy

The ATLAS experiment has been taking data steadily since Autumn 2009, collecting so far over 2.5 fb^-1 of
data (several petabytes of raw and reconstructed data per year of data-taking). Data are calibrated, reconstructed,
distributed and analysed at over 100 different sites using the World-wide LHC Computing Grid and the tools
produced by the ATLAS Distributed Computing project. This paper reports on the experience of setting up and
operating this distributed computing infrastructure with real data and in real time, on the evolution of the
computing model driven by this experience, and on the system performance during the first two years of
operation.
Introduction
The four main High-Energy Physics (HEP) experiments using the Large Hadron
Collider (LHC) particle beams at CERN were built and are now operated by large
international collaborations, each consisting of several thousand physicists and engineers
belonging to hundreds of different institutions located in all continents. Each experiment
produces several petabytes of data each year that must be processed, distributed and analysed
by all collaboration members. Many larger and smaller computing centres associated with the
participating institutions contribute to this global effort by providing processing power and
data storage space. The technology of choice when this world-wide distributed computing
system started being designed in the late 1990s is based on the Grid computing paradigm [1].
Over the last ten years a large infrastructure, with a lower layer constituted by
common Grid middleware and an upper layer consisting of the specific experiment
application frameworks, has been progressively deployed and is now in full operation, under
the general co-ordination of the WLCG (World-wide LHC Computing Grid) collaboration.
CERN and all Tier-1 and Tier-2 sites, as well as the four LHC experiments, have been part of
the WLCG Collaboration [2] since 2005. WLCG co-ordinates the deployment of new middleware
versions and provides a number of central services to sites and experiments (central operation
database with site downtimes, accounting service, ticketing etc.). It is a major communication
link between the sites and their users (the experiments) and an essential component of the
operation infrastructure.
Distributed computing is essential for all LHC experiments and particularly for
ATLAS, the largest multi-purpose experiment [3]. No institution within the collaboration can
afford to fund and host the enormous computing infrastructure that is necessary to store and
process all experimental data. The experience of the previous generation of HEP experiments,
which used the LEP accelerator at CERN and the Tevatron accelerator at FNAL (Chicago)
and were more than one order of magnitude smaller, led the LHC experiments to design
upfront a distributed computing system, able to exploit optimally all available resources,
independently of their geographical location. The Grid computing paradigm was adopted by
the LHC community as the initial idea looked rather simple and elegant: each site provides
common interfaces to its local batch system (the Computing Element, CE) and data storage
system (the Storage Element, SE), and publishes its properties through a common Information
System. A global authentication and authorisation framework guarantees the identity of the
submitter of workload and his privileges (or lack thereof) on each site.
System design and computing model
The initial implementations of Grid middleware were distributed by the VDT project
in the USA [4], the NorduGrid organization in the Scandinavian countries [5] and the
DataGrid project in the rest of Europe and elsewhere [6] in 2002-2003. The LHC experiments
used them to run the first tests of their distributed computing systems (the so-called "Data
Challenges"), submitting several thousand jobs simulating physics processes and the detector
response to the passage of elementary particles and collecting the outputs back. It was evident
that, although the available infrastructure was basically working, there were two major
problems to be addressed before opening the access to the Grid to all collaboration members:
the system was not robust against site downtimes (due to scheduled or unscheduled
maintenance periods, hardware upgrades, network outages, hardware failures etc.) and there
was no reliable data and job brokering tool. All experiments started designing and
implementing their specialised layer of tools to interface to the Grid middleware; these tools
now complement the Grid information system with experiment-specific information on site
topology (only for the sites supporting the given experiment), provide a high-level data
placement and management infrastructure including data catalogues and replica location
information, and provide global job submission tools for scheduled data processing activities and for
user analysis. Fig. 1 shows the layered structure of the middleware stack, with the Grid
middleware installed at each site at the bottom and the experiment specific applications at the top.
The Grid paradigm was modified for HEP from a compute-intensive system to a data-
intensive one. Experimental data are precious, as the cost of building and operating each LHC
experiment approaches one billion Swiss Francs, and data storage facilities (disks) are
comparatively more expensive than data processing units (CPUs). Data storage also needs to
be separated between archival storage (on tape) and online data (on disk). For reasons of
robustness and also as a safeguard against data corruption, at least two copies of the same data
must be kept on disk at different locations, and one on tape (two copies on tape for the "raw"
data produced directly by the experiment).

Fig. 1. Layered structure of the Grid middleware and applications used by HEP experiments

Computing centres at large national HEP laboratories ("Tier-1" sites), 10 in total for
ATLAS, provide archival facilities on tape, several petabytes of disk space, and several
thousand job processing slots. Smaller computing facilities, usually placed at universities or
their physics departments, provide disk space and processing facilities of differing size; there
are over 100 such "Tier-2" sites, most of which are dedicated to a single experiment (70 sites
for ATLAS).
Local, batch and interactive, facilities ("Tier-3") are used for the final data analysis,
usually consisting of preparing histograms and fitting functions from which the final data to be
published are extracted. The CERN laboratory is the source of all "raw" data and its computer
centre ("Tier-0") holds a copy of all produced data and runs calibration, alignment and data
reconstruction procedures in real time, before distributing the data to the Tier-1 sites. Selected
data that are needed for specific physics analyses are then further distributed to Tier-2 sites.
The initial versions of the computing model in 1999-2000 [7] were strongly
constrained by the availability of network bandwidth and therefore strictly hierarchic: data
could be exchanged between the Tier-0 and the Tier-1s, and between each Tier-1 and the
group of associated Tier-2s. The associations between Tier-1s and Tier-2s were defined
according to the best network connectivity. Since then, multi-gigabit networks have become available
to the research community and an optical private network (LHCOPN [8]) was set up to
connect the Tier-0 and all Tier-1s with bandwidths of at least 10 Gbps in all directions. All
Tier-2 sites are connected with a bandwidth of at least 1 Gbps to the nearest Tier-1 site; the
largest of them have already upgraded their connectivity to 10 Gbps. An extension of the
LHCOPN network to include the larger Tier-2 sites is under deployment this year.
Building blocks of the distributed computing infrastructure
Grid middleware
Grid middleware includes all software components that are needed to provide remote
and secure access to the computing resources. Several suites have been developed and
deployed over the last ten years, all implementing server-client architectures. The servers run
on each site and act as interfaces between the common Grid protocols and the local systems.
The client code is distributed and installed by each user; it contains the commands to
communicate with the servers and get one's tasks executed at the remote site.
The Computing Element (CE) is the interface to submit batch jobs to the local batch
system. It has to pass through all job requests (in terms of memory, maximum time, disk
space etc.) and also provide information on each job status during the processing cycle. In
addition, it must store and provide dynamic information on the status of the local batch
queues, such as the number of running and pending jobs. As mentioned earlier, several
middleware suites are deployed for the use of HEP experiments, each one having specific
interfaces [4-6].
The Storage Element (SE) is the interface to manage data. It has to execute actions on
local files (store, move, delete) as well as transfer files between different locations. All
middleware suites used by HEP experiments implement the SRM (Storage Resource
Manager) interface [9].
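As an illustration of the separation just described (this is not the API of any actual middleware suite; the class and method names below are assumptions made for this sketch), a CE and an SE can be thought of as two abstract contracts that every site implements on top of its local batch and storage systems:

from abc import ABC, abstractmethod

class ComputingElement(ABC):
    """Gateway between common Grid job requests and the local batch system."""

    @abstractmethod
    def submit(self, executable, memory_mb, max_time_s, disk_mb):
        """Forward the job and its requirements to the local batch system;
        return a job identifier."""

    @abstractmethod
    def status(self, job_id):
        """Report the state of one job (queued, running, finished, failed)."""

    @abstractmethod
    def queue_info(self):
        """Dynamic queue information, e.g. {'running': 120, 'pending': 45}."""

class StorageElement(ABC):
    """Gateway for data management operations on the local storage system."""

    @abstractmethod
    def store(self, local_path, remote_path):
        """Copy a local file into the managed storage."""

    @abstractmethod
    def delete(self, remote_path):
        """Remove a file from the managed storage."""

    @abstractmethod
    def transfer(self, remote_path, destination_se):
        """Send a file to another Storage Element."""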
Information system
A well-performing information system is essential for the operation of a distributed
computing system. In addition to static information like the computing resources available at
each site, in real time one has to know the status of each site (up, down, under maintenance
etc.) and of its resources (running and pending jobs, free and used disk space).
Each site publishes its own characteristics and status through the BDII [10]; this information
has to be complemented by experiment-specific information, such as the site topology and its
relation with other sites, therefore each experiment developed its own information layer. The
ATLAS experiment developed and recently deployed AGIS (ATLAS Grid Information
System) [11], an Oracle-based database that is used as the single source of information on site
status; the low-level information is taken from the BDII system and the additional information
is inserted from ATLAS sources.
Authentication and authorization
The analysis models of the LHC experiments are based on the concept that the data at
all remote sites are available to all experiment members. Users can access data and computing
resources with an X.509 personal certificate [12], avoiding the need for remote logins and
allowing a fine-grained allocation and management of remote resources.
Data management
The data management frameworks developed by each experiment on top of the basic
Grid infrastructure manage all data movement and keep track of the locations of each dataset.
The scripting language Python [13] is used for data and job management frameworks, with
Oracle [14] or MySQL [15] as database back-ends.
The enormous amount of data generated by LHC (several petabytes per experiment per
year) can be handled only by establishing hierarchical structures and cataloguing all the data [16].
Data files are grouped into datasets, i.e. collections of all files containing statistically equivalent
events in the same format and processing stage. As datasets can typically contain from 100 to
10000 files, in this way the cataloguing problem is reduced by 2 to 4 orders of magnitude. Each
dataset is then created, replicated, moved or deleted as a single unit. Every operation on files or
datasets must be registered in the central catalogues, in order to have at any point in time all
information about data locations, access and popularity. Tools have been developed to analyse
this information and automatically increase the number of replicas of the more popular datasets
and decrease the number of replicas of the least accessed datasets [17].
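To make the gain concrete, the following toy sketch (illustrative only; the class, the dataset name and the fields are not the real ATLAS catalogue schema) registers operations at the dataset level rather than per file, so a dataset of several thousand files produces only a handful of catalogue entries:

from collections import defaultdict

class DatasetCatalogue:
    """Toy catalogue: operations are registered per dataset, not per file."""

    def __init__(self):
        self.files = {}                    # dataset name -> list of file names
        self.replicas = defaultdict(set)   # dataset name -> sites holding a copy
        self.operations = []               # audit trail of dataset-level operations

    def create(self, dataset, file_list, site):
        self.files[dataset] = list(file_list)
        self.replicas[dataset].add(site)
        self.operations.append(("create", dataset, site))

    def replicate(self, dataset, destination_site):
        # One catalogue entry per dataset copy, however many files it contains.
        self.replicas[dataset].add(destination_site)
        self.operations.append(("replicate", dataset, destination_site))

    def delete(self, dataset, site):
        self.replicas[dataset].discard(site)
        self.operations.append(("delete", dataset, site))

catalogue = DatasetCatalogue()
catalogue.create("example.dataset.RAW",
                 ["file_%05d.root" % i for i in range(5000)], "SITE_A")
catalogue.replicate("example.dataset.RAW", "SITE_B")
print(len(catalogue.operations))   # 5000 files, but only 2 catalogue operations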
Two upper-middleware components have been developed by the WLCG
Collaboration and are now available for general usage: the LCG File catalogue (LFC) and the
File Transfer System (FTS) [18]. The LFC is used by ATLAS and LHCb to store the relation
between the logical file names (the file names as known to all experiment data access tools)
and the physical file names on each site, as well as other file metadata like the file size and
checksum. The FTS is used by all experiments to schedule all file transfers between sites in
an optimal way, adapting the number of parallel transfers to the network capacity along each
transfer route.
Job management and brokering
The PanDA job submission framework manages a large number of jobs (several
hundred thousand per day), interacting strongly with the data management tools,
directing the jobs to the sites where they can run fastest, and collecting the outputs at the site
indicated by the job owner [19]. "Production" jobs are submitted centrally to produce
simulated events or reprocess real events with better calibrations or reconstruction code, when
available. "Analysis" jobs are submitted by any ATLAS member who wishes to analyse the
data; most importantly, all data and computing facilities are available to all members of the
experiment, independently of the geographical locations and institute affiliations. The present
generation of job submission frameworks makes use of the concept of "pilot" jobs. All jobs
submitted to the Grid are identical "pilot" jobs; once the pilot starts execution, it checks the
environment (site, software availability, memory, disk space) and gets from a central database
(the "task queue") the highest priority job that can be run on that machine. In this way there is
no danger of submitting a job to a site that (at run time) is unable to run that job, or to queue a
job in a given site when other sites are free to run it; also, the job priority as defined by the
experiment is strictly and automatically enforced.
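The late-binding idea can be summarized by the following minimal sketch (this is not the actual PanDA code; the environment probe, the job fields and the matching rule are assumptions made for illustration):

# Minimal sketch of the "pilot" late-binding idea described above.
def probe_environment():
    """What the pilot discovers about the worker node it landed on."""
    return {
        "site": "SITE_A",            # illustrative site name
        "memory_mb": 4000,
        "disk_mb": 20000,
        "software": {"Release-17.0.4"},
    }

def matches(job, env):
    """A job can run here only if its requirements fit the local environment."""
    return (job["memory_mb"] <= env["memory_mb"]
            and job["disk_mb"] <= env["disk_mb"]
            and job["release"] in env["software"])

def fetch_highest_priority_job(task_queue, env):
    """Pick the highest-priority queued job that this worker node can run."""
    runnable = [j for j in task_queue if matches(j, env)]
    return max(runnable, key=lambda j: j["priority"], default=None)

task_queue = [
    {"id": 1, "priority": 10, "memory_mb": 2000, "disk_mb": 5000, "release": "Release-17.0.4"},
    {"id": 2, "priority": 50, "memory_mb": 8000, "disk_mb": 5000, "release": "Release-17.0.4"},
]

env = probe_environment()
job = fetch_highest_priority_job(task_queue, env)
print(job)   # job 2 is skipped (needs too much memory), job 1 is dispatched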
The software that is used to process the data is also experiment-specific. The Athena
software framework was developed for ATLAS starting in the early 2000's in C++ using
object-oriented techniques; the algorithmic code (also in C++) followed soon, developed by
over 100 physicists, usually experts on their part of the detector, located at their institutes. ATLAS
uses Python as the scripting and control language for the execution of the C++ code. Compiled
libraries with the experiment software, several gigabytes per software release, are installed by
Grid jobs on each site contributing to ATLAS; in this way each job only contains the job
script and the additional code that is specific to that job. The Grid-based software installation
is now being progressively replaced by the CVMFS [20] web-based file system, which consists of
central servers and local caches on each site and worker node that hold the active releases.
Access to run conditions information
Not all experimental data are in event data files; in order to properly process and
analyse the events, calibration, alignments and other time-dependent detector conditions data
that are stored in relational databases are needed. ATLAS stores the conditions data in Oracle
databases at CERN and replicates them to Oracle databases placed at the five largest Tier-1
sites; data are further exported to other sites using the web services technology, with a local
cache on each site. The Frontier system [21] developed initially for the CDF experiment at
Fermilab and then for CMS at CERN and finally adopted also by ATLAS and LHCb, consists
of web servers in front of Oracle databases, and Squid [22] caches placed at each site. In this
way it is possible to run any job on any site without having to worry about database access or
overloading central Oracle servers.
Grid operations
The described layered infrastructure was implemented progressively in 2005-2008 and
tested by ATLAS data challenges and global WLCG exercises. Starting in late 2007, cosmic-
ray runs were processed and distributed as if they were real accelerator data throughout 2008
and 2009 until the start of LHC operation in October 2009; these tests were extremely useful
to debug the global system, identifying and removing the bottlenecks. When LHC operations
started, ATLAS and the sites were ready for the expected initial data flow and no major
hiccup happened.
Fig. 2 shows the breakdown of weekly average data transfer rates by activity for the
ATLAS experiment: automatic data placement from production activities and brokering due
to data popularity produce about 70% of the data movements, the rest being due to user-
requested manual data transfers.
Over half a million jobs per day are run over the Grid (not counting local batch and
interactive usage). As an example, Fig. 3 shows the average number of running jobs for the
ATLAS experiment; over 90k CPU cores are in use by ATLAS at any time. Production consists
of jobs simulating the physics of the experiment and the detector response, in order to compare
experimental results with theoretical models, and of data reprocessing jobs, which are run when
better software or calibration constants become available. Analysis jobs are submitted to the
Grid by all Collaboration members who want to select and analyse processed data according to
their own criteria. Data are available to be analysed within a day of data-taking.


Fig. 2. Average weekly data throughput for ATLAS between January and August 2011. Balance
reached between scheduled data placement, dynamic data caching and user data requests.

A very large and distributed computing system can never be absolutely stable.
Hardware failures and network interruptions happen continuously, leading to the permanent
loss or temporary unavailability of data files and CPU resources. The only way to provide a
robust service to the users is to implement as much redundancy and automatic failover as
financially possible, and to shield the users from local site problems. First of all, all data should
be replicated on disk at least at two different sites, so that if one site is down or a disk has a
hardware failure, the data are still available. Data lost through hardware failures must be
replicated again automatically in order to re-create the second copy. Checksums must be
computed after each data transfer and compared to the original values in the data catalogues.
Jobs that fail for Grid reasons must be retried automatically on a different site.
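A minimal sketch of the post-transfer integrity check described above is given below (illustrative only; the real data management tools use their own catalogue and transfer machinery, and the checksum algorithm used here is chosen purely for the example):

import hashlib

def file_checksum(path, algorithm="md5"):
    """Compute the checksum of a file in 1 MB chunks."""
    h = hashlib.new(algorithm)
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1024 * 1024), b""):
            h.update(chunk)
    return h.hexdigest()

def verify_transfer(local_copy, catalogue_checksum, retransfer, max_retries=3):
    """Compare the new copy against the catalogue value; re-transfer on mismatch."""
    for attempt in range(max_retries):
        if file_checksum(local_copy) == catalogue_checksum:
            return True            # copy is intact
        retransfer(local_copy)     # hypothetical callback that repeats the transfer
    return False                   # give up; the replica must be recreated elsewhere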


Fig. 3. Average number of running production (left) and analysis (right) jobs for the ATLAS
experiment between September 2010 and August 2011. The breakdown is by ATLAS cloud
(a Tier-1 site and all associated Tier-2 and Tier-3 sites).

The data provided by monitoring tools are used automatically by the data transfer and job
brokering systems in order to avoid problematic sites; for example, the ATLAS
HammerCloud [23] suite sends regular test analysis jobs to all sites and switches off job
brokering to malfunctioning sites; it keeps sending test jobs and, when the problem is solved, job
brokering is re-activated automatically. Similarly, data transfer functional tests provide
information for the data brokering tools.
With operational experience, more and more failure modes can be identified
automatically, but human intervention is still needed to identify a number of them.
Crews of shifters use the monitoring tools to keep a continuous view of the status of the sites
and services and alert the appropriate experts when needed. The fact that ATLAS has a world-
wide participation helps in this round-the-clock monitoring work, as people are only required
to do this service work during their normal working hours.
Evolution and outlook
As for all software projects, the Grid and experiment tools follow cycles of
development, deployment and operation. There is a continuous evolution in the tools and also
in the computing models of the experiments, as the experience of real data-taking is taken into
account. During the last few years, the concept of "cloud computing" has been proposed and
implemented in the commercial computing market. The HEP community, and ATLAS in
particular, have run tests on these commercial facilities and on equivalent research clusters,
generally with good results from the performance point of view, but with (presently) high
financial costs. The LHC experiments need to store large amounts of data safely and have high I/O
rates for each job (up to 100 Mb/s per analysis job), which, given the current charging
algorithms for commercial cloud computing, result in very high costs relative to the "in-
house" computing centres we have now. The technology is in any case interesting, and the job
submission frameworks have been adapted to support cloud computing interfaces in
addition to Grid interfaces, in case some of the available capacity is in the future made
available only through these interfaces.
Data storage technologies also evolve rapidly in the commercial environment, driven
mostly by the needs of search engines and social networks. Cataloguing and indexing data is a
non-trivial problem when dealing with petabytes of data (several billion files).
Several R&D projects have been started, some of them jointly with WLCG, tracking
the evolution of computing technology, particularly in the fields of parallelisation of jobs for
many-core processors, virtualisation, NoSQL databases as back-ends for Grid tools and
interfacing to cloud computing infrastructures. The aim of this work is to continuously
improve and optimise the present tools and develop and test new generations of these tools
that will eventually replace the existing infrastructure, following the general trends of
computing technology.
Conclusions
The HEP community, and the ATLAS experiment in particular, have been the first
heavy users of distributed computing facilities, and during the last ten years the LHC
experiments have pioneered the usage of Grid infrastructures. A number of tools had to be
developed on top of the basic Grid middleware in order to build a robust and efficient system.
The present infrastructure fundamentally works: data can be processed and moved quickly,
and people can promptly analyse them. This system is continuously adapted to the evolution of
computing technologies.
Acknowledgments
This work would not have been possible without the continuous support of the
managements of the ATLAS and WLCG Collaborations, and the financial support of the
participating institutions and funding agencies.
References
[1] I. Foster, C. Kesselman. The Grid: Blueprint for a New Computing Infrastructure. Morgan Kaufmann Publishers, ISBN 1-55860-475-8, 1999.
[2] The WLCG Collaboration. LHC Computing Grid Technical Design Report. CERN-LHCC-2005-024, ISBN 92-9083-253-3, 2005.
[3] The ATLAS Collaboration. The ATLAS Experiment at the CERN Large Hadron Collider. JINST 3 S08003, doi:10.1088/1748-0221/3/08/S08003, 2008.
[4] Roy et al. Building and testing a production quality grid software distribution for the Open Science Grid. Journal of Physics: Conference Series 180 012052, 2009.
[5] P. Eerola et al. The NorduGrid production Grid infrastructure, status and plans. Proceedings of the Fourth IEEE International Workshop on Grid Computing, 2003, pp. 158-165, doi:10.1109/GRID.2003.1261711.
[6] E. Laure et al. Middleware for the next generation Grid infrastructure. Proceedings of the Conference on Computing in High-Energy Physics, Interlaken (Switzerland), 2004.
[7] M. Aderholz et al. Models of networked analysis at regional centres for LHC experiments. CERN/LCB 2000-001, 2000.
[8] LHCOPN: see http://twiki.cern.ch/twiki/bin/view/LHCOPN/WebHome and references therein.
[9] F. Donno et al. Storage resource manager version 2.2: design, implementation, and testing experience. Journal of Physics: Conference Series 119 062028, 2007, doi:10.1088/1742-6596/119/6/062028.
[10] M. Flechl and L. Field. Grid interoperability: joining Grid information systems. Journal of Physics: Conference Series 119 062030, 2007, doi:10.1088/1742-6596/119/6/062030.
[11] AGIS: see http://atlas-agis.cern.ch
[12] R. Housley et al. Internet X.509 public key infrastructure certificate and certificate revocation list (CRL) profile. RFC 3280, 2002.
[13] G. Van Rossum. The Python Language Reference Manual. Network Theory Ltd., ISBN 978-1906966140, 2011.
[14] M. Girone. CERN database services for the LHC computing grid. Journal of Physics: Conference Series 119 052017, 2007, doi:10.1088/1742-6596/119/5/052017.
[15] MySQL database: see http://www.mysql.com
[16] M. Branco et al. PanDA: distributed production and distributed analysis system for ATLAS. Journal of Physics: Conference Series 119 062036, 2007, doi:10.1088/1742-6596/119/6/062036.
[17] A. Molfetas, V. Garonne. Popularity framework to process dataset tracers and its application on dynamic replica reduction in the ATLAS experiment; Grid data deletion service. Proceedings of the 18th Conference on Computing in High-Energy Physics, Taipei (Taiwan), October 2010, to be published in Journal of Physics: Conference Series.
[18] Frohner et al. Data management in EGEE. Journal of Physics: Conference Series 219 062012, 2009, doi:10.1088/1742-6596/219/6/062012.
[19] T. Maeno. Managing ATLAS data on a petabyte scale with DQ2. Journal of Physics: Conference Series 119 062017, 2007, doi:10.1088/1742-6596/119/6/062017.
[20] P. Buncic et al. CernVM - a virtual software appliance for LHC applications. Journal of Physics: Conference Series 219 042003, 2009, doi:10.1088/1742-6596/219/4/042003.
[21] D. Dykstra, L. Lueking. Greatly improved cache update times for conditions data with Frontier/Squid. Journal of Physics: Conference Series 219 072034, 2009, doi:10.1088/1742-6596/219/7/072034.
[22] D. Wessels. Squid: The Definitive Guide. O'Reilly Media, print ISBN 978-0-596-00162-9, ebook ISBN 978-0-596-10364-4, 2004.
[23] D. Van der Ster et al. Functional and large-scale testing of the ATLAS distributed analysis facilities with Ganga. Journal of Physics: Conference Series 219 072021, 2009, doi:10.1088/1742-6596/219/7/072021.
Time-of-flight system for controlling the beam composition

P. Batyuk¹, I. Gnesi², V. Grebenyuk¹, A. Nikiforov³, G. Pontecorvo¹, F. Tosello²
¹ Joint Institute for Nuclear Research, Russia
² Istituto Nazionale di Fisica Nucleare, Torino, Italia
³ Moscow State University, Joint Institute for Nuclear Research, Dubna, Russia

At present, the PAINUC collaboration is studying the interactions of pions of
intermediate energies with the He-4 nucleus at the JINR phasotron. Contamination of the pion
beam with muons and electrons worsens the background conditions. There are at least two
reasons for changes in the beam composition. One of them is related to inaccurate positioning of the target
station, and the other to the instability of the currents in the elements of the magnetic optics; the
second factor is less significant. A system based on CAMAC (Computer Automated Measurement
And Control) blocks and the Ethernet crate controller of the CAEN company has been
created for time-of-flight monitoring of the beam composition. The system makes it possible
for the operator to react to changes in the beam composition and to correct it by varying the
parameters of the beam-control magnetic system.
The said system consists of the following blocks, developed at the JINR Dzhelepov
Laboratory of Nuclear Problems: a KL353 shaper, a KL354 [1] coincidence circuit, a KA317 [2]
time-code converter, a KL025 [3] memory, a KL018 [4] unit controlling incremental
recording to the memory, a KL038 [4] interface for the analyzer display, and a C111C [5]
crate controller. The block diagram of the equipment is shown in Fig. 1.

Fig. 1. The block diagram of the equipment

Signals from the scintillation detectors are input to the shapers, whose outputs are
connected to the inputs of the coincidence circuit. The output of the coincidence circuit is
connected to the start input of the time-code converter. The HF signal of the accelerator is
sent to the stop input of the KA317.
Strictly speaking, the assembled system is not a classical time-of-flight system, since it
involves no flight base. But taking into account that accelerated protons can be located only at
the maxima of the extraction oscillation (extraction of protons is achieved at a frequency of
approximately 14 MHz), the system developed measures relative differences in the arrival times
of particles of different types. Note that, since it is the HF signal of the accelerator that serves
as the stop signal, the time distribution is inverted: the faster the particles, the closer they are
to the end of the distribution, while the slower ones lie towards its beginning.
The dead-time output of the converter is sent to the anticoincidence input of the
coincidence circuit, in order to discard loading effects that distort the TOF spectrum. The
time codes are transferred to the KL018 block via the external bus, and from there to the
KL025 memory. With the aid of the KI044 block the spectrum is delivered to the indicator.
Fig. 2 presents the exterior view of the TOF system.




Fig. 2. The exterior view of the TOF system

The C111C Ethernet CAMAC Crate Controller of the CAEN company was used for
connecting the CAMAC crate modules with a remote PC, and a special program was developed
for remote connection of the PC and for providing access to the modules of the crate.
The crate contains modules performing signal-to-code transformation, a memory module for
recording spectra and a controller that connects the other CAMAC modules with the remote PC.
The program for remote use of the controller is realized via a socket connection. The
controller plays the role of a server, while the program is the client, running on a remote PC
under Linux with network access. The program consists of three main parts:
Socket creation (a socket with two parameters: IP address and port),
Establishment of the client-server connection,
Data exchange.
For convenience of working with the controller we made use of the C111C Crate Lib
library, which has the advantage that the programs are smaller and correct operation of the
controller is guaranteed. Of the various possibilities provided by this library we use a standard
library function requiring a single parameter, namely a structure with a port and an IP address,
so that no unexpected errors arise when this library is applied. The socket connection is presented in Fig. 3.
When the client receives the signal L, the program switches to the mode of spectrum readout
from the memory. At each step of the cycle, data are read from the memory using the
F16 command. These data correspond to the bins of the spectrum histogram (numbered from 0 to 4095).
When new data are read from the memory, a function is called that converts a string of char type
(in hexadecimal format) to type int (in decimal format) by expansion into a series in powers
of sixteen. Thus we fill a buffer, which now contains 4096 numbers corresponding to the
spectrum channels.

Fig. 3. The socket connection
Then, a new text file is created for outputting the obtained spectrum; it allows us to
store the data for subsequent processing.
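For illustration, the readout logic described above can be sketched in Python as follows (the real client is built on the C111C Crate Lib in C; the host address, port and request format used here are placeholders, not the actual controller protocol):

import socket

def read_spectrum(host="192.168.0.100", port=2000, channels=4096):
    """Connect to the crate controller, read all memory cells and
    convert the hexadecimal replies to decimal integers."""
    spectrum = []
    with socket.create_connection((host, port)) as sock:
        for channel in range(channels):
            # Hypothetical request format standing in for the memory read command.
            sock.sendall(f"READ {channel}\n".encode())
            reply = sock.recv(64).decode().strip()     # e.g. "01A3"
            # Expansion into a series in powers of sixteen, as described in the text.
            value = sum(int(d, 16) * 16 ** i for i, d in enumerate(reversed(reply)))
            spectrum.append(value)
    return spectrum

def save_spectrum(spectrum, filename="tof_spectrum.txt"):
    """Write one channel per line so the data can be processed later."""
    with open(filename, "w") as f:
        for channel, counts in enumerate(spectrum):
            f.write(f"{channel} {counts}\n")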
The spectrum is plotted after it has been recorded. Examples of such spectra
are presented in Fig. 4 and Fig. 5; they were obtained with different currents in the
magnetic lens ML31, which focuses the proton beam onto the target. These spectra were
obtained during a study of the beam and demonstrate the operability of the described system.

Fig. 4. TOF spectrum (counts N vs. time, 0.43 ns/channel) obtained with ML31 = -10.57

Fig. 5. TOF spectrum (counts N vs. time, 0.43 ns/channel) obtained with ML31 = -21.14
References
[1] V.F. Borejko, Yu.M. Valuev, V.M. Grebenyuk et al. JINR Communication 10-85-661,
Dubna, 1985 (in Russian).
[2] V.M. Grebenyuk, A.V. Selikov. JINR Preprint 10-90-335, Dubna, 1990 (in Russian).
[3] V. Antjuchov, D. Vasilev, N. Zuravlev et al. JINR Communication 10-86-854, Dubna,
1985 (in Russian).
[4] V. Antjuchov, D. Vasilev, N. Zuravlev et al. JINR Communication 10-85-922, Dubna,
1985 (in Russian).
[5] CAEN Technical Information Manual, Revision N. 9, 2009.


Development of the grid-infrastructure for molecular docking problems

R. Bazarov¹, V. Bruskov², D. Bazarov¹
¹ Institute of Mathematics & ICT, Academy of Sciences of Uzbekistan
² Institute of Chemistry of Plant Substances, Academy of Sciences of Uzbekistan

The development of a computing complex for the computer screening (in silico) of
low-molecular natural and synthetic chemical compounds includes the following stages:
selection of the experimental material on which the computing model will be based;
selection of software for implementation of this model; and comparison of experimental results
with the results of computer screening to determine the adequacy of the computing complex.
The experimental basis of this work includes experiments for the determination of the
inhibitory action of natural flavonoids on Na/K-ATPase [1].
Amongst inhibitors of Na/K-ATPase, flavonoids are very interesting because of the
mechanism of their action, which differs from the action of cardiac glycosides.
Due to the complexity of the structure and the molecular weight of the biological target under
study (Na/K-ATPase consists of 10122 atoms), and the absence to date of X-ray structure
data for pure Na/K-ATPase, this problem requires the involvement of different methods
of molecular simulation as well as significant information and computing resources.
A cluster has been installed at the Institute of Mathematics and Information
Technologies (IMIT) of the Academy of Sciences of Uzbekistan, with the following characteristics:
OS on WN: Scientific Linux 5.4; Middleware: gLite 3.2; Number of WN: 8;
Total number of cores: 64; Total RAM: 40 GB; File storage: 3.0 TB;
Type of network: Gigabit Ethernet; Peak performance: 0.5 TFlops;
MPI: openmpi-1.4.1.
Access to the cluster is provided through the P-GRADE portal: http://94.141.66.222:8080/gridsphere/.
In order to select the most adequate tools for docking, the following software was
installed on the worker nodes:
Firefly-7.1 [2], [http://lcc.chem.msu.ru/gran/index.html];
AutoDock-4.2.3 [3], [http://autodock.scripps.edu];
Dock-6.4, [4], [http://dock.compbio.ucsf.edu/];
Vina-1.1.1, beta-version [5].

Porting of AutoDock Vina-1.1.1 to the cluster is performed in 4 stages:
1. Run the application on a worker node:
[sgmimit01@wn014 ~]$ /opt/exp_soft/imit/autodock_vina_1_1_1_linux_x86/bin/vina \
    --config conf.txt --log quer_flex.txt

2. Run the job on the CE (choice of a free WN):
[imit032@ce01bdii ~]$ qsub vina.sh -q imit
108.ce01bdii.imit.uz
[imit032@ce01bdii ~]$ cat vina.sh   // the execution script's content
#!/bin/bash
VINA_HOME=$VO_IMIT_SW_DIR/autodock_vina_1_1_1_linux_x86/
#################### PARAMETERS ####################
RECEPTOR=receptor.pdbqt
FLEX=flex.pdbqt
LIGAND=ligand.pdbqt
CENTER_X=147.32
CENTER_Y=15.682
CENTER_Z=-1.816
SIZE_X=70
SIZE_Y=70
SIZE_Z=70
LOG=quer_flex.txt
#################### EXECUTION ####################
$VINA_HOME/bin/vina --receptor $RECEPTOR --flex $FLEX --ligand $LIGAND \
    --center_x $CENTER_X --center_y $CENTER_Y --center_z $CENTER_Z \
    --size_x $SIZE_X --size_y $SIZE_Y --size_z $SIZE_Z --log $LOG

3. Run the job from the UI of the grid site:
[vit@ui ~]$ glite-wms-job-submit vina.jdl   // choice of a free CE

4. Run the job from the grid portal: before running the job we need to send the source data for this task
to the SE and create Job0 with the following characteristics:

Job0 Properties:
Name: Job0;  Job Type: Seq;  Job Executable: vina.sh;
Grid Resource: imit_GLITE_BROKER  ce01bdii.imit.uz:/jobmanager-pbs-imit

For this job, 4 ports were created with the following characteristics (WORKFLOW Manager):

Port  Type  File Type  File                                       Internal File Name
1     in    Remote     lfn:/grid/imit/vip/alpha_beta.pdbqt        receptor.pdbqt
2     in    Remote     lfn:/grid/imit/vip/alpha_beta_flex.pdbqt   flex.pdbqt
3     in    Remote     lfn:/grid/imit/vip/quer_a.pdbqt            ligand.pdbqt
4     out   Remote     lfn:/grid/imit/vip/wf4.txt                 quer_flex.txt

The JDL file of Job0 is:
[
VirtualOrganisation = "imit"; Executable="Job0.sh"; JobType="Normal"; StdOutput= "std.out";
StdError = "std.err";
InputSandbox = {"vina.sh","Job0.sh","info.tar.gz"};
OutputSandbox = {"std.out","std.err","std.log"};
OutputSE = "se01.imit.uz"; RetryCount = 1; MyProxyServer = "px.imit.uz";
Environment={"LCG_CATALOG_TYPE=lfc","LFC_HOST=lfc.imit.uz",
"LCG_GFAL_INFOSYS=topbdii.imit.uz:2170"};
Requirements = other.GlueCEInfoHostname=="ce01bdii.imit.uz";
ShallowRetryCount = 3
]

On the worker nodes of the cluster this job ran for about 3 hours.

A preliminary analysis of the probable site at which Na/K-ATPase binds the
template inhibitor ouabain was conducted with the AutoDock Vina program package. Docking was
performed for a rigid target molecule with conformational mobility of the ligand.
The analysis has shown the presence of four potential binding sites.
The main binding site agrees with the one given by PCA (Fig. 1). One can see that the
interactions between ouabain and the amino-acid environment in the model complex and
in the experimentally obtained complex are similar.



Fig. 1. Ouabain molecule in the binding site of Na/K-ATPase



Fig. 2. Amino-acid environment of the ouabain molecule in the binding site of Na/K-ATPase

Earlier we studied the possibility of a keto-enol tautomerism for flavon-3-ols and
showed its presence for quercetin by the methods of 13C NMR and quantum chemistry.
The failed attempt to displace ouabain from human erythrocytes by flavonoids with a strong
inhibitory action, in particular quercetin, suggests that the mechanism of blocking
Na/K-ATPase by flavonoids is not connected with the specific area where cardiac
glycosides bind to the enzyme.
The molecular docking for the three quercetin forms has confirmed this conclusion, i.e. the
more active binding centers for quercetin are located between the α- and β-subunits of the
enzyme.

Fig. 3. Ouabain in 4 binding sites

Table 1. Binding energy and standard deviation (rmsd) of ouabain and quercetin docking
in Na/K-ATPase

No.   Ouabain             Quercetin           Quercetin-S         Quercetin-R
      kcal/mol   rmsd     kcal/mol   rmsd     kcal/mol   rmsd     kcal/mol   rmsd
1     -8.8       0.000    -8.3       0.000    -7.6       0.000    -8.0       0.000
2     -8.7       27.590   -7.8       2.123    -7.5       1.993    -8.0       1.830
3     -8.5       23.873   -7.8       22.070   -7.4       20.665   -7.9       26.990
4     -8.0       23.987   -7.7       1.435    -7.4       2.711    -7.7       1.942
5     -7.9       18.681   -7.7       26.800   -7.4       25.072   -7.6       35.987
6     -7.6       25.832   -7.6       21.509   -7.3       2.131    -7.6       25.280
7     -7.4       41.988   -7.5       2.002    -7.3       8.906    -7.6       2.157
8     -7.3       19.307   -7.3       3.835    -7.3       2.199    -7.4       26.035
9     -7.3       5.533    -7.1       35.075   -7.2       24.990   -7.3       35.047

The character of the interaction between quercetin and the enzyme, as well as
between ouabain and the enzyme, is similar, which agrees with the in-vitro experimental
data. Here the contribution of the ketone R-form is decisive.
Inhibition constants were calculated by the methods of statistical physics based on molecular
dynamics data: 1.12 for ouabain (1.11 in experiment) and 1.22 for quercetin (1.56 in experiment).
Screening of flavonoid inhibitors of Na/K-ATPase using the Lamarckian GA algorithm gives
good agreement with the experimental binding parameters (correlation coefficient 0.83).
We conclude that the proposed computing model gives data similar to the experimental ones
and is suitable for the virtual screening of low-molecular-weight ligands for inhibitory activity
against Na/K-ATPase.

References
[1] F.T. Umarova, Z.A. Hushbaktova, E.X. Batirov, V.M. Mekler. Biological Membranes, 14, 1998, p. 24.
[2] http://lcc.chem.msu.ru/gran/index.html
[3] http://autodock.scripps.edu
[4] http://dock.compbio.ucsf.edu/
[5] O. Trott, A.J. Olson. AutoDock Vina: improving the speed and accuracy of docking with a
new scoring function, efficient optimization and multithreading. Journal of Computational
Chemistry, 31, 2010, pp. 455-461.
Grid Activities at the Joint Institute for Nuclear Research

S.D. Belov, P. Dmitrienko, V.V. Galaktionov, N.I. Gromova, I. Kadochnikov,
V.V. Korenkov, N.A. Kutovskiy, V.V. Mitsyn, D.A. Oleynik, A.S. Petrosyan,
I. Sidorova, G.S. Shabratova, T.A. Strizh, E.A. Tikhonenko, V.V. Trofimov,
A.V. Uzhinsky, V.E. Zhiltsov
Joint Institute for Nuclear Research, Dubna, Russia

Since 2001, after the start of the EU DataGrid project on the creation of grid middleware and
the testing of the initial operational grid infrastructure in Europe, JINR has participated in
international grid activities [1]. Since 2003 the Joint Institute for Nuclear Research has taken
an active part in the large-scale worldwide grid project WLCG (Worldwide LHC Computing Grid), in
close cooperation with the CERN Information Technology department [2]. JINR has made
significant contributions to the WLCG, EGEE (Enabling Grids for E-sciencE) and EGI
(European Grid Infrastructure) projects. JINR is an active member of the Russian consortium
RDIG (Russian Data Intensive Grid), which was set up in September 2003 as a national federation
in the EGEE project [3]. As a result, staff members of the Joint Institute for Nuclear Research
have been actively involved in the study, use and development of advanced grid technologies. The
most important result of this work was the creation of a grid infrastructure at JINR that provides a
complete range of grid services. The created JINR grid site (T2_RU_JINR) is fully integrated into
the global (world-wide) infrastructure (the name of the JINR grid site in the WLCG/EGI
infrastructure is JINR-LCG2). The resources of the JINR grid site are successfully used in the
global infrastructure, and in terms of reliability indicators the T2_RU_JINR site is one of the best in
the WLCG infrastructure.

JINR staff members make a great contribution to the testing and development of grid
middleware, the development of grid-monitoring systems, and the organization of support for
different virtual organizations. The only specialized conference in Russia devoted to grid
technologies and distributed computing is organized and traditionally held at JINR. To provide
continuous training in grid technologies, JINR has created a separate educational grid infrastructure.
In the field of grid computing, JINR actively collaborates with many foreign and Russian research
centers; special attention is paid to cooperation with the JINR Member States.

By November 2011 the JINR computing farm had been upgraded to 2064 slots, and the total
capacity of the Storage Element, structured as dCache and XROOTD storage systems, was extended
to 1200 TB. The software includes a number of program packages which form the GRID
environment; the current version of the WLCG software is gLite 3.2. A monitoring and accounting
system has been developed at JINR and is in use by the entire Russian WLCG segment [3]. The
JINR external optical communication channel provides a data link of up to 2x10 Gbps. The JINR grid
site provides the following services: Storage Element (SE) services; Computing Element
(CE) services as a grid batch queue enabling access for 13 Virtual Organizations (VOs) including
ALICE, ATLAS, CMS, LHCb, MPD, HONE, FUSION, BIOMED and BES; Information Service
(BDII - Berkeley DB Information Index); Proxy service (PX); the advanced service for access to
the LCG/EGEE resources (MyProxy); Workload Management System + Logging & Bookkeeping
Service (WMS+LB); RGMA-based monitoring system collector service (MON-box); LCG File
Catalog (LFC) service; and VOboxes - special services for ALICE, CMS, CBM and PANDA. There
are also three NFS servers dedicated to VOs. The global file system CVMFS for access to the
Virtual Organizations' software has been installed, and the software required for the LHC experiments is
currently installed (XROOTD, AliROOT, ROOT and GEANT packages for ALICE; CMSSW
packages for CMS; LHCb and ATLAS are supported from the CVMFS global installation). JINR
currently supports and develops the JINR WLCG segment within the WLCG
infrastructure in accordance with the requirements of the experiments for the LHC running phase.

Current computing activities for ALICE, CMS and ATLAS are carried out in
coordination with the LHC experiments [4-10]:

ATLAS: Functional Tests of the ATLAS DDM (Distributed Data Management);
implementation of PD2P (PanDA Dynamic Data Placement); Xrootd and PROOF for
ATLAS Tier3 data analysis; the development and support of the ATLAS DQ2 Deletion Service
became a major contribution to the cooperation with the ATLAS experiment;
CMS: participation in CMS PhEDEx test data transfers; support of the PhEDEx server
installed at the CMS VObox at JINR; CMS data replication to the JINR SE;
participation in CMS Dashboard data repository maintenance and CMS Dashboard
development [11-13], in particular, in the improvement of CMS job monitoring and CMS
job failure reporting;
ALICE: regular updates and testing of the ALICE software (AliEn) required for ALICE
production and distributed activities, not only at the JINR-WLCG site but also at
8 ALICE sites in Russia;
tests of the readiness of the JINR site to store and process data for all the experiments
JINR participates in (ALICE, ATLAS, CMS).

In 2010-2011, work on the development, maintenance and improvement of the ATLAS DQ2
Deletion Service was carried out by specialists from JINR LIT. The DQ2 Deletion Service serves
deletion requests for 130 sites with more than 700 endpoints (space tokens) and is one of the critical
data management services in ATLAS. The aim of the work in 2010 was to implement a new version
of the Deletion Service that solved a set of problems with scalability and the deployment model;
a new version of the Deletion Service monitoring was also implemented. In 2011 a set of
improvements aimed at increasing the productivity of the service was made. After optimization of
some algorithms and of the database, a deletion rate of more than 10 Hz was achieved for some sites,
with an overall deletion performance of more than 6,000,000 files per day.

Also, the ATLAS Computing technical interchange meeting (the official ATLAS
Collaboration computing meeting) was held at JINR on 31.05.2011 - 02.06.2011. This
event gathered about one hundred leading ATLAS computing specialists.

The JINR has a large and long-term experience in Grid monitoring activities [14-16].
Currently the main areas of activity are:
the RDIG monitoring and accounting system for the WLCG infrastructure of the Russian
Tier2 sites (http://rocmon.jinr.ru:8080), with continuous support provided for
grid site administrators;
cooperation with Romania in the development of the monitoring system for the
Romanian Tier2 federation;
participation in the development of the global WLCG data transfer monitoring system
(https://twiki.cern.ch/twiki/bin/view/LCG/WLCGTransferMonitoring);
the Tier3 monitoring project [16]: the overall coordination and development at CERN
(https://svnweb.cern.ch/trac/t3mon) of the software environment and development
infrastructure (code repository, build system, software repository, external packages
built for dependencies) and, in particular, at JINR the deployment of a VM-based
infrastructure for simulating different Tier3 cluster and storage solutions; at the
moment it consists of the following parts: ganglia and nagios servers; torque-,
condor-, proof- and OGE-based clusters; two xrootd and one lustre-based storage
systems;
participation in the development of web services monitoring for the Dashboard
project (http://dashboard41.cern.ch/awstats/awstats.pl?config=master&configdir=/opt/dashboard/var/lib/mawstats/conf).

The JINR local monitoring system (http://litmon.jinr.ru), developed at JINR, is an
important basis for the global monitoring systems, providing up-to-date information on the status of
the JINR infrastructure to the higher levels of monitoring.
The dCache monitoring system for the JINR WLCG segment has been developed
using Nagios, MRTG and custom plug-ins. The system provides information on input/output
traffic and on requested and utilized space for both the ATLAS and CMS experiments
(http://litmon.jinr.ru/dcache.html).
We continue to take part in WLCG middleware testing and evaluation. During the last two
years the directions and results were the following:
development of gLite MPI (Message Passing Interface) certification tests: certification
of MPI patch #3714 and evaluation of the current status of the MPI-enabled
CREAM (Computing Resource Execution And Management service) CE
(Computing Element);
development and modernization of the FTS (File Transfer Service) certification tests;
the deployment of several gLite 3.2, EMI (European Middleware Initiative) and UMD
(Unified Middleware Distribution) components was tested;
in the framework of developing tests for the LFC (LCG File Catalog) Perl API
functions, a separate LFC server (gLite 3.2) was installed on the gLite testbed at JINR
and the corresponding GGUS (Global Grid User Support) tickets were submitted.

Participation in the WLCG Monte Carlo database (http://mcdb.cern.ch) resulted in [17-20]:
support for CMS production and users;
improved libraries for working with automatic documentation of Monte Carlo simulated
events (the HepML language);
improved automatic data uploading with unified HepML descriptions.
We support users (conducting courses, lectures and trainings) to stimulate their active
usage of the WLCG resources [21-23]. Also, a special grid-training infrastructure for JINR
and the JINR Member States (Russia, Uzbekistan, Armenia, Bulgaria, Ukraine) has been
created. During 2010-2011 a number of schools and training events have been held:
user trainings on gLite middleware for graduate students of physics at the JINR
University Center: a semester course in grid technologies was conducted during
02.2010 - 06.2010;
a grid course for Egyptian students was given during May-June 2010;
JINR-CERN School on JINR/CERN Grid and Management Information systems was
held on October 25-29, 2010 (http://ais-grid-2010.jinr.ru/) (about 50 attended students
from JINR, Russian universities and Poland);
JINR-CERN School on JINR/CERN Grid and Advanced Information systems was
held on October 24-28, 2011 (http://ais-grid-2011.jinr.ru/) (about 100 attended
students from JINR, Russian universities, Poland, Ukraine, Georgia and Bulgaria);
a training course for system administrators from Bogolyubov Institute for Theoretical
Physics - BITP (Kiev, Ukraine) and National Technical University of Ukraine "Kyiv
Polytechnic Institute" - KPI (Kiev, Ukraine) was given. That course was focused
mostly on gLite 3.2 and AliEn services deployment;
basic training courses on gLite 3.2 services deployment for system administrators
from Mongolia, Kazakhstan and Azerbaijan were held.
an international practice on grid technologies was held at the JINR University Center,
06.09.11-09.09.11.
Also, trainings for system administrators from Ukraine, Romania and Uzbekistan
have been conducted. Two grid sites based on gLite middleware (one at the Bogolyubov
Institute for Theoretical Physics and another at the National Technical University of Ukraine
"Kyiv Polytechnic Institute") were set up during one of these trainings. The trainings for the
Romanian and Uzbek administrators were intended to give practical skills in setting
up MPI-enabled CREAM Computing Elements.
The traditional international conferences "Distributed Computing and Grid
Technologies in Science and Education" are organized and hosted by JINR. These conferences
gather scientists from Russia and the CIS countries, and it is the only conference in the Russian
Federation devoted specifically to modern grid technologies. The fourth conference was
successfully held at JINR in June 2010 (http://grid2010.jinr.ru).
Information on JINR activities in the WLCG is currently presented on the JINR GRID
portal (http://grid-eng.jinr.ru).
We provide continuous support for the JINR Member States and associated JINR
Member States in WLCG activities, working in close cooperation with partners in
Ukraine, Belarus, Azerbaijan, Germany, the Czech Republic, Slovakia, Poland, Romania, Moldova,
Mongolia, South Africa, Kazakhstan and Bulgaria. Protocols and agreements on cooperation
in the field of grid technologies have been signed between JINR and Armenia, Belarus, Bulgaria,
Moldova, Poland, the Czech Republic and Slovakia.

Recently, an agreement on the construction in Russia of a Tier1 center for the four
LHC experiments was signed at CERN by representatives of Russian official agencies and CERN.
This preliminary decision involves the creation of a Tier1 center for the ALICE, ATLAS and
LHCb experiments at the Kurchatov Institute and a single Tier1 center for the CMS
experiment at JINR. Currently the plan for the creation of the CMS Tier1 center at JINR is
being developed in detail. Implementation of this plan will require significant investment and
great effort from JINR specialists, as well as the recruitment of a number of new employees who
will have to provide stable (24x7) operation of the future CMS Tier1 at JINR.
During 2010-2011, the results of the JINR grid activities were presented at the
GRID2010 conference in Dubna (http://grid2010.jinr.ru/), RDMS CMS Conference in Varna,
Bulgaria (http://rdms2010.jinr.ru/), ATLAS Software & Computing Workshop (04.04.2011-
08.04.2011, CERN), the RDMS conference in Alushta, Ukraine (May 2011,
http://rdms2011.kipt.kharkov.ua/), ATLAS Computing technical interchange meeting
(31.05.2011-02.06.2011, Dubna, JINR), Programme Advisory Committee for Particle
Physics, 35th meeting (21.06.2011-22.06.2011, Dubna, JINR), conference "Mathematical
Modeling and Computational Physics" (MMCP 2011) (4.07.2011-8.07.2011, Slovakia),
ATLAS Software & Computing Workshop (18.07.2011-22.07.2011, CERN), International
Summer School, ENU (07.08.2011-13.08.2011, Astana, Kazakhstan), Meeting on cooperation
JINR-Mongolia (21.08.2011-25.08.2011, Mongolia) and NEC'2011 symposium in Varna,
Bulgaria (September 2011, http://nec2011.jinr.ru).
The resources of the JINR grid site are actively used by different virtual organizations,
and the JINR's contribution to the resources provided by the RDIG consortium in 2010-2011
is the most significant one: 42% (Fig. 1).



Fig. 1. Normalised CPU time (kSI2K) per site consumed at Russian grid sites in 2010-2011

References
[1] S.D. Belov et al. Joint Institute for Nuclear Research in the WLCG and EGEE
projects. Proc. of NEC2009, Dubna, JINR, 2010, pp. 137-142.
[2] V. Korenkov. GRID Activities at the Joint Institute for Nuclear Research. Distributed
Computing and Grid-technologies in Science and Education IV Int.Conference, Proceedings
of the conference, Dubna, 2010, p. 142.
[3] V.A. Ilyin, V.V. Korenkov, A.A. Soldatov. RDIG (Russian Data Intensive Grid) e-
Infrastructure: status and plans. Proc. of NEC2009, Dubna, JINR, 2010, pp. 150-153.
[4] G. Shabratova (on behalf of ALICE). The ALICE GRID Operation. Distributed Computing
and Grid-technologies in Science and Education IV Int.Conference, Proceedings of the
conference, Dubna, 2010, p. 202.
[5] M. Demichev et al. Readiness of the JINR grid segment to process the first ATLAS data.
Proc. of NEC2009, Dubna, JINR, 2010, p. 111.
[6] E.A. Boger et al. ATLAS Computing at JINR. Distributed Computing and Grid-technologies
in Science and Education, IV Int. Conference, Proceedings of the conference, Dubna, 2010, p. 81.
[7] V. Gavrilov et al. RDMS CMS computing activities to satisfy LHC data processing and
analysis. Proc. of NEC2009, Dubna, JINR, 2010, p. 129.
[8] V. Gavrilov et al., RDMS CMS TIER 2 centers at the running phase of LHC. Distributed
Computing and Grid-technologies in Science and Education IV Int.Conference, Proceedings
of the conference, Dubna, 2010, p. 103.
[9] O.Bunetsky et al. Preparation of the LIT JINR and the NSC KIPT (Kharkov, Ukraine) grid-
infrastructures for CMS experiment data analysis. P11-2010-11, 2010, Dubna, JINR, p. 12
(in Russian).
[10] V. Gavrilov et al. RDMS CMS Computing, to be published in the Proceedings of the 15th
RDMS CMS conference (Alushta, May 2011).
[11] J. Andreeva et al. Dashboard for the LHC experiments. J. Phys. Conf. Ser.119:062008,
2008.
[12] I. Sidorova. Job monitoring for the LHC experiments. Proc. of NEC2009, Dubna, JINR,
2010, p. 243.
[13] Julia Andreeva, Max Boehm, Sergey Belov, Irina Sidorova, Jiri Sitera, Elena Tikhonenko et al.
Job monitoring on the WLCG scope: Current status and new strategy. J. Phys.: Conf. Ser., 219,
2010, p. 062002.
[14] S.D. Belov, V.V. Korenkov. Experience in development of Grid monitoring and accounting
systems in Russia. Proc. of NEC2009, Dubna, JINR, 2010, p. 75.
[15] A. Uzhinski, V. Korenkov. Monitoring system for the FTS data transfer service of the
EGEE/WLCG project. Calculating Methods and Programming, V.10, 2009, pp. 96-106 (in
Russian).
[16] J. Andreeva et al. Tier-3 Monitoring Software Suite (T3MON) proposal. ATL-SOFT-PUB-
2011-001, CERN, 2011, http://cdsweb.cern.ch/record/1336119/files/ATL-COM-SOFT-
2011-005.doc
[17] S. Belov et al. LCG MCDB - a Knowledgebase of Monte Carlo Simulated Events.
Computer Physics Communications, V. 178, I. 3, February 2008, pp. 222-229.
[18] S. Belov et al. LCG MCDB and HepML, next step to unified interfaces of Monte-Carlo
simulation. Proceedings of Science, PoS ACAT08:115, 2008.
[19] S. Belov, L. Dudko, D. Kekelidze, A. Sherstnev. On Automation of Monte Carlo Simulation
in High Energy Physics. Distributed Computing and Grid-Technologies in Science and
Education: Proceedings of the 4th Intern. Conf. (Dubna, June 28-July 3, 2010). Dubna:
JINR, -11-2010-140, 2010, p.452, ISBN 978-5-9530-0269-1.
[20] S. Belov et al. HepML, an XML-based format for describing simulated data in high energy
physics. Computer Physics Communications, 2010, doi:10.1016/j.cpc.2010.06.026
http://arxiv.org/abs/1001.2576
[21] V.V. Korenkov, N.A. Kutovskiy. Educational grid infrastructure. Open System, N. 10, 2009
(in Russian).
[22] S.D. Belov, V.V. Korenkov, N.A. Kutovskiy. Educational grid infrastructure: status and
plans. Proc. of NEC2009, Dubna, JINR, 2010, p. 81.
[23] V.V. Korenkov, N.A. Kutovskiy. Distributed training and testing grid-infrastructure.
Distributed Computing and Grid-technologies in Science and Education IV Int. Conference,
Proceedings of the conference, Dubna, 2010, p. 148.

Monitoring for GridNNN project

S. Belov, D. Oleynik, A. Petrosyan
Laboratory of Information Technologies, Joint Institute for Nuclear Research, Dubna, Russia

Contemporary distributed systems like computational grids are complex technical systems; therefore, to keep
an eye on their state and to account for the consumption of computational resources, special automated tools should be used. In
this paper we discuss the experience of developing the monitoring and accounting system for the GridNNN project
[1, 2], whose aim is to provide a grid infrastructure for the National Nanotechnology Network in Russia. The grid
middleware used in the GridNNN project is partially based on well-known packages like Globus Toolkit 4 [3] and VOMS
[4], and to fit the needs of the specific application area several grid services were developed from scratch. In these
conditions, a special monitoring and accounting system was created within the project.
Monitoring is a rather general concept. The most common tasks we deal with are:
Continuous watching of the state of grid services, both those common to the whole infrastructure and those in a particular
Resource Center;
Obtaining information on resources (number of slots, operating system, hardware architecture, special
software packages) and their utilization;
Access control rules for the resources by Virtual Organizations and groups inside them;
Execution monitoring: task and job submission, state changes and return codes;
Resource usage information (especially CPU consumption);
Watching the quotas for resource usage by Virtual Organizations.
For effective control, planning and fault detection it is important to know not only the current state of the grid
infrastructure but also to keep track of the state history.

Introduction
The aim of the GridNNN project is the creation and support of the national nanotechnology
network of Russia. The main goal of the project is to provide effective access to
distributed computational, informational and networking facilities for nanotechnology science
and industry. The base middleware of particular GridNNN services (like MDS and GRAM) is
Globus Toolkit 4; some services are being developed by the project team (e.g. the job handling tool
Pilot [5], an Information Index [6] based on Globus MDS, GRAM connections with non-
standard Local Resource Management Systems, a Web User Interface, etc.).
Operation of GridNNN, unlike that of many huge grid projects, is more centralized: there are
about 15-30 resource centers (supercomputers) controlled from two operation centers, the
main and the backup one (having the same set of central services). The infrastructure has one
central information index where all information providers have to publish their data. Another
difference from most grid projects is the variety of different Virtual Organizations (VOs).

Monitoring subsystem overview
Typical computational jobs in GridNNN are parallel and use MPI technology. They
demand a huge volume of computation but do not require storing or transferring considerable amounts
of data. Therefore, monitoring activities in this area are primarily concentrated on task and
job tracking (computational jobs are parts of a task and can be interconnected with each
other). Job monitoring is naturally associated with billing (or accounting) features: it is
important to know who is using the project resources, when and where.
To choose the resource center to submit jobs to, the end user should be able to
know the main characteristics of the environment there. The most important of them are the supported
VOs, the hardware architecture, the total number of available and free slots for running jobs,
the operating system version, special software packages and so on.
The next main point is overall information on the state of the infrastructure. Within
the monitoring subsystem, simple tests were prepared to check whether the infrastructure services are
alive. Along with this, a real-time geoinformation visualization of the system's operation
based on Google Earth was created, which allows seeing all job and task events on a 3D globe
in real time [8]. This feature is impressive and is often used to make graphic presentations.
There are also several significant tasks related to monitoring that are not included in the
monitoring subsystem and are supported by other teams in the project. The first is the RAT tests
(resource availability tests): sample tasks periodically submitted to each computational resource in
the system. The second type of regular check is examination of the information published by
sites to their local information indexes (and then to the central information index).

Information gathering for the monitoring and accounting
All the information on service entry points and sites is provided by a special service named the Service for Registration of Resources and Grid Services (SRRGS). It is the main place where such data are available and where they originate. SRRGS provides data in reply to simple HTTPS queries.
The Information System (including the central and local information indexes) contains both slowly changing and dynamic information on the resources. A site publishes many key parameters, such as the available job slots, system architecture type, OS version, list of special software packages and VO access information; these change rather infrequently. In addition, there is a small piece of rapidly changing real-time information on the state of the job queues and the available job slots on the site.
There are two types of central Information System services. The first one is based on Globus MDS 4, and WSRF queries are used to perform a request [6]. The second one is based on GridNNN's own development called Infosys2 and has a RESTful interface. Both types of information indexes have the same information schema (based on extended GLUE 1.3) and should contain and provide identical data gathered from the resource centers.
In addition to obtaining information from the Information Indexes, there are special functional tests. For each type of service in the project there is a small, simple test that just checks whether the service is available and responds to queries specific to it (e.g. an Information Index should return a non-empty response to a WSRF or HTTP query).
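As an illustration, a minimal availability test of this kind could look like the sketch below (the endpoint URL is hypothetical, grid services typically also require X.509 certificate authentication, and the WSRF variant of the query is omitted):

import urllib.request

def check_service(url, timeout=30):
    """Return True if the service answers with a non-empty response."""
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return resp.status == 200 and len(resp.read()) > 0
    except OSError:
        return False

if __name__ == "__main__":
    # Hypothetical RESTful Information Index endpoint
    print("OK" if check_service("https://infosys.example.ngrid.ru/resources") else "FAILED")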
All the collected information coming from the Information Indexes, SRRGS and the simple service tests is then processed and stored in the monitoring database. The monitoring web interface is used to present both real-time and historical information. The data flows in the monitoring are shown in Fig. 1.

Fig. 1. Monitoring and accounting data flows

Job monitoring information and accounting data are taken from Pilot (the job execution service). Pilot servers publish a special accounting log containing all the events that occur with tasks and their jobs (from task submission and the sending of jobs to particular resources up to task completion or termination). The monitoring service queries for new events every minute and then parses the result, which arrives in JSON format [7]. The obtained event information (task, job, user, VO, start and finish time) is linked with the events already in the database, forming the states of the tasks and jobs in question. Accounting data (mainly consumed CPU time) are taken from the local Grid Resource Allocation Managers (GRAMs) in the resource centers and then linked with the job information in the database. In the end, fully aggregated accounting information resides in the database and is available via the web interface.
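The event-polling step can be sketched as follows; the accounting-log URL and the JSON field names are illustrative assumptions rather than the actual Pilot interface:

import json
import time
import urllib.request

POLL_INTERVAL = 60  # the monitoring service queries for new events every minute

def fetch_events(url, since):
    """Fetch events newer than 'since' and decode the JSON payload."""
    with urllib.request.urlopen(f"{url}?since={since}") as resp:
        return json.loads(resp.read().decode("utf-8"))

def poll(url):
    last_seen = 0
    while True:
        for ev in fetch_events(url, last_seen):
            # Link the event (task, job, user, VO, timestamps) with the records
            # already stored in the monitoring database (omitted here).
            print(ev["task_id"], ev["job_id"], ev["event"], ev["time"])
            last_seen = max(last_seen, ev["time"])
        time.sleep(POLL_INTERVAL)

if __name__ == "__main__":
    poll("https://pilot.example.ngrid.ru/accounting")  # hypothetical URL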
Data representation for monitoring and accounting
For the representation of the collected information, the monitoring service has a web interface [9]. The main parameters displayed on the site are the states of the computational job queues, resource characteristics, operating system versions and so on (see the previous section). Accounting information is available on the site as several report views with tables and diagrams broken down by resources and users.
Real-time job monitoring allows displaying on a 3D globe how and where jobs are started and finished [8]. A special script periodically (every 10 minutes) prepares information on job events from the accounting database and produces a KML file used for the visualization in Google Earth. It is probably the best and most spectacular way to demonstrate the project's operation to the wide community.
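A minimal example of producing such a KML file from job events is given below; the event fields and coordinates are placeholders, and the markup is generic KML rather than the exact structure used by the GridNNN visualization script:

import xml.etree.ElementTree as ET

def events_to_kml(events, path):
    """Write one KML Placemark per job event so Google Earth can display it."""
    kml = ET.Element("kml", xmlns="http://www.opengis.net/kml/2.2")
    doc = ET.SubElement(kml, "Document")
    for ev in events:
        pm = ET.SubElement(doc, "Placemark")
        ET.SubElement(pm, "name").text = f"{ev['job_id']} ({ev['state']})"
        point = ET.SubElement(pm, "Point")
        ET.SubElement(point, "coordinates").text = f"{ev['lon']},{ev['lat']},0"
    ET.ElementTree(kml).write(path, xml_declaration=True, encoding="utf-8")

# Placeholder event: a job finished at a site with the given coordinates
events_to_kml([{"job_id": "job-42", "state": "finished", "lon": 37.17, "lat": 56.74}],
              "gridnnn_jobs.kml")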
Summary
The GridNNN project has its own features and peculiarities that distinguish it from other grid projects. Therefore, to provide monitoring and accounting for it, some special developments are required alongside common means and approaches. The monitoring and accounting subsystem is ready and available on the web [9].
The project has now finished; the fully functional GridNNN grid infrastructure is in a production state. The software packages created are used in newly developed Russian national projects and are still evolving.
References
[1] V.A. Ilyin et al. Design and Development of Grid-infrastructure for National Nanotechnology
Network. Distributed Computing and Grid-Technologies in Science and Education: Proc. of the 4th
Intern. Conf. (Dubna, June 28 - July 3, 2010). Dubna: JINR, -11-2010-140, 2010, p. 452,
ISBN 978-5-9530-0269-1.
[2] GridNNN project site (in Russian), http://ngrid.ru
[3] I. Foster. IFIP International Conference on Network and Parallel Computing. Springer-Verlag LNCS
3779, 2005, pp. 2-13.
[4] VOMS project home page, http://voms.forge.cnaf.infn.it/
[5] L. Shamardin et al. GridNNN Job Execution Service: a RESTful Grid Service. Distributed
Computing and Grid-Technologies in Science and Education: Proc. of the 4th Intern. Conf. (Dubna,
June 28 - July 3, 2010). Dubna: JINR, -11-2010-140, 2010, p. 452, ISBN 978-5-9530-0269-1.
[6] M.M. Stepanova et al. Information System of GridNNN, Distributed Computing and Grid-
Technologies in Science and Education: Proc. of the 4th Intern. Conf. (Dubna, June 28 - July 3, 2010).
Dubna: JINR, -11-2010-140, 2010, p. 452, ISBN 978-5-9530-0269-1
[7] JavaScript Object Notation (JSON), http://json.org
[8] S. Mitsyn, S. Belov. Development of Real-time Visualization Service Based on Google Earth for
GridNNN Project. Distributed Computing and Grid-Technologies in Science and Education: Proc. of
the 4th Intern. Conf. (Dubna, June 28 - July 3, 2010). Dubna: JINR, -11-2010-140, 2010, p. 452,
ISBN 978-5-9530-0269-1.
[9] GridNNN monitoring and accounting site, http://mon.ngrid.ru
A Low-Power 9-bit Pipelined CMOS ADC for the front-end
electronics of the Silicon Tracking System

Yu. Bocharov, V. Butuzov, D. Osipov, A. Simakov, E. Atkin
Moscow Engineering Physics Institute, Russia

A pipelined analog-to-digital converter (ADC) for the front-end electronics of the Silicon Tracking
System of the upcoming CBM experiment at GSI/FAIR in Darmstadt is described. The implementation of such a
large-scale system requires the use of highly integrated mixed-signal application-specific integrated circuits (ASICs)
containing multiple ADCs. To meet the specific requirements for power consumption and chip area, a number
of methods have been proposed. The adjacent stages of the pipeline share operational amplifiers. In order to preserve the
accuracy of the amplifiers in the first stages, they use a partial sharing technique. A feature of the proposed
scheme is that it also shares the comparators. The capacitors of the first stages of the pipeline are scaled down
along the pipeline to further reduce the chip area and power consumption. A 9-bit 20-MSamples/s ADC,
intended for use in multi-channel mixed-signal chips, has been fabricated via Europractice in a 180-nm CMOS
process from UMC. The prototype ADC shows a spurious-free dynamic range of 58.5 dB at a sample rate of
20 MSamples/s, when a 400 kHz input signal with a swing of 1 dB below full scale is applied. The effective
number of bits is 8.0 under the same conditions. The ADC occupies an active area of 0.4 mm^2 and dissipates 8.6 mW from a
1.8 V supply.

Introduction
Power and area efficiency is a common problem in the design of analog-to-digital converters (ADCs). Today, power reduction has moved to the forefront of design challenges. There are applications where achieving very low ADC power consumption is critical; in particular, this is the case for the instrumentation of modern physics experiments. For instance, the front-end electronics of the Silicon Tracking System of the upcoming CBM experiment at GSI/FAIR in Darmstadt contains more than one million readout channels [1, 2]. The implementation of such large-scale systems requires the use of highly integrated mixed-signal application-specific integrated circuits (ASICs) containing multiple ADCs. In some other ADC applications, such as battery-powered measuring devices and communication units, very low power consumption at high and moderate sample rates is also mandatory.
In these applications the pipeline architecture has been widely adopted because it guarantees high speed with reasonable requirements for the resolution of the comparators, an acceptable power consumption and a small area. A number of methods have been proposed for reducing the power dissipation and silicon area of pipelined ADCs, including the technique of sharing amplifiers between adjacent stages of a pipeline [3]-[10]. This research focuses on a study of methods for further reducing the power dissipation of the pipelined ADC.
The design of a 9-bit 20-MSamples/s ADC utilizes the proposed methods. The
prototype ADC has been fabricated in a 180-nm CMOS process. To minimize power
consumption the ADC uses a state-of-the-art amplifier sharing technique as well as the
proposed comparator sharing technique. The paper considers the advantages and drawbacks
of this approach.

Circuit Overview
The ADC has a fully differential pipelined architecture. Fig. 1 shows its block-
diagram. The signal conversion path is shown single-ended for simplicity. The auxiliary
blocks and peripherals such as band-gap voltage reference, on-chip reference voltage buffers,
clock generator and digital interface are not shown.


Fig. 1. Proposed ADC architecture

The pipeline includes seven 1.5-bit stages and a 2-bit flash ADC. All 1.5-bit stages
except the first and the second are identical. They are grouped, as shown in Fig. 1. Some
elements of the two adjacent stages belong to only one stage, and some of their elements are
common. This makes it possible to reduce dramatically the power consumption as well as the
chip area. Sharing of operational amplifiers, which are the key elements of the switched-
capacitor multiplying digital-to-analog converter (MDAC), is a commonly used technique.
The feature of the proposed solution is that the amplifiers of the first stages are shared only
partially and the comparators are shared too.

Power Reduction Techniques
The possibility of sharing some elements between the adjacent stages of a pipelined ADC arises because at any time the states of the odd-numbered stages differ from those of the even-numbered stages. In one half of a clock cycle, the odd stages are in the sampling phase and the even stages are in the amplification and residue estimation mode. During the other half of the cycle, they swap states. Switched-capacitor circuits can afford to share the amplifier between adjacent stages because they require the amplifier only in the amplification phase and not in the sampling phase. During the sampling phase the operational amplifiers are either in the offset correction mode or not used at all. If offset correction is not required, the amplifiers can be switched alternately between adjacent stages so that they are connected to the stage that is in the residue estimation mode. As the adjacent stages are always in opposite states, the amplifiers and comparators can be allocated to a common shared resource. This allows the number of amplifiers to be halved.
In the proposed ADC, the amplifier shared by the input sample-and-hold amplifier (SHA) and the first MDAC, as well as the amplifiers of the second and third stages, are shared only partially. This compensates the amplifier offset voltage and suppresses the memory effect in the capacitors.
Fig. 2 illustrates the partially shared amplifier. It contains two identical unshared preamplifiers A1 and A2, connected through switches S1 and S2 to a shared block A3, which includes an output amplifier, a switched-capacitor common-mode feedback circuit and a bias circuit. In the sampling phase, A1 and A2 are in the offset compensation mode while A3 is connected to the preamplifier of the adjacent stage. The amplifiers in the fourth to seventh stages are completely shared.


Fig. 2. Schematic diagram of the partially shared fully differential operational amplifier

Comparators are also shared between adjacent stages. Fig. 3 shows a block diagram of the circuitry that contains a differential comparator shared by the stages numbered i and i+1. The comparator schematic diagram is shown in Fig. 4. It is a regenerative latch with a built-in preamplifier, clocked by the non-overlapping timing sequences shown in Fig. 5.
The stages can share the comparator because, after its output is latched in flip-flop registers at the beginning of the sampling phase, the comparator is not active during the rest of this phase and can therefore be used in the adjacent stage, which is in the residue estimation mode.

Fig. 3. Block-diagram of the shared comparator


Fig. 4. Schematic diagram of the comparator

Fig. 5. ADC clocking

The drawback of this solution is that the latch signal must have a frequency of twice the ADC sampling rate, which requires an extra low-power frequency-doubling block. Fig. 6 shows the schematic and timing diagram of the developed frequency doubler. If the input signal is a 50% duty-cycle clock and the capacitors C1 and C2 are reasonably well matched, the frequency multiplication factor stays close to two over a wide range of input frequencies.



Fig. 6. Schematic and timing diagram of the frequency doubler

The proposed technique made it possible to reduce the number of comparators from 17 to 11 and to reduce their total power consumption and occupied area, adjusted for the effect of the frequency doubler, by more than 30%.
The sampling capacitors of the first stages of the chain are scaled down along the pipeline. The SHA and the MDAC of the first stage use 0.8 pF capacitors, while the MDACs of the fourth and subsequent stages use 0.25 pF capacitors. This results in a lower area and relaxed requirements for the operational amplifiers in the fourth to seventh stages, which employ amplifiers with minimal operating currents and consequently minimal power consumption [11].

Implementation
The ADC was designed in the Cadence IC6.14 Virtuoso environment as an intellectual
property (IP) block of the future mixed-signal multi-channel system-on-chip (SoC). Its first
application is expected to be in a silicon tracker station of the CBM experiment.
A prototype ADC has been fabricated via Europractice at the UMC foundry in a 180-nm CMOS process targeted at mixed-mode and radio-frequency circuits. It is a single-poly, six-metal-layer (1P6M) process with metal-insulator-metal (MIM) capacitors. The ADC occupies an active silicon area of 0.4 mm^2.
At a sampling frequency of 20 MHz the ADC has a power consumption of up to 14 mW. The core dissipates no more than 8.6 mW; the rest is consumed by the reference voltage source and the reference voltage buffers, which are blocks common to multiple channels.
Fig. 7 shows the FFT spectrum of the ADC output when a sinusoidal input signal with a swing of 1 dB below full scale is applied. Coherent sampling was used to suppress spectral leakage. Fig. 8 and Fig. 9 show the ADC layout and a die photograph.
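For reference, SFDR and ENOB can be estimated from a coherently sampled single-tone output record as sketched below; this is a generic numpy illustration, not the measurement software used for the prototype:

import numpy as np

def sfdr_enob(samples):
    """Estimate SFDR (dB) and ENOB from a coherently sampled sine-wave record."""
    spec = np.abs(np.fft.rfft(samples))
    spec[0] = 0.0                                   # discard the DC bin
    sig = int(np.argmax(spec))                      # signal bin (coherent sampling)
    rest = np.delete(spec, sig)
    sfdr = 10 * np.log10(spec[sig] ** 2 / np.max(rest) ** 2)
    sinad = 10 * np.log10(spec[sig] ** 2 / np.sum(rest ** 2))
    enob = (sinad - 1.76) / 6.02                    # standard SINAD-to-ENOB relation
    return sfdr, enob

# Ideal 9-bit quantized full-scale sine, 401 cycles in 8192 samples (coherent)
n, cycles = 8192, 401
x = np.round(255 * np.sin(2 * np.pi * cycles * np.arange(n) / n)) / 255
print(sfdr_enob(x))   # ENOB close to 9 is expected for an ideal quantizer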




Fig. 7. FFT spectrum at input frequency of 400.39 kHz and sampling frequency of 20 MHz


Conclusion
This paper describes a low-power pipelined ADC for a multi-channel mixed-signal readout ASIC intended for use in the CBM Silicon Tracking System at the GSI/FAIR facility [12]. By sharing amplifiers and comparators along the pipeline, the proposed 9-bit ADC uses only 4 amplifiers and 11 comparators instead of the 8 amplifiers and 17 comparators of the conventional pipeline architecture.



Fig. 8. ADC layout



Fig. 9. Die photograph

The total power consumption of the comparators and their occupied area are reduced by more than 30%. A frequency doubler for clocking the shared comparators was proposed. The partial sharing of amplifiers in the first stages of the pipeline made it possible to maintain their accuracy. The prototype ADC has been fabricated in a 180-nm CMOS process. The device reaches a spurious-free dynamic range of 58.5 dB at a sample rate of 20 MHz when a 400 kHz input signal with a swing of 1 dB below full scale is applied. The effective number of bits (ENOB) is 8.0 under the same conditions. The core power consumption is 8.6 mW, and the ADC occupies an active area of 0.4 mm^2. Table 1 summarizes the ADC performance.

Table 1. ADC Performance Summary

Parameter                                                   Unit          Value
Resolution                                                  Bit           9
Sampling Rate                                               MSamples/s    20
Effective Number of Bits (ENOB) at 400 kHz input            Bit           8.0
Spurious Free Dynamic Range (SFDR) at 400 kHz input         dB            58.5
Full Scale (differential)                                   V p-p         2
Supply Voltage                                              V             1.8
Core power consumption                                      mW            8.6
Total power including on-chip reference voltage buffers     mW            14
On-chip reference source voltage                            V             1.23
Active Area                                                 mm^2          0.4
Technology: Mixed Mode 180-nm CMOS process with MIM capacitors


References
[1] J.M. Heuser, M. Deveaux, C. Muntz, J. Stroth. Requirements for the Silicon Tracker
System CBM at FAIR. Nuclear Instruments and Methods in Physics Research,
Section A, V. 568, 2006, pp. 258-262.
[2] Yu.I. Bocharov, A.S. Gumenyuk, V.A. Lapshinsky, D.L. Osipov, A.B. Simakov.
The architecture of a specialized LSI for multi-channel sensor signal pickup. Sensors
and Systems, V. 113, October 2008, pp. 47-50.
[3] K. Nagaraj, H.S. Fetterman, J. Anidjar, S.H. Lewis, and R.G. Renninger. A 250-
mW, 8-b, 52-Msamples/s Parallel-Pipelined A/D Converter with Reduced Number
of Amplifiers. IEEE J. Solid-State Circuits, V. 32, March 1997, pp. 312-320.
[4] Y.-D. Jeon, S.-C. Lee, K.-D. Kim, J.-K. Kwon, J. Kim, and D. Park. A 5-mW 0.26-
mm^2 10-bit 20-MS/s Pipelined CMOS ADC with Multi-Stage Amplifier Sharing
Technique. Proceedings 32nd European Solid-State Circuits Conference, September
2006, pp. 544-547.
[5] H.-H. Ou, S.-J. Chang, and B.-D. Liu. Low-Power Circuit Techniques for Low-
Voltage Pipelined ADCs Based on Switched-Opamp Architecture. IEICE Trans.
Fundamentals, V. E91-A, February 2008, pp. 461-468.
[6] B.-G. Lee and R.M. Tsang. A 10-bit 50 MS/s Pipelined ADC With Capacitor-
Sharing and Variable-gm Opamp. IEEE J. Solid-State Circuits, V. 44, March 2009,
pp. 883-890.
[7] C.-H. Kuo, T.-H. Kuo, and K.-L. Wen. Bias-and-Input Interchanging Technique for
Cyclic/Pipelined ADCs With Opamp Sharing. IEEE Trans. on Circuits and
Systems II, V. 57, March 2010, pp. 168-172.
[8] C. Lijie, Z. Yumei, and W. Baoyue. A 10-bit 50-MS/s subsampling pipelined ADC
based on SMDAC and opamp sharing. IOP J. of Semiconductors, V. 31, November
2010, pp. 115006-1 - 115006-7.
[9] Y. Rui, L. Youchun, Z. Wei, and T. Zhangwen. A 10-bit 80-MS/s opamp-sharing
pipelined ADC with a switch-embedded dual-input MDAC. IOP J. of
Semiconductors, V. 32, February 2011, pp. 025006-1 - 025006-6.
[10] G. Shu, Y. Guo, J. Ren, M. Fan, and F. Ye. A power-efficient 10-bit 40-MS/s sub-
sampling pipelined CMOS analog-to-digital converter. Analog Integrated Circuits
and Signal Processing, V. 67, April 2011, pp. 95-102.
[11] A.S. Gumenyuk and Yu.I. Bocharov. Power-Reduction Techniques for CMOS
Pipelined ADCs. Russian Microelectronics, V. 37, No. 4, April 2008, pp. 253-263.
[12] E. Atkin, Y. Bocharov, V. Butuzov, A. Klyuev, D. Osipov, D. Semenov,
A. Simakov. Development of the derandomizing architecture for CBM-STS. CBM
Progress Report 2009, GSI Darmstadt, ISBN 978-3-9811298-7-8, 2010, p. 45.
The selection of PMT for TUS project

V. Borejko, A. Chukanov, V. Grebenyuk, S. Porokhvoy, A. Shalyugin,
L. Tkachev
Joint Institute for Nuclear Research, Dubna, Russia
The goal of the TUS space experiment is the study of Extremely High Energy Cosmic Rays (EHECR). For this purpose, the EAS fluorescence radiation produced in EHECR interactions with the atmosphere is registered. A registration system based on the Hamamatsu PMT R1463 [1] has been developed for determining the energy of these cosmic rays. The matrix of 256 PMTs is divided into 16 groups of 16 tubes each. It is extremely important that the characteristics of the PMTs within a group be as identical as possible.
For the measurements we modified the stand [2, 3] used for selecting the R7877 PMTs for the tile calorimeter of the ATLAS detector. The modernization concerned the place where the PMT is mounted, the development of a new divider, a change in the program for setting the high voltage, and the removal of the photocathode pulse gating, since the processes recorded by the photomultipliers are quite slow.
The photomultipliers were selected by their gain, their photoelectron collection efficiency and their dark current.
Let us recall the construction of the stand.
It consists of three basic assemblies: optical, executive and electronic. The optical unit serves for producing the luminous flux and for its transport to the executive part. The executive part is a mobile square matrix (5x5 cells). A photodiode is placed at the center of the matrix for calibration of the optical channels. The PMTs to be tested are placed in the remaining cells; thus, up to 24 PMTs can be tested at the same time. The third unit implements the control of the ADC and DAC and the transmission of data to a personal computer.
Fig. 1 presents the outline of the optical system, Fig. 2 the measuring circuit, and Fig. 3 a scheme of the special divider.


Fig. 1. The outline of the optical system
As a source of constant light we used a light-emitting diode of the BP280CWB1K-3.6VF-050T type. It has a characteristic energy peak near 480 nm. For the attenuation of the light, filters with attenuation factors of 10, 100 and 900 were used, located on a rotating five-position wheel. The two remaining positions on this wheel are occupied by an open hole and by a black shutter. The open hole was used for determining the operating point, i.e. the light flux corresponding to a cathode current of 10 nA. The black shutter simulates full darkness during the measurement of the pedestals.


Fig. 2. The measuring circuit

As photoreceivers for determining the attenuation factors of filters and channel calibration coefficients we used Hamamatsu 1337-11 photodiodes in the optical unit and S3204-04 diodes in the executive unit. Their quantum efficiency is about 66%.











Fig. 3. A scheme of the special divider

For finding the gain we used the ratio between the anode and cathode currents for a source providing a constant luminosity corresponding to a 10 nA cathode current of the PMT. Gains g were obtained for 7 values of the high voltage applied to the divider (U = 500 - 800 V in steps of 50 V). For these data we obtained a linear approximation of the form g = a*U + b by the least-squares method. For each fit the correlation coefficient was calculated, which shows how well the parameters a and b describe the experimental data. Fig. 4 presents the distribution of correlation coefficients.
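The fitting step can be illustrated with numpy as follows; the gain values in this example are placeholders, not measured data:

import numpy as np

U = np.arange(500.0, 801.0, 50.0)   # divider voltage, V (7 points)
g = np.array([0.8e5, 1.4e5, 2.3e5, 3.5e5, 5.0e5, 6.8e5, 9.0e5])   # example gains

a, b = np.polyfit(U, g, 1)          # least-squares line g = a*U + b
r = np.corrcoef(U, g)[0, 1]         # correlation coefficient of the data

print(f"a = {a:.3g} 1/V, b = {b:.3g}, r = {r:.4f}")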
To measure the photoelectron collection efficiency, a method was used in which the photocathode current is measured for different voltages between the photocathode and the first dynode. The photoelectron collection efficiency was determined from the dependence of the cathode current on this voltage, with a source providing a constant luminosity corresponding to a 10 nA cathode current of the PMT. The voltage was varied from 0 to 100 V and was set to 29 equally spaced values. For each PMT we found, by the least-squares method, the coordinates (U_plato and the corresponding current I_plato) of the onset point of the plateau of the photoelectron collection efficiency.


Fig. 4. The distribution of correlation coefficients

Fig. 5 and Fig. 6 show the results of histogramming the values of U_plato and I_plato, respectively.

Fig. 5. The results of histogramming the values of U_plato



Fig. 6. The results of histogramming the values of I_plato

Dark current measurements were performed after the PMT had been in darkness for
30 minutes with a voltage of 600 V.
Then, measurements were made of the anode current for the voltage of the divider
within the range of 500-900 volts in steps of 100 V. The dark current values for the anode
voltages 800 V and 900 V are shown in Fig. 7.


Fig. 7. The dark current values for the anode voltages 800 V and 900 V

The characteristics of 286 Hamamatsu R1463 PMTs were measured in a constant light flux for three parameters: the gain of the dynode system, the photoelectron collection efficiency and the dark current. For different reasons, 6 PMTs were rejected: PMT No. VD9763 for its gain; Nos. VE0986, VE1034, VE0954 and VE0965 for their dark current; and No. VE1075 for its photoelectron collection efficiency.
References
[1] Hamamatsu R1463 datasheet, http://www.hamamatsu.com
[2] Technical characteristics of the prototype of the TILECAL photomultipliers test-bench.
TILECAL-98-148.
[3] Characterization of the Hamamatsu 10-stages R5900 photomultipliers at Clermont for
the TILE calorimeter. TILECAL-NO-108, 28 April 1997, PCCF-RI-97-5, 1997, p. 96.

The possibility to overcome the MAPD noise in scintillator
detectors

V. Boreiko, V. Grebenyk, A. Kalinin, A. Timoshenko, L. Tkatchev
Joint Institute for Nuclear Research, Dubna, Russia

The Multiplied Avalanche Photo Detector (MAPD), or Silicon Photomultiplier (SiPM), was invented about 10 years ago and is finding increasing application in experimental techniques: in cosmic-ray investigations, high-energy physics detectors, etc., up to the medical PET technique. This is due to its many interesting and useful characteristics. First of all, MAPDs have high amplification and the capability to resolve single-photoelectron spectra, as well as many other properties that are very attractive for their use: low operating voltage, low power consumption, compactness, etc. [1]. However, the MAPD has an essential shortcoming: a high dark-noise intensity or, equivalently, a high dark-pulse rate [2]. We intend to use MAPDs to measure the optical signal created by a charged particle in a scintillator strip with WLS fibers, in the framework of the R&D for the imaging calorimeter of the planned space project HERO [3] for cosmic-ray studies. Overcoming the high MAPD dark-pulse rate is therefore rather important.
The dark pulses are of different types. The first type of noise is primary thermal noise; it is a property of all avalanche semiconductor devices. The MAPD consists of about a thousand small pixel avalanche diodes (Geiger counters) operating at a voltage above the critical one. In this case an avalanche may, for various reasons, originate in any diode without an external signal. This means the MAPD is not only a receiving device but also a generator of noise pulses. The amplitudes of these dark pulses generally correspond to an external signal of 1-3 photoelectrons. This is larger than the average value of typical electronic noise and may obscure useful signals of small amplitude.
The second type of noise is the so-called cross-talk pulses. It is known that any avalanche in a semiconductor is accompanied by gamma radiation. Gammas emitted from a given pixel may stimulate one or more avalanches in the neighboring pixels. The cross-talk noise amplitude may range from 1 to 10 photoelectrons, but this noise is not as intense as the thermal noise.

Fig. 1. The dark-noise spectra of a 1 mm x 1 mm MAPD: top - the total noise; bottom - after cross-talk suppression




The structure of the noise is shown in Fig. 1 (taken from [2]) for a 1 mm x 1 mm MAPD of the same type as we used. One can see the 'tail' of multi-photoelectron pulses in the top picture. The first photoelectron peak is produced by single photons passing through the photodiode, the next one by the simultaneous passage of two photons through different pixels, etc. The bottom picture shows the residual noise spectrum after suppression of the cross-talk noise by the special technology developed in [2]. Up to now we have had no opportunity to use this technology, so we want instead to overcome the cross-talk noise by other methods.
First of all, there is a trivial method: rejecting the noise part of the spectrum with an electronic threshold. This means discarding signals below about 10 photoelectrons.
Another method is possible in a system with two fibers and two MAPD outputs from one scintillator strip. If we require the coincidence of the two fiber signals, we can select events without noise. But in this case we also get coincidences of random noise events. The known formula for counting random coincidences is N_rand = N1*N2*(T1+T2), where N1, N2 are the frequencies of noise events in channels 1 and 2, respectively, and T1, T2 are the resolving times of the first and second channels [4, 5]. Usually the MAPD dark-pulse rate N is about 1 MHz. This is very high, but the main part of the noise events consists of one- and two-photoelectron signals. It is seen from Fig. 1 that the number of one-photoelectron noise events is about ten times the number of two-photoelectron ones. So if only the one-photoelectron noise events are rejected, the random coincidence rate is reduced by about two orders of magnitude.
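A worked example of this estimate, with illustrative values of about 1 MHz dark-pulse rate per channel and 10 ns resolving time per channel (not measured values), is:

N1 = N2 = 1.0e6        # dark-pulse rate in each channel, Hz (illustrative)
T1 = T2 = 10.0e-9      # resolving time of each channel, s (illustrative)

N_rand = N1 * N2 * (T1 + T2)
print(f"Random coincidence rate: {N_rand:.0f} Hz")   # about 20 kHz

# Rejecting the one-photoelectron dark pulses (roughly 90% of the rate) lowers
# each channel rate about tenfold and the random rate by two orders of magnitude.
N_rand_cut = (0.1 * N1) * (0.1 * N2) * (T1 + T2)
print(f"After 1-p.e. rejection: {N_rand_cut:.0f} Hz")   # about 200 Hz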
Measurements with MAPDs are usually carried out with a charge-to-code converter or a gated integrator. In our case the GATE input pulse for the integrator is the coincidence output pulse. Most of these correspond to true coincidences of useful pulses. What do the occasional random GATE pulses mean for the integrator? They result in pedestal events without pulses or in noise events.

Experimental part
For the experimental verification of the proposed idea we used the electronic set-up shown in Fig. 2, consisting of two amplifier channels with discriminators. The coincidence circuit at the discriminator outputs selects only events with signals in both channels. In addition, the strip has two fibers inside it, a generator drives the LED that illuminates the strip, and an analyzer is used to observe the spectrum.





Fig. 2. The test circuit of the two-fiber strip system with coincidence. D - discriminator, C - coincidence circuit, Analyzer - the analyzer for measuring the spectra from the strip, illuminated by the LED.

The set-up allows two spectra to be compared: the simple spectrum from a single fiber (Fig. 3a), gated by the generator pulses, and the spectrum of pulses from the same single fiber but gated by the coincidence circuit (Fig. 3b). The first spectrum has a small pedestal and signal peaks of 1 to 4 photoelectrons. In the second, coincidence spectrum only the peaks from 2 to 4 photoelectrons remain visible: the first noise peak has disappeared and the second one is partly reduced, which may be interpreted as the cancellation of the dark pulses, including the cross-talk pulses. With the coincidence method the number of events above the one-photoelectron level decreases by about 10%, which is approximately the number of cross-talk events.

Fig. 3. The single-photoelectron spectra from one fiber of the two-fiber strip, illuminated by the LED and gated by generator pulses (a) or by coincidence pulses (b)

Additional advantages of the proposed two-fiber circuit with coincidence should be noted. It is known that the intrinsic MAPD pulses are very short (about 2-3 ns), while for useful signals the pulse length is determined by the time of signal formation in the scintillator strip with the WLS fiber, which is about 10 ns. This allows the noise pulses to be rejected with the help of a pulse-duration discriminator.
Thus, the proposed two-fiber circuit can be considered a self-triggering system with a 'quasi-noiseless' property. It may be used instead of the usual telescope system with an external trigger coincidence. If such a circuit is implemented as a separate module and several such modules are placed on the upper and lower sides of a useful volume, we obtain a new trigger system able to determine the direction of the particles passing through this volume.
We think the realization of the proposed idea may be simplified by placing the MAPDs directly inside the scintillator strip and fixing them there near the fibers.

Conclusions
1. A coincidence design structure is proposed for reducing the single-photoelectron noise of MAPDs, based on two fibers + MAPDs in one scintillator strip. It is compact and convenient for modular autonomous realizations in dense 3D imaging calorimeters.
2. The signals from the fibers are always longer than the noise pulses at the input of the photodiodes, and this can be used to increase the noise suppression if the module is supplemented with a pulse-duration discriminator.
3. The autonomous module based on two MAPDs in a scintillator strip may be considered an instrument without inherent noise; together with additional such modules placed at a certain distance from the first, it can form a small space muon spectrometer.

References
[1] B. Dolgoshein et al. Nucl. Instrum. and Meth. A563, 2006, pp. 368-376.
[2] P. Buzhan et al. Nucl. Instrum. and Meth. .A610, 2009, pp. 131-134.
[3] E.V. Atkin et al. New High-Energy cosmic Observatory (HERO) project for study of
the high-energy primary cosmic-ray radiation. Nucl. Phys. Proc. Suppl. 196, 2009,
pp. 450-453.
[4] E. Rechin et al. The coincidence method. Atomizdat, 1979.
[5] S. Basiladze. The fast nuclear electronics. Energoatomizdat, 1982.

Prague Tier 2 monitoring progress

J. Chudoba 1,2, M. Elias 1, L. Fiala 1,2, J. Horky 1, T. Kouba 1,2, J. Kundrat 1,2,
M. Lokajicek 1,2, J. Svec 1,2

1 Institute of Physics, Academy of Sciences of the Czech Republic, v.v.i.
2 CESNET, z.s.p.o.

The last four years have brought many changes to the computing infrastructure of the Prague Tier-2 site.
The network has been upgraded to a 10 Gb backbone, the number of computing cores has grown to over three
thousand, and the storage capacity is now more than 1 PB. This has brought new challenges in monitoring the hardware,
software and network availability and performance. In this paper we show the measures taken to monitor
our computing infrastructure. We also present the workflow for resolving issues reported by users or by the
automatic monitoring systems.
Introduction
There are two sites in Prague involved in the EGI and WLCG projects. The first one, praguelcg2, is a middle-sized Tier-2 site with about 300 worker nodes and 3000 CPU cores. The second site, prague_cesnet_lcg2, is smaller, with 10 worker nodes and 80 CPU cores. The latter also plays a key role in the Czech NGI and runs critical services such as WMS, a top-level BDII, the Operations portal, MyProxy, LFC and the VOMS management server for the Auger, VOCE and Meta virtual organizations.
The praguelcg2 site also actively participates in the D0 experiment, for which it delivered more than 9900 HEPSPEC CPU years during 2010.
Monitoring the health of the hardware and the status and performance of the services is one of the major tasks for our administrators. The history of the monitoring measurements presented in graphs is also important, so we run several monitoring tools. We present their important features and setup in this paper.
CFengine
CFengine [1] is a configuration management tool rather than a typical monitoring tool. It is used for the automatic configuration of nodes and services at praguelcg2 [2]. However, it plays an important role in monitoring as well, mainly for two reasons.
First, we try to configure every aspect of our services via CFengine. For example, CFengine ensures that software is installed and that services are started and scheduled to run at system start. This means that we do not have to monitor all the software packages and running services; we only check that the CFengine daemon runs regularly and returns success in fulfilling all the requested steps (installed packages, running services). We have implemented a Nagios sensor that checks whether the CFengine agent has been run recently, with success and without any pending (unsatisfied) configuration steps; a sketch of such a sensor is shown below.
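A minimal sketch of such a sensor follows; the timestamp file path and the thresholds are assumptions, not the actual site configuration:

#!/usr/bin/env python3
# Nagios-style sensor: checks that a (hypothetical) timestamp file touched by
# the CFengine agent on successful completion is recent enough.
import os
import sys
import time

STAMP = "/var/cfengine/last_successful_run"   # assumed path, site-specific
WARN, CRIT = 2 * 3600, 6 * 3600               # thresholds in seconds

def main():
    try:
        age = time.time() - os.stat(STAMP).st_mtime
    except OSError:
        print("CFAGENT CRITICAL - no successful run recorded")
        return 2
    if age > CRIT:
        print(f"CFAGENT CRITICAL - last success {age / 3600:.1f} h ago")
        return 2
    if age > WARN:
        print(f"CFAGENT WARNING - last success {age / 3600:.1f} h ago")
        return 1
    print(f"CFAGENT OK - last success {age / 60:.0f} min ago")
    return 0

if __name__ == "__main__":
    sys.exit(main())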
The second important feature of CFengine for monitoring is the ability to define groups of nodes (called classes). We try to stick to these groups in our main monitoring tool, Nagios.
Nagios
Nagios [3] has been the main monitoring tool at praguelcg2 since the beginning of its operations [4]. We use many basic sensors to monitor load, memory and swap-space consumption, open ports, working ssh access, bindable LDAP, etc.
Sensors
We also have many in-house developed sensors that check pakiti [5] status, cfagent completion, finished yum transactions, the presence of ghost jobs on a worker node, or certificate lifetime.
All these sensors work in so-called "active mode": Nagios periodically executes a script on the main Nagios server or remotely (via NRPE [6]) on the monitored node. We also use "passive sensors". These are executed in another way (usually by a cron daemon) and their result is pushed into Nagios. This push can be done locally (into Nagios' command pipe) or remotely via NSCA [7]. The most important passive check at our site is the system log (syslog) status check.
Our nodes are configured not only to log system messages into local log files but also to send these messages over the network to a central syslog machine. This machine runs the syslog-ng software, which is able to filter messages and react appropriately. Our syslog-ng looks for disk errors, memory errors, read-only re-mounts, kernel panics, etc., and sends information about the problem to Nagios, where it is associated with the appropriate node.
Check_mk
Check_mk [8] is a powerful set of enhancements for Nagios. We use two parts of this suite: livestatus and multiview.
Livestatus is a module that is dynamically loaded into the main Nagios process as a library and collects host and service status information. It can dump the information requested via a simple query language. The status information is kept entirely in memory, so livestatus can respond much faster than database-based solutions like ndo2db.
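A short example of such a query over the Livestatus UNIX socket is shown below; the socket path and the selected columns are only illustrative:

import socket

QUERY = b"GET hosts\nColumns: name state\nOutputFormat: json\n\n"
SOCKET_PATH = "/var/lib/nagios/rw/live"   # assumed socket location

with socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) as s:
    s.connect(SOCKET_PATH)
    s.sendall(QUERY)
    s.shutdown(socket.SHUT_WR)            # tell livestatus the query is complete
    data = b"".join(iter(lambda: s.recv(4096), b""))

print(data.decode("utf-8"))               # JSON list of [name, state] pairs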
Livestatus is closely related to multiview, a web interface to the Nagios status information which is much more user-friendly than the original web interface. The main advantages are:
Support for Nagios and Icinga,
Custom views and bookmarks,
Integration with pnp4nagios,
Better searching than the default Nagios web interface,
Easy mass definition of downtimes,
Active development.

The main disadvantage is that check_mk's livestatus is not prepackaged in most Linux distributions.
Munin
Munin [9] is a simple system for generating RRD-based graphs for a preconfigured set of hosts and metrics, with no support for automatic discovery at the host level but with automatic discovery of the monitored services and their changes.
This means that the administrator has to define the list of hosts on the munin server, while the monitored sensors are configured on the client side by copying them to a given directory. The configuration of sensors is automatic: for example, if an administrator wants to see graphs of a host's disk usage, he simply copies (or symbolically links) the df plugin to /etc/munin/plugins, and the server automatically creates a graph with all available disks in the system. If a new disk appears in the system, munin adds it to the graph.
Munin also generates web pages with the list of all nodes and their respective sensors and graphs.
The main advantages of munin are its simple setup, its easy-to-write plugin architecture (see the sketch below) and the many sensors available from the user community.
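A minimal plugin sketch illustrating this protocol, reporting the 1-minute load average as an example metric, is:

#!/usr/bin/env python3
# Munin plugin protocol: "config" prints the graph description, a plain
# invocation prints the current value(s) as "field.value N".
import os
import sys

def main():
    if len(sys.argv) > 1 and sys.argv[1] == "config":
        print("graph_title Load average (1 min)")
        print("graph_category system")
        print("load.label load")
    else:
        print(f"load.value {os.getloadavg()[0]:.2f}")

if __name__ == "__main__":
    main()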
At praguelcg2 we use munin mainly to visualize trends and historical data when we troubleshoot an operational incident (e.g. mysql performance, disk and memory consumption, errors on network interfaces).

Various RRD graphs
Munin is a great tool for creating graphs based on the RRD toolset, but due to some hard-coded restrictions (image size, update interval) there are cases when one has to generate graphs with a custom script. We use in-house scripts to visualize trends in server-room temperature, UPS phase load, water cooling status, air conditioning status, Torque queue utilization, MAUI fairshare utilization, etc.
Network monitoring
Network monitoring has undergone the most visible improvement during the last two years.
External connectivity monitoring
The external connectivity is managed by CESNET (the Czech NREN), which currently provides us with the following lines:
10 Gb connection to the Internet,
Dedicated (wavelength) line with 1 Gb bandwidth:
1 Gb connection to FZK (ATLAS DE cloud),
Dedicated line with 10 Gb bandwidth divided into:
1 Gb connection to BNL,
1 Gb to FNAL,
1 Gb to ASGC,
Several 1 Gb connections to Czech HEP institutes.


Fig. 1. DPM head - memory usage
The graph shows a memory leak in DPM that was fixed in January
All these lines are monitored by the G3 system [10], which can be seen in Fig. 2 or running live [11].

Weathermap and MRTG
MRTG has been used for a long time at praguelcg2 to visualize the throughput of network switches and the transfer load on storage nodes. However, the basic setup was not able to give us an overview of the whole network in one picture when looking for bottlenecks. This is where weathermap helped a lot.
Weathermap is a set of Perl scripts capable of drawing a nice picture of a network and connecting it with the RRD graphs generated by MRTG. The result is a web page where the administrator can see the utilization of all lines in the network and can click on a line to get historical and detailed information.
The network topology in weathermap is normally configured statically. This is hard to keep up to date, so we decided to improve the tool with automatic topology creation based on the switches' port names.
One can see either the whole network (Fig. 4) or just one switch (with an associated graph linked from MRTG, Fig. 3).


















Fig. 2. G3 system showing praguelcg2 external connectivity status


Fig. 3. One switch in weathermap

Fig. 4. praguelcg2 network in weathermap


Netflow
NetFlow [12] is a protocol originally developed by Cisco to offer detailed information about data flows in the network. We have installed a dedicated server to collect NetFlow information. The data collector implementation we use is flow-tools, and the typical amount of data per month is 3-7 GB.
We use NetFlow for overall graphs of data traffic: we can see how much data was transferred between particular worker nodes and particular file servers (mainly xrootd and DPM file servers). We can also see how much the dedicated network lines are utilized.
Our site participates in the regular security challenges organized by the EGI security group [13]. A challenge usually starts with a report telling us that malicious activity was performed at our site and connected with a given IP address. NetFlow has always been a very important source of information for the subsequent investigation needed for proper handling of the security challenge. We were able to search all network activity and identify the machine and the processes belonging to the reported malicious activity.
Site_stat
This tool was created at our site. It enables us to monitor and account the data transferred from and to our nodes. It creates statistics from the NetFlow data and produces graphs and transfer tables for configurable groups (e.g. all data downloaded from the FZK Tier-1 during the last month, or all data transferred to our worker nodes from our xrootd servers during the last day).
Site_stat automatically resolves IP addresses and groups them into networks via whois queries, so we can tell which institutions communicate with our resources the most.
The following illustration (Fig. 5) shows the output of site_stat for incoming traffic from outside networks during August 2011:









Fig. 5. Site_stat report of incoming traffic by source network, August 2011

1. 192.108.45.0 - 192.108.47.255    16.85 TB    29.70%    Forschungszentrum Karlsruhe (FZK)
2. 129.187/16                        6.63 TB    11.69%    Leibniz-Rechenzentrum (LRZ)
3. 134.158/16                        6.25 TB    11.02%    Institut National de Physique Nucleaire et de Physique des Particules
...
Total                               56.74 TB
References
[1] CFengine home page, http://www.cfengine.com/
[2] Tom Kouba. A centralized administration of the Grid infrastructure using
Cfengine. NEC2009 Proceedings.
[3] Nagios home page, http://www.nagios.org/
[4] Tom Kouba. Experience with monitoring of Prague T2 Site. NEC2007 Proceedings.
[5] Pakiti: A Patching Status Monitoring Tool, http://pakiti.sourceforge.net/
[6] Nagios Remote Plugin Executor, http://nagios.sourceforge.net/docs/3_0/addons.html
[7] Nagios Service Check Acceptor, http://nagios.sourceforge.net/docs/3_0/addons.html
[8] check_mk suite, http://mathias-kettner.de/check_mk.html
[9] Munin homepage, http://munin-monitoring.org/
[10] G3 System Distributed Measurement Architecture,
http://www.cesnet.cz/doc/techzpravy/2008/g3-architecture/
[11] CESNET2: experimental facility for High Energy Physics,
http://www.ces.net/netreport/hep-cesnet-experimental-facility/
[12] NetFlow Version 9, http://www.ietf.org/rfc/rfc3954.txt
[13] EGI SSC, https://wiki.egi.eu/wiki/EGI_CSIRT:Security_challenges
Detector challenges at the CLIC multi-TeV e+e- collider

D. Dannheim 1

CERN, European Organisation for Nuclear Research, Geneva, Switzerland

The beam parameters of the proposed CLIC concept for a linear electron-positron collider with a
centre-of-mass energy of up to 3 TeV pose challenging demands for the design of the detector systems. This
paper introduces the CLIC machine and the requirements for the detectors and gives an overview of the ongoing
detector studies.
1. Introduction
The LHC experiments have the potential to discover new physics at the TeV scale. A
lepton collider operating at these energies will then be required to complement the results of
the LHC experiments and to measure the properties of new particles with high precision. The
proposed Compact LInear Collider (CLIC) concept of a linear electron-positron collider with
a center-of-mass energy of up to 3 TeV will be a suitable machine for such measurements [1].
The detector requirements for precision physics in combination with the challenging
experimental conditions at CLIC have inspired a broad detector R&D program.

2. The CLIC accelerator
The CLIC project studies the feasibility of a linear electron-positron collider optimized for
a center-of-mass energy of 3 TeV with an instantaneous luminosity of a few 10
34
cm
-2
s
-1
, using a
novel technique called two-beam acceleration [2]. Fig. 1 shows the two-beam acceleration
principle. A drive beam of rather low energy but high current is decelerated, and its energy is
transferred to the low-current main beam, which gets accelerated with gradients of 100 MV/m.
The two-beam acceleration scheme thus removes the need for individual RF power sources. It is
expected that the machine will be built in several stages with centre-of-mass energies ranging
from 500 GeV up to the maximum of 3 TeV, corresponding to an overall length of the accelerator
complex between approximately 14 and 48 km.


Fig. 1. CLIC two-beam acceleration scheme

1
Presented at NEC2011, Varna, Bulgaria, on behalf of the CLIC Physics and Detectors Study
[http://cern.ch/LCD]
In order to reach its design luminosity of 6 x 10^34 cm^-2 s^-1 at a maximum centre-of-mass energy of 3 TeV, CLIC will operate with very small bunch sizes (σ_x x σ_y x σ_z of about 40 nm x 1 nm x 44 μm), leading to strong electromagnetic radiation (Beamstrahlung) from the electron and positron bunches in the field of the opposite beam. The resulting luminosity spectrum has a peak at 3 TeV with a tail towards lower center-of-mass energies. About one third of the total luminosity is contained in the most energetic 1% fraction of the spectrum.
The beam parameters and machine requirements are very challenging. 12 GHz accelerating structures drive the two main beams, and collisions occur every 0.5 ns for a train duration of 156 ns. The train repetition rate is 50 Hz. The components have to be stable at the nm level, while strong final focusing inside the experimental areas is required.
Several test facilities (the most recent one CTF3) have been built over the past years,
which have succeeded in demonstrating the feasibility of the two-beam acceleration principle.

3. Detector Requirements and Challenges
The performance requirements for the detector systems at CLIC are driven by the physics goals of performing precision measurements of newly discovered particles up to the TeV scale, for example the Higgs boson or SUSY particles. The CLIC experiments shall probe the parameter space of theories beyond the Standard Model over a large range, thus allowing discrimination between competing models. The jet-energy resolution should be adequate to distinguish between di-jet pairs originating from Z or W bosons as well as light Higgs bosons. This can be achieved with a resolution σ_E/E of 3.5% - 5% for jet energies from 1 TeV down to 50 GeV. The momentum-resolution requirement for the tracking systems is driven by the precise measurement of leptonic final states, e.g. the Higgs mass measurement through Z recoil, where Z0 → μ+μ-, or the determination of slepton masses in SUSY models. This leads to a required resolution σ(p_T)/p_T^2 of 2 x 10^-5 GeV^-1. High-resolution pixel vertex detectors are required for efficient tagging of heavy states through displaced vertices, with an accuracy of approximately 5 μm for determining the transverse impact parameters of high-momentum tracks and a multiple-scattering term of approximately 15 μm. The latter can only be achieved with a very low material budget of less than 0.2% of a radiation length per detection layer, corresponding to a thickness of less than 200 μm of silicon, shared by the active material, the readout, the support and the cooling infrastructure.
The time structure of the collisions, with bunch crossings spaced by only 0.5 ns, in
combination with the expected high rates of beam-induced backgrounds, poses severe
challenges for the design of the detectors and their readout systems. Of the order of one
interesting physics event per 156 ns bunch train is expected, overlaid by an abundance of
particles originating from two-photon interactions. These background particles will lead to
large occupancies (number of hits per readout cell) in the inner and forward detector regions
and will require time stamping on the nano-second level in most detectors, as well as
sophisticated pattern-recognition algorithms to disentangle physics from background events.
The gap of 20 ms between consecutive bunch trains will be used for trigger-less readout of the
entire train. Furthermore, most readout subsystems will be operated in a power-pulsing mode
with the most power-consuming components switched off during the empty gaps, thus taking
advantage of the low duty cycle of the machine to reduce the required cooling power.
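A back-of-the-envelope check of the duty cycle that makes power pulsing attractive, using the numbers quoted above, is:

train_length = 156e-9      # bunch-train duration, s
repetition_rate = 50.0     # train repetition rate, Hz

duty_cycle = train_length * repetition_rate
gap = 1.0 / repetition_rate - train_length

print(f"Duty cycle: {duty_cycle:.1e}")            # about 7.8e-06
print(f"Gap between trains: {gap * 1e3:.0f} ms")  # about 20 ms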

4. Detector Concepts
The detector concepts ILD [3] and SiD [4] developed for the International Linear
Collider (ILC) [5] with a center-of-mass energy of 500 GeV form the starting point for the
two general-purpose detector concepts CLIC_ILD and CLIC_SiD. Both detectors will be
operated in one single interaction region in an alternating mode, moving in and out every few
months through a so-called push-pull system. The main CLIC-specific adaptations to the ILC
detector concepts are an increased hadron-calorimeter depth to improve the containment of
jets at the CLIC centre-of-mass energy of up to 3 TeV and a redesign of the vertex and
forward regions to mitigate the effect of high rates of beam-induced backgrounds.
Fig. 2 shows cross-section views of CLIC_ILD and CLIC_SiD. Both detectors have a barrel and endcap geometry, with the barrel calorimeters and tracking systems located inside a superconducting solenoid providing an axial magnetic field of 4 T in the case of CLIC_ILD and 5 T in the case of CLIC_SiD. The highly granular electromagnetic and hadronic calorimeters of both detectors are designed for the concept of particle-flow calorimetry, which allows individual particles to be reconstructed by combining calorimeter and tracking information, thereby improving the jet-energy resolution to the required excellent levels. The total combined depth of the electromagnetic and hadronic calorimeters is about 8.5 hadronic interaction lengths. The hit-time resolution of the calorimeters is of the order of 1 ns.



Fig. 2. Longitudinal cross section of the top quadrant of CLIC_ILD (left) and CLIC_SiD (right)
In the CLIC_ILD concept, the tracking system is based on a large Time Projection Chamber (TPC) with an outer radius of 1.8 m, complemented by an envelope of silicon strip detectors and by a silicon pixel vertex detector. The all-silicon tracking and vertexing system in CLIC_SiD is more compact, with an outer radius of 1.3 m.
Both vertex detectors are based on semiconductor technology with pixels of 20 μm x 20 μm size. In the case of CLIC_ILD, both the barrel and forward vertex detectors consist of three double layers, which reduce the material thickness needed for supports. Fig. 3 shows a sketch of the vertex-detector region of CLIC_ILD. For CLIC_SiD, a geometry with five single barrel layers and seven single forward layers was chosen. The high rate of incoherently produced electron-positron pair background events constrains the radius of the thin beryllium beam pipes and of the innermost barrel layers. For CLIC_ILD the beam pipe is placed at a radius of 29 mm, while the larger magnetic field in CLIC_SiD leads to a larger suppression of low-p_T charged particles and therefore allows for a reduced beam-pipe radius of 25 mm. The material budget of 0.1% - 0.2% of a radiation length per detection layer assumes that cooling can proceed through forced air flow without additional material in the vertex region. The resulting impact-parameter resolutions are as precise as 3 μm for high-momentum tracks, and the momentum resolution of the overall tracking systems reaches the required value of 2 x 10^-5 GeV^-1 for σ(p_T)/p_T^2. Time stamping of the pixel and strip hits with a precision of 5 - 10 ns will be used to reject out-of-time background hits.


The superconducting solenoids are surrounded by instrumented iron yokes that allow punch-through from high-energy hadron showers to be measured and muons to be detected. Two small electromagnetic calorimeters cover the very forward regions down to 10 mrad. They are foreseen for electron tagging and for an absolute measurement of the luminosity through Bhabha scattering.

5. Backgrounds in the Detectors
Beamstrahlung off the colliding electron and positron bunches will lead to high rates of electron-positron pairs, mostly at low transverse momenta and small polar angles. In addition, hadronic events are produced in two-photon interactions with larger transverse momenta and polar angles. Fig. 4 (right) compares the polar-angle distributions of the main sources of beam-induced background events. Electron-positron pairs produced coherently and through the so-called trident cascade do not affect the detectors, as they leave the detector towards the post-collision line with a design acceptance of <10 mrad. The incoherently produced electron-positron events affect mostly the forward regions and the inner tracking detectors. Approximately 60 particles from incoherent pair events per bunch crossing will reach the inner layers of the vertex detector. The γγ → hadrons events will result in approximately 54 particles per bunch crossing in the vertex detectors. Fig. 4 (left) shows the expected hit rates in the barrel vertex-detector layers of the CLIC_ILD detector, as obtained with two different simulation setups. Readout train occupancies of up to 2% are expected in the barrel layers and of up to 3% in the forward layers, including safety factors for the simulation uncertainties and cluster formation.
Due to their harder p_T spectrum, the γγ → hadrons events will also lead to large occupancies and significant energy deposits in the calorimeters. The expected train occupancies are up to 50% in the electromagnetic endcap calorimeters and up to 1000% in the hadronic endcap calorimeters. Multiple readouts per train and possibly a higher granularity for the high-occupancy regions will be required to cope with these high rates. The total energy deposition in the calorimeters from electron-positron pairs and from γγ → hadrons events is 37 TeV per train, posing a severe challenge for the reconstruction algorithms. Cluster-based timing cuts in the 1-3 ns range are applied offline to mitigate the effect of the backgrounds on the measurement accuracy for high-p_T physics objects.

Fig. 3. Longitudinal cross section of the barrel and forward vertex region of the
CLIC_ILD detector. Dimensions are given in millimeters


Fig. 4. Polar-angle distribution of the main sources of beam-induced backgrounds, normalised to one bunch crossing (left); average hit densities in the CLIC_ILD barrel vertex detectors for particles originating from incoherent electron-positron pairs and from γγ → hadrons (right).

The radiation exposure of the main detector elements is expected to be small, compared to the corresponding regions in high-energy hadron colliders. For the non-ionizing energy loss (NIEL), a maximum total fluence of less than 10^11 n_eq / cm^2 / year is expected for the inner barrel and forward vertex layers. The simulation results for the total ionizing dose (TID) predict approximately 200 Gy / year for the vertex detector region.

6. Detector R&D
Hardware R&D for the proposed CLIC detectors has a large overlap with the
corresponding developments for the ILC detectors. In several areas, however, CLIC-specific
requirements need to be addressed. The following list contains examples of ongoing R&D
projects for the CLIC detectors:
- Hadronic calorimetry. The higher jet energies expected at CLIC require a denser
absorber material for a given maximal radius of the barrel hadronic calorimeter,
compared to ILC conditions. Tungsten is therefore foreseen as absorber for the barrel
hadronic calorimeter. Prototypes of highly granular tungsten-based calorimeters with
either analog or digital readout are currently under study in test beams performed
within the CALICE collaboration. One of the main goals of these tests is to improve
the simulation models describing the enlarged slow component of the hadronic
showers in tungsten, compared to the ones in steel absorbers;
- Vertex detector. The vertex detectors have to fulfill a number of competing
requirements. Small pixels and therefore small feature sizes are needed to reach very
high measurement accuracy and to keep the occupancies low. Time stamping in the
5-10 ns range requires fast signal collection and shaping. The amount of material has
to stay within a budget of 0.1% - 0.2% of a radiation length per detection layer, asking
for ultra-thin detection and readout layers and low-mass cooling solutions. Two
principal lines of vertex-detector R&D are pursued to reach these ambitious goals: In
the hybrid-detector approach, thinned high-resistivity fully depleted sensor layers will
be combined with fast low-power and highly integrated readout layers through low-
mass interconnects. The integrated technology option combines sensor and readout in
one chip. The charge collection proceeds in an epitaxial layer. Hybrid solutions
factorize the sensor and readout R&D and take advantage of industry-standard
processes for the readout layers. Drawbacks are the higher material budget, the
additional material and cost for interconnects and the additional complication of
handling the thinned structures. Integrated technologies can reach lower material
budgets and very low power consumption. On the other hand, fast signal collection
and readout has not been demonstrated yet in these technologies. A concern for a
future application at CLIC is also the limited availability of the custom-made
integrated CMOS processes;
- Low-mass cooling solutions. A total power of approximately 500 W will be dissipated
in the vertex detectors alone. The small material budget for the inner tracking
detectors severely constrains the permitted amount of cooling infrastructure. For the
vertex barrel layers, forced air-flow cooling is therefore foreseen. Fig. 6 shows a
calculation of the temperature distribution inside the barrel layers of the CLIC_SiD
vertex detector as a function of the air-flow rate. A flow rate of up to 240 liter/s,
corresponding to a flow velocity of 40 km/h, is required to keep the temperatures at an
acceptable level (a rough cross-check of these numbers is sketched after this list).
Further R&D is required to demonstrate the feasibility of this air-flow cooling scheme.
Possible vibrations arising from the high flow velocities are of particular concern.
Supplementary micro-channel cooling [6] or water-based under-pressure cooling may be
required in the forward vertex regions;
- Power pulsing and power distribution. The ambitious power-consumption targets for
all CLIC subdetectors (for example < 50 mW/cm² in the vertex detectors) can only
be achieved by means of pulsed powering, taking advantage of the low duty cycle of
the CLIC machine. The main power consumers in the readout circuits will be kept in
standby mode during most of the empty gap of 20 ms between consecutive bunch
trains. Furthermore, efficient power distribution will be needed to limit the amount of
material used for cables. Low drop-out regulators or DC/DC converters will be used in
combination with local energy storage to limit the current and thereby the cabling
material needed to bring power to the detectors. Both the power pulsing and the
power-delivery concepts have to be designed and thoroughly tested for operation in a
magnetic field of 4-5 T;
- Solenoid coil. Design studies for high-field thin solenoids are ongoing, building on
the experience with the construction and operation of the LHC detector magnets.
Principal concerns are the uniformity of the magnetic field, the ability to precisely
measure the field map and the requirement to limit the stray field outside the detector;
- Overall engineering design and integration studies. Various CLIC-specific
engineering and integration studies are ongoing. The main areas of these studies are
the design of the experimental caverns including centralized infrastructure for cooling
and powering, access scenarios in the push-pull configuration and integration issues
related to the machine-detector interface.
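As a rough cross-check of the air-cooling figures quoted above (an illustration added here, not part of the original study), the snippet below converts the 240 liter/s flow rate and the 40 km/h flow velocity into the implied total flow cross-section.

# Rough consistency check of the quoted air-cooling numbers (illustrative only).
flow_rate = 240e-3        # m^3/s (240 liter/s, from the text)
flow_velocity = 40 / 3.6  # m/s   (40 km/h, from the text)

implied_area = flow_rate / flow_velocity        # total flow cross-section in m^2
print(f"implied flow cross-section: {implied_area * 1e4:.0f} cm^2")
# -> a total air-channel cross-section of order 200 cm^2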


Fig. 6. Calculated average temperatures of the five barrel layers of the CLIC_SiD vertex
detector as a function of the total air-flow rate

Conclusion
The detectors of the multi-TeV CLIC machine will have unsurpassed physics reach for
discoveries and for precision measurements complementing the results expected from the
LHC experiments. The proposed CLIC detector concepts will be able to measure the physics
with good precision, despite the high energies and challenging background conditions.
Detector R&D studies are ongoing worldwide, in collaboration with the ILC detector
community, aiming to meet the required performance goals.

References
[1] CLIC Conceptual Design Report: Physics & Detectors, 2011, available at
https://edms.cern.ch/document/1160419
[2] CLIC Conceptual Design Report: The CLIC Accelerator Design, in preparation.
[3] T. Abe et al. The International Large Detector: Letter of Intent, 2010, arXiv:1006.3396.
[4] H. Aihara et al. SiD Letter of Intent, 2009, arXiv:0911.0006, SLAC-R-944.
[5] J. Brau (ed.) et al. International Linear Collider Reference Design Report, 2007,
ILC-REPORT-2007-001.
[6] A. Mapelli et al. Low material budget microfabricated cooling devices for particle
detectors and front-end electronics. Nucl. Phys. Proc. Suppl., 215, 2011, pp. 349-352.
J/ψ → e+e- reconstruction in Au+Au collisions at 25 AGeV in the
CBM experiment

O.Yu. Derenovskaya¹, I.O. Vassiliev²,³
¹ Laboratory of Information Technologies, Joint Institute for Nuclear Research, Dubna, Russia
² Goethe University, Frankfurt, Germany
³ GSI, Darmstadt, Germany

Introduction
The Compressed Baryonic Matter (CBM) experiment [1, 2] is designed to investigate
high-energy heavy-ion collisions at the future international Facility for Antiproton and Ion
Research (FAIR) in Darmstadt, Germany. A scientific goal of the research program of the
CBM experiment is to explore the phase diagram of strongly interacting matter in the region of
the highest baryon densities.
The proposed detector system is schematically shown in Fig. 1. Inside the dipole
magnet there is a Silicon Tracking System (STS), which provides track and vertex
reconstruction and track momentum determination. The Ring Imaging Cherenkov (RICH) detector
has to identify electrons among about one thousand other charged particles. The Transition
Radiation Detector (TRD) arrays additionally identify electrons with momenta above
1 GeV/c. The TOF detector provides the time-of-flight measurements needed for hadron identification.
The electromagnetic calorimeter (ECAL) measures electrons and photons.




Fig. 1. CBM experimental setup

The investigation of charmonium production is one of the key goals of the CBM
experiment.
The main difficulty lies in the extremely low multiplicity expected in Au+Au
collisions at 25 AGeV, near the J/ψ production threshold. Therefore, an efficient event selection
based on J/ψ signatures is necessary in order to reduce the data volume to a recordable
rate. Here we present results on the reconstruction of the J/ψ meson in its di-electron decay channel
using the KFParticle package, with complete particle identification including RICH, TRD and TOF,
and assuming a realistic STS detector setup.

Input to the simulation
To study the feasibility of J/ψ detection, background and signal events have been
simulated. Decay electrons from the J/ψ were simulated with the PLUTO [5] generator. The
background was calculated with a set of central gold-gold UrQMD [6] events at 25 AGeV.
The signal embedded into the background was transported through the standard CBM detector setup.
In the event reconstruction, particles are first tracked by the Silicon Tracking System placed
inside the dipole magnetic field, which provides the momentum of the tracks. Global tracking provides
additional particle identification information using the RICH, TRD and TOF subdetectors.

Electron identification
In order to reconstruct the J/ψ we used the full electron identification procedure, including the
RICH, TRD and Time-of-Flight detectors. In the CBM experiment the electrons and positrons
are identified via their Cherenkov radiation measured with the RICH. The Cherenkov ring
positions and radii are determined by dedicated ring-recognition algorithms, and the ring
centers are attached to the reconstructed particle tracks. The radius of the reconstructed rings
is shown in Fig. 2 as a function of the particle momentum. We use the elliptic ring fit and
apply ring-quality cuts based on a neural network to separate electrons from pions.
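The separation visible in Fig. 2 follows from the Cherenkov relation cos θ_c = 1/(nβ): electron rings reach the asymptotic radius almost immediately, while pion rings only approach it well above their threshold. The short sketch below illustrates this numerically; the refractive index is an assumed, typical value for a gaseous radiator, not the exact CBM RICH figure.

import math

# Illustrative sketch (not the CBM reconstruction code): Cherenkov angle versus
# momentum for electrons and pions. N is an assumed refractive index.
N = 1.0005
MASSES = {"e": 0.000511, "pi": 0.1396}   # GeV/c^2

def cherenkov_angle(p, m, n=N):
    """Cherenkov angle in rad, or None below threshold."""
    beta = p / math.sqrt(p * p + m * m)
    if n * beta <= 1.0:
        return None                      # below threshold: no ring
    return math.acos(1.0 / (n * beta))

for p in (1.0, 3.0, 5.0, 10.0, 20.0):    # GeV/c
    report = {name: cherenkov_angle(p, m) for name, m in MASSES.items()}
    print(p, {k: (f"{a * 1e3:.1f} mrad" if a else "no ring") for k, a in report.items()})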


Fig. 2. The radius of the reconstructed rings as a function of the particle momentum.
Electrons and pions are clearly separated up to momenta of about 10 GeV/c


The electrons are also identified via their transition radiation measured with the TRD.
Fig. 3 shows the distributions of the energy losses of electrons and pions in the first TRD layer.
The distributions of the energy losses in the other TRD layers are similar. Based on the individual and
total energy losses we employed a neural network (a three-layered perceptron from the ROOT
package) to discriminate electrons from pions.
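The classifier used in the paper is the three-layered perceptron of the ROOT package; the sketch below is only a schematic stand-in, trained on synthetic energy-loss values with scikit-learn, to show the shape of such a discriminator (the layer count, the toy dE/dx model and the working point are assumptions, not the actual CBM tuning).

import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n, n_layers = 20000, 12                  # toy statistics; 12 TRD layers is an assumption

# Toy energy losses (arbitrary units): pions ~ dE/dx only,
# electrons ~ dE/dx plus a transition-radiation contribution.
pions = rng.gamma(shape=2.0, scale=1.5, size=(n, n_layers))
electrons = rng.gamma(shape=2.0, scale=1.5, size=(n, n_layers)) \
            + rng.exponential(scale=4.0, size=(n, n_layers))

X = np.vstack([pions, electrons])
y = np.concatenate([np.zeros(n), np.ones(n)])            # 0 = pion, 1 = electron
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

clf = MLPClassifier(hidden_layer_sizes=(12,), max_iter=300, random_state=0)
clf.fit(X_tr, y_tr)

p_e = clf.predict_proba(X_te)[:, 1]
cut = 0.9                                                # assumed working point
eff = np.mean(p_e[y_te == 1] > cut)
leak = np.mean(p_e[y_te == 0] > cut)
print(f"electron efficiency {eff:.2f}, pion suppression ~{1 / max(leak, 1e-6):.0f}")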


Fig. 3. Distribution of energy losses by electrons (dE/dx + TR) and pions (dE/dx) in the first
TRD layer

In addition to the RICH and TRD, the information from the TOF detector is also used to separate
hadrons from electrons (Fig. 4). The squared mass of charged particles is calculated from the
length traversed by the particle and the time of flight. A momentum-dependent cut on the squared
mass is used to reject hadrons (mainly pions) from the identified electron sample.
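In formula form, m² = p²(c²t²/L² − 1), with p the momentum, L the track length and t the time of flight. A minimal sketch with made-up track parameters (not CBM data):

# Illustrative only: squared mass from momentum, path length and time of flight.
C = 0.299792458                       # speed of light in m/ns

def mass_squared(p_gev, length_m, tof_ns):
    beta = length_m / (C * tof_ns)
    return p_gev ** 2 * (1.0 / beta ** 2 - 1.0)   # GeV^2/c^4

# A 1 GeV/c track over 10 m: an electron needs ~33.36 ns, a pion is slightly slower.
for label, t in (("electron-like", 33.36), ("pion-like", 33.68)):
    print(label, f"m^2 = {mass_squared(1.0, 10.0, t):.3f} GeV^2")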


Fig. 4. The squared mass of charged particles as a function of the momentum in the TOF for
RICH identified electrons

The electron identification efficiency as well as the π-suppression factor as a function of
momentum is shown in Fig. 5.




Fig. 5. Efficiency of electron identification (left) and pion suppression factor (right) as a
function of momentum

With the combined information from all detectors, we achieve an electron identification efficiency
of 60%. The combined RICH and TRD identification suppresses pions by a factor of about 13000.
Evidently, the use of the TRD information significantly improves the electron-pion separation.

Reconstruction procedure
After electron identification, the positively charged tracks emerging from the target
which was identified as electrons by the RICH, TRD and TOF detectors are combined with
negatively charged tracks to construct a J/ - candidate, using the KFParticle package [3]. In
order to suppress the physical electron background a transverse momentum cut at 1 GeV/c
was applied to track - candidates for J/ decays to electrons. Fig. 6 demonstrates z - vertex of
reconstructed J/. We have got a quite good z-vertex resolution, which shows that using
KFParticle allows us to distinguished J/ vertex with high accuracy.
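Numerically, the core of the candidate building reduces to a four-vector sum of the two daughters after the p_T > 1 GeV/c cut (KFParticle additionally fits them to a common vertex). The hedged sketch below uses toy track parameters, not the actual package:

import math

M_E = 0.000511                                     # electron mass, GeV/c^2

def four_vector(px, py, pz, m=M_E):
    e = math.sqrt(px * px + py * py + pz * pz + m * m)
    return (e, px, py, pz)

def invariant_mass(a, b):
    e, px, py, pz = (a[i] + b[i] for i in range(4))
    return math.sqrt(max(e * e - px * px - py * py - pz * pz, 0.0))

def jpsi_candidates(positrons, electrons, pt_cut=1.0):
    """Combine e+ and e- tracks (tuples of px, py, pz in GeV/c) above the pT cut."""
    masses = []
    for p in positrons:
        for q in electrons:
            if math.hypot(p[0], p[1]) < pt_cut or math.hypot(q[0], q[1]) < pt_cut:
                continue
            masses.append(invariant_mass(four_vector(*p), four_vector(*q)))
    return masses

# Toy back-to-back pair, roughly at the J/psi mass (3.097 GeV/c^2):
print(jpsi_candidates([(1.55, 0.0, 0.3)], [(-1.55, 0.0, 0.3)]))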

Fig. 6. Distribution of the z-vertex of the reconstructed J/ψ; the rectangle indicates the target area

For the study of the signal-to-background ratio, the signal mass spectrum was
generated from events with one J/ψ decay embedded into the UrQMD background. The
combinatorial background was obtained from pure UrQMD events. To increase the statistics,
the event-mixing technique was applied. The signal spectrum was added to the background
after proper scaling, taking into account the assumed multiplicity (HSD transport code),
the J/ψ reconstruction efficiency and the branching ratio. The resulting invariant-mass spectrum
in the charmonium mass region is displayed in Fig. 7.
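Event mixing combines a positron candidate from one event with electron candidates from other events, so no true resonance can contribute and the combinatorial background shape can be sampled with much larger statistics. A hedged sketch of the bookkeeping only (toy structures, not the CBM code):

import itertools

# Illustrative event-mixing bookkeeping (not the CBM implementation).
def mixed_pairs(events, depth=5):
    """events: list of (positrons, electrons); yields cross-event (e+, e-) pairs."""
    for i, (pos_i, _) in enumerate(events):
        for j in range(max(0, i - depth), i):      # mix with a few previous events
            _, ele_j = events[j]
            yield from itertools.product(pos_i, ele_j)

# Toy input: each event carries lists of identified e+ / e- candidates (any payload).
toy_events = [(["e+_a"], ["e-_a"]), (["e+_b"], []), ([], ["e-_c", "e-_d"])]
print(list(mixed_pairs(toy_events)))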

Fig. 7. Invariant mass spectra of J/ψ and ψ′ mesons for central Au+Au collisions at 25 AGeV

The spectrum corresponds to 10¹¹ central gold-gold collisions at 25 AGeV, or roughly
28 hours of beam time at the full CBM interaction rate.

Table 1. Multiplicity, branching ratio, signal-to-background ratio, reconstruction efficiency
and mass resolution for J/ψ and ψ′ in central Au+Au collisions at 25 AGeV

        Multiplicity    Br. ratio     S/B       Efficiency   Mass resolution
J/ψ     1.92 × 10⁻⁵     0.06          ~ 2       0.19         24 MeV
ψ′      2.56 × 10⁻⁷     8.8 × 10⁻³    ~ 0.043   0.19         25 MeV

Conclusion
The CBM detector allows one to collect about 3150 J/ψ and 1.4 ψ′ per hour, with signal-to-background
ratios of about 2 and 0.04, respectively, at a 10 MHz interaction rate. The
simulations were performed using a realistic detector setup. Complete electron identification
including the RICH, TRD and TOF detectors was used. We conclude that the feasibility of J/ψ
and even ψ′ measurements in central collisions of heavy ions with CBM looks promising.
A 25 μm gold target will be used in order to reduce γ-conversion. The study will be continued
with new STS, RICH, TRD and TOF geometries.

References
[1] Compressed Baryonic Matter in Laboratory Experiments. The CBM Physics Book, 2011,
http://www.gsi.de/forschung/fair_experiments/CBM/PhysicsBook.html
[2] Compressed Baryonic Matter Experiment. Technical Status Report, GSI, Darmstadt, 2005,
http://www.gsi.de/onTEAM/dokumente/public/DOC-2005-Feb-447 e.html
[3] S. Gorbunov and I. Kisel. Reconstruction of Decayed Particles Based on the Kalman Filter.
CBM-SOFT-note-2007-003, http://www.gsi.de/documents/DOC-2007-May-14.html
[4] O. Derenovskaya. 17th CBM collaboration meeting, Dresden, Germany, 2011.
[5] http://www-hades.gsi.de/computing/pluto/html/PlutoIndex.html
[6] M. Bleicher, E. Zabrodin, C. Spieles et al. Relativistic Hadron-Hadron Collisions in the
Ultra-Relativistic Quantum Molecular Dynamics Model (UrQMD). (1999-09-16). In
J. Phys. G 25, p. 1859.

Acquisition Module for Nuclear and Mössbauer Spectroscopy

L. Dimitrov, I. Spirov, T. Ruskov
Institute for Nuclear Research and Nuclear Energy, Bulgarian Academy of Sciences, Bulgaria

1. Introduction
Until recently, eight-bit microcontrollers (MCUs) were dominant in industrial
control systems, as well as in portable measuring devices, including those in various nuclear
applications. Sharply decreasing prices and power consumption have led to significant
growth in the use of 16-bit and, recently, of 32-bit high-performance MCUs. This process is
facilitated by the fact that developers can benefit from low-cost development tools and
widely available free software.
In this paper a PC-controlled, USB-powered portable acquisition module for nuclear
and Mössbauer spectroscopy is described. Numerous modes of operation are possible:
- 4k-channel pulse-height spectrum analyzer using a 12-bit successive-approximation
ADC with 5 µs conversion time (with or without sliding-scale linearization),
- 1k-channel pulse-height spectrum analyzer using a 10-bit successive-approximation
ADC with 1 µs conversion time (with or without sliding-scale linearization),
- up to 8k-channel pulse-height spectrum analyzer using an external high-performance
spectroscopy ADC,
- 256- to 2k-channel (in steps of 256 channels) Mössbauer spectrum analyzer (multiscaler),
- single-channel analyzer,
- double counter,
- double ratemeter,
- timer.

2. General Description
The block diagram of the module is shown in Fig. 1. The module has one analog input
(Ain, BNC connector), two multipurpose digital inputs (Din1 and Din2, BNC connectors) and
one type B USB connector and one 26-pin header for an external spectroscopy ADC. Except
for the PZ adjustment the whole module control is carried out by the MCU mainly through the
PCI interface functions:
- select mode of operation,
- Spectroscopy Amplifier gain adjust and input polarity select,
- reading both external 12 bits ADC and internal 10 bits ADC,
- setting voltage levels in Single Channel Analyzer,
- loading sliding scale voltage value in Sliding Scale DAC,
- configuring Control Logic for selected mode of operation.

2.1. Power Supplies and References
All supply voltages (3.3 V for the MCU and logic, and 5 V for the linear part of the module)
and all reference voltages are derived from the 5 V USB supply. The MCU manages the biasing
voltages depending on the current mode of operation, thus reducing power consumption. When
no measurement is running, all power except that required for the MCU is switched off.



Fig. 1. Block diagram

2.2. Spectroscopy amplifier (SA)
The SA has four stages. The input stage consists of a passive differentiator with pole-zero
cancellation circuitry and an operational amplifier with unity gain. A two-way analog
switch controlled by the MCU is used to select the appropriate input polarity. The input stage
is followed by two programmable-gain operational amplifiers with passive integrators in
front of them. The SA has a fixed time constant of 1 µs. The total gain can be set by the MCU from 1
to 1024 in twenty steps. The last stage includes an output buffer operational amplifier and a
baseline restorer. All operational amplifiers are rail-to-rail, low-power types, supplied by a
single 5 V source.

Fig. 2. Spectroscopy amplifier

2.3. Peak Detector (PD)
The peak detector samples the peak value of the pulse from the SA and at the same time
issues a Ready pulse. The sampled voltage (Vpeak) is held until it is sampled by either the 12-bit
or the 10-bit ADC, after which the discharge of the sampling capacitor is started.

2.4. Single Channel Analyzer
While measuring both pulse-height and Mössbauer spectra, only input pulses with
amplitudes between predetermined lower and upper limits are allowed to be processed.

2.5. Sliding Scale DAC
The 12-bit DAC delivers the voltage for the sliding-scale linearisation. Simultaneously
with reading ADC12, the MCU loads the sliding-scale DAC, thus avoiding the introduction of
additional dead time.
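The principle of sliding-scale linearisation is that a known pseudo-random DAC value is added to the input before conversion and subtracted digitally afterwards, so that a given pulse height exercises many different ADC codes and the channel-width errors average out. A schematic software model (idealised, not the module firmware):

import random

# Schematic model of sliding-scale linearisation (illustrative only).
SLIDE_BITS = 8                        # assumed sliding range, not the real design value

def ideal_adc(x):
    return int(x)                     # stand-in for the successive-approximation ADC

def convert(vin_channels):
    offset = random.randrange(2 ** SLIDE_BITS)   # value loaded into the sliding-scale DAC
    raw = ideal_adc(vin_channels + offset)       # the analogue sum is digitised
    return raw - offset, raw                     # corrected channel, raw code actually used

# The same pulse height always lands in the same corrected channel, while the raw
# ADC code (i.e. the converter segment being exercised) changes from pulse to pulse.
for _ in range(5):
    corrected, raw = convert(1000.3)
    print(f"corrected channel {corrected}, raw ADC code {raw}")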

2.6. Control logic
The whole module logic, including the control logic, is implemented in one CPLD chip.
The control logic is configured by the MCU depending on the selected mode of operation.

2.7. MCU
A powerful 32-bit, 80 MHz MCU is the core of the module. Due to its enhanced
features, multiple power-management modes and multiple interrupt vectors with individually
programmable priorities, the MCU provides flexibility and efficiency at a low energy consumption
of about 40 mA for the whole module.


Acknowledgments
The financial support of National Fund contract DTK-02/77/09 is gratefully
acknowledged.
Business Processes in the Context of Grid and SOA

V. Dimitrov
Faculty of Mathematics and Informatics, University of Sofia, Bulgaria

The term "business process" has a very broad meaning. In this paper the term is investigated in the context
of Grid computing and Service-Oriented Architecture. A business process in this context is a composite Web
service written in WS-BPEL and executed by a specialized Web service called an orchestrator. Grid computing
is intended to be an environment for business processes of this kind and, moreover, this environment is to be
implemented as Web services. The problems of implementing business processes with WS-BPEL in a Grid
environment are investigated and discussed.
Business Process and Web Services
What is a business process? OMG in [1] defines a business process as "a defined set of
business activities that represent the steps required to achieve a business objective. It includes the
flow and use of information and resources." These business activities are sometimes called
tasks.
A task can be a local one (implemented as part of the business process) or an external one
(available for reuse by other processes). In the latter case, the task is specified as a Web
service, following the OASIS WS-* specifications. The process is a Web service too, so it is
available as a Web service to other processes. A process can have a sub-process as a step,
but that sub-process is implemented as a part of the enclosing process.
Web services can be simple or composite. The former are implemented in
some programming language. The latter are implemented as compositions of other Web
services, i.e. as business processes. Sophisticated hierarchies of composite Web services can
be implemented at different abstraction levels.
The business process has a business objective. This business objective could be a
scientific one, which means that a business process could be classified as a scientific business
process. How the business process achieves its objective has to be measured. This
measurement can be implemented via monitoring of the Key Performance Indicators (KPI) of
the business process. Hence, a monitoring subsystem is an essential part of the Business Process
Management System (BPMS), the execution environment of the business process.
The business process is defined by its workflow: all possible sequences of execution
of its steps (tasks). Every task needs resources and possibly information to run. These resources
and information have to be provisioned to the process for its execution. The information can be
local (process state) or derived from an external source (i.e. a Web service). Web services
maintaining local state are called stateful, to distinguish them from stateless
Web services that do not support conversational state.
A business process can be specified in BPMN or in WS-BPEL [2]. A BPMN specification
is at a higher level of abstraction and does not contain any implementation information. WS-BPEL
specifies a business process as a composite Web service built from other Web services, as presented above.
It is possible for a BPMN specification to be directly interpreted and executed in a business process
execution environment (IBM WebSphere Lombardi [5]), but usually it is translated into a WS-BPEL
specification that is interpreted by a specialized Web service called an orchestrator (IBM
WebSphere Business Modeler & Process Server [6, 7]). The first approach is used for fast
prototyping of business processes developed from scratch. The second approach is used for the
development of sophisticated business processes integrating reusable software exposed as Web
services.
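In WS-BPEL this composition is declared in XML and executed by the orchestrator engine; the short Python sketch below is only a procedural analogue of the same idea (two reusable task services composed into one higher-level composite service, itself callable like any other service), not a BPEL engine.

# Procedural analogue of orchestration (illustrative only, not WS-BPEL):
# two reusable "task" services composed into one composite service.

def extract_service(dataset_id):
    # stand-in for an external Web service invocation
    return {"dataset": dataset_id, "records": [1, 2, 3]}

def analyse_service(records):
    # another reusable task, again a stand-in for a remote call
    return {"count": len(records), "sum": sum(records)}

def composite_service(dataset_id):
    """The 'business process': a fixed flow over the two task services."""
    data = extract_service(dataset_id)            # first <invoke>
    result = analyse_service(data["records"])     # second <invoke>
    return {"dataset": dataset_id, **result}      # <reply> to the caller

print(composite_service("run-42"))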
Business Process Management
Business process management (BPM) is a holistic management approach [3] focused
on aligning all aspects of an organization with the wants and needs of clients. It promotes
business effectiveness and efficiency while striving for innovation, flexibility, and integration
with technology. BPM attempts to improve processes continuously. It can therefore be
described as a "process optimization process." It is argued that BPM enables organizations to
be more efficient, more effective and more capable of change than a functionally focused,
traditional hierarchical management approach.
BPM is the business process development process. Its life cycle consists of vision, design,
modeling, execution, monitoring, and optimization.

Vision. Functions are designed around the strategic vision and goals of an organization. Each
function is attached to a list of processes. Each functional head in an organization is
responsible for certain sets of processes made up of tasks which are to be executed and
reported as planned. Multiple processes are aggregated to accomplish a given function,
and multiple functions are aggregated to achieve organizational goals.

Design. It encompasses both the identification of existing processes and the design of "to-be"
processes. Areas of focus include representation of the process flow, the actors within it, alerts
& notifications, escalations, Standard Operating Procedures, Service Level Agreements, and
task hand-over mechanisms. Good design reduces the number of problems over the lifetime of
the process. Whether or not existing processes are considered, the aim of this step is to ensure
that a correct and efficient theoretical design is prepared. The proposed improvement could be
in human-to-human, human-to-system, and system-to-system workflows, and might target
regulatory, market, or competitive challenges faced by the businesses.

Modeling. Modeling takes the theoretical design and introduces combinations of variables
(e.g., changes in rent or materials costs, which determine how the process might operate under
different circumstances). It also involves running "what-if analysis" on the processes.

Execution. One of the ways to automate processes is to develop or purchase an application
that executes the required steps of the process; however, in practice, these applications rarely
execute all the steps of the process accurately or completely. Another approach is to use a
combination of software and human intervention; however this approach is more complex,
making the documentation process difficult. As a response to these problems, software has
been developed that enables the full business process to be defined in a computer language
which can be directly executed by the computer. The system will either use services in
connected applications to perform business operations or, when a step is too complex to
automate, will ask for human input. Compared to either of the previous approaches, directly
executing a process definition can be more straightforward and therefore easier to improve.
However, automating a process definition requires flexible and comprehensive infrastructure,
which typically rules out implementing these systems in a legacy IT environment. Business
rules have been used by systems to provide definitions for governing behavior, and a business
rule engine can be used to drive process execution and resolution.

Monitoring. Monitoring encompasses the tracking of individual processes, so that
information on their state can be easily seen, and statistics on the performance of one or more
processes can be provided. An example of the tracking is being able to determine the state of
a customer order so that problems in its operation can be identified and corrected. In addition,
this information can be used to work with customers and suppliers to improve their connected
processes. These measures tend to fit into three categories: cycle time, defect rate and
productivity. The degree of monitoring depends on what information the business wants to
evaluate and analyze and how business wants it to be monitored, in real-time, near real-time
or ad-hoc. Here, business activity monitoring (BAM) extends and expands the monitoring
tools generally provided by BPMS. Process mining is a collection of methods and tools
related to process monitoring. The aim of process mining is to analyze event logs extracted
through process monitoring and to compare them with an a priori process model. Process
mining allows process analysts to detect discrepancies between the actual process execution
and the a priori model as well as to analyze bottlenecks.

Optimization. Process optimization includes retrieving process performance information
from modeling or monitoring phase; identifying the potential or actual bottlenecks and the
potential opportunities for cost savings or other improvements; and then, applying those
enhancements in the design of the process. Overall, this creates greater business value.

Re-engineering. When the process becomes too noisy and optimization does not deliver the
desired output, it is recommended to re-engineer the entire process cycle. Business process
re-engineering (BPR) has become an integral part of manufacturing organizations striving to
achieve efficiency and productivity at work.
Workflow
The definition of this term is given in [4] as follows: "Workflow is concerned with the
automation of procedures where documents, information or tasks are passed between
participants according to a defined set of rules to achieve, or contribute to, an overall business
goal. Whilst workflow may be manually organized, in practice most workflow is normally
organized within the context of an IT system to provide computerized support for the
procedural automation and it is to this area that the work of the Coalition is directed."
Workflow is associated with business process re-engineering.
Workflow is the computerized facilitation or automation of a business process, in
whole or in part. This means that workflow is a more specialized term than business process.
The latter does not concern only automation, and here the term business process is therefore
preferred over workflow.
A Workflow Management System is a system that completely defines, manages and
executes workflows through the execution of software whose order of execution is driven
by a computer representation of the workflow logic. All WFM systems may be characterized
as providing support in three functional areas:
- the build-time functions, concerned with defining, and possibly modeling, the
workflow process and its constituent activities;
- the run-time control functions, concerned with managing the workflow processes in an
operational environment and sequencing the various activities to be handled as part of
each process;
- the run-time interactions with human users and IT application tools for processing the
various activity steps.
SOA
Service-Oriented Architecture (SOA), according to [8], is "the architectural solution for
integrating diverse systems by providing an architectural style that promotes loose coupling
and reuse". SOA is an architectural style, which means that it is not a technology. The
fundamental constructs of SOA are the service (a logical, self-contained business function), the service
provider and the service consumer. The service is specified with an implementation-independent
interface, and the interaction between service consumer and service provider is based only on
this interface.
As an architectural style, SOA defines some requirements for the services:
Stateless. SOA services neither remember the last thing they were asked to do nor care
what the next one will be. Services are not dependent on the context or state of other
services, only on their functionality. Each request or communication is discrete and
unrelated to requests that precede or follow it;
Discoverable. A service must be discoverable by potential consumers of the service.
After all, if a service is not known to exist, it is unlikely ever to be used. Services are
published or exposed by service providers in the SOA service directory, from which
they are discovered and invoked by service consumers;
Self-describing. The SOA service interface describes, exposes, and provides an entry
point to the service. The interface contains all the information a service consumer
needs to discover and connect to the service, without ever requiring the consumer to
understand (or even see) the technical implementation details;
Composable. SOA services are, by nature, composite. They can be composed from
other services and, in turn, can be combined with other services to compose new
business solutions;
Loose coupling. Loose coupling allows the concerns of application features to be
separated into independent pieces. This separation of concern provides a mechanism
for one service to call another without being tightly bound to it. Separation of concerns
is achieved by establishing boundaries, where a boundary is any logical or physical
separation that delineates a given set of responsibilities;
Governed by policy. Services are built by contract. Relationships between services
(and between services and service domains) are governed by policies and service-level
agreements (SLAs), promoting process consistency and reducing complexity;
Independent location, language, and protocol. Services are designed to be location
transparent and protocol/platform independent (generally speaking, accessible by any
authorized user, on any platform, from any location);
Coarse-grained. Services are typically coarse-grained business functions. Granularity
is a statement of functional richness for a service: the more coarse-grained a service
is, the richer the function offered by the service. Coarse-grained services reduce
complexity for system developers by limiting the steps necessary to fulfill a given
business function, and they reduce strain on system resources by limiting the
chattiness of the electronic conversation. Applications by nature are coarse-grained
because they encompass a large set of functionality; the components that comprise
applications would be fine-grained;
Asynchronous. Asynchronous communication is not required of an SOA service, but
it does increase system scalability through asynchronous behavior and messaging
techniques. Unpredictable network latency and high communications costs can slow
response times in an SOA environment, due to the distributed nature of services.
Asynchronous behavior and messaging allow a service to issue a service request and
then continue processing until the service provider returns a response.
The most popular technological implementation of SOA is Web services. Starting
from version 1.1, the Web services specifications have diverged to capture more of SOA. Not all
of the above-mentioned requirements are directly supported by Web services, but SOA can be supported at
design time. Some of the requirements are not desirable in some environments, like stateless
services in a Grid; that is why SOA is an architectural style and not a technology.
Incidentally, the Web services specifications have been modified to capture Grid
requirements on Web services. The most important result of this initiative is the Web Services
Resource Framework (WSRF), an extension to Web services. More details on these
extensions are discussed below.
OGSA
Open Grid Services Architecture (OGSA), as specified in [9]: "OGSA realizes the
logical middle layer in terms of services, the interfaces these services expose, the individual
and collective state of resources belonging to these services, and the interaction between these
services within a service-oriented architecture (SOA)." The services are built on Web service
standards, with semantics, additions, extensions and modifications that are relevant to Grids.
This means that Grids that are OGSA-compliant are SOA-compliant, based on Web services. The
core WS-* specifications do not include orchestration of Web services, but the full power
of SOA can be achieved only with WS-BPEL orchestrators. This is well understood by the
main SOA vendors, and they usually offer several variants of orchestration. Why is it so
important for SOA suites to have an orchestrator service? Because with the orchestrator the
reusability of the services is accomplished very well, and it becomes possible simply to compose new
Web services in hierarchies at different abstraction levels.
In reality we are still far away from OGSA-compliant Grids. Even the OGSA specification
still continues to visualize the Grid as a mega batch computer. There is a terminology mismatch
in OGSA between the past and the future. We hope that this will be overcome in the next versions
of OGSA; GGF and the WS community are working very closely together. Today, OGF uses the WS-*
specifications, deriving its own profiles for OGSA, and does not extend them. In these profiles
the word MAY is mainly changed to MUST for Grids.
Our focus here is on business processes. What does OGSA specify on that topic?
Many OGSA services are expected to be constructed in part, or entirely, by invoking other
services; the EMS Job Manager is one such example. There are a variety of mechanisms that
can be used for this purpose.
Choreography. Describes required patterns of interaction among services, and
templates for sequences (or more complex structures) of interactions.
Orchestration. Describes the ways in which business processes are constructed from
Web services and other business processes, and how these processes interact.
Workflow. Defines a pattern of business process interaction, not necessarily
corresponding to a fixed set of business processes. All such interactions may be
between services residing within a single data center or across a range of different
platforms and implementations anywhere.
OGSA, however, will not define new mechanisms in this area. It is expected that
developments in the Web services community will provide the necessary functionality. The
main role of OGSA is therefore to determine where existing work will not meet Grid
architecture needs, rather than to create competing standards.
In other words, OGSA does not say anything about the central competitive issue of the SOA
platform vendors. OGF waits for solutions from the Web services world.
Some Considerations and Conclusion
There are two important things that have to be mentioned when we talk about the service
orientation of Grids. The first is that a Grid could be implemented in a service-oriented way, but this
does not necessarily mean that it is a service-oriented environment for the execution of service-oriented
solutions. The second is that an environment could be a service-oriented environment
for the execution of service-oriented solutions, but this does not mean that this environment has
to be implemented in a service-oriented way.
OGSA specifies a service-oriented architecture for Grid implementation, but it does not
necessarily specify the Grid as a service-oriented platform supporting the execution and development of
service-oriented solutions. It is extremely primitive to consider a batch job as a business
process and job tasks as services, as is done by some authors. What is the difference? Services
have a fixed location, even in the case of the WSRF extension. This means that they are installed,
configured, managed and supported at given locations. Every service has an owner
responsible for it. A service needs a supporting execution environment; it is not possible for a
service to be executed at any free computing resource. That would be the same as expecting a
program written in a high-level programming language to be executed directly on the
computer without compiling, etc. A service-oriented solution is like a program written in a high-level
programming language, whereas a batch job is like a machine-code program.
The ideas are the same, but the technologies, the tools and, most importantly, the programmer productivity
are extremely different.
The next question is how business processes could be implemented in Grids, i.e. how
the Grid could become a service-oriented platform. In practice, SOA platforms are very different
from what is claimed by their vendors. SOA solutions running on one platform in one data
center are really execution-optimal. When a business process has to access a remote Web
service, it has to do intensive XML interchange with that remote Web service. This problem
can be solved only with specialized hardware solutions that off-load the servers from XML
processing and security-protection tasks. When a business process is running on one server, it
is translated into a simple object-oriented program in Java or C#.
It is clear by now that the business process specification for Grids would be WS-BPEL
with some possible Basic Profiles. The most important component is the orchestrator Web service.
Nowadays we have many orchestrators from different vendors. All of them use WS-BPEL
with some extensions. There is no WS specification for an orchestration service, and no
vendor has the intention to specify such a service. The situation is like that of the pre-Internet era: many
vendors supplied incompatible private network solutions while standardization
organizations tried to develop commonly accepted networking standards. The
Internet tried to create a network of networks, and then it happened that its protocols
became the local ones. That is why today OGF has to try to establish an orchestrator Web
service specification, an orchestrator of the orchestrators; no vendor would do that.
One remark on the business process specification language: some researchers have
tried to use specification languages other than WS-BPEL, arguing that it is not human
friendly, but there is no BPEL engine that lacks tools for a graph representation of WS-BPEL,
which is really human friendly. Some of these researchers even try to argue that Petri nets, as a
simple formal technique, are better than WS-BPEL. First, WS-BPEL uses Petri nets (links)
and, in my opinion, it is the worst part of the specification; my experience shows that most of
the bad errors result from links. Petri nets are a good formal technique; their power as a
formal technique is that their expressive power is greater than that of finite state automata and
less than that of a Turing machine. So extending Petri nets to Turing-machine expressive power is
nonsense, but that is the way of business process composition with Petri nets in this
research.
There are two scenarios for a BPEL orchestrator. In the first one, the orchestrator engine is
located in the Grid. In this case, all Grid functionality is fully available to the engine. At this
point we have to mention some performance issues. SOA solutions could be high-performance
or not. This is achieved mainly through the SOA architect's efforts; it is not a problem
of automatic optimization. How is this done? The orchestrator has its own supporting
infrastructure. The latter is located at the site where the orchestrator is located. Then most
of the services used in the business process composition are local to the same site. Only
some of the services are not local. Think of data as services, the standard SOA approach.
This means that the data exchange among the services of the business process is
optimized and supported in the local site by a high-performance local area network and other
applicable techniques. Interactions with remote services are slow, but acceptable. The business
process is an optimal solution developed by the SOA architect; it is not a subject of
automatic optimization.
In the first scenario, the orchestrator and its entire infrastructure are part of the Grid. This
means that the orchestrated Web services are located in the Grid at the orchestrator's site. Some of
them could be exposed outside the Grid, but they are mainly for consumption in the Grid. Every
site could have a different BPEL orchestrator exposing Web services. Overall compatibility
of these orchestrator engines in the Grid has to be defined using a Basic Profile.
In the second scenario, the orchestrator engine is outside the Grid. This means that its
business processes could use some Grid services as remote services, as mentioned above.
These processes could be, for example, results-visualization ones. In this scenario the main problem is the
security issue: how Grid services are to be accessed from the outside. Not that this problem
does not exist inside the Grid, but in this situation it is more difficult. The WS-Security
framework could be used, but it has to be part of the Grid functionality.
One final remark: do not think of the Grid as a mega batch engine. This old vision still
exists in OGSA; think of the Grid instead as an ocean of services. These services are located at fixed
Grid sites; they are exposed and can be used. What is the purpose of this remark? Many
researchers continue to view the Grid in the old-fashioned way and, as a result, their
efforts are mainly directed at encapsulating the job batch engine as a Web service. For them the task
is a task in the job, not in the Web service. Even the business process is compiled into a batch
job that is sent for execution in the Grid. This is not OGSA.

Acknowledgments
This research is supported by 02/22/20.10.2011 funded by the Bulgarian
Science Fund.
References
[1] OMG, Business Process Model and Notation (BPMN). Version 2.0, 3 January 2011,
http://www.omg.org/spec/BPMN/2.0
[2] OASIS, Web Services Business Process Execution Language Version 2.0, OASIS Standard,
11 April 2007, http://docs.oasis-open.org/wsbpel/2.0/OS/wsbpel-v2.0-OS.pdf
[3] J. vom Brocke & M. Rosemann (eds.). Handbook on Business Process Management: Strategic
Alignment, Governance, People and Culture (International Handbooks on Information Systems).
Vol. 1. Berlin, Springer, 2010.
[4] Workflow Management Coalition. The Workflow Reference Model. 19 January 1995,
http://www.wfmc.org
[5] IBM, WebSphere Lombardi Edition, http://www-01.ibm.com/software/integration/lombardi-
edition
[6] IBM, WebSphere Business Modeler Advanced,
http://www-01.ibm.com/software/integration/webphere-business-modeler/advanced/features
[7] IBM, WebSphere Process Server, http://www-01.ibm.com/software/integration/wps
[8] K. Holley, A. Arsanjani. 100 SOA Questions, Asked and Answered. Pearson Education Inc.,
Prentice Hall, Upper Saddle River, New Jersey, 2011.
[9] OGF, GFD-I.080, The Open Grid Services Architecture, Version 1.5, 24 July 2006,
http://www.ogf.org/documents/GFD.80.pdf

ATLAS TIER 3 in Georgia

A. Elizbarashvili
Ivane Javakhishvili Tbilisi State University, Georgia

PC farm for ATLAS Tier 3 analysis
Arrival of ATLAS data is imminent. If experience from earlier experiments is any
guide, it is very likely that many of us will want to run analysis programs over a set of data
many times. This is particularly true in the early period of data taking, when many things
need to be understood. It is also likely that many of us will want to look at rather detailed
information in the first data, which means large data sizes. Couple this with the large number
of events we would like to look at, and the data analysis challenge appears daunting.
Of course, Grid Tier 2 analysis queues are the primary resources to be used for user
analyses. On the other hand, it is the usual experience from previous experiments that analyses
progress much more rapidly once the data can be accessed under local control, without the
overhead of a large infrastructure serving hundreds of people.
However, even as recently as five years ago it was prohibitively expensive (both in
terms of money and people) for most institutes not already associated with a large computing
infrastructure to set up a system to process a significant amount of ATLAS data locally. This
has changed in recent years. It is now possible to build a PC farm with significant ATLAS
data processing capability for as little as $5-10k and a minor commitment for setup and
maintenance. This has to do with the recent availability of relatively cheap large disks and
multi-core processors.
Let us do some math. 10 TB of data corresponds roughly to 70 million Analysis Object
Data (AOD) events or 15 million Event Summary Data (ESD) events. To set the scale,
70 million events correspond approximately to a 10 fb⁻¹ sample of jets above 400-500 GeV in
p_T and a Monte Carlo sample which is 2.5 times as large as the data. A relatively
inexpensive processor such as the Xeon E5405 can run a typical analysis Athena job over AODs
at about 10 Hz per core. Since the E5405 has 8 cores per processor, 10 processors will be able
to handle 10 TB of AODs in a day. Ten PCs is affordable. The I/O rate, on the other hand, is a
problem. We need to process something like 0.5 TB of data every hour. This means we need
to ship ~1 Gbit of data per second. Most local networks have a theoretical upper limit of
1 Gbps, with actual performance being quite a bit below that. An adequate 10 Gbps network
is prohibitively expensive for most institutes.
Enter distributed storage. Fig. 1A shows the normal cluster configuration, where the
data is managed by a file server and distributed to the processors via a Gbit network. Its
performance is limited by the network speed and falls short of our requirements. Today,
however, we have another choice, due to the fact that we can now purchase multi-TB
disks routinely for our PCs. If we distribute the data among the local disks of the PCs, we
reduce the bandwidth requirement by the number of PCs. If we have 10 PCs (10 processors
with 8 cores each), the requirement becomes 0.1 Gbps. Since the typical access speed for a
local disk is > 1 Gbps, our needs are safely under the limit. Such a setup is shown in Fig. 1B.
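The arithmetic behind these estimates can be restated in a few lines of Python; all input numbers below are the ones quoted in the text, and the split into ten 8-core nodes is the stated assumption.

# Back-of-the-envelope check of the numbers quoted above (illustrative only).
data_tb = 10.0               # analysis sample size in TB
events = 70e6                # AOD events in that sample
rate_per_core = 10.0         # Athena AOD analysis rate, Hz per core
cores = 10 * 8               # ten boxes with 8 cores each

processing_time_s = events / (rate_per_core * cores)
io_rate_gbps = data_tb * 8e3 / processing_time_s      # Gbit/s needed to feed all cores

print(f"time to process the sample : {processing_time_s / 3600:5.1f} h")
print(f"aggregate input rate       : {io_rate_gbps:4.2f} Gbit/s")
print(f"per node with local disks  : {io_rate_gbps / 10:4.2f} Gbit/s")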




First activities on the way to a Tier-3 center in the ATLAS Georgian group
The local computing cluster (14 CPUs, 800 GB HDD, 8-16 GB RAM, one workstation
and 7 personal computers) was constructed by Mr. E. Magradze and Mr. D. Chkhaberidze
at the High Energy Physics Institute of Ivane Javakhishvili Tbilisi State University (HEPI TSU). The
local computing cluster was created from the computing facilities available at HEPI TSU with the aim of
enhancing the computational power (resources). The scheme of the cluster network is shown
in Fig. 2.



The Search for and Study of Rare Processes Within and Beyond the Standard Model at the
ATLAS Experiment of the Large Hadron Collider at CERN.


Fig. 1. A. Centralized data storage; B. Distributed data storage
Fig. 2. Scheme of cluster at High Energy Physics Institute of TSU

INTERNATIONAL SCIENCE & TECHNOLOGY CENTER (ISTC); Grant G-1458 (2007-
2010)
Leading Institution : Institute of High Energy Physics of I. Javakhishvili Tbilisi State
University (HEPI TSU), Georgia.
Participant Institution: Joint Institute for Nuclear Research (JINR), Dubna, Russia.
Participants from IHEPI TSU: L. Chikovani (IOP), G. Devidze (Project Manager),
T. Djobava, A. Liparteliani, E. Magradze,
Z. Modebadze, M. Mosidze, V. Tsiskaridze

Participants from JINR: G. Arabidze, V. Bednyakov, J. Budagov (Project
Scientific Leader),
E. Khramov, J. Khubua, Y. Kulchitski, I. Minashvili,
P. Tsiareshka
Foreign Collaborators: Dr. Lawrence Price, (Senior Physicist and former
Director of the High Energy Physics Division, Argonne
National Laboratory, USA)
Dr. Ana Maria Henriques Correia (Senior Scientific
Staff of CERN, Switzerland)

G-1458 Project Scientific Program:
1. Participation in the development and implementation of the Tile Calorimeter Detector
Control System (DCS) of ATLAS and further preparation for phase II and III
commissioning,
2. Test beam data processing and analysis of the combined electromagnetic liquid-argon
and hadronic Tile Calorimeter set-up exposed to the electron and pion beams of
1-350 GeV energy from the SPS accelerator of CERN,
3. Measurements of the top quark mass in the dilepton and lepton+jets channels using the
transverse momentum of the leptons with the ATLAS detector at LHC/CERN,
4. Search for and study of the FCNC top quark rare decays t → Zq and t → Hq (where
q = u, c; H is the Standard Model Higgs boson) at the ATLAS experiment (LHC),
5. Theoretical studies of the prospects of the search for traces of large extra dimensions at
the ATLAS experiment in FCNC processes,
6. Study of the possibility of a Supersymmetry observation at ATLAS in the mSUGRA-predicted
process gg for the EGRET point.


ATLAS Experiment Sensitivity to New Physics
Georgian National Scientific Foundation (GNSF); Grant 185
Participating Institutions:
Leading Institution: Institute of High Energy Physics of I. Javakhishvili
Tbilisi State University (HEPI TSU), Georgia
Participant Institution: E. Andronikashvili Institute of Physics (IOP)
Participants from IHEPI TSU: G. Devidze (Project Manager), T. Djobava (Scientific
Leader), J. Khubua, A. Liparteliani, Z. Modebadze,
M.Mosidze, G. Mchedlidze, N. Kvezereli
Participants from IOP: L. Chikovani, V. Tsiskaridze, M. Devsurashvili,
D. Berikashvili, L. Tepnadze, G. Tsilikashvili,
N. Kakhniashvili
The cluster was constructed on the basis of PBS (Portable Batch System) software on a
Linux platform, and the Ganglia software was used for monitoring. All nodes were
interconnected using gigabit Ethernet interfaces.
The required ATLAS software was installed on the working nodes in an SLC 4 environment.
The cluster has been tested with a number of simple tests, and tasks studying various
processes of top quark rare decays via Flavour Changing Neutral Currents, t → Zq (q = u, c quarks),
t → Hq → bb̄q, t → Hq → γγq and t → Hq → WW*q (in top-antitop pair production), have been run on the cluster.
Signal and background process generation, fast and full simulation, reconstruction and analysis
have been done in the framework of the ATLAS experiment software ATHENA (L. Chikovani,
T. Djobava, M. Mosidze, G. Mchedlidze).

Activities at the Institute of High Energy Physics of TSU (HEPI TSU):
PBS consists of four major components (the working model is shown in Fig. 3):
- Commands: PBS supplies both command-line commands and a graphical interface.
These are used to submit, monitor, modify, and delete jobs. The commands can be
installed on any system type supported by PBS and do not require the local presence
of any of the other components of PBS. There are three classifications of commands.
- Job Server: The Job Server is the central focus for PBS. Within this document, it is
generally referred to as the Server or by the execution name pbs_server. All
commands and the other daemons communicate with the Server via an IP network.
The Server's main function is to provide the basic batch services such as
receiving/creating a batch job, modifying the job, protecting the job against system
crashes, and running the job (placing it into execution).
- Job Executor: The job executor is the daemon which actually places the job into
execution. This daemon, pbs_mom, is informally called Mom, as it is the mother of all
executing jobs.
- Job Scheduler: The Job Scheduler is another daemon which contains the site's policy
controlling which job is run, and where and when it is run. Because each site has its
own ideas about what is a good or effective policy, PBS allows each site to create its
own Scheduler.



Activities at the Institute of High Energy Physics of TSU (HEPI TSU):
- the Athena software releases 14.1.0 and 14.2.21 were installed on that batch cluster,
- the system was configured for running the software in batch mode, and the cluster
was used at several stages of the above-mentioned ISTC project,
- the system was also used as a file storage.
An example of a PBS batch-job file for Athena 14.1.0 is shown in Fig. 4.
Fig. 3. PBS working schema
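The batch-job file of Fig. 4 is not reproduced in the text; purely as an illustration of the general shape of such a submission, the hedged Python sketch below writes and submits a minimal PBS script for an Athena job (queue name, resource limits, paths and the release-setup command are placeholders, not the actual HEPI TSU configuration).

import subprocess, textwrap

# Hedged illustration only: a minimal PBS batch script for an Athena job.
job_script = textwrap.dedent("""\
    #PBS -N athena_analysis
    #PBS -q batch
    #PBS -l nodes=1:ppn=1,walltime=12:00:00
    #PBS -j oe
    cd $PBS_O_WORKDIR
    source setup_athena_14.1.0.sh       # placeholder for the release setup
    athena.py MyAnalysis_jobOptions.py  # placeholder job options
    """)

with open("athena_job.pbs", "w") as f:
    f.write(job_script)

try:
    subprocess.run(["qsub", "athena_job.pbs"], check=True)   # submit to pbs_server
except FileNotFoundError:
    print("qsub not available here; the script was only written to athena_job.pbs")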




Plans to modernize the network infrastructure
It is planned to rearrange the created the existing computing cluster into ATLAS Tier
3 cluster. But first of all TSU must have the corresponding network infrastructure.
Nowadays the computer network of TSU comprises 2 regions (Vake and Saburtalo).
Each of these two regions is composed of several buildings (the first, second, third, fourth,
fifth, sixth and eighth in Vake, and Uptown building (tenth), institute of applied mathematics,
TSU library and Biology building (eleventh) in Saburtalo). Each of these buildings is
separated from each other by 100 MB optical network. The telecommunication between the
two regions is established through Internet provider the speed of which is 1 000 MB (Fig. 5).


Servers and controllable network facilities are predominantly located in the Vake region
network. Electronic mail, domain systems, web hosting, databases, distance learning and other
services are provided at TSU. Students, administrative staff members, academic staff
members, and research and scientific units at TSU are the users of these servers. There are 4 (four)
Internet resource centers and several learning computer laboratories at TSU. The scientific
research is also supported by network programs. The total number of users is 2500 PCs. The
diversity of users implies a diversity of network protocols and calls for maximum
speed, security and manageability of the network.

Fig. 4. PBS batchjob example
Fig. 5. TSU existing network topology
Initially, the TSU network consisted only of dozens of computers scattered
throughout different faculties and administrative units. Besides, there was no unified
administrative system and no mechanisms for further development, design and implementation. This
has resulted in a flat deployment of the TSU network.
This type of network:
- does not allow setting up sub-networks, and Broadcast Domains are hard to control;
- makes the formation of Access Lists for various user groups complicated;
- makes it hard to identify and eliminate damage in each separate network;
- makes it almost impossible to prioritize the traffic and the quality of service (QoS).
Because there is no direct connection between the two above-mentioned regions, it is
impossible to set up an Intranet at TSU. In the existing conditions it would have been possible
to set up an Intranet by using VPN technologies. However, its realization requires relevant
equipment with special accelerators in order to establish a 200 Mbit/s connection, and this
is equipment that TSU does not possess.
The reforms in the learning and scientific processes demand mobility and
scalability of the computer network. This can be accomplished by using VLAN
technologies; however, in this case too, the absence of relevant switches hinders the
implementation.
The planned modern network topology of TSU is shown in Fig. 6, with the modern
network equipment (Fig. 7) and the cable system structure for each building (Fig. 8).




Fig. 6. TSU planned network topology






With all of the above-mentioned devices implemented, TSU will have a centralized, high-speed, secure and optimized network system.
Fig. 7. TSU planned network topology
Fig. 8. Network cable system structure for TSU buildings

Improving TSU network security: traffic between the local and global networks will be controlled through network firewalls. Communication between sub-networks will be regulated through access lists.
Improving communication among TSU buildings: the main connections among the ten TSU buildings are established through fiber optic cables and Gigabit Interface Converters (GBIC). These facilities increase the bandwidth up to 1 Gbit/s.
Improving internal communication in every TSU building: internal communications will be established through Layer-3 multiport switches, which allow broadcasts to be reduced as much as possible by configuring virtual local networks (VLAN). The bandwidth will increase up to 1 Gbit/s.
Providing network mobility and management: in administrative terms, it will be possible to monitor the overall network performance as well as to provide prioritization analysis for each sub-network, host or server.

INSTALLING THE TIER 3g/s SYSTEM AT TSU. The ATLAS Tier-3s model is shown in Fig. 9.
ATLAS Tier-3s


The minimal requirement is on local installations, which should be configured with Tier-3 functionality:
A Computing Element known to the Grid, in order to benefit from the automatic distribution of ATLAS software releases:
Fig. 9. Atlas Tier-3s model

Needs >250 GB of NFS disk space mounted on all WNs for ATLAS software,
Minimum number of cores to be worth the effort is under discussion (~40?),
An SRM-based Storage Element, in order to be able to transfer data automatically from the
Grid to the local storage, and vice versa:
Minimum storage dedicated to ATLAS depends on local user community (20-40 TB?),
Space tokens need to be installed,
LOCALGROUPDISK (>2-3 TB), SCRATCHDISK (>2-3 TB), HOTDISK (2 TB):
Additional non-Grid storage needs to be provided for local tasks (ROOT/PROOF).
The local cluster should have the installation of:
A Grid User Interface suite, to allow job submission to the Grid,
ATLAS DDM client tools, to permit access to the DDM data catalogues and data transfer
utilities,
The Ganga/pAthena client, to allow the submission of analysis jobs to all ATLAS
computing resources.
The Tier 3g work model is shown in Fig. 10.


Fig. 10. Atlas Tier 3g work model

JINR document server: current status and future plans

I. Filozova, S. Kuniaev, G. Musulmanbekov, R. Semenov, G. Shestakova,
P. Ustenko, T. Zaikina
Joint Institute for Nuclear Research, Dubna, Russia

1. Introduction
Nowadays various institutions and universities around the world create their own repositories, depositing in them different kinds of scientific and educational documents and making them open to the world community. Open Access to Research is a way to make scientific results available to the whole scientific and educational community via the Internet. Fig. 1 shows the annual growth of the number of repositories and of the records deposited in them, according to statistics given by the Registry of Open Access Repositories (ROAR, http://roar.eprints.org). Today there are nearly 2000 Open Access (OA) repositories with scientific research documents created within the framework of the Open Archives Initiative (OAI) [1]. A similar kind of initiative, Open Educational Resources, has been put forward in education as well [2]. Open Access in science is a way to collect and preserve the intellectual output of a scientific organization and to disseminate it over the world. This is the aim of the Open Access repository of the Joint Institute for Nuclear Research, the JINR Document Server (JDS), which started operation in 2009 [3, 4]. In this paper we describe some peculiarities of filling the repository and depositing documents, the possibilities of visualization of search and navigation, and the ways of further development of the information service at JDS.



Fig. 1. Growth of the repositories and records number over the world

2. JINR Document Server Collections
Building the institutional repository has as its objectives to make the scientific and technical results of JINR researchers accessible to the international scientific community, to increase the level of informational support of JINR employees by granting access to other scientific OA archives, and to estimate the efficiency of the scientific activity of JINR employees. JDS has been built as an OAI-compliant repository to realize these goals. The JDS functionality, supported by the CDS Invenio software [5], covers all the aspects of modern digital library management. JDS, integrated into the global network of repositories ROAR, makes its content available to everyone anywhere at any time. The content of JDS is composed of the following objects:
1. The research and scientific-related documents of the following types:
Publications issued in co-authorship with JINR researchers,
Archive documents that describe all the essential stages of the JINR research activity,
2. Tutorials and various educational video, audio and text materials for students and young scientists,
3. Documents providing informational support for scientific and technological research performed at JINR.
As a digital library, JDS consists of two parts: digital collections and service tools. The objects stored in the repository are grouped into collections: published articles, preprints, books, theses, conference proceedings, presentations and talks, reports, dissertation abstracts, clippings from newspapers and magazines, photographs, audio and video materials [6]. All these collections are arranged hierarchically into two trees: the basic, or regular, tree and the virtual one. The basic tree (left column in Fig. 2) in the JDS is formed according to a classification feature, the genre of scientific publications, while the virtual tree is formed by the subject of publications (right column in Fig. 2).
[Fig. 2 reproduces the JDS collection trees. The basic tree ("Narrow by collection") contains Articles & Preprints (Articles, Preprints, Conference Papers), Books and Reports, Conferences, Presentations and Talks, INDICOSEARCH, Handbooks & Manuals, Theses, Multimedia and Bulletins. The virtual subject tree ("Focus on") contains JINR Articles & Preprints, JINR Conferences, JINR Annual Reports, High Energy Experiments in JINR, Heavy Ion Physics Experiments in JINR, Non-accelerator Neutrino Physics & Astrophysics and External Experiments.]
Fig. 2. JDS Collections in basic and subject trees

The subject collections allow one to perform a selective search. It may be advantageous to present a different, orthogonal point of view on the nodes of the regular tree, based on other attributes. The developed user interface of JDS provides a wide range of information services: search and navigation, creation of groups by interest, saving of search results, individual and group bookshelves, deposit of manuscripts and arrangement of discussions on them, and sending out notices and messages.
The main issue for a newly created repository is how to fill its content with documents of interest issued earlier. We used various methods of filling and updating the JDS content with the publications of JINR authors: automatic data collection (harvesting) from arXiv.org, the CERN Document Server (CDS) and other OAI-compliant archives, and semi-automatic collection of documents from the retrieval databases SPIRES, ADS and MathSciNet (Fig. 3).
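To make the harvesting step concrete, the following minimal sketch walks an OAI-PMH ListRecords response and follows resumption tokens; the endpoint URL and the use of the oai_dc metadata format are illustrative assumptions, since the actual harvesting in JDS is handled by the repository software itself.

import urllib.parse
import urllib.request
import xml.etree.ElementTree as ET

OAI = "{http://www.openarchives.org/OAI/2.0/}"

def list_records(base_url, metadata_prefix="oai_dc"):
    """Yield (identifier, metadata element) pairs from an OAI-PMH repository,
    following resumptionToken pagination."""
    params = {"verb": "ListRecords", "metadataPrefix": metadata_prefix}
    while True:
        url = base_url + "?" + urllib.parse.urlencode(params)
        root = ET.parse(urllib.request.urlopen(url)).getroot()
        for rec in root.iter(OAI + "record"):
            ident = rec.findtext(OAI + "header/" + OAI + "identifier")
            yield ident, rec.find(OAI + "metadata")
        token = root.findtext(OAI + "ListRecords/" + OAI + "resumptionToken")
        if not token:
            break
        params = {"verb": "ListRecords", "resumptionToken": token}

if __name__ == "__main__":
    # example endpoint, assumed here to be arXiv's public OAI-PMH interface;
    # print the identifiers of the first few harvested records
    for i, (ident, _) in enumerate(list_records("http://export.arxiv.org/oai2")):
        print(ident)
        if i >= 4:
            break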



Fig. 3. JDS Data Sources

The second point is how to deposit manuscripts, preprints and new publications of JINR authors. Somewhat earlier the Personal INformation (PIN) database was developed at JINR, where research workers deposit their personal information. In addition to other personal data (affiliation, collaborations, participation in various projects, experiments, teaching, grants, etc.) PIN includes their publications (bibliography and full texts). In order not to force the authors to deposit their manuscripts and publications twice (in PIN and in JDS), we set up a communication channel to import data from PIN to JDS. It delivers PIN data in MARCXML format, which are then uploaded into JDS. Furthermore, preprints issued by the JINR Publishing Department (bibliography and full texts) are uploaded into JDS without the authors' participation. Nevertheless, authors can, if they wish, deposit their manuscripts and publications in JDS in the self-archiving or proxy mode (Fig. 3).
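As an illustration of the MARCXML delivery format, the sketch below renders a single hypothetical PIN publication entry as a minimal MARC 21 record; the choice of fields (100/700 for authors, 245 for the title, 856 for the full-text link) follows common MARC usage and is not taken from the actual PIN-to-JDS mapping, which is not described here.

import xml.etree.ElementTree as ET

def datafield(record, tag, subfields):
    # one MARC datafield with its subfields
    df = ET.SubElement(record, "datafield", {"tag": tag, "ind1": " ", "ind2": " "})
    for code, value in subfields:
        ET.SubElement(df, "subfield", {"code": code}).text = value

def pin_to_marcxml(entry):
    """entry: dict with 'title', 'authors' (list of names) and 'fulltext_url'."""
    record = ET.Element("record", {"xmlns": "http://www.loc.gov/MARC21/slim"})
    datafield(record, "100", [("a", entry["authors"][0])])    # first author
    for author in entry["authors"][1:]:
        datafield(record, "700", [("a", author)])             # further authors
    datafield(record, "245", [("a", entry["title"])])         # title
    datafield(record, "856", [("u", entry["fulltext_url"])])  # full-text link
    return ET.tostring(record, encoding="unicode")

if __name__ == "__main__":
    print(pin_to_marcxml({
        "title": "Example preprint title",
        "authors": ["Ivanov, I.", "Petrov, P."],
        "fulltext_url": "http://example.org/preprint.pdf",  # hypothetical URL
    }))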
The INDICOSEARCH collection with its subcollections is intended for searching in the administrative database ADMIN, which comprises various events such as committees, lectures, reports, meetings, conferences, etc. The database was created and is managed by the INDICO software developed at CERN [7]. Since Indico provides an OAI-compliant output format, it is possible to harvest data from this database.
3. Visualization of Search and Navigation
The design and usage of visual interfaces to digital libraries is becoming an active and challenging field of information visualization. Visualization helps humans mentally organize, electronically access, and manage large volumes of information and provides a valuable service to digital libraries. Readers, while looking for relevant documents, need new tools that can help them identify and manage enormous amounts of information. Visual interfaces to digital libraries apply powerful data analysis and information visualization techniques to generate visualizations of large document sets. The aim of visualization is to make the usage of digital library resources more efficient by reducing search time, to provide a better understanding of a complex data set, to reveal relationships among objects (authors, documents), and to allow the data to be viewed from different perspectives. To attain this aim
there are three directions to be explored: i) identification of a composition of search results,
understanding of interrelation of retrieved documents, and improvement of the search engine;
ii) overview of the coverage of a digital library by facilitating browsing; iii) visual
presentation of user interaction data in relation to available documents and services.
Visualization of search and navigation allows analyzing the search results more efficiently.
Thus a visual interface for search and navigation in a digital library should meet the following
requirements:
plain representation of search results for better identification,
finding of interrelations among documents,
improved search engine,
graphical vision and navigation in a digital library,
mapping of users' operations with available documents with the aim of improving the functionality of a digital library.
In the last decades a large number of information visualization techniques have been developed, allowing visualization of search and navigation in digital libraries. With a view to their usage in JDS, the following tools designed for information visualization were analyzed:
Java Universal Network / Graph Framework (JUNG),
JGraphT,
JavaScript InfoVis Toolkit (JIT),
Graphviz,
Prefuse (set of software tools for creating rich interactive data visualizations).
The Prefuse package, which meets all our requirements, has been chosen to visualize the JDS resources [8]. Two visual prototypes were developed with Prefuse on a part of the real JDS data. The tree-map method proposed in [9] is applied to display the collection of JDS documents by subject (Fig. 4).
Each rectangle (collection) of the tree-map has a certain color; the larger the area of the rectangle, the more articles there are in the topic. The next level of the hierarchy displays records, and the final one displays publications in the form of interactive rectangles. The visual representation also contains the search path and the found document, which is highlighted. The publication rectangle is a link to the publication itself. When the user points the cursor at a publication, information about the author and the publication date is displayed. The visualization created using the threaded tree allows one to explore the repository content by subject. The search results are shown in a lighter tone, which allows one to estimate visually the number of publications that the search finds. The view supports zooming and panning, which allows more detailed information to be displayed.




Fig. 4. Visualization of JDS Resources: Tree Map Method

Usage of the radial graph method [10] demonstrates the relationships between authors and co-authors of publications (Fig. 5). The author appears at the center, the co-authors on the second concentric circle, and the co-authors of co-authors on the third one. To avoid oversaturation, only three levels of the hierarchy are used in rendering. The author and his co-authors are highlighted with an appropriate color. When the cursor selects an author or co-author, information about the number of his publications is displayed.
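The data behind this view can be illustrated with a small sketch: starting from a list of publications (each just a list of author names, invented here), it collects the two outer levels of the radial graph, the co-authors and the co-authors of co-authors, i.e. exactly the three-level depth used in the rendering.

def coauthor_levels(publications, author):
    """publications: iterable of author-name lists.
    Returns (co-authors, co-authors of co-authors) for the given author."""
    direct = set()
    for authors in publications:
        if author in authors:
            direct.update(a for a in authors if a != author)
    indirect = set()
    for authors in publications:
        if direct.intersection(authors):
            indirect.update(authors)
    indirect -= direct | {author}
    return direct, indirect

if __name__ == "__main__":
    pubs = [["A", "B", "C"], ["B", "D"], ["C", "E"], ["E", "F"]]  # invented data
    second, third = coauthor_levels(pubs, "A")
    print("second circle:", sorted(second))  # ['B', 'C']
    print("third circle:", sorted(third))    # ['D', 'E']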



Fig. 5. Visualization of JDS Resources: Radial Graph Method
4. JDS: What Next?
To minimize the authors' efforts in depositing their publications, we are planning to arrange a reverse channel delivering documents (bibliography and full texts) from JDS to PIN. The work on the visualization interface continues with the visualization of search results, statistics and monitoring.
JDS users may join groups by interest and are able to interact with each other through the WebGroup service tool and to discuss current publications using the WebComment module. The WebComment module provides a socially oriented tool for ranking the discussed documents by readers. WebMessage facilitates clustering of users into groups via a web forum. JDS has a custom module, WebStat, providing statistics collection for several parameters such as the number of calls, downloads, citations, etc. Thus, all these service tools (WebStat, WebBasket, WebGroup, WebMessage, WebComment) help to form a social network within the scientific community in the framework of the information system. The following elements for visualization of the user groups as a social network will be developed: browsing the groups and their members, browsing the detailed information about group members, browsing relations between the groups, and visual search by user.
The results of research, scientific and engineering efforts represented in publications have semantic relations between them via the citation mechanism. Description of these relationships and their properties opens up new possibilities for studying the document corpus of digital libraries. Publications, as well as collections of publications, contain other semantic linkages which reflect the logic of the author's thought on certain subjects. These linkages can be studied and described. There are also other types of linkages, for example those relating to the personal profile of the author of publications: his affiliation, participation in collaborations, experiments, projects, etc. Thus, the personal profiles of authors are formed in the framework of the digital library. These profiles can serve as a basis of a scientific social network. By analyzing these linkages, we can obtain important information about the motivations of these interactions; for example, the intensity of relations between authors working in some scientific area (subject, project) illustrates the degree of activity in this area. We see the following directions of JDS development: creation of a JINR scientific social network and elaboration of semantic search and navigation.

References
[1] Open Archives Initiative, http://www.openarchives.org
[2] Open Educational Resources, http://en.wikiversity.org/wiki/Open_educational_resources
[3] I.A. Filozova, V.V. Korenkov, G. Musulmanbekov. Towards Open Access Publishing at JINR. Proc. of
XXII International Symposium on Nuclear Electronics and Computing (NEC`2009). Dubna: JINR, 2010.
JINR, E10, 11-2010-22. pp. 124-128.
[4] V. Borisovsky et al. On Open Access Archive for publications of JINR staff members. //V. Borisovsky,
V. Korenkov,S. Kuniaev, G. Musulmanbekov, E. Nikonov, I. Filozova. Proc. of XI Russian Conference
RCDL'2009. Petrozavodsk: Karelia Scientific Center of Russian Academy Science, 2009. pp. 451-458 (in
Russian).
[5] CDS Invenio, http://invenio-software.org/
[6] V.F. Borisovsky et al. Open Access Archive of scientific publications: JINR Document Server
//V. Borisovski, I. Filozova, S. Kuniaev, G. Musulmanbekov, P. Ustenko,T. Zaikina, G. Shestakova. Proc.
of XII Russian Conference RCDL'2010. Kazan: Kazan State University, 2010, pp. 162-167 (in Russian).
[7] INDICO, http://indico-software.org/
[8] J. Heer, S.K. Card, J.A. Landay. Prefuse: a toolkit for interactive information visualization. Portland
Oregon: SIGCHI Conference on Human Factors in Computing Systems, 2005.
[9] B. Jonson, B. Shneiderman. Treemaps: a space-filling approach to the visualization of hierarchical
information structure. Proc. of the Second Internat. IEEE Visualization Conf., 1991.
[10] S.K. Card, J.D. Mackinlay, B. Shneiderman. Readings in Information Visualization: Using Vision to
Think. San Francisco: Morgan Kaufmann, 1999.
Upgrade of Trigger and Data Acquisition Systems for the LHC
Experiments

N. Garelli
CERN, European Organisation for Nuclear Research, Geneva, Switzerland

The Large Hadron Collider (LHC) and its experiments demonstrated high performance over the first two years of operation, producing many physics results. However, the current beam conditions are still far from the nominal ones, which will be reached in the coming years after the shutdown periods that are mandatory to upgrade and maintain the accelerator complex. Further, there is a plan to increase the delivered instantaneous luminosity beyond the design value, up to 5 times of it, i.e. 5x10^34 cm^-2 s^-1, which will allow the experiments to collect a much higher integrated luminosity than initially anticipated, opening many new physics programs. New requirements on the read-out and trigger systems of the LHC experiments arise, and thus an upgrade plan is being established. Moreover, technology improves, so that better performing detectors can be adopted, and some components need to be replaced since they have been damaged by radiation.
This talk presents, after a brief introduction to the present status of the read-out systems of the LHC experiments, an overview of the various R&D upgrade possibilities for the various phases from now to the highest-luminosity LHC period. In particular, we will focus on the evolution of the DAQ systems to cope with the new trigger requirements and on the integration of sub-detectors with new back-end electronics.

1. Introduction
The Large Hadron Collider (LHC) project has been built at CERN (European
Organization for Nuclear Research) to explore the TeV energy scale in order to increase the
human knowledge about fundamental particles. The LHC physics program foresees proton-proton and heavy-ion collisions; however, in this article I will focus on the proton-proton collision program only. At the LHC protons collide with a nominal center-of-mass energy of 14 TeV and a bunch-crossing frequency of 40 MHz at four interaction points, at which particle detectors have been built.
The LHC has been successfully providing collisions since March 2010, but the nominal design parameters have not yet been reached, and this will require a major upgrade of the accelerator complex. Furthermore, the LHC has the possibility of increasing the luminosity beyond the design value, guaranteeing a large discovery range.
In this article I will briefly describe the LHC program to reach the design luminosity
and beyond in the next decades [1]. Further, I will describe how the experiments will evolve
to cope with the more demanding constraints imposed by high integrated and instantaneous
luminosity in particular for the trigger and the data acquisition systems. More details about the
upgrade plans will be given for three experiments only: ATLAS, CMS and LHCb.

2. The Large Hadron Collider: design and beyond
The LHC has been built in the tunnel which hosted the Large Electron Positron collider (LEP), with a circumference of 27 km, about 100 m underground. It is composed of superconducting magnets cooled to 1.9 K with about 140 tons of liquid helium, generating a magnetic field of about 8.4 T.
The LHC is operating at so far unexplored conditions, thus commissioning and testing periods are needed before reaching the design conditions. The yearly schedule is roughly arranged in two-month data-taking periods interspersed with one-month slots for refurbishment, maintenance and testing. Every three years of operation, a whole-year shutdown is needed to carry out major component upgrades and to deliver higher luminosity.
The first long shutdown, named the consolidation phase, will occur in 2013; its main focus will be the repair of the joints between the superconducting magnets. An electrical fault in one of these buses was the cause of the accident of September 19, 2008: as a precaution, a lower center-of-mass energy (7 TeV) with respect to the nominal one has been delivered. After the first long shutdown period, the design center-of-mass energy of 14 TeV and the peak luminosity of 10^34 cm^-2 s^-1 will be reached.
Other long shutdown periods are foreseen in 2017 and 2021 and are referred to as Phase 1 and Phase 2, respectively. During Phase 1 a new collimation system will be deployed, since it will become necessary to protect the machine from the higher losses. Further, the new injector system (Linac4) will be commissioned. After Phase 1 a peak luminosity of 2x10^34 cm^-2 s^-1 will be reached. During Phase 2, instead, new bigger quadrupoles and new radio-frequency cavities will be deployed, allowing the maximum possible LHC instantaneous luminosity of 5x10^34 cm^-2 s^-1 to be reached. By the end of 2030, 3000 fb^-1 of data will have been delivered: 1000 times more data than today.

3. The LHC experiments: design and beyond
The original design and the future upgrade program of the LHC experiments depend
on the LHC time schedule, the expected instantaneous and total delivered luminosity, and
more in general on the beam conditions.
By design, the interesting collisions at the LHC are rare and always hidden within ~22 minimum-bias events at the design luminosity, the so-called pile-up. Since the pile-up increases with the peak luminosity, the scenario is going to get worse after the LHC upgrade: after Phase 2 about 100 minimum-bias events will be expected for every interesting collision. Originally, stringent requirements had already been imposed on the design of the experiments: fast electronics response to resolve individual bunch crossings, high granularity to avoid that a pile-up event hits the same detector element as the interesting event, and radiation-resistant components. On top of this, at LHC nominal conditions, O(10) TB/s of mostly minimum-bias data is produced, which is impossible to store: sophisticated trigger and Data AcQuisition (DAQ) systems have been developed to select and convey to local mass storage only the data interesting for analysis, at a rate of O(100) MB/s.
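The pile-up numbers quoted above follow from a simple relation, mu = L x sigma_inel / f_bc; the short check below reproduces them, assuming an inelastic proton-proton cross-section of about 80 mb (an assumed round value) and the 40 MHz bunch-crossing rate.

SIGMA_INEL_CM2 = 80e-27  # assumed inelastic pp cross-section, ~80 mb
F_BC_HZ = 40e6           # bunch-crossing frequency

def pileup(lumi_cm2s):
    """Average number of inelastic interactions per bunch crossing."""
    return lumi_cm2s * SIGMA_INEL_CM2 / F_BC_HZ

if __name__ == "__main__":
    print("design luminosity (1e34): mu ~ %.0f" % pileup(1e34))  # ~20, in line with the ~22 quoted
    print("after Phase 2 (5e34):     mu ~ %.0f" % pileup(5e34))  # ~100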
Each experiment is composed of multiple trigger levels: the first one, referred to as
Level-1, analyzes the information coming from the muon chambers and the calorimeters to
produce a coarse event selection, while the following ones, referred to as High Level Trigger
(HLT) are software-based.
The LHC upgrade plan for reaching luminosities beyond the design imposes new
working points and new challenges to the experiments. The higher pile-up will impose
stringent requirements on the pattern recognition system and on the first trigger level. A
simple increase of the Level-1 transverse-momentum threshold cannot be applied, since a lot of physics would be lost; thus more sophisticated decision criteria must be adopted, such as software algorithms running in the electronics or the exploitation of inner-tracking information already at this stage. Consequently, the decision time at Level-1 will increase
and a longer latency might be needed: the read-out systems of all the sub-detectors could be
replaced. New technologies could be exploited, both for the new Level-1 and for the read-out
technology.
While it would be impossible to install new calorimeter detectors due to reduced
budget, time and manpower, it will be necessary to replace the inner trackers. They will be
damaged by the accumulated dose and higher granularity will become fundamental to detect
interesting events. Higher detector granularity implies higher number of read-out channels
and thus an increased total event size: a larger amount of data will have to be treated by the
DAQ system and the network. Eventually, more complex reconstruction algorithms might be
required at the HLT and the possible higher output rate could be accommodated in case of
expanded global data storage.
Each upgrade plan aims to deploy a trigger system which guarantees good and flexible
data selection and a DAQ system which ensures high data taking efficiency. As of today, it is
very hard to precisely detail the upgrade schedule of the LHC experiments. However, in the
next chapters the major activities foreseen by ATLAS, CMS and LHCb are described.

4. The upgrade plans for the ATLAS experiment
A Toroidal LHC ApparatuS (ATLAS) [2] is a general-purpose experiment with about 90 million read-out channels. The ATLAS trigger and DAQ system (TDAQ) [3] reduces the event rate from the LHC nominal collision rate of 40 MHz down to 200 Hz. This is achieved with a three-level trigger system. The HLT is in fact composed of the second trigger level (Level-2) and the Event Filter (EF). The Level-2 has tight timing constraints and thus it accesses only a subset of the event data in the so-called Regions of Interest (RoIs). The RoIs are limited areas in the eta-phi plane defined by the Level-1. Normally, a RoI corresponds to
about 2% of the total event data. The EF analyzes the events selected by the Level-2 and
sends the accepted ones to the Sub-Farm Output (SFO). In the SFO the events are streamed
into local data files, which are asynchronously moved to the mass storage. The backbone of
the TDAQ system is composed of two Gigabit Ethernet networks. In the so-called Data
Collection area the data movement is organized around a push-pull architecture: the Read-Out
System (ROS) receives via about 1600 optical fibers the event fragments from the detector
and provides them on request to the Level-2 and the Event Builder (EB). The EB, which
decouples the two network domains, is composed of Sub-Farm Input (SFI) applications. They
merge all the data fragments to form ATLAS events and send them to the EF, via the second
network, the EF network. A schematic overview of the TDAQ system is depicted in Fig. 1.
During the LHC consolidation phase in 2013 the major activities will be:
An upgrade of the sub-detector read-out system to enable a Level-1 output of 100 kHz,
The addition of a new pixel detector layer, the Insertable B-Layer (IBL), built around
a new beam pipe. The current innermost layer of the pixel detector will have in fact
significant radiation damage which will heavily compromise its efficiency, thus a new
layer is needed,
The TDAQ farms and networks will also go through a consolidation phase, and in particular the replacement of the network cores will be mandatory, since the current ones will exceed their lifetime.
A plan for a revised TDAQ architecture is under consideration. The proposal foresees
to merge three farms (Level-2, EB, and EF) within a single homogeneous system, to simplify
the software and the maintenance. Nowadays, in fact, high expertise is required to balance the
CPU and the network resources on these farms, while an automated system balance is
envisaged. Further, a huge configuration has to be handled because of the two different
steering instances at Level-2 and EF, and the two separated networks. The new architecture
would allow using a single HLT instance and would easily fit with the possibility of merging
into one the two different networks.

Fig. 1. A schema of the trigger and DAQ system of the ATLAS experiment

During Phase 1 ATLAS will be focused on the Level-1 upgrade in order to cope with the expected pile-up at an instantaneous luminosity of 2x10^34 cm^-2 s^-1. In particular, the foreseen activities will be:
The installation of a new muon detector, the Small Wheel (SW). The muon precision chambers are expected to deteriorate with time, thus they will be replaced, possibly exploiting new and better performing technologies, such as Micromegas detectors. The
addition of a fourth muon detector layer will provide an additional trigger station, will
reduce the rate of fake signals, will improve the resolution of the transverse
momentum and will allow a resolution of 1 mrad on a level-1 track segment,
To provide increased calorimeter granularity,
To introduce the usage of a Level-1 topological trigger. A proposal still under
discussion foresees additional electronics to have a Level-1 trigger based on topology
criteria, which would keep it efficient at high luminosities. The consequences of this
choice will be longer latency and the need of developing common tools for
reconstructing topology both in muon and calorimeter detectors,
The usage of the Fast Track Processor (FTK). FTK will provide tracking for all L1-accepted events within O(25 μs) for the full silicon tracker. The pattern recognition will be done in associative memories, while the track fitting will be done in dedicated FPGAs.
Concerning Phase 2, ATLAS has not finalized a plan yet. However, major work is expected in three areas:
Development of a fully digital read-out of the calorimeter detectors for both the data and the trigger path. This would allow faster data transmission and would give the trigger access to the full calorimeter resolution, providing finer clusters and better electron identification,
Improve the Level-1 muon trigger. The current muon trigger logic assumes the tracks
to come from the interaction point and the resolution on the transverse momentum is
limited by the interaction point smearing. A proposal foresees to use the Monitored
Drift Tube chambers (MDT) information in the trigger logic, since a resolution
100 times better than that of the current trigger chambers (RPC) would be achieved, no need
for vertex assumptions would be required and the selectivity for high-pT muons
would be improved. The current limitation of this project is that the MDT read-out
system is serial and asynchronous, so a new fast read-out system has to be deployed,
Introduce a Level-1 track trigger. In 2021 a new inner detector will be installed. It will consist only of silicon sensors, providing a better resolution and a reduced occupancy
with respect to today. Combining silicon detectors tracks with the calorimeter data at
Level-1 will improve the electron selection, while a correlation with the muon
detector information will reduce the fakes. It would be possible to perform some b-
tagging also at the Level-1.

5. The upgrade plans for the CMS experiment
The Compact Muon Solenoid (CMS) [4] is a general purpose experiment with
characteristics similar to ATLAS.
The trigger and DAQ system [5] differs from the ATLAS one by being arranged in
eight independent slices containing all DAQ components and the logical interfaces to the
trigger system, having two event building stages, and only one HLT level. A schema of the
CMS trigger and DAQ system is shown in Fig. 2. On a Level-1 trigger accept, every Front End Driver (FED) sends data fragments via the SLINK to the Front-end Readout Link cards (FRL). In the first stage of the CMS Event-Builder (the Fed-Builder, implemented by a
commercial Myrinet network) these fragments are collected and grouped together to form
larger super fragments. Those are then distributed to eight Readout-Builders according to a
simple round-robin algorithm. Super fragments on average contain eight FRL data fragments.
The resulting 72 super fragments are then concatenated in the second stage of the CMS Event-
Builder (the RU-Builder implemented by a commercial Gigabit Ethernet switching network)
to form entire event data structures in the Builder Units (BUs). In the BUs the events are
analyzed by Filter Unit (FU) processes in order to find the HLT trigger decision. Triggered
events are transferred to the Storage Manager (SM) where they are temporarily stored until
they are transferred to the Tier-0 center. The HLT farm is implemented by a computer cluster,
while the SM by two PC racks attached to a Fiber Channel SAN.
In the consolidation phase in 2013 CMS will complete its design project of having a
fourth layer of forward muon chambers which will improve the trigger robustness and
preserve the low transverse-momentum threshold. In order to cope with more demanding trigger challenges, the processing power of the CMS HLT farm will be increased by a factor of three.
The requirements and the plans for Phase 1 are similar to the ATLAS ones [6]. All the
upgrades will require a coherent evolution of the DAQ system in order to cope with the new
design:
Due to the radiation damage, the innermost silicon tracker will be replaced. The new
pixel detector envisaged by CMS will be composed of four barrel layers and three
end-cap layers. The goal is to achieve better tracking performance, improve the b-
tagging capabilities and reduce the material using a new cooling system based on CO2
rather than C6F14,
In order to maintain the Level-1 rate below 100 kHz, a low latency, and good selection
over the data, tracking information will be added at the Level-1. In addition, a regional
calorimeter trigger will be introduced, exploiting more sophisticated clustering and
isolation algorithms to handle higher rates and complex events. A new infrastructure based on the Advanced Telecommunications Computing Architecture (ATCA) will be developed to increase bandwidth, maintainability and flexibility,
An upgrade of the hadron calorimeter detector is foreseen to use silicon
photomultipliers, which allow a finer segmentation of the readout in depth.
For CMS, too, the plans for Phase 2 are not yet finalized, but the silicon tracker will certainly be replaced. R&D projects for new sensors, new front-ends, high-speed links and the tracker geometry arrangement are ongoing. The new tracker is expected to have more than 200 million pixels and more than 100 million strips. As for ATLAS, CMS will need to add tracking information to the Level-1 trigger to cope with the high luminosity.


Fig. 2. A schema of the trigger and DAQ system of the CMS experiment
6. The upgrade plans for the LHCb experiment
The Large Hadron Collider beauty (LHCb) experiment [7] is a single-arm forward
spectrometer with about 300 mrad acceptance designed to perform flavor physics
measurements at the LHC, in particular precision measurements of CP violation and rare B-
meson decays. It has been designed to run with an average number of collisions per bunch crossing of about 0.5 and a number of bunches of about 2600, corresponding to an instantaneous luminosity of 2x10^32 cm^-2 s^-1. But in 2011 LHCb ran beyond design, coping with a luminosity of 3.3x10^32 cm^-2 s^-1. The design integrated luminosity was 5 fb^-1, but a new physics program has been put forward aiming at 50 fb^-1: interesting physics searches such as 1 GeV Majorana neutrinos and precision measurements such as the charm CPV could be done. Therefore, a major detector upgrade is foreseen.
LHCb reads-out 10 times more often than ATLAS and CMS to reconstruct secondary
decay vertices. Thus, the LHCb trigger and DAQ system has to cope with a very high rate of
small events (~55 kB). The first trigger level is named the L0 trigger and it is very efficient in reconstructing dimuon events, but removes half of the hadronic signals. All the trigger
candidates are stored in raw data and compared with the offline candidates in the HLT. The
HLT1 has tight CPU constraint (12 ms), reconstructs particles in the Vertex Locator detector
(VELO), and determines the position of the vertices. The HLT2 is responsible for the global
track reconstruction and searches for secondary vertices.
By 2017 the expected rates will be even higher and the L0 would remove all the hadronic signals. Thus, the upgrade plan foresees increasing the read-out rate to 40 MHz and eliminating the trigger limitations [8]. The L0 trigger will be removed and the Low Level Trigger (LLT) will be deployed: it will not simply reduce the rate as L0 does, but will enrich the selected sample. A major upgrade of the front-end, the read-out electronics and the DAQ system will be carried out. While the VELO will be replaced with a new detector, no major changes to the muon detectors and the calorimeters are foreseen.

7. Conclusion
The trigger and DAQ systems of the LHC experiments worked extremely well until
the end of 2011.
After the long LHC shutdown of 2017, beyond-design conditions will be reached; in particular there will be a significantly increased luminosity and a consequently increased pile-up. This will force the experiments to upgrade their detectors and read-out systems to work beyond the initial design. In particular, new inner trackers will have to be installed, since the current ones will be damaged by the received dose and tighter constraints will be imposed by the higher pile-up. In order to keep an efficient Level-1 trigger, more complex hardware selections will be applied and the system will have to deal with a longer latency. New read-out links providing higher bandwidth will be developed, and therefore the DAQ and the network systems will have to scale accordingly.
As of today it is difficult to define the upgrade strategy, due to the unstable schedule and because the available experts are operating and maintaining the current systems. However, many plans and R&D projects are ongoing and under discussion, and for sure the upgrade of the LHC experiments will be exciting.

References
[1] Chamonix 2011 Workshop on LHC Performance. Ed. C. Carli. CERN, 2011.
[2] ATLAS Collaboration and G. Aad et al. The ATLAS experiment at the CERN Large
Hadron Collider. J.Instrum.3 S08003, 2008.
[3] ATLAS Collaboration. ATLAS, High-Level Trigger, Data Acquisition and Controls:
Technical Design Report. CERN/LHCC/2003-022, Geneva, CERN, 2003.
[4] CMS Collaboration. The CMS experiment at the CERN LHC. J. Instrum. 3, 2008.
[5] CMS Collaboration. CMS trigger and data-acquisition project: Technical Design Report.
CERN, 2002.
[6] CMS Collaboration. Technical Proposal for the Upgrade of the CMS Detector Through
2020. CERN-LHCC-2011-006, 2011/01/14, http://cdsweb.cern.ch/record/1355706
[7] The LHCb Trigger System Technical Design Report. CERN LHCC 2003-031, September
2003.
[8] LHCb Collaboration. Letter of Intent for the LHCb Upgrade. CERN-LHCC-2011-001;
LHCC-I-018, CERN, 2011.
VO Specific Data Browser for dCache

M. Gavrilenko, I. Gorbunov, V. Korenkov, D. Oleynik, A. Petrosyan,
S. Shmatov
Joint Institute for Nuclear Research, Dubna, Russia

A tool for monitoring the disk space of a Tier-2 site based on the dCache system has been developed. A prototype of the monitoring system was deployed on the farm of the JINR CMS Regional Center to provide information on the status of the disk space of the JINR Tier-2.

The Worldwide LHC Computing Grid (WLCG) [1] is a global collaboration of more
than 140 computing centres in 35 countries, the 4 LHC experiments, and several national and
international grid projects. The mission of the WLCG project is to build and maintain a data
storage and analysis infrastructure for the entire high energy physics community that will use
the Large Hadron Collider at CERN [2]. The LHC was built to help scientists to answer key
unresolved questions in particle physics. E.g.: What is the origin of mass? Why do tiny
particles weigh the amount they do? Why do some particles have no mass at all?
One of the 4 major experiments at LHC is CMS [3]. The CMS experiment uses a
general-purpose detector to investigate a wide range of physics, including the search for the
Higgs boson, extra dimensions, and particles that could make up dark matter. To have a good
chance of producing a rare particle, such as a Higgs boson, a very large number of collisions
is required. Most collision events in the detector are "soft" and do not produce interesting
effects. The amount of raw data from each crossing is approximately 1 megabyte, which at the 40 MHz crossing rate would result in 40 terabytes of data per second, an amount that the experiment cannot hope to store or even process properly. The trigger system reduces the rate of interesting events down to a manageable 100 per second, or about 100 megabytes of data per second. Nevertheless, that is still a huge amount of data to manage. To provide the data placement and the file transfer of the CMS experiment data, the PhEDEx project was established [4].
In the CMS experiment the CMS Dataset Bookkeeping System (DBS) is used for event data bookkeeping [5]. DBS is a database and user API that indexes event data for the CMS Collaboration. Its primary functionality is to provide cataloging for production and analysis operations and to allow data discovery by CMS physicists.
Nevertheless, there is one major gap in the data monitoring system of the CMS experiment at the level of storage elements (SE): there is an urgent need for a tool for monitoring and managing the data stored on them. One of the most common SE types used in WLCG is dCache [6].
dCache provides storage and retrieval of huge amounts of data, distributed among a large number of heterogeneous server nodes, under a single virtual filesystem tree with a variety of standard access methods. Depending on the Persistency Model, dCache provides methods for exchanging data with backend (tertiary) storage systems as well as space management, pool attraction, dataset replication, hot-spot determination and recovery from disk or node failures. Connected to a tertiary storage system, the cache simulates unlimited direct-access storage space. Data exchanges to and from the underlying HSM are performed automatically and invisibly to the user. Besides HEP-specific protocols, data in dCache can be accessed via NFSv4.1 (pNFS) as well as through WebDAV. The dCache system is used by more than one third of the sites in WLCG. While dCache consists of various subsystems, such as the Location Manager [7], the name server (PNFS or Chimera) [8], the gPlazma authentication manager [9], etc., the most important component for the development of our project is the Chimera name server database.
dCache is a distributed storage system; nevertheless, it provides a single-rooted file system view. While dCache supports multiple namespace providers, Chimera is the recommended provider and is used by default. The inner dCache components talk to the namespace via a module called PnfsManager, which in turn communicates with the Chimera database using a thin Java layer.
Practically all the information important for monitoring can be taken from the Chimera database. For this work a three-component monitoring system was developed:
1. A database backup system (dumps the original Chimera database to a separate server),
2. Initial data processing (computes directory sizes, converts the adjacency-list tree table structure to nested sets, etc.),
3. A web interface for access to the monitoring information.

It is worth mentioning that the Chimera database uses an adjacency-list tree structure [10], which does not allow fast enough reading and requires large computational resources for relatively simple procedures such as generating the list of children of a node; this situation is highly inappropriate for online monitoring. So, first of all, we convert the adjacency list model to nested sets [11], which solves most of the problems.
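A minimal sketch of the adjacency-list to nested-set conversion is given below: each node of a toy directory tree receives (left, right) bounds, after which any subtree (for example, everything below a given directory) can be selected with a single range comparison instead of recursive child lookups. The real conversion operates on the Chimera tables, not on an in-memory dictionary.

def to_nested_sets(children, root):
    """children: dict mapping a node to the list of its child nodes.
    Returns {node: (left, right)} nested-set bounds."""
    bounds, counter = {}, [0]

    def visit(node):
        counter[0] += 1
        left = counter[0]
        for child in children.get(node, []):
            visit(child)
        counter[0] += 1
        bounds[node] = (left, counter[0])

    visit(root)
    return bounds

if __name__ == "__main__":
    tree = {"/": ["cms", "atlas"], "cms": ["store"], "store": ["f1", "f2"]}  # toy tree
    sets = to_nested_sets(tree, "/")
    lo, hi = sets["cms"]
    # every descendant of "cms" satisfies lo < left and right < hi -- one range test
    print(sorted(n for n, (l, r) in sets.items() if lo < l and r < hi))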



















Fig. 1. Page with user statistics

At the moment the web interface provides functionality to monitor disk space usage by user or by directory (Fig. 1) and allows browsing the tree structure. While browsing the tree, information is available about the directory or file size, the VO the directory or file belongs to, and the number of subdirectories in a directory (Fig. 2). There is also a built-in search for files and directories.
Since the monitoring system and the name server are physically separated, usage of the system does not affect dCache. Moreover, the system works on a dump of the Chimera database, so it cannot lead to any mistakes in dCache functioning. The database dump is taken once per day, but this can be changed on request.

















Fig. 2. Directory tree browser


The future plans for development include:
1. Provide authorization based on standard X.509 grid certificates, with role discrimination based on them,
2. Enable file management from the web interface for specific roles,
3. Production deployment of the system.

References

[1] http://lcg.web.cern.ch/lcg/
[2] http://public.web.cern.ch/public/
[3] http://press.web.cern.ch/public/en/LHC/CMS-en.html
[4] https://cmsweb.cern.ch/phedex/about.html
[5] http://cmsdbs.cern.ch/
[6] http://www.dcache.org/
[7] http://www.dcache.org/manuals/cells/docs/api/dmg/cells/services/LocationManager.html
[8] http://trac.dcache.org/projects/dcache/wiki/Chimera
[9] http://trac.dcache.org/projects/dcache/wiki/gPlasma
[10] http://xlinux.nist.gov/dads/HTML/adjacencyListRep.html
[11] http://explainextended.com/2009/09/24/adjacency-list-vs-nested-sets-postgresql/




RDMS CMS data processing and analysis workflow^1
V. Gavrilov
Institute of Theoretical and Experimental Physics, Moscow, Russia
I. Golutvin, V. Korenkov, E. Tikhonenko, S. Shmatov, V. Zhiltsov
Joint Institute for Nuclear Research, Dubna, Russia
V. Ilyin, O. Kodolova
Skobeltsyn Institute of Nuclear Physics, Moscow State University, Moscow, Russia
L. Levchuk
National Science Center Kharkov Institute of Physics and Technology, Kharkov, Ukraine

Introduction
The Russia and Dubna Member States (RDMS) CMS collaboration (Fig. 1) [1], founded in 1994, takes an active part in the Compact Muon Solenoid (CMS) Collaboration [2] at the Large Hadron Collider (LHC) [3] at CERN [4]. The RDMS CMS Collaboration joins more than twenty institutes from Russia and the Joint Institute for Nuclear Research (JINR) member states. RDMS scientists, engineers and technicians have actively participated in the design, construction and commissioning of all CMS sub-detectors in the forward regions. The RDMS CMS physics program has been developed taking into account the essential role of these sub-detectors for the corresponding physics channels. RDMS scientists have made a large contribution to the preparation of QCD, Electroweak, Exotics, Heavy Ion and other physics studies at CMS. An overview of the RDMS CMS physics tasks and RDMS CMS computing activities is presented in [5-10]. RDMS CMS computing support should satisfy the LHC data processing and analysis requirements at the running phase of the CMS experiment [11].


Fig.1. The RDMS CMS Collaboration


^1 These activities are partially supported by a grant of the Russian Foundation for Basic Research (RFBR) and the National Academy of Sciences of Ukraine (NASU) (10-07-90400-Ukr) for the years 2010-2011.


Current RDMS CMS Activities
During the last few years, a proper grid-infrastructure for CMS tasks has been created
at the RDMS CMS institutes, in particular, at Institute for High Energy Physics (IHEP) in
Protvino, Joint Institute for Nuclear Research (JINR) in Dubna, Institute for Theoretical and
Experimental Physics (ITEP) in Moscow, Institute for Nuclear Research (INR) of the
Russian Academy of Sciences (RAS) in Moscow, Skobeltsyn Institute of Nuclear Physics
(SINP) in Moscow, Petersburg Nuclear Physics Institute (PNPI) of RAS in Gatchina,
P.N. Lebedev Physical Institute (LPI) in Moscow and National Scientific Center Kharkov
Institute of Physics and Technology (NSC KIPT) in Kharkov. In the CMS global grid-
infrastructure these RDMS CMS sites operate as CMS centers of the Tier 2 level with the
following names: T2_RU_IHEP, T2_RU_JINR, T2_RU_ITEP, T2_RU_INR, T2_RU_SINP,
T2_RU_PNPI, T2_UA_KIPT. Also, a T3 CMS center has recently been organized in Minsk, Belarus (T3_BY_NCPHEP).
The RDMS CMS computing model provides for a valuable participation of RDMS physicists in the processing and analysis of CMS data. At the data-taking phase of the experiment, the basic CMS requirements for the CMS Tier-2 grid-sites are the following (a rough consistency check of the resource figures is sketched after the list):
persons responsible for site operation at each CMS T2 site,
site visibility in the WLCG global grid-infrastructure (BDII),
availability of recent actual versions of CMS Collaboration software (CMSSW),
high efficiency of regular file transfer tests,
certified links with CMS T1- and T2 grid-sites,
regular CMS Job Robot (JR) tests,
disk space of 150-200 TB for: central space (~30 TB), analysis space (~60-90 TB),
Monte Carlo space (~20 TB), local space (~30-60 TB) and local CMS users space
(~1 TB per user),
CPU resources of ~3 kSI2K per 1 TB of disk space and 2 GB of memory per job.
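As a rough consistency check of these figures, the snippet below sums the quoted space breakdown and applies the ~3 kSI2K per TB rule; the per-category ranges are taken directly from the list, while the per-user space is left aside as a small additive term.

# per-category disk space in TB, as (minimum, maximum) from the list above
breakdown_tb = {"central": (30, 30), "analysis": (60, 90),
                "Monte Carlo": (20, 20), "local": (30, 60)}

low = sum(a for a, b in breakdown_tb.values())
high = sum(b for a, b in breakdown_tb.values())
print("disk (plus ~1 TB per local user): %d-%d TB" % (low, high))  # 140-200 TB, consistent with the 150-200 TB quoted
print("CPU at ~3 kSI2K per TB of disk:   %d-%d kSI2K" % (3 * low, 3 * high))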
Since 2008 the RDMS Tier-2 centers have been associated with the CMS Exotics Physics Analysis Group and the CMS Muon Physics Object Group (both groups hosted at the JINR site), the CMS Heavy Ion Physics Analysis Group (hosted at the MSU site) and the JetMet/HCAL Physics Object Group (hosted at ITEP). Somewhat later the KIPT site was associated with the CMS Electroweak Analysis Group. Special tests have shown that the RDMS Tier-2 sites satisfy all the requirements for such hosting, including the additional requirements for the certification of data transfer links between the RDMS sites and other Tier-2 centers associated with the same CMS Physics Groups. In general, the RDMS CPU resources are sufficient for analysis of the first data expected after the LHC start and for simulation.
By the spring of 2011, CMS T2 sites (computing centers) were considered, in the context of the CMS computing requirements, as ready for the data-taking phase of the experiment if they fulfilled the following criteria (a simple check along these lines is sketched after the list):
site visibility and CMS virtual organization (VO) support,
availability of disk and CPU resources,
daily SAM test availability > 80%,
daily JR efficiency > 80%,
commissioned links TO Tier-1 sites ≥ 2,
commissioned links FROM Tier-1 sites ≥ 4.
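The numeric criteria from this list can be encoded as a simple check, sketched below; the metric names are hypothetical, only the thresholds follow the list, and the non-numeric criteria (site visibility, VO support, resource availability) are left out.

def site_ready(m):
    """m: dict of the numeric readiness metrics for one T2 site."""
    return (m["sam_availability"] > 0.80 and
            m["jr_efficiency"] > 0.80 and
            m["links_to_t1"] >= 2 and
            m["links_from_t1"] >= 4)

if __name__ == "__main__":
    # illustrative numbers only, not actual September 2011 values
    example_site = {"sam_availability": 0.95, "jr_efficiency": 0.92,
                    "links_to_t1": 3, "links_from_t1": 7}
    print("ready" if site_ready(example_site) else "not ready")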
The readiness status of the RDMS CMS T2 sites in September 2011 is shown in Fig. 2.

Fig. 2. Readiness of the RDMS CMS T2 sites in September 2011 (for the permanently updated version see http://lhcweb.pic.es/cms/SiteReadinessReports/SiteReadinessReport.htm).

More than 600 TB were transferred to the RDMS Tier-2s from December 2010 to
December 2011 (Fig. 3). The maximum transfer rate to RDMS Tier-2 was more than 80 MB/s
(Fig. 4).



Fig. 3. The cumulative transfer volume for the RDMS T2-sites from December, 2010 to
December, 2011.


Fig. 4. Transfer rates (up to 80 MB/s) at the RDMS T2-sites from December 2010 to November 2011.

The RDMS CMS T2 sites are actively used by the CMS collaboration: 3 542 782 jobs of the CMS virtual organization were submitted to the RDMS CMS T2 sites from December 2010 to December 2011, and 2 582 827 jobs from December 2009 to December 2010.
In line with the CMS computing requirements for the data-taking phase of the
experiment, now the RDMS CMS grid-sites provide:
the computing and data storage resources in full,
centralized deployment of current versions of the CMS specialized software (CMSSW),
data transfers between the CMS grid-sites with the usage of the FTS grid-service on the basis of the VOBOX grid-services for CMS with the PhEDEx server,
SQUID proxy-servers for access to the CMS conditions DB,
certification of network links at the proper data transfer rates between JINR and the CMS Tier-1 and Tier-2 centers,
daily massive submission of typical CMS jobs by the CMS Job Robot system,
CMS data replication to the JINR data storage system in accordance with the requests of RDMS CMS physicists,
participation in the mass production of CMS Monte-Carlo physics events in accordance with the scientific program of the RDMS CMS physicists.
A group of RDMS CMS specialists takes an active part in the development of the CMS Dashboard, the grid monitoring system for the CMS experiment (http://dashboard.cern.ch/cms).
Dedicated remote worldwide-distributed CMS operation centers (ROC) were built in different scientific organizations [12]. The JINR CMS Remote Operation Center (ROC) was founded in 2009 to enable the participation in CMS operations of a large number of RDMS CMS collaborating scientists and engineers. The JINR CMS ROC has been designed as a part of the JINR CMS Tier-2 center and provides the following functions:
monitoring of CMS detector systems,
data monitoring and express analysis,
shift operations,
communications of the JINR shifters with personnel at the CMS Control Room (SX5) and the CMS Meyrin centre,
communications between JINR experts and CMS shifters,
coordination of data processing and data management,
training and information.

In 2010 a CMS ROC was also founded and certified at SINP MSU to provide similar functions for CMS participants in Moscow.
RDMS CMS physicists work in the WLCG environment, and now there are more than 30 RDMS members of the CMS Virtual Organization.

Summary
The RDMS CMS computing centers have been integrated into the WLCG global grid-infrastructure, providing proper functionality of the grid services for CMS. During the last two years a significant modernization of the RDMS CMS grid-sites has been accomplished. As a result, computing performance and reliability have been increased. Within the WLCG global infrastructure the resources of the computing centers are successfully used in the practical work of the CMS virtual organization. Regular testing of the functionality of the RDMS CMS computing centers as grid-sites is provided.

All the necessary conditions for distributed processing and analysis of CMS data have been provided at the RDMS CMS computing centers (grid-sites). This makes it possible for RDMS CMS physicists to take a full-fledged part in the CMS experiment at its running phase.

References
[1] V. Matveev, I. Golutvin. Project: Russia and Dubna Member States CMS Collaboration.
Study of Fundamental Properties of the Matter in Super High Energy Proton-Proton and
Nucleus-Nucleus Interactions at CERN LHC. 1996-085/CMS Document, 1996,
http://rdms-cms.jinr.ru
[2] CMS Collaboration, Technical Proposal, CERN/LHCC, 1994, pp. 94-38,
http://cmsinfo.cern.ch
[3] http://public.web.cern.ch/Public/Content/Chapters/AboutCERN/CERNFuture/WhatL
HC/WhatLHC-en.html
[4] http://www.cern.ch
[5] V. Gavrilov et al. RDMS CMS Computing Model. Proc. of the Int. Conference
Distributed Computing and Grid-Technologies in Science and Education, Dubna,
2004, p. 240.
[6] V. Gavrilov et al. RDMS CMS Computing. Proc. of the 2nd Int. Conference Distributed Computing and Grid-Technologies in Science and Education, Dubna, 2006, p. 61.
[7] D.A. Oleinik et al. RDMS - CMS Data Bases: Current Status, Development and Plans.
Proc.of the XX Int. Symposium on Nuclear Electronics and Computing, JINR, Dubna,
2006, p. 216.
[8] V. Gavrilov et al. Current Status of RDMS CMS Computing. Proc. of the XXI Int. Symposium on Nuclear Electronics and Computing, Dubna, 2008, pp. 203-208.
[9] D.A. Oleinik et al. Development of the CMS Databases and interfaces for CMS
experiment. Proc. of XXI Int. Symp. on Nuclear Electronics & Computing
(NEC`2007), ISBN 5-9530-0171-1, 2008, pp. 376-381.
[10] V. Gavrilov et al. RDMS CMS Computing activities before the LHC startup. Proc. of the 3rd Int. Conference Distributed Computing and GRID-technologies in Science and Education, Dubna, 2008, pp. 156-159.
[11] CMS Collaboration. The Computing Project, Technical Design Report. CERN/LHCC-
2005-023, CMS TDR 7, 2005.
[12] A.O. Golunov et al. The JINR CMS Remote Operation Centre. Distributed Computing
and Grid-technologies in Science and Education IV Int. Conference, Proceedings of
the conference, Dubna, 2010, p. 109.


Remote operational center for CMS in JINR

A.O. Golunov, N.V. Gorbunov, V.V. Korenkov, S.V. Shmatov, A.V. Zarubin
Joint Institute for Nuclear Research, Dubna, Russia

A dedicated remote operation center of the CMS experiment at the LHC has been founded at JINR (Dubna). The main mission of the center is efficient operational monitoring of the CMS detector systems, including measurements of performance parameters during prompt data analysis and monitoring of data acquisition and data quality. Since 2009 the centre has been involved in remote shifts and operation of CMS.

Introduction
Remote operation plays an important role in the detector operation, monitoring and prompt data analysis for the Compact Muon Solenoid experiment (CMS) [1] at the LHC. To enable a large number of collaborating scientists and engineers to participate in CMS operations, dedicated remote worldwide-distributed CMS operation centers (ROCs) were built at different scientific organizations (Fig. 1).
The purpose of the worldwide centers is to help specialists working on CMS contribute their expertise remotely to commissioning and operations at CERN [2].
One of these centers was founded at the Joint Institute for Nuclear Research (JINR) [3]. It is located in the Laboratory of High Energy Physics (LHEP) of JINR. The JINR CMS ROC is focused on operation and monitoring of the inner endcap detectors, for which the collaboration of Russia and Dubna Member States institutions (RDMS) bears full responsibility: the Endcap Hadron Calorimeters (HE) [4] and the First Forward Muon Stations (ME1/1) [5].



Fig. 1. Locations of operating or planned CMS Centers Worldwide [2]

1. The purpose and functions
The JINR CMS ROC should provide effective facilities to support activities associated both with the main CMS Centre at the CERN Meyrin site and, in part, with the CMS Control Room at LHC interaction point 5 in Cessy. The JINR ROC has the following primary functions:
1. Monitoring of the detector systems, including the data acquisition (DAQ) system, the detector control system (DCS), i.e. the slow control of the high- and low-voltage systems, temperature monitoring, etc., and data quality monitoring (DQM);
2. On-the-fly data analysis operations, in particular detector performance measurements, event displays, calibrations of sub-detector systems, etc.;
3. Effective participation in remote shifts as well as prompt access of experts to experimental information; communication of shifters with system experts during data taking and CMS systems operations;
4. Offline computing operations for coordinating the processing, storage and distribution of real CMS data and MC data, and their transfer to the RDMS Tier-1 and JINR Tier-2 sites;
5. Training and outreach.
The important point is to have secure access to information that is available in control
rooms and operation centers at CERN.

2. The structure
The JINR ROC consists of a file and graphics server, a monitoring and analysis system, user workstations and a video-conferencing system. The scheme of the JINR ROC is given in Fig. 2. The main ROC room is shown on the left, with the server and three monitoring workstations, while the right part depicts the user workstations (a total of nine stations), which are located outside the main room. The conference system includes a screen, a projector and a high-quality Tandberg 550MXP video-conferencing system.
The monitoring system of the JINR ROC includes the SLC Linux graphics and file server, a development server, two 40-inch LCD screens (information displays) mounted on the wall and connected to the server, and three monitoring workstations (working places).
The server is used for express analysis and for storing files with monitoring information and results of data analysis. It is based on a dual Xeon 2.53 GHz processor, 16 GB RAM and a 6B HDD, with two gigabit Ethernet cards and a 9600GT dual-head video card.


Fig. 2. The scheme of the JINR Remote Operations Center
One of the information displays shows LHC Page 1 [6] with information about the LHC beam status. The second screen displays the CMS data-taking status, including the DAQ system, DCS, and event displays [7].
Two of the three working places are each equipped with an interactive SLC Linux PC (Intel Core Duo 3 GHz, 4 GB RAM, 750 GB HDD, two dual-head video cards) and three 20-inch LCD screens. They are assigned to the operation of two CMS detector sub-systems (HE and ME1/1). Their main functions include monitoring of detector operation and of data quality. One screen provides information on the status of the sub-detector control system parameters (low and high voltage, cooling system, etc.), as well as information on the run number, run type, number of stored events and data-taking monitoring. The second one is used to monitor data quality and to display results of express analysis. The third screen provides access to the e-log book of shifts and to the shifter manuals, and also serves for other interactive work.
The third working place has two LCD screens. It is intended for the shift leader and/or offline computing operations (Fig. 3).


Fig. 3. The JINR Remote Operations Center
Each working place is equipped with local communication tools (web cameras and headsets) to enable connections between shifters and experts when needed.

3. Links and communications
The center is managed by a special control server placed in a separate room (the ROC Control Room). A gigabit router, a network switch and a wireless network link the ROC parts (server, monitoring workstations, network printer, and videoconferencing system) to each other. The ROC Control Room is connected to the 16-port gigabit switch in the main server room of LHEP JINR. This switch provides the link between the JINR ROC and the JINR Tier-2 site located in the Laboratory of Information Technologies of JINR. The user workstations are also linked directly to the LHEP Server Room.
Experimental data are transferred from CERN (Tier-0/RDMS Tier-1) as well as from other CMS Tier-1 sites to the JINR Tier-2 site. The CMS GRID transfer system (PhEDEx) [8] is used for the bulk transfer. The data are then processed at the Tier-2 and analyzed in the JINR ROC. A small part of the data can be transferred directly to the JINR ROC for prompt processing and detailed analysis; the CMS individual file transfer system (FileMover) [9] is used for this purpose.
There are various Web applications to help follow CMS operations, notably the DQM
system used by all CMS sub-systems [2].
The high-definition H.323 point-to-point features of the Tandberg videoconference system allow the center to be involved in CMS TV outreach events for a public overview of the LHC and CMS status, live event displays, etc. The videoconference system also uses a software-based video system (EVO) [10] to help coordinate shifters and experts and to facilitate weekly meetings.
4. Local monitoring
To prevent overheating and overloading of the local server, RRD-based monitoring was organized. It shows the current core temperatures, disk and memory usage, user processes, etc. (Fig. 4).

Fig. 4. Sample of local monitoring graph
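The paper does not detail how this monitoring is implemented; the following is a minimal sketch, assuming the standard rrdtool command-line utilities on a Linux host, of how such local metrics could be collected and plotted. File names, metrics and update intervals are illustrative only, not the actual ROC setup.

    #!/usr/bin/env python
    # Minimal sketch of RRD-based server monitoring (hypothetical file names
    # and metrics; not the actual ROC code).
    import os
    import subprocess

    RRD = "roc_server.rrd"

    def create_rrd():
        # One sample per minute: load average and used memory, kept for ~one day.
        if not os.path.exists(RRD):
            subprocess.check_call([
                "rrdtool", "create", RRD, "--step", "60",
                "DS:load:GAUGE:120:0:U",
                "DS:memused:GAUGE:120:0:U",
                "RRA:AVERAGE:0.5:1:1440",
            ])

    def sample():
        load1, _, _ = os.getloadavg()
        meminfo = {}
        with open("/proc/meminfo") as f:
            for line in f:
                key, value = line.split(":", 1)
                meminfo[key] = int(value.split()[0])        # values in kB
        mem_used = meminfo["MemTotal"] - meminfo["MemFree"]
        subprocess.check_call(
            ["rrdtool", "update", RRD, "N:%f:%d" % (load1, mem_used)])

    def graph():
        subprocess.check_call([
            "rrdtool", "graph", "roc_server.png",
            "--title", "ROC server load / memory",
            "DEF:load=%s:load:AVERAGE" % RRD, "LINE2:load#FF0000:load avg",
            "DEF:mem=%s:memused:AVERAGE" % RRD, "LINE2:mem#0000FF:memory used (kB)",
        ])

    if __name__ == "__main__":
        create_rrd()
        sample()
        graph()

Run periodically (e.g. from cron), such a script accumulates the history that the wall displays then show in graphical form.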
5. Operations
The JINR ROC was tested during cosmic-ray runs and the first LHC collisions at 0.9 TeV and 2.36 TeV in 2009. The sub-system data quality monitoring, online and offline global data quality monitoring, slow control systems and DAQ monitoring system were used remotely. In 2010 the JINR ROC provided three shifter working places for round-the-clock participation in data taking and analysis during the 7 TeV CMS run. Since 2011 all sub-detector shifts have moved to the ROCs; part of the HCAL and muon shifts are covered by JINR.

References

[1] CMS Collaboration (R. Adolphi et al.). The CMS experiment at the CERN LHC. JINST 3:S08004, 2008.
[2] Lucas Taylor and Erik Gottschal. CMS Centres Worldwide: a New Collaborative Infrastructure. Proc. of CHEP09, 21-29 March 2009, Prague; J. of Phys.: Conf. Series 219 (2010) 082005; L. Taylor et al. CMS centres for control, monitoring, offline operations and prompt analysis. Proc. of CHEP'07, 2-7 Sept. 2007, Victoria; J. of Phys.: Conf. Series, 2008, p. 119.
[3] A. Golunov et al. The JINR CMS Regional Operation Centre. Proc. of the 4th International Conference
"Distributed Computing and Grid-technologies in Science and Education" (June 28 - July 3, 2010).
[4] CMS HCAL Collaboration: G. Baiatian et al. Design, performance, and calibration of CMS hadron
endcap calorimeters. CERN-CMS-NOTE-2008-010, Mar 2008, pp. 36.
[5] Yu.V. Erchov et al. ME1/1 Cathode Strip Chambers. CERN-CMS NOTE-2008-026, Part. Nucl. Lett. N.
4 (153), 2009, p. 566.
[6] http://op-webtools.web.cern.ch/opwebtools/vistar/vistars.php?usr=LHC1
[7] http://cmsdoc.cern.ch/cmscc/cmstv/cmstv.jsp?channel=2&frames=yes
[8] R. Egeland, T. Wildish, and Ch.-H. Huang. PhEDEx data service. J.Phys.Conf.Ser. 219, 2010, p. 062010;
R. Egeland et al. Data transfer infrastructure for CMS data taking. Proc. of Science, PoS (ACAT08), 2008, p. 033;
L. Tuura et al. Scaling CMS data transfer system for LHC start-up. J.Phys.Conf.Ser, 119, 2008, p. 072030.
[9] B. Bockelman and V. Kuznetsov. CMS FileMover: one click data. CHEP 2009.
[10] http://evo.caltech.edu/evoGate/Documentation/
JINR Free-electron maser for applied research: upgrade of the
control system and power supplies
E.V. Gorbachev, I.I. Golubev, A.F. Kratko, A.K. Kaminsky, A.P. Kozlov,
N.I. Lebedev, E.A. Perelstein, N.V. Pilyar, S.N. Sedykh, T.V. Rukoyatkina,
V.V. Tarasov
Joint Institute for Nuclear Research, Dubna, Russia

The 30 GHz free-electron maser (FEM) with an output power of 20 MW, a pulse duration of 170 ns and a repetition rate of 0.5-1 Hz was built several years ago at JINR, Dubna, in collaboration with IAP RAS, Nizhny Novgorod [1]. The FEM is pumped by the electron beam of the linear induction accelerator LIU-3000, which produces an electron beam (0.8 MeV, 25 A) with a repetition rate of up to 1 Hz. Along with research in the field of relativistic RF electronics, the FEM is intended to be used in studies on acceleration techniques, biology and medicine [2, 3]. For this purpose a specialized RF stand was built on the basis of the FEM (Fig. 1).

Fig. 1. Laser heating, RF heating experiments at SLAC and JINR-IAP


The main tasks of the research were formulated under the leadership of the CLIC (CERN) group: participation in the studies on the lifetime limitations of the CLIC collider accelerating structures. These limitations are related to their damage and fracture as a result of cyclic heating by short high-power microwave pulses. The physical reason for the material fracture in the accelerating structure is the large mechanical stress at the metal surface during short pulsed microwave heating, which can exceed the plasticity limit. The surface damage of copper and other metals and alloys has been studied in experiments with pulsed UV lasers, equivalent ultrasonic vibrators and microwave radiation sources (11.4 GHz klystrons with a power of 50 MW and pulse duration of 1500 ns) [4, 5, 3]. The parameter range of these experiments (number of pulses as a function of the pulse heating value or equivalent mechanical stress) is shown in Fig. 2.
The pulse heating value in the experiments of the JINR-IAP collaboration was planned to be about 200, which exceeds by approximately a factor of two the heating values reached in the SLAC1 and SLAC2 experiments. It should be emphasized that to solve this task it was required to keep the spectrum width of the output RF pulse not worse than 1×10⁻³, and the instability of the energy and of the RF pulse duration not worse than a few percent. Preliminary results obtained in the Dubna experiments at an intermediate pulse heating value were reported at previous conferences [6]. In the experiments performed at a pulse heating value of about 250, the metal damage process was also investigated: from the beginning of the damage (micro-peaks and uneven surfaces of micron size) to the appearance of cracks on the metal surface, after which regular cracks began to appear in the area under study [7].


Fig. 2. Summary of the results obtained with ultrasonic vibrators, ultraviolet lasers and microwave radiation sources
At present, the possibility of selective damage of cancer cells by powerful pulsed RF radiation is being studied on the basis of the developed experimental stand. For this purpose it is necessary to introduce nano-sized absorbers of RF radiation into the cell tissue in such a way that they concentrate only on the cancer cells (for example, by chemically binding the absorbers to specific antibodies). A significant difference in the absorption of the radiation by the healthy tissue and by the nano-absorbers, as well as the practically complete absence of heat transfer due to the pulsed radiation regime, provide an opportunity for local heating and selective damage of cells without damaging the neighbouring healthy cells. Preliminary results on the exposure of cancer cells placed on a thin mylar film, or on a film coated with 50 nm of gold, have confirmed the possibility of killing the cancer cells that were in contact with the metallized film, while the control samples remained undamaged (Fig. 3). Experiments are now being performed to select optimal RF absorbers of nanometer size.


Fig. 3. Microphotographs of the cancer cell samples taken 60 minutes after irradiation
To solve the above tasks, it was required to provide precision stabilization of the main systems of the RF stand: the accelerator magnetic lens supplies, the high-voltage pulse systems of the accelerator and the pulsed supply systems of the MCE magnetic field. The instability of the accelerator energy and of the currents of the acceleration track focusing elements must not exceed 1×10⁻³ from pulse to pulse.
The distributed data acquisition system was constructed several years ago [6, 7], and it has demonstrated its versatility and reliability. Recently some new features have been added; the scheme of the modernized system is shown in Fig. 4.
The modular FEM control and acquisition system allows us to add control of new subsystems without disturbing the experiment schedule. This report describes the upgrade of the following subsystems:
stabilization of the modulator high-voltage power supply,
stabilization of the electron gun high-voltage power supply,
stabilization of the electromagnetic undulator high-voltage power supply.

Three identical stabilization systems were constructed. The injector, modulator and undulator high-voltage stabilization systems are intended for high-voltage regulation with an accuracy better than 0.3%. They allow one to set the necessary voltage limits and display the high-voltage measurement results. Each system can be controlled either locally, using the embedded keyboard and LCD display, or remotely via an RS-232 interface.
Each stabilization system consists of a control unit and a power unit.

The power unit is located in the accelerator hall and includes a high-voltage transformer with a thyristor regulator in the primary winding. The control unit is located in the control room, and all its connections to the power unit are galvanically isolated. The control unit measures the high voltage by means of a 12-bit ADC and calculates the error between the measured value and the limit. It forms two bursts of pulse signals with opposite phases for the two IGBT transistors of the half-bridge inverter. The control signal bursts are referenced to the zero crossings of the AC line phase. They control the thyristors that rectify the voltage from the high-voltage transformer and charge the secondary-winding capacitor to the necessary level.

Fig. 4. Modernized control system block diagram: the FEM oscillator with its HV and magnet power supplies (modulators, injector, undulator, lenses, solenoid), pulse shape recognition and automatic start, synchronization, and TCP client/server socket connections over Ethernet to the digital crate and switch; the monitored quantities include RF pulse parameters, the radiation spectrum, beam currents, magnetic fields and pulsed accelerating voltages

Micro-
controller
(AT90S8535)
Keyboard &
buttons
LCD display
Timing unit
(Altera EPM7064S)
Galvanic isolation
IGBT inverter
CONTROL UNIT
High Voltage signal
(divider)
Synchronization from
Control System
Galvanic isolation
12-bit ADC
(MAX187)
thyristors

Fig. 5. Control unit functional diagram


The functional diagram of the control unit is shown in Fig. 5. The main part of the control unit is the Atmel microcontroller. It initiates measurements with the analog-to-digital converter, reads them when the conversion finishes and compares them with the limits, which can be set manually using the keyboard or remotely via RS-232. The Altera CPLD (Complex Programmable Logic Device) produces bursts of pulses whose length varies according to the error value. The measurement results are displayed on the LCD; the indication is synchronized with the accelerator cycle, allowing the operator to monitor the high voltage in due time.
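For clarity, the regulation logic just described can be sketched as follows. This is an illustrative Python model, not the firmware itself: all constants, the gain and the burst limit are hypothetical, while the real implementation runs on the AT90S8535 microcontroller and the EPM7064S CPLD.

    # Sketch of the burst-length regulation: the measured high voltage is
    # compared with the setpoint and the IGBT drive burst grows with the error.
    ADC_BITS = 12
    ADC_FULL_SCALE = (1 << ADC_BITS) - 1     # 4095 counts
    MAX_BURST_PULSES = 200                   # arbitrary upper limit for the sketch

    def burst_length(adc_counts, setpoint_counts, gain=2.0):
        """Return the number of drive pulses for the next AC half-period."""
        error = setpoint_counts - adc_counts
        if error <= 0:
            return 0                         # at or above the setpoint: no pulses
        return min(MAX_BURST_PULSES, int(gain * error))

    # Example: setpoint at 80 % of full scale, measurement slightly below it.
    setpoint = int(0.80 * ADC_FULL_SCALE)
    measured = int(0.78 * ADC_FULL_SCALE)
    print(burst_length(measured, setpoint))  # -> a short correcting burst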
The stabilization systems have been successfully used on the accelerator and achieved
the required stabilization accuracy.

References
[1] N.S. Ginzburg, A.A. Kaminsky, A.K. Kaminsky et al. High-Efficiency Single Mode FEM-oscillator based on a Bragg Resonator with Step of Phase of Corrugation. Phys. Rev. Lett., V. 84, Issue 16, 2000, pp. 3574-3577.
[2] A.K. Kaminsky, E.A. Perelshtein, S.N. Sedykh et al. Powerful 30-GHz JINR-IAP FEM: Recent results, prospects and applications. Proc. of the 31st Int. FEL Conf., Liverpool, UK, 2009, p. TUPC76.
[3] S.P. Besedin, A.K. Kaminsky, O.V. Komova et al. Experiments on application of high-power microwave radiation to biomedicine using micro- and nanoparticles. Strong Microwaves: sources and applications, edited by A.G. Litvak. Nizhny Novgorod: Institute of Applied Physics, V. 2, 2009, pp. 524-528.
[4] D.P. Pritzkau, R.H. Siemann. Experimental study of RF pulsed heating on oxygen free electronic copper. Phys. Rev. Special Topics: Accelerators and Beams, V. 5, 112002, 2002, pp. 1-22.
[5] L. Laurent, S. Tantawi, V. Dolgashev et al. Experimental study of RF pulsed heating. Phys. Rev. Special Topics: Accelerators and Beams, V. 14, Issue 4, 041001, 2011, pp. 1-22.
[6] E.V. Gorbachev, V.V. Tarasov, A.A. Kaminsky, S.N. Sedykh. The Focusing Magnetic Field Stabilization System for LIU-3000 Accelerator. Proc. of NEC'2007. Dubna: JINR, 2008, pp. 29-30.
[7] E.V. Gorbachev, V.V. Tarasov, A.A. Kaminsky, T.V. Rukoyatkina et al. Status of the facility for experiment on RF heating of the copper cavity (the imitator of the CLIC high-gradient accelerating structure). Proc. of NEC'2009. Dubna: JINR, 2010, pp. 134-138.


GriNFiC - Romanian Computing Grid
for Physics and Related Areas

T. Ivanoaica, M. Ciubancan, S. Constantinescu, M. Dulea
Horia Hulubei National Institute for R&D in Physics and Nuclear Engineering, Romania

A review of the GriNFiC computing infrastructure and of the research activities it supports is presented,
with special emphasis on the national contribution to the LHC Computing Grid collaboration.

1. Introduction
After being driven for many years almost exclusively by the requirements of the high-energy physics community and EU projects, the development of the Grid infrastructure in Romania needs to be oriented towards the support of more varied scientific domains and interdisciplinary collaborations. The recent creation of the Computing Grid for Physics and Related Areas (GriNFiC) marks a first step towards reaching this goal. The place GriNFiC occupies in the framework of the national Grid infrastructure, as well as its various technical and usage aspects, are presented below.

2. National Grid Infrastructure
The evolution of Grid computing in Romania is closely related to that of the national HEP community. The first Grid application was implemented at the NIHAM/IFIN-HH centre in 2002, in the framework of the computing support for the ALICE collaboration. 2004 marked the beginning of the participation in the EGEE and SEE-GRID suite of EU projects, with three LCG sites being EGEE-certified between 2005 and 2006. An important step forward was the signing of the WLCG Memorandum of Understanding (MoU) by the National Authority for Scientific Research (ANCS), as the funding agency of the Romanian Tier-2 Federation (RO-LCG) [1]. Taking into account the ever-increasing need for computing capacities in scientific research, a special call for projects on Grid technology was launched by ANCS two years later, and ten projects were accepted for funding from the Sectorial Operational Programme 'Increase of Economic Competitiveness' (SOP IEC), co-funded by the European Fund for Regional Development. As a result of the GriCeFCo SOP IEC project [2], the integrated Grid system IFIN GRID was built, and the Romanian Computing Grid for Physics and Related Areas (GriNFiC) [3] was established.
The national Grid infrastructure today counts 12 EGEE-certified and active resource centres [4] connected to the NREN RoEduNet [5], which provides a 10 Gbps backbone plus access to the GEANT network. The above-mentioned centres are hosted by 4 research institutes and 4 universities and are organized into two consortia: RO-LCG and RoGrid-NGI [6]. To this list one should add the newly created SOP IEC centres.

Romanian Tier-2 federation RO-LCG
The main objective of the RO-LCG federation is to coordinate the provision of the pledged hardware resources and the manpower necessary for the computational simulation and data analysis of three LHC experiments (ALICE, ATLAS, and LHCb), together with the fulfilment of the minimal Tier-2 SLAs assumed in the MoU concluded with CERN [7].


The consortium consists of three research institutes and two universities; it is led by IFIN-HH and provides 9 Grid sites that support the ALICE, ATLAS, and LHCb experiments. The coordination of the sites and the monitoring of the Grid services are ensured by the Centre of Informational Technologies of IFIN-HH (CIT), which hosts one of the major national communication nodes connecting the research institutes of the Magurele Physics Platform to RoEduNet through a 10 Gbps fiber-optic link.
The members of RO-LCG are IFIN-HH (which hosts 5 LCG centres), the Institute for Space Sciences (ISS), the National Institute for Isotopic and Molecular Technologies of Cluj (ITIM), the Politehnica University of Bucharest (UPB), and the Alexandru Ioan Cuza University of Iasi (UAIC). Together, these institutions provide more than 2000 logical CPUs and 1.5 PB of storage space dedicated to the WLCG collaboration.

3. GriNFiC Role and Objectives
GriNFiC seeks to extend Grid support for research activities in physics beyond the field of high-energy physics. It is currently built using the infrastructure created in the framework of the GriCeFCo project, which provided most of the funding. GriNFiC is intended to provide user access to distributed computing services and HPC capacities, data storage, software libraries, and collaborative tools. It also hosts the facilities used by the Grid monitoring services.

GriNFiC supports two new national VOs: gridifin and ifops. ifops is dedicated to the monitoring of the GriNFiC sites, including those of RO-LCG, while gridifin provides the framework necessary for running the applications of non-LCG users. The main GriNFiC site is GRIDIFIN, which is located at CTI/IFIN-HH.
Besides the RO-LCG members, the first partners of GriNFiC are the Carol Davila University of Medicine and Pharmacy (UMFCD), which hosts the MeGrid site [8], the Technical University of Civil Engineering of Bucharest (UTCB), and the Physics Faculty of the Bucharest University (FF-UB). The topology of the GriNFiC network is depicted in Fig. 1.
Separate Grid resources were dedicated to the support of the gridifin VO at the existing GriNFiC centres. The first users of the new computing capacities come from the fields of theoretical and nuclear physics (IFIN-HH) and condensed matter physics (National Institute for Materials Physics, INFM), while new projects are intended to start soon in the field of biophysics (National Institute for Lasers, Plasma and Radiation Physics, IFIN-HH).


Fig. 1. GriNFiC network topology


4. GRIDIFIN site
The GRIDIFIN site hosts the following main GriNFiC servers, which also provide services for the RO-LCG sites: the VOMS, WMS, top-level BDII and LFC servers (the latter also used for SAM tests). In addition, the GRIDIFIN cluster contains a CREAM-CE server, a DPM_MySQL server as Storage Element (SE), and worker nodes (Fig. 2).

Sanity checks
Every main service (CE, SE, LFC, BDII, VOMS, WMS) provided by the local sites is tested, and the results of the sanity checks are published using Nagios. Warnings and alerts are sent by e-mail to the site administrators to allow the quick repair of services.

Service Availability Monitoring (SAM)
The NGI SAM service tests are duplicated by GriNFiC SAM tests, which use similar software, in order to detect and solve incipient problems before they are detected by the central tests. The purpose is to improve the availability and reliability performance, especially since the switch of RO-LCG to NGI-provided tests has led to lower test reliability and to the absence of notifications to site administrators about emerging problems. The tests are performed every half hour, using the dedicated VO (ifops) and the local BDII, VOMS, LFC, WMS and UI services. The results are published via Nagios for all GriNFiC sites (including the RO-LCG sites), and the site administrators are alerted by e-mail about all warnings and errors, allowing them to recover from problems in minimum time.

Monitoring and accounting tools
Tools for the monitoring and accounting of jobs, of the cluster and storage network traffic, and of the Grid production were developed at IFIN-HH; they use Nagios for the display of the reports. Specific plugins were programmed for this purpose and installed on the servers of the main site; they account for the number of jobs running at each site, the internal and external network traffic of the storage element and the daily kSI2k figure for each group of the site.
Accounting information on every group of users of the site is kept up to date and is displayed once a day. The display of the kSI2k·hour figure of each group makes it possible to control the usage of Grid resources. Snapshots of the job monitoring and accounting interfaces are shown in Fig. 3 below.
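As an illustration of what such a plugin can look like, the sketch below follows the standard Nagios plugin conventions (an exit code plus a one-line status with performance data). The data source (a qstat-like batch system listing), the VO-to-job mapping and the thresholds are assumptions made for the example; the actual IFIN-HH plugins are not reproduced here.

    #!/usr/bin/env python
    # Sketch of a Nagios-style plugin reporting running/queued jobs for a VO.
    import subprocess
    import sys

    OK, WARNING, CRITICAL, UNKNOWN = 0, 1, 2, 3
    QUEUE_WARN, QUEUE_CRIT = 500, 1000          # assumed thresholds

    def job_counts(vo):
        """Count running/queued jobs of a VO from a batch system listing."""
        out = subprocess.check_output(["qstat"]).decode()
        running = queued = 0
        for line in out.splitlines():
            if vo in line:                      # e.g. VO pool-account names
                if " R " in line:
                    running += 1
                elif " Q " in line:
                    queued += 1
        return running, queued

    def main():
        vo = sys.argv[1] if len(sys.argv) > 1 else "gridifin"
        try:
            running, queued = job_counts(vo)
        except Exception as exc:
            print("UNKNOWN - cannot query batch system: %s" % exc)
            return UNKNOWN
        perfdata = "running=%d queued=%d" % (running, queued)
        if queued >= QUEUE_CRIT:
            print("CRITICAL - %s | %s" % (perfdata, perfdata))
            return CRITICAL
        if queued >= QUEUE_WARN:
            print("WARNING - %s | %s" % (perfdata, perfdata))
            return WARNING
        print("OK - %s | %s" % (perfdata, perfdata))
        return OK

    if __name__ == "__main__":
        sys.exit(main())

Nagios interprets the exit code for alerting and the text after the "|" separator as performance data, which can then be plotted alongside the accounting reports.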

User interface
Two web interfaces are available to the users: a) for registration to a VO and/or application for an account on the User Interface (UI) server; b) for editing application files, as well as for compilation, submission, and management of jobs.

Fig. 2. GRIDIFIN topology



Fig. 3. Job monitoring (queued/exec) and job accounting (in megaSI2k) on each VO


Fig. 4a. Job list with execution links. Fig. 4b. Job list showing a running job

After registration, the user has access to GriNFiC's resources and services through a web interface [9]. The main menu offers the user the following options: to edit the source code; to edit the wrapper (for compilation); to edit the JDL file; to access the job management interface. The job management interface provides the proxy renewal button, the job list and the job statuses. After the proxy renewal, the user has access to the job execution links, as shown in Fig. 4a. If the job submission succeeds, the status of the job becomes 'Running', as in Fig. 4b. Otherwise, an error message is issued and a link to the application's log (for debugging), plus buttons for editing the job files (source, JDL, wrapper), appear (see Fig. 5).
The web interface will soon be upgraded to allow user access to more information, such as the available resource quotas, resource usage (including execution time), etc.
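Behind such a portal, job submission to the gridifin VO typically goes through the gLite WMS command-line tools. The sketch below, with a minimal JDL and hypothetical file names, illustrates the kind of commands involved under that assumption; it is not the portal's actual code.

    #!/usr/bin/env python
    # Sketch: write a minimal JDL file and submit it through the gLite WMS.
    import subprocess

    JDL_TEMPLATE = """\
    Executable          = "wrapper.sh";
    Arguments           = "input.dat";
    StdOutput           = "std.out";
    StdError            = "std.err";
    InputSandbox        = {"wrapper.sh", "source.c", "input.dat"};
    OutputSandbox       = {"std.out", "std.err"};
    VirtualOrganisation = "gridifin";
    """

    def submit(jdl_path="job.jdl", jobid_file="jobids.txt"):
        with open(jdl_path, "w") as f:
            f.write(JDL_TEMPLATE)
        # -a: delegate the user proxy automatically; -o: append the job ID to a file
        subprocess.check_call(
            ["glite-wms-job-submit", "-a", "-o", jobid_file, jdl_path])

    def status(jobid_file="jobids.txt"):
        # Query the status of all jobs recorded in the job ID file
        subprocess.check_call(["glite-wms-job-status", "-i", jobid_file])

    if __name__ == "__main__":
        submit()
        status()

The web interface essentially automates these steps (proxy handling, JDL editing, submission and status polling) and presents the results in the job list of Fig. 4.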


5. Grid production and use case
The overwhelming majority of the national Grid resources are still used by the WLCG collaboration. According to the data published by the EGI accounting portal [10], during the previous 12 months 97.42% of the national Grid production was provided by the three LHC VOs alice, atlas and lhcb; these are followed by the gridifin, ilc and hone VOs (Fig. 6a). Summing these contributions, we conclude that more than 99% of the national Grid production was realized in physics.
Fig. 5. Job list in case of submission failure




Fig. 6a. CPU time per VO. Fig. 6b. Total number of jobs per country

With 1.5% of the total number of jobs processed by all the Tier-2 centres for the alice, atlas and lhcb VOs during the same period of time, RO-LCG ranks 12th in the WLCG collaboration (Fig. 6b).
The first registered users of gridifin are researchers from the fields of nuclear physics, computational physics, the physics of new materials, and theoretical physics, who are interested in the numerical modeling and simulation of complex systems.
At present, the most important use of the GRIDIFIN infrastructure is in the field of computational modeling of complex materials. Here we briefly review, as a use case, some results obtained with GriNFiC's computing support regarding the modeling and simulation of materials within Density Functional Theory (DFT), using an all-electron local-orbitals code, Full Potential Local Orbitals (FPLO) [11].
The goal of the study performed by N. Plugaru (INFM) and R. Plugaru (IMT) [12] is to compute the electronic structure of (nano-)materials from first principles, given the chemical composition and the geometrical structure of the system, by solving the electronic Schroedinger equations without using empirical information or free parameters.
Calculations were performed on the Grid for 3D periodic systems and thin films, using structural models with up to 200 atoms per supercell, and for systems with intrinsic point defects and/or impurity atoms. Band-structure calculations were completed on compounds such as transition-metal-doped TiO2 and ZnO.
The results of these investigations can find applications, for example, in the design of dilute magnetic semiconductor films for spintronic devices.

6. Funding
The RO-LCG infrastructure was realized in the framework of various national projects within the programmes of the National Plans for Research, Development and Innovation 1 and 2 (PNCDI1 & PNCDI2), financed by the National Authority for Scientific Research (ANCS). In particular, the operation, development and maintenance of the infrastructure was funded from the Capacities-M3-CERN programme, CONDEGRID project [13].
The GRIDIFIN infrastructure was realized with funding from the GriCeFCo project, within the special structural funds call SOP IEC 2.2.3 for Grid.
The continuing collaboration with LIT-JINR/Dubna in the framework of the Hulubei-Meshcheriakov programme (2005-2013) was important for know-how exchange.
The operational support and part of the know-how regarding the HPC infrastructure to be accessed through GriNFiC benefit from the FP7 HP-SEE project 'High-Performance Computing Infrastructure for South East Europe's Research Communities' [14].

7. Conclusions
Nine years after the first Grid application was implemented in Romania, physics still represents the main area of use of Grid technology, with a share of more than 99% of the national Grid production. This reflects both the needs of the scientific community and the level of organization of the computing support.
GriNFiC provides appropriate computational resources and software tools to the national physics community and is able to adapt its computing environment to various requirements, far beyond the HEP domain.
The GriNFiC infrastructure benefits from its own independent technical support for SAM tests, job monitoring and accounting, which improves the availability of its resources and services to the users.
The achievements reported here are promising for the further development of GriNFiC and of its virtual user community of scientists working in physics and interdisciplinary fields.

Acknowledgements
This work was partly supported by the following projects/contracts: Optimization Investigations of the GRID and Parallel Computing Facilities at LIT-JINR and Magurele Campus (Hulubei-Meshcheriakov programme); 12EU/2009 CONDEGRID and PN 09 37 01 04, funded by ANCS.

References
[1] Romanian Tier-2 Federation RO-LCG, http://lcg.nipne.ro/
[2] GriCeFCo, Grid System for Research in Physics and Related Areas,
http://grid.ifin.ro/gricefco/
[3] GriNFiC portal, http://grid.ifin.ro/
[4] Gstat 2.0, http://gstat-prod.cern.ch/gstat/summary/Country/Romania/
[5] Romanian National Research and Education Network RoEduNet, http://www.roedu.net/
[6] RoGrid-NGI consortium, http://www.rogrid.ro/
[7] WLCG MoU, http://lcg.web.cern.ch/lcg/mou.htm
[8] Resource Center for Translational Research in Oncology, http://portal.medgrid.ro/
[9] Web access to GRIDIFIN, https://grid.ifin.ro/webui/indexnew.php
[10] EGI accounting portal, http://www3.egee.cesga.es/
[11] FPLO, http://www.fplo.de/
[12] N. Plugaru, R. Plugaru. Effect of oxygen vacancies on M-M exchange in anatase
M:TiO2 (M=Mn, Fe or Co). Psi-k 2010 Conf., Berlin, Germany, September 12-16, 2010, to be
published.
[13] CONDEGRID project, http://lcg.nipne.ro/condegrid/
[14] HP-SEE project, http://www.hp-see.eu
Current state and prospects of the IBR-2M instrument control
software

A.S. Kirilov
Frank Laboratory of Neutron Physics, Joint Institute for Nuclear Research, Dubna, Russia

The instrument control software for the IBR-2 (IBR-2M) reactor is constantly being modified, and from time to time it is changed radically. This happens because the instruments themselves are modified and because hardware and software are constantly being improved as well.
The development of the current version of the IBR-2 instrument control software originated in the beginning of the 1990s for the Neutron Spectrometer with High Resolution (beam 6a of IBR-2). It ran on a VME-based computer under the OS-9 operating system. Later this software was ported to other instruments and called Sonix (Software fOr Neutron Instruments on X11 base) [1]. However, VME hardware was rather expensive, so we decided to choose the PC platform as a base for new control systems. The new hardware also gave us the opportunity to critically re-evaluate our software. Unfortunately, Sonix was not conceived from the very beginning as a universal system, so porting it to new instruments was not easy. Besides, using a PC with the Windows operating system for instrument control reduces the overall cost of the system. Therefore we decided to change both the hardware and the software platforms.
The new Sonix+ software complex [1] inherited basic solutions from the older Sonix system: in particular, the modular organization, the use of a special database for device control and for displaying the current state of the system, and the use of script programming for the measurement procedure.
At the same time, some basic features were revised to make the system more unified, flexible and convenient for the user. Thus, the Sonix+ instrumental complex was created at FLNP on the basis of the experience obtained with Sonix and in accordance with recent trends in the field. We call it 'instrumental' because it consists of a large number of various modules and is organized to simplify installation at new spectrometers.

Structural changes
The most important components of a neutron instrument are: various detectors with
DAQ controllers, motors for moving and rotating different elements, sample environment
equipment (temperature controllers, magnetic field devices), etc.
In a modular system every component (module) is responsible for some device or function. In practice, the set of modules for each instrument has a non-linear structure. Some modules serve hardware devices; they are at the lower level of the hierarchy. Other modules maintain the work of the rest. We have selected these components and tried to make them as universal as possible; we call these modules servers. If Sonix+ has to be ported to another instrument, the servers can be reused without redesign.
At the moment the complex includes the following modules:

    Type                                                         Number
    Low-level modules to serve various devices (DAQ, stepper
    motors, temperature controllers, etc.)                       > 20
    Intermediate-level modules: servers and adapters             > 10
    Python system scripts                                        > 20
    Graphical user interface                                     > 10
    System modules                                               4
    Altogether                                                   > 65



Fig. 1. Sonix+ structure

At the moment the following servers are available:
the exposition server to handle DAQ controllers,
the motor server with the evident purpose,
the script interpretation server to execute the program of the experiment,
the spectrum reading server to read data during exposition,
the command channel servers for remote control of execution via sockets.
Of course, for communication of the servers with other modules specialized protocols are needed. The following protocols have been established:
a universal protocol for device control and inquiry,
a DAQ protocol,
a motor protocol,
a remote inquiry and control protocol via sockets.

Script utilization
Whereas the script language in the Sonix system was custom-made and had rather poor functionality, the Python language [2] is used as the script language in Sonix+, and all the nice features and packages of Python are available to control experiments and for preliminary data processing.
Python is a widespread, powerful, open-source interpreted programming language that allows one to work more quickly and to integrate systems more effectively.
The use of a full programming language (Python) for scripting makes it possible to confine the specifics of instrument measurements to the most flexible and easy-to-change component. This is very useful from the viewpoint of unification and facilitates the transfer of Sonix+ to new instruments.
It is also important that using Python as a script language opens a wide door for the user to program experiments and to include preliminary data processing components even without the obligatory assistance of the authors of the instrument software.
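As a purely illustrative example, the following script shows the style of measurement programming that a full scripting language makes possible. The device calls are replaced by stubs; the real Sonix+ commands differ and are documented at the Sonix+ site [1].

    # Illustrative measurement script: the scan logic lives in ordinary Python,
    # so loops, conditions and data handling come for free.  The device calls
    # below are stubs; in a real Sonix+ installation they would be calls to the
    # motor and exposition servers.
    import time

    def move_motor(name, position):
        print("moving %s to %.2f" % (name, position))   # stub for the motor server

    def run_exposition(seconds):
        print("acquiring for %d s" % seconds)           # stub for the DAQ server
        time.sleep(0.1)                                  # placeholder, not a real exposure

    def save_spectrum(comment):
        print("saving spectrum (%s)" % comment)          # stub for spectrum saving

    def scan_angle(start, stop, step, exposure_s):
        """Rotate the sample and take a spectrum at every angular position."""
        angle = start
        while angle <= stop + 1e-9:
            move_motor("sample_rotation", angle)
            run_exposition(exposure_s)
            save_spectrum("angle=%.2f deg" % angle)
            angle += step

    if __name__ == "__main__":
        scan_angle(start=0.0, stop=5.0, step=0.5, exposure_s=600)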

Two-step scheme of spectrum storage
The variety of spectrum saving formats is caused by the great number of existing data treatment programs. In my opinion this is the most conservative part of the instrument software: one can change a control system more or less easily, but creating a new data treatment program is a much bigger problem. In order to bypass this problem, a two-step spectrum saving scheme has been proposed.
Initially, spectra are saved in an internal format, which is common for all instruments. All available parameters are stored in a separate file, so this pair of files contains all the information concerning a particular measurement.
Subsequently, script procedures specific for each instrument transform the data files from the internal to the instrument-specific format. Files may also be renamed according to the spectrum naming scheme accepted at a particular instrument.
Again, with Python this is easy to implement.
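A minimal sketch of such a conversion step is given below. The formats assumed here (a plain column file for the spectrum and a "key = value" parameter file) and the naming convention are illustrative simplifications, not the actual Sonix+ internal format.

    # Sketch of the second storage step: convert an internal spectrum plus its
    # parameter file into an instrument-specific text file and file name.
    import os

    def read_parameters(par_path):
        params = {}
        with open(par_path) as f:
            for line in f:
                if "=" in line:
                    key, value = line.split("=", 1)
                    params[key.strip()] = value.strip()
        return params

    def convert(spectrum_path, par_path, out_dir):
        params = read_parameters(par_path)
        # Instrument-specific naming convention, e.g. <sample>_<run>.dat
        out_name = "%s_%s.dat" % (params.get("sample", "unknown"),
                                  params.get("run_number", "0"))
        out_path = os.path.join(out_dir, out_name)
        with open(spectrum_path) as src, open(out_path, "w") as dst:
            dst.write("# sample: %s\n" % params.get("sample", ""))
            dst.write("# exposure: %s s\n" % params.get("exposure", ""))
            for line in src:                 # copy the channel/counts columns
                dst.write(line)
        return out_path

    # Example (assuming the two input files exist):
    #   new_file = convert("run0001.spe", "run0001.par", ".")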

User interface
A new view of the complex organization makes it possible to propose a new universal user interface with sufficient functionality to control any measurement at any instrument. Certainly, a universal interface can be less convenient for the user than a specialized one. Nevertheless, it makes porting the software to new instruments much easier and simplifies the adaptation of a user who works with several instruments.
The former interface was actually an interface to the available devices (controllers): a separate window for each controller. As a result, the interface of a comprehensive instrument usually consisted of too many windows.
The new interface is organized according to another structural principle: each window is dedicated to one of the main user needs. There are three main things to watch: the current state of the instrument, the measurement history (log file) and the picture of the spectra. The fourth need is to control the measurement process. Thus, four programs (windows) are generally sufficient to conduct an experiment; there are some additional programs as well.



Fig. 2. The new interface is organized according to another structural principle: each main user need has its own window

An important note: all parameters of the instrument controlled by the software are accessible. In addition, the user can select any subset of them to watch in a special list.

    Main program      Function
    Reflector         Watch current values of parameters
    LogViewer         Watch the history of an experiment
    Is_client         Script interpretation control
    SpectraViewer     Visualization of spectra

Web user interface
The WebSonix [3] system gives an additional possibility to supervise the experiment and control the measurement procedure.
It allows one to view the actual status of all spectrometer components, to view measurement protocols, to display acquired spectra, and to control the course of the experiment on spectrometers running the Sonix+ software package. The system does not depend on the spectrometer characteristics and permits simple changes of its structure and easy adaptation to special features of spectrometric data representation. The system is based on PHP and Python scripts.

Current state and future plans
In 2004 Sonix+ was installed at two IBR-2 instruments: NERA-PR and REMUR. Both instruments worked successfully until the reactor shutdown. A version for the YuMO instrument was also prepared for practical testing.
During the reactor modernization period the work on Sonix+ continued. New modified versions were tested at various instruments outside JINR (DSD in Yekaterinburg in 2005, GEK3 and GEK5 in Obninsk, and MOND at the Kurchatov Institute in 2007).
The following is a schedule of the instrument control system modernization including
software at the IBR-2M fast pulsed reactor:

    Beam number   Instrument       What to do               Completion year
    4             YuMO             hardware modification    2011
    5             HRFD             Sonix -> Sonix+          2013
    6             RTD              Sonix+                   2012
    6             DN-6             Sonix+                   2013
    7             EPSILON, SKAT    Sonix -> Sonix+          2012
    7             NERA-PR          hardware modification    2011
    8             REMUR            hardware modification    2011
    9             REFLEX           Sonix+                   2011
    10            GRAINS           Sonix+                   2013
    11            FSD              Sonix -> Sonix+          2011

For the complex itself we plan to create a more suitable and flexible user interface. In particular, new program tools for visualizing data from all kinds of detectors, including 2D PSDs, are being developed in the PyQt environment using matplotlib [4] as a graphical library. Matplotlib is a Python 2D plotting library which produces publication-quality figures in a variety of hardcopy formats and interactive environments across platforms, and it can be used in Python scripts.
A program (or programs) for the preliminary tuning of instruments (especially reflectometers) is urgently needed. We intend to organize this program as an assembly of prefabricated elements (exposition, motor and script control, data visualization, etc.). We hope that the PyQt technology will help us to solve this problem.


Fig. 3. Visualization of data from 2D PSD with SpectraViewer 3D

References
[1] Sonix+, http://sonix.jinr.ru/
[2] python, http://www.python.org/
[3] WebSonix, Instruments and Experimental Techniques, Vol. 52, No. 1, 2009, pp. 37-42.
[4] matplotlib, http://matplotlib.sourceforge.net/users/intro.html
Dosimetric Control System for the IBR-2 Reactor

A.S. Kirilov, M.L. Korobchenko, S.V. Kulikov, F.V. Levchanovskiy,
S.M. Murashkevich, T.B. Petukhova
Frank Laboratory of Neutron Physics, Joint Institute for Nuclear Research, Dubna, Russia

In 2010 the modernization of the dosimetric control system for the refurbished IBR-2 reactor was completed. The system is a module of rate counters that accept pulses from various detectors. The module is designed in the CAMAC standard with a USB interface. The software part includes a new package of programs for real-time dosimetric control, which runs under Windows XP. The system makes it possible to monitor the radiation situation at personnel workplaces, in technological premises and in atmospheric emissions. Simultaneously, the control system collects and sorts the data flow, analyzes it, and stores the information in an archive. An important part of the new control system is the full visualization of radiation levels in real time. In case of danger, when radiation safety limits are exceeded, the system issues audio, graphical and textual warnings. The present paper describes the structure and features of the dosimetric control system for IBR-2.

Hardware of the control system
The prototype of the modernized control system was the equipment of the previously used facility System 8004-01, which provided monitoring of all kinds of ionizing radiation. The facility was intended for continuous remote dosimetric control and provided the measurement of the gamma-ray and neutron dose as well as the volumetric activity of beta-active gases and of alpha- and beta-aerosols. In addition, the system monitored the positions of the safety shutters on the reactor research beams. All these options are preserved in the new system.
The electronic part of the new system is a multichannel module for the acquisition and accumulation of data from 112 detectors. The block of counters, which is shown in Fig. 1, includes 132 identical detector signal shapers, a control circuit (an ALTERA programmable integrated circuit), and a data-transfer circuit (an FT245R USB FIFO chip).

Fig. 1. Block of counters


The shaper (Fig. 2) consists of an integrated circuit, comparator and fast digital
magnetic insulator.



Fig. 2. Detector signal shaper

The control circuit (Fig. 3) provides the selection of the block operation regime, the reception of information from the detectors and the preparation of data for transmission to the PC. It includes an instruction decoder, 132 14-bit counters, a 2688-bit shift register, and the interface to the data transfer circuit.

Fig. 3. Control circuit

The operation of the device is supervised by two commands from the PC:
Start of exposition: H80,
End of exposition: HC0.
When the device receives the first command, it starts transmitting information shots to the PC every second. Each shot consists of 134 16-bit words. The format of each information shot in the basic mode is:
1st word: shutter positions,
words 2-133: data from the 132 detectors, respectively,
134th word: terminator (H8000 for transmission of the next shot, or HC000 for the last shot, when the End of exposition command has been received).
The use of the FT245R USB FIFO chip for data transfer through the USB channel allows the operating system to interpret the device as a standard COMn port.
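For illustration only, the following Python sketch reads one such shot through the virtual COM port. The port name, baud rate and byte order are assumptions made for the example; the actual acquisition code is part of the C++/MFC package described below.

    # Sketch of reading one information shot from the block of counters.
    import struct
    import serial   # pyserial

    START_EXPOSITION = b"\x80"      # 'Start of exposition' command (H80)
    END_EXPOSITION   = b"\xC0"      # 'End of exposition' command (HC0)
    WORDS_PER_SHOT   = 134          # shutter word + 132 detectors + terminator

    def read_shot(port):
        raw = port.read(WORDS_PER_SHOT * 2)               # 16-bit words
        if len(raw) != WORDS_PER_SHOT * 2:
            raise IOError("incomplete shot: %d bytes" % len(raw))
        words = struct.unpack("<%dH" % WORDS_PER_SHOT, raw)   # little-endian assumed
        shutters   = words[0]                             # shutter positions
        detectors  = words[1:133]                         # counts from 132 detectors
        terminator = words[133]                           # 0x8000 = more shots, 0xC000 = last
        return shutters, detectors, terminator

    if __name__ == "__main__":
        with serial.Serial("COM3", 115200, timeout=2) as port:   # port name assumed
            port.write(START_EXPOSITION)
            shutters, detectors, terminator = read_shot(port)
            print("shutters=0x%04X  first detector=%d  terminator=0x%04X"
                  % (shutters, detectors[0], terminator))
            port.write(END_EXPOSITION)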

Software package
The software package for dosimetric control is written in C++ with the use of
Microsoft Visual C++.NET 7.0 and MFC library. This package provides monitoring of the
radiation situation from the reactor dosimetric control room with a control computer and from
the main control room of the reactor.
The software package has a modular structure with four logical levels:
d_sdc: the module that interacts with the data acquisition and accumulation block,
s_sdc: the data processing module,
sdc: the graphical user interface on the control computer,
sdc_server and sdc_client: the server on the control computer and the graphical user interface on the remote computer, respectively.
The program modules running on the control computer store their object data in a special database, Varman [1], which is also used in the IBR-2 instrument control software Sonix+ [2] at FLNP JINR.
Fig. 4 shows a screenshot of the main window of the dosimetric control program, which runs on the control computer.



Fig. 4. SDC program window

Before the start of a measurement the user may open the dialog window of the detector of interest to set the thresholds and, if necessary, the recording rate of the filtering logs for particular detectors. This information is stored in configuration files and used for subsequent system starts. The user may choose or create a directory to store the necessary data in automatically created sub-directories.
When monitoring is started, the block of counters transfers data from 132 detectors to the PC every second. Twenty of the 132 detectors are compensation detectors for the sensors of beta-active gases and aerosols; thus, the monitoring data are provided by 112 detectors.
The pulses from gamma and neutron detectors are converted to radiation dose rate units (Sv/h), while the data from detectors of beta-active gases and alpha- and beta-active aerosols are converted to volumetric activity units (Bq/m³).
The main window of the dosimetric control program shows the detector data as a bar diagram. The diagram represents the data as percentages of the threshold values; the red line corresponds to 100%. The diagram can be switched between linear and logarithmic scales and filtered by the kind of radiation. Each type of radiation has its own color. The program provides prompts showing the type of detector and its location.
If the upper threshold is exceeded, the diagram bar of the respective detector turns red, the alarm is activated and an emergency warning is displayed.
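The per-channel processing just described (conversion of the count rate and comparison with the threshold) can be sketched as follows. The calibration coefficient and the threshold in the example are hypothetical; the real coefficients are detector-specific.

    # Illustrative sketch of per-detector processing: convert raw counts to a
    # dose rate and express it as a percentage of the channel threshold.
    GAMMA_CAL_SV_PER_COUNT = 2.5e-10    # assumed Sv per count for a gamma channel

    def dose_rate_sv_per_h(counts_per_second, cal=GAMMA_CAL_SV_PER_COUNT):
        """Convert a count rate into a dose rate in Sv/h."""
        return counts_per_second * cal * 3600.0

    def check_channel(counts_per_second, threshold_sv_per_h):
        """Return the value in percent of the threshold and an alarm flag."""
        rate = dose_rate_sv_per_h(counts_per_second)
        percent = 100.0 * rate / threshold_sv_per_h
        return percent, percent > 100.0

    # Example: 120 counts/s checked against a hypothetical threshold
    percent, alarm = check_channel(120.0, threshold_sv_per_h=1e-6)
    print("%.1f %% of threshold, alarm=%s" % (percent, alarm))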
In addition, in the main program window the status of the 12 beam shutters (open/shut, duration of the last open state and number of open-shut cycles) is updated every second.
The data processing module periodically stores logs to the HDD in individual files and permits the user to get, print or save the following information:
current levels of radiation,
maximum levels of radiation during the observation and the times of their occurrence,
exceedances of the threshold values: the time of occurrence, the duration of the exceedance and the dose accumulated during the exceedance in any channel,
integral values of the radiation levels,
mean values of the radiation levels during the shift,
the beam shutter status log.

Fig. 5 shows an example of a log file, which contains information about exceedances of the radiation threshold values.



Fig. 5. Example of a log file with the information about the exceedance of radiation threshold
values

The remote operation of the program package is realized within the client/server model using sockets; data transfer is provided over TCP/IP. On request, the server takes data from the database and passes them to the client. The graphical client interface on the remote computer looks just the same as the main window on the control computer; however, there are some functional restrictions: the user cannot intervene in the control process or change the settings.

Conclusion
A new dosimetric control hardware and software complex for the IBR-2 reactor was
put into operation in the winter of 2010. In the summer of 2011 the complex was successfully
tested during the trial start-up of the modernized reactor.

References
[1] A.S. Kirilov, V.E. Yudin. The implementation of the real-time database for
controlling experiments in the MS Windows environment. JINR, 13-2003-11,
Dubna, 2003.
[2] Sonix+, http://sonix.jinr.ru

CMS computing performance on the GRID during
the second year of LHC collisions

P. Kreuzer¹ on behalf of the CMS Offline and Computing Project
¹ RWTH Aachen IIIA

The CMS Computing Model deployed on the Worldwide LHC Computing GRID (WLCG) has successfully met its milestones of efficiently processing the first-year LHC data and serving them to thousands of physicists worldwide as input to their searches for new physical phenomena. The second year of LHC collision running is characterized by a spectacular expected increase in integrated luminosity by ~3 orders of magnitude (to O(5) fb⁻¹), resulting in a data volume of O(2) billion events. This requires the experiment to ramp up resources, adapt computing services and move towards sustained computing operations. In parallel, the experience gained on the GRID and emerging technologies lead to a natural evolution of the CMS Computing Model.
We review the main computing achievements of CMS, including the host laboratory processing, data distribution and re-processing on the GRID, data serving, and analysis by CMS physicists during the second year of LHC operations. We also present the evolution of the usage of GRID resources by CMS to adapt to the constantly growing data volume and processing demands. We conclude with prospects for the upcoming years' CMS data processing.
1. Introduction
The CMS Computing model [1] was designed on a hierarchy of computing tiers described
in the MONARC Project [2], with the host laboratory (Tier-0 at CERN) used for data archival
storage, calibration workflows and prompt data reconstruction, 7 Tier-1 centers used for
secondary archival storage, organized data processing and data serving, and a number of ~50
Tier-2 centers used for organized event simulation and user analysis. The sites are distributed
worldwide and may vary in size, for a total disk storage capacity of 41 PB (plus 66 PB tape
storage capacity at Tier-0 and Tier-1s only) and a total CPU capacity of 554 kHEP-SPEC06 [3].
All sites are members of the Worldwide LHC Computing Grid organization [4].
The first year of LHC collision running was very successful for the CMS Computing project [5]. After a service break during the winter of 2010, proton-proton collision running resumed in March 2011 and is scheduled to last until November 2011, followed by a month of heavy-ion collision data taking. At the editing time of the present paper, an integrated luminosity of 2.91 fb⁻¹ had been delivered to CMS, which is a factor of 1000 larger than for 2010 overall. When including the simulated data needed for physics analysis, a total of ~2 billion events was recorded, which represents a challenge in terms of data archiving, processing and transferring. In parallel, LHC peak luminosities one order of magnitude larger than in the first year of running (2.9×10³³ cm⁻²s⁻¹) were reached, with up to 15 interactions per bunch crossing, directly impacting the event size and the memory consumption of the CMS software application used to reconstruct the data. In the next 3 Sections we review the performance of the main CMS Computing infrastructure and workflows, and we conclude in the last Section with a brief outlook into the future.
2. Prompt Data Reconstruction and Data Storage (Tier-0)
The CMS Tier-0 infrastructure is composed of a large CPU farm of 71 kHEP-SPEC06
(~3500 dedicated cores and ~1000 cores from the public CMS share), and a CASTOR storage
pool, mainly used as buffer space before data is archived to tape or transferred onto the
WLCG infrastructure.
The Tier-0 farm also hosts express workflows, which are high-priority processes allowing a first analysis of a small fraction of the data within a few hours after the data have been acquired. These data are transferred with very high throughput (2-3 GB/s) to a dedicated CERN Analysis Facility (CAF) [6]. The CMS CAF also contains dedicated CPU resources, where experts run calibration and alignment workflows, the output of which is re-injected into the CMS conditions databases as input to the full reconstruction of the data. Such a data-taking, calibration and processing cycle is completed within at most 48 hours.
In contrast to the previous year, the dedicated Tier-0 farm was regularly saturated by prompt reconstruction workflows in 2011; a typical example is shown in Fig. 1.

Fig. 1. Number of jobs running (green) and pending (blue) on the dedicated CMS Tier-0 CPU farm (Source: Service Level Status (SLS) Overview, CERN [7])
The main limitation came from the large memory consumption of the CMS application software (~3 GB per job), caused by the increasing number of interactions per LHC bunch crossing and hence the higher track density and event size. This led to inefficient CPU utilization (70%), given that only 6 slots were used on average on typical 8-core nodes. As a result, the CMS Offline group provided an application with improved memory consumption, which was to be deployed by the end of the proton-proton collision data taking. Overall, however, the Tier-0 met its mandate to reconstruct the data in a timely manner and distribute the output to the Tier-1 resources for further processing and analysis.
3. Data Re-Processing and Data Serving (Tier-1)
The CMS Tier-1 centers are large computing facilities with O(1500) processing cores,
O(1-2) PB disk storage capacity and an even larger tape storage capacity. These centers are
usually serving multiple Virtual Organizations (VO), for example several LHC experiments
simultaneously, which requires operational support by both VO-oriented and Lab-oriented
personnel. CMS is using 7 Tier-1 centers [8] located on 4 continents. The workflows running
at these centers are centrally managed by the experiment; they include (i) the archival storage
of a second copy of the RAW CMS data, (ii) the re-processing and skimming of CMS
physics data samples and (iii) the data serving to Tier-2 centers, with the goal to distribute
data in a timely manner to thousands of CMS physicists worldwide. Since 2011, CMS has also
resumed simulated data production at Tier-1 centers, in particular during non-data taking or
non-data-reprocessing periods.
The Tier-1 site utilization and the re-processing performance by CMS have been very
successful since the beginning of the 2011 data taking period: the full processing capacity was
regularly reached (~13 k job slots) and, at the editing time of the present paper, more than
half the recorded collision events from 2011 had been re-processed at least once. This result is
even more remarkable when considering that a new workflow management system for data production and re-processing was deployed early in 2011: the WMAgent is a web-controlled tool developed by CMS that contains a full bookkeeping of the submitted jobs, hence it is less manpower intensive, and it has flexible interfaces to the GRID middleware. The new machinery is completed by a Request Manager, where physicists can place their processing orders, and a Work Queue, allowing for an efficient throttling and queuing of workflows to be submitted to one of the 60 CMS sites.
Each event produced by CMS is archived at the host laboratory and at least once at
another Tier-1 centre. However, in 2011 CMS used its well-established data transfer system PhEDEx [9] and the underlying WLCG software and networking infrastructure to replicate 20% of the data more than once (non-custodial copies), in order to efficiently respond to data serving requests by physics groups or individuals, typically located at Tier-2 centers. In Fig. 2 the data
transfer throughput from all Tier-1 to all Tier-2 centers is shown for a time window of 1 year
before and during the 2011 data taking: the average performance is close to 600 MB/s, with
weekly peaks twice as high. Networking limitations have been encountered in a few cases
where a very large number of physicists tried to access a particularly popular dataset at a
given site: these limitations may be solved by tuning the underlying transfer software
parameters, or via the deployment of a dedicated networking infrastructure, as it is already the
case between the host laboratory and the Tier-1 centers. Finally data transfers between Tier-2
centers have also increased heavily during the year, contributing to 25% of the overall traffic
between sites.

Fig. 2. Data transfer throughput between the 7 Tier-1 centers and 50 Tier-2 centers, for a time window of 1 year. The lower activity periods during LHC winter service breaks are visible, and the highest peak in July 2011 corresponds to a local data migration between 2 storage systems within CERN (CASTOR and EOS).
4. Data Simulation and Data Analysis (Tier-2)
One of the two main workflows CMS is running at Tier-2 centers is centrally managed event simulation, which has been considered for several years as a good scaling measure for distributed processing on the GRID. Fig. 3 shows the cumulative event production in terms of CPU consumption at all involved CMS sites throughout the year 2011. A dozen Tier-3 centers were also involved in an opportunistic manner.


Fig. 3. Cumulative CPU time spent for CMS event simulation in 2011. In addition to Tier-1
and Tier-2 centers, a dozen Tier-3 centers were included opportunistically for centrally
managed production.
The software machinery used for event production on the GRID is very similar to that used for data re-processing (see Sect. 3). The number of simulated events roughly corresponds to the number of acquired LHC events. The events produced at Tier-2 or Tier-3
centers are transferred to a defined Tier-1 center for custodial storage and as input to the
further digitization and reconstruction steps. The workflow prioritization at Tier-1 centers is
such that production jobs are sent only when resources are not busy with data processing,
hence keeping a high CPU utilization level.
At Tier-2 centers, an equal share between production and user analysis jobs was foreseen, although in 2011 the latter became dominant (75%), which is explained by the available Tier-1 resources for production and the increasingly active analysis community at Tier-2 centers. In Fig. 4 the number of distinct users per week is shown for a one-year time window, peaking at 475 in the busiest time periods and resulting in an impressive number of physics publications by the CMS collaboration in a short time period.

Fig. 4. Weekly number of distinct CMS analysis users for a 2-year time window. Holidays or busy summer conference periods are clearly visible.
While the number of analysis users is a clear measure of the success of the CMS
computing model on the WLCG GRID, another important figure is the success rate of the
analysis tool used by CMS physicists. The CMS Remote Analysis Builder (CRAB [10]) was
designed to provide an efficient interface between the physicists and the GRID middleware,
and to handle job accounting and tracking via a central CRAB Server. This dedicated CMS
analysis tool also includes a stage-out mechanism for the user output data, to be transferred
from the site where a given user job was running to the centre where the user is located.
Intensive efforts are being made to increase the overall GRID analysis success rate, now reaching the 80% level. Besides application or configuration errors, the main limitation comes from the stage-out efficiency, since it is site-dependent. A new mechanism using centrally managed data transfers is being investigated in order to further increase the efficiency.
A necessary condition for all production and analysis workflows described above to be
successful is a high level of readiness of the sites. For this purpose, large efforts in Site
Readiness Monitoring [11] have been made by the CMS computing community, in particular
by the local site administrators, and by a worldwide crew of computing shifters who are
applying 24/7 monitoring and alarming procedures to CMS central services and sites. The site
readiness metric has been constructed on various criteria characterizing the CMS workflows,
such as analysis test-jobs continuously sent to sites or data transfer load-tests between sites.
Fig. 5 shows the CMS Tier-2 Site Readiness status in the last 2 years, with a satisfactory
plateau of 75% sites ready to receive CMS workflows.

Fig. 5. CMS Tier-2 Site Readiness Status in the last 2 years, showing a plateau of 75% of sites in the ready state. A similar monitoring is applied to Tier-1 centers, with more stringent criteria.
5. Russia and Dubna Member State (RDMS) contributions to CMS Computing
The RDMS Tier-2 centers contributing to CMS include 7 Russian sites and 1
Ukrainian site. Moreover, in recent years a growing number of CMS member institutes and
Tier-3 centers from RDMS countries appeared, which may locally collaborate with a nearby
Tier-2 centre, in order to ease the access to CMS data by their scientific community.
One particularity of RDMS sites is that they are multi-VO Tier-2 centers, typically serving all 4 LHC experiments, as opposed to most other CMS Tier-2 centers. This makes local data and CPU management more complex. The average number of job slots used by CMS at RDMS Tier-2 centers was nearly 700 during a 2-month time period in Summer 2011, which represents a relatively modest (3%) contribution to the total CMS Tier-2 capacity.
However the size and availability of RDMS Tier-2 centers is expected to increase
substantially in upcoming years. Fig. 6 shows the distribution of CMS jobs among RDMS
Tier-2 sites: JINR is the dominant site with a 40-50% job slot contribution to the total
RDMS capacity for CMS.
6. Conclusions
The CMS Offline and Computing project has successfully achieved its mandate during
the second year LHC collisions running, in terms of data production, data processing and data
transfers. The increase of the LHC luminosity and event size has pushed operations and
resources to the limit, in particular at the Tier-0 centre. Improvements in the application
software memory consumption should help in the future, yet the luminosity will further
increase as well, therefore the 2012 run is expected to be as challenging.
One of the main evolutions of the CMS Computing Model will come through wide area access to data, together with a global xrootd redirector: this should help for data access fallback scenarios at Tier-2 centers or for disk-less data access at Tier-3 centers, but it may also introduce new networking challenges. Other fields of evolution are whole-node scheduling, to reduce the amount of memory per job, and disk-only storage solutions at Tier-0 or Tier-1 centers, to increase the data access efficiency. In any case, the CMS Offline and Computing community is impatiently awaiting the 2012 LHC data, ready to deliver them in a user-friendly format and volume to the even more impatient CMS physicists!
Fig. 6. Running CMS jobs subdivided among various RDMS Tier-2 sites during Summer 2011
References
[1] C. Grandi, D. Stickland, L. Taylor et al. The CMS Computing Model. CERN-LHCC-2004-
035/G-083 (2004).
[2] M. Aderholz et al. Models of Networked Analysis at Regional Centres for LHC experiments
(MONARC) - Phase 2 Report. CERN/LCB 2000-001 (2000).
[3] HEP-SPEC06, http://hepix.caspur.it/benchmarks/doku.php
[4] I. Bird et al. LHC computing Grid. Technical design report. CERN-LHCC-2005-024 (2005).
[5] I. Fisk on behalf of the CMS Offline and Computing Project. Challenges for the CMS Computing Model in the First Year. Presented at 17th International Conference on Computing in High Energy and Nuclear Physics (CHEP 09), Prague, Czech Republic, 21-27 Mar 2009. J.Phys.Conf.Ser. 219:072004, 2010.
[6] O. Buchmuller et al. Prepared for 17th International Conference on Computing in High Energy and Nuclear Physics (CHEP 09), Prague, Czech Republic, 21-27 Mar 2009. J.Phys.Conf.Ser. 219:052022, 2010, p. 8.
[7] Service Level Status (SLS) Overview, CERN, https://sls.cern.ch/sls/index.php
[8] M. Albert et al. Prepared for 17th International Conference on Computing in High Energy and Nuclear Physics (CHEP 09), Prague, Czech Republic, 21-27 Mar 2009. J.Phys.Conf.Ser. 219:072035, 2010, p. 7.
[9] L. Tuura et al. Scaling CMS data transfer system for LHC startup. Prepared for International Conference on Computing in High Energy and Nuclear Physics (CHEP 07), Victoria, BC, Canada, 2-7 Sep 2007. J.Phys.Conf.Ser. 119:072030, 2008.
[10] D. Spiga et al. CRAB: The CMS distributed analysis tool development and design. Prepared for the 18th Hadron Collider Physics Symposium 2007 (HCP 2007), 20-26 May 2007, La Biodola, Isola d'Elba, Italy. Nucl.Phys.Proc.Suppl. 177-178:267-268, 2008.
[11] J. Flix et al. Prepared for 17th International Conference on Computing in High Energy and Nuclear Physics (CHEP 09), Prague, Czech Republic, 21-27 Mar 2009. Published in J.Phys.Conf.Ser. 219:062047, 2010.

The Local Monitoring of ITEP GRID site

Y. Lyublev, M. Sokolov
Institute of Theoretical and Experimental Physics, Moscow, Russia

We describe a local monitoring of the LCG Tier2 ITEP site (Russia, Moscow). Local monitoring includes:

The temperature of the computer hall,
The status of the queues of the jobs,
The status of the GRID services,
The status of the GRID site UPSs,
The site status details, obtained by NAGIOS,
The site status details, obtained by GANGLIA.

Introduction
This article is about the local monitoring of the GRID site at ITEP, Moscow, Russia. Our site has 10 functional servers, 274 CPUs in worker nodes, and 314 TB of disk storage space. It is used by eight Virtual Organization groups.

The temperature of the computer hall
The temperature is obtained from three sensors:
- the external sensor,
- the sensor of cooled air,
- the sensor of the internal temperature in the hall.

There is a possibility to increase the number of sensors to 50. The history of the collected temperature statistics makes it possible to examine it at different time resolutions: hourly (Fig. 1), daily, weekly, monthly and annual.
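As an illustration of this kind of aggregation (the actual ITEP tooling is not described here), a minimal Python sketch with made-up readings shows how hourly and daily views can be derived from the raw sensor history:

```python
# Minimal sketch: aggregating raw temperature readings into hourly and
# daily series, as displayed by the monitoring plots (Fig. 1). The values
# and the use of pandas are illustrative only, not the ITEP implementation.
import pandas as pd

readings = pd.DataFrame(
    {
        "external": [21.5, 21.7, 22.0, 22.4],
        "cooled_air": [16.0, 16.1, 16.0, 16.2],
        "internal": [24.0, 24.3, 24.1, 24.5],
    },
    index=pd.to_datetime(
        ["2011-09-12 10:00", "2011-09-12 10:30",
         "2011-09-12 11:00", "2011-09-12 11:30"]
    ),
)

hourly = readings.resample("1H").mean()  # hourly precision, as in Fig. 1
daily = readings.resample("1D").mean()   # daily precision
print(hourly)
print(daily)
```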


Fig. 1. The temperature obtained from the three sensors with hourly precision

The status of the queues of the jobs
The status of the queues of the jobs allows for the analysis of:
- the general current state of the queues of ITEP site (Fig. 2);



Fig. 2. The current state of the queues

- the current state on different CEs (Fig. 3);



Fig. 3. The current state on different CEs





- the graphs of the state of jobs on different CEs during different periods (Fig. 4);



Fig. 4. The graphs of the state of jobs on different CEs

- the graphs of the state of the fundamental characteristics of the CEs (Fig. 5).

Fig. 5. The graphs of the state of fundamental characteristics CEs






The status of the GRID services (Fig. 6)

Fig. 6. The status of the GRID services

The status of the GRID site UPSs (Fig. 7)

Fig. 7. The status of the GRID site UPSs

The site status details, obtained by NAGIOS and by GANGLIA
We use two main tools for monitoring the active site services: NAGIOS and GANGLIA. They use a standard set and an extended set of plugins. NAGIOS is the main monitoring tool, but it has only a static configuration, no graphs supported by default, and a limited number of resources. GANGLIA provides graphs and makes it easy to store its monitoring data.

By Nagios (Fig. 8).


Fig. 8. The site status details, obtained by NAGIOS

By GANGLIA (Fig. 9).

Fig. 9. The site status details, obtained by GANGLIA
Summary
We described our tools for the local monitoring of the GRID site at ITEP. We are grateful to the organizers of NEC'2011 for the invitation to participate in the symposium.

References
[1] Ganglia homepage, http://ganglia.sourceforge.net
[2] Nagios homepage, http://www.nagios.org
[3] ITEP GRID homepage, http://egee.itep.ru
[4] EGEE Nagios sensors description, http://egee.grid.cyfronet.pl/core-services/nagios
Method for extending the working voltage range of
high side current sensing circuits, based on current mirrors,
in high-voltage multichannel power supplies

G.M. Mitev, L.P. Dimitrov
Institute for Nuclear Research and Nuclear Energy, Bulgarian Academy of Sciences, Bulgaria
High side current sensing circuits, based on current mirrors, are suitable for measuring small load currents (μA to mA range) under high output voltage, yet they have a limited working range of several hundred volts, set by the breakdown voltage of the transistors. That range is insufficient for some applications requiring several hundred to several thousand volts.
This paper discusses a method for extending the voltage range, especially useful in group-controlled
multichannel power supplies. We propose an example circuit and present some comparative results.
Keywords: extended voltage range, current mirror, current sensing, high voltage, DC power supply,
ionizing radiation detector
1. Introduction
In multichannel power supplies that utilize a common return ground for all the channels, individual channel output currents can't be measured on the low side. High side output current measurement in ionizing radiation detector power supplies usually involves complicated circuits and/or poor power efficiency. Current mirror based circuits that overcome these problems have been proposed in [1] and [2]. Still, they are limited by the working voltage range of the available transistors to a few hundred volts (400-500 V). That is sufficient for some applications, like semiconductor detector and HPD power supplies, but insufficient for others, such as photomultiplier supplies. That is the motivation behind our research to extend the voltage range of this schematic class and enable a wider array of applications.
2. Problem and solution
Current mirror based current sensing circuits usually operate by drawing a small current proportional to the load; see for example Fig. 1. In order to extend the voltage range of such a circuit it is sufficient to add a voltage regulator (Q3, Fig. 1) controlling the voltage drop across the current sensing circuit. There is a special requirement posed on the regulator: it shouldn't add or subtract any current in the measurement circuit. This can be successfully fulfilled by a low leakage MOS transistor. It must have a gate leakage current negligible compared to the minimal measurement current Im, and a high breakdown voltage.
3. Test setup
The test circuit is set up according to the schematic on Fig. 2. It contains three main
components a current sensing mirror [1], a voltage extending regulator and a current-to-
voltage convertor with adjustable gain.
The current mirror circuit is already well discussed in [1] and [2] and is not the subject
of this paper. It consists of dual BJT Wilson current mirrors. The upper one sets the ratio of
the output current to the measurement current. The lower one sets the ratio of the currents,
between the two arms of the first mirror, to 1:1. The measurement current is calculated as approximately
$$ I_m \approx \frac{2 R_8}{R_7 + R_8}\, I_o \qquad (1) $$
and the circuit can work with voltages in the range 10-400 V and load currents from 50 μA to 10 mA.

Fig. 1. Current-mirror sensing circuit (Q1, Q2: FMMT558) with the voltage-extending regulator Q3 (BSP230).
The voltage regulator is built with a BSP230 high voltage p-channel MOS transistor. It is capable of sustaining 300 V. Unfortunately, there are difficulties producing high-voltage p-channel MOS transistors (unlike n-channel ones), so if a wider range extension is needed one should connect more devices in series. That comes at the cost of more leakage currents added to the measurement current. The ratio of the voltages across the sensing circuit and the voltage extender is chosen to be 4:3, giving a maximum working voltage range of 700 V: 400 V across the measurement circuit and 300 V across the extender.




The power efficiency worsens with the addition of the voltage extending circuit. The ratio of the measurement to output current is still set by the current sensing circuit, but we consume additional current from the HV supply to power the resistive divider that sets the proportion of voltage drops across the sensing and extending circuits. In practice it is hard to set that resistance higher than 100 MΩ, and for circuits operating in the range of several hundred to one thousand volts that translates to a consumption close to 10 μA. The sensing circuit has a typical quiescent current in the range of 100 nA and a maximum measurement current in the range of 100 μA, with a ratio $I_o/I_m$ of 100 and a 10 mA load current. That means a 10% increase in power consumption under maximum load and a 100-fold increase with no load.
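The consumption figures quoted above can be checked with a short sketch; the supply voltage used below is a representative value for the several-hundred-to-one-thousand-volt range discussed, not a measured operating point:

```python
# Rough consumption estimate for the voltage-extended sensing circuit,
# recomputed from the representative numbers quoted in the text.
divider_resistance = 100e6    # resistive divider of the extender [ohm]
supply_voltage = 1000.0       # representative working voltage [V]
quiescent_current = 100e-9    # sensing circuit quiescent current [A]
max_measure_current = 100e-6  # maximum measurement current [A]

divider_current = supply_voltage / divider_resistance  # ~10 uA extra drain

increase_full_load = divider_current / max_measure_current  # ~0.1 -> ~10 %
increase_no_load = divider_current / quiescent_current      # ~100-fold

print(f"divider current:       {divider_current * 1e6:.1f} uA")
print(f"increase at full load: {increase_full_load * 100:.0f} %")
print(f"increase at no load:   {increase_no_load:.0f}x")
```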
Fig. 2. Schematic of the test circuit: current sensing block (Wilson mirrors Q1-Q8, with R7 = 10k and R8 = 130), voltage extender (Q9 BSP230 with the resistive divider R3 = 4M, R4 = 3M) and I-V convertor with adjustable gain (LMC6482A dual operational amplifier).
Considering the measurement circuitry on its own, that may sound like a high price for extending the voltage range. Considering the power supply as a whole, the matter is quite different. We always need voltage feedback for the regulator control circuit and for output voltage measurement. That is again achieved using a resistive divider. Combining these circuits leads to a common circuit setting the ratio(s) for the extending transistors and for voltage feedback and measurement. It is already available within the power supply, so extending the voltage range of the current measurement circuit doesn't add to the power consumption as a whole and doesn't worsen the power efficiency.
The measurement current is converted to a voltage value using a series resistance. In order to avoid interference with the input resistance of the voltage measurement device we use an ultralow input current (20 fA) operational amplifier as a buffer. The current-to-voltage convertor circuit has the added benefit of being able to easily adjust the I-V conversion ratio.
4. Results
The experimental data are shown on Fig. 3. They are divided in three sets, each
containing two measurement series. The first series contain the measurement results for an
extended measurement circuit and the second one the same circuit without the voltage
extender. Three working voltages are used - near the top, in the middle and at the lower end of
the working range.
The voltages chosen for the extended circuit are 650, 350 and 90 V. Using a ratio of 4:3 for the resistive divider, this means 371, 200 and 51 V across the sensing circuit.
The voltages chosen for the circuit without the voltage extender are 350, 200 and 50 V. That allows for direct comparisons between the two circuits, because the sensing circuits are working at approximately the same voltages.
Measurement data sets are obtained from the sensing circuit at a given voltage and the
extended circuit at a voltage such that the drop across the sensing part is the same as in the
first case. The three sets are obtained at 650 V/350 V, 350 V/200 V and 90 V/50 V
respectively.
The first set of data (Fig. 3a) shows the transfer characteristic Um=f(Io). It can be seen
that the characteristics of the two circuits, at comparable voltages, overlap almost perfectly. It
indicates that the transfer function is defined by the sensing circuit and not seriously
influenced by the extender circuit.
The second data set (Fig. 3b) shows the change in the measured value for the three working voltages, calculated as $\Delta U_m = \frac{U_{em} - U_m}{U_m} \cdot 100\,\%$, caused by the addition of the extender circuit. The peak value of this error reaches just above 5% at 10 μA.
The third data set (Fig. 3c) shows the change in the measured value caused by the change in the power supply voltage. It is calculated as $\Delta U_m(U_i) = \frac{U_m(V_H) - U_m(V_L)}{\left[ U_m(V_H) + U_m(V_L) \right] / 2} \cdot 100\,\%$, separately for both circuits. It shows the influence of the supply voltage on the measured value, which is especially large for small currents. The error falls below 10% at about 20 μA.
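A minimal sketch of how these two figures of merit are evaluated, using placeholder readings rather than the data of Fig. 3:

```python
# Relative error metrics plotted in Fig. 3b and Fig. 3c, applied to
# placeholder readings (not the actual measurement data).

def extender_error(u_em, u_m):
    """Change of the measured value caused by adding the extender (Fig. 3b):
    (U_em - U_m) / U_m * 100 %."""
    return (u_em - u_m) / u_m * 100.0

def supply_voltage_error(u_m_high, u_m_low):
    """Change of the measured value between the high and low supply voltage,
    relative to their mean (Fig. 3c), in %."""
    mean = (u_m_high + u_m_low) / 2.0
    return (u_m_high - u_m_low) / mean * 100.0

print(extender_error(u_em=0.95, u_m=1.00))                # -> -5.0
print(supply_voltage_error(u_m_high=1.10, u_m_low=1.00))  # -> ~9.5
```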
5. Discussion
The measurement characteristics of the circuit are almost unaltered by the voltage extender. The only notable error region is around 10 μA. In that region the influence of the leakage currents is very high and the measurement characteristics are strongly dependent on the supply voltage. The error introduced by the extender circuit is an order of magnitude smaller than the supply voltage influence.
Fig. 3. Experimental results: a) transfer characteristics Um = f(Io) of the extended circuit (Uem at 650, 350 and 90 V) and of the plain sensing circuit (Um at 350, 200 and 50 V); b) relative change ΔUm introduced by the extender at the three comparable voltages; c) supply-voltage dependence ΔUm(Ui) for the extended (650-90 V) and plain (350-50 V) circuits.
Conclusion
The transfer function is defined by the sensing circuit and is almost independent of the extender circuit. The error introduced by the use of the voltage extender is insignificant compared to the inherent supply voltage dependency of the sensing circuit.
The use of a resistive voltage divider in the voltage extender causes high power consumption, comparable to that of the sensing circuit, and worsens the power efficiency. Nevertheless, such a circuit is already present in a typical power supply, and combining them mitigates the problem.
Overall, the voltage extended circuit works with negligible deterioration of measurement accuracy and with no additional power efficiency loss. The only drawback is the slightly increased component count. The main benefit is the possibility to greatly increase the working voltage range of high side current sensing circuits, based on current mirrors, in high-voltage multichannel power supplies.
References
[1] L. Dimitrov, G. Mitev. Novel current mirrors application in high side current sensing
in multichannel power supplies. Proceedings of the XXII International Symposium on
Nuclear Electronics and Computing (NEC2009), ISBN 978-5-9530-0242-4.
[2] G. Mitev. Specifics of using current mirrors for high-side current measurements in
detector power supplies. Annual journal of electronics, ISSN 1313-1842, 2009.
Early control software development using emulated hardware

P. Petrova
Institute for System Engineering and Robotics, Bulgarian Academy of Sciences, Bulgaria
In the lengthy system development process, quite often the software requires a longer time for design and testing than the time necessary for the hardware preparation. As an attempt to shorten the overall project period, it is more convenient to carry out the hardware and software design in parallel. In order to begin the software design as early as possible, it is essential to be able to emulate the work environment of the finished system, including even some parts of the system itself. Through software simulation it is possible to substitute almost any missing hardware for the purposes of control software development and testing, in order to have all system components ready within a given timeframe.
This paper presents an example of a software development process carried out separately from the complementary hardware, using LabVIEW-emulated equipment.
Keywords: Out-of-the-Box Solution, Rapid Control Prototyping, Hardware-in-the-Loop Simulation,
Hardware/Software Co-Simulation, Virtual Robot, Programming Environments
1. Introduction
Software robot simulators simplify development work for robotics engineers. The
behavior-based simulators allow users to create worlds with different degrees of complexity, built up from objects and light sources, and to program real robots to interact with these worlds, or to design virtual robots to be part of them. Some of the most popular applications for
robotic simulations use a physics engine for precise 3D modeling and rendering of the work
environment along with the motion of the robot in 3D space. The use of a simulator allows for
robotics control programs to be conveniently written and debugged off-line (with no hardware
present) with the final version of the program later tested on an actual robot. One such
simulation environment is the NI LabVIEW Robotics Module. It has its own 3D rendering
engine but can also connect to other third party engines.
LabVIEW is a graphical programming tool based on the dataflow language G. It offers
runtime support for real-time (RT) environment, which caters to the needs of embedded
systems prototyping. Due to its characteristics this environment presents itself as an ideal
medium for both the design and the implementation of embedded software. This approach
provides a key advantage - a smooth transition from design to implementation, allowing for
powerful co-simulation strategies like Hardware-in-the-Loop (HIL), Runtime Modeling, etc.
Such a solution gives state-of-the-art flexibility and control performance possibilities.
2. Requirements and design process
Any real world system, motion systems like robots in particular, includes a variety of elements: mechanical elements (wheels and gearboxes) and electrical components (power converters, digital circuits, sensors). The operation of all those components is coordinated by embedded software programs that abstract the dynamics of the interacting parts, providing a basis for higher level programs, which perform reasoning and deliberation. This separation of the domains of responsibility for the different pieces of the control software provides the necessary modularity for the system hierarchy. From this structure arises the interest in having tools and techniques that can span the entire space from low-level to high-level specifications and provide a common environment for development, prototyping and deployment.
Some of the requirements for a successful control software development environment arise from the system structure, but others are a function of its desired performance. It is important that the environment facilitate the development of reliable
programs and simplify the integration of the different modules. The availability of standard
control design and signal processing routines is also essential, as well as the possibility of
generation of easily maintainable, readable and interpretable, platform independent code.
The system description language and design environment provided by the Robotics Module address all these issues. It provides connectivity to a variety of sensors and actuators, as well as to their virtual simulated counterparts, along with tools for importing code from other languages, including C/C++ and VHDL, and a physics-based environment simulator.
In the traditional design process a model of the system is used to devise a possible
control strategy, which is iteratively tested through a computer-based simulation and ported to
a prototype test rig. A design generated in the LabVIEW environment can be directly used for HIL simulations, in which the control software is tested in an environment similar to the intended deployment environment. This reduces the need for compromises based on compatibility considerations and provides the designer with many more degrees of freedom, which speeds up the development process in a cost-effective manner.
Another important principle is the implementation of the lower levels in the control
hierarchy in real time (RT) environments, while the higher levels are implemented in
intensive and intuitive user interface environments, which are both present in the module in
question. This might highly improve the usability, which is commonly an overlooked issue.
Moreover, easy integration of the hierarchical components facilitates the top-to-bottom
and the bottom-to-top data and control flow, carrying information that could be used for the
fine-tuning of the system and for the identification and optimization of its critical
components. This is also a function of the applied control design algorithm. A variety of algorithms are supported by the development environment, which enables performance-adaptive control and also implies that the embedded description language and the environment should have a certain level of sophistication for such implementations to be easily deployed.
Simulation and Computer-Aided-Design serve very important role in robotics
development. They allow the performance of different tasks, like modeling the behavior of a
real hardware system, using an approximation in software, as well as performing design
verification through 'soft' rapid prototyping, which allows the designer to detect conceptual
errors as early as possible and makes possible the problem identification prior to the
implementation. Another key step is the performance evaluation of a system, which might be
essential for specific and dangerous environments, where actual testing is impossible.
Through the simulation process, a comparison of (possibly experimental) architectures can be performed, which allows for a Trade-Off Evaluation of different designs. This approach also enables designers to carry out the development of the hardware and the software independently, without relying on the availability of either of the two, which in turn allows for their parallel design.
Finally, the design, debugging and validation of the mechanical structure of the robot,
the visualization of its dynamic of motion and the analysis of the kinematics and dynamics of
the robotic manipulators, along with its interaction with the environment can be fully
performed within the virtual medium of the Robotics Module.
3. System overview
The full system (Fig. 1) consists of several major components: the Environment Simulator,
along with the environment model and the Robot Simulator.

There is a GUI for the high-level control and G-based low-level control software. The link between the embedded software and the GUI is provided by the LabVIEW Virtual Instruments (VIs) that control the whole test setup.
Fig. 1
The central part of the test setup is the Simulation Scene, which comprises the simulated environment, along with the environment objects, and the simulated robot.
The Robotics Module provides methods to read or write the properties of the simulated components, or to invoke methods on components, respectively. The LVODE property and method classes are arranged in a hierarchy, with each class inheriting the properties and methods associated with the class on the preceding level. They reflect the geometrical and physical properties of the environment objects and the robots' bodies.
The Robot Simulator (Fig. 2) is responsible for modeling the hardware found on different models of physical robots, for translating robotic operations into operations that the Environment Simulator provides, and for reacting to the feedback provided by it.
Fig. 2
The Environment Simulator reads and displays the simulation scenes you design (the virtual environment in which the robot will operate), calculates realistic, physics-based properties of simulated components as they interact, and advances the time in the simulation.
An important part in the simulation is played by the Manifest file. When you design a simulation environment and components, you save their definitions in an .xml file, called a Manifest file. The simulator reads from Manifest files to render the components they define. Each Manifest defines one Simulation Scene, which is a combination of simulated components and their properties. You can render one simulation scene at a time in the simulator. Simulation scenes contain the following components: (1) Environment - each simulation instance must have an environment that describes the ground and any attached features. The environment also has associated physical properties, such as the surface material and the force of gravity; (2) Robots - they contain simulated sensors and actuators; and (3) Obstacles - environments can contain obstacles that are separate from the environment and have their own associated properties.
Fig. 3
The Environment Control Panel allows properties of the simulation to be set (e.g.
time-step size, what Environment Model file to use for the simulation, etc.). Its role is
executed by the Master Simulation VI. You write VIs to control the simulator and simulated
components. The VIs contain the same code to control simulated robots that embedded
applications running on real robots might contain.
Fig. 3 shows a LabVIEW robot simulation. This is one type of environment with different objects, in which a StarterKit robot is placed in an autonomous navigation mode. The small picture displays a view from the robot's camera, i.e. from the robot's perspective.
Conclusions
Robot simulators can be successfully used not only to simplify the mechanical design
of robots, but also for emulation and testing of their control software, long before the final
phases of development. Testing can be made more rigorous and repeatable, since a test
scenario can be re-run exactly. Simulators are a great medium for testing ideas for intelligent robotics algorithms. Their use makes possible the evaluation of the efficiency of the control algorithms, the employed parameter values and the variety of sensor-actuator configurations at an early stage of design.
Using simulations allows for a top-down robot control system design and offline programming, and gives way to parallel development of software and hardware, separately from each other, thus facilitating modular system development.
LabVIEW provides a single environment that is a framework for combining graphical and
textual code, giving the freedom of integration of multiple approaches for programming, analysis,
and algorithm development. It provides all of the necessary tools for effective robotics
development - libraries for autonomy and a suite of robotics-specific sensor and actuator drivers.
References
[1] Ram Rajagopal, Subramanian Ramamoorthy, Lothar Wenzel, and Hugo Andrade. A Rapid
Prototyping Tool for Embedded, Real-Time Hierarchical Control Systems. EURASIP Journal on
Embedded Systems, Vol. 2008, Article ID 162747, 2008, p. 14, doi:10.1155/2008/162747.
[2] D. Ratner & P. M.C. Kerrow. Using LabVIEW to prototype an industrial-quality real-time solution for the Titan
outdoor 4WD mobile robot controller. IROS 2000 Proceedings IEEE/RSJ International Conference on
Intelligent Robots and Systems, 31 October - 5 November 2000, Vol. 2, pp. 1428-1433, Copyright IEEE 2000.
[3] J.-C. Piedboeuf, F. Aghili, M. Doyon, Y. Gonthier and E. Martin. Dynamic Emulation of Space
Robot in One-g Environment using Hardware-in-the-Loop Simulation. 7th ESA Workshop on
Advanced Space Technologies for Robotics and Automation 'ASTRA 2002', ESTEC, Noordwijk, The
Netherlands, November 19 - 21, 2002.
[4] Martin Gomez. Hardware-in-the-Loop Simulation. Embedded Systems Design Magazine, 30 November
2001, http://www.eetimes.com/design/embedded/4024865/ Hardware-in-the-Loop-Simulation.
[5] Brian Bailey, Russ Klein and Serge Leef. Hardware/Software Co-Simulation Strategies for the
Future. Mentor Graphics Technical Library, October, 2003, http://tinyurl.com/6cx95ey


Virtual lab: the modeling of physical processes by the Monte-Carlo method for the interaction of helium ions and fast neutrons with matter

B. Prmantayeva, I. Tleulessova
L.N. Gumilyov Eurasian National University, Astana, Kazakhstan

1. The purpose of work
The study of the interactions of charged particles and neutrons with matter. Measurements of the parameters of nuclear reactions. The study of the differential cross sections of neutral and charged particles elastically scattered on atomic nuclei.

2. A brief theoretical introduction
2.1. Nuclear interactions of charged particles with matter
When passing through matter, charged particles interact with the atoms of this matter (electrons and atomic nuclei). Accordingly, these particles participate in three types of interaction: strong (nuclear), electromagnetic and weak. Within this laboratory work, the nuclear and electromagnetic interactions of charged particles with matter are considered.
The electromagnetic interaction is one of the intense interactions in nature, but it is weaker than the nuclear interaction (by a factor of about 100-1000). During the passage of charged particles through matter, energy losses are mainly due to ionization stopping.
The nuclear interaction (the strongest interaction in nature) manifests itself, in one of its interesting forms, as direct interaction processes (scattering of particles on nuclei, nuclear reactions), and these processes are characterized by large cross sections (10⁻²⁷-10⁻²⁴ cm²). Due to the large cross sections of the strong interaction, fast particles going through matter lose energy through the processes of nuclear absorption and scattering.



Fig. 1. The interaction of the incident particle with the target nucleus

One frequently discussed process in nuclear physics of the interaction of two particles is elastic scattering, in which the total kinetic energy and momenta of the two colliding particles are conserved and only redistributed between them. As a result, the particles change their energy and direction of motion. Coulomb and nuclear forces are considered here as the forces under whose action elastic scattering can occur. The nature of the scattering, by Coulomb or by nuclear forces, is determined by the impact parameter b. It is obvious that a charged particle (Fig. 1) passing at a given velocity close to another charged particle with impact parameter b1 would be scattered at a larger angle than a particle flying farther away with b2, where b2 >> b1.
Low-energy charged particles are scattered by the Coulomb forces, high-energy charged particles and neutrons by both the Coulomb and nuclear forces, and Coulomb-nuclear interference also takes place.

2.2. The interaction of neutrons with matter
The main types of interaction of neutrons with matter are the different types of nuclear reactions and elastic scattering on target nuclei. Depending on whether the neutron reaches the nucleus or not, its interaction with nuclei can be divided into two classes: a) elastic potential scattering on the nuclear forces without capture of the neutron by the nucleus (n, n); b) nuclear reactions of various types ((n, γ), (n, p), (n, α)), fission, etc., inelastic scattering (n, n'), and elastic scattering with the neutron entering the nucleus (elastic resonance scattering).
The relative role of each process is determined by the corresponding cross sections. In some substances, for which the role of elastic scattering is relatively high, a fast neutron loses energy in a series of successive elastic collisions with the nuclei of the material (neutron slowing-down). The process of deceleration continues until the kinetic energy of the neutron becomes equal to the energy of the thermal motion of the atoms in the decelerating material (moderator). Such neutrons are called thermal neutrons. Further collisions of thermal neutrons with atoms of the moderator practically do not change the neutron energy and lead only to their movement through the material (the diffusion of thermal neutrons), which continues until the neutron is absorbed by a nucleus.

3. Mathematical description of the theory
The main characteristics of the nuclear reaction
$$ a + A \to b + B \qquad (1) $$
are the differential $\frac{d\sigma}{d\Omega}(\theta, E)$, the integral $\sigma_{\mathrm{int}}(E)$ and the total $\sigma_{\mathrm{tot}}$ cross-sections.

Fig. 2. The result of the interaction of different colliding particles and the emission of secondary particles into a given solid angle under the condition of axial symmetry (the symmetry axis is the beam axis).

The differential cross section $\frac{d\sigma}{d\Omega}(\theta, E)$ characterizes the yield of the nuclear reaction at a certain angle to the beam of incident particles (at a fixed beam energy E).
The integral cross section
$$ \sigma_{\mathrm{int}}(E) = 2\pi \int_0^{\pi} \frac{d\sigma}{d\Omega}(\theta, E)\,\sin\theta\, d\theta \qquad (2) $$
characterizes the total number of particles emitted from the target in this reaction at a fixed energy of the incident particles. The total cross section
$$ \sigma_{\mathrm{tot}} = \sum_i \sigma_{\mathrm{int},\,i} \qquad (3) $$
is the sum of the integral cross sections over all open output channels. To obtain the absolute differential cross-section it is necessary to measure the following core values:
N_a - the number of particles incident on the target during the experiment,
N_A - the number of nuclei in the target per 1 cm²,
N_b - the number of particles emitted from the target in this reaction,
Ω - the solid angle of the detector that registers the emitted particles, in steradians (an element of solid angle is numerically equal to the area cut out by the cone on a unit sphere, as shown in Fig. 2),
ε - the efficiency of this detector for detecting particles b, in % or in fractions of unity.

In addition to these components, which determine the absolute magnitude of the cross
section, it is necessary to measure the additional parameters that characterize the quality of the
incident beam and target. These additional features include:
energy spread of the beam of incident particles (E beam),
thickness of the target, expressed in energy loss of incident particles A
target
,
Full integral cross section of the target for the incident particles.
(Necessary only in cases where the incident particles - neutrons or gamma quants, and the
target is thick).
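As an illustration, the standard counting relation that combines the quantities listed above, dσ/dΩ = N_b / (N_a · N_A · Ω · ε), can be evaluated as in the following sketch; the numbers are placeholders, not results of the virtual experiment:

```python
# Minimal sketch: absolute differential cross section from the measured
# quantities listed above, using the standard counting relation
#   dsigma/dOmega = N_b / (N_a * N_A * Omega * eps).
# All numbers are placeholders, not output of the virtual laboratory.

def differential_cross_section(n_a, n_A, n_b, omega, eps):
    """Return dsigma/dOmega in cm^2 per steradian.

    n_a   -- particles incident on the target during the experiment
    n_A   -- target nuclei per cm^2
    n_b   -- particles b emitted into the detector and registered
    omega -- detector solid angle [sr]
    eps   -- detector efficiency (fraction of unity)
    """
    return n_b / (n_a * n_A * omega * eps)

dsdo = differential_cross_section(n_a=1.0e12, n_A=1.0e19, n_b=250.0,
                                  omega=1.0e-3, eps=0.8)
print(f"dsigma/dOmega = {dsdo:.3e} cm^2/sr")  # ~3.1e-26 cm^2/sr
```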

4. Algorithm and a description of the program
Using the mathematical description of the theory, a mathematical model was made for the calculations. Fig. 3 shows the interaction of the beam with the target and its layers, over which the calculations are performed.

Fig. 3. The calculation model
At launch, the program offers a choice of two virtual nuclear physics laboratories. To start virtual lab #1, you must click on the appropriate button.
After opening the main interface, all the necessary input data are loaded by default. The next stage is setting your own input.


Fig. 4. The main interface of virtual laboratory #1

4.1. Input parameters of the experiment
The beam parameters
When "The beam parameters" is selected from the main menu, their input is started:


Fig. 5. "The beam parameters"

A_a - atomic number of the incident particles,
Z_a - charge of the incident particles,
I_a - the initial beam current of the incident particles (for helium ions),
N_a - the flux density of the beam of incident particles, [cm⁻²·s⁻¹] (for neutrons),
E_0 - the energy of the incident particles (assuming a monoenergetic beam), [MeV].
After entering all the data, the number of particles falling on the surface of the target material is calculated as
$$ N_a = \frac{I_a}{Z_a \cdot e} \qquad (4) $$
where $e = 1.6 \cdot 10^{-19}$ C is the elementary charge.
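A one-line numeric illustration of relation (4) as reconstructed above; the beam current is a placeholder value:

```python
# Number of incident helium ions per unit time according to relation (4),
# N_a = I_a / (Z_a * e). The current below is a placeholder value.
e = 1.6e-19    # elementary charge [C]
Z_a = 2        # charge of a helium ion (alpha particle)
I_a = 1.0e-9   # beam current [A], placeholder

N_a = I_a / (Z_a * e)
print(f"N_a = {N_a:.3e} particles per second of beam")
```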
Target parameters
Similarly, the input parameters for the target are set:

Fig. 6. Parameters of the target

A_A - atomic number of the target material,
Z_A - charge of the target material,
d_sub - the thickness of the target material, [μm],
AM ∈ [1, 2] - the model of the target (1 - a thin target, d_sub << R; 2 - a half-thick target, d_sub ≈ 0.3 R, where R is the mean free path of the particles in matter).
The number of atoms per 1 cm³ of the material, n_A, is taken from reference data. The mass of a target atom is calculated as $M_{\mathrm{target}} = 1.660531 \cdot 10^{-24} \cdot A_A$, and the density as $\rho_{\mathrm{target}} = 1.660531 \cdot 10^{-24} \cdot A_A \cdot n_A$. Accordingly, the target thickness is recalculated into mg/cm² as
$$ d = \frac{d_{sub} \cdot n_A \cdot A_A \cdot 1.660531 \cdot 10^{-24}}{10} . $$
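A short numeric sketch of this recalculation, following the relations as reconstructed above; the aluminium-like constants are placeholders for illustration:

```python
# Recalculation of the target thickness into mg/cm^2, following the
# relations above. The material constants are aluminium-like placeholders.
amu = 1.660531e-24   # atomic mass unit [g]
A_A = 27             # mass number of the target material
n_A = 6.0e22         # atoms per cm^3, from reference data
d_sub_um = 10.0      # target thickness [um]

M_target = amu * A_A            # mass of one target atom [g]
rho_target = amu * A_A * n_A    # target density [g/cm^3]

d_mg_cm2 = d_sub_um * rho_target / 10.0  # d[mg/cm^2] = d[um] * rho / 10
print(f"density = {rho_target:.2f} g/cm^3, "
      f"thickness = {d_mg_cm2:.2f} mg/cm^2")
```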


4.2. Output results
In the process of calculating the differential cross sections and accumulating statistics, the angular distribution of the differential cross section, both theoretical and "experimental", can be viewed in Cartesian and polar coordinates. By choosing "The differential cross section (Cartesian coordinates)" or "The differential cross section (polar coordinates)", the appropriate dependence will be displayed.



Fig. 7. 3D plot of the angular distributions of the differential cross sections depending on the beam energy of the alpha particles

After performing the laboratory work, the program generates a complete report, which reflects all the input parameters of the experiment and all values that were measured in the virtual experiments. If necessary, the report can be saved to the hard disk as a text file and opened, for example, in a spreadsheet application such as Microsoft Excel for further calculations or for plotting differential cross sections.
Big Computing Facilities for Physics Analysis: What Physicists
Want

F. Ratnikov
Karlsruhe Institute of Technology, Germany


Producing reliable physics results is a challenging task in modern High Energy Physics. It requires close cooperation of many collaborators and involves significant use of the different computing resources available to the collaboration.
This paper is based on our experience as a national support group for CMS users. A multi-tier
computing model is discussed from the perspective of physicists involved in final data analysis. We discuss what
physicists could expect from different computing services.

Introduction
Modern High Energy Physics experiments are organized as huge factories for producing physics results. The vital components of these factories are modern accelerators delivering an outstanding amount of collisions at outstanding energies, giant detectors built using cutting edge technologies, high performance trigger and data acquisition systems capable of processing 5 THz of input information and selecting potentially interesting events, computing farms processing data using sophisticated algorithms and producing about 50 PBytes of data every year, and storage systems handling all these data. However, all these dramatic efforts make sense only if physicists can convert the data collected with all this machinery into new physics results. Creating comfortable conditions for effective physics analysis is therefore necessary for the success of the entire experiment. We will discuss requirements for effective physics analysis based on the experience obtained while running the CMS [1] experiment at CERN for several years.

Computing Model
The CMS offline computing is arranged in four tiers. The corresponding data flow
starting with the detector data up to the final physics results is presented in Fig. 1. A single
Tier-0 center at CERN accepts data from the CMS data acquisition system, archives the RAW
data and performs prompt reconstruction of selected data streams. 7 Tier-1 centers are
distributed over collaborating countries and are responsible for data archiving, reconstruction,
skimming, and other data-intensive tasks. 53 Tier-2 centers take the load of producing Monte-
Carlo simulation data, and serving both simulated and detector data to physics analysis
groups. Tier-2 centers provide the most computing resources for processing data specific to
individual analysis. Tier-3 centers provide interactive resources for the local groups. Data are
mostly moved between different computing tiers of the same level, or from more central to
more local facility. RAW data from Tier-0 are distributed to Tier-1, reconstructed and
archived on site, and transferred to other Tier-1 centers for redundancy. Primary data streams
are skimmed on Tier-1 into many smaller secondary streams, and resulting skimmed datasets
are transferred to Tier-2 centers for use by physics groups. Finally, after applying the analysis
specific selections, data are further moved to Tier-3 for the interactive analysis. The only
exception in this one-directional data flow is Monte-Carlo data: events are generated and
reconstructed in Tier-2 centers, and then are transferred to Tier-1 facilities for archiving.



Data Analysis Patterns
Most analyses start with skimmed datasets stored in Tier-2 centers in either the full
reconstruction (RECO) or Analysis Object Data (AOD) format. The data relevant for signal
studies, as well as data for both data driven and MC driven background studies, are processed.
Obtained condensed results are stored in format convenient for the interactive analysis. These
data are transferred to the corresponding Tier-3 center where they are analyzed.
When the analysis converges, obtained results are analyzed using appropriate
statistical methods. Note that in this analysis pattern Tier-2 and Tier-3 are those computing
facilities mostly used for the physics analysis. The level of service of the affiliated Tier-1
affects the performance of the physics analysis only marginally.
Physics analyses are targeted for the conferences. With continuously increasing
amount of collected experimental data, typical analysis has a half-year cycle: results are
prepared for winter and for summer conferences. This puts peak load on the computing
systems during the preparation for the major conferences.

Resources
Three kinds of resources are used on every computing facility: CPU power, storage space, and network bandwidth. From the user perspective, the requirement on the resources is that their amount should be big enough. This practically means that the waiting time for a result should be small compared with the turnaround time for this result. For example, the primary processing of the necessary skimmed data is done only a few times during the analysis cycle, thus it is expected to take not more than a week. However, the routine scanning of analysis ntuples producing working histograms is repeated many times a day, and thus is expected to take not more than half an hour. People expect no significant limitation in storing analysis ntuples on disks.
Fig. 1. The characteristic data flow and data processing chain, starting from the detector readout and ending with the publication of the obtained new physics result.
It is hard to provide exact numbers for resource requirements; they
significantly vary from one analysis to another. As a reference, Table 1 presents numbers for
the CMS German computing resources, distributed over three CMS sites: DESY in Hamburg, RWTH in Aachen, and KIT in Karlsruhe. These resources are split into three groups:
- CMS pledged Tier-2 resources: these resources are controlled by the CMS management, and the user part is shared between all CMS users,
- National Tier-2 resources: these resources serve exclusively physicists belonging to German CMS groups,
- National Analysis Facility (NAF) at DESY: these are resources for the interactive physics analysis of physicists belonging to German CMS groups: interactive disk space, regular batch queues, etc.
These resources provide a pretty comfortable physics analysis environment for the German CMS groups, which currently include 118 physicists, 70 graduate and 73 undergraduate students.

Table 1. Computing resources available on three German computing sites and serving CMS
physics analyses performed in German universities

                             CPU                   Storage
Tier-2, CMS pledged          18.6K HEP-SPEC06      970 TBytes
Tier-2, National resources   16K HEP-SPEC06        670 TBytes
Tier-3 - NAF/CMS             9.6K HEP-SPEC06       60 TBytes

(Details of the HEP-SPEC06 benchmark may be found at http://hepix.caspur.it/benchmarks/doku.php. Modern Linux based computers provide approximately 6-8 HEP-SPEC06 per CPU core.)
Services
Available computing resources are of little help for the physics analysis unless they are wrapped in reliable services. Based on our experience, we present a list of those services that significantly improve the speed and quality of the physics analysis.
Stable and Conservative Computing Environment
The stability and reliability of the computing facility used for physics analysis is at the top of the list. The goal of the physics analysis is to produce the final result according to the schedule. Such planning is only possible if the behavior of the computing system is predictable and easy to understand by physicists. In a rapidly changing environment a lot of effort is wasted on following up changes and on gaining new knowledge about appropriate ways of using the facility. The less time is wasted on such technical matters, the more boost the physics analysis gets.
Experiment Specific Software Installations
All actual versions of the experiment-wide software are expected to be available on
the analysis facility. This includes major releases and relevant patch releases. Once the
analysis is halfway, the upgrade to using a new software release is very time consuming. It
requires a lot of efforts for validations and cross checks, and is usually not justified. Therefore

the software release may be considered as obsolete only when all physics analyses based on this release are completed. Taking into account a typical length of a physics analysis or PhD preparation of 1 year, this means that all major software releases from the last 12 months may be required. For the 12 months from October 2010 till October 2011, CMS produced 10 major software releases and 34 backward compatible sub-releases.

Working Disk Area
Interactive analysis requires disk space. Two major types of used storage areas are: the
working area with the code used for the physics analysis, and the data area to store data files
in various formats. A modern analysis may require a few hundred Gigabytes for the working area. A data area of about 1 TByte is usually good enough for an individual physicist. About 10 TBytes may be necessary for a physics group sharing common analysis data.

Code Repositories
The collaboration-wide code is stored in corresponding central repositories. E.g., the code repositories for the Tevatron experiments are provided by Fermilab. The code repositories for the LHC experiments are hosted at CERN. The computing facility should provide easy ways of accessing the central code repositories of the corresponding collaborations. This may not be straightforward if, e.g., stand-alone Kerberos authentication is used on site.

Homogeneous Environment
In the case of a big collaboration, any analysis facility is only one of many available. People may migrate from one facility to another due to different circumstances: geographical relocation, collaborating in physics analysis with different groups, as a workaround for temporary problems on the local facility, etc.
Ideally, physicists would like to learn the procedures to set up the necessary environment, the useful commands and tools only once, and then reuse this knowledge on other sites. The time spent on learning specific details relevant only to a specific site is wasted time from the physics analysis perspective.
There are a few de facto reference computing environment setups: the Fermilab facilities for the Tevatron experiments, the CERN analysis facility for the LHC experiments.
However, providing a standard environment may be challenging for those sites serving several collaborations simultaneously and thus sharing resources between them. For example, KIT shares its local analysis facility between the CDF, CMS, Belle, and AMS experiments.
A good solution for this case is a dedicated portal, which is set up according to the homogeneous environment of the corresponding collaboration, with separate portals for different environments.

Grid
Grid is the workhorse for analysis data processing. To let people easily communicate
with the Grid, the site needs installed and properly configured Grid middleware, e.g. the
appropriate version of gLite. The following Grid services are absolutely necessary on site:
- authentication,
- job submission tools,
- data transfer tools.
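
As a small illustration of the kind of convenience layer a site can put on top of this middleware, the sketch below checks whether the user still holds a valid VOMS proxy before any Grid work is attempted. It assumes the standard VOMS command-line client is installed on the portal; the wrapper itself is purely illustrative and not part of any site's toolkit.

import subprocess

def proxy_time_left_seconds():
    """Ask the VOMS client how long the current Grid proxy remains valid.
    Returns 0 if no proxy exists or the client call fails."""
    try:
        out = subprocess.check_output(["voms-proxy-info", "--timeleft"])
        return int(out.decode().strip())
    except (subprocess.CalledProcessError, OSError, ValueError):
        return 0

if __name__ == "__main__":
    left = proxy_time_left_seconds()
    if left < 3600:
        print("Grid proxy is missing or about to expire - run voms-proxy-init first.")
    else:
        print("Grid proxy is valid for another %.1f hours." % (left / 3600.0))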

Operating a Grid Storage Element (SE) helps a lot by letting analyzers direct the
data produced by Grid jobs anywhere in the world immediately to the local site for
the subsequent interactive analysis.

CRAB
CMS Remote Analysis Builder (CRAB) [2] is a crucial component for the success of the
CMS physics program. It effectively converts the Grid into an application-level computing cloud
for CMS physicists. The application in this context is a CMS reconstruction or analysis,
which is run on some CMS dataset. CRAB takes care of most of the related computing
machinery (a simplified sketch of the data splitting step is given after this list):

- converting the user working area into a Grid sandbox,
- analyzing the requested input data and checking data availability on potential destination Grid sites,
- splitting the input data between different jobs of one task,
- submitting jobs to appropriate Grid sites according to data availability,
- controlling the progress of submitted jobs,
- resubmitting jobs failed for Grid-related reasons,
- collecting the outputs of Grid jobs on the local site,
- publishing the produced data in the CMS data catalog for subsequent re-use.
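
As a simplified sketch of the splitting step, the following Python fragment groups the files of a dataset into jobs of roughly equal event counts. The file names and numbers are invented for illustration; this is not CRAB's actual implementation.

def split_dataset(files, events_per_job):
    """files: list of (logical_file_name, n_events); returns lists of files per job."""
    jobs, current, current_events = [], [], 0
    for lfn, n_events in files:
        current.append(lfn)
        current_events += n_events
        if current_events >= events_per_job:
            jobs.append(current)
            current, current_events = [], 0
    if current:
        jobs.append(current)            # last, possibly shorter, job
    return jobs

dataset = [("/store/data/Run2011A/file_%03d.root" % i, 25000) for i in range(10)]
for i, job_files in enumerate(split_dataset(dataset, 50000)):
    print("job %d processes %d files" % (i, len(job_files)))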


Local Batch Queues
Local batch queues running in a computing environment identical to the
environment of the interactive portals are a natural extension of the portal on the analysis
facility.
Although the Grid provides a significant computing resource, it requires a big overhead to
gridify a particular computing task. In contrast, local jobs may use the same working
area, data files, and tools available for interactive use on the portal. Once the physics analysis
chain requires more CPU resources than can reasonably be provided by the portal machine,
the same task may be submitted to the local batch queue.
Ideally, if some task is run on the portal as
myScript par1 par2 par3
there should be a possibility to run it transparently in the queue as
submit myScript par1 par2 par3
and produce an identical result.
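
A minimal sketch of such a transparent submit wrapper, assuming a PBS-like batch system whose qsub command accepts a job script, could look as follows; the wrapper is illustrative and not the actual facility tool.

import os
import subprocess
import sys
import tempfile

def submit(argv):
    """Wrap a command line into a batch job that runs in the current directory."""
    command = " ".join(argv)
    job_script = "#!/bin/bash\ncd %s\n%s\n" % (os.getcwd(), command)
    with tempfile.NamedTemporaryFile("w", suffix=".sh", delete=False) as f:
        f.write(job_script)
        script_path = f.name
    subprocess.check_call(["qsub", script_path])   # hand the job to the batch system

if __name__ == "__main__":
    submit(sys.argv[1:])   # e.g.  submit.py myScript par1 par2 par3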

Latest ROOT Releases
The ROOT software package [3] is widely used by HEP experiments in three very
different ways:
- as the core of the event data model used to store data in files,
- as a tool to present ongoing and final physics results,
- as an analysis tool, e.g. a statistical analysis tool.

The first use usually requires conservatively old ROOT releases. The necessary
release is usually included as a constituent part of the corresponding collaboration software
distribution. In contrast, the last use usually requires the latest and greatest ROOT
release. A good example is the RooStats package included in ROOT. RooStats is a
workhorse for the interpretation of new LHC results and the main tool for combining the ATLAS
and CMS results. However, this package is still under heavy development, and the most recent
ROOT release is usually necessary for obtaining new results. As a result, several ROOT
versions are required to co-exist on the analysis facility to satisfy the different needs of different
analyses.
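
As an illustration of the third use, a quick interactive study with PyROOT might look like the toy example below; the histogram content and the fit are invented purely for illustration and only standard ROOT classes are used.

import ROOT

# Toy interactive study: fill a histogram and fit it with a Gaussian.
h = ROOT.TH1F("mass", "toy mass distribution;m [GeV];entries", 50, 0.0, 10.0)
h.FillRandom("gaus", 10000)      # invented toy data
h.Fit("gaus", "Q")               # quiet Gaussian fit

canvas = ROOT.TCanvas("c", "toy", 800, 600)
h.Draw()
canvas.SaveAs("toy_mass.png")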

Backup
The loss of the working area may delay a physics analysis by many weeks. A regular
backup of the analysis working area is expected in order to tolerate possible hardware failures.
No less important, human mistakes happen, especially in the rush of preparation for a
conference. To tolerate one's own mistakes, and for some peace of mind, there must be a quick
and simple way to recover lost information. To facilitate such recovery, the German NAF
facility makes a complete previous-day backup of the AFS user home area available directly
and immediately to every user.

Summary
Physics analysis is the last mile in the long and complicated chain from collecting
data with the experimental detector to publishing new physics results. A convenient computing
environment is necessary to make the analysis effort maximally efficient. This includes providing
a reasonable amount of local computing resources accompanied by corresponding high quality
services and tools. Bright minds are the most valuable resource for carrying out the physics analysis.
The ultimate goal of an analysis computing facility is the most efficient use of this resource for
producing high quality physics results.

References
[1] CMS Collaboration. The CMS experiment at the CERN LHC. JINST 3, 2008, p. S08004.
[2] G. Codispoti et al. CRAB: A CMS application for distributed analysis. IEEE
Trans.Nucl.Sci.56:2850-2858, 2009.
[3] http://root.cern.ch

CMS Tier-1 Center: serving a running experiment

N. Ratnikova
Karlsruhe Institute of Technology, Germany

Effective use of huge computing resources is a key for success in a modern high energy physics
experiment. The CMS Collaboration relies on seven Tier-1 computing centers located at large universities and
national laboratories all over the world. Tier-1 is responsible for accepting raw and simulated data for custodial
storage, re-processing of primary datasets and Monte Carlo data, and serving data to Tier-2 sites for analysis.
This paper gives an overview of CMS computing model with an emphasis on the role of Tier-1 centers. We
discuss Tier-1 local and central site monitoring. Finally, we summarize experience of operating a German Tier-1
center in the first years of active LHC data taking, including such aspects as efficient data storage, managing
data transfers, ensuring data consistency, inter-operating with the local grid facilities, and CMS central
operations teams.
1. Introduction
Compact Muon Solenoid (CMS) collaboration [1] is one of two general purpose High
Energy Physics experiments hosted at the LHC [2]. Physics data produced at the design
luminosity of 10^34 cm^-2 s^-1 are suppressed by the CMS Trigger and Data Acquisition system
[3] to O(10^2) MB/s, which are transferred to seven Tier-1 sites outside CERN for archiving,
processing, and distribution over the data grids for the scientists to analyze and produce new
physics results.
This article presents the operational experience at the German CMS Tier-1 Center in the
first two years of LHC data taking with a focus on the experiment's point of view.
Section 2 gives a brief overview of the CMS Computing model, the Tier-1 Center role
and requirements, resources, services, and various metrics used for the assessment of the
sites' performance. In sections 3 and 4 we discuss operations at the German Tier-1 Center at
GridKa, including the organization and coordination of the everyday activities, and the training
and acquired expertise of the local personnel.
2. CMS Computing model and Tier-1 requirements
Compact Muon Solenoid (CMS) is one of the four big experiments running on the
LHC facility at CERN intended to test the Standard Model at the TeV energy level, search for
the Higgs Boson and for new physics beyond the Standard Model. CMS data processing and
data storage rely on distributed computing centers integrated into the Worldwide LHC
Computing Grid (WLCG) [4].
CMS has chosen to adopt a distributed model for all computing aspects including
processing and archival of the raw and reconstructed data, Monte Carlo simulation, and
physics analysis. Computing resources are organized in a four-tier structure with a Tier-0
Center and CMS Analysis Facility at the hosting laboratory at CERN, a small number of Tier-
1 Centers located at large regional computing centers connected via high-speed network,
which are responsible for safeguarding, reprocessing and serving assigned primary datasets,
and a relatively large number of Tier-2 and Tier-3 analysis centers where physics analysis is
performed. Service agreement for such hierarchy has been established in the LCG
Memorandum of Understanding [5].
Role of CMS Tier-1 Center
In accordance with the CMS Computing Model the CMS Tier-1 Centers provide a
wide range of high-throughput high-reliability computing services for the entire Collaboration
through both WLCG agreed grid interfaces and global CMS services. High level availability
and technical support are expected.
Main functions include:
- organized sequential processing of the data: event selection, skims, reprocessing, and other data production tasks,
- custodial storage of a large fraction of the experiment's raw and simulated event data,
- serving data to other Tier-1, Tier-2 and Tier-3 sites for replication and physics analysis.
Tier-1 Centers also provide regional services to the local communities, according to
the responsibilities to the users associated with the funding bodies that support the facility.
These functions however should not interfere with the ability of the site to fulfill the
obligations towards the whole Collaboration.
CMS Tier-1 Requirements
Each Tier-1 Center takes custodial responsibility for the assigned primary datasets. A
given Tier-1 center may have the only available copy of the dataset, therefore it must
potentially allow any CMS user to access it. User-visible Tier-1 services are the subject of
formal service level agreements with the Collaboration and include:
data archiving service,
disk storage services,
data access services,
reconstruction services,
analysis services,
user services.
User-visible services rely on computing resources and system-level services: mass
storage system; site security; prioritization and accounting.
Nominal computing resource requirements for the CMS Tier-1 center foresee:
WAN with incoming transfer capacity of 7.2 Gb/s, and outgoing 3.5 Gb/s,
2.5 MSI2k of CPU power and 1.2 PB of Disk space,
Mass Storage capacity of 2.8 PB with acceptable data loss ~10s of GB per PB stored;
and data access rate from MSS ~800 MB/s,
CPU node I/O bandwidth: Gigabit connectivity.
Further requirements and specifications are detailed in the Appendix A of the CMS
Computing TDR [6].
Status of CMS Tier-1 Centers
CMS has seven regional Tier-1 centers outside CERN: T1_DE_KIT in Karlsruher
Institut für Technologie, Karlsruhe, Germany; T1_ES_PIC in Port d'Informació Científica,
Barcelona, Spain; T1_FR_CCIN2P3 in Centre de Calcul de l'IN2P3, Lyon, France;
T1_IT_CNAF in Centro Nazionale per Ricerca e Sviluppo nelle Tecnologie Informatiche e
Telematiche INFN, Bologna, Italy; T1_TW_ASGC in Academia Sinica Grid Computing,
Taipei, Taiwan; T1_UK_RAL at Rutherford Appleton Laboratory, Didcot, UK; and
T1_US_FNAL at Fermi National Accelerator Laboratory, Batavia, IL, USA.
All current CMS centers are integrated into the CMS workflow management and data
management system and are constantly monitored for the service availability and
performance. Tier-1 site representatives provide weekly reports to the CMS facility operations
and data operations management teams. General coordination, integration and validation of
various computing systems, technologies, and components are performed by the CMS
Integration team. The details of how the sites are evaluated, operated and monitored, are
outlined in the next sections.
Site Metrics
To evaluate and measure the availability and operability of the Tier-1 sites, CMS is
using tools developed by WLCG or within the experiment, which include:
Job Robot is an automated system to submit and manage fake analysis jobs. It is used to
test whether a site is capable of running certain CMS workflows at the required scale. Regular job
submissions to all CMS sites make it possible to measure the daily success rate and to estimate
the site efficiency. The resulting statistics are published daily on a summary page,
SAM tests are a collection of basic functionality tests for Grid services. They are used
to verify the correctness of the CMS software installation and configuration, and to
reproduce the operations performed by a typical Monte Carlo simulation or analysis
job, or by an individual CMS user. The results are analyzed and visualized on the
CMS Dashboard. SAM tests make it possible to detect site problems and to measure and rank sites by
availability; they are also used as a commissioning criterion for new sites,
BackFill jobs are artificial analysis jobs that produce a sustained processing load at the site,
Debug transfers initiate artificial data transfers between the sites, which are used
to evaluate the quality of the network links between these sites,
Savannah and GGUS ticketing systems are used to keep track of the problems at the
sites; they are also used as a work progress and operational activities tool. The Savannah
system, which is mostly used for experiment specific projects, is automatically
bridged to the GGUS system, the main tool for tracking Grid site operational issues.
Statistics of opened/closed tickets can also serve as a site performance criterion.

The Site Readiness Report summarizes all metrics and presents the cumulative current status
of the site availability and performance. The site readiness status is taken into account by the
central operations for planning activities. Scheduled and unscheduled downtimes are taken
into account as well.
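
To make the aggregation idea concrete, a toy readiness check combining some of the metrics above could look like the following sketch; the field names and thresholds are assumptions and do not reproduce the actual CMS Site Readiness algorithm.

def is_site_ready(metrics):
    """Toy aggregation of the metrics discussed above into a single flag."""
    return (metrics["job_robot_success"] >= 0.80
            and metrics["sam_availability"] >= 0.90
            and metrics["commissioned_links"] >= 2
            and not metrics["in_scheduled_downtime"])

t1_site = {
    "job_robot_success": 0.93,      # fraction of successful Job Robot jobs
    "sam_availability": 0.97,       # SAM availability over the last day
    "commissioned_links": 6,        # debug-transfer links of good quality
    "in_scheduled_downtime": False,
}
print("site ready:", is_site_ready(t1_site))
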
German Tier-1: resources and services
The Grid Computing Center in Karlsruhe, GridKa [7], was established in 2002. The
original design was driven by the demands of the German particle physics community in view
of the unprecedented requirements on data handling and computing [8] for the upcoming
experiments at the LHC at CERN. It provided a production environment for the particle physics
experiments that were already active: CDF, DZero, BaBar and COMPASS, and was used as a test
facility for the LHC Grid computing models. Today, GridKa is one of eleven Tier-1 centers of
WLCG, and one of four Tier-1 centers supporting all four major LHC experiments. In addition,
several non-LHC particle physics experiments, including the Belle-II collaboration and the Auger
experiment, are supported by GridKa.
The CMS dedicated resources and services include:
- Batch system: PBSpro (~13k job slots for all VOs); 1850 slots are reserved for CMS, and a factor of 2 above this reservation is negotiable for the maximum number of running CMS jobs,
- Storage systems: dCache (dedicated CMS instance) and xrootd,
- Tape,
- WAN:
  o LHC Optical Private Network links to CERN, CNAF, IN2P3, SARA: 10 Gbit/s,
  o SARA link used for traffic to FNAL: 10 Gbit/s (current throughput is limited to 1 Gbit/s),
  o Internet link to DESY and Aachen: 10 Gbit/s,
  o link to Poland: 1 Gbit/s.
3. CMS Tier-1 Operations, Infrastructure, and Tools
The WLCG project is built on top of the infrastructure developed by the European and US
Grid projects EGEE and OSG. It provides basic Grid services via so-called middleware
components, which include security, the computing element, the storage element, monitoring and
accounting, virtual organizations, workload management, the information service, and the File
Transfer Service (FTS). CMS operations rely on the middleware components and the
corresponding Grid services. Service availability, levels of support and response times are
covered in the LCG Service Level Agreement document and go beyond the scope of this
paper. Here we focus on support for experiment specific services and procedures. They are a
part of the Karlsruhe CMS group's obligations to the Collaboration formalized in the
Maintenance & Operations plans, and are credited with 2.6 FTE (equivalent of 31.2 months)
of service work per calendar year.
Job processing
Normally all job submissions and data processing operations at Tier-1 are managed by
the CMS central operations teams. However a few actions for proper data handling are
required from the site. Particularly for compact tape utilization, sites are asked beforehand to
create the tape families for the files that are to be produced and archived on tape. In
preparation for a large data re-processing campaign the site may be asked to pre-stage the
necessary input datasets from tape.
The output of the application jobs and the corresponding log files are first written to
the local disk on the worker node. If necessary, an on-site expert with sufficient privileges may log in
directly to the worker node, inspect the log files and troubleshoot any site specific problems. After job
execution, the application output is staged out into the CMS namespace on the storage
element. An additional merging step is applied to the small files before archiving to tape in order
to reduce the load on the mass storage system. Once the merging step is complete, the original
unmerged output files can be removed. Sites are responsible for the regular clean-up of these
obsolete unmerged files.
The CPU efficiency of the jobs is constantly monitored to identify any jobs with a low
CPU to wall-clock-time ratio. This condition usually indicates a problem with the data access,
when jobs stay idle waiting for the required data. Monitoring of the total numbers of running
and queued jobs per experiment helps to distinguish general infrastructure problems from the
application specific ones.
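
A toy version of such a CPU-efficiency check is sketched below; the job records and the 10% threshold are illustrative assumptions only.

def cpu_efficiency(cpu_time_s, wall_time_s):
    """Fraction of the wall-clock time actually spent on the CPU."""
    return cpu_time_s / wall_time_s if wall_time_s > 0 else 0.0

# Toy job records; a low ratio usually means the job is waiting for data.
jobs = [
    {"id": 1001, "cpu": 34200.0, "wall": 36000.0},
    {"id": 1002, "cpu": 1800.0, "wall": 36000.0},
]
for job in jobs:
    eff = cpu_efficiency(job["cpu"], job["wall"])
    if eff < 0.10:
        print("job %d: CPU efficiency %.0f%% - possible data access problem"
              % (job["id"], 100 * eff))
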
Data transfers
Managing data is by far the most labor-intensive task in Tier-1 support. The CMS
distributed data transfer system [9] is the key ingredient which enables the optimal usage of
all distributed resources. Data transfers from Tier-0 to Tier-1 sites must be done in a timely
manner to avoid the overflow of the disk buffers at CERN. Simultaneously, the data are
transferred in bursts to Tier-2 level sites for analysis, and simulated Monte Carlo data
produced at Tier-2 centers are moved to Tier-1 sites for archival. Additionally, data may be
synchronized between different Tier-1 sites, and served to Tier-3 sites.
CMS uses the PhEDEx tool [10] to initiate transfers and to keep track of data placement and
transfer status. PhEDEx is based on agent technology. The agents, or daemons, run at each
site and perform various data operations, such as upload, download, and stage-in to and stage-out
from tape. They take care of automatic load-balancing, bookkeeping, logging, file
integrity checks, self-monitoring, automatic recovery, and communication with the central
transfer monitoring database. A trivial file catalog (TFC) is used to translate the storage-
technology independent global logical file names into local physical file names according to rules
specific to a given storage technology and to site conventions. The file transfer per se is
performed by the FTS component of the Grid middleware. Data transfers are driven by
subscriptions, which are managed manually by data operators.
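
The idea behind the TFC can be illustrated with a few rule-based translations; the rules and host names below are invented and do not reproduce the real GridKa configuration.

import re

# Ordered, protocol-specific rules translating logical file names (LFNs)
# into physical file names (PFNs); invented for illustration only.
TFC_RULES = [
    ("srmv2", r"^/store/(.+)$",
     r"srm://se.example-t1.de:8443/pnfs/example-t1.de/data/cms/store/\1"),
    ("direct", r"^/store/(.+)$",
     r"/pnfs/example-t1.de/data/cms/store/\1"),
]

def lfn_to_pfn(lfn, protocol):
    for proto, pattern, template in TFC_RULES:
        if proto == protocol and re.match(pattern, lfn):
            return re.sub(pattern, template, lfn)
    raise ValueError("no TFC rule for %s with protocol %s" % (lfn, protocol))

print(lfn_to_pfn("/store/data/Run2011B/MinimumBias/RAW/abc.root", "srmv2"))
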
Services provided by the Tier-1 site local operators include a wide variety of tasks:
preparing tape families,
handling requests for data transfers and deletions (every request must be manually
approved),
clean-up of obsolete data,
maintaining consistency between the site storage contents and the central data catalogs,
monitoring and debugging transfer issues,
supporting the TFC and other site specific configuration files,
managing software upgrades,
managing resources,
attending operational meetings,
providing prompt feedback to the central operations teams,
providing support to associated Tier-2 sites.

Managing all these tasks would not be possible without proper training and expertise.
4. Training and expertise
Tier-1 center operating functions play a crucial role in the whole computing
infrastructure of the experiment; therefore a well managed organization and many-sided
expertise are expected from the site support team.
This requires a good understanding of the CMS central workflows, requirements and
procedures on the one side, and a good knowledge of the local storage technologies, familiarity
with the grid middleware infrastructure and setup, resource management, and system
administration on the other side.
In the University environment we use various ways to build up the required expertise
and to involve students and scientists in Tier-1 support and other related computing projects.
To improve the quality of support and ensure a proper sharing of the expertise, since the
end of 2010 the KIT Tier-1 support group has been practicing an expert rotation scheme. For a
period of six to eight weeks a senior member of the group (postdoc level) takes responsibility
for most or all routine operations. His or her duties include responding to the service tickets,
handling data transfer requests, managing the site monitoring shift schedule, troubleshooting any
arising problems, raising service tickets to the GridKa technical support team, and compiling reports
for the central CMS operations. The rest of the group, including junior members (graduate
students), provide additional support by taking Tier-1 local monitoring shifts and participating in
various development projects. Tasks of data consistency checking, software upgrades, site
configuration, storage management, system tuning and optimization, and follow-up on
particular problems are covered by the dedicated experts.
An excellent way to get familiar with the central CMS computing operations and workflows
is participation in central computing shifts. At KIT we established a remote CMS Center for
conducting central computing shifts. The annual GridKa school provides specialized training in
Grid computing techniques. Regular participation in dCache workshops helped us to establish
direct contacts with the dCache developers, GridKa storage system administrators, and dCache
users from other sites and experiments.
Members of the group are actively involved in core CMS computing research and
development projects. This includes the development of the PhEDEx system, common data
consistency tools, CMS central storage accounting and monitoring, meta-monitoring tools, and
job submission tools. In 2011 we took charge of the PhEDEx validation task in the CMS
Integration project.
The locally developed meta-monitoring tool HappyFace [12] has become a powerful
instrument for site monitoring; it provides immense help in maintaining the general system
health. It makes it possible to give immediate feedback to the central operation teams for the
optimization of procedures and workflows, and to other sites for the optimization of the
transfer throughput.
HappyFace modules for monitoring and visualization of the CMS job performance at
GridKa have now been adopted by CMS Facility Operations for the central monitoring of jobs
running at all CMS Tier-1 sites.
5. Summary
Several years of LHC running have clearly proved the concept of distributed
computing. Thanks to extended preparations, including carefully designed scalability tests,
data transfer and operation challenges, quality assurance and monitoring, personnel
training, and other necessary measures, the Tier-1 centers established at national
laboratories around the world successfully provide high quality services critical for the High
Energy Physics experiments.
In accordance with the hierarchical Computing Model adopted by the CMS
experiment, the CMS Tier-1 Centers meet the challenges of transferring, storing and
reprocessing the vast amounts of the experimental data and providing data for further
processing and analysis.
Maintaining high availability and performance of the system requires both expert
knowledge, and operational efforts. The exceptional situation at KIT, where the largest
research laboratory has been recently merged with the advanced Technical University, makes
it possible to bring together experts from various fields to work on immediate service tasks, as
well as a variety of challenging development projects in many key areas of the modern
distributed computing: job submissions, data transfers, data consistency, monitoring,
integration and validation processes.
These projects also play an important educational role. Students and young scientists at
KIT gain experience and train their skills in the real conditions of a running experiment.
The on-going tasks include constant work on tuning and optimization of the system, keeping
it up to date with the hardware and software upgrades, following the changes in the
operational environment, and the evolution of the computing model itself.
Acknowledgments
I would like to thank my colleagues in the Institute of Experimental Nuclear Physics at
KIT and all CMS colleagues for many years of fruitful collaboration, dedication and
inspiration they bring to our work every day. I also thank the German Federal Ministry of
Education and Research (BMBF) for the financial support of this work.
References
[1] CMS Collaboration. 1994 CMS, the Compact Muon Solenoid: Technical proposal,
CERN-LHCC-94-38.
[2] O. Bruning et al. 2004 LHC design report. Vol. I: The LHC main ring, CERN-2004-003.
[3] CMS, the TRIDAS Project. Technical Design report Volume2; Data Acquisition and
Higher Level Trigger. CERN/LHCC/2002-26, CMS TDR 6.2, 2002.
[4] LHC Computing Grid Technical Design Report. CERN-LHCC-2005-024, 2005.
[5] Worldwide LHC Computing Grid Memorandum of Understanding,
http://lcg.web.cern.ch/lcg/mou.htm
[6] CMS Collaboration. 2005 CMS computing technical design report, CERN-LHCC-
2005-023.
[7] Grid Computing Centre Karlsruhe (GridKa), http://www.gridka.de/
[8] H. Marten et al. A Grid Computing Centre at Forschungszentrum Karlsruhe.
Response on the Requirements for a Regional Data and Computing Centre in
Germany, 2001, http://grid.fzk.de/RDCCG-answer-v8.pdf
[9] Distributed Data Transfers in CMS. International Conference on Computing in High
Energy and Nuclear Physics 2010, Taipei, Taiwan, Oct 18-22, CHEP 2010.
[10] J. Rehn et al. 2006 PhEDEx high-throughput data transfer management system. Proc.
of CHEP06, Mumbai, India.
[11] Consistency checking tools,
https://twiki.cern.ch/twiki/bin/view/CMS/PhedexProjConsistency
[12] V. Buege et al. Site specific monitoring of multiple information systems - the
HappyFace Project. CHEP 2009 conference.

ATLAS Distributed Computing
on the way to the automatic site exclusion

J. Schovancova, on behalf of the ATLAS Collaboration
Institute of Physics, Academy of Sciences of the Czech Republic, Prague, Czech Republic

This paper details the different aspects of ATLAS Distributed Computing experience after the first
1.5 years of LHC data taking. We describe the performance of the ATLAS Distributed Computing system and the
lessons learned during the 2010 - 2011 runs, pointing out parts of the system which were in good shape, and also
spotting areas which required improvements. We discuss improvements of the ATLAS Tier-0 computing
resources, and study data access patterns for Grid analysis to improve the global processing rate. We present
recent updates in the ATLAS data distribution model.

1. Introduction
ATLAS Distributed Computing [1] (ADC) supports 24x7 the simulation production of the
ATLAS experiment [2] at the Large Hadron Collider (LHC) at CERN, as well as data reprocessing and
data management operations at tens of computing centers located world-wide. ATLAS uses the
PanDA (Production and Distributed Analysis) [3] and DQ2 (Distributed Data Management)
[4] systems for the distributed back-end work flow, along with work injection systems (ProdSys)
to manage automated data distribution and processing. PanDA continuously completes 200k-
300k jobs per day all over the world on the Grid, and DQ2 has reached more than 10 GB/s
integrated transfer rate over all ATLAS sites.
In Section 2 of this paper we describe the data work flow of the Tier-0 and the CERN
Analysis Facility. In Section 3 we describe how data transfers are managed. In Section 4 we
discuss data processing, including central ATLAS or physics group production, and user
analysis. In Section 5 we describe the validation of ATLAS grid sites. In Section 6 we describe
the exclusion of degraded services.

2. Tier-0 and CERN Analysis Facility
In 2010 the ATLAS experiment recorded over 45 pb^-1 of proton-proton and heavy-ion data;
in 2011 ATLAS recorded 5.245 fb^-1 of proton-proton data, which corresponds to more than 2 PB of RAW
data. Data recorded by the ATLAS Data Acquisition System is then processed in the Tier-0 facility.
The primary role of the ATLAS Tier-0 center is to collect RAW data from the ATLAS Data
Acquisition System, merge it, and write it to tape. The first pass processing, which also takes
place in the Tier-0 facility, creates different levels of data reduction from the RAW data. There can
be up to 3400 concurrent reconstruction jobs running on dedicated resources, and up to an extra
1200 jobs on the public shared resources of the CERN Analysis Facility.
Tier-0 was designed to handle a data throughput of 320 MB/s at 200 Hz data acquisition
rates [1]. Since early August 2011 data is taken with average data acquisition rates of 400 Hz;
nevertheless, the Tier-0 facility has managed to process the data in a timely manner without
accumulating a backlog.
Once data is processed at Tier-0, it is registered in the data catalog and exported to ATLAS
Tier-1 centers and to the calibration centers. The maximal overall I/O at Tier-0, including internal I/O
for the Tier-0 processes, archiving to tape, and data export from Tier-0, is as high as 6 GB/s.

3. Distributed Data Management
The goal of Distributed Data Management (DDM) is to deliver data to ATLAS
physicists located world-wide in a timely manner. Before the first year of data taking a strictly
planned data placement policy was in place, and we followed the data processing model with
jobs running on the resources where the data was located. During the first year of data taking we
found that DDM performance no longer constrains us and moved from a strictly planned
data placement policy to a policy of more data copies. When a data copy is not used, it is
deleted. The new data placement policy is thus limited only by data deletion rates.
Over the first year of data taking the resource utilization policy evolved from "jobs go
to data" to "data and jobs move to the available CPU resources". Besides the planned data
placement a dynamic data placement approach has been employed.
We have two different dynamic data placement algorithms for Tier-1 and Tier-2 sites. The
Tier-1 algorithm assumes that primary copies of ATLAS data will be placed at the Tier-1 sites
based on planned data placement policies. In addition, a secondary copy of ATLAS data is created
when the data is popular and widely used by ATLAS physicists. The locations of the secondary dataset
replicas follow the shares based on the pledges in the MoU of the ATLAS grid sites.
The Tier-2 algorithm is independent of the Tier-1 algorithm. When an analysis job is
submitted to PanDA, dynamic data placement is triggered. When there is no dynamic replica
at a Tier-2 site, a certain number of user analysis jobs are waiting for a particular dataset,
and the dataset is reasonably popular, the dataset is replicated to the Tier-2 site with the highest
weight. The weight takes into account the recent site performance and activity, the number of dynamic
dataset subscriptions made in the cloud of that Tier-2 site in the past 24 hours, and the number of
replicas of the dataset in the same cloud.
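
A toy version of such a weight is sketched below; the functional form and the numbers are illustrative assumptions, not the actual ATLAS implementation.

def site_weight(recent_efficiency, running_jobs, subscriptions_24h, replicas_in_cloud):
    """Prefer well-performing, active sites whose cloud is not already busy
    replicating and does not hold many copies of the dataset."""
    return (recent_efficiency * (1.0 + running_jobs / 1000.0)
            / (1.0 + subscriptions_24h)
            / (1.0 + replicas_in_cloud))

candidates = {
    "T2_SITE_A": site_weight(0.95, 800, 2, 0),
    "T2_SITE_B": site_weight(0.90, 1500, 10, 1),
}
print("replicate to", max(candidates, key=candidates.get))
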
During the second year of data taking we are moving towards a data caching model
rather than strictly planned data placement. We have also revised the lifetime policies for various
derived data types based on their usage.

4. Data processing
Data processing consists of production and analysis activities. Production activities
involve data processing conducted either centrally by ATLAS or by physics or performance
groups, and Monte Carlo simulations. Analysis activities concern data processing
conducted by a single user. At any time ATLAS is able to run 60k production and 15k user
analysis jobs on the Grid. Data exported from Tier-0 is reprocessed at the Tier-1s. All reduced
data can be analyzed at any grid site. Outputs from data processing jobs are stored on ATLAS
storage resources available from the Grid.
Production jobs have a success rate of 90-95%. Analysis jobs have a 79%
success rate, while 11% of analysis jobs fail due to grid issues (e.g. a site has issues with
storage, SW installation, the batch system, etc.) and 10% of analysis jobs fail due to bugs in or
misconfiguration of the user SW.

5. Site validation
We aim to expose to the physics groups and users only grid resources which provide reliable
services. In order to achieve this goal grid services at the sites are continuously
monitored and validated. The whole ATLAS Distributed Computing infrastructure is monitored
24x7 by several teams of shifters with different responsibilities. Data processing at Tier-0 and
data export from Tier-0 to the Tier-1s and calibration centers are monitored by the Comp@P1
Shift Team located at CERN. Data transfers between ATLAS grid sites and the data processing
jobs of the production activities are monitored by the ATLAS Distributed Computing Operation
Shift Team distributed world-wide [5]. User activity is monitored by the Distributed Analysis
Support Team, also distributed world-wide.
DDM services at sites are validated through the DDM functional tests. Functional tests
track how well a site can transfer data, and how fast files of different sizes can be transferred.
Functional tests run between Tier-1 and Tier-2 sites from the same cloud, as well as between
any pair of sites no matter where they are located. Functional tests represent less than 1% of the
data throughput and therefore do not affect the performance of the complex DDM system.
The ATLAS SW installation system validates each release just installed at a site. The WLCG
Site Availability Monitor probes the availability of various services at the sites, e.g. the Computing
Element, Storage Element, FTS, and file catalogue. Monitoring information from different
sources is aggregated in the ATLAS Site Status Board [6].
Analysis functional tests use HammerCloud [7]. There are at least 3 flavors of
analysis test jobs running on every analysis site every hour. An analysis test job examines the correctness
of the job environment, the availability of the required SW release and condition data, data stage-in from
the storage element, data output to the storage element, and the registration of the output in the DDM.

6. Service exclusion and recovery
ATLAS Grid sites can declare downtimes in common site information systems. Such a
downtime then propagates to the ATLAS Grid Information System (AGIS) [8]. When a site is
in downtime or is failing functional tests, it is excluded from the affected activity according to
the ATLAS Site Exclusion Policy. Once the downtime is over or the issue is fixed, the site is tested,
included back and enabled for an activity after it passes tests conducted automatically or
by the Shift Team. The workflow of the service exclusion and recovery based on downtimes
published in the ATLAS Grid Information System is depicted in Fig. 1.
In DDM a site in downtime is automatically excluded and automatically included
back after the end of the downtime. HammerCloud automatically excludes sites from the analysis
activity based on test failures, and re-includes them once a certain number of tests in a row
succeed. HammerCloud production tests are in the testing phase. We aim to use
HammerCloud to automatically exclude sites also from production activities in early 2012.
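
The automatic re-inclusion logic can be illustrated by a small sketch that requires a number of consecutive successful tests; the threshold and test results below are assumptions for illustration only.

CONSECUTIVE_SUCCESSES_NEEDED = 3   # illustrative threshold

def should_reinclude(recent_results, needed=CONSECUTIVE_SUCCESSES_NEEDED):
    """recent_results: list of booleans for the latest test jobs, newest last."""
    if len(recent_results) < needed:
        return False
    return all(recent_results[-needed:])

print(should_reinclude([False, True, True, True]))   # True: re-include the site
print(should_reinclude([True, True, False, True]))   # False: keep it excluded
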
In order to take a huge workload off the Shifter Teams and ADC Experts, and to increase the
automation level of the exclusion, test and re-inclusion procedure, the SSB team works on the
implementation of the ATLAS Site Exclusion Policy. The ATLAS Site Exclusion takes into
account the performance of a site in the ATLAS activities. Performance is determined by the results
of the activity functional tests, and by the activity efficiency itself.

7. Conclusions
As stated by the spokesperson of the ATLAS experiment during the ATLAS
Collaboration week in June 2011, "Limitation to release results in ATLAS is not with
computing". There are two challenges for the future years of ATLAS Distributed Computing:
to assure the resiliency of the complex computing infrastructure, and to achieve automation of
recurring tasks to save operational manpower. To keep up the resiliency of the whole
computing infrastructure we need robust and reliable ADC systems and monitoring tools. The
ADC infrastructure has to sustain a higher load, with a higher rate of data transfers and more data
to be analyzed. ATLAS Distributed Computing is ready to face up to the future challenges and to
deliver data to the ATLAS physicists.

Acknowledgements
Jaroslava Schovancova gratefully appreciates support from the Academy of Sciences of the
Czech Republic and from the ATLAS Experiment. Support from the NEC'2011 organizers, the grant
LA08032 of the MEYS (MŠMT) of the Czech Republic, the grant 316911/2011 of the Grant Agency of
the Charles University in Prague, and the grant SVV-2011-263309 is greatly acknowledged.


References
[1] The ATLAS Collaboration. Atlas Computing: technical design report. Geneva, CERN, 2005.
[2] G. Aad et al. The ATLAS Experiment at the CERN Large Hadron Collider. JINST 3, 2008,
p. S08003.
[3] T. Maeno. PanDA: Distributed production and distributed analysis system for ATLAS. J.
Phys. Conf. Ser., 119, 2008, p. 062036.
[4] M. Branco et al. Managing ATLAS data on a petabyte-scale with DQ2. J. Phys. Conf. Ser.,
119, 2008, 062017.
[5] K. De et al. ATLAS Distributed Computing Operations Shift Team Experience.
Proceedings of the 18th International Conference on Computing in High Energy and
Nuclear Physics, 18 - 22 October 2010, Taipei, Taiwan.
[6] C. Borrego et al. Aggregated monitoring and automatic site exclusion of the ATLAS
computing activities: the ATLAS Site Status Board. Proceedings of the 5th Iberian Grid
Infrastructure Conference, 8-10 Jun 2011, Santander, Spain.
[7] D. van der Ster et al. HammerCloud: A Stress Testing System for Distributed Analysis.
Proceedings of the 18th International Conference on Computing in High Energy and
Nuclear Physics, 18 - 22 October 2010, Taipei, Taiwan.
[8] A. Klimentov et al. ATLAS Grid Information System. Proceedings of the 18th International
Conference on Computing in High Energy and Nuclear Physics, 18 - 22 October 2010,
Taipei, Taiwan.
Fig. 1. Workflow of the service exclusion and recovery based on downtime published in
the ATLAS Grid Information System

The free-electron maser RF wave centering and power density
measuring subsystem for biological applications

G.S. Sedykh, S.I. Tutunnikov
Joint Institute for Nuclear Research, Dubna, Russia
Main ideas of biological experiment
The cooperative influence of microwaves and conducting micro- or nano-particles provides a
local and selective action of microwaves on cancer cells. The application of high power nanosecond
microwave radiation for cancer cell treatment is studied by scientific teams at the Laboratory of
High Energy Physics and the Laboratory of Radiation Biology of JINR, Dubna, Russia.

Fig. 1. Main ideas of the biological experiment [1,2]

The microwave source is a free electron maser (FEM) based on the linear induction accelerator LIU-3000.


Fig. 2. The general view of the experimental facility for exposure of biological objects

The subsystem for RF wave power density measuring and centering of the biological object
To expose biological objects in the established mode, it is necessary to build an RF
wave centering and power density measuring subsystem. Metal filings are glued
onto the pedestal on which the irradiated object is placed, and these filings glow in a powerful
pulsed RF wave. Pointing the camera at the glowing filings and using specialized software for
pattern recognition, we obtain a glowing circle area and its deviation from the center. Then, by
means of a specialized kinematic device, the system tunes the height and angle of the lens,
which focuses the RF wave on the irradiated object.

Fig. 3. The pilot scheme of the subsystem for RF wave centering and power
density measuring

The position of the lens is regulated by using four electromagnets arranged at the
corners of the base. They are controlled by means of the master controller.



Fig. 4. The block-diagram of the controller for the RF wave centering and power
density measuring subsystem



Fig. 5. The controller for the RF wave centering and power density measuring


Fig. 6. The block diagram of the subsystem for the RF wave centering and power
density measuring

The next figure shows the real and ideal images of the glowing rings for subsequent
pattern recognition.


Fig.7. Pattern recognition for RF wave centering and power density measuring

For pattern recognition the authors have developed specialized software based on
DirectShow technology. The program is a graph consisting of a sequence of video filters, such
as a video capture filter, processing filters, and a filter for rendering the output video. For
video processing the authors have developed filters for binarization, noise removal, and
calculation of the required parameters of the glowing spot. The structure of the DirectShow Filter
Graph developed for pattern recognition is shown in the picture below.


Fig. 8. The structure of DirectShow Filter Graph developed for Pattern recognition


Fig. 9. Original image Fig. 10. Binarized image Fig. 11. The borders
found by using the method
of Roberts

The original frame is binarized to obtain only two kinds of points: the points of the
background and the points of interest. The video cleaning filter is necessary to remove the
video noise, which turned out to be a result of the exposure of the camera matrix to X-rays. The
area of the ring is determined by the number of pixels. In addition to the area, the ring is
characterized by its distance from the center. For the subsequent selection of the rings the program
searches for the boundaries by using the method of Roberts with an aperture of 2*2 pixels.
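
For illustration, the Roberts cross operator with a 2x2 aperture can be written in a few lines of Python with NumPy; this sketch reproduces the general method, not the authors' DirectShow filter.

import numpy as np

def roberts_edges(binary_image, threshold=0.5):
    """Roberts cross operator with a 2x2 aperture applied to a 0/1 image."""
    img = binary_image.astype(float)
    gx = img[:-1, :-1] - img[1:, 1:]    # difference along one diagonal
    gy = img[:-1, 1:] - img[1:, :-1]    # difference along the other diagonal
    magnitude = np.hypot(gx, gy)
    return magnitude > threshold        # boolean edge map

frame = np.zeros((8, 8), dtype=int)     # tiny synthetic stand-in for the ring image
frame[2:6, 2:6] = 1
print(roberts_edges(frame).astype(int))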

Summary:
- the subsystem for RF wave centering and power density measurement for the exposure of
biological objects has been developed,
- the kinematic scheme of the lens control has been offered,
- the scheme of the controller for the subsystem has been developed,
- the printed circuit board has been made, and the assembly and debugging of the controller have been
completed,
- the software for the controller has been developed,
- the software for the image analysis to control the RF wave power density and its
centering has been developed.
Plans:
- Debugging of software for pattern recognition,
- Installation of the system, its debugging and commissioning.
References
[1] N.I. Lebedev, A.V. Pilyar, N.V. Pilyar, T.V. Rukoyatkina, S.N. Sedykh. Data acquisition system for
lifetime investigation of CLIC accelerating structure.
[2] D.E. Donets, N.I. Lebedev, T.V. Rukoyatkina, S.N. Sedykh. Distributed control systems for
modernizing of JINR linear accelerators and of HV power supplies for the polarized deuteron source
POLARIS 2.
Emittance measurement wizard at PITZ, release 2

A. Shapovalov
DESY, Zeuthen, Germany
NiYaU MEPhI, Moscow, Russia

The Photo Injector Test Facility at DESY, Zeuthen site (PITZ) develops electron sources providing the high
brightness electron beams required for linac based Free Electron Lasers (FELs) like FLASH or the European
XFEL. The goal of electron source optimization at PITZ is to minimize the transverse projected emittance. The
facility is upgraded continuously to reach this goal. Recent updates of the PITZ control system resulted in
significant improvements of the emittance measurement algorithms. The standard method to measure the transverse
projected emittance at PITZ is a single slit scan technique. The local beam divergence is measured downstream
of a narrow slit, which converts a space charge dominated beam into an emittance dominated beamlet. The
program tool Emittance Measurement Wizard (EMWiz) is used at PITZ for automated emittance
measurements. The second release of EMWiz was developed from scratch and now consists of separate
sub-programs which communicate via shared memory. This tool provides the possibility to execute complicated
emittance measurements in an automatic mode and to analyze the measured transverse phase space. A new
modification of the method, called fast scan, was introduced; it shortens the measurement time while keeping excellent
precision. The new release makes the emittance measurement procedure at PITZ significantly faster. It has a friendly
user interface which simplifies the tasks of the operators. The program architecture now yields more flexibility in
operation and provides a wide variety of options.

Introduction
At the PITZ facility, the electron source optimization process is continuously ongoing.
The goal is to reach the XFEL specifications for beam quality: a projected transverse
emittance of less than 0.9 mm mrad at a bunch charge of 1 nC. The speed of an individual
emittance measurement, its reliability and reproducibility are the key issues for the electron
source optimization. That is why the automation of the emittance measurement at PITZ is of
great importance. The nominal method to measure the transverse projected emittance is a slit
mask technique. Many machine parameters have to be tuned simultaneously in order to
achieve a high performance of the photo injector. This task is organized at PITZ through the
emittance measurement wizard (EMWiz) software [1]. This advanced high-level software
application interacts through a Qt [2] graphical user interface with the DOOCS [3] and TINE
[4] systems for machine control and ROOT [5] for data analysis, visualization and reports.
For communication with the video system and acquiring images from cameras at several
screen stations, a set of video kernel libraries have been created [6]. During the past years
some measurement hardware was replaced with faster and more precise devices. Accordingly,
new methods and algorithms have been implemented in the emittance measurement
procedures. This increases the measurement accuracy and reduces the measurement time. In this paper,
details about the new Emittance Measurement System, covering both the hardware and software parts,
are described.

Emittance measurement hardware
The transverse emittance and phase space distribution are measured at PITZ using the
single slit scan technique [7, 8]. The Emittance Measurement SYstem (EMSY) contains
horizontal and vertical actuators with 10 and 50 μm slit masks and a YAG screen for the beam
size measurement. The slit mask angle can be precisely adjusted for the optimum angular
acceptance of the system (Fig. 2). Three EMSY stations are located in the current setup as
shown in Fig. 1. The first EMSY station (EMSY1) behind the exit of the booster cavity is
used in the standard emittance measurement procedure. It is located 5.74 m downstream of the
photocathode, corresponding to the expected minimum emittance location. For this single slit
scan technique, the local divergence is estimated by transversely cutting the electron beam
into thin slices. Then, the size of the beamlets created by the slits is measured at the YAG
screen at some distance downstream of the EMSY station. The 10 μm slit and a distance
of 2.64 m between the slit mask and the beamlet observation screen are used in the standard
emittance measurement. Stepper motors are applied to move each one of the four axes
separately. They give the precise spatial positioning and orientation of the components.



Fig. 1. Layout of the Photo Injector Test facility at DESY, Zeuthen site (PITZ)

Every EMSY has four stepper motors which are controlled by the new XPS-C8
(Newport) controller, which was mounted at the beginning of 2011. This new controller type
gives the possibility to read all hardware values during movement. The average value read
time is about 5 msec, which is a big improvement compared to the old controller, which had a
read time of about 50 msec. With this new possibility, the EMSY software was redeveloped, which
in turn opened new horizons for improving the quality and speed of the emittance measurement
processes. On each of the actuators, besides the slit masks in both the x- and y-planes, a YAG screen
is mounted to observe the beam distribution.



Fig. 2. Layout of Emittance Measurement System


A CCD camera is used to observe the images on the screens (Fig. 2). During the past
years the PITZ video system was also under continuous improvement. The hardware and software
parts were updated with the third release [9]. Most important for the emittance measurement
is that the problem of missed and unsynchronized frames is now solved thanks to new
hardware and improved software. Earlier, a considerable part of the beam and beamlet
measurements was rejected, because many frames were missed or frame grabbing was not
synchronized with the actuator movement. Sometimes operators lost up to 50% of the operating
time because of these problems.

Emittance measurement analysis
A schematic representation of the single slit technique is shown in Fig. 3. For this
technique the local divergence is estimated by transversely cutting the electron beam into thin
slices and measuring their size on a screen after propagation in a drift space. The so-called
2D-scaled emittance is then calculated using the following definition [1]:

\varepsilon_{n,x} = \beta\gamma \, \frac{\sigma_x}{\sqrt{\langle x^2 \rangle}} \, \sqrt{\langle x^2 \rangle \langle x'^2 \rangle - \langle x x' \rangle^2}     (1)
Here \langle x^2 \rangle and \langle x'^2 \rangle are the second central moments of the electron beam distribution
in the trace phase space obtained from the slit scan, where x' = p_x / p_z represents the angle of
a single electron trajectory with respect to the whole beam trajectory. The Lorentz factor \beta\gamma
is measured using a dispersive section downstream of EMSY.



Fig. 3. Schematic representation of the single slit scan technique

The factor \sigma_x / \sqrt{\langle x^2 \rangle} is applied to correct for possible sensitivity limitation of low
intensity beamlets, where \sigma_x is the rms whole beam size measured at the slit location. In the
emittance measurement setup and procedure, intrinsic cuts have been minimized by e.g. using
highly sensitive screens, a 12 bit signal resolution CCD camera and a large area of interest to
cover the whole beam distribution. Therefore, the emittance value is called the 100% rms emittance.
The measurement system was optimized to measure emittances lower than 1 mm mrad for
1 nC charge per bunch with a precision of about 10% [1].
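
For illustration, definition (1) can be evaluated numerically from a reconstructed trace space as in the sketch below; the sample distribution and the values of sigma_x and beta*gamma are invented for the example only.

import numpy as np

def scaled_emittance(x, xp, weights, sigma_x_slit, beta_gamma):
    """x in m, xp = px/pz in rad, weights = beamlet intensities."""
    w = weights / np.sum(weights)
    x = x - np.sum(w * x)                       # remove the mean position
    xp = xp - np.sum(w * xp)                    # remove the mean angle
    x2 = np.sum(w * x ** 2)                     # <x^2>
    xp2 = np.sum(w * xp ** 2)                   # <x'^2>
    xxp = np.sum(w * x * xp)                    # <x x'>
    eps_geom = np.sqrt(x2 * xp2 - xxp ** 2)     # geometric rms emittance
    return beta_gamma * (sigma_x_slit / np.sqrt(x2)) * eps_geom

rng = np.random.default_rng(0)
x = rng.normal(0.0, 0.4e-3, 10000)              # toy 0.4 mm rms beam
xp = rng.normal(0.0, 1.0e-4, 10000)             # toy 0.1 mrad rms divergence
w = np.ones_like(x)                             # uniform beamlet intensities
print(scaled_emittance(x, xp, w, 0.4e-3, 40.0), "m rad")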

Emittance measurement wizard
Since December 2010 a new release of EmWiz has been available at PITZ. This second
release has replaced the previous version [1] completely. All modern features of the Qt
framework and the new hardware possibilities of the PITZ control system were
implemented in this new wizard. It has a flexible design, being developed as a set of modules
where each module has a specific task. A basic idea for this version was to simplify the EmWiz
GUI by decreasing the number of buttons, the amount of user readable information, the number of windows and
the operator actions which are needed for a measurement process. A further goal was to provide
an interface for easy further development and for easily adding new tools to the current working
version. It is written for 64 bit Scientific Linux CERN version 5.0, but can be recompiled for
other platforms.

Emittance measurement procedure using EMWiz
The first unit is named Fast emittance scanner (FES). This program (Fig. 4) provides
the measurement processes and hardware control. It can be started only by shift operators in the
control room, because only one program instance can be in online mode with the
measurement hardware.



Fig. 4. Fast emittance scanner, options (FES)

The upper part of the GUI frame has a list box, which contains two types of messages:
[REPORT] and [ERROR] (Fig. 4). All actions of the operator on the program and the machine,
the system status, error events and alarms are stored with a time-stamp in a separate log-file
(black box). Using this file, an expert can support the shift crew remotely to fix a problem,
and it is useful for explaining unusual results. In the list box error messages are marked in
red color and contain information about an error and instructions on how to fix it. This
feature is common to all programs of EmWiz. Before using the program for a measurement,
the machine parameters have to be adjusted. Some necessary values, e.g. the gun and booster
energy and the laser beam rms size, are measured by other tools and put into the corresponding fields
(Fig. 5). An operator can set the actuator speed. The measurement precision gets better with
lower speed, but the measurement time is increased.
The typical emittance measurement time for a selected current value with the default
speed (0.5 mm/s) is about 3 minutes (selected scan region = 4 mm). The time is about
5 minutes if the speed equals 0.2 mm/s.



Fig. 5. Set values dialog (FES)

The operator's next step is to select an EMSY device. 6 EMSY devices are available for
emittance measurement: 3 EMSY stations x 2 axes. The video system at PITZ has more than
20 digital cameras with 8 and 12 effective bits per pixel and more than 7 video servers [6]. The video
part, which can control the cameras, change settings and connect to a video server, is
excluded from EmWiz. Currently it is realized via a set of programs written in
Java, yet FES makes it possible to get the video image, to apply filters, to grab and to save
video images and to read camera properties. An operator has to set a proper video server,
check the camera status and set a scan region. All last used values are stored in EmWiz. If
necessary, an operator can set their own file name, or use a predefined unique file name and
set a path addition for a predefined path name. This is done for flexibility, because this program
is also used for other (not only emittance) measurements.
The scan frame of FES shows the appointed hardware parameters and the hardware status, and
provides a set of measurement and control buttons when a measurement is possible for the current
hardware status (Fig. 6).



Fig. 6. Scan panel, bottom part of FES

For checking the quality of an emittance measurement two report diagrams are available
(Fig. 7). They appear during the measurement procedure. The absence of a dramatic saturation level
is one important criterion for obtaining good data quality. The major part of the signal image pixels
should have an intensity between 50% and 70% of the maximum intensity. For example, the
signal rate should be less than 3000 units (X-axis) for a 12 bpp camera. This is a criterion for a
measurement without saturation.
(a) Spectrum plot

(b) Qualitative plot

Fig. 7. Spectrum and qualitative plots (FES)

(a) X-axis: signal rate of a video matrix pixel; Y-axis: number of pixels with that signal rate;
the red line in the right part of the plot indicates saturated pixels.
(b) X-axis: position of the beamlet, mm; Y-axis: reference unit; the colors mean: green - good
frame, red - missed frame, sky blue - inaccurate frame position, violet - late frame, white -
early frame, blue - signal sum of the beamlet, white numbers - number of saturated points.

Each frame corresponds to a certain actuator position. Both the video frame and actuator
position recording times are controlled. If a frame comes too late or too early, this frame is
bad for emittance measurements. The qualitative plot gives information about missed and
bad frames, local saturation, actuator movement and the signal level. With the help of the report
plots an operator can tell for sure whether the measurement is successful or not and can
interrupt an unsatisfactory measurement procedure without saving data.
The transverse beam images and background frames at the slit location (EMSY) and at
the beamlet screen (MOI) are recorded via the Fast Scan, EMSY and MOI buttons (Fig. 6).
These images are required for the emittance calculation using formula (1). Then the beamlet
scan procedure Fast Scan can be started via the same button. At first the background
statistics is collected; in the next step, the slit is moved continuously with a constant speed in
the selected scan region. At the same time, the CCD camera attached to the selected video
server grabs the image frames from the beamlet observation screen at a fixed rate (10 Hz); the
measurement times and actuator positions for each image frame are recorded in parallel. The
operator repeats the scan for all main solenoid current values of interest.



Fig. 8. Emittance calculator, panel Options (EC)

The measurement time for one solenoid setting is about 3 minutes. This time includes
all necessary procedures for the measurement. The data collection process in FES stores all
machine parameters continuously and informs the operator about critical fluctuations of the
controlled parameters which can influence the measurement reliability. The data is recorded
with cross-references to the number of the grabbed frame. That means the actuator position,
RF power and gun temperature are known for each video frame. This gives the possibility to
process and explore the data with more precision.




Fig. 9. Dialog boxes for emittance calculation, manual mode (EC)

The last step in the measurement is to calculate the emittance using the Emittance
calculator (EC) tool (Fig. 8). FES sends the measurement data to EC at the request of the
operator. If EC is not started yet, FES starts it. EC makes the calculations and sends the plot data
to the program Root plotter (RP). The task of RP is plotting diagrams and reports. The
program RP is a symbiosis of the plotting system ROOT and Qt. The operator can also set a
folder with saved data by hand (Fig. 9), and then EC calculates the data and plots the results.
A lot of data processing takes place during the calculation. Different filters, formulas etc.
can be applied to the data. If necessary, some parameters can be customized via the options
(Fig. 8). A user can select plotting options (Fig. 10) to plot all intermediate results after each
complex process. At present up to 27 different plots are available for use.



Fig. 10. Bottom part of emittance calculator, panel Plot (EC)

At the end of the calculation the emittance plot is shown by default (Fig. 11). The plot
data can be exported to CSV/TXT formats.



Fig. 11. Phase Space plot, emittance report (RP)

Currently, the wizard consists of separate programs for each logical task.
Communication between the program components is realized through shared memory. This approach
makes it possible to run EMWiz components on different user stations operating on one
host, which increases graphics performance and decreases the CPU load. The disadvantage
of using shared memory is that the operating system cannot release the used shared memory
without special actions.
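Since EMWiz is built with Qt, such an exchange between components on one host could, for example, be organized with QSharedMemory, as in the minimal sketch below (the segment key and payload handling are invented for illustration and do not reproduce the EMWiz code):

#include <QByteArray>
#include <QSharedMemory>
#include <QString>
#include <QtGlobal>
#include <cstring>

// Minimal sketch: one component publishes a small data block that another
// component running on the same host can attach to and read.
// The key "emwiz_demo_block" is invented for this example.
bool publishBlock(const QByteArray& payload)
{
    QSharedMemory shm(QStringLiteral("emwiz_demo_block"));
    if (!shm.create(int(payload.size())) && !shm.attach())
        return false;                          // could neither create nor attach

    shm.lock();                                // serialize access between processes
    const int n = qMin(int(payload.size()), int(shm.size()));
    std::memcpy(shm.data(), payload.constData(), size_t(n));
    shm.unlock();

    // The segment persists while any process stays attached; segments left over
    // by crashed components are exactly what a watchdog such as MW cleans up.
    return true;
}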
An instance of the Memory Watcher (MW) tool (Fig. 12) is always started together with an
instance of EMWiz. The tool is hidden from the user; only some useful information can be read from it.
MW automatically closes unused programs after a predefined time, cleans up orphaned shared
memory, kills hanging components and reports user conflicts that block the start of EMWiz
components. This is very useful at PITZ, because the control system comprises many
computers and users and has to be in operation continuously (24/7).




Fig. 12. Memory watcher (MW)


Conclusions
The Emittance Measurement Wizard (EMWiz), which consists of a set of applications, is
one of the main measurement tools at PITZ. The new version of EMWiz
significantly decreases the measurement time while improving the accuracy. The wizard
is tightly interfaced with the machine control and video systems. Using this program, the
transverse phase space and the emittance value can be measured much faster and more reliably. A
friendly GUI and a wide variety of options make the operator's job more efficient. The
majority of the wizard components are universal and can be used for other tasks; with the help
of this tool operators can solve a wide spectrum of tasks. The PITZ facility is upgraded
continuously, and the development of EMWiz is also ongoing. Work on EMWiz continues in
the directions of complete automation of the measurement process, simplification of use, and
improvement of the quality of the experimental data and of the calculation algorithms.

References
[1] A. Shapovalov, L. Staykov. Emittance measurement wizard at PITZ. BIW2010, May 2010.
[2] http://qt.nokia.com
[3] http://tesla.desy.de/doocs/doocs.html
[4] http://adweb.desy.de/mcs/tine/
[5] http://root.cern.ch/drupal/
[6] S. Weisse et al. TINE video system: proceedings on redesign. Proceedings of ICALEPCS,
Kobe, Japan, 2009.
[7] L. Staykov et al. Proceedings of the 29th International FEL Conference, Novosibirsk, Russia,
MOPPH055, 2007, p. 138.
[8] F. Stephan, C.H. Boulware, M. Krasilnikov, J. Bähr et al. Detailed characterization of electron
sources at PITZ yielding first demonstration of European X-ray Free-Electron Laser beam
quality. Phys. Rev. ST Accel. Beams 13, 2010, p. 020704.
[9] S. Weisse, D. Melkumyan. Status, applicability and perspective of TINE-powered video
system, release 3. PCaPAC2010, October 2010.

Modernization of monitoring and control system of
actuators and object communication system of experimental
installation DN2 at 6a channel of reactor IBR-2M

A.P. Sirotin, V.K. Shirokov, A.S. Kirilov, T.B. Petukhova
Frank Laboratory of Neutron Physics, Joint Institute for Nuclear Research, Dubna, Russia

Introduction
Determination of the composition and development of a unified architecture for actuator
control systems and for data acquisition systems reading out a complex of sensors is a topical task
connected with the spectrometer modernization at the IBR-2M reactor. From the moment the
IBR-2 reactor was put into operation, the object communication systems (OCS), i.e.
systems for communication with the experimental installation and the sample unit, consisted first
of digital and analog input/output units in the CAMAC standard and, later on, in the VME
standard.
Subsequently, however, the processors built into VME were displaced by PCs as control
computers [1], which in turn led to abandoning the VME bus as the OCS base.
The present work considers the modernization of the monitoring and control system of the
DN2 installation. The experience gained will also be used in the modernization of other
spectrometers of the IBR-2M reactor. The same is true for the main approaches and criteria in the
design of control and monitoring systems of experimental installations at the IBR-2M reactor,
which are formulated in the present paper.

1. Existing control and monitoring system of experimental installation DN2
Fig. 1 shows the block diagram of the control and monitoring system of the DN2
installation before modernization.
Fig. 1. Block diagram of the monitoring and control system of the DN2 installation before
modernization (main blocks: VME O/I register, VME step motor controller, step motor drivers 1-5,
step motors 1-5, manual controller, angle sensor, asynchronous motor, and the CONV-3 unit with
RS232 links to the beam controller, the background chopper and the DRC and Eurotherm
temperature controllers)
Both the control of the rotating platform with the detector and the readout of the sensors were
performed through the input/output register located on the VME bus.
The step motors were controlled by a controller located on the VME bus, through a
corresponding driver for each step motor (1-5).
The step motor controller is located in the VME crate and executes the simplest
operation: a given number of steps in a given direction with a given velocity, under the
control of limit switches.



Fig. 2. Goniometer GKS-100 with sample cassette

The GKS-100 goniometer (Fig. 2) provides sample orientation around three rotation axes: a
vertical axis and two mutually orthogonal horizontal axes.
Depending on the experimental conditions, the GKS-100 was in a number of cases replaced
by a Huber rotation platform with a vertical rotation axis. The rotation was also
limited by two control points.
The gate valve and the chopper phase are also controlled through input/output units in
the VME standard.
Communication with the DRC and Eurotherm temperature controllers was also carried out
through the CONV-3 unit in the VME standard.

2. Structural scheme of control and monitoring system of experimental
installation DN2 after modernization
A simplified structural scheme (Fig. 3) of the step motor control and sensor data acquisition
system is proposed. It can be implemented both on the basis of the CAN interface and on
the basis of RS485 [2, 3].
In the modernization of actuator control systems, it appears reasonable to retain
the separation of the controller/driver and the step motor, which simplifies the replacement of the
motor or controller type. In the future, step motors of dimension types 42, 57 or 86, possibly
in combination with gear reducers with ratios of 3-150, are expected to be used.
In modern actuator control systems the controller is increasingly integrated into the step
motor driver at almost no additional cost. CAN or RS485 can be used as the step motor
controller interface. The use of USB-CAN and USB-RS485 adapters with galvanic isolation
provides reliable high-speed communication with a PC and the possibility of operation at
distances of up to 1000 m.
CAN step motor controllers/drivers of the KSMC-1, KSMC-8 and KUMB203 types [4],
with currents up to 1, 8 and 40 A respectively, are proposed. These controllers have
compatible software and cover the whole range of motors used at the FLNP spectrometers.
However, this does not exclude the use, within one spectrometer, of controllers with a
different interface, for example RS485.

Fig. 3. Structural scheme of motor control and sensor data acquisition system

In some cases, the task of reducing the reactor time lost on checking actuator positions
becomes very urgent. That task is reliably solved by the use of absolute actuator position
sensors. Most suitable are multi-turn angle sensors consisting of a single-turn encoder
(12-16 bits) and a revolution counter (12-16 bits). They can be used to monitor both
angular and linear movements.
Absolute multi-turn angle sensors with the synchronous SSI interface are proposed;
all the sensors used are compatible with this hardware interface.
An SSI-RS485 converter is used to connect a multi-turn angle sensor to the RS485 bus,
and a USB/RS485 converter connects the bus to the PC, where it is emulated as a COM port.
RS485/SSI converters of a single type are proposed, since this approach allows all sensors to be
connected to the PC via one RS485 line through a single USB-RS485 converter.
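For illustration, a raw multi-turn SSI word of up to 32 bits could be unpacked into a turn count and an angle as sketched below; the 12+12 bit layout is only an assumed example and must be taken from the actual sensor documentation:

#include <cstdint>

// Illustrative decoder for an absolute multi-turn SSI reading.
// Assumed layout: the upper 'turnBits' hold the revolution counter and the
// lower 'posBits' hold the single-turn position (a Gray-to-binary conversion,
// if required by the sensor, is omitted here).
struct AbsolutePosition {
    uint32_t turns;        // number of full revolutions
    double   angleDeg;     // angle within the current revolution, degrees
};

AbsolutePosition decodeSsi(uint32_t raw, unsigned turnBits = 12, unsigned posBits = 12)
{
    const uint32_t posMask  = (1u << posBits) - 1u;
    const uint32_t turnMask = (1u << turnBits) - 1u;

    AbsolutePosition p;
    p.turns    = (raw >> posBits) & turnMask;
    p.angleDeg = 360.0 * static_cast<double>(raw & posMask)
                       / static_cast<double>(posMask + 1u);
    return p;
}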
Thus, the structural scheme of the monitoring and control system of the DN2 spectrometer at
the 6a channel of the IBR-2M reactor takes the form shown in Fig. 4.
The spectrometer comprises two types of step motor controllers: those based on the
CAN bus (2 pcs.) and on RS485 (10 pcs.).

3. Main elements of the control and monitoring system of the DN2 spectrometer at the
channel 6a of the IBR-2M reactor
The system deploys absolute multi-turn angle sensors OCD-SL00-B-1212 and FL86
step motors with gear reductions of 1/1, 1/5 and 1/25. The following adapters
are used as converters: the LIR-916 converter (SSI/RS485), the UPort 1450-I concentrator
(USB/RS485) and the USB-CAN2 adapter (USB/CAN).
All KSMC-8 controllers are mounted in a 19" rack, in a 19" 3U frame, at a distance
of 20 m from the PC. The distance to the motors and sensors is up to 10 m.
All KSMC-8 controllers are connected to one CAN line and from there to the PC
through the USB-CAN2 adapter. Each controller has its own address on the CAN line.
USB-USB extenders allow the UPort 1450-I concentrators to be located directly at
the experimental installation. The concentrator is mounted in the 19" rack at a distance of
20 m from the PC, and the connection cables then diverge, in a star-like topology, to the control and
monitoring devices.
Two UPort 1450-I USB/RS485 converters are included in the control system, each of which
has 4 isolated channels for connection to external devices. Each channel can function as
RS232, RS422 or RS485. One RS485 channel is used to connect the multi-turn angle
sensors of the OCD type.
The LIR-916 RS485/SSI adapter connects the SSI output line of a sensor to the
common RS485 line. The total number of bits of the single-turn position and the turn counter
may reach 32. Each RS485/SSI adapter has its own address in the range 1-256 on the
RS485 line. Access to the RS485 line is emulated as a COM port of the PC.
The input/output module (110-220.4.4) provides monitoring and control of
up to 4 relay input channels and 4 relay output channels.
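On the PC side, all of these devices sit behind emulated COM ports. A minimal POSIX sketch of how such a port could be opened in raw 8N1 mode is given below (the device path and baud rate are placeholders, not values from the paper, and any request/response protocol depends on the concrete converter):

#include <fcntl.h>
#include <termios.h>
#include <unistd.h>

// Open an emulated COM port (e.g. a UPort 1450-I channel) in raw 8N1 mode.
int openComPort(const char* device = "/dev/ttyUSB0", speed_t baud = B115200)
{
    int fd = ::open(device, O_RDWR | O_NOCTTY);
    if (fd < 0) return -1;

    termios tio{};
    if (tcgetattr(fd, &tio) != 0) { ::close(fd); return -1; }

    cfmakeraw(&tio);                   // raw mode: no echo, no line editing
    cfsetispeed(&tio, baud);
    cfsetospeed(&tio, baud);
    tio.c_cflag |= CLOCAL | CREAD;     // ignore modem lines, enable receiver
    tio.c_cflag &= ~(PARENB | CSTOPB); // 8 data bits, no parity, 1 stop bit

    if (tcsetattr(fd, TCSANOW, &tio) != 0) { ::close(fd); return -1; }
    return fd;                         // caller reads/writes device frames here
}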

Conclusions
The approaches to the design of the actuator monitoring and control system and of the object
communication system (OCS) of the DN2 spectrometer allowed us to formulate the main criteria
for the modernization of other spectrometers at the IBR-2M reactor.
Actuator control systems should comply with the following requirements:
- separation of motor control channels and sensor data acquisition channels,
- design-stage separation of the controller/driver and the step motor,
- step motors with 2 or 4 coils and coil currents up to 1 A and 8 A are recommended,
- design-stage separation of the communication interface and the sensor,
- sensors: absolute multi-turn angle sensors with an SSI interface and up to 32 bits are recommended,
- SSI/RS485 interfaces of a single type are recommended for sensor connection to the RS485 line.
The issue of standardization is solved by the use of one type of controllers/drivers and of
SSI/RS485 adapters.
The use of a ready-made integrated solution is advisable for simpler control systems.
Object communication systems (OCS), i.e. communication systems with the
experimental installation and the sample unit, should comply with the following requirements:
- spectrometer OCS systems are connected to a PC through the USB interface,
- control channels of experimental installation parameters that are not functionally connected,
i.e. that do not belong to one task in the spectrometer software, are separated,
- it is recommended to use new OCS equipment with standard USB or RS485 interfaces
through USB/RS converters emulated as COM ports,
- equipment with the standard interfaces RS232 and RS422 is connected to a PC through
USB/RS converters emulated as COM ports.
The issue of standardization is solved by the application of one type of digital and analog
input/output modules operating on the RS485 line.

In conclusion, the authors would like to express their gratitude to Dr. V.I. Prikhodko
for useful discussions and consultations.


Fig. 4. Structural scheme of the control and monitoring system of the DN2 spectrometer at the 6a
channel of the IBR-2M reactor (main blocks: two UPort 1450-I 4-port RS-232/422/485 USB-to-serial
converters connected to the PC through a 20 m USB-USB extender; OSMC-17RA-BL (1.7 A) controllers
for step motors 1-10 on RS485; KSMC8 (8 A) controllers for step motors 1 and 2 on the CAN bus via a
USB-CAN2 converter; SSI/RS485 converters (LIR-916) for the angle sensors of the detector platform and
of the Huber goniometer; the O/I register (110-220); DRC and Eurotherm temperature controllers;
beam shutter controller; background chopper; power supplies +48 V, +24 V and +12 V)

References

[1] A.V. Belushkin et al. 2D position-sensitive detector for thermal neutrons.
NEC2007, XXI International Symposium, Varna, Bulgaria, September 10-17,
2007, pp. 116-120.
[2] V.V. Zhuravlev, A.S. Kirillov, T.B. Petukhova and A.P. Sirotin. Actuator control
system of a spectrometer at the IBR-2 reactor as a modern local controller network
- CAN. 13-2007-170, Dubna, JINR, 2007.
[3] N.F. Miron, A.P. Sirotin, T.B. Petukhova, A.S. Kirillov et al. Modernization and
creation of new measurement modes at the MOND installation. XX-th Workshop
on the use of neutron scattering in solid state investigations (WNCSSI-2008), 13-
19 October 2008, Abstracts, Gatchina, 2008, p. 150.
[4] Electronics, electromechanics. JSC Kaskod, St.-Petersburg, www.kaskod.ru



VME based data acquisition system for ACCULINNA
fragment separator

R.S. Slepnev¹, A.V. Daniel¹, M.S. Golovkov¹, V. Chudoba¹·², A.S. Fomichev¹,
A.V. Gorshkov¹, V.A. Gorshkov¹, S.A. Krupko¹, G. Kaminski¹·³,
A.S. Martianov¹, S.I. Sidorchuk¹ and A.A. Bezbakh¹

¹ Flerov Laboratory of Nuclear Reactions, Joint Institute for Nuclear Research, Dubna, Russia
² Institute of Physics, Silesian University in Opava, Czech Republic
³ Institute of Nuclear Physics PAN, Krakow, Poland

A VME-based data acquisition system for experiments with radioactive ion beams (RIBs) at the
ACCULINNA facility of the U-400M cyclotron (Dubna, Russia, http://aculina.jinr.ru/) is described. The DAQ system
includes a RIO-3 processor connected to the CAMAC crates via GTB resources, a TRIVA-5 master trigger, standard
VME units ADC, TDC, QDC (about 250 parameters in total) and various software (Multi Branch System (MBS)
version 5.0, http://www-win.gsi.de/daq/; Go4 version 4.4.3, based on CERN ROOT, http://root.cern.ch/drupal/,
http://www-win.gsi.de/go4/; and the real-time OS LynxOS version 3.3.1). The new system provides the flexibility to
use new VME modules (registers, digitizers, etc.) and the possibility to process a trigger rate (~5000 s⁻¹) higher than
that of the old system.
1. Introduction
In order to study light proton- and neutron-rich nuclei close to the drip line, the physics
programs for the ACCULINNA [1] facility and the future ACCULINNA-2 [2,3] facility
(Fig. 1) require an adequate data acquisition system (DAQ). Such a system should satisfy
several conditions: it should have a low price per channel and the ability to process a high trigger rate
with a low 'dead time' (the time during which the DAQ is insensitive). In addition, it should be scalable
and flexible.

Fig. 1. ACCULINNA fragment separator and ACCULINNA-2 (suggested) facility



2. VME based data acquisition system
The experimental setup is shown in Fig. 2. In this scheme the DAQ includes various
electronic modules, computers and the related software. To save time, we decided to take the
existing DAQ of GSI [4] as an architectural prototype. The DAQ is based on VME and
CAMAC modules, the real-time operating system LynxOS and the MBS software, combined with
the Go4 software based on the ROOT data analysis framework for visualization of the
information coming from the detectors.






Fig. 2. Schematic view of the experimental setup (signal chain: detectors, preamplifiers, amplifiers and discriminators, DAQ)

As stated above, we have replaced our old DAQ, based solely on CAMAC, by a new
one based on VME, CAMAC, etc. We were ready to begin its use in the experiment
searching for ²⁶S after about half a year of development of the new data acquisition system [5].
Our DAQ consists of a VME crate coupled with CAMAC crates via GTB resources, a RIO-3
processor, a TRIVA-5 master trigger and other VME and CAMAC modules (Fig. 3). Notably,
in the experiments we used V785 CAEN analog-to-digital converters for the amplitudes from the
silicon detectors (energy losses and positions of the charged particles), V775 time-to-digital
converters for the signals from the plastic scintillators to measure the time-of-flight (TOF) of the
charged particles, V792 charge-to-digital converters for the amplitudes from the plastic scintillators,
and a V560 as scaler. To use these modules, one has to set a 'geographical address' for each
module in the VME crate by turning the rotary switch on it. The VME crate was connected with
two CAMAC crates. In the first crate, register modules were placed, intended for processing
the signals coming from the multiwire proportional chambers measuring the position of the
radioactive ion beam on the physical target. In the second crate, charge-to-digital converters
were placed for the amplitudes, with the corresponding timing via TDCs, coming from the neutron
detector array (32 detectors based on stilbene crystals 80 mm in diameter and 50 mm
thick, coupled with 3'' XP4312 photomultipliers). An additional CAMAC crate was used
to set the numerous parameters of the discriminators and spectroscopic amplifiers, individual
for each channel.




Fig. 3. VME crate with RIO3, TRIVA5 and standard modules (ADC, QDC, TDC etc.)

In order to use the MBS program and read out the VME and CAMAC modules, one
needs C functions for each type of module and a user function acting as the main program,
in which the procedures of initialization, readout, etc. are described. The C code obtained
is then compiled under LynxOS running on the RIO-3 VME module, which boots over the
network from an external computer running Scientific Linux. If everything has been done
correctly, one can start the MBS program under LynxOS; we generally used it to read out the VME
and CAMAC modules, after which data acquisition is started from this program. To see the
events coming from the detectors online in graphical form, we used the Go4 software. For this,
we had to write code in the Go4 subroutines in C++: decode the information coming from the
MBS program, create an event structure describing all electronic modules, create histograms
and pictures and code how to fill them from the events, etc. The C++ code was then compiled
under Scientific Linux, and thereupon Go4 could be started. Fig. 4 shows how the real
experimental spectra look in Go4.



Fig. 4. Screen with the data output of the Go4 software. The top row shows data from the
TOF plastics (two matrices of left amplitude vs. right amplitude and a dE-TOF spectrum);
the bottom row shows the beam spot on the target in both directions (data from the MWPCs).
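As a rough illustration of the kind of user code involved, the fragment below sketches how decoded module data could be filled into ROOT histograms of the type shown in Fig. 4; the event structure and channel layout are purely illustrative and do not reproduce the actual MBS/Go4 subroutines used at ACCULINNA:

#include <cstdint>
#include <vector>
#include "TH1F.h"
#include "TH2F.h"

// Illustrative event structure: a few channels of an ADC and a TDC module.
struct RawEvent {
    std::vector<uint16_t> adc;   // amplitudes, e.g. from silicon detectors
    std::vector<uint16_t> tdc;   // times, e.g. from the TOF plastics
};

// Fill monitoring histograms from one decoded event, in the spirit of the
// Go4 user processing step described in the text.
void fillHistograms(const RawEvent& ev, TH1F& hAmp, TH2F& hAmpVsTime)
{
    for (std::size_t ch = 0; ch < ev.adc.size(); ++ch) {
        hAmp.Fill(ev.adc[ch]);
        if (ch < ev.tdc.size())
            hAmpVsTime.Fill(ev.tdc[ch], ev.adc[ch]);
    }
}

// Example booking (plain ROOT): TH1F hAmp("hAmp", "ADC amplitude", 4096, 0, 4096);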

3. New possibilities
An advantage of the DAQ system described above was observed during a methodical
experiment searching for the optimal tuning of the ACCULINNA facility for maximal
purification of the ¹⁸Ne secondary beam resulting from fragmentation of a ²⁰Ne (53 MeV/nucleon)
projectile on a beryllium target (Fig. 5).
The 'dead time' was found to be only about 40% at ~5000 s⁻¹ triggers, which provides an
operating speed about 10 times better than that of CAMAC. This experiment was aimed at a future
investigation of the ¹⁸Ne(p,d)¹⁷Ne→¹⁵O+2p reaction. The previous DAQ did not allow us to work
with such a trigger rate, and in that case we worked with a tuning for ¹⁸Ne that was not optimal.
This experiment has shown that we can now work at the ACCULINNA facility with RIBs
(generally proton-rich) even when the beam contains only a very small percentage of the nuclei
of interest and the load on the detectors, electronics and DAQ is very high. In the future we plan
to exploit the advantages of the new DAQ by starting to work with new VME modules (registers,
digitizers, etc.). In addition, we can plug into our DAQ crates of other standards (Fastbus, VXI),
which gives us the possibility to use, for example, detectors and electronics for registering
discrete gamma rays. Since the data files from the DAQ can easily be converted to ROOT format,
one can use this framework to analyze the experimental data. Thus we have obtained a working
DAQ that meets the modern experimental requirements in heavy ion physics.



Fig. 5. Typical identification plot (energy loss in a 68 µm silicon detector vs. TOF) when the
ACCULINNA facility was tuned for the maximal yield of ¹⁸Ne nuclei (the contamination of
unwanted nuclei in the focal plane F4 was ~75%)

Conclusion
We have studied and applied a new data acquisition system. The existing software
(Go4 v.4.4.3) was upgraded to a 64-bit platform and several new user functions were
developed (Mesytec MADC-32, CAEN V775NC). The work performed allows us to conduct
world-level experiments that were previously unavailable to us.

Acknowledgements
The authors are grateful to GSI colleagues N. Kurz, S. Linev and H. Simon for successful
consultations and FLNR director Prof. S.N. Dmitriev for his overall support of this work. The
authors acknowledge the financial support from the RFBR, Grant No. 11-02-00657-, and from
the JINR-BMBF program Superheavies and Exotics.

References
[1] A.M. Rodin et al. Nucl. Instrum. Methods B204, 2003, p. 114.
[2] A.S. Fomichev et al. JINR Commun. E13-2008-168, Dubna 2008.
[3] A.S. Fomichev et al. Acta Phys. Pol. B, Vol. 41 (2010), No 2.
[4] http://www-win.gsi.de/daq/
[5] Int. J. Mod. Phys. E, Vol. 20, No. 6, 2011, pp. 14911508.

Ukrainian Grid Infrastructure. Current state

S. Svistunov
Bogolyubov Institute of Theoretical Physics, National Academy of Sciences of Ukraine

This report presents the current state of the hierarchical organizational model of the Ukrainian
grid infrastructure. The main topics are: the three-level organizational management of computer resources and grid
services; information on the current state of cluster resources, high-speed fiber optic networks and
middleware; the concept of the State scientific-technical program of implementation and usage of grid
technology for 2010-2013, accepted by the Cabinet of Ministers of Ukraine in 2009; the main steps of the State
program realization and the main results of the State program implementation in 2010; the most interesting results
of solving scientific tasks using grid technology; and information on the integration of Ukrainian institutes and
universities into international grid projects.

1. Introduction
Today supercomputer technology is considered the most important factor for the
competitiveness of the economy. Therefore, the advanced countries are moving to a new,
more progressive infrastructure: a grid infrastructure with powerful supercomputer centers
connected by ultrafast communication channels.
The basis of the grid infrastructure in Ukraine has been built under two programs,
"Implementation of grid technologies and the creation of clusters in the National Academy of
Sciences of Ukraine" and "Information and Communication Technologies in Education and
Science", whose main performers were the National Academy of Sciences of Ukraine and the
Ministry of Education and Science of Ukraine [1], [2].
The Ukrainian National Grid (UNG) is a grid infrastructure which shares the computer
resources of the institutes of the National Academy of Sciences and of the universities. The principal
task of the UNG is to develop distributed computing and grid technologies to advance the
computations of fundamental and applied science. Besides, the UNG has to ensure the
participation of Ukrainian scientists in major international grid projects.
Currently the grid infrastructure shares the resources of 30 institutes and universities
operating under the ARC middleware, and three clusters included in the EGI structure under the
gLite middleware. This is not really much, but currently it is enough to support the research
activity of Ukrainian institutions.
The Ukrainian grid infrastructure is a geographically distributed computing complex which
currently provides solutions of complex scientific problems in different application areas. In
essence these are quite diverse tasks; here are a few examples:
LHC (CERN) experimental data processing, analysis and comparison with
theoretical results and phenomenological models, aiming at the full-scale participation of
the Ukrainian institutes in the ALICE and CMS experiments,
Dynamical computation of the evolution of star concentrations in the Galaxy's
external field. Hydrodynamic modeling of the collision and fragmentation of
molecular clouds. Analysis of N-body algorithms and parallel computing on the
GRAPE clusters,
Theoretical analysis, observation and processing of primary, roentgen and gamma
radiation data which are obtained from the satellite telescopes INTEGRAL, SWIFT
and others,
Computing of thermodynamic characteristics, infrared and electron spectra of sputter
DNA fragments. Study of bionanohybrid system structures composed by DNA and
RNA of different sequence,

Molecular dynamic computing of Fts-Z-protein systems with the low molecular
associations,
Computer simulation of the spatial structure and molecular dynamics of cytokine-
tyrosine-RNA synthetase.

Ukrainian institutes do not have enough financial resources for the development of grid
technologies. The State scientific-technical program of implementation and usage of grid
technology for 2010-2013, prepared by the Bogolyubov Institute for Theoretical Physics of NASU, was
presented to the Cabinet of Ministers of Ukraine and was accepted at the end of 2009.
Currently, the financing of grid technologies is provided by the State Budget of Ukraine.
2. State scientific-technical program
The Ukrainian National Grid is the targeted state scientific and technological project
(program) for the development and implementation of grid technologies for 2009-2013,
adopted by the Cabinet of Ministers of Ukraine in 2009. The goal of the program is the creation of a
national grid infrastructure and the wide implementation of grid technologies into all
spheres of social and economic life in Ukraine.
The project goal is to build a national grid infrastructure and to introduce grid
technologies in all areas of scientific, social and economic activities in Ukraine, as well as to
train specialists in grid technologies.
The objectives of the project are:
Introduction and application of grid-technologies in scientific research,
Creation of conditions for implementation of grid-technologies in economy, industry,
financial and social spheres,
Creation of multilevel interdepartmental grid-infrastructure with elements of
centralized control that takes into account the peculiarities of grid-technologies usage
in various fields,
Creation of specialist training system to work with grid-technologies.
The following management bodies are created for project control:
Interdepartmental Coordinating Council, which defines the general principles of
development, grid infrastructure program and operational plans,
Project Coordination Committee, which is the executive body and has the authority to
represent the national grid-infrastructure at the national and international levels,
Basic Coordination Centre, which is responsible for operation of the national grid-
infrastructure.
The main project activities are:
Creation of new and update of existent clusters. Raising the bandwidth of internet
channels,
Middleware and technical support,
Security of Grid Environment,
Implementation of Grid technologies and Grid applications in scientific research, in
economics, industry, financial activity,
Implementation of Grid technologies and Grid applications in medicine,
Development for purpose of storing, processing and open access to scientific and
educational information resources (data bases, archives, electronic libraries) by using
Grid applications,
Organizational and methodical providing of specialists training for them to work with
Grid technologies.

The total financing for four years should amount to 30.0 million. Two-thirds of this sum is
planned to be spent during the first two years, mainly on the creation of new and the upgrade of
existing clusters and on raising the bandwidth of internet channels. The second direction of work
according to the financing plan is related to the usage of grid technologies. The main performers of the
project are the National Academy of Sciences, the Ministry of Education and Science and the
Ministry of Health. Note that the Bogolyubov Institute for Theoretical Physics NAS of Ukraine
is the leading organization for the implementation of the state program.
The program has been in progress for two years already. In 2010 only 0.616 million
was allocated instead of the planned 11.0 million. Nevertheless, in 2010, 29 projects in various
areas of the program related to grid technology usage in scientific research were
implemented. In 2011, the amount of program funding totaled 0.87 million instead of the
planned 9.0 million, which allowed 43 scientific projects to be started. As can be seen, this is far
from the amount planned. Such a level of investment cannot cover all the tasks of the
State program, but the implementation of grid technology is one of the five priority
scientific research areas in Ukraine, so it should be financed, even at a reduced volume.
Additional information about the state program implementation can be found on the
site http://grid.nas.gov.ua.
3. Grid-infrastructure
It is known that a grid system is based on three principal elements: computer resources
(clusters), high-speed and reliable access of the resources to the Internet, and middleware which
unites these resources into one computer system.
Though the middleware was available already at the beginning of the grid infrastructure
creation, the clusters and the fiber-optic network of NASU were built simultaneously
with the development of the grid facilities.
The well-known Beowulf concept (www.beowulf.org) was selected as the conceptual model for
computer cluster construction and adopted for realization in the UNG. This concept is based on
using servers with standard PC architecture, distributed main storage and Gigabit Ethernet
technology unifying the computer system. Thus, all grid clusters in the UNG are built with
x86 and x86_64 architecture, two- or four-processor servers with 1-4 GB of main memory
and 36-500 GB HDDs. 1 Gb/s switches are used for inter-server exchange, and
InfiniBand is used only in some clusters.
At present the UNG shares the resources of 30 institutes and universities (more than
2700 CPUs and 200 TB of disk storage).
It should be emphasized that the space on the hard disks of the computing nodes is used for
the operating system (loaded from the local disks), program packages and temporary files, but it is
inaccessible for user file storage. Each cluster has its own disk array to store programs, user data
and information of common use. A freely distributed Linux operating system of various
flavors (Scientific Linux 2.6.9, Fedora 2.6, CentOS-4.6) is installed on the clusters, and the
task management system OpenPBS is used to start tasks and manage cluster utilization.
To enlarge the number of grid users, the idea of so-called grid platforms has been implemented;
its core is to install grid clusters with a minimal configuration. Such a cluster
includes the control server with installed middleware, two worker nodes and network
equipment. Under conditions of permanently limited funding, such grid platforms have
given access to the grid to specialists of institutes without an operable cluster. When
financing becomes available, any mini-cluster can easily be extended to a full-scale cluster. This
strategy makes it possible to train system administrators and users to work with the grid.
A high-speed and reliable access channel to the Internet is one of the necessary
conditions for building the grid infrastructure. The fiber optic channels in the UNG are
owned by two providers: UARNet (which works mainly with academic institutes) and URAN
(which works mainly with educational structures).
At present, UARNet is one of the biggest Internet providers in Ukraine, with its own
data transmission network and external channels to the global Internet. The total capacity of the
UARNet external channels amounts to over 100 Gb/s. Access to non-Ukrainian Internet
resources is provided via the Tier-1 providers Level3, Cogent and Global Crossing and the Russian
provider ReTN.net. UARNet is a member of the European Internet traffic exchanges DE-CIX,
AMS-IX and PL-IX. UARNet has sites (POPs) in all regional centers of Ukraine as well as
in Frankfurt (Germany) and Warsaw (Poland). In 2009 a cable from Lvov to the Polish border
was built and network equipment was installed for connection to the PIONIER network. In 2009
Ukraine was connected to GEANT.
All sites are interconnected in a ring topology by multiple 10 Gb/s data channels.
UARNet is also connected to the Ukrainian Internet traffic exchange network UA-IX
through four 10 Gb/s data channels. The KIPT cluster (Kharkov), which takes part in the CMS
experiment and has the status of a CERN Tier-2 site, is connected with a guaranteed capacity of
300 Mbit/s; the BITP cluster (Kiev), which takes part in the ALICE experiment at CERN, with a
guaranteed capacity of 1 Gbit/s.
The main efforts this year were aimed at the harmonization of Ukraine's grid
infrastructure with the requirements of EGI. Following the WLCG scheme, the infrastructure
of the UNG has been built as a three-level system with respect to organization and management:
First level: the Basic Coordinating Centre (BCC), which is responsible for the core services
and controls the UNG,
Second level: Regional Operating Centres, which coordinate the activity of the grid
sites in the regions,
Third level: grid sites (institutes) or minimal grid network access platforms, which
belong to some virtual organization (VO). A VO temporarily joins institutes (not
necessarily from the same region) with common scientific interests to solve a problem.
Basic Coordinating Centre is a non-structural subdivision of the Bogolyubov Institute
for Theoretical Physics of the NAS of Ukraine, which is the basic organization for the
implementation of the project. BCC provides management and coordination of works on
supporting the operation of the national grid-infrastructure and resource centres (grid-sites) to
provide grid-services (high-performance distributed computing, access to distributed
database, access to software) for users.
BCC on technological and operational level coordinates the work of grid-sites and
entire UNG grid-infrastructure according to the requirements of European and international
grid-infrastructure.
BCC on technological and operational level serves as the National Operations Centre
(Resource infrastructure Provider) of UNG grid-resources in relation with the International
grid-communities.
BCC has the right to conclude international agreements in the field of cooperation at
the operational level, which shall become valid after ratification by the Program Coordinating
Committee.
The Basic Coordination Centre includes:
Centre for monitoring of grid-infrastructure and registration of grid-sites,
Centre for registration of virtual organizations and members of virtual
organizations,
Certification authority,
International relations group,
Scientific and analysis group,

Technical assistance and middleware maintenance group,
User support group,
Study and training centre.
The principal BCC teams and their functions are the following:
International relations group. The key task of this group is creating the conditions
for UNG participants for cooperation with the international organizations and the international
grid collaboration.
Scientific and analysis group. The task of this group is to analyze the applicability of
grid technologies in the different fields, analysis of perspective research trends of the grid
technologies and their realization, expert evaluation of new propositions and projects for
development and the grid technology applications.
Technical assistance and middleware maintenance group. The group assists
administrators of computer clusters in installation of system-wide software and providing
assistance in computer security topics. The team coordinator maintains the permanent contact
with the system administrators and security administrators of each local grid site. Software
support team assists administrators of grid site on installation of middleware and
maintenance. Moreover, the team experts are responsible for task analysis of the whole grid
system, new middleware installation, and compatibility of software with middleware.
User support group. The main task of this group is to support the GGUS system,
monitoring the questions of grid users. Usually this is a web portal which answers
practical questions: how to get involved in grid activities, how to obtain a grid certificate,
how to join a virtual organization. The web site contains information about the structure
of the national grid, references to the grid activity documentation and to international
grid projects and virtual organizations, and announcements of grid seminars and conferences.
The study and training centre is in charge of organizing the training process on the
theoretical and practical implementation of grid technology. Individual education
programs for system administrators and grid users are to be created.
The Coordinating Committee of the Ukrainian State Program has accepted and
approved the basic documents defining the operation of the grid sites in the UNG. They are:
- Ukrainian National Grid. Operation architecture,
- Agreement for the use of grid resources in the UNG,
- Procedure of registration of grid sites in the UNG,
- Procedure of registration of virtual organizations in the UNG,
- Basic Coordination Centre. Operation structure.
All these documents were developed according to the EGI requirements.

Negotiations on a Memorandum of Understanding between EGI.eu and the BCC are
under way; the Resource Infrastructure Provider MoU document is in preparation. The purpose of this
Memorandum of Understanding is to define a framework of collaboration between EGI.eu
and the BCC for access of Ukraine to the operational level of EGI. Additional information about
this activity can be found on the site http://infrastructure.kiev.ua.

4. Grid-applications
I would like to add a few words on the applications which use the grid
infrastructure. This is research performed in Ukrainian institutes which requires a large
amount of computation.

Due to the efforts of NASU, several well-known and widely used scientific software
packages were obtained and installed on the UNG resources (Gaussian, Turbomole,
FlexX, Wolfram Research gridMathematica 2.1, Amber 9, the Molpro Quantum Chemistry
Package, Gromacs). The main problem is that all this software has only a command-line user
interface, which restricts its usage by scientists who are not familiar with the Linux
operating system and its console interface. Things are even more complicated when
software on remote resources is used: that can be done only through the grid, which expects the
user to know grid principles and the grid middleware interface.
A possible solution to this problem is the development of a web-based science gateway [3].
A web-based science gateway is a solution that integrates a group of scientific applications
and data collections into a portal, so that scientists can easily access the resources of the grid
infrastructure to submit their tasks and manage their data. The main part of a science gateway is
the grid portal, which gives the grid users the necessary tools, with a simple and friendly
interface, for access to grid resources.
The SDGrid (System Development by Grid) portal was developed at the Educational
Scientific Complex 'Institute for Applied System Analysis' (ESC IASA) of the National
Technical University of Ukraine 'Kyiv Polytechnic Institute'. The portal is built on the basis of
CMS Gridsphere 2.1.5 and currently works with Globus Toolkit 4.0.7 and NorduGrid 0.6.1. The portal
contains the following portlets: GridPortlets, which provides user authentication, submission of
tasks, work with FTP and viewing of the task status; Gpir, which gives users information about
cluster loading; and Queue prediction, for task scheduling.
The BITP portal was developed at the Bogolyubov Institute for Theoretical Physics NAS
of Ukraine. The portal is built on the basis of CMS Gridsphere 3.1 and Vine Toolkit 1.1.1 and is
used as a test platform for the development of grid applications. The first part of the portal is
intended for the development of web applications giving access to engineering packages
installed on the BITP cluster. The second part is intended for the development of scripts for
grid usage. In collaboration with the Institute of Cell Biology and Genetic Engineering of NAS of
Ukraine, an interface and scripts for the software package GROMACS were developed. They work
both on the local cluster and in a mode using grid resources. A calculation using Gromacs
consists of several consecutive steps: the initial steps prepare the files for the calculation,
and at the last stage these files are used for the direct calculation of the molecular dynamics.
The developed portal allows the initial actions to be carried out on the local cluster and the grid
infrastructure to be used for the simulations. The second part of the BITP portal uses the Vine
Toolkit 1.1.1 framework for access to grid resources. At the moment the portal works with the
gLite middleware.
The main goal of the creation of the MolDynGrid virtual laboratory is to develop an
effective infrastructure for calculations of the molecular dynamics of protein complexes with
low molecular compounds. The grid portal (http://moldyngrid.org) was developed by specialists
of the Institute of Molecular Biology and Genetics NAS of Ukraine and the Computer Center of
the Taras Shevchenko National University of Kiev. This portal allows reliable submission of
tasks for execution in the grid and saving of the obtained results through a user-friendly
interface. For the portal development such tools as POSIX shell, PHP, JavaScript and a MySQL
database were used.
Another portal was developed by specialists of the Glushkov Institute of Cybernetics NAS of
Ukraine, the Verkin Institute for Low Temperature Physics and Engineering NAS of Ukraine and
the MELKOM company. Based on the Supercomputer Management System SCMS 4.0, a web
portal for grid clusters under the ARC middleware was developed. Along with standard features
of cluster management, the portal provides the full cycle of grid resource usage, from submitting
tasks to receiving the results. A demo version of the portal is available at devel.melkon.com.ua.

One of the main problems of the Ukrainian grid community is the lack of specialists who
know and are able to use grid technology for scientific research. The State program proposes to
build a full system of training for grid users, starting with educational courses at institutes and
followed by training of the grid users of academic institutes. In 2011, on the initiative of the ESC
IASA, a new specialty "Systems engineering" was introduced for the purpose of training
specialists in the field of distributed intelligent environments, in particular grid technologies
in science and education. For its implementation, the master's education program includes the
course "Distributed Computing and Grid Technology", which summarizes the current
understanding of grid technology and the problems which occur in the process of its design
and implementation. The Kyiv Polytechnic Institute has organized a computer class for grid
administrator training and the first advanced training courses.

Conclusion
Despite all the difficulties and problems in the development of grid technologies in Ukraine, the
background for the widest application of grid technologies has been provided. There is good
reason to believe that the grid will exist and operate in Ukraine. The collaboration with the
international grid community is intensifying, and the Ukrainian National Grid will be built with
joint efforts and will fit into the world's grid infrastructure.

References
[1] E. Martynov, G. Zinovjev, S. Svistunov. Academic segment of Ukrainian Grid
infrastructure. System Research and Information Technologies, N. 3, 2009, pp. 31-42.
[2] E. Martynov. Ukrainian Academic Grid: State and Prospects. Organization and
Development of Digital Lexical Resources. Proceedings of MONDILEX Second Open
Workshop Kyiv, Ukraine, 2-4 February, 2009, pp. 9-17.
[3] O. Romaniuk, D. Karpenko, O. Marchenko, S. Svistunov. Complex Science Gateway: use
of different grid infrastructures to perform scientific applications. Proceedings of the 4-th
International Conference ASCN-2009 (Advanced Computer Systems and Networks: Design
and Application), November 9-11, Lviv, Ukraine, 2009, pp. 81-82.
GRID-MAS conception: the applications in bioinformatics and
telemedicine

A. Tomskova, R. Davronov
Institute of Mathematics & ICT, Academy of Sciences, Uzbekistan

Keywords: Multi-Agent Systems, decision-making agent, clusterization, gene expression, diagnostics

1. Introduction
The GRID and MAS (Multi-Agent Systems) communities believe in the potential of
GRID and MAS to enhance each other, because they have developed significant
complementarities. Thus, both communities agree on the "what to do": promote an integration
of GRID and MAS models. However, even if the "why to do it" has been stated and assessed,
the "how to do it" is still a research problem.
The adoption of agent technologies constitutes an emerging area in bioinformatics.
The avalanche of data that has been generated, particularly in biological sequences and more
recently also in transcriptional and structural data, interactions and genetics, led to the early
adoption of tools for unsupervised automated analysis of biological data during the mid-
1990s [1,2]. Computational analysis of such data has become increasingly important;
however, some tools require training and improvement. The use of agents in bioinformatics
suggests the design of agent-based systems, tools and languages for the above-mentioned
problems.
The kinds of resources available in the bioinformatics domain, with numerous
databases and analysis tools independently administered in geographically distinct locations,
lend themselves almost ideally to the adoption of a multi-agent approach. There are likely to
be large numbers of interactions between entities for various purposes, and the need for
automation is substantial and pressing. The myGrid [3] e-Science project
(http://www.mygrid.org.uk) may also merit the application of the agent paradigm [4].
Another project is the Italian Grid.it project (http://www.grid.it), which aims to provide platforms for
high-performance computational grids oriented at scalable virtual organizations. Promising
experimental studies on the integration of grid and agent technology are also being carried
out in the framework of a new project, LITBIO (Interactive Laboratory of Bioinformatics
Technologies; http://www.litbio.org) for genome analysis, as demonstrated by the
GeneWeaver project in the UK [5] and by work using DECAF in the US [6,7].
The agent paradigm in telemedicine concerns the analysis and design steps of a
system project; this is achieved by means of agent development tools or agent frameworks,
where the system designer's work is naturally driven by the agent concept, exactly as object-
oriented tools help in analyzing, designing and implementing object-oriented systems. The
agent must also have some intelligent capabilities, e.g. clusterization, as a basic practical tool in
computer diagnostics and prediction of disease outcomes.
This paper addresses the problem of designing agents for decision making by means of
a clusterization-oriented approach. We present a comparative analysis of two methods of
clusterization applied to problems of gene expression and on-line diagnostics of acute
myocardial infarction (AMI), since it is well known that clusterization is one of the popular
tools for understanding the relationships among various conditions and the features of various
objects.
In [8] a new clustering method was proposed, applicable to either weighted or
unweighted graphs, in which each cluster consists of a highly dense core region surrounded by
a region of lower density. The nodes belonging to the dense cores of the clusters are then divided into
groups, each of which is the representative of one cluster. These groups are finally expanded
into complete clusters covering all the nodes of the graph.
The support vector machine (SVM) method [9,10,2,3] is now one of the most popular
classification tools in informatics. The main idea of SVM is that, if the points of the two
classes cannot be separated by a hyperplane in the original space, these points may be
transformed into a higher-dimensional space so that they can be separated by a hyperplane there. In
SVM, a kernel is introduced so that computing the separating hyperplane becomes very fast.
The saddle point search algorithm requires finding projections onto the intersection of a cube
and a plane.
Our goal was to compare these two approaches, improving them by some modifications
described below, and to test both on diagnostics and prediction problems.

2. Coring clusterization and SVM problems
Let us consider an undirected proximity graph G = (V, E, W), where V is a set of
nodes, E is a set of edges, and W is a matrix with entry $w_{ij}$ being the weight of the edge between
nodes i and j. In proximity graphs, V represents a set of data objects and $w_{ij} \ge 0$ represents the
similarity of the objects i and j; a higher value of $w_{ij}$ reflects a higher degree of similarity.
Thus, applying a graph clustering method to a proximity graph will produce a set of subgraphs,
such that each subgraph corresponds to a group of similar objects which are dissimilar to the
objects of the groups corresponding to other subgraphs.
We assume that every cluster of the input graph has a region of high density, called a cluster
core, surrounded by sparser regions (non-core). The nodes in cluster cores are denoted as
core nodes, the set of core nodes as the core set, and the subgraph consisting of core nodes
as the core graph.
For each node i of $H \subseteq V$, the local density at i is defined as
$$d(i, H) = \frac{1}{|H|}\sum_{j \in H} w_{ij}.$$
The node with the minimum local density in H is referred to as the weakest node of H:
$\arg\min_{i \in H} d(i, H)$. We define the minimum density of H as
$$D(H) = \min_{i \in H} d(i, H)$$
to measure the local density of the weakest node of H. By analyzing the variation of the minimum
density value D, we identify the core nodes located in the dense cores of the clusters. The method
clusters a proximity graph in several steps. Our contribution to this method is to replace the function
$d(i, H) = \frac{1}{|H|}\sum_{j \in H} w_{ij}$ by
$$d(i, H) = \frac{\max_{j \in H} w_{ij}}{|H|}.$$
This is correct because the monotonicity property of the function remains valid.
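As an illustration only, the modified local density and the search for the weakest node could be coded as follows (a sketch of the formulas above, not the authors' implementation):

#include <algorithm>
#include <cstddef>
#include <vector>

// W is the full symmetric similarity matrix; H is the current node subset (non-empty).
// Modified local density: d(i, H) = (max_{j in H} w_ij) / |H|.
double localDensity(const std::vector<std::vector<double>>& W,
                    const std::vector<std::size_t>& H, std::size_t i)
{
    double wmax = 0.0;
    for (std::size_t j : H)
        if (j != i) wmax = std::max(wmax, W[i][j]);
    return wmax / static_cast<double>(H.size());
}

// Weakest node of H: the node with the minimum local density; its density
// is the minimum density D(H) used to detect the cluster cores.
std::size_t weakestNode(const std::vector<std::vector<double>>& W,
                        const std::vector<std::size_t>& H, double& D)
{
    std::size_t argmin = H.front();
    D = localDensity(W, H, argmin);
    for (std::size_t i : H) {
        const double d = localDensity(W, H, i);
        if (d < D) { D = d; argmin = i; }
    }
    return argmin;
}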
We recall the standard SVM problem of learning classification. We denote by $\langle x_1, x_2 \rangle$ the
inner product of the vectors $x_1$ and $x_2$. Suppose that we have a learning sample
$$\{x_i, y_i\},\quad x_i \in \mathbb{R}^n,\quad y_i \in \{-1; +1\},\quad i = 1,\dots,l.$$
The standard formulation of the SVM problem is
$$\min_{\omega,\,b,\,\xi}\left(\frac{1}{2}\|\omega\|^2 + C\sum_{i=1}^{l}\xi_i\right),$$
$$y_i\left(\langle \omega, x_i \rangle + b\right) \ge 1 - \xi_i,\qquad \xi_i \ge 0,\quad C > 0,\quad i = 1,\dots,l.$$
The solution $\omega^*, b^*, \xi^*$ gives the optimal hyperplane $\langle \omega^*, x \rangle + b^* = 0$. Our contribution to the SVM
method is that we first calculated the significance of all variables based on the Kullback-
Leibler divergence [11] and used this list in the simulation runs.
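For illustration, the feature weighting step could be sketched as follows; the histogram-based form of the Kullback-Leibler significance is an assumption here (the exact definition follows [11]), and the code is not the authors' implementation:

#include <cmath>
#include <cstddef>
#include <vector>

using Matrix = std::vector<std::vector<double>>;   // rows = samples, cols = features

// Weight every feature (column) by a precomputed significance value and
// return the transformed matrix that is then passed to a standard SVM.
Matrix weightFeatures(const Matrix& X, const std::vector<double>& significance)
{
    Matrix Y = X;
    for (auto& row : Y)
        for (std::size_t j = 0; j < row.size(); ++j)
            row[j] *= significance[j];
    return Y;
}

// Discrete KL divergence between two normalized histograms p and q,
// D(p||q) = sum_k p_k * log(p_k / q_k); a small eps avoids log(0).
// One possible significance of feature j is the divergence between its
// class-conditional histograms (an assumption for this sketch).
double klDivergence(const std::vector<double>& p, const std::vector<double>& q,
                    double eps = 1e-12)
{
    double d = 0.0;
    for (std::size_t k = 0; k < p.size(); ++k)
        if (p[k] > 0.0) d += p[k] * std::log(p[k] / (q[k] + eps));
    return d;
}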

3. Computing experiments
Having coded the methods described above, we used the test problem from [8] in order to
compare the results of coring clusterization and of the modified SVM. Clustering
applications to gene expression analysis were already demonstrated in [12]. The problem of
tissue clustering aims to find connections between gene expressions and the statuses of tissues,
that is, whether the status of a tissue (cancerous or not) can be predicted from its gene expressions. The
dataset used in the experiment is available at http://microarray.princeton.edu/oncology/affydata/index.html.
These data contain 62 samples, including 40 tumor and 22 normal colon tissues. Each sample
consists of a vector of 2000 gene expressions. We set aside the sample labels
(tumor/normal) and cluster the samples based on the similarities between their gene
expressions. Ideally, the task is to partition the sample set into two clusters such that one
contains only tumor tissues and the other contains only normal tissues.
The next task was the problem of acute myocardial infarction (AMI) outcome prediction [14].
The problem was formulated as forecasting the outcomes of acute myocardial infarction (AMI)
on the basis of data from the initial stage of the disease. The total number of patients from
three different clinics was 1224. The number of features used was 39 parameters, of which
the 15 most informative features were selected by the Kullback-Leibler method.

4. Results and discussion
A. Cancer diagnostics results
A1. Coring clusterization
The proximity graph constructed from the gene expression vectors is a complete graph
of 62 nodes. The edge weights reflecting the pairwise similarities of the samples are computed
from the Pearson correlation coefficient. The coring method identifies 12 core nodes. The
dendrogram of these core nodes exposes two well-separated groups, one containing 10 nodes
and the other 2 nodes. Expanding these cluster cores yields two clusters. One has
40 samples consisting of 37 tumor and 3 normal tissues; the other contains 22 samples
consisting of 3 tumor and 19 normal tissues.
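For illustration, the Pearson-correlation edge weights mentioned at the start of this paragraph could be computed as sketched below; the mapping of the correlation coefficient to a non-negative similarity is one common choice and an assumption, not taken from the paper:

#include <cmath>
#include <cstddef>
#include <vector>

// Pearson correlation coefficient of two gene-expression vectors x and y (same length).
double pearson(const std::vector<double>& x, const std::vector<double>& y)
{
    const std::size_t n = x.size();
    double mx = 0.0, my = 0.0;
    for (std::size_t k = 0; k < n; ++k) { mx += x[k]; my += y[k]; }
    mx /= n; my /= n;

    double sxy = 0.0, sxx = 0.0, syy = 0.0;
    for (std::size_t k = 0; k < n; ++k) {
        sxy += (x[k] - mx) * (y[k] - my);
        sxx += (x[k] - mx) * (x[k] - mx);
        syy += (y[k] - my) * (y[k] - my);
    }
    return sxy / std::sqrt(sxx * syy);
}

// Example similarity weight in [0, 1]: w_ij = (1 + r) / 2 (one common choice).
double similarity(const std::vector<double>& xi, const std::vector<double>& xj)
{
    return 0.5 * (1.0 + pearson(xi, xj));
}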








Fig. 1. Comparison of clustering results by the coring method, [12] and [13]

Fig. 1 shows the comparison of the clustering results obtained by the coring method, [12] and [13]. The
result of [13] consists of 6 clusters, but joining clusters 1, 4 and 5 into one group of normal
tissues and clusters 2, 3 and 6 into another group of tumor tissues yields a clustering similar to the
result of [12]. Our exchange of the basic function d(i, H) of the method did not change the number
of errors (6 errors in total).




A2. SVM method
The matrix was centered at the expectation and normalized, and then processed with the
standard SVM method [3]. On the basis of the training set (32 samples) a margin hyperplane
was constructed. This computing experiment resulted in six errors, namely 2 samples from the first
class and 4 samples from the second class. The quality of the partition can therefore be estimated as
93.75% on the training sample and 86.67% on the test sample.
Using the Kullback-Leibler divergence formula we obtain a significance vector in the
space R^m, where m is the number of columns of the initial matrix. Further, the j-th matrix
column is multiplied by the j-th component of the significance vector. We then obtain a new
matrix with weighted variables, to which the standard SVM was applied again, i.e. we repeated
the computing experiment described above. The results, however, were different: only four
errors, 2 from the first class and 2 from the second. Accordingly, the accuracy of the partition
became 93.75% on the training set and 93.33% on the test samples.










Fig. 2. Comparison of clustering results by SVM standard and modified procedures

B. AMI prediction results
The dimension of the matrix is 1224 (objects) x 39 (features); the matrix was centered at
its expectation and scaled. The computing experiment was carried out on two sets of features:
the set of initial features (39) and the set of the most informative features (15), selected by the
Kullback-Leibler divergence formula.

B1. Coring clusterization
We first defined two classes of patients: with and without complications of
the disease.
It was found that the first class consists of 420 patients and the second one of 804
patients. The data were then processed by the coring clusterization algorithm. The results are
shown in the table below.

Number of objects          Quality of clusterization (% of accuracy: class 1 / class 2)
(class 1 / class 2)        39 features            15 features
420/804                    54 / 92                67.8 / 89.3

B2. SVM method
The procedure of clusterization based on the SVM method is the following. At first we
define the support vectors; in total their number is 771. Then a learning sample is chosen from
the set of support vectors, and the remaining support vectors form the test sample. Note
that the sizes of the learning and test samples are varied. The results of the SVM clusterization
are shown below.

Number of objects in       Quality of clusterization (% of accuracy)
learning sample            standard SVM, 39 features         standard SVM, 15 features
(class I / class II)       Learning sample  Test sample      Learning sample  Test sample
150/150                    64               52.87            63.44            52.87
200/200                    60               53.13            62               61.66
250/250                    56.20            61.17            60.75            62.71
300/300                    58.50            88.64            62.12            90.25
350/350                    59.71            94.87            64.32            98.56

It is evident that the accuracy on the learning sample is considerably worse than on the test
sample. Since the support vectors contain the entire information about the given matrix, one can
conclude that the support vectors are rather scattered, and therefore the separating plane is not
adequate. At the same time, good results are obtained on the test sample, because information
about the test sample is already embedded in the support vectors of the learning sample. The
table implies that the refinement of the clusterization is determined by the growth of the learning
sample volume and by the use of informative features. Both reasons are quite clear.
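The support-vector-based splitting procedure described above can be sketched as follows. The
data here are synthetic and only mimic the 1224 x 39 AMI matrix, and scikit-learn's SVC is used
as the SVM implementation; sample counts are illustrative.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.metrics import accuracy_score

# Synthetic stand-in for the AMI matrix: 1224 patients x 39 features, two outcome classes.
rng = np.random.default_rng(2)
X = rng.normal(size=(1224, 39))
y = (rng.random(1224) < 420 / 1224).astype(int)

# Step 1: fit an SVM on the whole matrix and keep only its support vectors.
sv_idx = SVC(kernel="linear").fit(X, y).support_
Xsv, ysv = X[sv_idx], y[sv_idx]
print("number of support vectors:", len(sv_idx))

# Step 2: draw learning samples of growing size from the support vectors;
# the remaining support vectors form the test sample.
for n_per_class in (150, 200, 250, 300):
    learn = np.concatenate([np.where(ysv == c)[0][:n_per_class] for c in (0, 1)])
    test = np.setdiff1d(np.arange(len(ysv)), learn)
    clf = SVC(kernel="linear").fit(Xsv[learn], ysv[learn])
    print(n_per_class, "per class: test accuracy = %.3f"
          % accuracy_score(ysv[test], clf.predict(Xsv[test])))
```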

Conclusion
We have tested the graph clustering method and the SVM method, in standard and
modified form, in two computing experiments. Experiments with proximity graphs built from
gene expressions have shown good clustering results. Indeed, the method is simple and fast,
but the choice of the values of the two proposed parameters for a good setting needs further
research. Core nodes can represent informative data objects and also make the method
robust to noise. The standard SVM method gives the same results as coring clusterization, but
after transformation of the initial feature space based on the Kullback-Leibler divergence, the
accuracy of the partition improved by 7.68%. Thus we can conclude that coring clusterization
gives more possibilities for interpretation and is more robust to noise, but SVM used in the
transformed space of the initial variables is more accurate.
Experiments on the AMI data confirmed that the set of 15 informative features (in both
methods) and the increase of the training and learning sample sizes (SVM method) are optimal
conditions for correct clusterization.

References
[1] T. Gaasterland and C. Sensen. Fully automated genome analysis that reflects user needs and
preferences - a detailed introduction to the magpie system architecture. Biochimie, 78:302-310, 1996.
[2] W. Fleischmann, S. Müller, A. Gateau, and R. Apweiler. A novel method for automatic
functional annotation of proteins. Bioinformatics, 15:228-233, 1999.
[3] R.D. Stevens, A.J. Robinson, and C.A. Goble. myGrid: personalised bioinformatics on the
information grid. Bioinformatics, 19(suppl. 1):i302-i304, 2003.
[4] L. Moreau, S. Miles, C. Goble et al. On the use of agents in a bioinformatics grid. In
NETTAB Agents in Bioinformatics, 2002.
[5] J.M. Bradshaw. An introduction to software agents. In J.M. Bradshaw, editor, Software
Agents, chapter 1, pp. 3-46. AAAI Press/The MIT Press, 1997.
[6] J.R. Graham, K. Decker, and M. Mersic. Decaf - a flexible multi agent system architecture.
Autonomous Agents and Multi-Agent Systems, 7(1-2):7-27, 2003.
[7] K. Decker, S. Khan, C. Schmidt et al.. Biomas: A Multi-Agent System For Genomic
Annotation. Int. J. Cooperative Inf. Systems, V. 11, pp. 265-292, 2002.
[8] Thang V. Le, Casimir A. Kulikowski, Ilya B. Muchnik. Coring Method for Clustering a
Graph. DIMACS Technical Report, 2008.
[9] V.N. Vapnik. The Nature of Statistical Learning Theory. 1995, New York
[10] B. Schoelkopf and A.J. Smola. Learning with Kernels: Support Vector Machines,
Regularization, Optimization, and Beyond. MIT Press, 2002.
[11] S. Kullback, R.A. Leibler. On Information and Sufficiency. Annals of Mathematical
Statistics 22 (1), pp. 79-86, 1951.
[12] U. Alon, N. Barkai, D. Notterman, K. Gish, S. Ybarra, D. Mack and A. Levine. Broad patterns of
gene expression revealed by clustering analysis of tumor and normal colon tissues probed by
oligonucleotide array. Proc. Natl. Acad. Sci. USA, 96(12):6745-6750, 1999.
[13] A. Ben-Dor, R. Shamir and Z. Yakhini. Clustering gene expression patterns. Journal of
Computational Biology, 1999.
[14] L.B. Shtein. Trial of forecasting in medicine based on computers. Leningrad University,
Leningrad, p. 145, 1987.

Techniques for parameters monitoring at Datacenter

M.R.C. Trusca, F. Farcas, C.G. Floare, S. Albert, I. Szabo

National Institute for Research and Development of Isotopic and Molecular Technology,
Cluj Napoca, Romania

A datacenter is a facility used to house computer systems and associated components
(telecommunications, storage systems, etc). The goal of the data center monitoring is to provide an IT view into
the data center facility to give an accurate real-time picture of the current state of the critical infrastructure. There
is a need to determine the source of performance problems as well as to tune the systems for a better operation.
The main tools for our Datacenter monitoring are the open-source Ganglia (http://ganglia.sourceforge.net) and
NAGIOS (http://www.nagios.org) packages. While Ganglia allows remote viewing of live or historical statistics
(such as CPU load averages or network utilization) for all machines that are being monitored, NAGIOS offers
complete monitoring and alerting for servers, computing nodes, switches, applications, and services.
Keywords: datacenter, Grid, cluster, Ganglia, Nagios.

Introduction
A datacenter is a facility used to house computer systems and associated components
(telecommunications, storage systems, etc). Today the Grid systems are applied in many
research areas, namely:
(i) Fundamental research in particle physics uses advanced methods in computing and
data analysis. Particle physics is the theory of the fundamental constituents of matter and the forces
by which they interact, and it asks some of the most basic questions about Nature. Some of these
questions have far-reaching implications for our understanding of the origin of the Universe [1].
In 2009 the Large Hadron Collider (LHC) [2], with the ALICE, ATLAS, CMS and LHCb
experiments, started taking data. The LHC collides protons at the highest energy (√s = 14 TeV) and
luminosity (L = 10^34 cm^-2 s^-1) among all accelerators; owing to these performances, high-
precision measurements become possible and the results may mean new physics. To fulfil these
requirements a high-performance distributed computing system is needed.
(ii) Computational simulations based on the atomic description of biological
molecules have resulted in significant advances in the comprehension of biological
processes. A molecular system has a great number of conformations due to the number of
degrees of freedom in the rotations around chemical bonds, leading to several local minima in
the molecular energy hyper-surface. It has been proposed that proteins, among the great
number of possible conformations, express their biological function when their structure is
close to the conformation of global minimum energy [3]. This type of research involves a
large amount of computing power and appears to be a very suitable application for grid
technology.
(iii) A sizable body of experimental data on charge transport in nanoscopic structures
has been accumulated. We face the birth of a whole new technological area: the molecule-
based and molecule-controlled electronic device research, often termed molecular
electronics (ME). The simplest molecular electronic device that can be imagined consists of
a molecule connected to two metallic nanoelectrodes. There is now a variety of approaches
to forming such nanojunctions (NJ) [4], which differ in the types of electrodes, the materials employed,
the way in which the molecule-electrode contacts are established and the number of molecules
contacted. Recently the realization of the first molecular memory was reported [5].
The computation of the current-voltage characteristics of a realistic nanodevice needs
about 6-10 GB of memory and one week of computing time, which brings us to the idea of using a
Grid environment with high processing power and MPI support.
In this contribution we present an overview of our Datacenter and the techniques that
we use to monitor its parameters.

NIRDIMT Datacenter
At our site we are hosting a High Performance Computation Cluster used for internal
computational needs and the RO-14-ITIM Grid site. The goal of the data center monitoring is
to provide an IT view into the data center facility to give an accurate real-time picture of the
current state of the critical infrastructure. There is a need to determine the source of
performance problems as well as to tune the systems for its better operation.



Fig. 1. Datacenter view - GRID and Cluster hardware

Datacenter overview (Fig. 1):
2 x Hewlett Packard Blade C7000 with 16 ProLiant BL280c G6 servers (2 Intel quad-core Xeon
X5570 @ 2.93 GHz, 16 GB RAM, 500 GB HDD), running the open-source TORQUE, MAUI,
GANGLIA and NAGIOS packages, configured from scratch on Scientific Linux 5.5 (Boron);
different Intel compilers, mathematical and MPI libraries;
different quantum chemistry codes such as AMBER, GROMACS, NAMD, LAMMPS,
CPMD, CP2K, Gaussian, GAMESS, MOLPRO, DFTB+, Siesta, VASP and Accelrys
Materials Studio.
We also host the RO-14-ITIM Grid site (http://grid.itim-cj.ro).

RO-14-ITIM grid site
RO-14-ITIM Grid Site is of the EGEE/WLCG type, running gLite 3.2 as middleware for
the public network computing elements and for the private worker nodes, on which the
operating system is Scientific Linux 5.5 x86-64.
The RO-14-ITIM Grid site is certified for production and is registered at the Grid
Operations Center (GOC).
The public network consists of the public-address systems: a CREAM computing element,
a user interface (UI), a site-BDII server, a storage element (SE) and a monitoring element
(APEL). The storage element has a 120 TB capacity.
The private network contains 60 dual-processor quad-core servers with 16 GB RAM,
comprising two HP Blade systems and one IBM Blade system.
The wide area connection works at 10 Gbps. The speed between the Grid elements and
the network router is 10 Gbps, 20 Gbps between the public network, and 40 Gbps inside the
private network and with the public Grid network. The INCDTIM local and wide area network
now operates at 1 Gbps.
System management and monitoring tools
Fig. 2 presents a datacenter that comprises a Grid site and a MPI Cluster dedicated to
parallel computing solutions.
The everyday activities cover Grid computing, the MPI cluster and networking.
Networking sustains all of the Institute's activities related to the Internet connection, e-mail,
website and databases. Grid computing concerns the active site named RO-14-ITIM, which
dedicates 90% of its time to processing jobs from ATLAS. The other 10% go into testing the site
for reliable functionality and storing or processing data upon special requests from inside
or outside the Institute through the Virtual Organization voitim. In parallel with the Grid site
there is an MPI cluster aimed at numerical simulations in computational physics with direct
applications in biophysics and nanostructures, in which the Institute is involved.
The Grid site and the MPI cluster are installed with Scientific Linux 5.5. For future
compatibility between them we installed on the Grid site gLite 3.2, based upon Torque and
Maui for job management, as well as the latest version of Torque and Maui on the MPI cluster [6].
The 3 Blade systems give us the advantage of monitoring and operating them remotely,
so our IT team does not have to enter the datacenter when problems occur. In
Fig. 2 we show the Blade system interface, on which one is able to determine through an
advanced system of colors which system is faulty. We are also able to install the systems
remotely. We manage and monitor the whole datacenter through the advanced APC system
installed in it.
The monitored datacenter is shown in Fig. 3.


Fig. 2. Blade management module

Fig. 3. Datacenter monitoring tools

The main tools for our Datacenter monitoring are the open-source Ganglia (Fig. 4)
(http://ganglia.sourceforge.net) and NAGIOS (Fig. 5) (http://www.nagios.org) packages,
which we installed and configured. While Ganglia allows remote viewing of the live or historical
statistics (such as CPU load averages or network utilization) for all machines that are being
monitored, NAGIOS offers complete monitoring and alerting for servers, computing nodes,
switches, applications, and services. For more than one year we have tested them and principally
used them to monitor our High Performance Computing Cluster. While Ganglia works flawlessly,
for the NAGIOS monitoring system we still have to perform various optimizations and to
implement and configure more alerts.
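NAGIOS alerting is driven by small check plugins that report their result through the exit code
(0 OK, 1 WARNING, 2 CRITICAL, 3 UNKNOWN). The sketch below is a hypothetical minimal
load-average check of the kind that could be added when configuring further alerts; the
thresholds are purely illustrative.

```python
#!/usr/bin/env python
"""Minimal Nagios-style check plugin: 1-minute load average against thresholds.

Exit codes follow the Nagios plugin convention: 0 OK, 1 WARNING, 2 CRITICAL, 3 UNKNOWN.
"""
import os
import sys

WARN, CRIT = 8.0, 16.0   # illustrative thresholds for a 2 x quad-core node

def main():
    try:
        load1, _, _ = os.getloadavg()
    except OSError:
        print("LOAD UNKNOWN - cannot read load average")
        return 3
    status, code = "OK", 0
    if load1 >= CRIT:
        status, code = "CRITICAL", 2
    elif load1 >= WARN:
        status, code = "WARNING", 1
    # Nagios parses the text after '|' as performance data.
    print(f"LOAD {status} - load1={load1:.2f} | load1={load1:.2f};{WARN};{CRIT}")
    return code

if __name__ == "__main__":
    sys.exit(main())
```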



Fig. 4. Ganglia Cluster monitoring system http://hpc.itim-cj.ro/ganglia




Fig. 5. NAGIOS IT Infrastructure Monitoring

Conclusions
The state-of-the-art open-source monitoring systems Ganglia and NAGIOS have been
installed and configured. Ganglia allows remote viewing of the live or historical statistics
(such as CPU load averages or network utilization) for all machines that are being monitored,
while NAGIOS offers complete monitoring and alerting for servers, computing nodes, switches,
applications, and services.
Various optimizations and the implementation of new alerts in NAGIOS are still
required. We plan to extend the monitoring to the entire Datacenter infrastructure.

Acknowledgement
The financial support of the Romanian Research and
Development Agency through the EU12/2010, EU15/2010 and POS-CEE Ingrid/2009 projects
allowed us to acquire the three Blade systems and an MSA.

References

[1] ATLAS Collaboration. Exploring the Mystery of Matter. Papadakis Publisher, UK, 2008.
[2] LHC Homepage, http://cern.ch/lhc-new-homepage/
[3] S.P. Brown, S.W. Muchmore. J. Chem. Inf. Model., 46, 2006, p. 999.
[4] A. Salomon et al. Comparison of Electronic Transport Measurements on Organic
Molecules. Adv. Mater. 15(22), 2003, pp. 1881-1890.
[5] J.E. Green et al. A 160-kilobit molecular electronic memory patterned at
10^11 bits per square centimetre. Nature 445(7126), 2007, p. 414.
[6] White paper, TIA-942 Data Centre Standards Overview ADC Krone.
Solar panels as possible optical detectors for cosmic rays

L. Tsankov 1, G. Mitev 2, M. Mitev 3

1 University of Sofia, Bulgaria
2 Institute for Nuclear Research and Nuclear Energy, Bulgarian Academy of Sciences, Bulgaria
3 Technical University, Sofia, Bulgaria

Photovoltaic cells have relatively high sensitivity to visible light and are available as large area panels
at a reasonable price. Their potential use as air Cherenkov detectors for the extensive air showers
caused by high energy cosmic rays is very attractive.
In this paper we make an evaluation of different types of photovoltaic (PV) cells. Assemblies of several
cells are studied, both connected in series and in parallel, aiming for the increase of the sensitive area,
performance improvement etc. We propose a schematic for optimal separation of the fast component of the
detector system signal. The threshold sensitivity of the different configurations is estimated and their ability to
detect very high energy cosmic rays is discussed.
Introduction
In 1936 the Austrian physicist V.F. Hess (1883-1964) received the Nobel Prize in
Physics for the discovery of cosmic rays. Ever since, some of the largest and most expensive
research complexes ever built have been dedicated to the registration and measurement
of the parameters of primary and secondary cosmic radiation. For that purpose
many observations have been conducted with balloons and artificial Earth satellites.
Many underground and surface observatories (at sea level and high in the mountains), with
detector areas sometimes reaching 100 km², have been built [1, 2].
Most frequently, during observation of extensive air showers (EAS) at surface stations,
the muon component is registered using groups of organic plastic photo-scintillation
detectors (PSDs), whose output signals are passed to fast coincidence circuits [1, 3]. They
allow a very precise definition of the moment of the event occurrence but offer more limited
capabilities for defining the energy parameters of the registered event. More rarely, liquid (most
often water) Cherenkov detectors are used.
Another popular method is the detection of the Cherenkov radiation caused in the
atmosphere by the primary high-energy charged particles (at 30-60 km altitude), or by the
secondary products of the EAS. Photomultiplier tubes (PMTs), placed in the focus of an
optical system, are used as detectors.
The rapid evolution of photovoltaic (PV) cells in the last decade has made them
accessible at low prices. Their high efficiency and the possibility of constructing systems
with significant area allow for their use in air Cherenkov detectors [5, 6].
The purpose of this work is to evaluate the ability of PV-cell based detector
systems to distinguish between the short light pulses of the Cherenkov radiation and the slow
component arising from background light.
This imposes the need to look into new schematic solutions for the signal acquisition
and shaping and to evaluate their usability as components of air Cherenkov detectors of
EAS. It is necessary to define whether the sensor is capable of reacting to a short light flash (the
duration of the Cherenkov radiation is under 1 μs) and what is the minimum threshold for the
light impulse value (expressed as the number of photons in the interval) that can be registered
with the corresponding detector.


Review
1. Particularities and limitations
The use of PVs in such a nontrivial mode leads to the following characteristic
particularities and problems, which should be considered during the development of the
design of the signal acquisition and shaping circuit:
1. The detector's output signal comes from the charge generated in the detector
volume, not from the output voltage or current;
2. The mean output current arising from the background light (twilight, full moon,
urban area light etc.) can be some orders of magnitude higher than the signal level;
3. PVs have a significant capacitance (10-50 nF/cm²), exceeding manyfold the capacitance
of the semiconductor detectors used for ionizing radiation detection.
In order to achieve a large detector surface with detectors constructed from PV cells, two
approaches exist: parallel or series connection of the cells. Both methods are equivalent with
respect to the generated charge. When n elements are connected in parallel the
equivalent capacitance of the PV assembly is n times greater, while when connected in series it is
respectively n times smaller. Obviously, connecting the PV cells in series is
preferred for obtaining a larger signal from the same generated charge.
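The capacitance argument can be illustrated numerically: for the same generated charge Q the
output voltage is U = Q/C_eq, and the series connection divides the equivalent capacitance by n.
The cell capacitance in the sketch below is only an example taken from the 10-50 nF/cm² range
quoted above, and the charge value is arbitrary.

```python
# Series vs parallel connection of n identical PV cells: the same generated charge Q
# gives an output voltage U = Q / C_eq, so the equivalent capacitance decides the signal.
C_CELL = 50e-9 * 9.0   # example cell: 50 nF/cm^2 * 9 cm^2 (60x15 mm) = 450 nF
Q = 1e-12              # example generated charge: 1 pC

for n in (1, 36, 72):
    c_series, c_parallel = C_CELL / n, C_CELL * n
    print(f"n={n:2d}: series C={c_series*1e9:7.1f} nF, U={Q/c_series*1e6:7.2f} uV; "
          f"parallel C={c_parallel*1e9:9.1f} nF, U={Q/c_parallel*1e6:7.4f} uV")
```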

2. Signal acquisition with PV cells
In [7] it is shown that if the capacitance of the semiconductor detector does not change
over a wide range, signal acquisition with a transimpedance or a charge-sensitive amplifier
gives equivalent results. The presence of a significant offset current from the PVs makes a
galvanic connection to the amplifier impossible. In [5] the options offered by the introduction
of capacitive separation of the input, or the use of an isolation transformer, are reviewed.
Another possible solution is the compensation of the offset current. The PV cell output can
be presented as a sum of two components (Fig. 1): a slow one, due to the background light (I_BL),
and a second one, a fast changing short pulse, due to the Cherenkov light of the EAS (I_CL). Our
idea is to connect opposite the cell a current generator I_C, whose value is equal to the slow
component and adaptively follows it. For the short pulse the high output resistance of the current
generator I_C takes the role of a load, which guarantees the full collection of the charge generated
in the detector volume by the Cherenkov light.

Fig. 1. Offset current compensation scheme (I_BL, I_CL, I_C, LFF, CSA)
3. Front-end-electronics schematic
The PV signal preamplifier is realized using an AD8033 operational amplifier (Fig. 2).
Depending on the Rfb/Cfb ratio, it works either as a transimpedance or as a charge-sensitive
preamplifier. This is illustrated by the shape of the output signal in Fig. 3. The same ratio
influences the amplitude-frequency characteristic of the amplifier (Fig. 4). An overcompensation
is seen for Cfb < 5 pF, so the amplifier can lose stability. This can be seen both in the
amplitude-frequency and in the output signal shape diagrams.

Fig. 2. Front-end electronics schematic: PV cell, AD8033 preamplifier, CA3080 OTA compensation stage and CA3140 buffer

Fig. 3. Output signal shape vs Cfb

Fig. 4. Bandwidth of the amplifier vs Cfb

An OTA CA3080 is used to compensate the background and the slow component of the
PV current. Fig. 5 shows the device's performance for input pulses with a duration of 1 μs and
an amplitude of 10 μA, while the slow component changes from 0 to 500 μA. It can be seen
that the disturbance is successfully compensated up to 450 μA. We get an output signal with
considerable amplitude without any significant offset.

Fig. 5. Compensation of the slow component


That guarantees the schematic would work flawlessly in high background lighting
conditions e.g twilight, full moon or urban areas.
A first order low pass filter is implemented based on Rf and Cf, followed by a buffer
amplifier CA3140. The signals with frequency lower than the cut-off frequency of the filter
are fed to the input of the adjustable current generator and change its output value,
compensating the low frequency background and noises. The high frequency signals (in this
case caused by the short light pulses) cant get through the filter, the current generator keeps
its value, corresponding to the mean value of the offset current of the PV cell. The high
output resistance of the current generator guarantees the full collection of the charge
induced by the short light pulse. The influence of the filter cut-off frequency on the output is
illustrated in Fig. 6. It can be seen that the compensation circuit does not deteriorate the
rise time or the output amplitude of the pulse, i.e. its operation does not change the signal.
The output signal vs. the detector's capacitance is shown in Fig. 7. It is seen that the
amplifier might become unstable at high capacitance. This suggests the general rule when
connecting PV cells into batteries: they should be connected in series in order to decrease
the total capacitance. In our experiments this value was decreased to 7.2 nF and 3.6 nF for
the different configurations.

4. Test SETUP
Our experimental setup (Fig. 8) uses a light pulse generator [8] with adjustable
amplitude and duration of the signal, and interchangeable LEDs (red, green and blue). It is
possible to set 15 different levels of the LED drive current, either in continuous or in pulsed
mode. The light pulse length is step-adjustable in the range between 50 ns and 250 μs.
The PV output current is measured in continuous-lighting mode using a highly sensitive
digital microammeter (6-Digit Multimeter Hameg 8112-3). That makes it possible, by
factoring in the light pulse length, to calculate the charge generated in the volume of the
PV cell. The amplitude and the shape of the output pulses are monitored with a digital
oscilloscope.

Fig. 8. Test setup: PV cell, CSA, OTA, pulse generator with LED, 6-Digit Multimeter Hameg 8112-3 and TDS 2022 digital oscilloscope
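The charge estimate mentioned above is simple bookkeeping: the PV current measured in
continuous-lighting mode, multiplied by the pulse length, gives the charge one light pulse would
generate, and dividing by the elementary charge gives the equivalent number of photoelectrons.
The numbers in the sketch below are placeholders, not measured values.

```python
E_CHARGE = 1.602e-19   # elementary charge, C

def pulse_charge(i_pv_amps, pulse_len_s):
    """Charge generated in the PV cell by one light pulse, estimated from the current
    measured in continuous-lighting mode and the known pulse length."""
    q = i_pv_amps * pulse_len_s
    return q, q / E_CHARGE          # (coulombs, equivalent number of photoelectrons)

# Placeholder values: 2 uA of DC photocurrent and a 1 us light pulse.
q, n_pe = pulse_charge(2e-6, 1e-6)
print(f"charge per pulse: {q*1e12:.1f} pC (~{n_pe:.2e} photoelectrons)")
```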











Results
Series of measurements were conducted using commercially available PV panels,
consisting of either 36 cells sized 60x15 mm (nominal output 5W at 12V), or 36 cells of
60x30 mm each (nominal output 10W at 12V). The PV cells are internally connected in
series. Two of the smaller panels were also used connected in series as an aggregate panel.

Fig. 6. Output signal shape as a function of Cf (filter's cut-off frequency)

Fig. 7. Output signal shape vs detector's capacitance

The shape of the output pulse is presented on the oscillogram (Fig. 9), together with the pulse
lighting the LED. The experiments, carried out using slowly changing background light (e.g.
reflected luminescence light), have proven that the schematic successfully compensates these
disturbances.
Fig. 10 shows the signal-to-noise (S/N) ratio for the three panel assemblies as a function of
the signal (number of photoelectrons generated by the 1 μs LED pulse per m²). An S/N ratio
of 3 is reached at about 10^8 pe/m², while the night sky is estimated to give about 10^12 pe/m²
per second (i.e. 10^6 pe/m² within the pulse duration).









Conclusion
The experimental results show that both the performance and the sensitivity of
PV cells are sufficient to register the Cherenkov component of very high energy EAS. The
compensation circuit for the slow component (due to the background light) allows the
observation period to be increased significantly, covering the whole night, even at full moon,
as well as performing observations at sites with a poor astroclimate.

Acknowledgement
The present research is supported by the Technical University - Sofia under Contract
112051-3.

References
[1] P.K.F. Grieder. Cosmic rays at earth: researcher's reference manual and data book. Elsevier,
Amsterdam, 2001, p. 1093.
[2] V.F. Sokurov. Physics of cosmic rays: cosmic radiation. Rostov-on-Don: Feniks, 2005 (in Russian).
[3] http://livni.jinr.ru
[4] V.S. Murzin. Introduction to Physics of Cosmic Rays. Moscow: Atomizdat, 1979 (in Russian).
[5] S. Cecchini et al. Solar panels as air Cherenkov detectors for extremely high energy cosmic
rays. arXiv:hep-ex/0002023v1 (7 Feb 2000).
[6] D.B. Kieda. A new technique for the observation of EeV and ZeV cosmic rays. Astroparticle
Physics 4, 1995, pp. 133-150.
[7] H. Spieler. Semiconductor detector systems. Oxford Press, 2005.
[8] G. Mitev, L. Tsankov, M. Mitev. Light Pulse Generator for Photon Sensor Analysis. Annual
Journal of Electronics, ISSN 1313-1842, Vol. 4, N. 2, 2010, pp. 111-114.
Fig. 9. Output pulse shape together with the LED driving pulse (oscillogram)

Fig. 10. Signal-to-noise ratio vs charge [pe/m²] for the 1x5W, 1x10W and 2x5W (in series) panel configurations

Managing Distributed Computing Resources with DIRAC

A. Tsaregorodtsev
Centre de Physique des Particules de Marseille, France

Many modern applications need large amounts of computing resources both for calculations and data
storage. These resources are typically found in the computing grids but also in commercial clouds and computing
clusters. Various user communities have access to different types of resources. The DIRAC project provides a
solution for an easy aggregation of heterogeneous computing resources for a given user community. It helps also
to organize the work of the users by applying policies regulating the usage of common resources. DIRAC was
initially developed for the LHCb Collaboration, a large High Energy Physics experiment at the LHC accelerator
at CERN, Geneva. The project now offers a generic platform for building distributed computing systems. The
design principles, architecture and main characteristics of the DIRAC software will be described using the LHCb
case as the main example.
1. Introduction
The High Energy Physics (HEP) experiments and, first of all, LHC experiments at
CERN have dramatically increased the need for computing resources necessary to digest the
tremendous amounts of accumulated experimental and simulation data. Most of the
computing resources needed by the LHC HEP experiments, as well as by some other
communities, are provided by computing grids. The grids provide uniform access to the
computing and storage resources, which greatly simplifies their usage. The grid middleware
stack also offers the means to manage the workload and data for the users. The success of the
grid concept resulted in the emergence of several mutually incompatible grid middlewares.
Multiple efforts to make different grid middlewares work with each other were not
successful, leaving the solution for interoperability to be sought elsewhere.
The grids are not the only way to provide resources to the user communities. There are
still many sites (universities, research laboratories, etc.) which hold computing clusters of
considerable size but are not part of any grid infrastructure. These resources are
mostly used by local users and cannot easily be contributed to the pool of common resources
of a wider user community, even if the site belongs to its scientific domain. Installing the
grid middleware to include such computing clusters in a grid infrastructure is prohibitively
complicated, especially if there are no local experts to do it. There are also emerging new
sources of computing power, which are now commonly called computing clouds. Commercial
companies provide most of these resources now, but open-source cloud solutions of production
quality are also appearing.
The grid users are organized in communities with common scientific interests. These
communities also share common resources provided by the community members.
The contributed resources can come from any of the sources listed above and a priori they
are not uniform in their nature. Including all these resources in one coherent system as seen by the
users is still a challenging task. In addition, the common resources assume common policies
for their usage. Formulating and imposing such policies while acknowledging the possible
requirements of the resource providers (sites) is yet another challenging task.
The variety of requirements of different grid user communities is very large and it is
difficult to meet everybody's needs with just one set of middleware components.
Therefore, many communities, and most notably the LHC experiments, have started to
develop their own sets of tools, which are evolving towards complete grid middleware
solutions. Examples are numerous, ranging from subsystem solutions (the PANDA workload
management system [1] or the PhEDEx data management system [2]) to close-to-complete Grid
solutions (the AliEn system [3]).
The DIRAC project is providing a complete solution for both workload and data
management tasks in a distributed computing environment [4]. It also provides a software
framework for building distributed computing systems. This allows easy extension of the
already available functionality for the needs of a particular user community. The paradigm of
the grid workload management with pilot jobs introduced by the DIRAC project brings an
elegant solution to the computing resource heterogeneity problem outlined above. Although
developed for the LHCb experiment, the DIRAC project is designed to be a generic system
with LHCb specific features well isolated as plugin modules [5].
In this paper we describe the main characteristics of the DIRAC workload
management system in Section 2. Its application to managing various computing resources is
discussed in Section 3. Section 4 presents the way in which community polices are applied in
DIRAC.
2. DIRAC Workload Management
The workload management system ensures dispatching of the user payloads to the
worker nodes for execution. One can distinguish two types of workload management
paradigms. The first consists in using a broker service, which chooses an appropriate
computing resource with capabilities meeting the user job requirements and pushes the job
to this resource. In this case the user job is not kept by the broker until execution but is
directly transmitted to the target computing cluster, where it enters the local waiting queue. This
paradigm is used, for example, by the standard gLite grid middleware [6]. The alternative
paradigm uses a special kind of jobs called pilot jobs. The pilot jobs are dispatched to the
worker nodes and pull the actual user jobs, which are kept in a central Task Queue service
for a given community of users. The DIRAC project uses the pull paradigm with pilot jobs;
its properties and advantages are described in this section [7].
2.1. Pilot jobs scheduling paradigm
In the pilot job scheduling paradigm (Fig. 1), the user jobs are submitted to the central Task
Queue service. At the same time the pilot jobs are submitted to the computing resources by
specialized components called Directors. Directors use the job scheduling mechanism suitable
for their respective computing infrastructure: grid brokers, batch system schedulers, virtual
machine dispatchers for clouds, etc. The pilot jobs start execution on the worker nodes, check
the execution environment, collect the worker node characteristics and present them to the
Matcher service. The Matcher service chooses the most appropriate user job waiting in the
Task Queue and hands it over to the pilot for execution. Once the user job is executed and its
outputs are delivered to the user, the pilot job can take another user job if the remaining time
of the worker node reservation is sufficient.
Fig. 1. Workload Management with pilot jobs
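The pull scheduling loop can be pictured with a toy sketch: user jobs wait in a central queue with
their requirements and priorities, the pilot describes the worker node it landed on, and the Matcher
hands back the highest-priority job that fits. Class and field names below are illustrative and are
not the DIRAC API.

```python
from dataclasses import dataclass, field
from typing import Dict, List, Optional

@dataclass
class Job:
    job_id: int
    owner: str
    priority: int
    requirements: Dict[str, float]     # e.g. {"memory_gb": 2, "disk_gb": 10}

@dataclass
class TaskQueue:
    jobs: List[Job] = field(default_factory=list)

    def submit(self, job: Job) -> None:
        self.jobs.append(job)

    def match(self, resources: Dict[str, float]) -> Optional[Job]:
        """Matcher: return the highest-priority waiting job whose requirements
        are satisfied by the worker-node description sent by the pilot."""
        eligible = [j for j in self.jobs
                    if all(resources.get(k, 0) >= v for k, v in j.requirements.items())]
        if not eligible:
            return None
        best = max(eligible, key=lambda j: j.priority)
        self.jobs.remove(best)
        return best

# A pilot that has checked its worker node reports the capabilities and pulls work.
tq = TaskQueue()
tq.submit(Job(1, "alice", priority=5, requirements={"memory_gb": 2}))
tq.submit(Job(2, "bob",   priority=9, requirements={"memory_gb": 8}))
worker = {"memory_gb": 4, "disk_gb": 50}
while (job := tq.match(worker)) is not None:
    print(f"pilot runs job {job.job_id} of {job.owner} (priority {job.priority})")
```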
There are several obvious advantages of this scheduling paradigm. The pilot jobs
check the sanity of the execution environment (the available memory, disk space, installed
software, etc.) before taking the user job. This dramatically reduces the failure rate of the user
jobs compared with the case when the user jobs are dispatched directly to the worker nodes.
The ability of the pilot jobs to execute multiple user jobs considerably reduces the load on the
grid infrastructures because fewer jobs are managed by the grid brokers. On the sites
there is no longer a need to configure multiple queues of different lengths to better accommodate
short and long user jobs, because pilot jobs can fully exploit time slots in the long queues.
However, there are also other important advantages of this job scheduling method that are
vital for large user communities and which will be discussed in subsequent sections.
2.2. Security aspects of the pilot job scheduling
The pilot job scheduling paradigm has several security properties different from those of the
standard grid middleware. Let's look into the details of the delegation of the identity of the
owners of the user and pilot jobs. The user jobs are submitted to the central Task Queue
service with the credentials (grid proxies) of the users owning the jobs. The pilot jobs are
submitted when there are user jobs waiting in the Task Queue. There are two possible cases.
In the first case, the pilot jobs are submitted to the grid resources with the same credentials as
the corresponding user jobs. Pilot jobs submitted in this way are called private. The private pilot
jobs can only pick up the jobs of the user with the same identity as that of the pilot job itself.
In this case the owner of the executed user payload is the same as the owner of the pilot job as
seen by the site computing cluster. Site managers have full control of the job submission and
can easily apply site policies, like blacklisting misbehaving users by banning the submission of
their pilot jobs.
In the second case, the pilot jobs are submitted with special credentials, the same for
the whole user community, e.g. the LHCb Collaboration. These pilot jobs are not submitted
for the jobs of a particular user, and their credentials allow them to pick up the jobs of any user
of the community. These are the so-called generic or multiuser pilot jobs. In this case the site
managers see the identities of the pilot jobs, which are not the same as those of the actual
payloads. The managers can either delegate the traceability of the executed payloads to the user
community or impose the requirement that the pilot jobs interrogate the site security services
to verify the rights of the payload owner before executing it on the site. The latter is achieved
by using the gLExec [8] tool, recently developed for this purpose and widely deployed on
WLCG sites.
The advantage of the first case is that the security properties of the job scheduling
system are the same as for the base workload management of the grid infrastructure, for
example the gLite WMS in the case of the WLCG grid. However, in this case there is no
possibility to manage the community policies in one central place, as described in Section 4.
Since the possibility to take the community policies into account during job scheduling is
extremely important, multiuser pilot job based scheduling became very popular and is
used, in particular, by all four LHC experiments.
The use of pilot jobs, and especially of the generic pilot jobs, requires advanced
management of the user proxies. During execution, the user payloads need secure access to
various services with proper authentication based on the payload owner's proxy. The pilot jobs
can be submitted with credentials different from those of the owners of the user jobs. Therefore,
the pilots are enabled to initiate delegation of the user proxies to the worker nodes after a user
job is selected for execution. The DIRAC project provides a complete proxy management
system to support these operations. The ProxyManager service contains a secure proxy
repository where users deposit long-living proxies. These proxies are used for
delegation of short-living limited proxies to the worker nodes on pilot requests. The same
mechanism is used for user proxy renewal if the proxy expires before the end of the user job
execution.
3. Using Heterogeneous Computing Resources
The pilot job scheduling paradigm brings a natural and elegant solution to the problem
of the aggregation of heterogeneous resources in one coherent system from the user
perspective. In one phrase this can be explained by the fact that all the computing
infrastructures are different but all the worker nodes are basically the same. Running the pilot
jobs on the worker nodes hides the differences of the computing infrastructures and presents
the resources to the Matcher service (Fig. 1) in a uniform way. No interoperability between
different infrastructures is required. The DIRAC users see resources belonging to different
grids as just a common list of logical site names in the same workload management system.
Lets take a closer look at how various types of resources are used with the DIRAC
WMS by the LHCb Collaboration.
3.1. Using WLCG resources
Originally, the WLCG grid was used by the LHCb/DIRAC job scheduling system by
submitting pilot jobs through the native gLite middleware workload management system.
This was the simplest way to build the corresponding pilot Directors; however, this method
quickly showed important difficulties. By design the gLite resource brokers are supposed to
be central components. They take the site status information from the central information
system (BDII) and then decide which site is the most appropriate for the given job. Once the
decision is taken, the job is submitted to the chosen site and enters the local task queue there.
The capacity of a single resource broker is not sufficient to accommodate the load of a large
community like the LHCb experiment. Therefore, LHCb is obliged to use multiple
independent resource brokers. In this case the job submission is obviously not optimal.
Indeed, all the brokers use the same site state information and choose the same site as the
most attractive one (see the right branch in Fig. 2). They start to submit jobs to this site
without knowing about each other. The gLite information system is not reactive enough to
propagate the changed site status information to the brokers. As a result the site often gets
too many submitted jobs, which wait in the local task queue, whereas other sites remain
underloaded.
With the recent development and wide deployment of the CREAM Computing
Element service, it became easy to submit pilot jobs directly to the sites (left branch in Fig. 2).
The CREAM interface allows obtaining more up-to-date Computing Element status
information directly from the service and not from the BDII system. This information is used
by the CREAM Director to decide whether the load of the site permits submitting pilot jobs,
provided that there are suitable user jobs in the DIRAC Task Queue. In this case there are
individual independent Directors for each site, and the number of pilot jobs submitted is
chosen in order to avoid unnecessary site overloading. A typical strategy consists in
maintaining a given number of pilot jobs in the site local queue.

Fig. 2. WMS versus direct pilot job submission to the Computing Element

With direct pilot job submission, the sites compete with each other for the user jobs, making their turnaround
faster. This is the ultimate goal of any workload management system. In 2011 more than
half of the LHCb jobs were executed using direct submission to the CREAM Computing
Elements. There are plans to migrate the pilot job submission completely to the direct mode
and effectively stop using the gLite WMS service.
3.2. DIRAC sites
The DIRAC WMS can also provide access to resources on sites which are
not part of any grid infrastructure. It is quite common to encounter sites owning
considerable computing power in their local clusters but not willing to be integrated into a
grid infrastructure because of lacking expertise or other constraints. The DIRAC solution
to this problem is similar to the case of the direct pilot job submission to the CREAM
Computing Element service. It consists in providing a dedicated Director component. Two
variations of DIRAC site Directors are available.
In the first case, the Director is placed on the gateway node of the site, which is
accessible from outside the site and, at the same time, has access to the local
computing cluster (Fig. 3). The gateway host must have a host grid certificate, which is used to
contact the DIRAC services. The Director interrogates the DIRAC Task Queue to find out if
there are waiting user jobs suitable for running on the site. If there are user jobs and the site
load is sufficiently low, the Director gets the pilot credentials (proxy) from the DIRAC
ProxyManager service, prepares a self-extracting archive containing a pilot job bundled with
the pilot proxy and submits it as a job to the local batch system. Once the pilot job starts
running on the worker node, it behaves in exactly the same way as any other pilot job, for
example one submitted through the gLite resource broker as described above.



In the second case, the Director runs as part of the central DIRAC service. A special
dirac user account is created on the site gateway, capable of submitting jobs to the local
computing cluster. This account is used by the Director to interact with the site batch system
through an ssh tunnel using the dirac user account credentials (either ssh keys or a password).

Fig. 3. DIRAC site with on-site Pilot Director

Fig. 4. DIRAC site with off-site Pilot Director
Otherwise, the behaviour is similar to the previous case. The self-extracting archive
containing the pilot job and proxy is transmitted to the site gateway through the ssh tunnel and
submitted to the batch system. After the pilot job is executed, its pilot output is retrieved via
the same ssh tunnel mechanism.
The first method is used in case the site managers want to have full control of the
pilot submission procedure, for example, providing their own algorithms for evaluating
site availability. The second method requires minimal intervention on the sites: only the creation
of the dedicated user account and, possibly, setting up a dedicated queue available for this
account. This makes the incorporation of new sites extremely easy. The second method is most
widely used by LHCb and other DIRAC user communities. The batch systems that can be
used in this way include PBS/Torque, Grid Engine and BQS. Access to other batch systems can
be easily provided by writing the corresponding plug-ins.
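The off-site Director is, in essence, a thin wrapper around ssh: it transfers the self-extracting
pilot bundle to the gateway under the dedicated account and submits it to the local batch system.
The sketch below is hypothetical (host name, account, queue and file names are invented) and
only shows the shape of such a submission for a PBS/Torque site.

```python
import subprocess

def submit_pilot_over_ssh(gateway="gateway.example.org", account="dirac",
                          bundle="pilot-bundle.sh", queue="dirac"):
    """Hypothetical off-site Director step: copy a self-extracting pilot bundle
    to the site gateway and submit it to a PBS/Torque batch system via qsub."""
    target = f"{account}@{gateway}"
    # 1) transfer the bundle (pilot script + limited proxy) to the gateway
    subprocess.run(["scp", bundle, f"{target}:{bundle}"], check=True)
    # 2) submit it to the local batch system through the ssh tunnel
    result = subprocess.run(["ssh", target, "qsub", "-q", queue, bundle],
                            check=True, capture_output=True, text=True)
    return result.stdout.strip()      # the batch job identifier printed by qsub

if __name__ == "__main__":
    print("submitted pilot, batch id:", submit_pilot_over_ssh())
```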
In LHCb the most notable DIRAC site is the one provided by the Yandex commercial
company. As shown in Fig. 5, the Yandex site provides the second largest contribution to the
LHCb MC simulation capacity, after CERN.



3.3. Other computing infrastructures
Other computing infrastructures accessible through the DIRAC middleware include the OSG
and NorduGrid grids, the AMAZON EC2 Cloud and other cloud systems. A DIRAC gateway to
BOINC-based volunteer grids is under development as well. These infrastructures
are not used by the LHCb Collaboration and are therefore not detailed in this paper.
4. Applying community policies
As mentioned in Section 2, the workload management system with pilot jobs
offers the possibility of easy application of the community policies. In large user
communities, or Virtual Organizations (VOs), sharing large amounts of computing resources,
managing the priorities of different activities as well as resource quotas for various users and user
groups becomes a very important task. One way to approach this problem is to define the VO-
specific policies on the sites by setting up special queues for each VO group and assigning priorities
to them.

Fig. 5. The LHCb site usage for MC simulation, Sep. 2011

However, with the high number of sites serving the given VO (about 200 for the
LHC experiments) and the number of the VO policy rules rapidly increasing with the number
of user groups within the VO, this approach becomes extremely cumbersome, especially for keeping the
rules up to date.
In the DIRAC workload management system with pilot jobs all the community
payloads pass through the central Task Queue. This is where the policies can be applied
efficiently and precisely, in one single place, instead of scattering them over multiple sites.
When the pilots query the Matcher service for user jobs, the Matcher chooses not
only the jobs whose requirements correspond to the worker node capabilities, but also selects
the current highest-priority job among the eligible ones. It is important to note that the
selected user jobs start execution immediately, without entering the site local batch queue
with yet another loosely defined delay before execution. This increases the precision of this
method of applying priorities.
The application of the VO policies in the central Task Queue is of course limited if the
pilot jobs are private (see 2.2). Indeed, it is likely that the highest-priority job, at the moment
when a pilot queries the Matcher service, does not belong to the same user as the pilot job. In
this case, only the highest-priority job of the same user is taken. This, of course, dramatically
limits the usefulness of the policy application, basically only allowing users to prioritize
their own payloads. In the case of generic pilot jobs there is no such limitation and the VO
policies can be fully applied. In practice, there are sites that allow generic pilot jobs and there
are also sites that do not. In the case of LHCb a mixture of generic and private pilot jobs is
used. This somewhat complicates the application of the community policies but is
mandatory in order to respect the site local rules.
4.1. Static versus dynamic shares
In DIRAC the priorities can be applied on the level of individual users and user
groups.
Users can assign priorities to their jobs as an arbitrary non-negative integer
assigned to the Priority JDL parameter of the job. These priorities are used when selecting
among the jobs of the same user.
Each DIRAC user group can have a definition of its job share. These shares are used
by the Matcher service to recalculate the job priorities such that the average number of
currently executed jobs per group is proportional to the group shares. This allows defining user
groups per distinct activity (for example, MC simulation, data reprocessing, analysis for
conference X, etc.) and assigning priorities to each activity. It is important to mention that the
production jobs and user jobs are all managed in the same way using the same computing
resources. Avoiding the separation of resources for production and user activities helps to
increase the overall efficiency of the policy application system.
The described priorities and shares are defined statically in the DIRAC Configuration
Service (CS). However, it is important that the actual job priorities taken into account
during the pilot matching operation also depend on the history of resource usage by the
users and groups. If a given user or group has been working intensively for some time,
their priorities are lowered to give more resources to others. The recalculation of the priorities
is performed in such a way that the average shares of consumed resources over long periods of
time are preserved as defined in the DIRAC CS.
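The recalculation can be sketched as follows: each group has a static share, and its effective
priority is scaled down according to how much of the recent resource consumption it has already
taken, so that long-term averages converge towards the configured shares. The normalization
below is illustrative and is not the actual DIRAC algorithm.

```python
def effective_priorities(shares, recent_usage, floor=1e-6):
    """Illustrative dynamic prioritization: groups that have consumed more than
    their share of the recent resources get their priority scaled down."""
    total_share = sum(shares.values())
    total_usage = sum(recent_usage.values()) or 1.0
    prios = {}
    for group, share in shares.items():
        target = share / total_share                       # fraction the group should get
        used = recent_usage.get(group, 0.0) / total_usage  # fraction it actually got
        prios[group] = target / max(used, floor)           # >1 means under-served: boost it
    return prios

shares = {"MC_simulation": 50, "reprocessing": 30, "user_analysis": 20}
recent_usage = {"MC_simulation": 70.0, "reprocessing": 20.0, "user_analysis": 10.0}
for g, p in sorted(effective_priorities(shares, recent_usage).items(), key=lambda x: -x[1]):
    print(f"{g:15s} effective priority {p:.2f}")
```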
The DIRAC central Task Queue and the pilot jobs form dynamically a workload
management system similar in its properties to a classical batch system. This allows, in
principle, applying the same schedulers as are used with standard batch systems. In
particular, the use of the MAUI batch scheduler in conjunction with the DIRAC Task Queue
was demonstrated [9]. In practice, the statically defined user and group shares have been sufficient
for the LHCb experiment for all the purposes so far.
4.2. Site policies
It is possible that for some sites extra requirements have to be imposed. For example, a
site may not allow more than a certain number of MC production jobs if it is dedicated mostly
to user analysis or real data processing. In this case, one can define for each site limits for
jobs of some special kind. In this way it is possible to ban certain users or groups from
executing their jobs on a site even if generic pilots are used and the site cannot selectively
reject user jobs. Another use case for this facility is limiting the number of simultaneously
running jobs requiring a certain resource. For example, one can limit the number of data
reprocessing jobs to lower the needed I/O bandwidth of the local storage system.
5. Conclusion
The DIRAC project, started for the LHCb experiment in 2003, has now evolved into a
general-purpose grid middleware framework, which allows building distributed computing
systems of arbitrary complexity. The innovative workload scheduling paradigm with pilot
jobs introduced by the project opened many new opportunities for a more efficient usage of
distributed computing resources. It allows combining in a single system heterogeneous
computing resources coming from different middleware infrastructures and from different
administrative domains. This aggregation does not require any level of interoperability
between the different infrastructures. It is also transparent for the actual resource providers
and does not require any DIRAC-specific services running locally on the sites. For the LHCb
experiment, the DIRAC workload management system ensures full access to the WLCG grid
resources either via the gLite WMS brokers or with direct access to the CREAM
Computing Elements. DIRAC also provided access to several non-grid sites by fully including
them into the LHCb production, monitoring and accounting systems. Other user communities
use DIRAC for accessing other resources, like the ILC Collaboration using the OSG grid or
the Belle Collaboration using the AMAZON EC2 Cloud. Access to other types of computing
resources can be provided with new plug-ins being developed now.
The performance of the DIRAC workload management system is sufficient for a large
community like the LHCb experiment, as illustrated in Fig. 6. It is shown that the system can
sustain up to 40K simultaneously running jobs at more than one hundred sites.
A very important ingredient of the DIRAC workload management is the system for
definition and application of the community policies for joint use of the common computing
resources. This facility is fully exploited by the LHCb Collaboration to control simultaneous
activities of data production managers and user groups.
Fig. 6. Simultaneously running jobs in the DIRAC WMS, 2011
On the whole the DIRAC workload management system has proven to be versatile
enough to meet all the current and future requirements of the LHCb Collaboration. Therefore,
it is now starting to be widely used by other user communities in HEP and other
application domains.

References
[1] T. Maeno. PanDA: distributed production and distributed analysis system for
ATLAS. J. Phys.: Conf. Ser. 119, 2008, p. 062036.
[2] R. Egeland et al. PhEDEx Data Service. J. Phys.: Conf. Ser., 219, 2010, p. 062010.
[3] S. Bagnasco et al. AliEn: ALICE environment on the GRID. J. Phys.: Conf. Ser.
119, 2008, p. 062012.
[4] http://diracgrid.org
[5] A. Tsaregorodtsev et al. DIRAC3: The New Generation of the LHCb Grid Software.
J. Phys.: Conf. Ser., 219, 2010, p. 062029.
[6] Lightweight middleware for grid computing, http://glite.web.cern.ch/glite
[7] A. Casajus, R. Graciani, S. Paterson, A. Tsaregorodtsev. DIRAC pilot framework and
the DIRAC Workload Management System. J. Phys.: Conf. Ser., 219, 2010, p. 062049.
[8] D. Groep et al. gLExec: gluing grid computing to the Unix world. J. Phys.: Conf.
Ser. 119, 2008, p. 062032.
[9] G. Castellani and R. Santinelli. Job prioritization and fair share in the LHCb
experiment. J. Phys.: Conf. Ser., 119, 2008, p. 072009.
On some specific parameters of PIPS detector

Yu. Tsyganov
Flerov Laboratory of Nuclear Reactions, Joint Institute for Nuclear Research, Dubna, Russia

When applying a PIPS detector to the detection of heavy ions or their decay products one deals not
only with the standard operating parameters of silicon radiation detectors, such as resolution (energy and
position), but also with a set of specific parameters, like the surface recombination velocity, the mean value of
the recombination charge losses within the whole PHD (pulse height defect), a parameter which estimates to
some extent the probability of charge multiplication phenomena, and so on. Some of these parameters are used
for simulations of the EVR registered energy spectra in the PIPS detector.
Key words: PIPS detector, computer simulation, registered energy, rare decays.

1. Surface Recombination Concept (SRC)
Charge losses, which cause the PHD (pulse-height defect) in silicon nuclear-radiation
detectors irradiated by heavy ions, are determined by the recombination of nonequilibrium
current carriers in heavy-ion tracks at the plasma stage.
The value of the relative charge losses is defined by the expression [1-3]:

    P = sT/R,

where s is the surface-recombination velocity and R is the particle path in the silicon.
Typically s is ~10^3-10^4 cm/s and ~n·10^2-10^3 cm/s for n-Si(Au) and PIPS detectors, respectively.
Of course, the total value of the PHD is composed of three components, namely: the
stopping component (calculated e.g. by the Wilkins formula), the direct loss in the metal (or p+
implanted) contact of the top electrode, and the recombination losses discussed above.
The most important factor influencing the detector resolution (for HI, EVRs, FF)
at the registration of strongly ionizing particles [4], which create tracks with a high density of
non-equilibrium current carriers, is the fluctuation of the charge collected on the detector
electrodes. It is necessary to distinguish between two components:
a) a component caused by inhomogeneities of the detector parameters responsible for the
collected charge value, characterized by a parameter R describing the combined
inhomogeneity of the silicon resistivity and the effective lifetime of nonequilibrium carriers;
b) a component related to the statistical nature of the trapping of nonequilibrium
carriers by the recombination centers.

2. Example of Simulation of the EVR SHE Spectrum for the DGFRS Detecting Module (PIPS+TOF)
The EVR registered energy spectrum was calculated by a Monte Carlo simulation taking into account neutron evaporation, energy losses in the different media (target material, hydrogen in the DGFRS volume, Mylar window, pentane in the TOF module), energy straggling, the equilibrium charge-state distribution width in hydrogen, the pulse height defect in the PIPS detector, and the fluctuations of the PHD. In [5] a simple empirical equation was obtained:

    E_REG = -2.05 + 0.73·E_in + 0.0015·E_in² - (E_in/40)³.
Here E_in is the incoming ER energy value in MeV and E_REG is the value registered in the detector. Fig. 1a shows the simulation reported in [5] for the ER with Z=118 (kinematics close to the Z=117 experiment conditions). The agreement between the simulation and the measured events is evident. Moreover, using this formula with the calculated incoming energy of 18.14 MeV, one obtains a calculated registered value of 11.59 MeV, whereas the mean measured value is 11.22±0.89 MeV. The correction value as a function of the incoming EVR energy is shown in Fig. 1b.
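A minimal sketch of how this empirical formula can be evaluated is given below; the sign pattern of the coefficients is an assumption chosen so that the quoted example (18.14 MeV incoming, 11.59 MeV registered) is reproduced:

// Evaluating the empirical registered-energy formula as reconstructed above.
// The coefficient signs are an assumption consistent with the quoted
// 18.14 MeV -> 11.59 MeV example; this is an illustration, not reference code.
#include <cmath>
#include <cstdio>

double registeredEnergy(double eIn /* incoming EVR energy, MeV */) {
    return -2.05 + 0.73 * eIn + 0.0015 * eIn * eIn - std::pow(eIn / 40.0, 3);
}

int main() {
    std::printf("E_REG(18.14 MeV) = %.2f MeV\n", registeredEnergy(18.14)); // ~11.59
    return 0;
}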


Fig. 1a. Computer simulation of the Z=118 EVR spectrum for the 249Cf + 48Ca reaction. Amplitudes for Z=117 are shown by long arrows, whereas the three Z=118 EVR events are shown by short arrows.


Fig.1b. Error (correction) function for different incoming EVR energies


3. Spectral shift of SHE ER with respect to the model reaction

If ΔA/252 << 1, then E_MEAS(A) ≈ E_MEAS(252) + h·ΔA.
Taking into account all the points in Fig. 2 [5-7], one can estimate the h parameter as

    h = dE/dA = (dE/dZ)·(dZ/dA) ≈ (1/Slope)·(102/252),

where Slope is the slope of the dependence shown in Fig. 2.




Fig. 2. The dependence of the SHE ER registered energy on Z (for the DGFRS detection module, Fig. 3) for different reactions with 48Ca

Fig. 3. PIPS focal plane detector and the 8 backward detectors of the DGFRS

References
[1] V.F. Kushniruk. JINR Comm. 13-11983, Dubna, 1978 (in Russian).
[2] V.F. Kushniruk, Yu.S. Tsyganov. PTE, No. 3, 1998, pp. 30-33 (in Russian).
[3] Yu.S. Tsyganov. Nucl. Instrum. & Meth. in Phys. Res. A 363, 1995, pp. 611-613.
[4] V.F. Kushniruk, Yu.S. Tsyganov. Appl. Radiat. & Isotopes, V.48, N.5, 1997, pp. 691-693.
[5] Yu.S. Tsyganov. Phys. of Part. & Nuclei, V.42, N.5, 2011, pp. 812-845.
[6] Yu.S. Tsyganov. Proc. of RT-2010, June, Lisbon, Portugal.
[7] Yu.S. Tsyganov, A.N. Polyakov, A.M. Sukhov et al. This volume.
[8] Yu.S. Tsyganov. XXII Symp. NEC'2009, Varna, Bulgaria, Sept. 7-14, 2009, pp. 278-280.
Automation of the experiments aimed to the synthesis of
superheavy elements

Yu. Tsyganov, A. Polyakov, A. Sukhov, V. Subbotin, A. Voinov,
V. Zlokazov, A. Zubareva
Joint Institute for Nuclear Research, Dubna, Russia

A PC-based integrated detection-parameter monitoring and protection system of the Dubna Gas-Filled Recoil Separator (DGFRS) is considered. It has been developed for long-term experiments at the U400 FLNR cyclotron aimed at the synthesis of superheavy nuclei in heavy-ion induced complete fusion reactions. Parameters and events related to:
a) the beam and the cyclotron,
b) the separator itself,
c) the detection system,
d) the target and the entrance window
are measured and stored in the protocol file of the experiment. Special attention is paid to generating alarm signals and to implementing the appropriate follow-up procedures. The method of active correlations is also considered in detail. It is with this technique that it has become possible to search in real-time mode for a pointer to a potential forthcoming multi-chain event and to provide a deep suppression of background products coming from the cyclotron. The test nuclear reaction used to extract calibration coefficients for the automatic real-time search of recoil-alpha correlation sequences is considered in brief.

Keywords: Cyclotron, heavy ion, PC based spectrometer, silicon detector, nuclear reaction, automation,
recoil-alpha correlation, high beam intensity, real-time algorithm, rare events, background suppression.

1. Introduction
During the long-term experiments aimed at the synthesis of superheavy elements (SHE) at the Dubna Gas-Filled Recoil Separator (DGFRS), a system to measure the technological parameters of the experiment and to provide a definite response to abnormal (alarm) situations is strongly required (Sukhov, 2010).
Usually, the list of parameters/signals includes the following:
- dipole and quadrupole current values, as well as the setting of alarm thresholds,
- rotation speeds of both the entrance window and the radioactive target wheels,
- pressure in the working area of the DGFRS and pentane pressure in the TOF (time-of-flight) module,
- temperature parameters,
- beam-associated parameters,
- vacuum parameters,
- pressure of saturated pentane vapor in the liquid pentane volume,
- photo-diode output signal amplitude of the rotating target.

The layout of the system is shown in Fig. 1.

Fig. 1. Layout of the system. U-400 cyclotron, gas-filled separator and beam chopper are shown
2. Monitoring/protection and detection system of the DGFRS
In the sections below two basic subsystems are considered. Note that the whole DGFRS system operates in the experiments aimed at the synthesis of SHE together with the main central control system of the U-400 cyclotron. In addition, some autonomous subsystems are applied, for example an aerosol control subsystem to monitor the atmosphere in the vicinity of the rotating target position, an autonomous vacuum system, and so on.

2.1. Parameters Monitoring and Protection System
Design of the system in brief: CAMAC, one (digital) crate, a KK012 controller, and a program (Windows XP, Builder C++). The main CAMAC modules in the digital apparatus crate are:
- BUK01 CAMAC-1M (to create the time-interval window for measuring the parameters), NF16A0 (3 in parallel, 0.02, 0.2 or 2 s interval) + (three independent outputs) NF16 (A2/A3, A4/A5, A6/A7);
- BZ01 alarm modules (2 modules 2M), each with 8 inputs and 1 output to switch the beam OFF; F24/26 (A0-A7) off/on protection mode, NF2 (read), NF10 (to reset the alarm register);
- 3 16-bit counters KS019;
- 1 ADC PA 01 (8-input ADC);
- 2 ADC PA-24K (10 bit; target control, TOF U400 (in reserve use));
- 5 KS022 rate meters (rotation control and some others);
- 1 KV009 1M DAC for setting the thresholds of the low dipole magnet current and of the wobbler current.
Gauges (sensors): MKS Baratrons (N=4), Pfeiffer vacuum gauges (N=7, long scale), target (entrance window) electric motors (N=2), Siemens asynchronous AC.
- Basic DC parameter measurements: voltage-frequency-code conversions.
- Main code: 5 Builder timers (CAMAC, calibration, visualization, imitation of rotation, protocol writing, etc.).

The user interface is shown in Fig. 2. The picture (screenshot) corresponds to a real SHE experiment with 48Ca ions as a projectile.



Fig. 2. Schematic of the system interface (four monitors: near the separator, in the remote control room, in the detection system room, and the monitor of the system itself)

Examples of the meanings of the displayed values:
a) a green field means the parameter is under control; if an alarm occurs, the color turns red and the beam is switched off;
b) 16.18 - value of the projectile beam at the target (add ~10%);
c) 1.736 (Torr) - pentane pressure;
d) 28.0 and 16.3 - target and entrance window wheel rotation speeds (1/s); double control: the rotation itself plus optical pairs (light source - photodiode);
e) 0.976 - H2 pressure in the separator;
f) 6.74E-06 - cyclotron vacuum value at the point before the separator window;
g) 449, 1622, 1619 A - current values in the dipole magnet and in the lenses;
h) 165 and 351 in the yellow field (left upper corner) - rates of events (focal-plane PIPS + side detectors) and of TOF camera true starts;
i) 14.8 A (right upper corner) - wobbler winding current;
j) the green button in the left upper corner starts a spectrum measurement from the additional detector (p-i-n, 8x8 mm²) located close to the rotating target in order to estimate its state;
k) booster pressure of the saturated pentane vapor.

2.2. Detection System of the DGFRS
For the synthesis and study of heavy nuclides, the complete fusion reactions of target
nuclei with bombarding projectiles are used. The resulting excited compound nuclei (CN) can
de-excite by evaporation of some neutrons, while retaining the total number of protons. Recoil
separators are widely used to transport EVRs from the target to the detection system

(Tsyganov, 2004) while simultaneously suppressing the background products of other reactions, the incident ion beam, and scattered target nuclei. A distinctive feature of gas-filled separators is that atoms recoiling from the target with a broad distribution of high charge states interact with the gas such that both the average charge and its dispersion are reduced. The decrease of the average charge of the EVRs results in their larger rigidity in the magnetic field in comparison with the background ions. Thus, EVRs can be rapidly separated in flight from unwanted reaction products and collected at the detection system. From the viewpoint of the separator design, a D-Q-Q configuration (dipole magnet and two quadrupole lenses) is applied. The simple but new idea of the algorithm is to search in real-time mode for time-energy-position recoil-alpha links, using a discrete representation of the resistive layer of the position-sensitive PIPS (Passivated Implanted Planar Silicon) detector, separately for recoil-like and alpha-particle-like signals. The real PIPS detector is thus represented in the PC's RAM in the form of two matrices, one for the recoils (static) and one for alpha particles (dynamic). Their elements are filled with the elapsed times of the corresponding events. The second index of a matrix element is defined by the vertical position, whereas the first index is in fact the strip number (1-12). In each case of alpha-particle detection, a comparison with the recoil matrix is made, involving the neighboring elements (+/-3). If the minimum time is less than or equal to the preset time, the system turns on the beam chopper (as the final control element), which deflects the heavy-ion beam in the injection line of the cyclotron for 1-5 min. The next step of the PC code ignores the vertical position of a forthcoming alpha particle during the beam-off interval. If such a decay takes place in the same strip that generated the pause, the duration of the beam-off interval is prolonged up to 10-30 min (see the example of the code fragment below, written in C++. The processes of filling the recoil matrix and of the time comparison for an alpha-particle signal are shown. EPSILEN is a variable time parameter which depends on the incoming alpha-particle energy; LOW_ALP is the low threshold for the comparison procedure; reco is the name of the recoil matrix; strip and npix are the first and second indexes, respectively; dt is the minimum time parameter of interest to the researcher).


if (tof == 0 && e > EAMIN && e < EAMAX && ww[14] < METKAMAX) {
    // Time differences between the current alpha-like signal and the recoil
    // matrix elements of the same strip and the neighboring pixels (+/-3)
    dt1 = tim1 - reco[strip][npix];
    dt2 = tim1 - reco[strip][npix+1];
    dt3 = tim1 - reco[strip][npix-1];
    dt4 = tim1 - reco[strip][npix+2];
    dt5 = tim1 - reco[strip][npix-2];
    dt6 = tim1 - reco[strip][npix+3];
    dt7 = tim1 - reco[strip][npix-3];
    // Minimum elapsed time over the neighborhood
    dt = (dt1 < dt2) ? dt1 : dt2;
    dt = (dt < dt3) ? dt : dt3;
    dt = (dt < dt4) ? dt : dt4;
    dt = (dt < dt5) ? dt : dt5;
    dt = (dt < dt6) ? dt : dt6;
    dt = (dt < dt7) ? dt : dt7;
    // Energy-dependent time window for the correlation search
    if (e > 9800 && e <= 11000) EPSILEN = 0.0025 * (11000.0 - e);
    if (dt >= 0 && dt <= EPSILEN && correlation == 0 && e >= LOW_ALP)
    {
        correlation = 1; cnt_cor++;
        if (cnt_cor < NA)
        {
            cr_alp[cnt_cor] = e;    cw_alp[cnt_cor] = ew;
            cr_tim[cnt_cor] = dt;   w11[cnt_cor] = ww[11]; w12[cnt_cor] = ww[12];
            el_tim[cnt_cor] = tim1; str_alp[cnt_cor] = strip; arr_pix[cnt_cor] = npix;
        }
    }
} // then action to the final control element via CAMAC (beam chopper)


2.2.1. Examples of Applications in the Heavy Ion Induced Reactions
During the last 8 years the mentioned method was successfully applied in the following heavy-ion induced nuclear reactions:

238U + 48Ca → (286-x)112 + xn;
242,244Pu + 48Ca → (290,292-x)114 + xn;
245,248Cm + 48Ca → (293,296-x)116 + xn;
243Am + 48Ca → (291-x)115 + xn (x = 3, 4);
237Np + 48Ca → 282 113 + 3n;
249Cf + 48Ca → 294 118 + 3n;
249Bk + 48Ca → 117 + 3,4n;
226Ra + 48Ca → 270Hs + 4n.

For example, Fig. 3 shows the result of applying the method to the 249Bk + 48Ca → 117 + 3n complete fusion reaction (Oganessian, 2010). Owing to the real-time search for the first ER-alpha correlation chain, the subsequent decays were detected in a background-free mode.



Fig. 3. Decay chain of the Z=117 SHE detected with application of the active correlations technique
3. Conclusions
A PC-based experiment detection and parameter monitoring-protection system of the Dubna Gas-Filled Recoil Separator has been designed and successfully applied in experiments aimed at the synthesis of SHE. It allows monitoring the parameters associated with the cyclotron beam, the detection system of the DGFRS, and the separator itself. It also provides fast switching off of the beam when alarm situations are detected. It is with this system that long-term irradiations of actinide targets by intense 48Ca heavy-ion beams have become possible.
References
[1] A. Sukhov, A. Polyakov, Yu. Tsyganov. Phys. of Part. & Nucl. Lett., Vol. 7, N.5, 2010, pp. 370-377.
[2] Yu. Tsyganov, A. Polyakov, V. Subbotin. Nucl. Instrum. and Meth. in Phys. Res. A 525, 2004, pp. 213-216.
[3] Yu. Oganessian. Phys. Rev. Lett., Vol. 104, N.14, 2010, p. 142502.
Calibration of the silicon position-sensitive detectors using the
implanted reaction products
A.A. Voinov, V.K. Utyonkov, V.G. Subbotin, Yu.S. Tsyganov, A.M. Sukhov,
A.N. Polyakov and A.M. Zubareva
Joint Institute for Nuclear Research, Dubna, Russia


Using the Dubna Gas-Filled Recoil Separator (DGFRS) for the synthesis and study of the decay properties of superheavy nuclei implies employing an appropriate fail-proof detector and an efficient measuring system. For a reliable determination of the nuclear decay properties we should accurately measure the energy, the x and y positions and the time of occurrence of the detected events. A method of energy and coordinate calibration of all the spectroscopic electronic devices operated with the DGFRS detector module is discussed. Recent experimental results obtained at the DGFRS using this method are presented.
1. Introduction
The synthesis of new nuclei in the vicinity of the island of stability of superheavy nuclei is one of the advanced fields of nuclear physics. For the detection of the reaction products (isotopes of nuclei with Z=113-118 [1]), the determination of their decay properties and for data acquisition [2-4], a measurement system has been developed and employed at the DGFRS (JINR, Russia).
Evaporation residues (ER) of the complete fusion reactions separated in the DGFRS pass through the time-of-flight (TOF) system consisting of two low-pressure multi-wire proportional gas chambers and are implanted into an array of 12 silicon semiconductor 4×1-cm² focal-plane detectors (strips). Each strip measures the energy of the incoming nucleus and the energies of its descendant α decays and/or spontaneous fission and also determines their vertical position on the detector surface (Fig. 1). When a nucleus is implanted into some position of a strip, its decay products, i.e., α particles or fission fragments (FF), should be observed in the same position (within the position resolution of the detector). To register the α particles that escape the focal-plane detector, the latter is surrounded by eight side detectors (4 cm × 4 cm) forming a box-like structure with an open side that faces the separator; this increases the α-particle detection probability up to 87% of 4π. A set of three similar Si detectors was mounted behind the detector array and operated in veto mode in order to eliminate signals from low-ionizing light particles which could pass through the 300-μm thick focal-plane detector without being detected by the TOF system.



Fig. 1. The focal-plane DGFRS detector array
2. Registering electronics and circuits
The electronic measuring system of the DGFRS (analog circuits such as charge preamplifiers and shaping amplifiers; mixed circuits such as amplitude discriminators and multiplexers; and digital ones - ADCs, counters, buffer memory, the triggering system and the controller) has been designed and implemented as modules of the NIM and CAMAC standards. The ionizing particles implanted into the detector create a charge inside the Si volume which is collected and amplified by the electronic circuits (Fig. 2), digitized and stored in the PC. The measuring system of the DGFRS is described in more detail in [2-4].

Fig. 2. Block diagram of the measuring electronics (for the focal detectors, only one detector circuit is shown completely): 1: one of the 12 focal position-sensitive strips; 2: charge-sensitive preamplifier PUTCH-7 [5]; 3: sum-invert amplifier; 4: 4-channel shaping amplifier 4SU-212; 5: shaping amplifier ORTEC 575A; 6: 16-channel analog multiplexer AM209-16; 7: ADC PA-24K [6]; 8: 4-channel amplitude discriminator DF-206; 9: logical system trigger KL-203 [7]; 10: time counter KC-011 [8]; 11: buffer memory [9]; 12: CAMAC crate controller [10].

The most recent development of the system concerned obtaining a better energy resolution (35 keV for the 9.26-MeV α particles of 217Th) with the design of a new charge-sensitive preamplifier [5] and reducing the registration system dead time (a value of 7 μs compared to the previous 85 μs was achieved [7]).
The PC-based code was designed for singling out short correlation sequences such as recoil-α or α-α in real-time mode [11]. For detection of the daughter nuclides in the absence of beam-associated background, the beam was switched off after a recoil signal was detected with an implantation energy E_ER = 6-17 MeV, expected for complete-fusion reaction products, followed by an α-like signal with the expected energy in the same strip, within the preset position window and time interval.
For example, in our recent experiment aimed at the synthesis of element 117, four of five decays of the daughter nuclei of 293117 (289115 → 285113 → 281111 → SF) were observed in low-background conditions (Fig. 3) after switching the beam off by the position-correlated ER-α signals from 293117 [12].


Fig. 3. Energy spectra recorded during the 252-MeV 48Ca + 249Bk run.
a) Total energy spectra of beam-on α-like signals and beam-off α particles.
b) Total fission-fragment energy spectra, both beam-on and beam-off. The arrows show the energies of events observed in the correlated decay chains.



3. Results of the calibration
For registering the implantation in the detector and the subsequent decays of nuclei, the spectrometric analyzers (in our measuring system, the 11-bit ADC PA-24K [6], 2048 channels for the maximum 5 V amplitude) have to record both the signals from possible low-energy α particles (~1 MeV or even less) escaping from the detector and the spontaneous fission events with total kinetic energy up to 250 MeV. For this purpose we separate the signals by energy value (Fig. 2) into two independent spectrometric channels (to measure the energy and coordinate in the strips for ERs and α particles separately from those for FFs). The energies of the α particles that escaped from the focal strips were measured by the side detectors with the ADC PA-24K.
The detection system was calibrated by registering the recoil nuclei of No and Th produced in the reactions 206Pb(48Ca, 2n) and natYb(48Ca, 3-5n) and their descendant decays (α decay or SF).
The detected nuclei from 208Rn to 217Th with well-known decay properties [13] (α-particle energies from 6.04 MeV to 9.26 MeV) give rather clear energy spectra (Fig. 4) with good statistics and allow us to calculate precisely the weight of a channel in the amplitude converters. The full-width-at-half-maximum (FWHM) energy resolution for α particles implanted in the focal-plane detector was measured to be 35-65 keV (at the beginning of the experiment), depending on the strip and the position within the strip. The α particles that escaped the focal-plane detector at different angles but hit a side detector were registered with an energy resolution of 120-150 keV for the summed signals (energy deposited in the side detector plus the residual energy in the focal-plane detector). If the energy deposited by an α particle as it recoiled out of the focal-plane detector was lower than the detection threshold of 0.9-1.1 MeV (such that its position was also lost) and it was detected only by a side detector, its total energy was estimated as the sum of the energy measured by the side detector and half of the threshold energy (0.5 MeV), with an uncertainty in determining the total energy of 0.4 MeV.
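These energy-reconstruction rules can be summarized by a short sketch (the numerical values are those quoted above; the function names are illustrative, not the names used in the actual DGFRS code):

// Sketch of the alpha-energy reconstruction rules described in the text.
#include <cstdio>

// Full energy when both the focal-plane (e1) and a side (e2) detector fire.
double alphaEnergyFull(double e1, double e2) { return e1 + e2; }

// If the focal-plane deposit is below the 0.9-1.1 MeV threshold and only a side
// detector fires, half of the threshold (0.5 MeV) is added; the uncertainty of
// such an estimate is about 0.4 MeV.
double alphaEnergySideOnly(double eSide) { return eSide + 0.5; }

int main() {
    std::printf("full: %.2f MeV, side-only: %.2f MeV\n",
                alphaEnergyFull(1.3, 8.1), alphaEnergySideOnly(8.1));
    return 0;
}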











The total kinetic energy (TKE) released in the SF of nuclei with Z ≥ 102 was determined as the sum E_tot + 23 MeV, where E_tot is the observed energy signal (with a systematic uncertainty of about 5 MeV when both fission fragments were detected) and 23 MeV is the correction for the pulse height defect and the energy loss in the dead layer of the detector, as determined from the 252No measurement (Fig. 5).
The FWHM position resolutions for registering correlated decays of nuclei implanted in the detectors were 1.1-1.6 mm for ER-α signals and 0.5-0.8 mm for ER-SF signals. If an α particle was detected by both the focal-plane detector (E_α1) and a side detector (E_α2), i.e. E_α = E_α1 + E_α2, the position resolution depended on the amplitude E_α1, but was generally inferior to that obtained for the full-energy signal (Fig. 6).

Fig. 6. Distributions of position deviations of ER-α and ER-SF signals measured for 217Th and 252No produced in 48Ca-induced reactions with natYb and 206Pb, respectively (left). Dependence of the ER-α FWHM position resolution on the amplitude of the α-particle signal (right).


Fig. 4. Energy spectrum of α particles detected in the natYb + 48Ca reaction.

Fig. 5. Spectrum of total energies deposited by the fission fragments of 252No implants as measured by both the focal-plane and side detectors (lower line) and by the focal-plane detector only (upper line).
According to the obtained position resolution of the single strips in the vertical direction, the whole focal-detector area can be represented as about 320 individual cells; this results in a low probability of random events when registering decay chains strongly correlated in position.
For the calibration of the fast branch of the registering system [10], the 220Th nuclei were used (E_α = 8.79 MeV, T_1/2 = 9.7 μs).
Summary
The described method of the energy and position calibrations (for the α and spontaneous-fission scales) of the measuring system by the products of the complete fusion reactions 206Pb(48Ca, 2n)252No and natYb(48Ca, 3-5n)215-220Th and their descendant nuclei allows us to have a fail-safe and effective registering system. This detection system, in combination with the efficient DGFRS setup and the heavy-ion cyclotron U-400 (FLNR JINR, Dubna), allowed us to synthesize six new superheavy elements with Z = 112-118 (Fig. 7) and to investigate their decay properties during the last decade [1, 12].
For further research of the domain of the SHEs in the vicinity of the closed shells Z=114 and N=184, we need to develop the detecting and measuring system of the DGFRS with the aim of increasing the reliability of the detection of nuclei (even in the case of a single event) and of obtaining better energy and position resolutions. It is planned to employ modern measuring modules with digital signal processors, for example PIXIE-16 [14].
Acknowledgements
This work has been performed with the support of the Russian Foundation for Basic
Research under grants Nos 11-02-1250 and 11-02-12066.



Fig.7. The top part of the chart of nuclides

References
[1] Yu.Ts. Oganessian et al. J. Phys. G 34, R165 (2007).
[2] Yu.A. Lazarev et al. JINR Report P13-97-238, Dubna, 1997.
[3] Yu.S. Tsyganov et al. Nucl. Instr. and Meth. in Phys. Res. A 392, 1997, p. 197.
[4] Yu.S. Tsyganov et al. Nucl. Instr. and Meth. in Phys. Res. A 525, 2004, pp. 213-216.
[5] V.G. Subbotin et al. (to be published).
[6] V.G. Subbotin, A.N. Kuznetsov. JINR Report 13-83-67, Dubna, 1983.
[7] V.G. Subbotin et al. Proceedings of the XXI International Symposium on Nuclear
Electronics and Computing NEC2007, Varna, Bulgaria, 2007, pp.401-404.
[8] N.I. Zhuravlev et al. JINR Report 10-8754, Dubna, 1975.
[9] N.I. Zhuravlev et al. JINR Report P10-88-937, Dubna, 1988.
[10] A.M. Sukhov et al. JINR Report P13-96-371, Dubna, 1996.
[11] Yu.S. Tsyganov, A.N. Polyakov. Nucl. Instr. and Meth. in Phys. Res. A 513, 2003,
p. 413; A 558, 2006, pp. 329-332; A 573, 2007, p. 161.
[12] Yu.Ts. Oganessian et al. PRL 104, 2010, p. 142502.
[13] Evaluated Nuclear Structure Data File (ENSDF), Experimental Unevaluated Nuclear
Data List (XUNDL), http://www.nndc.bnl.gov/ensdf.
[14] http://www.xia.com/DGF_Pixie-16.html.
High performance TDC module with Ethernet interface

V. Zager, A. Krylov
Joint Institute for Nuclear Research, Dubna, Russia.

Fast measurement with high precision is one of the main requirements for automation hardware. There are a number of hardware standards applied at the Flerov Laboratory of Nuclear Reactions: CAMAC, VME, ORTEC, etc. The CAMAC system is the most popular and at the same time the most outdated one. The specialists of the Automation group of the Accelerator Division have developed a high-performance system that exceeds CAMAC controllers in all aspects. At present, unit testing of the ADC and TDC modules with an Ethernet interface is in progress. This paper describes the principle of operation of the TDC module and its algorithms; fundamentally new ideas were also used when designing and writing the software.
Introduction
A facility for testing electronic products was established on the basis of the heavy-ion accelerator MC-400 at FLNR JINR. Tests have been performed in accordance with a method based on international standards. According to the requirements, the following quantities should be measured: the density of the ion beam, the fluence, the homogeneity of the beam on the irradiated product, and the ion energy. Scintillation detectors are used to measure the ion energy when testing electronic devices. These detectors are smaller than the beam cross-section and are mounted on the periphery of the ion transport channel, so as not to shade each other and the field of exposure. The delay time is measured between the far and near detectors located on the ion transport channel. The signal cables from the detectors are of exactly the same length, so the signal delays in both measurement channels are equal. The time-to-digital converter SmartTDC-01 (Fig. 1) was designed to measure this time delay.
General description
The SmartTDC-01 is a universal 2-channel multihit time-to-digital converter. This complete multi-functional and wide-range device is well suited both for industrial applications and for research. The module is based on the TDC-GP1 chip from acam messelectronic GmbH (Germany); it can operate in several measurement modes, which are selected in software.

Fig. 1. SmartTDC-01 unit

The SmartTDC-01 has two measurement channels, "Stop 1" and "Stop 2", with 15-bit resolution. The measuring unit for both channels is started by the sensitive edge of the Start pulse. Every channel can receive four independent stops. The various stop pulses can be measured not only against the start pulse but also against each other. It makes no difference whether the stops arrive on the same or on different channels. All time-difference combinations between the 8 possible results can be calculated. If one compares events which arrive on different channels, it is possible to measure time differences down to zero. When comparing events that arrive on one channel, the double-pulse resolution of that channel limits the precision. Figure 2 illustrates the timings. The double-pulse resolution is typically about 15 ns, i.e., if two stops arrive on the same channel within less than 15 ns, the second stop is ignored since it arrives during the recovery time of the measurement unit.

Key features:
- 2 channels with 250 ps resolution, or 1 channel with 125 ps resolution;
- 4-fold multihit capability per channel; queuing for up to 8-fold multihit;
- resolution on both channels absolutely identical;
- double-pulse resolution approx. 15 ns;
- retriggerability;
- 2 measurement ranges: 3 ns - 7.6 μs, and 60 ns - 200 ms (with predivider, only 1 channel);
- the 8 events on both channels can be measured against one another arbitrarily, with no minimum time difference; negative time differences are possible;
- edge sensitivities of the measurement inputs are adjustable;
- efficient internal 16-bit ALU; the measured result can be calibrated and multiplied by a 24-bit integer;
- Ethernet interface, TCP/IP protocol.
Hardware description:
The TDC module is assembled on a separate board that is compatible with the processor module ACPU-01. The boards are connected by mezzanine technology. The processor module ACPU-01 has the following characteristics.
Ethernet chip - WIZnet W5300:
- supports hardwired TCP/IP protocols: TCP, UDP, ICMP, IPv4, ARP, Ethernet;
- supports 8 independent sockets simultaneously;
- high network performance: up to 80 Mbps (DMA);
- 10BaseT/100BaseTX Ethernet PHY;
- internal 128 Kbytes of memory for data communication (internal TX/RX memory).
Main CPU ATmega128:
- up to 16 MIPS throughput at 16 MHz;
- 128 Kbytes of in-system self-programmable Flash program memory;
- 4 Kbytes EEPROM;
- 4 Kbytes internal SRAM;
- programmable watchdog timer with on-chip oscillator.
There is a USB interface for configuring and debugging the local software.
Theory of operation:
The measurement range from 3 ns to 7.6 μs was chosen for the energy measurement. In this mode, the TDC triggering input "start" is connected to the photomultiplier located closer to the accelerator, and the stop input "stop 1" to the photomultiplier located farther along the ion beam channel.
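For illustration, the conversion of the measured start-stop delay into the ion energy per nucleon can be sketched as follows (the flight base and the delay value below are assumed numbers, not parameters of the actual setup; the real data processing is performed by the software described below):

// Illustrative sketch (not the SmartTDC-01 firmware): converting a measured
// start-stop delay into the ion kinetic energy per nucleon, assuming a
// hypothetical flight base L between the near and far scintillation detectors.
#include <cmath>
#include <cstdio>

int main() {
    const double c  = 299.792458;  // speed of light, mm/ns
    const double u  = 931.494;     // atomic mass unit, MeV
    const double L  = 2000.0;      // assumed flight base, mm
    const double dt = 25.0;        // example delay measured by the TDC, ns

    const double beta  = (L / dt) / c;                 // v/c
    const double gamma = 1.0 / std::sqrt(1.0 - beta * beta);
    const double ePerNucleon = (gamma - 1.0) * u;      // kinetic energy, MeV/nucleon

    std::printf("beta = %.4f, E = %.2f MeV/nucleon\n", beta, ePerNucleon);
    return 0;
}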

Fig. 2. Timings measurement

Software description:
Special software for working with the SmartTDC-01 module was developed in LabVIEW (Fig. 3).


Fig. 3. Software for the ion beam energy measurement

Conclusions
At present, the SmartTDC-01 is in use on the MC-400 accelerator. The beam energy is calculated twice per second, which is sufficient to update the bar chart on the operator screen. The ion energy is measured by the SmartTDC-01 module with an accuracy of about 1% in MeV/nucleon. The measurements of the ion energy were carried out jointly with the Russian Federal Space Agency, and the results were found to be very accurate.

References
[1] acam messelectronic gmbh: TDC-GP1, http://www.acam.de
[2] WIZnet: Innovative Embedded Networking, http://www.wiznet.co.kr/
[3] Atmel AVR ATmega128,
http://www.atmel.com/dyn/resources/prod_documents/doc2467.pdf
[4] Russian Federal Space Agency Roscosmos, http://www.federalspace.ru/



Front End Electronics for TPC MPD/NICA

Yu. Zanevsky, A. Bazhazhin, S. Bazylev, S. Chernenko, G. Cheremukhina,
V. Chepurnov, O. Fateev, S. Razin, V. Slepnev, A. Shutov, S. Vereschagin and
V. Zryuev
Laboratory of High Energy Physics, Joint Institute for Nuclear Research, Dubna, Russia


1. Introduction
A new scientific program on heavy-ion physics launched recently at JINR (Dubna) is devoted to the study of the in-medium properties of hadrons and the equation of state of nuclear matter. The program will be realized at the future accelerator facility NICA. It will provide a luminosity of up to L = 10²⁷ cm⁻²s⁻¹ for Au⁷⁹⁺ over the energy range 4 < √s_NN < 11 GeV. Two interaction points are foreseen at the collider. One of the two detectors is the Multi Purpose Detector (MPD), optimized for the study of the properties of hot and dense matter in heavy-ion collisions [1, 2]. At the design luminosity, the event rate capability of the MPD is about 5 kHz; the total charged-particle multiplicity exceeds 1000 in the most central Au+Au collisions at √s_NN = 11 GeV.

2. Requirements for the TPC readout electronics
The Time-Projection Chamber (TPC) is the main tracking detector of the MPD (Fig. 1). The TPC readout system will be based on Multi-Wire Proportional Chambers (MWPC) with cathode readout pads. The TPC will provide efficient tracking at pseudorapidities up to |η| = 1.2, high momentum resolution for charged particles, good two-track resolution and efficient hadron and lepton identification by dE/dx measurements. In order to meet this performance together with the main TPC features (Table 1), the parameters of the Front End Electronics (FEE) have to satisfy several strict requirements (Table 2).




Fig. 1. Common view of TPC/MPD


Table 1. TPC parameters

Required performance of the TPC:
  Spatial resolution: σ_x ≈ 0.6 mm, σ_y ≈ 1.5 mm, σ_z ≈ 2 mm
  Two-track resolution: < 1 cm
  Momentum resolution: < 3% (0.2 < p_t < 1 GeV/c)
  dE/dx resolution: < 8%

TPC main features:
  Size: 3.4 m (length) × 2.2 m (diameter)
  Drift length: 150 cm
  Data readout: 2×12 sectors (MWPC, cathode pad readout)
  Total number of time samples: 350
  Total number of pads: ~80 000
  Gas gain (Ar + 20% CO2): 10⁴

Data from the 24 readout chambers are collected by the TPC FEE. The FEE has to provide reliable operation, low noise, optimal shaping and sophisticated signal processing, small power consumption, etc. (Table 2).
The electronics has to take several samples for each ionization cluster reaching the pad, so that a fit can then be used to localize the hit. Estimations show that the contribution of the electronics noise to the space resolution is comparable to the chamber resolution when the signal-to-noise ratio (S/N) is about 30:1 for the mean of a MIP (ENC < 1000 electrons).
The dynamic range is determined by the energy loss dE/dx of the produced particles. Taking into account that the maximum ionization of a 200 MeV/c proton is 10 times higher than the ionization of a MIP, that the path length is longer at non-zero dip angle, that the signal-to-noise ratio is ~30, and the Landau fluctuations, the required dynamic range is about 1000. Therefore a 10-bit sampling ADC is required.
The drift velocity, drift length and diffusion of the primary electrons determine the timing constants of the FEE. The average longitudinal diffusion determines the peaking time, and the electronics is best matched to the cluster signal if the shaping time is comparable to the width of this signal (about 160-180 ns FWHM).
The power consumption should not exceed 40 mW per channel in order to keep the temperature of the TPC gas volume stable to within 0.1 °C using an appropriate cooling system.

Table 2. Main parameters of the FEE

  Total number of channels: 80 000
  Signal-to-noise ratio S/N: > 20:1 @ MIP (ENC < 1000 e⁻)
  Dynamic range: 1000 (10-bit sampling ADC)
  Shaping time: ~170 ns
  Sampling: 12 MHz
  Tail cancellation: < 1% (after 1 μs)
  Power consumption: ~40 mW/ch


3. 64-channel Front End Boards (FEB-64)
A version of the FEE based on the two chips PASA (analogue) and ALTRO (digital) has been chosen [3]. The FEB-64 card, containing 64 channels, is the most flexible solution for our case (Fig. 2).

Fig. 2. Block scheme of the 64-channel FEE for the TPC/MPD

A single readout channel consists of three basic units: a charge-sensitive amplifier/shaper; a 10-bit low-power sampling ADC; and a digital circuit that contains a shortening digital filter for the tail cancellation, the baseline subtraction, zero-suppression circuits and a multi-event buffer. The charge induced on the pad is amplified by the PASA. It is based on a charge-sensitive amplifier followed by a semi-gaussian pulse shaper of fourth order. It produces a pulse with a rise time of 120 ns and a shaping time of about 190 ns (i.e. near the optimal value). The ENC of the PASA is less than 1000 e⁻.
The output of the differential amplifier/shaper chip is fed to the input of the ALTRO. The ALTRO contains 16 channels which digitize and process the input signals. After digitization, a baseline correction unit removes systematic perturbations of the baseline by subtracting a pattern stored in a memory. The tail cancellation filter removes the long complex tail of the detector signal, thus narrowing the clusters (by up to 2 times) to improve identification. It can also remove the undershoot that typically distorts the amplitude of the clusters when pile-up occurs. A second correction of the baseline is performed, based on the moving average of the samples that fall within a certain acceptance window. This procedure removes non-systematic perturbations of the baseline. The zero-suppression scheme removes all data below a certain threshold.
After the digital processing, the data flow into a buffer memory several events deep in order to prevent data loss due to the DAQ dead time.
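A much simplified illustration of this digital chain (fixed-pattern baseline subtraction followed by zero suppression) is given below; the threshold and the sample values are invented for the example and do not reproduce the actual ALTRO logic:

// Simplified illustration of the digital chain described above:
// fixed-pattern baseline subtraction followed by zero suppression.
// The threshold and the sample values are invented for this example.
#include <cstdio>
#include <vector>

int main() {
    std::vector<int> samples  = {12, 11, 13, 40, 85, 52, 20, 12, 11, 12};
    std::vector<int> baseline = {11, 11, 12, 12, 12, 12, 12, 11, 11, 11}; // stored pattern
    const int threshold = 5;  // assumed zero-suppression threshold, ADC counts

    for (std::size_t i = 0; i < samples.size(); ++i) {
        const int corrected = samples[i] - baseline[i];
        if (corrected > threshold)                      // keep only significant samples
            std::printf("time bin %zu: %d\n", i, corrected);
    }
    return 0;
}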
(Fig. 2 content: ~80 000 pads → PASA (gain 12 mV/fC, shaping FWHM = 190 ns) → 10-bit ADC → digital circuit (baseline correction, tail cancellation, zero suppression) → multi-event memory → readout controller; 4 chips × 16 channels/chip for PASA and ALTRO on each board; FPGA control; L1 trigger rate 5 kHz.)

The designed FEB-64 contains chips on both sides of the PCB. The control functions are performed by an FPGA. Several important quantities will also be read out, namely the supply voltage, the current and the temperature. Each card will be put into a Cu-plated envelope with tubes for cooling water. The maximum data readout rate of the FEB-64 is > 200 MB/s.
The large granularity of the TPC (~80 000 pads × 350 time bins) leads to an event size of ~0.5 MB after zero suppression. At a trigger rate of ~5 kHz the maximum data flow is ~2.5 GB/s for the whole TPC, which will be further compressed by several times.
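These figures can be cross-checked with a back-of-the-envelope estimate; the zero-suppression occupancy factor below is an assumption chosen so that the ~0.5 MB event size quoted above is reproduced:

// Back-of-the-envelope check of the TPC event size and data flow quoted above.
// The occupancy remaining after zero suppression is an assumed number.
#include <cstdio>

int main() {
    const double pads           = 80000.0;
    const double timeSamples    = 350.0;
    const double bytesPerSample = 10.0 / 8.0;  // 10-bit ADC samples
    const double occupancy      = 0.015;       // assumed fraction of samples kept

    const double rawMB        = pads * timeSamples * bytesPerSample / 1.0e6; // ~35 MB
    const double suppressedMB = rawMB * occupancy;                           // ~0.5 MB
    const double flowGBs      = suppressedMB * 5000.0 / 1000.0;              // at 5 kHz

    std::printf("raw %.1f MB, zero-suppressed %.2f MB, data flow %.2f GB/s\n",
                rawMB, suppressedMB, flowGBs);
    return 0;
}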

Conclusion
Prototyping of the FEE based on the PASA and ALTRO chips has been started. The first test of the 64-channel card (FEB-64 prototype) is expected to be performed at the start of 2012.

References
[1] The Multipurpose Detector (MPD), Conceptual Design Report, JINR, Dubna, 2009.
[2] Kh.U. Abraamyan et al. The MPD detector at the NICA heavy-ion collider at JINR. NIM A 628, 2011, pp. 99-102.
[3] L. Musa. The ALICE Time Projection Chamber for the ALICE Experiment. 16th International Conference on Ultrarelativistic Nucleus-Nucleus Collisions: Quark Matter 2002, Nantes, France, July 2002. Nuclear Physics A 715, 2003, pp. 843-848.

Mathematical model for the coherent scattering of a particle beam
on a partially ordered structure

V.B. Zlokazov
Joint Institute for Nuclear Research, Dubna, Russia

Neutron diffraction is the most promising means for the investigation of solid materials with a partially ordered structure, such as, e.g., lipid membranes, but the problems which the physicist faces while analyzing the data of such diffraction are mathematically among the most ill-posed ones. The report describes an approach to the regularization of such problems which guarantees obtaining a stable solution and estimating the accuracy of these solutions.

Introduction

Let us consider a scheme of a part of a multi-layer lipid membrane.

Fig. 1. A multi-layer structure

Fig. 1 depicts a scheme of a 3-bilayer membrane; the circle denotes the hydrophilic component of the bilayer, and the vector the hydrophobic one.
It is assumed that the small-angle diffraction on this structure is defined essentially only by the periodicity along the z-axis (perpendicular to the membrane plane), and its unit cell will be one-dimensional. The symmetries of this cell can be considered as one-dimensional cubic. If we set the "zero" at a point C, the cell will be centrosymmetrical and consist of 3 scattering items: two negative ones at the edges and a positive one in the middle.
The theoretical structure factor for a centrosymmetrical cell looks as follows:

    F(h) = Σ_{j=1..n} ρ(z_j)·cos(h·z_j),    (1)

here ρ(z_j) is the measure of the scatterer density, proportional to the coherent scattering length and to the occupation of the position in the cell.
It is a characteristic of the quality and quantity of the coherent scattering of a particle beam on a lipid membrane, and is defined in the reciprocal space as a function of the Miller indices h.
The latter make the nodes d(h) of the coordinate net of the reciprocal space - the d-spacings. The nodes d(h) are determined by the formula d(h) = 1/√(a*²h²), where a* is the reciprocal-space parameter and h runs over all integer values.
In our case d(h) = 1/(Dh), since D is the mean of the periodicity of the lipid membrane, and it is a direct parameter of the unit cell.

The expression (1) describes the interference part of the particle beam scattered on the lipid membrane for an index h and in an ideal case would look like a narrow peak (a delta-function) at the node d(h). However, the crystal structure of the membrane is not ideally periodic, and the particle beam is not ideally collimated; therefore, a more realistic model of the structure factor than (1) is the following expression:

    F(d, d(h)) = F(h) = Σ_{i=1..n} ∫ ρ(z, z_i)·cos(h·z)·dz,    (2)

where n is the number of scattering elements with the centers z_i.
It has the following meaning: the operator (2) (the Fourier one) maps a set of items ρ(z, z_i) of the direct space of the crystal onto an element F(d, d(h)) (or F(h)) of the reciprocal space for a given Miller index h.
It describes the results of the interferential scattering as a sum of several continuous functions of a continuous variable d, which have a peak-like appearance with a certain width near the nodes d(h).

Fig. 2 illustrates a typical graph of a real diffraction spectrum of a neutron beam scattered on such a structure.

Fig. 2. An experimental distribution F(d, d(h))

This distribution has an additive (background) component, and its interferential part is essentially broadened.
The standard problem of the crystallographic analysis of the data of particle scattering on such structures is the determination of the unit cell parameter (in our case D) and of the scatterer (further, atomic) coordinates (in our case the density distribution ρ(x)).
Before going on to the analysis of concrete models (2), let us see how the Fourier operator works for the most typical functions ρ(x).

Fourier transformation of the scattering density
Let us consider the different types of the scattering density.

(i) ρ(x) = Σ_{i=1..n} A_i·δ(x − C_i).
Then from (2) it follows that

    F(h) = Σ_{j=1..n} A_j·exp(−i·C_j·h).

This case corresponds to the point atomic scatterer, when all the scattering mass is concentrated in the atomic centers. F(h) is the classical one-dimensional structure factor for the Miller indices h.
In the case of the central symmetry (ρ(x) = ρ(−x)) we have

    F(h) = Σ_{i=1..n} 2·A_i·cos(C_i·h).

The Fourier operator F transforms the delta-functions of the coordinate space into the delta-functions of the frequency space - they are the imaginary exponentials:

    F: δ(x − x_0) → exp(−i·x_0·h).    (3)

In the case of the central symmetry they are the usual cos(x_0·h) functions. The analogy follows from the fact that the conditioned integrals along the whole axis (x and h) exist for both delta-functions and are the characteristic functions of the point x_0.

(ii) The point model is, as a rule, not realistic, and then we have some continuous distribution of the scattering mass. The simplest kind of such a scatterer is the uniform distribution

    ρ(x) = A, if |x − c| ≤ a; 0 otherwise.

On the basis of (2) we get F(h) = (2A·sin(ah)/h)·exp(ich). The function F(h) is a continuous distribution.

(iii) Next, ρ(x) = A·exp(−a|x − c|); in this case F(h) = (2Aa/(h² + a²))·exp(ich). It is another example of a non-point-like scatterer.

(iv) And, finally,

    ρ(x) = A·exp(−((x − c)/w)²).

In this case F(h) = A·w·√π·exp(−(hw)²/4)·exp(ich); it is an instance of a non-point-like scatterer which, however, is concentrated around the center and diminishes fast as the distance from it grows - in a certain sense it is an intermediate case between (i) and (ii).
While analyzing the data of the coherent scattering of a beam from lipid membranes, we normally use the models of a continuous distribution of the scatterers - (ii) or (iv).
The algorithm for the problem solution
A mathematically correct method was described in [3], but it can be used only in a particular case.
In the general case, starting from the papers [1], [2] and similar ones, the method for solving the problem of the scatterer density parameter determination consists in the following. The background is subtracted from the experimental diffraction spectrum; the residue is the sum of intensities which, after different corrections, represent the squared moduli of the structure factors F(h). With their help an expression is built:

    G(x) = Σ_{h=0..n} |F(h)|·cos(2π·h·x),    (3)

where x = −D/2 + iD/m belongs to the interval [−D/2, D/2], m is an arbitrary number, and i = 0, 1, ..., m/2.
Here the chances that (3) will be a correct estimate of the density ρ(x) rest on the fact that it looks similar to an inverse Fourier transform.
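A minimal sketch of the synthesis (3) is given below; the |F(h)| values are invented, and x is taken as the fractional coordinate across the unit cell, which is an assumption about the normalization:

// Minimal sketch of the Fourier synthesis (3). The |F(h)| values are invented;
// x is taken as the fractional coordinate across the unit cell (an assumption).
#include <cmath>
#include <cstdio>
#include <vector>

int main() {
    const double PI = 3.14159265358979323846;
    const std::vector<double> F = {1.0, 0.8, 0.3, 0.1, 0.05}; // example |F(h)|, h = 0..4
    const int m = 100;                                         // grid points across the cell

    for (int i = 0; i <= m; ++i) {
        const double x = -0.5 + static_cast<double>(i) / m;    // fractional coordinate
        double G = 0.0;
        for (std::size_t h = 0; h < F.size(); ++h)
            G += F[h] * std::cos(2.0 * PI * static_cast<double>(h) * x);
        std::printf("%7.3f %10.4f\n", x, G);
    }
    return 0;
}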
Next, G(x) is fitted by a sum of functions modelling ρ(x) and depending on the sought parameters. Normally Gaussians are taken as such functions, each of which depends on 3 parameters: the amplitude A, the center C and the halfwidth W.
At best the method is rough and inexact; as a rule it is simply erroneous. The approach itself is mathematically deficient - using a sample of n numbers (the F(h)) an attempt is made to build estimates of 3n parameters A_i, C_i, W_i.
G(x) will be a somewhat reliable estimate of the density ρ(z) only in one case - when this density is a delta-function, i.e., in an ideal rather than a real one. The practical use of the method abounds in mathematical curiosities: non-unique and unstable solutions, immense errors of the parameters, etc.
Apparently, a mathematically correct method of solving the problem would be some adaptation of the Rietveld method to the given case.

Models of the structure factor
Let us consider the function ρ(z) from the case (ii). Let a diffraction spectrum s(d) be registered for some range H - a set of Miller indices h - as a sum of Fourier transformants of the type (ii):

    F(d, h) = Σ_{h=0..n} 2·A_h·sin(a_h·d)/d · exp(i·c_h·d).

We do not know the "natural" centers in s(d); therefore we make these centers fitted parameters, so that we can rewrite the parametrical representation as follows:

    F(d, h) = Σ_{h=0..n} A_h·sin((d − U_h)/W_h)/(d − U_h) · exp(i·(d − U_h)/C_h),

where the parameter C_h is the center of the scatterer density function, W_h and A_h are its halfwidth and amplitude, and U_h is the center (reflection order) of the interval which corresponds to the Miller index h.
We subtract the background from the spectrum s(d), make corrections for the Lorentz factor, for absorption, etc., and extract the square root of the residue, so that further on we will mean by the spectrum s(d) just this square root. The variance (in the Poisson case) is then a
constant, equal to 0.25; accordingly, the error at each channel d is 0.5, and with account of the
square root of s(d) it will be equal to 1.
If the intervals of the Miller indices for the different density components, characterized by the quantities C_h, do not overlap, we can analyze the scattering intensities in these intervals, with the centers at U_h, separately.
The estimates of the parameters A_h, C_h, W_h, U_h for each interval can be obtained with the help of the least-squares estimator (LSE), i.e., by the minimization, with respect to the parameters, of the expression

    Σ_{d=1..m} (s(d) − |F(d, A_h, C_h, W_h, U_h)|)².    (4)

The function |F(d, h)| in the space of the parameters is differentiable everywhere except for the points where it is equal to zero. Accordingly, care should be taken that such a situation does not occur. Besides, the minimization of (4) should be made under the restrictions |A_h| > 0, W_h > 0 and U_h > 0 (to avoid the degeneration of the LSE matrix).
Crystallographic restrictions should be imposed on the parameters C_h. In the reciprocal space of the one-dimensional membrane structure, the reflection planes are families of points with the coordinates C_h + i/D, where h are the Miller indices and {i} are all integer numbers. But C_1 = 1/D and C_h = 1/(Dh) (the formula of the interplanar distance for the one-dimensional cubic symmetry). From this we get the restrictions on the parameters C_h: C_h = C_1/h, h = 1, 2, 3, ...
Thus, we have an interval of the Miller indices h ∈ H with the center U_h, where the spectrum s(d) was measured. For simplicity, we can assume that in each interval the quantities d vary between 1 and some m; the difference between such quantities from different intervals is accounted for by the correction D_h, i.e., d of the h-th interval is shifted to the 1st interval by D_h; in other words, its center is U_h = U_1 + D_h.
This follows from the fact that if we make in (1) the replacement h → h + D, this will multiply the integral in (1) by exp(ihD), and so shift the parameter U: U → U + D.
Let us write the function F(d, h) for the LSE in the following form:

    F(d, h) = A·sin(q·(d − U)/W)/(d − U) · exp(q·(d − U)/C/2),

here q = i·2π/(m + 1).
This is the most general case, corresponding to the non-centrosymmetrical scatterer structure. To make the consideration complete, let us take also the centrosymmetrical case. Let the graph of the function ρ(x) look as follows (Fig. 3).

Fig. 3. The centrosymmetrical case

It is the case (ii). We have

    F(h) = A·∫_{−(a+w)}^{−a} exp(−ihx)dx + A·∫_{a}^{a+w} exp(−ihx)dx =
         = −A/(ih)·(exp(iha) − exp(ih(a+w)) + exp(−ih(a+w)) − exp(−iha)).

Let us make 4 groups of exponentials in the following way:

    exp(ih(a+w/2))·(exp(−ihw/2) − exp(ihw/2)) + exp(−ih(a+w/2))·(exp(−ihw/2) − exp(ihw/2)).

Making use of the formulae 2i·sin(t) = exp(it) − exp(−it) and 2·cos(t) = exp(−it) + exp(it) and of the introduced parametrization, we finally get

    F(h) = 2A·sin(q·(h − U)·W/2)/(h − U) · cos(q·h·C/2),

here q turns into q = 2π/(2m + 1).
Similarly, for the case (iv) we have (the non-centrosymmetrical variant)

    F(h) = A·exp(−((h − U)/W)²)·exp(i·(h − U)/C),

where the parameters A, C, W, U have the same meaning as in the case (ii).
For the fitting in the centrosymmetrical case we write this expression as follows:

    F(h) = 2A·exp(−((h − U)/W)²)·cos(q·(h − U)/C/2),

and q = 2π/(2m + 1).
If the fitting was successful, the expressions A_h·sin((x − C_h)/W_h)/(x − C_h) and A·exp(−((x − C_h)/W_h)²) will be the densities of the scattering masses sought for.
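As an illustration of the least-squares step (4) for the centrosymmetrical sinc-type model above, the objective function can be sketched as follows (the parameter values in main() are invented; a real fit would minimize the objective over A, C, W, U, and this is not the author's actual code):

// Sketch of the least-squares objective (4) for one reflection interval, using
// the centrosymmetrical sinc-type model derived above.
#include <cmath>
#include <cstdio>
#include <vector>

double modelF(double d, double A, double C, double W, double U, double q) {
    const double t = d - U;
    // sin(q*t*W/2)/t with its limit q*W/2 at t -> 0
    const double sinc = (std::fabs(t) < 1e-9) ? q * W / 2.0 : std::sin(q * t * W / 2.0) / t;
    return 2.0 * A * sinc * std::cos(q * d * C / 2.0);
}

double objective(const std::vector<double>& s,
                 double A, double C, double W, double U, double q) {
    double sum = 0.0;
    for (std::size_t d = 1; d < s.size(); ++d) {
        const double r = s[d] - std::fabs(modelF(static_cast<double>(d), A, C, W, U, q));
        sum += r * r;
    }
    return sum;
}

int main() {
    std::vector<double> s(200, 0.1);                      // placeholder "spectrum" (sqrt of intensities)
    const double PI = 3.14159265358979323846;
    const double q  = 2.0 * PI / (2.0 * s.size() + 1.0);  // q = 2*pi/(2m+1) as in the text
    std::printf("objective = %.3f\n", objective(s, 1.0, 30.0, 5.0, 100.0, q));
    return 0;
}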
In Figs. 4 and 5, graphs of a diffraction spectrum with 5 peaks and the results of its fitting by the models (ii) and (iv) are given.
Comparing the results of the peak analysis by the different models, we can conclude that the hypothesis that the scatterer density is described by the model (ii) is more plausible than its alternative. Meanwhile, the standard method (fitting the cosine sum) uses the Gaussian model, which is obviously inadequate to the data, and this increases the inaccuracy and unreliability of its estimates.



Fig. 4. The diffraction spectrum of a multi-layer lipid membrane with 5 peaks and its fitting by the Gaussian (iv) model of the scatterer density. The dotted line is the initial spectrum, and the thick line is its fitting.

Fig. 5. The same spectrum and its fitting by the (ii) density model. The χ² per degree of freedom is significantly greater than in the former case.

Conclusion
The described approach represents a regularization of an ill-conditioned problem, the solution of which will be stable with respect to both the statistical errors and small variations of the initial data. Within its framework, the anecdotal situation in which one tries to estimate 15 parameters on the basis of 5 numbers becomes impossible.

References

[1] G. Zaccai, J.K. Blasie and B.P. Schoenborn. Proc. Natl. Acad. Sci. USA, 72, 1975, pp. 376-380.
[2] G. Zaccai, G. Büldt, A. Seelig and J. Seelig. J. Mol. Biol. 134, 1979, pp. 693-706.
[3] V.I. Gordeliy and N.I. Chernov. Acta Cryst. D53, 1997, pp. 377-384.

The distributed subsystem to control parameters of the Nuclotron extracted beam

E.V. Gorbachev, N.I. Lebedev, N.V. Pilyar, S.V. Romanov, T.V. Rukoyatkina, V.I. Volkov

JINR, Dubna, Russia

The distributed control subsystem is intended to solve 3 tasks: to measure, and to view remotely, the results of the measurements of the extracted beam spatial characteristics at 4 points of the transportation channel by means of proportional chambers; to measure the beam intensity using an ionization chamber; and to control the gains of the proportional chambers by controlling their HV power supplies.
The equipment for the preliminary registration of the signals from the detectors is located on the beam transportation channel. The server computer and the data acquisition modules are placed in the accelerator control room, at ~400 m from the detectors. The results are shown on the users' computers, one of which is in the central accelerator control room.
The subsystem has been built on the basis of the multifunctional modules NI USB-6259 BNC, NI PCI-6703 and SCB-68 produced by National Instruments, HV power supplies N1130-4 produced by WENZEL, and a DDPCA-300 I/U converter with trans-impedance from 10⁴ to 10¹³ V/A.
The client-server software was written in LabVIEW by National Instruments for Windows. The subsystem was tested with beam in the March 2011 run of the Nuclotron.

The problem definition and the basics of its solution

The project to construct the new experimental complex NICA/MPD on the basis of the modernized Nuclotron-M is now under active development at JINR. While the Nuclotron equipment is being modernized, there is a task to develop and test up-to-date data acquisition and control systems during accelerator runs without damaging the functionality of the operating subsystems. The usual consumer properties must be preserved while improving the graphical user interface, operation speed and stability.
The fundamental principles of the control system organization are as follows:
- the client-server distributed model of data interchange,
- the use of TCP transport protocol sockets,
- the stability of the server/equipment interoperation,
- the stability of client operation during a long accelerator run, which is achieved by reconnecting every socket connection.
The wide use of the electronics and software produced by National Instruments is quite satisfactory for solving the above tasks. The software is based on the .NET Framework, the NI-DAQmx universal driver set and powerful graphics. The injection subsystem [1] was built on the same principles, and the experience of using it, obtained in the December and March 2011 runs of the Nuclotron, has confirmed and justified the correctness of the chosen approach.

The subsystem structure

The extracted beam subsystem is intended to solve 3 tasks (Fig. 1):
- to measure the extracted beam spatial characteristics at 4 points of the transportation channel by means of proportional chambers (NI USB-6259);
- to control the gains of the proportional chambers by means of their HV power supply control:
  o voltage management - NI PCI-6703, SCB-68;
  o power supplies - WENZEL N1130-4 with 4 HV channels of 0.1-6 kV;
  o HV voltage measurement - NI USB-6259;
- to measure and sum up the beam intensity using the ionization chamber and the DDPCA-300 I/U converter (with control of its gain and with output voltage-to-frequency conversion to transfer the data to the NI USB-6259 integrating counter).
The subsystem makes it possible to control and measure the HV power supplies and to control the beam intensity parameters either locally at the Server or remotely at a Client. Only one Client may control the HV. The write-access permission to the Server computer must be fixed for this IP address only. That is why we have additionally used the specialized server DSS (DataSocket Server) from the LabVIEW base package. It is started on the server computer and, besides fixing the server access permission, realizes the low-level DSTP transport protocol operations, which simplifies the task of organizing simultaneous interchange with 9 socket connections in one cycle of the Server program. The publishing and subscribing of the data located in the DSS URL-named memory are performed via a simple low-level interface of the DataSocket Read and Write functions. These functions handle the low-level TCP/IP programming for us. Owing to the DSS, the Clients become light Clients; their executables contain only the NI Runtime Engine and the NI DataSocket API for the DSS. The Clients have time to represent a new portion of data with a measurement time stamp and to selectively store and view the data according to the operator's wishes. In addition, we have used an independent XControl for each of the 3 above tasks [2, 3]. This results in an additional increase in Client speed, since every XControl has its own event processing structure.


Fig.1 The Client-Server subsystem structure

The Server program structure

The Server program intends to control, measure and send the results to the Clients. It takes the
session-specific settings - socket URL-names, chambers and module channels parameters from
the XML-file, and starts to operate. After that it opens 9 socket connections on DSS-server.
Every socket connection uses its own unique URL-name to address to its data memory field at
the DSS. Then Server signals that it is in operation in the network. Then it sends to all Clients the
Server Remote-enable signal flag and HV values set on power supplies in the previous control
session.
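The start-up sequence of the Server can be sketched as follows. The XML tag names, the URL names and the publish helper are assumptions made for illustration; the real settings file and the DataSocket calls of the LabVIEW Server differ.

```python
import xml.etree.ElementTree as ET

def load_session_settings(path: str) -> dict:
    """Read session-specific settings (socket URL names, chamber and module
    channel parameters, previous HV values) from an XML file; tag names assumed."""
    root = ET.parse(path).getroot()
    return {
        "urls": [u.text for u in root.findall("./sockets/url")],   # 9 URL names expected
        "channels": {c.get("name"): c.get("device")
                     for c in root.findall("./channels/channel")},
        "previous_hv": [float(v.text) for v in root.findall("./hv/value")],
    }

def start_server(settings: dict, publish) -> None:
    """Open one connection per URL name and announce the initial state:
    the Remote-enable flag and the HV values of the previous session."""
    for url in settings["urls"]:
        publish(url, None)                        # create/claim the URL-named item
    publish("dstp://server/remote_enable", True)
    publish("dstp://server/hv_previous", settings["previous_hv"])

# usage with the toy DSS from the previous sketch:
# settings = load_session_settings("extracted_beam.xml")
# start_server(settings, dss.write)
```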


The application consists of 2 continuously running While cycles and a Timed Loop cycle. In the
first While cycle, the beam spatial characteristics are permanently measured by means of the
proportional chambers (MWD profile-meters) and transferred to the Clients, together with the
measurements of the supply voltages of the profile-meters, using the NI USB-6259 module. The
second cycle reads out the remote-control signal recordings and on their basis answers the
question of who is in control: the Server or the controlling Client. If there is a controlling
Client in the network, that Client controls; if not, the Server does. The Timed Loop cycle is
intended to start the beam intensity measurement at a precisely defined moment of time: a
32-bit counter of the NI USB-6259 module counts the number of pulses of the voltage-to-frequency
converter between the signals marking the beginning and the end of the slow-extraction plateau.
When the count is over, the counter value is read out, summed up and sent to the Clients together
with the other parameters of the beam intensity measurements and calculations. If a remote reset
command arrives, this cycle resets the summed intensity to 0.
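The logic of the Timed Loop can be written down as a small gated-counting sketch. It only mirrors the behaviour described above (count U/F pulses between the start and end of the slow-extraction plateau, accumulate the result, reset on command); the calibration factor converting counts to particles is an assumption.

```python
class IntensityAccumulator:
    """Accumulates the beam intensity measured once per slow-extraction
    plateau by a gated 32-bit counter (NI USB-6259 in the real subsystem)."""
    def __init__(self, counts_to_particles: float) -> None:
        # counts_to_particles is an assumed calibration factor combining the
        # DDPCA-300 gain and the voltage-to-frequency conversion coefficient
        self.k = counts_to_particles
        self.total = 0.0

    def on_plateau_end(self, counter_value: int) -> float:
        """Called when the gate closes: convert this spill's counter value
        to particles, add it to the running sum and return the sum."""
        self.total += counter_value * self.k
        return self.total

    def on_remote_reset(self) -> None:
        """A remote reset command zeroes the summed intensity."""
        self.total = 0.0

# usage: acc = IntensityAccumulator(counts_to_particles=1.0e6)  # assumed factor
#        acc.on_plateau_end(counter_value=1584)                 # one spill
```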

Fig.2 The Client-Server subsystem block diagram

The Client program structure

The Client application is intended for on-line and off-line processing and browsing of the
remote measurement results, and for control. If the Remote flag enabled by the Server grants
this Client permission, the Client application may control.
The Client application consists of two program loops: an Event structure and a continuously
running While cycle. The Event structure handles, every 300 ms, the events from all the buttons,
sliders and drop-down lists of the application. Connecting to the DSS over the transport
protocol, the working cycle obtains 9 client connections; all 9 connections are opened
simultaneously (TimeOut = 20 s). Inside the working cycle the data are treated sequentially in
3 stages (a, b, c):
a. Reconstruction of each connection, if it has been lost, performed after first closing the
lost connection; if the connection is normal, the new data are read out from the socket.
b. The choice of either a new data array from the profile-meters (Work regime) or a previously
stored one (FileOpen regime) for calculation and display, or its saving to a file
(FileSave/FileSave as regimes).
c. Processing and display of the data array in the Work and FileOpen regimes.
First, for the 32 measurements the program calculates 32 beam profile shots by the
least-squares method, taking into account the chosen distance between the wires in each
profile-meter. Then the averaged integral beam profile is calculated for each profile-meter.
After that, for each chosen profile-meter, the 32 beam profile shots and the integral beam
profile are displayed in the Profiles screen tab on the X and Y axes. The Integral profiles
tab shows the integral profiles of all 4 profile-meters simultaneously.
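The profile processing of stage (c) can be approximated by the sketch below: a least-squares Gaussian fit of every shot on the wire coordinates and an averaged integral profile per profile-meter. The Gaussian model and the use of SciPy's curve_fit are assumptions; the subsystem's LabVIEW code may use a different least-squares formulation.

```python
import numpy as np
from scipy.optimize import curve_fit

def gaussian(x, amplitude, center, sigma):
    return amplitude * np.exp(-0.5 * ((x - center) / sigma) ** 2)

def process_shots(shots: np.ndarray, wire_pitch_mm: float):
    """shots: (32, n_wires) array of one profile-meter's read-outs.
    Returns the fitted (center, sigma) of each shot and the averaged
    integral profile, taking the wire spacing into account."""
    n_shots, n_wires = shots.shape
    x = np.arange(n_wires) * wire_pitch_mm            # wire coordinates in mm
    fits = []
    for shot in shots:
        p0 = (shot.max(), x[np.argmax(shot)], 5.0)    # rough initial guess
        try:
            popt, _ = curve_fit(gaussian, x, shot, p0=p0)
            fits.append((popt[1], abs(popt[2])))      # (center, sigma) per shot
        except RuntimeError:
            fits.append((np.nan, np.nan))             # fit did not converge
    integral_profile = shots.mean(axis=0)             # averaged over the 32 shots
    return fits, integral_profile

# usage: fits, integral = process_shots(np.random.rand(32, 30), wire_pitch_mm=1.25)
```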


Fig.3 The beam tests of the Client-Server subsystem to control the extracted beam parameters
(21.03.2011, 16:55). The Client window tabs: a) Integral profiles; b) Profiles;
c) Intensity; d) HV supply control

Every 400 ms the working cycle (stage a) polls the 9 socket connections (TimeOut = 50 ms). The
number of the current Client cycle informs the Server that the Client is present in the network;
in its turn, the Client monitors the updating of the Server working cycle. When the Server is
not accessible in the network, only off-line browsing of the saved data is available to the
Client; the Client works in the on-line regime when the Server is available. At the moment when
the NI USB-6259 module (TimeOut = 1000 ms) has completed the measurements, the Client reads out
from the socket a two-dimensional data array of the profile-meters with its time stamp, saves
it, shows everything on the screen tabs and signals that the data are ready. Then the actions of
the Client application are determined by the control over the beam intensity parameters and the
high-voltage supplies (as sketched below):
If the Client is allowed to control remotely, the parameters chosen in this Client are set as
the Server parameters, recorded in the modules and sent to the other Clients.
If not, the variable values are set by the Server and sent to all the Clients.
When the Server completes the beam intensity measurements, the Clients read out the beam
intensity value together with the parameters of the intensity measurements and calculations.
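The decision of whose parameters are applied reduces to a few lines. The field names in the parameter dictionaries are assumptions; the sketch only restates the rule above: the controlling Client's settings are taken when remote control is allowed, otherwise the Server's.

```python
from typing import Optional

def select_control_parameters(remote_allowed: bool,
                              client_params: Optional[dict],
                              server_params: dict) -> dict:
    """Return the parameter set to be written to the modules and broadcast to
    all Clients: the controlling Client's values when remote control is allowed
    and a controlling Client is present, otherwise the Server's values."""
    if remote_allowed and client_params is not None:
        return client_params
    return server_params

# usage (hypothetical parameter sets):
# params = select_control_parameters(True,
#                                    {"hv_kv": [2.0, 2.0, 2.0, 2.0], "gain": 1e8},
#                                    {"hv_kv": [1.8, 1.8, 1.8, 1.8], "gain": 1e8})
```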

The distributed subsystem beam tests

In the March 2011 run the subsystem was tested on the extracted beam (Fig. 2) with 3
profile-meters and two Clients. Fig. 3 shows the screen tabs of the Client window. Fig. 3a
illustrates the integral profiles of the 3 profile-meters PIK-1, PIK-2 and PIK-3. Fig. 3b gives
the 32 measurements of the shot profiles in PIK-2 and their integral profile. Each profile-meter
is assigned its own colour, which is used to present its data on both screen tabs. Fig. 3c shows
the Intensity screen tab. Remote control is forbidden in this case (the lamp is not lit), so the
measurements are performed with the values set by the Server. The amplification coefficient of
the DDPCA-300 equals 1E+8, and 1584 measurement cycles were performed. Since the screen tab with
the integral profile values must be watched permanently, the intensity values are also shown
below all the screen tabs. Fig. 3d, the HV supply control screen tab, shows that the control
sliders are not accessible to the user because remote control is forbidden (the lamp is not lit).

Conclusion

The distributed subsystem for controlling the parameters of the Nuclotron extracted beam was
built with client-server technology in LabVIEW by National Instruments. The subsystem test on
the beam in the March 2011 run of the Nuclotron confirmed the correct choice of hardware and
software and demonstrated stable interaction of the Server with several Clients over 24 hours.
The successfully tested combination of the DSS server and XControls is planned to be used in the
Nuclotron injection subsystem, developed by us earlier and already operated in 2 runs of the
accelerator.

Literature

1. E.V. Gorbachev et al. Dubna: JINR, 2010, pp. 144-149.
2. http://www.ni.com
3. http://zone.ni.com/devzone/
INDEX of REPORTERS

Akishina Valentina JINR 10
Antchev Gueorgui INRNE-BAS, Switzerland Gueorgui.Antchev@cern.ch 29
Atkin Eduard MEPhI, Moscow, Russia atkin@eldep.mephi.ru 77
Balabanski Dimiter INRNE-BAS, Bulgaria balabanski@inrne.bas.bg 42
Barberis Dario University of Genova, Italy Dario.Barberis@cern.ch 52
Bazarov Rustam IMIT, AS Uzbekistan rustam.bazarov@gmail.com 64
Belov Sergey JINR belov@jinr.ru 68,74
Bezbakh Andrey JINR Delphin.silence@gmail.com 242
Chernenko Sergey JINR chernenko@jinr.ru 296
Dannheim Dominik CERN dominik.dannheim@cern.ch 100
Derenovskaya Olga JINR odenisova@jinr.ru 107
Dimitrov Lubomir INRNE-BAS, Bulgaria ludim@inrne.bas.bg 112,191
Dimitrov Vladimir University of Sofia, Bulgaria cht@fmi.uni-sofia.bg 115
Dolbilov Andrey JINR dolbilov@jinr.ru 20
Elizbarashvili Archil SU, Tbilisi, Georgia archil.elizbarashvili@tsu.ge 122
Farcas Felix NIRDIMT, Romania felix@itim-cj.ro 259
Filozova Irina JINR fia@jinr.ru 132
Garelli Nicoletta CERN nicoletta.garelli@cern.ch 138
Golunov Alexander JINR agolunov@mail.ru 154
Gorbunov Ilia JINR ingorbunov@gmail.com 145
Gorbunov Nikolay JINR gorbunov@jinr.ru 154
Grebenyuk Victor JINR greben@jinr.ru 60,86
Isadov Victor JINR brahman63@mail.ru 16
Ismayilov Ali IP, Azerbaijan alismayilov@gmail.com 8
Ivanoaica Teodor NIPNE (IFIN-HH), Romania iteodor@nipne.ro 163
Ivanov Victor JINR ivanov@jinr.ru 20
Kalinin Anatoly JINR kalinin@nusun.jinr.ru 90
Kirilov Andrey JINR akirilov@nf.jinr.ru 169,174,236
Korenkov Vladimir JINR korenkov@cv.jinr.ru 20,68,145,148,154
Kouba Tomas IP, AS Czech Republic koubat@fzu.cz 94
Kreuzer Peter RWTH Aachen, Germany/CERN Peter.Kreuzer@cern.ch 179
Lebedev Nikolay JINR nilebedev@gmail.com 158
Lyublev Y. ITEP, Russia lublev@itep.ru 186
Mitev Georgi INRNE-BAS, Bulgaria gmmitev@gmail.com 191,264
Mitev Mityo TU, Sofia, Bulgaria mitev@ecad.tu-sofia.bg 264
Murashkevich Svetlana JINR svetlana@nf.jinr.ru 174
Nikiforov Alexander JINR, MSU nikif@inbox.ru 60
Osipov Dmitry MEPhI, Moscow, Russia DLOsipov@MEPHI.RU 77
Petrova Petia ISER-BAS, Bulgaria Petia.Petrova@cern.ch 196
Polyakov Aleksandr JINR polyakov@sungns.jinr.ru 281,286
Prmantayeva Bekzat ENU, Kazakhstan Prmantayeva_BA@enu.kz 200
Ratnikov Fedor IT, Karlsruhe, Germany fedor.ratnikov@kit.edu 206
Ratnikova Natalia IT, Karlsruhe, Germany ratnik@ekp.uni-karlsruhe.de 212
Rukoyatkina Tatiana JINR rukoyt@susne.jinr.ru 158
Schovancova Jaroslava IP, ASCR, Czech Republic jschovan@cern.ch 219
Sedykh George JINR eg0r@bk.ru 223
Shapovalov Andrey MEPhI, Moscow, Russia andrey.shapovalov@desy.de 227
Sidorchuk Sergey JINR sid@nrmail.jinr.ru 242
Sirotin Alexander JINR sirotin@nf.jinr.ru 236
Slepnev Roman JINR roman@nrmail.jinr.ru 242
Strizh Tatyana JINR strizh@jinr.ru 68
Svistunov Sergiy ITP, Ukraine svistunov@bitp.kiev.ua 246
Tarasov Vladimir JINR vtarasov51@mail.ru 158
Tikhonenko Elena JINR eat@cv.jinr.ru 68,148
Tleulessova Indira ENU, Kazakhstan indira.t.nph@gmail.com 200
Tomskova Anna IM & ICT Uzbekistan tomskovaanna@gmail.com 253
Trusca M.R.C. NIRD of IMT, Romania Radu.Trusca@itim-cj.ro 259
Tsaregorodtsev Andrei CPP, Marseille, France atsareg@in2p3.fr 269
Tsyganov Yury JINR tyura@sungns.jinr.ru 278,281,286
Tutunnikov Sergey JINR tsi@sunse.jinr.ru 223
Voinov Alexey JINR voinov@sungns.jinr.ru 281,286
Zager Valery JINR valery@jinr.ru 292
Zhiltsov Victor JINR zhiltsov@jinr.ru 68,148
Zlokazov Victor JINR zlokazov@jinr.ru 281,300
