NUCLEAR ELECTRONICS & COMPUTING
NEC'2011
Proceedings of the XXIII International Symposium
Varna, Bulgaria, September 12-19, 2011
Joint Institute for Nuclear Research
Dubna, 2011
The Proceedings of the XXIII International Symposium on Nuclear Electronics &
Computing (NEC'2011) contain the papers presented at NEC'2011, which was held on
12-19 September, 2011 in Varna, Bulgaria. The symposium was organized by the Joint Institute
for Nuclear Research (Dubna, Russia), the European Laboratory for Particle Physics (CERN)
(Geneva, Switzerland) and the Institute for Nuclear Research and Nuclear Energy of the
Bulgarian Academy of Sciences (Sofia, Bulgaria). The symposium was devoted to the problems
of detector & nuclear electronics, computer applications for measurement and control in
scientific research, triggering and data acquisition, methods of experimental data analysis,
computing & information systems, computer networks for scientific research and GRID
computing.
General Information
The XXIII International Symposium on Nuclear Electronics and Computing (NEC'2011) was
held on 12-19 September, 2011 in Varna, Bulgaria. The symposium was organized by the
Joint Institute for Nuclear Research (JINR) (Dubna, Russia), European Organization for
Nuclear Research (CERN) (Geneva, Switzerland) and the Institute for Nuclear Research and
Nuclear Energy of the Bulgarian Academy of Sciences (INRNE) (Sofia, Bulgaria). About
100 scientists from 15 countries (Russia, Bulgaria, Switzerland, Czech Republic, Poland,
Belarus, Azerbaijan, Germany, Georgia, France, USA, Italy, Romania, Ukraine and
Kazakhstan) participated in NEC'2011. They presented 61 oral reports and 28
posters.
JINR (Dubna):
V.V. Korenkov - co-chairman, E.A. Tikhonenko - scientific secretary, A. Belova - secretary,
A.G. Dolbilov, N.V. Gorbunov, S.Z. Pokuliak, Y.K. Potrebenikov, A.V. Prikhodko,
V.I. Prikhodko, S.I. Sidortchuk, T.A. Strizh, A.V. Tamonov, N.I. Zhuravlev
CERN (Geneva):
L. Mappelli - co-chairman, T. Kurtyka, P. Hristov
INRNE (Sofia):
I.D. Vankov - co-chairman, L.P. Dimitrov - secretary, S. Piperov, K. Gigov
International Program Committee
O. Abdinov (IoP, Baku), F. Adilova (IM&IT, AS, Tashkent), D. Balabanski (INRNE, Sofia),
A. Belic (IP, Belgrade), I. Bird (CERN, Geneva), J. Cleymans (University of Cape Town),
S. Enkhbat (AEC, Ulan Bator), D. Fursaev (Dubna University), I. Golutvin (JINR, Dubna),
H.F. Hoffmann (ETH, Zurich), V. Ilyin (SINP MSU, Moscow), V. Ivanov (JINR, Dubna),
B. Jones (CERN, Geneva), V. Kantser (ASM, Chisinau), A. Klimentov (BNL, Upton),
M. Korotkov (ISTC, Moscow), V. Sahakyan (IIAP, Yerevan), M. Lokajicek (IP ASCR,
Prague), G. Mitselmakher (UF, Gainesville), S. Newhouse (EGI, Amsterdam), V. Shirikov
(JINR, Dubna), N. Rusakovich (JINR, Dubna), N. Shumeiko (NSECPHEP, Minsk),
N.H. Sweilam (Cairo University), M. Turala (IFJ PAN, Krakow), A. Vaniachine (ANL,
Argonne), G. Zinovjev (ITP, Kiev).
MAIN TOPICS
Detector & Nuclear Electronics;
Accelerator and Experiment Automation Control Systems. Triggering and Data
Acquisition;
Computer Applications for Measurement and Control in Scientific Research;
Methods of Experimental Data Analysis;
Data & Storage Management. Information & Data Base Systems;
GRID & Cloud computing. Computer Networks for Scientific Research;
LHC Computing;
Innovative IT Education: Experience and Trends.
CONTENTS
Creating a distributed computing grid of Azerbaijan for collaborative research
O. Abdinov, P. Aliyeva, A. Bondyakov, A. Ismayilov 8
Fast Hyperon Reconstruction in the CBM
V.P. Akishina, I.O. Vassiliev 10
Remote Control of the Nuclotron magnetic field correctors
V.A. Andreev, V.A. Isadov, A.E. Kirichenko, S.V. Romanov, V.I. Volkov 16
The Status and Perspectives of the JINR 10 Gbps Network Infrastructure
K.N. Angelov, A.E. Gushin, A.G. Dolbilov, V.V. Ivanov, V.V. Korenkov,
L.A. Popov 20
The TOTEM Roman Pot Electronics System
G. Antchev 29
Novel detector systems for nuclear research
D.L. Balabanski 42
Data handling and processing for the ATLAS experiment
D. Barberis 52
Time-of-flight system for controlling the beam composition
P. Batyuk, I. Gnesi, V. Grebenyuk, A. Nikiforov, G. Pontecorvo, F. Tosello 60
Development of the grid-infrastructure for molecular docking problems
R. Bazarov, V. Bruskov, D. Bazarov 64
Grid Activities at the Joint Institute for Nuclear Research
S.D. Belov, P. Dmitrienko, V.V. Galaktionov, N.I. Gromova, I. Kadochnikov,
V.V. Korenkov, N.A. Kutovskiy, V.V. Mitsyn, D.A. Oleynik, A.S. Petrosyan,
I. Sidorova, G.S. Shabratova, T.A. Strizh, E.A. Tikhonenko, V.V. Trofimov,
A.V. Uzhinsky, V.E. Zhiltsov 68
Monitoring for GridNNN project
S. Belov, D. Oleynik, A. Petrosyan 74
A Low-Power 9-bit Pipelined CMOS ADC for the front-end electronics of
the Silicon Tracking System
Yu. Bocharov, V. Butuzov, D. Osipov, A. Simakov, E. Atkin 77
The selection of PMT for TUS project
V. Borejko, A. Chukanov, V. Grebenyuk, S. Porokhvoy, A. Shalyugin, L. Tkachev 86
The possibility to overcome the MAPD noise in scintillator detectors
V. Boreiko, V. Grebenyk, A. Kalinin, A. Timoshenko, L. Tkatchev 90
Prague Tier 2 monitoring progress
J. Chudoba, M. Elias, L. Fiala, J. Horky, T. Kouba, J. Kundrat, M. Lokajicek, J. Svec 94
Detector challenges at the CLIC multi-TeV e+e- collider
D. Dannheim 100
J/ψ → e+e- reconstruction in Au + Au collision at 25 AGeV in the CBM experiment
O.Yu. Derenovskaya, I.O. Vassiliev 107
Acquisition Module for Nuclear and Mössbauer Spectroscopy
L. Dimitrov, I. Spirov, T. Ruskov 112
Business Processes in the Context of Grid and SOA
V. Dimitrov 115
ATLAS TIER 3 in Georgia
A. Elizbarashvili 122
JINR document server: current status and future plans
I. Filozova, S. Kuniaev, G. Musulmanbekov, R. Semenov, G. Shestakova,
P. Ustenko, T. Zaikina 132
Upgrade of Trigger and Data Acquisition Systems for the LHC Experiments
N. Garelli 138
VO Specific Data Browser for dCache
M. Gavrilenko, I. Gorbunov, V. Korenkov, D. Oleynik, A. Petrosyan, S. Shmatov 145
RDMS CMS data processing and analysis workflow
V. Gavrilov, I. Golutvin, V. Korenkov, E. Tikhonenko, S. Shmatov, V. Zhiltsov,
V. Ilyin, O. Kodolova, L. Levchuk 148
Remote operational center for CMS in JINR
A.O. Golunov, N.V. Gorbunov, V.V. Korenkov, S.V. Shmatov, A.V. Zarubin 154
JINR Free-electron maser for applied research: upgrade of the control system
and power supplies
E.V. Gorbachev, I.I. Golubev, A.F. Kratko, A.K. Kaminsky, A.P. Kozlov, N.I. Lebedev,
E.A. Perelstein, N.V. Pilyar, S.N. Sedykh, T.V. Rukoyatkina, V.V. Tarasov 158
GriNFiC - Romanian Computing Grid for Physics and Related Areas
T. Ivanoaica, M. Ciubancan, S. Constantinescu, M. Dulea 163
Current state and prospects of the IBR-2M instrument control software
A.S. Kirilov 169
Dosimetric Control System for the IBR-2 Reactor
A.S. Kirilov, M.L. Korobchenko, S.V. Kulikov, F.V. Levchanovskiy,
S.M. Murashkevich, T.B. Petukhova 174
CMS computing performance on the GRID during the second year of LHC collisions
P. Kreuzer on behalf of the CMS Offline and Computing Project 179
The Local Monitoring of ITEP GRID site
Y. Lyublev, M. Sokolov 186
Method for extending the working voltage range of high side current sensing circuits,
based on current mirrors, in high-voltage multichannel power supplies
G.M. Mitev, L.P. Dimitrov 191
Early control software development using emulated hardware
P. Petrova 196
Virtual lab: Monte Carlo modeling of physical processes in the interaction of
helium ions and fast neutrons with matter
B. Prmantayeva, I. Tleulessova 200
Big Computing Facilities for Physics Analysis: What Physicists Want
F. Ratnikov 206
CMS Tier-1 Center: serving a running experiment
N. Ratnikova 212
ATLAS Distributed Computing on the way to the automatic site exclusion
J. Schovancova on behalf of the ATLAS Collaboration 219
The free-electron maser RF wave centering and power density measuring
subsystem for biological applications
G.S. Sedykh, S.I. Tutunnikov 223
Emittance measurement wizard at PITZ, release 2
A. Shapovalov 227
Modernization of monitoring and control system of actuators and
object communication system of experimental installation DN2 at 6a channel
of reactor IBR-2M
A.P. Sirotin, V.K. Shirokov, A.S. Kirilov, T.B. Petukhova 236
VME based data acquisition system for ACCULINNA fragment separator
R.S. Slepnev, A.V. Daniel, M.S. Golovkov, V. Chudoba, A.S. Fomichev,
A.V. Gorshkov, V.A. Gorshkov, S.A. Krupko, G. Kaminski, A.S. Martianov,
S.I. Sidorchuk, A.A. Bezbakh 242
Ukrainian Grid Infrastructure. Current state
S. Svistunov 246
GRID-MAS conception: the applications in bioinformatics and telemedicine
A. Tomskova, R. Davronov 253
Techniques for parameters monitoring at Datacenter
M.R.C. Trusca, F. Farcas, C.G. Floare, S. Albert, I. Szabo 259
Solar panels as possible optical detectors for cosmic rays
L. Tsankov, G. Mitev, M. Mitev 264
Managing Distributed Computing Resources with DIRAC
A. Tsaregorodtsev 269
On some specific parameters of PIPS detector
Yu. Tsyganov 278
Automation of the experiments aimed to the synthesis of superheavy elements
Yu. Tsyganov, A. Polyakov, A. Sukhov, V. Subbotin, A. Voinov,
V. Zlokazov, A. Zubareva 281
Calibration of the silicon position-sensitive detectors using the implanted
reaction products
A.A. Voinov, V.K. Utyonkov, V.G. Subbotin, Yu.S. Tsyganov, A.M. Sukhov,
A.N. Polyakov, A.M. Zubareva 286
High performance TDC module with Ethernet interface
V. Zager, A. Krylov 292
Front End Electronics for TPC MPD/NICA
Yu. Zanevsky, A. Bazhazhin, S. Bazylev, S. Chernenko, G. Cheremukhina,
V. Chepurnov, O. Fateev, S. Razin, V. Slepnev, A. Shutov,
S. Vereschagin, V. Zryuev 296
Mathematical Model for the Coherent Scattering of a Particle Beam
on a Partially Ordered Structure
V.B. Zlokazov 300
The distributed subsystem to control parameters of the Nuclotron extracted beam
E.V. Gorbachev, N.I. Lebedev, N.V. Pilyar, S.V. Romanov, T.V. Rukoyatkina,
V.I. Volkov 307
INDEX of REPORTERS 312
Creating a distributed computing grid of Azerbaijan for
collaborative research
O. Abdinov, P. Aliyeva, A. Bondyakov, A. Ismayilov
Institute of Physics, Baku, Azerbaijan
In this article we briefly review the results of building a distributed computing system
with Grid architecture, based on the gLite middleware package, at the Institute of Physics of
ANAS. It was created to meet the challenges of distributed processing of data from the
experiments at the Large Hadron Collider (LHC) in Geneva, Switzerland.
A number of scientific centers of Azerbaijan, such as BSU and IP, have many years of
traditional high-level cooperation with international research centers in the area of basic
research. The rapid development of network, computer and information technologies in
recent years has created the preconditions for the unification of the network and information
resources of Europe and Azerbaijan, aimed at solving specific scientific and applied problems
whose successful implementation is impossible without the use of high-performance
computing, new approaches to distributed and parallel calculations and the use of large
data storage systems.
Creating a Grid infrastructure will significantly improve the effectiveness of
cooperation between the research centers of Azerbaijan and Europe. New research groups of
the Baku State University, the Institute of Physics of the National Academy of Sciences of
Azerbaijan, the Institute of Information Technology and others will join the cooperation,
significantly expanding the spectrum of scientific and applied research of mutual interest.
In 2009 we started the creation of a computing Grid infrastructure in Azerbaijan and
work on installing the necessary clusters and application programs on it. In the course of the
project the following results were achieved:
- creation of the computing center at the Institute of Physics (Fig. 1), which functions
24/7; the center includes 300 multiprocessor computers based on Intel Xeon processors,
data storage (~140 TB) based on a client-server architecture, 160 cores (blade servers)
and 4 UPS,
- setting up a high-speed connection to the Internet by means of optical fiber cables
(with a speed of 25 MB/s) (Fig. 2),
- installation of middleware, accomplishment of test trials, and adjustment of the
uninterrupted functioning of the Grid segment,
- preparation of the necessary conditions for other scientific and educational centers
of Azerbaijan to connect to the given Grid segment.
Fig. 1. The computing center in the Institute of Physics of ANAS in Baku
Fig. 2. The network infrastructure of AZRENA (Research and Educational Network in
Azerbaijan)
At the same time, work began on several projects in different organizations: research
institutes and universities. Among the first, of course, were the groups and organizations
already engaged in research in biology and medicine and in the study of the properties of
matter in particle accelerators (in experimental nuclear physics and high-energy physics). All
work of this type was carried out jointly by a large number of specialists from various
disciplines. As a result:
- a local certification authority (local CA) was created and is being tested now,
- Azerbaijan grid services will be connected to the EDU VO (JINR) with the local CA,
- the local CA will be registered in EUGridPMA,
- an agreement with Ukraine to participate in the Medgrid VO (Medical Grid system
for population research in the field of cardiology with an electrocardiogram database),
with 10 TB of disk space,
- research on solid state physics (charge density and electronic structure of systems
made of electrons and nuclei (molecules and periodic solids) within Density
Functional Theory (DFT), using pseudopotentials and a plane-wave basis),
- research on astrophysics (calibration, data analysis, image display, plotting, and a
variety of ancillary tasks on astronomical data).
Fast Hyperon Reconstruction in the CBM
V.P. Akishina 1,2, I.O. Vassiliev 2,3
1 Laboratory of Information Technologies, Joint Institute for Nuclear Research, Dubna, Russia
2 Goethe University, Frankfurt, Germany
3 GSI, Darmstadt, Germany
Introduction
The main goal of the Compressed Baryonic Matter (CBM) Experiment [1,2] is to
study the behavior of nuclear matter under the conditions of high baryonic density in which
the transition to a deconfined quark-gluon plasma phase is expected. One of the signatures of
this new state is the enhanced production of strange particles; therefore hyperon
reconstruction is essential for the understanding of heavy-ion collision dynamics. Also, the
yield of particles carrying strange quarks is expected to be sensitive to the fireball evolution.
Ω⁻-hyperon decay reconstruction
The Ω⁻ hyperon consists of 3 strange quarks and is therefore one of the most
interesting objects. Like all other hyperons, the Ω⁻ will be measured in the CBM detector
through its decay into charged hadrons, which are detected in the STS.
Input to the simulation
To study the feasibility of fast hyperon reconstruction in the CBM experiment, a set of
10⁴ central Au+Au UrQMD [3] events and a set of 10⁴ Ω⁻ → ΛK⁻ decay events were
processed at 25 AGeV using the detector simulation tool CBMROOT with the GEANT3
engine. To investigate the dependence of the Ω⁻ reconstruction efficiency on the track
multiplicity, 10⁴ minimum bias Au+Au UrQMD events and a set of 10⁴ Ω⁻ → ΛK⁻ decays
embedded into central UrQMD events were simulated.
A central Au+Au UrQMD event at 25 AGeV contains on average 362 pions,
161 protons, 32 Λs and 13 kaons, which contribute to the background. The realistic STS
geometry with 2 MAPS at 5 cm and 10 cm and 8 double-sided segmented strip detectors was
used as the tracker. Monte Carlo (MC) identification for protons was used, which can be
successfully replaced by particle identification with the Time-of-Flight (TOF) detector in the
experiment.
Event reconstruction
The reconstruction of Ω⁻ decay events includes several steps. First, fast SIMDized
track finding and fitting [7,8] in the STS is performed using the L1 reconstruction package
[4]. A track with at least 4 MC points in the STS stations is considered reconstructable. A
reconstructed track is assigned to an MC particle if at least 70% of the track hits were caused
by this particle; a reconstructed track is called a ghost if it is not assigned to any particle
according to the 70% criterion. The track fitting algorithm is based on the Kalman filter [5].
The primary vertex was determined from all reconstructed tracks in the STS, excluding the
ones coming from well-detached vertices.
Fast SIMDized Λ-decay reconstruction is already implemented. The average times of
the different reconstruction stages are listed in Table 1. Online Ω⁻-decay reconstruction is
under development.
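A minimal sketch of this matching rule (hypothetical hit-to-MC-id lists; the actual CBMROOT implementation differs in detail):

```python
from collections import Counter

def match_track(track_hit_mc_ids, min_purity=0.70):
    """Assign a reconstructed track to the MC particle that caused
    at least 70% of its hits; otherwise classify it as a ghost.

    track_hit_mc_ids: list of MC particle ids, one per hit on the track.
    Returns the matched MC id, or None for a ghost.
    """
    if not track_hit_mc_ids:
        return None
    mc_id, n_hits = Counter(track_hit_mc_ids).most_common(1)[0]
    purity = n_hits / len(track_hit_mc_ids)
    return mc_id if purity >= min_purity else None

# Example: 5 of 6 hits from particle 42 -> matched; a 50/50 track -> ghost
assert match_track([42, 42, 42, 42, 42, 7]) == 42
assert match_track([1, 1, 2, 2]) is None
```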
Table 1. Time requirements for reconstruction stages [6]
Finder: 80 ms
Fitter: 1.6 ms
PV: 51.2 ms
Λ: 43.8 ms
Total: 176.6 ms / 16 cores
Detection strategy
To distinguish the signal from the background, a set of cuts on single tracks,
reconstructed Λ candidates and Ω⁻-hyperon parameters was obtained. The cuts were chosen
with respect to the significance by studying the simulated distributions of the cut variables for
signal and background pairs. For each type of particle the significance function, which shows
the feasibility of signal detection against the background fluctuation, was calculated as

Significance = Signal / √(Signal + Background)
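In practice the optimal cut is found by scanning thresholds of a cut variable and evaluating this significance at each one. A schematic sketch with synthetic stand-in distributions (not the CBM analysis code):

```python
import numpy as np

def significance(n_signal, n_background):
    total = n_signal + n_background
    return n_signal / np.sqrt(total) if total > 0 else 0.0

def scan_cut(signal_vals, background_vals, thresholds):
    """Return the threshold maximizing Signal/sqrt(Signal+Background)
    for a lower cut (keep entries with value > threshold)."""
    return max(
        thresholds,
        key=lambda t: significance(np.sum(signal_vals > t),
                                   np.sum(background_vals > t)),
    )

# Toy example: signal chi2_prim tends to be larger than for primary tracks
rng = np.random.default_rng(0)
sig = rng.normal(10, 3, 10_000)   # stand-in for signal chi2_prim values
bkg = rng.normal(2, 1, 100_000)   # stand-in for primary-track values
print(scan_cut(sig, bkg, np.linspace(0, 15, 61)))
```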
Cuts on proton, pion and kaon tracks
In order to reduce the combinatorial background we first of all apply cuts at the
single-track level. The impact parameter of a charged track is defined as the distance between
the primary vertex and the track extrapolation point in the target plane z = z_pv. This value,
measured in σs, is called χ²_prim and takes into account the track extrapolation errors, which
depend on the particle momentum. This cut is intended to reduce the amount of primary
particles: χ²_prim is smaller for particles coming from the primary vertex than for signal ones.
The significance for protons, pions and kaons reaches its maximum at χ²_primcut = 5.5, 7 and
8 σ respectively (Fig. 1 for protons), therefore the cuts χ²_prim > χ²_primcut were chosen as
optimal for each type of particle.
Fig. 1. Significance for protons vs. χ²_prim.
Fig. 2. Significance for Λ vs. χ²_geo.
Cuts on Λ-candidate parameters
After the off-vertex tracks were selected, all protons are combined with negatively
charged tracks to create a Λ candidate, using the KFParticle package [5]. Thus, the second
step is aimed at suppressing the Λ-determined background.
The next cut used for Λ candidates is a cut on χ²_geo, which measures in σs the
distance of closest approach for the pair of tracks, calculated as the distance between the
tracks at the z-position of the secondary vertex. The optimal cut χ²_geo < 3 σ (Fig. 2)
reduces random combinations of tracks.
The next cut, on the z-position of the fitted secondary vertex, also reduces random
combinations. Most Λ decay points are well detached from the target plane, therefore a
z > 5 cm cut was applied. As one can see in Fig. 3, the significance in this case has a wide,
flat maximum with the efficiency varying in a narrow range, therefore 5 cm was chosen as
the cut value in order to save more signal at almost the same significance. The χ²_topo of the
Λ candidate is defined as the distance between the primary vertex and the extrapolation of the
reconstructed particle's momentum to the interaction vertex in the target plane, measured in
σs. Primary Λs come from the primary vertex, while signal ones come from the Ω⁻ decay
point, therefore primary Λs have a smaller χ²_topo than signal daughter ones, and the cut
χ²_topo > 7.5 σ, for which the significance reaches its maximum (Fig. 4), was applied. This
cut reduces random combinations as well as primary Λs.
Fig. 3. Significance for Λ vs. z-position.
Fig. 4. Significance for Λ vs. χ²_topo.
Fig. 5. π⁻p candidates invariant mass spectrum.
Fig. 6. π⁻p candidates invariant mass spectrum: m_inv = m_pdg ± 6 σ.
The obtained Λ candidates invariant mass distribution has σ = 1.282 MeV/c² (Fig. 5).
The reconstructed mass value of 1.116 ± 0.003 GeV/c² is in good agreement with m_pdg [9].
The last cut we used for Λ candidates is the cut m_inv = m_pdg ± 6 σ on the invariant mass
(Fig. 6).
Cuts on Ω⁻-candidate parameters
For the last step, Ω⁻ candidates are accepted if they have a good-quality geometrical
and topological detached vertex: χ²_topo < 3 σ, χ²_geo < 3 σ, z > 5 cm.
Reconstructed invariant mass distribution of Ω⁻ candidates
After applying the cuts to 10⁴ background central Au+Au UrQMD events, the distribution
was fitted with a fourth-degree polynomial. The shapes of the background and signal were
normalized to 10⁸ central Au+Au UrQMD events, and statistical fluctuations were added. The
obtained signal and background invariant mass spectra are shown in Fig. 7. The signal
reconstruction efficiency is 0.5%. The obtained signal-to-background ratio is S/B_2σ = 0.38.
The reconstructed mass value of 1.6724 ± 0.005 GeV/c² is in good agreement with the
simulated one, 1.67245 GeV/c².
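The quoted S/B_2σ is the ratio of signal to background counts inside a ±2σ window around the reconstructed mass; a small bookkeeping sketch with toy numbers (not the paper's data):

```python
import numpy as np

def s_over_b(masses_sig, masses_bkg, m0, sigma, n_sigma=2.0):
    """Count signal and background entries within m0 +/- n_sigma*sigma
    and return (S, B, S/B)."""
    lo, hi = m0 - n_sigma * sigma, m0 + n_sigma * sigma
    s = np.sum((masses_sig > lo) & (masses_sig < hi))
    b = np.sum((masses_bkg > lo) & (masses_bkg < hi))
    return s, b, s / b if b else float("inf")

# Toy distributions only; the paper reports S/B_2sigma = 0.38 for the Omega-
rng = np.random.default_rng(1)
sig = rng.normal(1.6724, 0.002, 380)     # Gaussian signal peak
bkg = rng.uniform(1.64, 1.71, 10_000)    # flat combinatorial background
print(s_over_b(sig, bkg, 1.6724, 0.002))
```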
Fig. 7. Reconstructed invariant mass distribution of ΛK⁻ candidates. The Ω⁻ reconstruction
efficiency is 0.55%, the S/B ratio is 0.4, the reconstructed mass value is 1.672 GeV/c².
Fig. 8. Ω⁻-hyperon reconstruction efficiency vs. track multiplicity.
Efficiency analysis
In order to investigate the dependence of the reconstruction efficiency on the track
multiplicity, 10⁴ minimum bias Au+Au UrQMD events and a set of 2×10⁴ Ω⁻ decays
embedded into full UrQMD events were simulated. As a result, the dependence of the
Ω⁻-hyperon reconstruction efficiency on the track multiplicity was obtained (Fig. 8). When
the signal events are reconstructed alone, the reconstruction efficiency after all cuts is
ε = 3.1%. In the case of central UrQMD Au+Au collisions, ε drops down to 0.5%. These two
values were obtained with high statistics and are shown as the precise dots in the figure. The
average minimum bias efficiency is 2.4%. This efficiency drop is caused by clustering in the
STS detector.
Ξ⁻-decay reconstruction
To study the feasibility of Ξ⁻ and Λ reconstruction in the CBM experiment, a set of
10⁴ central Au+Au UrQMD events at 4.85 AGeV was simulated. At 4.85 AGeV a central
Au+Au UrQMD event contains on average 7 Λs and 0.034 Ξ⁻. The Ξ⁻ decays to Λπ⁻ with a
branching ratio of 99.9% and cτ = 4.91 cm. The STS geometry with 8 double-sided segmented
strip detectors was used for tracking. No kaon, pion or proton identification is applied. In
order to reconstruct the Λ → pπ⁻ decay, the proton mass was assumed for all positively
charged tracks and the pion mass for all negatively charged ones. The combination of the
single-track cut (χ²_prim > 3 σ) and the geometrical vertex cut (χ²_geo < 3 σ) allows one to
see a clear Λ signal (Fig. 9).
The Ξ⁻ event reconstruction includes several steps: fast SIMDized track finding and
fitting [7, 8], where all tracks are found; tracks with χ²_prim > 8 σ and 5 σ (positively and
negatively charged, respectively) are selected for the Λ search, where positively charged
tracks are combined with the π⁻ tracks to construct a Λ KFParticle; a good-quality
geometrical vertex (χ²_geo < 3 σ) is required to suppress the combinatorial background.
The invariant mass of the reconstructed pair is compared with the Λ mass value; only
pairs inside 1.116 ± 6 σ were accepted. For primary Λ rejection, only Λs with χ²_prim > 5 σ
and a z-vertex greater than 6 cm are chosen. Selected Λs were combined with the secondary
π⁻ (χ²_prim > 8 σ) tracks and ...
..., or the determination of slepton masses in SUSY models. This leads to a required
resolution of σ(p_T)/p_T² ≤ 2 × 10⁻⁵ GeV⁻¹. High-
resolution pixel vertex detectors are required for efficient tagging of heavy states through
displaced vertices, with an accuracy of approximately 5 μm for determining the transverse
impact parameters of high-momentum tracks and a multiple scattering term of approximately
15 μm. The latter can only be achieved with a very low material budget of less than 0.2% of a
radiation length per detection layer, corresponding to a thickness of less than 200 μm of
silicon, shared by the active material, the readout, the support and the cooling infrastructure.
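To put the resolution requirement in perspective, it implies a relative momentum resolution that grows linearly with p_T; a two-line illustration (values chosen for the example only):

```python
# sigma(pT)/pT^2 <= 2e-5 GeV^-1  implies  sigma(pT)/pT <= 2e-5 * pT
for pt in (10.0, 100.0, 1000.0):  # track pT in GeV
    print(f"pT = {pt:6.0f} GeV -> sigma(pT)/pT <= {2e-5 * pt:.2%}")
# e.g. 0.02% at 10 GeV, 0.2% at 100 GeV and 2% at 1 TeV
```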
The time structure of the collisions, with bunch crossings spaced by only 0.5 ns, in
combination with the expected high rates of beam-induced backgrounds, poses severe
challenges for the design of the detectors and their readout systems. Of the order of one
interesting physics event per 156 ns bunch train is expected, overlaid by an abundance of
particles originating from two-photon interactions. These background particles will lead to
large occupancies (number of hits per readout cell) in the inner and forward detector regions
and will require time stamping on the nano-second level in most detectors, as well as
sophisticated pattern-recognition algorithms to disentangle physics from background events.
The gap of 20 ms between consecutive bunch trains will be used for trigger-less readout of the
entire train. Furthermore, most readout subsystems will be operated in a power-pulsing mode
with the most power-consuming components switched off during the empty gaps, thus taking
advantage of the low duty cycle of the machine to reduce the required cooling power.
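The benefit of power pulsing follows directly from this time structure: a 156 ns bunch train every 20 ms gives a duty cycle of about 8 × 10⁻⁶. A back-of-the-envelope sketch (the power figures are arbitrary placeholders, not CLIC specifications):

```python
train_length = 156e-9   # s, one bunch train
train_period = 20e-3    # s, spacing between consecutive trains
duty_cycle = train_length / train_period
print(f"duty cycle = {duty_cycle:.1e}")  # ~7.8e-06

# Average power if the front-end draws p_on during the train and
# p_standby in the 20 ms gap (illustrative numbers only)
p_on, p_standby = 500.0, 1.0  # W
p_avg = p_on * duty_cycle + p_standby * (1 - duty_cycle)
print(f"average power ~ {p_avg:.2f} W")
```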
4. Detector Concepts
The detector concepts ILD [3] and SiD [4] developed for the International Linear
Collider (ILC) [5] with a center-of-mass energy of 500 GeV form the starting point for the
two general-purpose detector concepts CLIC_ILD and CLIC_SiD. Both detectors will be
operated in one single interaction region in an alternating mode, moving in and out every few
months through a so-called push-pull system. The main CLIC-specific adaptations of the ILC
detector concepts are an increased hadron-calorimeter depth, to improve the containment of
jets at the CLIC centre-of-mass energy of up to 3 TeV, and a redesign of the vertex and
forward regions to mitigate the effect of high rates of beam-induced backgrounds.
Fig. 2 shows cross-section views of CLIC_ILD and CLIC_SiD. Both detectors have a
barrel and endcap geometry with the barrel calorimeters and tracking systems located inside a
superconducting solenoid providing an axial magnetic field of 4 T in the case of CLIC_ILD
and 5 T in the case of CLIC_SiD. The highly granular electromagnetic and hadronic
calorimeters of both detectors are designed for the concept of particle-flow calorimetry, which
allows reconstructing individual particles by combining calorimeter and tracking information
and thereby improving the jet-energy resolution to the required excellent levels. The total
combined depth of the electromagnetic and hadronic calorimeters is about 8.5 hadronic
interaction lengths. The hit-time resolution of the calorimeters is of the order of 1 ns.
Fig. 2. Longitudinal cross section of the top quadrant of CLIC_ILD (left) and CLIC_SiD (right)
In the CLIC_ILD concept, the tracking system is based on a large Time Projection
Chamber (TPC) with an outer radius of 1.8 m complemented by an envelope of silicon strip
detectors and by a silicon pixel vertex detector. The all-silicon tracking and vertexing system
in CLIC_SiD is more compact with an outer radius of 1.3 m.
Both vertex detectors are based on semiconductor technology with pixels of 20 μm × 20 μm
size. In case of CLIC_ILD, both the barrel and forward vertex detectors consist of three double
layers, which reduce the material thickness needed for supports. Fig. 3 shows a sketch of the
vertex-detector region of CLIC_ILD. For CLIC_SiD, a geometry with five single barrel layers
and seven single forward layers was chosen. The high rates of incoherently produced
electron-positron pair background events constrain the radius of the thin beryllium beam pipes
and of the innermost barrel layers. For CLIC_ILD the beam pipe is placed at a radius of
29 mm, while the larger magnetic field in CLIC_SiD leads to a larger suppression of low-p_T
charged particles and therefore allows for a reduced beam-pipe radius of 25 mm. The material
budget of 0.1% - 0.2% of a radiation length per detection layer assumes that cooling can
proceed through forced air flow without additional material in the vertex region. The resulting
impact-parameter resolutions are as precise as 3 μm for high-momentum tracks, and the
momentum resolution of the overall tracking systems reaches the required value of
σ(p_T)/p_T² ≤ 2 × 10⁻⁵ GeV⁻¹. Time stamping of the pixel and strip hits with a precision of
5-10 ns will be used to reject out-of-time background hits.
The superconducting solenoids are surrounded by instrumented iron yokes that allow
measuring punch-through from high-energy hadron showers and detecting muons. Two small
electromagnetic calorimeters cover the very forward regions down to 10 mrad. They are
foreseen for electron tagging and for an absolute measurement of the luminosity through
Bhabha scattering.
5. Backgrounds in the Detectors
Beamstrahlung off the colliding electron and positron bunches will lead to high rates
of electron-positron pairs, mostly at low transverse momenta and small polar angles. In
addition, hadronic events are produced in two-photon interactions with larger transverse
momenta and polar angles. Fig. 4 (right) compares the polar-angle distributions of the main
sources of beam-induced background events. Electron-positron pairs produced coherently and
through the so-called trident cascade do not affect the detectors, as they leave the detector
along the post-collision line with a design acceptance of <10 mrad. The incoherently
produced electron-positron events affect mostly the forward regions and the inner tracking
detectors. Approximately 60 particles from incoherent pair events per bunch crossing will
reach the inner layers of the vertex detector. The γγ → hadrons events will result in
approximately 54 particles per bunch crossing in the vertex detectors. Fig. 4 (left) shows the
expected hit rates in the barrel vertex-detector layers of the CLIC_ILD detector, as obtained
with two different simulation setups. Readout train occupancies of up to 2% are expected in
the barrel layers and of up to 3% in the forward layers, including safety factors for the
simulation uncertainties and cluster formation.
Due to their harder p_T spectrum, the γγ → hadrons events will also lead to large
occupancies and significant energy deposits in the calorimeters. The expected train
occupancies are up to 50% in the electromagnetic endcap calorimeters and up to 1000% in the
hadronic endcap calorimeters. Multiple readouts per train and possibly a higher granularity
for the high-occupancy regions will be required to cope with these high rates. The total energy
deposition in the calorimeters from electron-positron pairs and from γγ → hadrons events is
approximately 37 TeV per train, posing a severe challenge for the reconstruction algorithms.
Cluster-based timing cuts in the 1-3 ns range are applied offline to mitigate the effect of the
backgrounds on the measurement accuracy for high-p_T physics objects.
Fig. 3. Longitudinal cross section of the barrel and forward vertex region of the
CLIC_ILD detector. Dimensions are given in millimeters
Fig. 4. Polar-angle distribution of the main sources of beam-induced backgrounds, normalised
to one bunch crossing (left); average hit densities in the CLIC_ILD barrel vertex detectors for
particles originating from incoherent electron-positron pairs and from γγ → hadrons (right).
The radiation exposure of the main detector elements is expected to be small
compared to the corresponding regions in high-energy hadron colliders. For the non-ionizing
energy loss (NIEL), a maximum total fluence of less than 10¹¹ n_eq/cm²/year is expected for
the inner barrel and forward vertex layers. The simulation results for the total ionizing dose
(TID) predict approximately 200 Gy/year for the vertex detector region.
6. Detector R&D
Hardware R&D for the proposed CLIC detectors has a large overlap with the
corresponding developments for the ILC detectors. In several areas, however, CLIC-specific
requirements need to be addressed. The following list contains examples of ongoing R&D
projects for the CLIC detectors:
- Hadronic calorimetry. The higher jet energies expected at CLIC require a denser
absorber material for a given maximal radius of the barrel hadronic calorimeter,
compared to ILC conditions. Tungsten is therefore foreseen as absorber for the barrel
hadronic calorimeter. Prototypes of highly granular tungsten-based calorimeters with
either analog or digital readout are currently under study in test beams performed
within the CALICE collaboration. One of the main goals of these tests is to improve
the simulation models describing the enlarged slow component of the hadronic
showers in tungsten, compared to the ones in steel absorbers;
- Vertex detector. The vertex detectors have to fulfill a number of competing
requirements. Small pixels and therefore small feature sizes are needed to reach very
high measurement accuracy and to keep the occupancies low. Time stamping in the
5-10 ns range requires fast signal collection and shaping. The amount of material has
to stay within a budget of 0.1% - 0.2% of a radiation length per detection layer, asking
for ultra-thin detection and readout layers and low-mass cooling solutions. Two
principal lines of vertex-detector R&D are pursued to reach these ambitious goals: In
the hybrid-detector approach, thinned high-resistivity fully depleted sensor layers will
be combined with fast low-power and highly integrated readout layers through low-
mass interconnects. The integrated technology option combines sensor and readout in
one chip. The charge collection proceeds in an epitaxial layer. Hybrid solutions
factorize the sensor and readout R&D and take advantage of industry-standard
processes for the readout layers. Drawbacks are the higher material budget, the
additional material and cost for interconnects and the additional complication of
handling the thinned structures. Integrated technologies can reach lower material
budgets and very low power consumption. On the other hand, fast signal collection
and readout has not been demonstrated yet in these technologies. A concern for a
future application at CLIC is also the limited availability of the custom-made
integrated CMOS processes;
- Low-mass cooling solutions. A total power of approximately 500 W will be dissipated
in the vertex detectors alone. The small material budget for the inner tracking
detectors constrains severely the permitted amount of cooling infrastructure. For the
vertex barrel layers, forced air-flow cooling is therefore foreseen. Fig. 6 shows a
calculation of the temperature distribution inside the barrel layers of the CLIC_SiD
vertex detector as a function of the air-flow rate. A flow rate of up to 240 liters/s,
corresponding to a flow velocity of 40 km/h, is required to keep the temperatures at an
acceptable level. Further R&D is required to demonstrate the feasibility of this air-
flow cooling scheme. Possible vibrations arising from the high flow velocities are of
particular concern. Supplementary micro-channel cooling [6] or water-based under-
pressure cooling may be required in the forward vertex regions;
- Power pulsing and power distribution. The ambitious power-consumption targets for
all CLIC sub-detectors (for example < 50 mW/cm² in the vertex detectors) can only
be achieved by means of pulsed powering, taking advantage of the low duty cycle of
the CLIC machine. The main power consumers in the readout circuits will be kept in
standby mode during most of the empty gap of 20 ms between consecutive bunch
trains. Furthermore, efficient power distribution will be needed to limit the amount of
material used for cables. Low drop-out regulators or DC/DC converters will be used in
combination with local energy storage to limit the current and thereby the cabling
material needed to bring power to the detectors. Both the power pulsing and the
power-delivery concepts have to be designed and thoroughly tested for operation in a
magnetic field of 4-5 T;
- Solenoid coil. Design studies for high-field thin solenoids are ongoing, building up on
the experience with the construction and operation of the LHC detector magnets.
Principal concerns are the uniformity of the magnetic field, the ability to precisely
measure the field map and the requirement to limit the stray field outside the detector;
- Overall engineering design and integration studies. Various CLIC-specific
engineering and integration studies are ongoing. The main areas of these studies are
the design of the experimental caverns including centralized infrastructure for cooling
and powering, access scenarios in the push-pull configuration and integration issues
related to the machine-detector interface.
Fig. 6. Calculated average temperatures of the five barrel layers of the CLIC_SiD vertex
detector as a function of the total air-flow rate
Conclusion
The detectors of the multi-TeV CLIC machine will have unsurpassed physics reach for
discoveries and for precision measurements complementing the results expected from the
LHC experiments. The proposed CLIC detector concepts will be able to measure the physics
with good precision, despite the high energies and challenging background conditions.
Detector R&D studies are ongoing worldwide, in collaboration with the ILC detector
community, aiming to meet the required performance goals.
References
[1] CLIC Conceptual Design Report: Physics & Detectors, 2011, available at
https://edms.cern.ch/document/1160419
[2] CLIC Conceptual Design Report: The CLIC Accelerator Design, in preparation.
[3] T. Abe et al. The International Large Detector: Letter of Intent, 2010, arXiv:1006.3396.
[4] H. Aihara et al. SiD Letter of Intent, 2009, arXiv:0911.0006, SLAC-R-944.
[5] J. Brau, (ed.) et al. International Linear Collider Reference Design Report, 2007, ILC-
REPORT-2007-001.
[6] A. Mapelli et al. Low material budget microfabricated cooling devices for particle
detectors and front-end electronics. Nucl. Phys. Proc. Suppl., 215, 2011, pp. 349-352.
J/ψ → e⁺e⁻ reconstruction in Au + Au collisions at 25 AGeV in the
CBM experiment
O.Yu. Derenovskaya 1, I.O. Vassiliev 2,3
1 Laboratory of Information Technologies, Joint Institute for Nuclear Research, Dubna, Russia
2 Goethe University, Frankfurt, Germany
3 GSI, Darmstadt, Germany
Introduction
The Compressed Baryonic Matter (CBM) Experiment [1, 2] is designed to investigate
high-energy heavy-ion collisions at the future international Facility for Antiproton and Ion
Research (FAIR) in Darmstadt, Germany. A scientific goal of the research program of the
CBM experiment is to explore the phase diagram of strongly interacting matter in the region
of the highest baryon densities.
The proposed detector system is schematically shown in Fig. 1. Inside the dipole
magnet there is a Silicon Tracking System (STS), which provides track and vertex
reconstruction and track momentum determination. The Ring Imaging Cherenkov (RICH)
detector has to identify electrons among about one thousand other charged particles.
Transition Radiation Detector (TRD) arrays additionally identify electrons with momenta
above 1 GeV/c. The TOF detector provides the time-of-flight measurements needed for
hadron identification. The electromagnetic calorimeter (ECAL) measures electrons and
photons.
Fig. 1. CBM experimental setup
The investigation of charmonium production is one of the key goals of the CBM
experiment. The main difficulty lies in the extremely low multiplicity expected in Au+Au
25 AGeV collisions near the J/ψ production threshold. That is why efficient event selection
based on J/ψ signatures is necessary in order to reduce the data volume to a recordable rate.
Here we present results on the reconstruction of the J/ψ meson in its di-electron decay
channel using KFParticle, with complete reconstruction including the RICH, TRD and TOF
and assuming the realistic STS detector setup.
Input to the simulation
To study the feasibility of J/ψ detection, background and signal events have been
simulated. Decay electrons from the J/ψ were simulated with the PLUTO [5] generator. The
background was calculated with a set of central gold-gold UrQMD [6] events at 25 AGeV.
The signal embedded into the background was transported through the standard CBM
detector setup. In the event reconstruction, particles are first tracked by the silicon tracking
system placed inside a dipole magnetic field, providing the momenta of the tracks. Global
tracking provides additional particle identification information using the RICH, TRD and
TOF subdetectors.
Electron identification
In order to reconstruct the J/ψ we used the full electron identification procedure
including the RICH, TRD and Time-of-Flight detectors. In the CBM experiment the electrons
and positrons are identified via their Cherenkov radiation measured with the RICH. The
Cherenkov ring positions and radii are determined by dedicated ring recognition algorithms,
and the ring centers are attached to the reconstructed particle tracks. The radius of the
reconstructed rings is shown in Fig. 2 as a function of the particle momentum. We use the
elliptic ring fit and apply ring quality cuts based on a neural network to separate electrons
from pions.
Fig. 2. The radius of the reconstructed rings as a function of the particle momentum.
Electrons and pions are clearly separated up to momenta of about 10 GeV/c
The electrons are also identified via their transition radiation measured with the TRD.
Fig. 3 shows the distributions of the energy losses of electrons and pions in the first TRD
layer. The distributions of energy losses in the other TRD layers are similar. Based on the
individual and total energy loss, we employed a neural network (a three-layered perceptron
from the ROOT package) to discriminate electrons from pions.
Fig. 3. Distribution of energy losses by electrons (dE/dx + TR) and pions (dE/dx) in the first
TRD layer
In addition to RICH and TRD, the information from TOF is also used to separate
hadrons from electrons (Fig. 4). The squared mass of charged particles is calculated from the
length traversed by the particle and the time of flight. A momentum dependent cut on squared
mass is used to reject hadrons (mainly pions) from the identified electron sample.
Fig. 4. The squared mass of charged particles as a function of the momentum in the TOF for
RICH identified electrons
The electron identification efficiency as well as the π-suppression factor as a function
of momentum is shown in Fig. 5.
Fig. 5. Efficiency of electron identification (left) and pion suppression factor (right) as
function of momentum
With the combined information from all detectors, we achieve an electron identification
efficiency of 60%. The combined RICH and TRD identification suppresses pions to a level of
13000. Evidently, the use of TRD information significantly improves the electron-pion
separation.
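Both figures of merit are simple ratios over labeled samples; a bookkeeping sketch with toy counters chosen to reproduce the quoted numbers:

```python
def id_performance(n_e_true, n_e_identified, n_pi_true, n_pi_misidentified):
    """Electron ID efficiency and pion suppression factor.

    Suppression factor = true pions / pions misidentified as electrons.
    """
    efficiency = n_e_identified / n_e_true
    suppression = n_pi_true / n_pi_misidentified
    return efficiency, suppression

# Toy numbers only, matching the 60% efficiency and ~13000 suppression
eff, sup = id_performance(10_000, 6_000, 13_000_000, 1_000)
print(f"efficiency = {eff:.0%}, pion suppression ~ {sup:.0f}")
```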
Reconstruction procedure
After electron identification, the positively charged tracks emerging from the target
which were identified as electrons by the RICH, TRD and TOF detectors are combined with
negatively charged tracks to construct a J/ψ candidate, using the KFParticle package [3]. In
order to suppress the physical electron background, a transverse momentum cut at 1 GeV/c
was applied to the track candidates for J/ψ decays to electrons. Fig. 6 shows the z-vertex of
the reconstructed J/ψ. We obtained quite good z-vertex resolution, which shows that using
KFParticle allows us to distinguish the J/ψ vertex with high accuracy.
Fig. 6. Distribution of the z-vertex of the reconstructed J/ψ; the rectangle marks the target area
For the study of the signal-to-background ratio, the signal mass spectrum was
generated from events with one J/ψ decay embedded into the UrQMD background. The
combinatorial background was obtained from pure UrQMD events. To increase the statistics,
the event mixing technique was applied. The signal spectrum was added to the background
after proper scaling, taking into account the assumed multiplicity (HSD transport code), the
J/ψ reconstruction efficiency and the branching ratio. The resulting invariant-mass spectrum
in the charmonium mass region is displayed in Fig. 7.
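Event mixing builds pairs from tracks of different events, which by construction share no common parent, so the resulting spectrum reproduces the purely combinatorial background with much higher statistics. A minimal sketch (hypothetical lists of (E, px, py, pz) four-vectors):

```python
import itertools
import math

def inv_mass(p1, p2):
    """Invariant mass of the sum of two (E, px, py, pz) four-vectors."""
    e, px, py, pz = (a + b for a, b in zip(p1, p2))
    return math.sqrt(max(e * e - px * px - py * py - pz * pz, 0.0))

def mixed_background(events_pos, events_neg, max_pairs=10**6):
    """Combine positive and negative tracks from *different* events only."""
    masses = []
    for (i, pos), (j, neg) in itertools.product(
            enumerate(events_pos), enumerate(events_neg)):
        if i == j:
            continue  # skip same-event pairs: those may contain signal
        for p1 in pos:
            for p2 in neg:
                masses.append(inv_mass(p1, p2))
                if len(masses) >= max_pairs:
                    return masses
    return masses
```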
Fig. 7. Invariant mass spectra of J/ψ and ψ′ mesons for central Au+Au collisions at 25 AGeV
The spectrum corresponds to 10¹¹ central gold-gold collisions at 25 AGeV, or roughly
28 hours of beam time at the full CBM interaction rate.
Table 1. Multiplicity, branching ratio, signal-to-background ratio, reconstruction efficiency
and mass resolution for J/ψ and ψ′ in central Au+Au collisions at 25 AGeV

      Multiplicity   Br. ratio    S/B       Efficiency   Mass resolution
J/ψ   1.92 × 10⁻⁵    0.06         ~ 2       0.19         24 MeV
ψ′    2.56 × 10⁻⁷    8.8 × 10⁻³   ~ 0.043   0.19         25 MeV
Conclusion
The CBM detector allows collecting about 3150 J/ψ and 1.4 ψ′ per hour, with
signal-to-background ratios of about 2 and 0.04 respectively, at a 10 MHz interaction rate.
The simulations were performed using a realistic detector setup. Complete electron
identification including the RICH, TRD and TOF detectors was used. We conclude that the
feasibility of J/ψ and even ψ′ measurements in central collisions of heavy ions with CBM
looks promising. A 25 μm gold target will be used in order to reduce γ-conversion. The study
will be continued with the new STS, RICH, TRD and TOF geometries.
References
[1] Compressed Baryonic Matter in Laboratory Experiments. The CBM Physics Book, 2011,
http://www.gsi.de/forschung/fair_experiments/CBM/PhysicsBook.html
[2] Compressed Baryonic Matter Experiment. Technical Status Report, GSI, Darmstadt, 2005,
http://www.gsi.de/onTEAM/dokumente/public/DOC-2005-Feb-447 e.html
[3] S. Gorbunov and I. Kisel. Reconstruction of Decayed Particles Based on the Kalman Filter.
CBM-SOFT-note-2007-003, http://www.gsi.de/documents/DOC-2007-May-14.html
[4] O. Derenovskaya. 17th CBM collaboration meeting, Dresden, Germany, 2011.
[5] http://www-hades.gsi.de/computing/pluto/html/PlutoIndex.html
[6] M. Bleicher, E. Zabrodin, C. Spieles et al. Relativistic Hadron-Hadron Collisions in the
Ultra-Relativistic Quantum Molecular Dynamics Model (UrQMD). J. Phys. G 25, 1999,
p. 1859.
Acquisition Module for Nuclear and Mössbauer Spectroscopy
L. Dimitrov, I. Spirov, T. Ruskov
Institute for Nuclear Research and Nuclear Energy, Bulgarian Academy of Sciences, Bulgaria
1. Introduction
Until recently, eight-bit microcontrollers (MCUs) were dominant in industrial
control systems, as well as in portable measuring devices, including those in different nuclear
applications. Sharp decreases in price and power consumption have led to significant growth
in the use of 16-bit and, recently, of 32-bit high-performance MCUs. This process is
facilitated by the fact that developers can benefit from low-cost development tools and
largely available free software.
In this paper a PC-controlled, USB-powered portable acquisition module for nuclear
and Mössbauer spectroscopy is described. Numerous modes of operation are possible:
- 4k-channel pulse height spectrum analyzer using a 12-bit successive approximation
ADC with 5 μs conversion time (with or without sliding scale linearization),
- 1k-channel pulse height spectrum analyzer using a 10-bit successive approximation
ADC with 1 μs conversion time (with or without sliding scale linearization),
- up to 8k-channel pulse height spectrum analyzer using an external high-performance
spectroscopy ADC,
- 256- to 2k-channel (in steps of 256 channels) Mössbauer spectrum analyzer (multiscaler),
- single channel analyzer,
- double counter,
- double ratemeter,
- timer.
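In the pulse height analyzer modes the firmware essentially turns each ADC conversion into a histogram increment. A simplified sketch of that bookkeeping (illustrative only; the module's actual firmware runs on the MCU, not in Python):

```python
def pulse_height_analyzer(adc_samples, n_channels=4096):
    """Accumulate a pulse height spectrum: each 12-bit ADC conversion
    (0..4095) increments the corresponding channel of the histogram."""
    spectrum = [0] * n_channels
    for value in adc_samples:
        if 0 <= value < n_channels:  # discard out-of-range conversions
            spectrum[value] += 1
    return spectrum

# Example: three pulses landing in channels 100, 100 and 2048
hist = pulse_height_analyzer([100, 100, 2048])
print([(ch, n) for ch, n in enumerate(hist) if n])  # [(100, 2), (2048, 1)]
```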
2. General Description
The block diagram of the module is shown in Fig. 1. The module has one analog input
(Ain, BNC connector), two multipurpose digital inputs (Din1 and Din2, BNC connectors),
one type B USB connector and one 26-pin header for an external spectroscopy ADC. Except
for the PZ adjustment, the whole module control is carried out by the MCU, mainly through
the PCI interface functions:
- selecting the mode of operation,
- Spectroscopy Amplifier gain adjustment and input polarity selection,
- reading both the external 12-bit ADC and the internal 10-bit ADC,
- setting voltage levels in the Single Channel Analyzer,
- loading the sliding scale voltage value into the Sliding Scale DAC,
- configuring the Control Logic for the selected mode of operation.
2.1. Power Supplies and References
All voltage supplies (3.3 V for the MCU and logic, 5 V for the linear part of the module)
and all reference voltages are derived from the 5 V USB voltage. The MCU manages biasing
voltages depending on the current mode of operation, thus reducing power consumption. When
no measurement is running, all power except that required for the MCU is switched off.
Fig. 1. Block diagram
2.2. Spectroscopy amplifier (SA)
The SA has four stages. The input stage consists of a passive differentiator with pole-
zero cancellation circuitry and an operational amplifier with unity gain. A two-way analog
switch controlled by the MCU is used to select the appropriate input polarity. The input stage
is followed by two programmable-gain operational amplifiers with passive integrators in
front. The SA has fixed time constants of 1 μs. The total gain can be set by the MCU from 1
to 1024 in twenty steps. The last stage includes an output buffer operational amplifier and a
baseline restorer. All operational amplifiers are of a rail-to-rail, low-power type, supplied by a
single 5 V source.
Fig. 2. Spectroscopy amplifier
2.3. Peak Detector (PD)
The Peak Detector samples the peak value of the pulse from the SA and at the same
time issues a Ready pulse. The sampled voltage (Vpeak) is held until it is sampled by either
the 12-bit or the 10-bit ADC, after which the discharge of the sampling capacitor is started.
2.4. Single Channel Analyzer
While measuring both pulse height and Mössbauer spectra, only input pulses with
amplitudes between a predetermined lower and upper limit are allowed to be processed.
2.5. Sliding Scale DAC
The 12-bit DAC delivers the voltage for sliding scale linearisation. Simultaneously
with reading ADC12, the MCU loads the sliding scale DAC, thus avoiding the introduction of
additional dead time.
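Sliding scale linearisation reduces the ADC's differential non-linearity by adding a random DAC offset to the input before conversion and subtracting the same offset digitally afterwards, so successive pulses of equal height exercise different ADC codes. A schematic sketch under these assumptions (ideal-ADC model; not the module's firmware):

```python
import random

def sliding_scale_conversion(v_in, adc, dac_bits=12, adc_bits=12, v_ref=4.096):
    """Convert v_in with a random sliding-scale offset.

    adc: function mapping a voltage to an integer code (the real ADC).
    The random DAC offset is added before conversion and subtracted
    from the result, averaging out code-width non-uniformities.
    """
    lsb = v_ref / (1 << adc_bits)
    offset_code = random.randrange(1 << (dac_bits - 2))  # keep headroom
    raw = adc(v_in + offset_code * lsb)
    return raw - offset_code

# Example with an ideal ADC model: the corrected code stays ~1000
ideal_adc = lambda v: int(v / (4.096 / 4096))
print(sliding_scale_conversion(1.000, ideal_adc))
```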
2.6. Control logic
The whole module logic, including the Control Logic, is implemented in one CPLD chip.
The Control Logic is configured by the MCU depending on the selected mode of operation.
2.7. MCU
The powerful 32-bit, 80 MHz MCU is the core of the module. Due to its enhanced
features, multiple power management modes and multiple interrupt vectors with individually
programmable priorities, the MCU allows flexibility and efficiency at a low power
consumption of about 40 mA for the whole module.
Acknowledgments
The financial support of National Fund contract DTK-02/77/09 is gratefully
acknowledged.
Business Processes in the Context of Grid and SOA
V. Dimitrov
Faculty of Mathematics and Informatics, University of Sofia, Bulgaria
The term business process has a very broad meaning. In this paper the term is investigated in the context
of Grid computing and Service-Oriented Architecture. A business process in this context is a composite Web
service written in WS-BPEL and executed by a specialized Web service called an orchestrator. Grid computing
is intended to be an environment for business processes of this kind and, even more, this environment is to be
implemented as Web services. The problems in the implementation of business processes with WS-BPEL in a
Grid environment are investigated and discussed.
Business Process and Web Services
What is a business process? OMG in [1] defines a business process as "A defined set of
business activities that represent the steps required to achieve a business objective. It includes
the flow and use of information and resources." These business activities are sometimes called
tasks.
A task can be a local one (implemented as part of the business process) or an external
one (available for reuse by other processes). In the latter case, the task is specified as a Web
service, following the OASIS WS-* specifications. The process is a Web service too, so it is
available as a Web service to other processes. A process can have a sub-process as a step,
but it is implemented as a part of the enclosing process.
Web services can be simple or composite. The former are implemented in some
programming language. The latter are implemented as compositions of other Web services,
i.e. as business processes. Sophisticated hierarchies of composite Web services can be
implemented at different abstraction levels.
The business process has a business objective. This business objective could be a
scientific one, which means that a business process can be classified as a scientific business
process. How the business process achieves its objective has to be measured. This
measurement can be implemented via monitoring of the Key Performance Indicators (KPIs)
of the business process. So a monitoring subsystem is an essential part of the Business Process
Management System (BPMS), the execution environment of the business process.
The business process is defined by its workflow: all possible sequences of execution
of its steps (tasks). Every task needs resources and possibly information to run. These resources
and information have to be provisioned to the process for its execution. The information can be
local (process state) or derived from an external source (i.e. a Web service). In this connection,
Web services supporting local state are called stateful, to distinguish them from the stateless
Web services that do not support conversational state.
A business process can be specified in BPMN or in WS-BPEL [2]. A BPMN specification
is at a higher level of abstraction and does not contain any implementation information. WS-BPEL
specifies a business process as a composite Web service of other Web services, as presented above.
It is possible for a BPMN specification to be directly interpreted and executed in a business process
execution environment (IBM WebSphere Lombardi [5]), but usually it is translated into a WS-BPEL
specification that is interpreted by a specialized Web service called an orchestrator (IBM
WebSphere Business Modeler & Process Server [6, 7]). The first approach is used for fast
prototyping of business processes developed from scratch. The second approach is used for the
development of sophisticated business processes integrating reusable software exposed as Web
services.
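Stripped of the WS-BPEL machinery, the orchestrator's job can be pictured as invoking partner services in the order the process definition prescribes and threading the process state between them. A toy sketch with hypothetical service stubs (a real orchestrator issues SOAP/HTTP calls described by WSDL and a WS-BPEL document):

```python
# Each "Web service" is stubbed as a callable; the names and fields below
# are invented for illustration only.
def credit_check(order):      # partner service 1 (hypothetical)
    return {**order, "credit_ok": order["amount"] < 10_000}

def reserve_stock(order):     # partner service 2 (hypothetical)
    return {**order, "reserved": order["credit_ok"]}

def orchestrate(order, steps):
    """Execute the composite service: run each step in sequence,
    threading the process state (the 'variables') through the flow."""
    state = dict(order)
    for step in steps:
        state = step(state)
    return state

print(orchestrate({"amount": 250}, [credit_check, reserve_stock]))
```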
Business Process Management
Business process management (BPM) is a holistic management approach [3] focused
on aligning all aspects of an organization with the wants and needs of clients. It promotes
business effectiveness and efficiency while striving for innovation, flexibility, and integration
with technology. BPM attempts to improve processes continuously. It can therefore be
described as a "process optimization process." It is argued that BPM enables organizations to
be more efficient, more effective and more capable of change than a functionally focused,
traditional hierarchical management approach.
BPM is the business process development process. Its life cycle consists of vision, design,
modeling, execution, monitoring, and optimization.
Vision. Functions are designed around the strategic vision and goals of an organization. Each
function is attached with a list of processes. Each functional head in an organization is
responsible for certain sets of processes made up of tasks which are to be executed and
reported as planned. Multiple processes are aggregated to accomplish a given function
and multiple functions are aggregated to achieve organizational goals.
Design. It encompasses both the identification of existing processes and the design of "to-be"
processes. Areas of focus include representation of the process flow, the actors within it, alerts
& notifications, escalations, Standard Operating Procedures, Service Level Agreements, and
task hand-over mechanisms. Good design reduces the number of problems over the lifetime of
the process. Whether or not existing processes are considered, the aim of this step is to ensure
that a correct and efficient theoretical design is prepared. The proposed improvement could be
in human-to-human, human-to-system, and system-to-system workflows, and might target
regulatory, market, or competitive challenges faced by the businesses.
Modeling. Modeling takes the theoretical design and introduces combinations of variables
(e.g., changes in rent or materials costs) to determine how the process might operate under
different circumstances. It also involves running "what-if" analysis on the processes.
Execution. One of the ways to automate processes is to develop or purchase an application
that executes the required steps of the process; however, in practice, these applications rarely
execute all the steps of the process accurately or completely. Another approach is to use a
combination of software and human intervention; however this approach is more complex,
making the documentation process difficult. As a response to these problems, software has
been developed that enables the full business process to be defined in a computer language
which can be directly executed by the computer. The system will either use services in
connected applications to perform business operations or, when a step is too complex to
automate, will ask for human input. Compared to either of the previous approaches, directly
executing a process definition can be more straightforward and therefore easier to improve.
However, automating a process definition requires flexible and comprehensive infrastructure,
which typically rules out implementing these systems in a legacy IT environment. Business
rules have been used by systems to provide definitions for governing behavior, and a business
rule engine can be used to drive process execution and resolution.
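As a rough illustration of such rule-driven execution, the following sketch implements a deliberately tiny rule engine in Python; the claim-handling rules and the threshold are invented for the example and do not come from any particular BPM product.

    # A sketch of rule-driven process execution: each business rule is a
    # (condition, action) pair evaluated against the process state until
    # no rule fires any more.
    def run_rules(state: dict, rules: list) -> dict:
        fired = True
        while fired:
            fired = False
            for condition, action in rules:
                if condition(state):
                    action(state)
                    fired = True
                    break   # re-evaluate from the top after each state change
        return state

    # Example rules for a claim-handling process (names are illustrative).
    rules = [
        (lambda s: s["step"] == "received" and s["amount"] < 1000,
         lambda s: s.update(step="auto_approved")),
        (lambda s: s["step"] == "received" and s["amount"] >= 1000,
         lambda s: s.update(step="manual_review")),  # too complex: human input
    ]

    print(run_rules({"step": "received", "amount": 250}, rules))
    # {'step': 'auto_approved', 'amount': 250}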
Monitoring. Monitoring encompasses the tracking of individual processes, so that
information on their state can be easily seen, and statistics on the performance of one or more
processes can be provided. An example of the tracking is being able to determine the state of
a customer order so that problems in its operation can be identified and corrected. In addition,
this information can be used to work with customers and suppliers to improve their connected
processes. These measures tend to fit into three categories: cycle time, defect rate and
productivity. The degree of monitoring depends on what information the business wants to
evaluate and analyze, and on how the business wants it monitored: in real time, near real time or ad hoc. Here, business activity monitoring (BAM) extends and expands the monitoring
tools generally provided by BPMS. Process mining is a collection of methods and tools
related to process monitoring. The aim of process mining is to analyze event logs extracted
through process monitoring and to compare them with an a priori process model. Process
mining allows process analysts to detect discrepancies between the actual process execution
and the a priori model as well as to analyze bottlenecks.
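The following Python sketch illustrates the conformance-checking idea on a toy scale: an a priori model, given as a set of allowed task transitions, is compared with traces from an event log. Real process-mining tools are far richer; the task names here are invented.

    # A minimal conformance-checking sketch: flag every transition in a
    # logged trace that the a priori model does not allow, and count
    # activity frequencies as a crude hint at load concentration.
    from collections import Counter

    allowed = {("receive", "check"), ("check", "approve"), ("check", "reject")}

    def deviations(trace):
        """Yield every transition in the trace that the model forbids."""
        for a, b in zip(trace, trace[1:]):
            if (a, b) not in allowed:
                yield (a, b)

    log = [
        ["receive", "check", "approve"],
        ["receive", "approve"],           # skips the mandatory check
    ]
    for trace in log:
        print(trace, "deviations:", list(deviations(trace)))

    # Frequency of activities across the log.
    print(Counter(task for trace in log for task in trace))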
Optimization. Process optimization includes retrieving process performance information from the modeling or monitoring phase; identifying the potential or actual bottlenecks and the potential opportunities for cost savings or other improvements; and then applying those enhancements in the design of the process. Overall, this creates greater business value.
Re-engineering. When the process becomes too noisy and optimization does not fetch the desired output, it is recommended to re-engineer the entire process cycle. Business process re-engineering (BPR) has become an integral part of manufacturing organizations seeking to achieve efficiency and productivity at work.
Workflow
The definition of this term is given in [4] as follows: Workflow is concerned with the
automation of procedures where documents, information or tasks are passed between
participants according to a defined set of rules to achieve, or contribute to, an overall business
goal. Whilst workflow may be manually organized, in practice most workflow is normally
organized within the context of an IT system to provide computerized support for the
procedural automation and it is to this area that the work of the Coalition is directed.
Workflow is associated with business process re-engineering.
Workflow is the computerized facilitation or automation of a business process, in whole or part. This means that workflow is a more specialized term than business process: the latter is not concerned only with automation, and for that reason the term business process is preferred here to workflow.
A Workflow Management System is a system that completely defines, manages and executes workflows through the execution of software whose order of execution is driven by a computer representation of the workflow logic. All WFM systems may be characterized as providing support in three functional areas:
the Build-time functions, concerned with defining, and possibly modeling, the
workflow process and its constituent activities,
the Run-time control functions concerned with managing the workflow processes in an
operational environment and sequencing the various activities to be handled as part of
each process,
the Run-time interactions with human users and IT application tools for processing the
various activity steps.
SOA
Service-Oriented Architecture (SOA), according to [8], is the architectural solution for integrating diverse systems by providing an architectural style that promotes loose coupling and reuse. SOA is an architectural style, which means that it is not a technology. The fundamental constructs of SOA are the service (a logical, self-contained business function), the service provider and the service consumer. A service is specified by an implementation-independent interface, and the interaction between service consumer and service provider is based only on this interface.
As an architectural style, SOA imposes some requirements on services:
Stateless. SOA services neither remember the last thing they were asked to do nor care what the next one will be. Services are not dependent on the context or state of other services, only on their functionality. Each request or communication is discrete and unrelated to the requests that precede or follow it;
Discoverable. A service must be discoverable by potential consumers of the service.
After all, if a service is not known to exist, it is unlikely ever to be used. Services are
published or exposed by service providers in the SOA service directory, from which
they are discovered and invoked by service consumers;
Self-describing. The SOA service interface describes, exposes, and provides an entry
point to the service. The interface contains all the information a service consumer
needs to discover and connect to the service, without ever requiring the consumer to
understand (or even see) the technical implementation details;
Composable. SOA services are, by nature, composite. They can be composed from
other services and, in turn, can be combined with other services to compose new
business solutions;
Loose coupling. Loose coupling allows the concerns of application features to be
separated into independent pieces. This separation of concern provides a mechanism
for one service to call another without being tightly bound to it. Separation of concerns
is achieved by establishing boundaries, where a boundary is any logical or physical
separation that delineates a given set of responsibilities;
Governed by policy. Services are built by contract. Relationships between services
(and between services and service domains) are governed by policies and service-level
agreements (SLAs), promoting process consistency and reducing complexity;
Independent location, language, and protocol. Services are designed to be location
transparent and protocol/platform independent (generally speaking, accessible by any
authorized user, on any platform, from any location);
Coarse-grained. Services are typically coarse-grained business functions. Granularity is a statement of functional richness for a service: the more coarse-grained a service is, the richer the function offered by the service. Coarse-grained services reduce complexity for system developers by limiting the steps necessary to fulfill a given business function, and they reduce strain on system resources by limiting the chattiness of the electronic conversation. Applications by nature are coarse-grained because they encompass a large set of functionality; the components that comprise applications would be fine-grained;
Asynchronous. Asynchronous communication is not required of an SOA service, but
it does increase system scalability through asynchronous behavior and messaging
techniques. Unpredictable network latency and high communications costs can slow
response times in an SOA environment, due to the distributed nature of services.
Asynchronous behavior and messaging allow a service to issue a service request and
then continue processing until the service provider returns a response.
The most popular technological implementation of SOA is Web services. Starting from version 1.1, Web services specifications have evolved to capture more of SOA. Not all of the above-mentioned requirements are directly supported by Web services, but SOA can still be followed at design time. Some of the requirements are not even desirable in some environments (e.g., stateless services in a Grid); that is why SOA is an architectural style and not a technology.
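For illustration, the sketch below shows a stateless, self-describing service written with the Python standard library only: each request carries everything needed to answer it, and no conversational state is kept between calls. The resource paths, the response fields and the port are invented for the example.

    # A stateless, self-describing toy service. The /describe resource plays
    # the role of the published interface; /price answers purely from the
    # request, keeping no state between calls.
    import json
    from http.server import BaseHTTPRequestHandler, HTTPServer

    class QuoteService(BaseHTTPRequestHandler):
        def do_GET(self):
            if self.path == "/describe":
                body = {"operations": {"/price?item=...": "returns a price"}}
            elif self.path.startswith("/price"):
                item = self.path.split("=", 1)[-1]
                body = {"item": item, "price": 42.0}  # computed per request
            else:
                self.send_error(404)
                return
            data = json.dumps(body).encode()
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.end_headers()
            self.wfile.write(data)

    if __name__ == "__main__":
        HTTPServer(("localhost", 8080), QuoteService).serve_forever()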
Web services specifications have, moreover, been modified to capture Grid requirements. The most important result of this initiative is the Web Services Resource Framework (WSRF), an extension to Web services. More details on these extensions are discussed below.
OGSA
Open Grid Services Architecture (OGSA) as specified in [9]: OGSA realizes the
logical middle layer in terms of services, the interfaces these services expose, the individual
and collective state of resources belonging to these services, and the interaction between these
services within a service-oriented architecture (SOA). The services are built on Web service
standards, with semantics, additions, extensions and modifications that are relevant to Grids.
This means that OGSA-compliant Grids are SOA-compliant and based on Web services. The core WS-* specifications do not include orchestration of Web services, but the full power of SOA can be achieved only with WS-BPEL orchestrators. This is well understood by the main SOA vendors, and they usually offer several variants of orchestration. Why is it so important for SOA suites to have an orchestrator service? Because with an orchestrator, reusability of services is accomplished very well, and it becomes simple to compose new Web services into hierarchies at different abstraction levels.
In reality, OGSA-compliant Grids are still far away. Even the OGSA specification continues to visualize the Grid as a mega batch computer; there is a terminology mismatch in OGSA between the past and the future. We hope this will be overcome in the next versions of OGSA, as OGF and the Web services community are working very closely together. Today, OGF uses the WS-* specifications, deriving its own profiles for OGSA rather than extending them. In these profiles the word MAY is mainly changed to MUST for Grids.
Our focus here is on the business processes. What is specified in OGSA on that topic?
Many OGSA services are expected to be constructed in part, or entirely, by invoking other
services; the EMS Job Manager is one such example. There are a variety of mechanisms that
can be used for this purpose.
Choreography. Describes required patterns of interaction among services, and templates for sequences (or more complex structures) of interactions.
Orchestration. Describes the ways in which business processes are constructed from Web services and other business processes, and how these processes interact.
Workflow. Defines a pattern of business process interaction, not necessarily corresponding to a fixed set of business processes. All such interactions may be between services residing within a single data center or across a range of different platforms and implementations anywhere.
OGSA, however, will not define new mechanisms in this area. It is expected that
developments in the Web services community will provide the necessary functionality. The
main role of OGSA is therefore to determine where existing work will not meet Grid
architecture needs, rather than to create competing standards.
In other words, OGSA does not say anything about the central competition issue of SOA platform vendors; OGF waits for solutions from the Web services world.
Some Considerations and Conclusion
There are two important things that have to be mentioned when we talk about service orientation of Grids. The first is that a Grid may have a service-oriented implementation, but this does not necessarily mean that it is a service-oriented environment for the execution of service-oriented solutions. The second is that an environment can be a service-oriented environment for the execution of service-oriented solutions without the environment itself having to be implemented in a service-oriented way.
OGSA specifies a service-oriented architecture for Grid implementation, but it does not necessarily specify the Grid as a service-oriented platform supporting the execution and development of service-oriented solutions. It is extremely primitive to consider a batch job as a business process and job tasks as services, as is done by some authors. What is the difference? Services have a fixed location, even in the case of the WSRF extension. This means that they are installed, configured, managed and supported at given locations, and every service has an owner responsible for it. A service needs a supporting execution environment; it is not possible for a service to be executed on any free computing resource. That would be the same as expecting a program written in a high-level programming language to be executed directly on the computer without compiling. A service-oriented solution is like a program written in a high-level programming language, whereas a batch job is like a machine-code program. The ideas are the same, but the technologies, the tools and, most importantly, programmer productivity are extremely different.
The next question is how business processes could be implemented in Grids, i.e. how a Grid could become a service-oriented platform. In practice, SOA platforms are very different from what is claimed by their vendors. SOA solutions running on one platform in one data center are really execution-optimal. When a business process has to access a remote Web service, it has to perform intensive XML interchange with that service. This problem can be solved only with specialized hardware solutions that off-load XML processing and security protection tasks from the servers. When a business process is running on one server, it is translated into a simple object-oriented program in Java or C#.
It is clear by now that the business process specification for Grids will be WS-BPEL, possibly with some Basic Profiles. The most important element is the orchestrator Web service. Nowadays we have many orchestrators from different vendors, all of them using WS-BPEL with some extensions. There is no WS-* specification for an orchestration service, and no vendor has any intention of specifying one. The situation is like that of the pre-Internet era: many vendors supplied incompatible proprietary network solutions, while standardization organizations tried to develop commonly accepted networking standards. The Internet set out to create a network of networks, and then it happened that its protocols became the local ones as well. That is why today OGF has to try to establish an orchestrator Web service specification, an orchestrator of the orchestrators; no vendor is going to do that.
One remark on the business process specification language: some researchers have tried to use specification languages other than WS-BPEL, arguing that it is not human-friendly; yet there is no BPEL engine that lacks tools for a graph representation of WS-BPEL, which is really human-friendly. Some of these researchers even argue that Petri nets, as a simple formal technique, are better than WS-BPEL. First, WS-BPEL already uses Petri nets (links), and in my opinion this is the worst part of the specification; my experience shows that most of the bad errors result from links. Petri nets are a good formal technique; their power is that their expressive power is greater than that of finite state automata and less than that of a Turing machine. Extending Petri nets to Turing-machine expressive power is therefore nonsense, but that is the way business processes are composed with Petri nets in this research.
There are two scenarios for a BPEL orchestrator. In the first one, the orchestrator engine is located in the Grid; in this case, all Grid functionality is fully available to the engine. At this point we have to mention some performance issues. SOA solutions can be high-performance or not; this is achieved mainly through the SOA architect's efforts and is not a matter of automatic optimization. How is this done? The orchestrator has its own supporting infrastructure, located at the same site as the orchestrator. Most of the services used in the business process composition are then local to that site, and only some of the services are remote. Think of data as services, the standard SOA approach. The data exchange among the services of the business process is then optimized and supported within the local site by a high-performance local area network and other applicable techniques. Interactions with remote services are slow, but acceptable. The business process is an optimal solution developed by the SOA architect; it is not a subject of automatic optimization.
In the first scenario, the orchestrator and its entire infrastructure are part of the Grid. This means that the orchestrated Web services are located in the Grid at the orchestrator's site. Some of them could be exposed outside the Grid, but they are mainly for consumption inside the Grid. Every site could have a different BPEL orchestrator exposing Web services. The overall compatibility of these orchestrator engines within the Grid has to be defined using a Basic Profile.
In the second scenario, the orchestrator engine is outside the Grid. This means that its business process can use some Grid services as remote services, as mentioned above. These processes could be, for example, results-visualization ones. In this scenario the main problem is the security issue: how Grid services are to be accessed from the outside. Not that this problem does not exist inside the Grid, but in this situation it is more difficult. The WS-Security framework could be used, but it has to be part of the Grid functionality.
One final remark: do not think of the Grid as a mega batch engine. This old vision still exists in OGSA; think of the Grid instead as an ocean of services. These services are located at fixed Grid sites; they are exposed and can be used. What is the purpose of this remark? Many researchers continue to view the Grid in the old-fashioned way, and as a result their efforts are mainly directed at encapsulating the job batch engine as a Web service. For them the task is a task in the job, not in the Web service. Even the business process is compiled to a batch job that is sent for execution in the Grid. This is not OGSA.
Acknowledgments
This research is supported by grant 02/22/20.10.2011, funded by the Bulgarian Science Fund.
References
[1] OMG, Business Process Model and Notation (BPMN). Version 2.0, 3 January 2011,
http://www.omg.org/spec/BPMN/2.0
[2] OASIS, Web Services Business Process Execution Language Version 2.0, OASIS Standard,
11 April 2007, http://docs.oasis-open.org/wsbpel/2.0/OS/wsbpel-v2.0-OS.pdf
[3] J. vom Brocke & M. Rosemann (eds.). Handbook on Business Process Management: Strategic Alignment, Governance, People and Culture (International Handbooks on Information Systems). Vol. 1. Berlin: Springer, 2010.
[4] Workflow Management Coalition. The Workflow Reference Model. 19 January 1995,
http://www.wfmc.org
[5] IBM, WebSphere Lombardi Edition, http://www-01.ibm.com/software/integration/lombardi-
edition
[6] IBM, WebSphere Business Modeler Advanced,
http://www-01.ibm.com/software/integration/webphere-business-modeler/advanced/features
[7] IBM, WebSphere Process Server, http://www-01.ibm.com/software/integration/wps
[8] K. Holley, A. Arsanjani. 100 SOA Questions, Asked and Answered. Pearson Education Inc., published by Prentice Hall, Upper Saddle River, NJ, 2011.
[9] OGF, GFD-I.080, The Open Grid Services Architecture, Version 1.5, 24 July 2006,
http://www.ogf.org/documents/GFD.80.pdf
ATLAS TIER 3 in Georgia
A. Elizbarashvili
Ivane Javakhishvili Tbilisi State University, Georgia
PC farm for ATLAS Tier 3 analysis
Arrival of ATLAS data is imminent. If experience from earlier experiments is any guide, it is very likely that many of us will want to run analysis programs over a set of data many times. This is particularly true in the early period of data taking, when many things need to be understood. It is also likely that many of us will want to look at rather detailed information in the first data, which means large data sizes. Couple this with the large number of events we would like to look at, and the data analysis challenge appears daunting.
Of course, Grid Tier 2 analysis queues are the primary resources to be used for user
analyses. On the other hand, it is the usual experience from previous experiments that analyses progress much more rapidly once the data can be accessed under local control, without the overhead of a large infrastructure serving hundreds of people.
However, even as recently as five years ago it was prohibitively expensive (both in terms of money and people) for most institutes not already associated with a large computing infrastructure to set up a system to process a significant amount of ATLAS data locally. This has changed in recent years. It is now possible to build a PC farm with significant ATLAS data processing capability for as little as $5-10k and a minor commitment for setup and maintenance, thanks to the recent availability of relatively cheap large disks and multi-core processors.
Let us do some math. 10 TB of data corresponds roughly to 70 million Analysis Object Data (AOD) events or 15 million Event Summary Data (ESD) events. To set the scale, 70 million events correspond approximately to a 10 fb⁻¹ sample of jets above 400-500 GeV in pT and a Monte Carlo sample which is 2.5 times as large as the data. Now, a relatively inexpensive processor such as the Xeon E5405 can run a typical Athena analysis job over AODs at about 10 Hz per core. Since the E5405 has 8 cores per processor, 10 processors will be able to handle 10 TB of AODs in a day. Ten PCs is affordable. The I/O rate, on the other hand, is a problem. We need to process something like 0.5 TB of data every hour, which means we need to ship ~1 Gbit of data per second. Most local networks have a theoretical upper limit of 1 Gbps, with actual performance being quite a bit below that. An adequate 10 Gbps network is prohibitively expensive for most institutes.
Enter distributed storage. Fig. 1A shows the normal cluster configuration where the
data is managed by a file server and distributed to the processors via a Gbit network. Its
performance is limited by the network speed and falls short of our requirements. Today,
however, we have another choice, due to the fact that we can now purchase multi-TB size
disks routinely for our PCs. If we distribute the data among the local disks of the PCs, we
reduce the bandwidth requirement by the number of PCs. If we have 10 PCs (10 processors
with 8 cores each), the requirement becomes 0.1 Gbps. Since the typical access speed for a
local disk is > 1 Gbps, our needs are safely under the limit. Such a setup is shown in Fig. 1B.
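The estimate above can be restated as a few lines of arithmetic; the following Python fragment simply recomputes the numbers quoted in the text.

    # The capacity estimate, restated as arithmetic (all inputs are the
    # figures quoted above).
    dataset_tb   = 10          # AOD sample size, TB
    rate_hz_core = 10          # Athena AOD analysis rate per core, Hz
    cores        = 10 * 8      # ten processors, 8 cores each
    events       = 70e6        # events in 10 TB of AODs

    hours = events / (rate_hz_core * cores) / 3600
    print(f"processing time: {hours:.1f} h")          # ~24 h for the farm

    # I/O: the same 10 TB must be read in about a day.
    gbit_per_s = dataset_tb * 8e3 / (24 * 3600)       # TB -> Gbit, over 24 h
    print(f"aggregate I/O: {gbit_per_s:.2f} Gbit/s")  # ~0.9 Gbit/s, near the
                                                      # practical limit of GbE
    # With the data spread over 10 local disks, each PC needs ~1/10 of that.
    print(f"per-PC I/O: {gbit_per_s/10:.2f} Gbit/s")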
First activities on the way to a Tier-3 center in the ATLAS Georgian group
The local computing cluster (14 CPUs, 800 GB HDD, 8-16 GB RAM, one workstation and 7 personal computers) has been constructed by Mr. E. Magradze and Mr. D. Chkhaberidze at the High Energy Physics Institute of Ivane Javakhishvili Tbilisi State University (HEPI TSU). The local computing cluster was created from the computing facilities at HEPI TSU with the aim of enhancing the available computational power (resources). The scheme of the cluster network is shown in Fig. 2.
The Search for and Study of Rare Processes Within and Beyond the Standard Model at the ATLAS Experiment of the Large Hadron Collider at CERN.
Fig. 1. A. Centralized data storage; B. Distributed data storage
Fig. 2. Scheme of cluster at High Energy Physics Institute of TSU
INTERNATIONAL SCIENCE & TECHNOLOGY CENTER (ISTC); Grant G-1458 (2007-
2010)
Leading Institution : Institute of High Energy Physics of I. Javakhishvili Tbilisi State
University (HEPI TSU), Georgia.
Participant Institution: Joint Institute for Nuclear Research (JINR), Dubna, Russia.
Participants from IHEPI TSU: L. Chikovani (IOP), G. Devidze (Project Manager),
T. Djobava, A. Liparteliani, E. Magradze,
Z. Modebadze, M. Mosidze, V. Tsiskaridze
Participants from JINR: G. Arabidze, V. Bednyakov, J. Budagov (Project
Scientific Leader),
E. Khramov, J. Khubua, Y. Kulchitski, I. Minashvili,
P. Tsiareshka
Foreign Collaborators: Dr. Lawrence Price, (Senior Physicist and former
Director of the High Energy Physics Division, Argonne
National Laboratory, USA)
Dr. Ana Maria Henriques Correia (Senior Scientific
Staff of CERN, Switzerland)
G-1458 Project Scientific Program:
1. Participation in the development and implementation of the Tile Calorimeter Detector
Control System (DCS) of ATLAS and further preparation for phase II and III
commissioning,
2. Test beam data processing and analysis of the combined electromagnetic liquid argon and hadronic Tile Calorimeter set-up exposed to electron and pion beams of 1-350 GeV energy from the SPS accelerator at CERN,
3. Measurements of the top quark mass in the dilepton and lepton+jet channels using the
transverse momentum of the leptons with the ATLAS detector at LHC/CERN,
4. Search for and study of the FCNC top quark rare decays t → Zq and t → Hq (where q = u, c and H is a Standard Model Higgs boson) at the ATLAS experiment (LHC),
5. Theoretical studies of the prospects of the search for traces of large extra dimensions at the ATLAS experiment in FCNC processes,
6. Study of the possibility of a Supersymmetry observation at ATLAS in the mSUGRA
predicted process gg for EGRET point.
ATLAS Experiment Sensitivity to New Physics
Georgian National Scientific Foundation (GNSF); Grant 185
Participating Institutions:
Leading Institution: Institute of High Energy Physics of I. Javakhishvili Tbilisi State University (HEPI TSU), Georgia
Participant Institution: E. Andronikashvili Institute of Physics (IOP)
Participants from IHEPI TSU: G. Devidze (Project Manager), T. Djobava (Scientific
Leader), J. Khubua, A. Liparteliani, Z. Modebadze,
M.Mosidze, G. Mchedlidze, N. Kvezereli
Participants from IOP: L. Chikovani, V. Tsiskaridze, M. Devsurashvili,
D. Berikashvili, L. Tepnadze, G. Tsilikashvili,
N. Kakhniashvili
The cluster was constructed on the basis of the PBS (Portable Batch System) software on the Linux platform, and the Ganglia software was used for monitoring. All nodes were interconnected using gigabit Ethernet interfaces.
The required ATLAS software was installed on the working nodes in the SLC 4 environment.
The cluster has been tested with a number of simple tests, and tasks studying various processes of top quark rare decays via Flavor Changing Neutral Currents, t → Zq (q = u, c quarks), t → Hq → bb̄q and t → Hq → WW*q (in top-antitop pair production), have been run on the cluster. Signal and background process generation, fast and full simulation, reconstruction and analysis have been done in the framework of the ATLAS experiment software ATHENA (L. Chikovani, T. Djobava, M. Mosidze, G. Mchedlidze).
PBS consists of four major components (the working model is shown in Fig. 3):
Commands: PBS supplies both command line commands and a graphical interface. These are used to submit, monitor, modify, and delete jobs. The commands can be installed on any system type supported by PBS and do not require the local presence of any of the other components of PBS. There are three classifications of commands: user commands, operator commands and manager commands,
Job Server: The Job Server is the central focus for PBS. Within this document, it is
generally referred to as the Server or by the execution name pbs_server. All
commands and the other daemons communicate with the Server via an IP network.
The Server's main function is to provide the basic batch services such as
receiving/creating a batch job, modifying the job, protecting the job against system
crashes, and running the job (placing it into execution),
Job executor: The job executor is the daemon which actually places the job into
execution. This daemon, pbs_mom, is informally called Mom as it is the mother of all
executing jobs,
Job Scheduler: The Job Scheduler is another daemon which contains the site's policy
controlling which job is run and where and when it is run. Because each site has its
own ideas about what is a good or effective policy, PBS allows each site to create its
own Scheduler.
Activities at the Institute of High Energy Physics of TSU (HEPI TSU):
the Athena software releases 14.1.0 and 14.2.21 were installed on that batch cluster,
the system was configured for running the software in batch mode, and the cluster has been used in some stages of the above-mentioned ISTC project,
the system has also served as file storage.
Example of PBS batchjob file for athena 14.1.0 (Fig. 4).
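The actual batch file used at HEPI TSU is the one shown in Fig. 4. For readers unfamiliar with PBS, the sketch below builds a generic job script with standard #PBS directives and submits it with qsub; the queue name, the resource request and the Athena setup line are placeholders, not the site's real configuration.

    # Compose a generic PBS job script and submit it with qsub.
    # The directives (-N, -q, -l, -j) are standard PBS options; the script
    # contents are illustrative placeholders.
    import subprocess, tempfile

    job = """#!/bin/bash
    #PBS -N athena_analysis
    #PBS -q batch
    #PBS -l nodes=1:ppn=1,walltime=12:00:00
    #PBS -j oe
    cd $PBS_O_WORKDIR
    source setup_athena.sh          # placeholder for the Athena 14.1.0 setup
    athena.py MyAnalysis_jobOptions.py
    """

    with tempfile.NamedTemporaryFile("w", suffix=".pbs", delete=False) as f:
        f.write(job)
        script = f.name

    # qsub prints the new job identifier on success.
    print(subprocess.run(["qsub", script],
                         capture_output=True, text=True).stdout)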
Fig. 3. PBS working schema
Plans to modernize the network infrastructure
It is planned to rearrange the created the existing computing cluster into ATLAS Tier
3 cluster. But first of all TSU must have the corresponding network infrastructure.
Nowadays the computer network of TSU comprises 2 regions (Vake and Saburtalo).
Each of these two regions is composed of several buildings (the first, second, third, fourth,
fifth, sixth and eighth in Vake, and Uptown building (tenth), institute of applied mathematics,
TSU library and Biology building (eleventh) in Saburtalo). Each of these buildings is
separated from each other by 100 MB optical network. The telecommunication between the
two regions is established through Internet provider the speed of which is 1 000 MB (Fig. 5).
Servers and controllable network facilities are predominantly located in the Vake region network. Electronic mail, domain systems, web hosting, databases, distance learning and other services are provided at TSU. Students, administrative staff members, academic staff members, and research and scientific units at TSU are the users of these servers. There are four Internet resource centers and several learning computer laboratories at TSU. The scientific research is also supported by network programs. The total number of users is 2500 PCs. The diversity of users brings a diversity of network protocols and demands maximum speed, security and manageability of the network.
Fig. 4. PBS batch job example
Fig. 5. TSU existing network topology
Initially, the TSU network consisted of only dozens of computers scattered throughout different faculties and administrative units. Besides, there was no unified administrative system, nor mechanisms for further development, design and implementation. This has resulted in a flat deployment of the TSU network.
This type of network:
Does not allow setting up sub-networks, and broadcast domains are hard to control.
Formation of Access Lists for various user groups is complicated.
It is hard to identify and eliminate damage in each separate network.
It is almost impossible to prioritize traffic and ensure quality of service (QoS).
Because there is no direct connection between the two above-mentioned regions, it is impossible to set up an Intranet at TSU. Under the existing conditions it would have been possible to set up an Intranet using VPN technologies; however, this would require equipment with special accelerators in order to establish a 200 Mbps connection, equipment that TSU does not possess.
The reforms in learning and scientific processes demand mobility and scalability of the computer network. This could be accomplished using VLAN technologies; however, here too the absence of suitable switches hinders implementation.
The planned modern network topology of TSU is shown in Fig. 6, together with the modern network equipment (Fig. 7) and the cable system structure for each building (Fig. 8).
Fig. 6. TSU planned network topology
With all of the above implemented, TSU will have a centralized, high-speed, secure and optimized network system.
Fig. 7. TSU planned network topology
Fig. 8. Network cable system structure for TSU buildings
Improving TSU informatics network security: traffic between the local and global networks will be controlled through network firewalls. The communications between sub-networks will be established through Access Lists.
Improving communication among TSU buildings: the main connections among the ten TSU buildings are established through fiber optic cables and Gigabit Interface Converters (GBIC). These facilities increase the bandwidth up to 1 Gbps.
Improving internal communication in every TSU building: internal communications will be established through layer-3 multiport switches that allow the so-called broadcasts to be reduced to a minimum by configuring local networks (VLANs). The bandwidth will increase up to 1 Gbps.
Providing network mobility and management: in administrative terms, it will be possible to monitor the general network performance as well as to provide prioritization analysis for each sub-network, post or server.
INSTALLING THE TIER 3g/s SYSTEM at TSU. The ATLAS Tier-3s model is shown in Fig. 9.
ATLAS Tier-3s
The minimal requirement is on local installations, which should be configured with Tier-3 functionality:
A Computing Element known to the Grid, in order to benefit from the automatic
distribution of ATLAS software releases:
Fig. 9. Atlas Tier-3s model
Needs >250 GB of NFS disk space mounted on all WNs for ATLAS software,
Minimum number of cores to be worth the effort is under discussion (~40?),
A SRM-based Storage Element, in order to be able to transfer data automatically from the
Grid to the local storage, and vice versa:
Minimum storage dedicated to ATLAS depends on local user community (20-40 TB?),
Space tokens need to be installed,
LOCALGROUPDISK (>2-3 TB), SCRATCHDISK (>2-3 TB), HOTDISK (2 TB),
Additional non-Grid storage needs to be provided for local tasks (ROOT/PROOF).
The local cluster should have the installation of:
A Grid User Interface suite, to allow job submission to the Grid,
ATLAS DDM client tools, to permit access to the DDM data catalogues and data transfer
utilities,
The Ganga/pAthena client, to allow the submission of analysis jobs to all ATLAS
computing resources.
The Tier 3g work model is shown in Fig. 10.
Fig. 10. Atlas Tier 3g work model
JINR document server: current status and future plans
I. Filozova, S. Kuniaev, G. Musulmanbekov, R. Semenov, G. Shestakova,
P. Ustenko, T. Zaikina
Joint Institute for Nuclear Research, Dubna, Russia
1. Introduction
Nowadays various institutions and universities around the world create their
own repositories, depositing there different kinds of scientific and educational documents
making them open for the world community. Open Access to Research a way to make
scientific results available to all scientific and educational community by the Internet. Fig. 1
shows the annual growth of the numbers of repositories and deposited in them records,
according to statistics given by the Registry of Open Access Repositories (ROAR
http://roar.eprints.org). Today there are near 2000 Open Access (OA) repositories with
scientific research documents created in the frameworks Open Archive Initiatives (OAI) [1].
Besides, such kind of initiative has been put forward in education, as well, Open Educational
Resources [2]. Open Access in science is way to collect, preserve the intellectual output of
scientific organization and disseminate it over the world. This is the aim of the Open Access
repository of the Joint Institute for Nuclear Research, JINR Document Server (JDS) which
started its functionality in 2009 [3, 4]. In this paper we describe some peculiarities of filling
and depositing documents, the possibilities of the visualization of search and navigation and
the ways of the further development of the information service at JDS.
Fig. 1. Growth of the repositories and records number over the world
2. JINR Document Server Collections
Building the institutional repository has as its objects to make accessible the scientific
and technical results of JINR researchers for the international scientific community, to
increase the level of informational support of JINR employees by granting an access to other
scientific OA archives and to estimate the efficiency of scientific activity of JINR employees. JDS
133
has been built as OAI-compliant repository to realize these goals. JDS functionality, supported by
the CDS Invenio software [5], covers all the aspects of modern digital library management.
JDS, integrated into the global network of repositories ROAR, makes its content available for
everyone anywhere at any time. The content of JDS is composed of the following objects:
1. Research and scientific-related documents of the following types:
publications issued in co-authorship with JINR researchers,
archive documents that describe all the essential stages of the JINR research activity,
2. Tutorials and various educational video, audio and text materials for students and young scientists,
3. Documents providing informational support for scientific and technological research performed at JINR.
As a digital library, JDS consists of two parts: digital collections and service tools. The objects stored in the repository are grouped into collections: published articles, preprints, books, theses, conference proceedings, presentations and talks, reports, dissertation abstracts, clippings from newspapers and magazines, photographs, and audio and video materials [6]. All these collections are arranged hierarchically into two trees: the basic, or regular, one and the virtual one. The basic tree (left column in Fig. 2) in JDS is formed according to a classification feature, the genre of scientific publications, while the virtual tree is formed by the subject of publications (right column in Fig. 2).
Narrow by collection (basic tree):
Articles & Preprints (32,736): Articles (14,867), Preprints (15,403), Conference Papers (3,248)
Books and Reports (1,653): Reports (166), Books (1,487)
Conferences, Presentations and Talks (19,427): Conference Announcements (4,099), Conference Proceedings (15,326), Lectures (0), Notes of Schools and Seminars (0), Talks (2), Notes (0)
INDICOSEARCH (0): INDICOSEARCH.events (0), INDICOSEARCH.contribs (0), INDICOPUBLIC (0)
Handbooks & Manuals (0)
Theses (8) [restricted]
Multimedia (3): Press (3), Audio (0), Videos (0), Pictures (0), Posters (0)
Bulletins (0): STL Bulletins (0)
Focus on (subject tree):
JINR Articles & Preprints (23,652): JINR Published Articles (12,744), JINR Preprints (13,640)
JINR Conferences (177)
JINR Annual Reports (137): JINR (12), VBLHEP (13), BLTP (19), FLNR (16), FLNP (28), DLNP (10), LIT (20), LRB (12), UC (7)
High Energy Experiments in JINR (179): FASA-3 (1), MARUSYA (3), EDBIZ (0), BECQUEREL (0), NUCLOTRON & NUCLOTRON-M (164), NIS (0), NICA/MPD (19)
Heavy Ion Physics Experiments in JINR (9): ACCULINNA (1), DRIBS (1), DRIBS-2 (0), CORSET-DEMON (0), MASHA (0), VASSILISSA (7)
Non-accelerator Neutrino Physics & Astrophysics (162): BAIKAL (106), EDELWEISS & EDELWEISS-II (10), GERDA (9), GEMMA & GEMMA-II (5), LESI (0), NEMO (32)
External Experiments (1,055): SPS (219), FAIR (59), RHIC (321), LHC (459)
Fig. 2. JDS Collections in basic and subject trees
The subject collections allow one to perform selective searches. It may be advantageous to present a different, orthogonal point of view on the nodes of the regular tree, based on other attributes. The developed user interface of JDS provides a wide range of information services: search and navigation, creation of groups by interest, saving of search results, individual and group bookshelves, deposit of manuscripts and arrangement of discussions on them, and sending out notices and messages.
The main point for a newly created repository is how to fill its content with documents of interest issued before. We used various methods of filling and updating the JDS content with the publications of JINR authors: automatic data collection (harvesting) from arXiv.org, the CERN Document Server (CDS) and other OAI-compliant archives, and semi-automatic collection of documents from the retrieval databases SPIRES, ADS and MathSciNet (Fig. 3).
Fig. 3. JDS Data Sources
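As an illustration of such harvesting, the following Python sketch walks an OAI-PMH repository with the standard ListRecords verb and resumption tokens, using only the standard library. arXiv's public OAI endpoint is used here merely as an example source; this is not the actual JDS harvesting code.

    # Minimal OAI-PMH harvesting: ListRecords plus resumption-token paging.
    import urllib.request
    import xml.etree.ElementTree as ET

    OAI = "{http://www.openarchives.org/OAI/2.0/}"
    BASE = "http://export.arxiv.org/oai2"   # example harvest source

    def harvest(base_url, metadata_prefix="oai_dc"):
        url = f"{base_url}?verb=ListRecords&metadataPrefix={metadata_prefix}"
        while url:
            with urllib.request.urlopen(url) as resp:
                tree = ET.parse(resp)
            for record in tree.iter(f"{OAI}record"):
                yield record
            # follow the resumption token until the repository is exhausted
            token = tree.find(f".//{OAI}resumptionToken")
            url = (f"{base_url}?verb=ListRecords&resumptionToken={token.text}"
                   if token is not None and token.text else None)

    for i, rec in enumerate(harvest(BASE)):
        print(ET.tostring(rec, encoding="unicode")[:80])
        if i >= 2:       # just a smoke test: show the first few records
            break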
The second point is how to deposit manuscripts, preprints and new publications of JINR authors. Somewhat earlier, the Personal INformation (PIN) database had been developed at JINR, in which research workers deposit their personal information. In addition to other personal data (affiliation, collaborations, participation in various projects, experiments, teaching, grants, etc.), PIN includes their publications (bibliography and full texts). In order not to force the authors to deposit their manuscripts and publications twice (in PIN and in JDS), we set up a communication channel to import data from PIN to JDS. It delivers PIN data in MARCXML format, which are then uploaded into JDS. Furthermore, preprints issued by the JINR Publishing Department (bibliography and full texts) are uploaded into JDS without the authors' participation. Nevertheless, authors can (at their own choice) deposit their manuscripts and publications in JDS in self-archiving or by-proxy mode (Fig. 3).
The INDICOSEARCH collection with its subcollections is intended for searching the administrative database ADMIN, which describes various events such as committees, lectures, reports, meetings, conferences, etc. This database was created and is managed by the INDICO software elaborated at CERN [7]. Inasmuch as Indico provides an OAI-compliant output format, it is possible to harvest data from this database.
3. Visualization of Search and Navigation
The design and usage of visual interfaces to digital libraries is becoming an active and
challenging field of information visualization. The visualization helps humans mentally
organize, electronically access, and manage large volume of information and provide a
valuable service to digital libraries. Readers, while looking for the relevant documents, are in
need of new tools that can help them to identify and manage enormous amounts of
information. Visual interfaces to digital libraries apply powerful data analysis and information
visualization techniques to generate visualizations of large document sets. The aim of
visualization is to make the usage of digital library resources more efficient in reduced search
time, provide better understanding of a complex data set, reveal relationship among objects
(authors, documents), make data to be viewed from different perspectives. To attain this aim
there are three directions to be explored: i) identification of a composition of search results,
understanding of interrelation of retrieved documents, and improvement of the search engine;
ii) overview of the coverage of a digital library by facilitating browsing; iii) visual
presentation of user interaction data in relation to available documents and services.
Visualization of search and navigation allows analyzing the search results more efficiently.
Thus a visual interface for search and navigation in a digital library should meet the following
requirements:
plain representation of search results for better identification,
finding of interrelations among documents,
improved search engine,
graphical vision and navigation in a digital library,
mapping of users' operations on available documents, with the aim of better functionality of a digital library.
In the last decades a large number of information visualization techniques have been developed, allowing visualization of search and navigation in digital libraries. With a view to their usage in JDS, the following tools designed for information visualization were analyzed:
Java Universal Network/Graph Framework (JUNG),
JGraphT,
JavaScript InfoVis Toolkit (JIT),
Graphviz,
Prefuse (a set of software tools for creating rich interactive data visualizations).
The package Prefuse, as the one meeting all our requirements, has been chosen to visualize the JDS resources [8]. Two visual prototypes were developed with Prefuse on a part of the real JDS data. The tree-map method proposed in [9] is applied to display the collection of JDS documents by subject (Fig. 4).
Each rectangle (collection) of the tree-map has a certain color; the larger the area of the rectangle, the more articles in the topic. The next level of the hierarchy displays records, and the final one publications, in the form of interactive rectangles. The visual representation also contains the search path, and the found document is highlighted. The publication is a reference to itself. When the user points the cursor at a publication, information about the author and the publication date is displayed. Visualization built on the threaded tree allows one to explore the repository content by subject. The search result set is shown in a lighter tone, which allows one to estimate visually the number of publications that the search finds. Zooming and panning are supported, which allows more detailed information to be displayed.
Fig. 4. Visualization of JDS Resources: Tree Map Method
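To make the principle concrete, the sketch below computes a simple one-level slice-and-dice treemap layout in the spirit of [9], with rectangle areas proportional to the publication counts of the subject collections listed above. The JDS prototype itself is built with Prefuse, so this standalone Python fragment only illustrates the idea.

    # Slice-and-dice treemap layout: partition a rectangle among items so
    # that each item's area is proportional to its size; deeper levels would
    # recurse with the axis flipped.
    def slice_and_dice(items, x, y, w, h, vertical=True):
        """items: list of (name, size); returns (name, x, y, w, h) tuples."""
        total = sum(size for _, size in items)
        rects, offset = [], 0.0
        for name, size in items:
            frac = size / total
            if vertical:   # split along the x axis
                rects.append((name, x + offset, y, w * frac, h))
                offset += w * frac
            else:          # split along the y axis
                rects.append((name, x, y + offset, w, h * frac))
                offset += h * frac
        return rects

    collections = [("JINR Articles & Preprints", 23652),
                   ("External Experiments", 1055),
                   ("High Energy Experiments in JINR", 179),
                   ("Non-accelerator Neutrino Physics", 162)]
    for name, rx, ry, rw, rh in slice_and_dice(collections, 0, 0, 100, 60):
        print(f"{name:35s} rect {rw:5.1f} x {rh:4.1f} at ({rx:5.1f}, {ry})")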
Usage of the radial graph method [10] demonstrates the relationships between authors and coauthors of publications (Fig. 5). The author appears at the center, the co-authors on the second concentric circle, and the co-co-authors on the third one. To avoid oversaturation, only three levels of hierarchy are used in rendering. The author and his coauthors are highlighted in an appropriate color. When the cursor selects an author or co-author, information about the number of his publications is displayed.
Fig. 5. Visualization of JDS Resources: Radial Graph Method
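The placement itself is simple to reproduce: the sketch below computes such a radial layout (author at the center, co-authors and co-co-authors on concentric circles) in plain Python. The names are invented; the JDS interface computes this with Prefuse.

    # Radial layout: place each ring of names evenly on its own circle.
    import math

    def radial_layout(center, rings, radius_step=1.0):
        """rings: list of name lists, one list per concentric circle."""
        positions = {center: (0.0, 0.0)}
        for level, names in enumerate(rings, start=1):
            r = level * radius_step
            for i, name in enumerate(names):
                angle = 2 * math.pi * i / len(names)
                positions[name] = (r * math.cos(angle), r * math.sin(angle))
        return positions

    pos = radial_layout("Author",
                        [["CoA", "CoB", "CoC"],       # co-authors
                         ["CoCoA", "CoCoB"]])          # co-co-authors
    for name, (x, y) in pos.items():
        print(f"{name:6s} ({x:+.2f}, {y:+.2f})")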
4. JDS: What Is Next?
With the aim of minimizing the authors' effort to deposit their publications, we are planning to arrange a reverse channel delivering documents (bibliography and full texts) from JDS to PIN. The program of the visualization interface is being continued: visualization of search results, statistics and monitoring.
JDS users may join groups by interest and are able to interact with each other through the service tool WebGroup, discussing topical publications using the module WebComment. The module WebComment provides a socially oriented tool for the ranking of the discussed documents by readers. WebMessage facilitates the clustering of users into groups via a web forum. JDS has a custom module WebStat, providing statistics collection on such parameters as the number of calls, downloads, citations, etc. Thus, all these service tools (WebStat, WebBasket, WebGroup, WebMessage, WebComment) help to form a social network within the scientific community in the framework of the information system. The following elements for the visualization of user groups as a social network will be developed: browsing the groups and their members, browsing detailed information about group members, browsing relations between the groups, and visual search by user.
The results of research, scientific and engineering efforts represented in publications have semantic relations between them via the citation mechanism. Description of these relationships and their properties opens up new possibilities for studying the document corpus of digital libraries. Publications, as well as collections of publications, contain other semantic linkages which reflect the logic of the author's thought on certain subjects. These linkages can be studied and described. There are also other types of linkages, for example the linkages relating to the personal profile of the author of publications: his affiliation, participation in collaborations, experiments, projects, etc. Thus, the personal profiles of authors are formed in the framework of the digital library. These profiles can serve as a basis of a scientific social network. By analyzing these linkages, we can obtain important information about the motivations of these interactions. For example, the intensity of relations between authors working in some scientific area (subject, project) illustrates the degree of activity in this area, etc. We see the following directions of JDS development: creation of a JINR scientific social network, and elaboration of semantic search and navigation.
References
[1] Open Archives Initiative, http://www.openarchives.org
[2] Open Educational Resources, http://en.wikiversity.org/wiki/Open_educational_resources
[3] I.A. Filozova, V.V. Korenkov, G. Musulmanbekov. Towards Open Access Publishing at JINR. Proc. of
XXII International Symposium on Nuclear Electronics and Computing (NEC`2009). Dubna: JINR, 2010.
JINR, E10, 11-2010-22. pp. 124-128.
[4] V. Borisovsky et al. On Open Access Archive for publications of JINR staff members. //V. Borisovsky,
V. Korenkov,S. Kuniaev, G. Musulmanbekov, E. Nikonov, I. Filozova. Proc. of XI Russian Conference
RCDL'2009. Petrozavodsk: Karelia Scientific Center of Russian Academy Science, 2009. pp. 451-458 (in
Russian).
[5] CDS Invenio, http://invenio-software.org/
[6] V.F. Borisovsky et al. Open Access Archive of scientific publications: JINR Document Server
//V. Borisovski, I. Filozova, S. Kuniaev, G. Musulmanbekov, P. Ustenko,T. Zaikina, G. Shestakova. Proc.
of XII Russian Conference RCDL'2010. Kazan: Kazan State University, 2010, pp. 162-167 (in Russian).
[7] INDICO, http://indico-software.org/
[8] J. Heer, S.K. Card, J.A. Landay. Prefuse: a toolkit for interactive information visualization. Portland
Oregon: SIGCHI Conference on Human Factors in Computing Systems, 2005.
[9] B. Johnson, B. Shneiderman. Tree-maps: a space-filling approach to the visualization of hierarchical information structures. Proc. of the Second Internat. IEEE Visualization Conf., 1991.
[10] S.K. Card, J.D. Mackinlay, B. Shneiderman. Readings in Information Visualization: Using Vision to
Think. San Francisco: Morgan Kaufmann, 1999.
Upgrade of Trigger and Data Acquisition Systems for the LHC
Experiments
N. Garelli
CERN, European Organisation for Nuclear Research, Geneva, Switzerland
The Large Hadron Collider (LHC) and its experiments demonstrated high performance over the first two years of operation, producing many physics results. However, the actual beam conditions are still far from the nominal ones, which will be reached in the coming years after shutdown periods that are mandatory for upgrading and maintaining the accelerator complex. Further, there is a plan to increase the delivered instantaneous luminosity beyond the design value by up to 5 times, i.e. 5×10³⁴ cm⁻²s⁻¹ will be reached.
Other long shutdown periods are foreseen in 2017 and 2021 and are referred to as Phase 1 and Phase 2, respectively. During Phase 1 a new collimation system will be deployed, since it will become necessary to protect the machine from the higher losses. Further, the new injector system (Linac4) will be commissioned. After Phase 1 a peak luminosity of 2×10³⁴ cm⁻²s⁻¹ is expected. In particular, the foreseen activities will be:
The installation of a new muon detector Small Wheel (SW). The muon precision chambers are expected to deteriorate with time, thus they will be replaced, possibly exploiting new and better-performing technologies such as Micromegas detectors. The addition of a fourth muon detector layer will provide an additional trigger station, reduce the rate of fake signals, improve the resolution of the transverse momentum and allow a resolution of 1 mrad on a Level-1 track segment,
To provide increased calorimeter granularity,
To introduce the usage of a Level-1 topological trigger. A proposal still under discussion foresees additional electronics to have a Level-1 trigger based on topology criteria, which would keep it efficient at high luminosities. The consequences of this choice will be longer latency and the need to develop common tools for reconstructing topology in both the muon and calorimeter detectors,
The usage of the Fast Track Processor (FTK). FTK will provide tracking for all Level-1-accepted events within O(25 μs) for the full silicon tracker. The pattern recognition will be done in associative memories, while the track fitting will be done in dedicated FPGAs.
As for Phase 2, ATLAS has not finalized a plan yet. However, major work is expected in three areas:
Development of a fully digital read-out of the calorimeter detectors for both the data and the trigger path. This would allow faster data transmission and give the trigger access to the full calorimeter resolution, providing finer clusters and better electron identification,
Improve the Level-1 muon trigger. The current muon trigger logic assumes the tracks
to come from the interaction point and the resolution on the transverse momentum is
limited by the interaction point smearing. A proposal foresees to use the Monitored
Drift Tube chambers (MDT) information in the trigger logic, since a resolution
100 times better than the actual trigger chambers (RPC) would be achieved, no need
for vertex assumptions would be required and the selectivity for high-pT muons
would be improved. The current limitation of this project is that the MDT read-out
system is serial and asynchronous, so a new fast read-out system has to be deployed,
Introduce a Level-1 track trigger. In 2021 a new inner detector will be installed. It will consist only of silicon sensors, providing a better resolution and a reduced occupancy with respect to today. Combining silicon detector tracks with the calorimeter data at Level-1 will improve the electron selection, while a correlation with the muon detector information will reduce the fakes. It would also be possible to perform some b-tagging at Level-1.
5. The upgrade plans for the CMS experiment
The Compact Muon Solenoid (CMS) [4] is a general purpose experiment with
characteristics similar to ATLAS.
The trigger and DAQ system [5] differs from the ATLAS one by being arranged in eight independent slices containing all DAQ components and the logical interfaces to the trigger system, by having two event building stages, and by having only one HLT level. A schema of the CMS trigger and DAQ system is shown in Fig. 2. On a Level-1 trigger, every Front End Driver (FED) sends data fragments via the SLINK to the Front-end Readout Link cards (FRL). In the first stage of the CMS Event-Builder (the FED-Builder, implemented by a commercial Myrinet network) these fragments are collected and grouped together to form larger super-fragments. These are then distributed to eight Readout-Builders according to a simple round-robin algorithm. Super-fragments on average contain eight FRL data fragments. The resulting 72 super-fragments are then concatenated in the second stage of the CMS Event-Builder (the RU-Builder, implemented by a commercial Gigabit Ethernet switching network) to form entire event data structures in the Builder Units (BUs). In the BUs the events are analyzed by Filter Unit (FU) processes in order to find the HLT trigger decision. Triggered events are transferred to the Storage Manager (SM), where they are temporarily stored until they are transferred to the Tier-0 center. The HLT farm is implemented by a computer cluster, while the SM by two PC racks attached to a Fiber Channel SAN.
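The two-stage building and the round-robin dispatch can be modelled in a few lines. The following Python sketch is a toy model of the data flow just described, with invented fragment names and simplified sizes; it is not the actual CMS software.

    # Toy model of two-stage event building: group FRL fragments into
    # super-fragments (stage 1), then deal events out to eight
    # readout-builders round-robin (stage 2).
    from collections import defaultdict

    N_RU_BUILDERS = 8
    FRLS_PER_SUPERFRAGMENT = 8

    def fed_builder(fragments):
        """Stage 1: group FRL fragments of one event into super-fragments."""
        return [fragments[i:i + FRLS_PER_SUPERFRAGMENT]
                for i in range(0, len(fragments), FRLS_PER_SUPERFRAGMENT)]

    def run(events):
        builders = defaultdict(list)
        for event_id, fragments in enumerate(events):
            target = event_id % N_RU_BUILDERS     # round-robin dispatch
            builders[target].append((event_id, fed_builder(fragments)))
        return builders

    # Three events, each read out by 24 hypothetical FRLs.
    events = [[f"ev{e}-frl{i}" for i in range(24)] for e in range(3)]
    for ru, built in run(events).items():
        print(f"RU-builder {ru}: events {[e for e, _ in built]}")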
In the consolidation phase in 2013, CMS will complete its design project of having a fourth layer of forward muon chambers, which will improve the trigger robustness and preserve the low transverse momentum threshold. In order to cope with more demanding trigger challenges, the processing power of the CMS HLT farm will be increased by a factor of three.
The requirements and the plans for Phase 1 are similar to the ATLAS ones [6]. All the
upgrades will require a coherent evolution of the DAQ system in order to cope with the new
design:
Due to the radiation damage, the innermost silicon tracker will be replaced. The new
pixel detector envisaged by CMS will be composed of four barrel layers and three
end-cap layers. The goal is to achieve better tracking performance, improve the b-
tagging capabilities and reduce the material using a new cooling system based on CO2
rather than C6F14,
In order to maintain the Level-1 rate below 100 kHz with low latency and good selection over the data, tracking information will be added at Level-1. In addition, a regional calorimeter trigger will be introduced, exploiting more sophisticated clustering and isolation algorithms to handle higher rates and complex events. A new infrastructure based on the Advanced Telecommunications Computing Architecture (ATCA) will be developed to increase bandwidth, maintainability and flexibility,
An upgrade of the hadron calorimeter detector is foreseen to use silicon
photomultipliers, which allow a finer segmentation of the readout in depth.
Also for CMS the plans for Phase 2 are not yet finalized, but the silicon tracker
will certainly be replaced. R&D projects on new sensors, new front-end electronics, high-speed
links and the tracker geometry arrangement are ongoing. The new tracker is expected to have more
than 200 million pixels and more than 100 million strips. As for ATLAS, CMS will need to add
tracking information to the Level-1 trigger to cope with the high luminosity.
Fig. 2. A schema of the trigger and DAQ system of the CMS experiment
6. The upgrade plans for the LHCb experiment
The Large Hadron Collider beauty (LHCb) experiment [7] is a single-arm forward
spectrometer with about 300 mrad acceptance designed to perform flavor physics
measurements at the LHC, in particular precision measurements of CP violation and rare B-
meson decays. It has been designed to run with an average number of collisions per bunch
crossing of about 0.5 and a number of bunches of about 2600, meaning an instantaneous
luminosity of 2·10^32 cm^-2 s^-1.
The 30 GHz free-electron maser (FEM) with an output power of 20 MW, a pulse
duration of 170 ns and a repetition rate of 0.5-1 Hz was built several years ago at JINR,
Dubna, in collaboration with IAP RAS, Nizhny Novgorod [1]. The FEM was pumped by
the electron beam of the linear induction accelerator LIU-3000, which produced an electron
beam (0.8 MeV, 25 A) with a repetition rate of up to 1 Hz. Along with research in the field
of relativistic RF electronics, the FEM is intended for use in studies of acceleration
techniques, biology and medicine [2,3]. For this purpose a specialized RF stand was built on the
basis of the FEM (Fig. 1).
Fig. 2. Summary of the results obtained by ultrasonic vibrators, ultraviolet
At present, on the basis of the developed experimental stand, the possibility of
selectively damaging cancer cells by powerful pulsed RF radiation is being studied. For this
purpose it is necessary to introduce nano-sized absorbers of RF radiation into the cell tissue in
such a way that they concentrate only on the cancer cells (for example, by chemically binding the
absorbers to specific antibodies). A significant difference in the absorption of the radiation
by healthy tissue and by the nano-absorbers, as well as the practically complete absence of heat
transfer due to the pulsed regime of the radiation, make it possible to locally heat and selectively
damage cells while avoiding damage to the neighbouring healthy cells. Preliminary results on
the exposure of cancer cells placed on a thin mylar film, or on a film coated with a 50 nm layer
of gold, have confirmed the possibility to kill the cancer cells in contact with
the metallized film while the control samples remained undamaged (Fig. 3).
Experiments are now being performed to select optimal RF absorbers of nanometer size.
Fig. 3. Microphotographs of the cancer cell samples taken 60 minutes after irradiation
To solve the above tasks, it was required to provide precision stabilization of the main
systems of the RF stand: the power supplies of the accelerator magnetic lenses, the high
voltage pulse systems of the accelerator, and the pulse supply systems of the FEM magnetic
field. The pulse-to-pulse instability of the accelerator energy and of the currents in the focusing
elements of the acceleration track must not exceed 1·10^-3.
The distributed data acquisition system was constructed several years ago [6, 7], and it
demonstrated its versatility and reliability. Recently some new features have been added. The
scheme of the modernized system is shown in Fig. 4.
The modular FEM control and acquisition system allows us to control the new
subsystems without disturbing the experiment schedule. This report describes the upgrade of
the following subsystems:
stabilization of the modulators' high-voltage power supply,
stabilization of the electron gun high-voltage power supply,
stabilization of the electromagnetic undulator high-voltage power supply.
Three identical stabilization systems were constructed. The injector, modulators and
undulator high voltage stabilization systems are intended for high voltage regulation with an
accuracy better than 0.3%. They allow one to set the necessary voltage limits and display the
high voltage measurement results. Each system can be controlled either locally, using the
embedded keyboard and LCD display, or remotely via an RS-232 interface.
The stabilization system consists of the control and power units.
The power unit is located in the accelerator hall and includes a high voltage
transformer with a thyristor regulator in the primary winding. The control unit is located in
the control room, and all its connections to the power unit are galvanically isolated. The control
unit measures the high voltage by means of the 12-bit ADC and calculates the error between the
measured value and the set limit. It forms two bursts of pulse signals with opposite phases for
the two IGBT transistors in the half-bridge inverter. The control signal bursts are referenced to
the zero transitions of the AC line phase. They control the thyristors, which rectify the voltage
from the high voltage transformer and charge the secondary-winding capacitor to the necessary level.
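The regulation logic can be illustrated with a minimal sketch (the real firmware runs on the microcontroller and timing unit shown in the figure below; the full-scale voltage and the hold/charge command names are invented for the example):

ADC_BITS = 12                  # MAX187 12-bit ADC reading the HV divider
FULL_SCALE_KV = 30.0           # hypothetical full-scale high voltage, kV
TOLERANCE = 0.003              # target accuracy better than 0.3%

def adc_to_kv(code):
    """Convert a 12-bit ADC code from the high-voltage divider to kV."""
    return FULL_SCALE_KV * code / (2 ** ADC_BITS - 1)

def regulation_step(adc_code, setpoint_kv):
    """One control cycle: measure the high voltage, compute the error
    against the set limit and choose the command that the pulse bursts
    issued at the next AC zero crossing should implement."""
    error = (setpoint_kv - adc_to_kv(adc_code)) / setpoint_kv
    if abs(error) <= TOLERANCE:
        return "hold"      # capacitor already charged to the set level
    # the two opposite-phase bursts drive the IGBT half-bridge; charging
    # continues while the measured voltage is below the limit
    return "charge" if error > 0 else "stop"

print(regulation_step(adc_code=3000, setpoint_kv=25.0))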
Fig. 4. Modernized control system block diagram
Block diagram of the control unit: microcontroller (AT90S8535) with keyboard, buttons and
LCD display; timing unit (Altera EPM7064S); 12-bit ADC (MAX187) measuring the high voltage
divider signal; synchronization from the control system; galvanic isolation; and the IGBT
inverter driving the thyristors
The Local Monitoring of ITEP GRID site
Y. Lyublev, M. Sokolov
Institute of Theoretical and Experimental Physics, Moscow, Russia
We describe the local monitoring of the LCG Tier-2 ITEP site (Moscow, Russia). Local monitoring includes:
The temperature of the computer hall,
The status of the queues of the jobs,
The status of the GRID services,
The status of the GRID site UPSs,
The site status details, obtained by NAGIOS,
The site status details, obtained by GANGLIA.
Introduction
This article is about local monitoring of the GRID site at ITEP, Moscow, Russia. Our
site has 10 functional servers, 274 CPUs in the working nodes and 314 TB of disk storage
space. It is used by eight Virtual Organization groups.
The temperature of the computer hall
The temperature is obtained from three sensors:
- the external sensor,
- the sensor of cooled air,
- the sensor of the internal temperature in the hall.
The number of sensors can be increased up to 50. The history of the collected temperature
statistics makes it possible to examine the temperature with different granularity:
hourly (Fig. 1), daily, weekly, monthly or annual.
Fig. 1. The temperature obtained from the three sensors with hourly precision
The status of the queues of the jobs
The status of the job queues allows for the analysis of:
- the general current state of the queues of ITEP site (Fig. 2);
Fig. 2. The current state of the queues
- the current state on different CEs (Fig. 3);
Fig. 3. The current state on different CEs
- the graphs of the state of jobs on different CEs during different periods (Fig. 4);
Fig. 4. The graphs of the state of jobs on different CEs
- the graphs of the fundamental characteristics of the CEs (Fig. 5).
The status of the GRID services (Fig. 6)
Fig. 6. The status of the GRID services
The status of the GRID site UPSs (Fig. 7)
Fig. 7. The status of the GRID site UPSs
The site status details, obtained by NAGIOS and by GANGLIA
We use two main tools for monitoring of active site services: NAGIOS and GANGLIA.
They use a standard set and an extended set of plugins. NAGIOS is the main tool for monitoring,
but it has static configuration only, no graph supported by default and a limited number of resources.
GANGLIA allows graphic and easy to store data of its monitoring.
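As an illustration, a temperature check can be wired into NAGIOS as a plugin (a sketch: the exit-code convention is the standard NAGIOS one, while the sensor readout function and the thresholds are hypothetical):

#!/usr/bin/env python
import sys

# Standard NAGIOS plugin exit codes
OK, WARNING, CRITICAL, UNKNOWN = 0, 1, 2, 3

WARN_C, CRIT_C = 25.0, 30.0   # hypothetical thresholds, degrees Celsius

def read_hall_temperature():
    """Placeholder for the real sensor readout (e.g. via SNMP or a
    serial line to the computer hall temperature sensors)."""
    raise NotImplementedError

def main():
    try:
        t = read_hall_temperature()
    except Exception as exc:
        print("TEMP UNKNOWN - %s" % exc)
        return UNKNOWN
    if t >= CRIT_C:
        print("TEMP CRITICAL - hall at %.1f C|temp=%.1f" % (t, t))
        return CRITICAL
    if t >= WARN_C:
        print("TEMP WARNING - hall at %.1f C|temp=%.1f" % (t, t))
        return WARNING
    print("TEMP OK - hall at %.1f C|temp=%.1f" % (t, t))
    return OK

if __name__ == "__main__":
    sys.exit(main())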
By NAGIOS (Fig. 8).
Fig. 8. The site status details, obtained by NAGIOS
By GANGLIA (Fig. 9).
Fig. 9. The site status details, obtained by GANGLIA
Summary
We described our tools for local monitoring of GRID site into ITEP. We are grateful
to the organizers of NEC`2011 for the invitation to participate in the symposium.
References
[1] Ganglia homepage, http://ganglia.sourceforge.net
[2] Nagios homepage, http://www.nagios.org
[3] ITEP GRID homepage, http://egee.itep.ru
[4] EGEE Nagios sensors description, http://egee.grid.cyfronet.pl/core-services/nagios
Method for extending the working voltage range of
high side current sensing circuits, based on current mirrors,
in high-voltage multichannel power supplies
G.M. Mitev, L.P. Dimitrov
Institute for Nuclear Research and Nuclear Energy, Bulgarian Academy of Sciences, Bulgaria
High side current sensing circuits based on current mirrors are suitable for measuring small load
currents (µA to mA range) under a high output voltage, yet they have a limited working range of several
hundred volts, set by the breakdown voltage of the transistors. That range is insufficient for some
applications requiring several hundred to several thousand volts.
This paper discusses a method for extending the voltage range, especially useful in group-controlled
multichannel power supplies. We propose an example circuit and present some comparative results.
Keywords: extended voltage range, current mirror, current sensing, high voltage, DC power supply,
ionizing radiation detector
1. Introduction
In multichannel power supplies that use a common return ground for all the
channels, the individual channel output currents cannot be measured on the low side. High side
output current measurement in ionizing radiation detector power supplies usually involves
complicated circuits and/or poor power efficiency. Current mirror based circuits that overcome
these problems have been proposed in [1] and [2]. Still, they are limited by the working
voltage range of the available transistors to a few hundred volts (400-500 V). That is sufficient
for some applications, like semiconductor detector and HPD power supplies, but insufficient
for others, such as photomultiplier supplies. That is the motivation behind our research:
to extend the voltage range of this circuit class and enable a wider array of applications.
2. Problem and solution
Current mirror based current sensing circuits usually
operate by drawing a small current proportional to the load, as in Fig. 1.
In order to extend the voltage range of such a
circuit, it is sufficient to add a voltage regulator (Q3, Fig. 1)
controlling the voltage drop across the current sensing circuit. There is a
special requirement posed on the regulator: it should not add or
subtract any current in the measurement circuit. This requirement can be
successfully fulfilled by a low-leakage MOS transistor. It must
have a gate leakage current negligible compared with the minimal
measurement current Im, and a high breakdown voltage.
3. Test setup
The test circuit is set up according to the schematic in Fig. 2. It contains three main
components: a current sensing mirror [1], a voltage extending regulator and a current-to-
voltage converter with adjustable gain.
The current mirror circuit is already well discussed in [1] and [2] and is not the subject
of this paper. It consists of dual BJT Wilson current mirrors. The upper one sets the ratio of
the output current to the measurement current. The lower one sets the ratio of the currents
between the two arms of the first mirror to 1:1. The measurement current is calculated as
approximately I_m ≈ I_o · R_8 / (2(R_7 + R_8)). The measured results are shown in Fig. 3,
separately for both circuits. It shows the influence of the supply voltage on the measured value,
which is especially large for small currents. The error falls below 10% at about 20 µA.
Fig. 1. The current sensing mirror (Q1, Q2 - FMMT558; R1 - 100; R2 - 10k) with the
voltage-extending regulator Q3 (BSP230); Io - output current, Im - measurement current,
Ui - input voltage, Uref - reference voltage
5. Discussion
The measurement characteristics of the circuit are almost unaltered by the voltage
extender. The only notable error region is around 10 µA. In that region the influence of the
leakage currents is very high and the measurement characteristics depend strongly on the
supply voltage. The error introduced by the extender circuit is an order of magnitude smaller
than the supply voltage influence.
Fig. 3. a) Measured voltage Um {mV} versus output current Io {mA} for the extended (Uem,
at 90-650 V) and original (Um, at 50-350 V) circuits; b) relative deviation Um {%} versus
Io {mA} at 50, 200 and 350 V; c) supply-voltage influence Um(Ui) {%} versus Io {mA} for the
extended (Uem, 650-90 V) and original (Um, 350-50 V) circuits
Conclusion
The transfer function is defined by the sensing circuit and is almost independent of
the extender circuit. The error introduced by the use of the voltage extender is insignificant
compared with the inherent supply-voltage dependency of the sensing circuit.
The use of a resistive voltage divider in the voltage extender causes high power
consumption, comparable to that of the sensing circuit, and worsens the power efficiency.
Nevertheless, such a divider is already present in a typical power supply, and combining them
mitigates the problem.
Overall, the voltage-extended circuit works with negligible deterioration of the
measurement accuracy and with no additional power efficiency loss. The only drawback is the
slightly increased component count. The main benefit is the possibility to greatly increase the
working voltage range of high side current sensing circuits, based on current mirrors, in high-
voltage multichannel power supplies.
References
[1] L. Dimitrov, G. Mitev. Novel current mirrors application in high side current sensing
in multichannel power supplies. Proceedings of the XXII International Symposium on
Nuclear Electronics and Computing (NEC2009), ISBN 978-5-9530-0242-4.
[2] G. Mitev. Specifics of using current mirrors for high-side current measurements in
detector power supplies. Annual journal of electronics, ISSN 1313-1842, 2009.
Early control software development using emulated hardware
P. Petrova
Institute for System Engineering and Robotics, Bulgarian Academy of Sciences, Bulgaria
In the lengthy system development process, quite often the software requires a longer time for
design and testing than the time necessary for the hardware preparation. As an attempt to shorten the
overall project period, it is convenient to carry out the hardware and software design in parallel. In order
to begin the software design as early as possible, it is essential to be able to emulate the work environment
of the finished system, including even some parts of the system itself. Through software simulation it is
possible to substitute almost any missing hardware for the purposes of control software development and
testing, in order to have all system components ready within a given timeframe.
This paper presents an example of a software development process carried out separately from the
complementary hardware, using LabVIEW-emulated equipment.
Keywords: Out-of-the-Box Solution, Rapid Control Prototyping, Hardware-in-the-Loop Simulation,
Hardware/Software Co-Simulation, Virtual Robot, Programming Environments
1. Introduction
Software robot simulators simplify development work for robotics engineers.
Behavior-based simulators allow users to create worlds of different degrees of complexity,
built up from objects and light sources, and to program real robots to interact with these
worlds, or to design virtual robots to be part of them. Some of the most popular applications for
robotic simulations use a physics engine for precise 3D modeling and rendering of the work
environment along with the motion of the robot in 3D space. The use of a simulator allows for
robotics control programs to be conveniently written and debugged off-line (with no hardware
present) with the final version of the program later tested on an actual robot. One such
simulation environment is the NI LabVIEW Robotics Module. It has its own 3D rendering
engine but can also connect to other third party engines.
LabVIEW is a graphical programming tool based on the dataflow language G. It offers
runtime support for real-time (RT) environments, which caters to the needs of embedded
systems prototyping. Due to these characteristics the environment presents itself as an ideal
medium for both the design and the implementation of embedded software. This approach
provides a key advantage - a smooth transition from design to implementation, allowing for
powerful co-simulation strategies like Hardware-in-the-Loop (HIL), Runtime Modeling, etc.
Such a solution gives state-of-the-art flexibility and control performance possibilities.
2. Requirements and design process
Any real world system, motion systems like robots in particular, includes a variety of
elements: mechanical elements (wheels and gearboxes) and electrical components (power
converters, digital circuits, sensors). The operation of all those components is coordinated by
embedded software programs that abstract the dynamics of the interacting parts, providing a
basis for higher level programs which perform reasoning and deliberation. This separation of
the domains of responsibility for the different pieces of the control software provides the
necessary modularity for the system hierarchy. From this structure arises the interest in having
tools and techniques that can span the entire space from low-level to high-level
specifications and provide a common environment for development, prototyping and
deployment.
Some of the requirements for a successful control software development
environment arise from the system structure, while others are a function of its desired
performance. It is important that the environment facilitate the development of reliable
programs and simplify the integration of the different modules. The availability of standard
control design and signal processing routines is also essential, as well as the possibility of
generating easily maintainable, readable, interpretable and platform independent code.
The system description language and design environment provided by the Robotics
Module address all these issues. It provides connectivity to a variety of sensors and
actuators, as well as to their virtual simulated counterparts, along with tools for importing code
from other languages, including C/C++ and VHDL, and a physics-based environment
simulator.
In the traditional design process a model of the system is used to devise a possible
control strategy, which is iteratively tested through computer-based simulation and then ported to
a prototype test rig. A design generated in the LabVIEW environment can be directly used for
HIL simulations, in which the control software is tested in an environment similar to the intended
deployment one. This reduces the need for compromises based on compatibility
considerations and provides the designer with many more degrees of freedom, which speeds
up the development process in a cost-effective manner.
Another important principle is the implementation of the lower levels of the control
hierarchy in real-time (RT) environments, while the higher levels are implemented in
intensive and intuitive user interface environments; both are present in the module in
question. This can greatly improve usability, which is a commonly overlooked issue.
Moreover, easy integration of the hierarchical components facilitates the top-to-bottom
and the bottom-to-top data and control flow, carrying information that can be used for
fine-tuning the system and for identifying and optimizing its critical
components. This is also a function of the applied control design algorithm. A variety of
algorithms are supported by the development environment, which enables performance-
adaptive control and also implies that the embedded description language and the
environment need a certain level of sophistication for such implementations to be
easily deployed.
Simulation and Computer-Aided Design play a very important role in robotics
development. They allow the performance of different tasks, like modeling the behavior of a
real hardware system using an approximation in software, as well as performing design
verification through 'soft' rapid prototyping, which allows the designer to detect conceptual
errors as early as possible and makes problem identification possible prior to the
implementation. Another key step is the performance evaluation of a system, which might be
essential for specific and dangerous environments where actual testing is impossible.
Through the simulation process, comparison of (possibly experimental) architectures
can be performed, which allows for a Trade-Off Evaluation of different designs. This approach
also enables designers to develop the hardware and the software independently, without
relying on the availability of either, which in turn allows
for their parallel design.
Finally, the design, debugging and validation of the mechanical structure of the robot,
the visualization of its dynamics of motion and the analysis of the kinematics and dynamics of
the robotic manipulators, along with their interaction with the environment, can be fully
performed within the virtual medium of the Robotics Module.
3. System overview
The full system (Fig. 1) consists of several major components: the Environment Simulator,
along with the environment model and the Robot Simulator.
There is a GUI for the high-level control and G-based low-level control software. The link
between the embedded software and the GUI is provided by the LabVIEW Virtual Instruments
(VIs) that control the whole test setup.
The central part of the test setup is the Simulation Scene, which comprises the simulated
environment, along with the environment objects, and the simulated robot.
The Robotics Module provides methods to read or write the properties of the simulated
components, or to invoke methods on the components. The LVODE property and method
classes are arranged in a hierarchy, with each class inheriting the properties and methods
associated with the class in the preceding level. They reflect the geometrical and physical
properties of the environment objects and the robots' bodies.
The Robot Simulator (Fig. 2) is responsible for modeling the hardware found on
different models of physical robots, for translating robotic operations into operations that the
Environment Simulator provides, and for reacting to the feedback provided by it.
The Environment Simulator reads and displays the simulation scenes you design (the
virtual environment in which the robot will operate), calculates realistic, physics-based
properties of simulated components as they interact, and advances the time in the simulation.
An important part in the simulation is played by the Manifest file. When you design a simulation
environment and components, you save their definitions in an .xml file called a Manifest file. The
simulator reads Manifest files to render the components they define. Each Manifest defines
one Simulation Scene, which is a combination of simulated components and their properties. You
can render one simulation scene at a time in the simulator. Simulation scenes contain the following
components: (1) Environment - each simulation instance must have an environment that describes
the ground and any attached features; the environment also has associated physical properties, such
as the surface material and the force of gravity; (2) Robots - they contain simulated sensors and
actuators; and (3) Obstacles - environments can contain obstacles that are separate from the
environment and have their own associated properties.
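For illustration, a Manifest of this kind might be inspected as follows (a sketch: the element and attribute names are invented for the example and do not reproduce the actual LabVIEW Manifest schema):

import xml.etree.ElementTree as ET

# Hypothetical Manifest content describing one Simulation Scene
MANIFEST = """
<scene name="warehouse">
  <environment ground="concrete" gravity="9.81"/>
  <robot name="StarterKit">
    <sensor type="camera"/>
    <actuator type="wheel" count="2"/>
  </robot>
  <obstacle name="crate" material="wood"/>
</scene>
"""

root = ET.fromstring(MANIFEST)
print("Scene:", root.get("name"))
print("Gravity:", root.find("environment").get("gravity"))
for robot in root.findall("robot"):
    sensors = [s.get("type") for s in robot.findall("sensor")]
    print("Robot %s with sensors %s" % (robot.get("name"), sensors))
for obstacle in root.findall("obstacle"):
    print("Obstacle:", obstacle.get("name"))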
Fig. 2
Fig. 1
Fig. 3
The Environment Control Panel allows the properties of the simulation to be set (e.g.
the time-step size, which Environment Model file to use for the simulation, etc.). Its role is
played by the Master Simulation VI. You write VIs to control the simulator and the simulated
components. The VIs contain the same code to control simulated robots that embedded
applications running on real robots might contain.
Fig. 3 shows a LabVIEW robot simulation. This is one type of environment with
different objects, in which a StarterKit robot is placed in autonomous navigation mode.
The small picture displays the view from the robot's camera, i.e. from the robot's perspective.
Conclusions
Robot simulators can be successfully used not only to simplify the mechanical design
of robots, but also for the emulation and testing of their control software, long before the final
phases of development. Testing can be made more rigorous and repeatable, since a test
scenario can be re-run exactly. Simulators are a great medium for testing ideas for intelligent
robotics algorithms. Their use makes it possible to evaluate the efficiency of the control
algorithms, the employed parameter values and the variety of sensor-actuator configurations
at an early stage of design.
Using simulations allows for a top-down robot control system design and offline
programming, and gives way to parallel development of software and hardware, separate from
each other, thus facilitating modular system development.
LabVIEW provides a single environment that is a framework for combining graphical and
textual code, giving the freedom of integration of multiple approaches for programming, analysis,
and algorithm development. It provides all of the necessary tools for effective robotics
development - libraries for autonomy and a suite of robotics-specific sensor and actuator drivers.
References
[1] Ram Rajagopal, Subramanian Ramamoorthy, Lothar Wenzel, and Hugo Andrade. A Rapid
Prototyping Tool for Embedded, Real-Time Hierarchical Control Systems. EURASIP Journal on
Embedded Systems, Vol. 2008, Article ID 162747, 2008, p. 14, doi:10.1155/2008/162747.
[2] D. Ratner & P. McKerrow. Using LabVIEW to prototype an industrial-quality real-time solution for the Titan
outdoor 4WD mobile robot controller. IROS 2000 - Proceedings of the IEEE/RSJ International Conference on
Intelligent Robots and Systems, 31 October - 5 November 2000, Vol. 2, pp. 1428-1433, Copyright IEEE 2000.
[3] J.-C. Piedboeuf, F. Aghili, M. Doyon, Y. Gonthier and E. Martin. Dynamic Emulation of Space
Robot in One-g Environment using Hardware-in-the-Loop Simulation. 7th ESA Workshop on
Advanced Space Technologies for Robotics and Automation 'ASTRA 2002', ESTEC, Noordwijk, The
Netherlands, November 19-21, 2002.
[4] Martin Gomez. Hardware-in-the-Loop Simulation. Embedded Systems Design Magazine, 30 November
2001, http://www.eetimes.com/design/embedded/4024865/ Hardware-in-the-Loop-Simulation.
[5] Brian Bailey, Russ Klein and Serge Leef. Hardware/Software Co-Simulation Strategies for the
Future. Mentor Graphics Technical Library, October, 2003, http://tinyurl.com/6cx95ey
Virtual lab: modeling of physical processes by the Monte-Carlo
method - the interaction of helium ions and fast neutrons with matter
B. Prmantayeva, I. Tleulessova
L.N. Gumilyov Eurasian National University, Astana, Kazakhstan
1. The purpose of work
The study of the interactions of charged particles and neutrons with matter. Measurements
of the parameters of nuclear reactions. The study of the differential cross sections of
elastically scattered neutral and charged particles on atomic nuclei.
2. A brief theoretical introduction
2.1. Nuclear interactions of charged particles with matter
When passing through matter, charged particles interact with the atoms of this matter
(with the electrons and the atomic nuclei) and, accordingly, participate in three types of
interaction: strong (nuclear), electromagnetic and weak. This laboratory work considers the
nuclear and the electromagnetic interactions of charged particles with matter.
The electromagnetic interaction is one of the intense interactions in nature, but it is weaker
than the nuclear interaction (by a factor of about 100-1000). During the passage of charged
particles through matter, the energy losses are mainly due to ionization stopping.
The nuclear interaction (the strongest interaction in nature) manifests itself, in one of its most
interesting forms, in direct interaction processes (scattering of particles on nuclei, nuclear reactions),
and these processes are characterized by large cross sections (10^-27 - 10^-24 cm^2). Due to the large
cross sections of the strong interaction, fast particles passing through matter lose energy through the
processes of nuclear absorption and scattering.
Fig. 1. The interaction of the incident particle with the target nucleus
A frequently discussed two-particle interaction process in nuclear physics is elastic
scattering, in which the total kinetic energy and momentum of the two colliding particles are
conserved and only redistributed between them. As a result, the particles change their energy
and direction of motion. The Coulomb and nuclear forces are considered here as the forces
under whose action the elastic scattering can occur. Whether the scattering is governed by the
Coulomb or by the nuclear forces is determined by the impact parameter b. It is obvious that a
charged particle (Fig. 1) passing at a given speed close to another charged particle, with impact
parameter b1, will be scattered to a larger angle than a particle flying farther away, with b2,
where b2 >> b1.
Low-energy charged particles are scattered by the Coulomb forces; high-energy charged
particles and neutrons by the Coulomb and nuclear forces, and interference of the Coulomb
and nuclear interactions also takes place.
2.2. The interaction of neutrons with matter
The main types of interaction of neutrons with matter are the various kinds of nuclear
reactions and elastic scattering on target nuclei. Depending on whether the neutron reaches the
nucleus or not, its interaction with nuclei can be divided into two classes: a) elastic potential
scattering by the nuclear forces without the neutron entering the nucleus (n, n); b) nuclear
reactions of various types ((n, γ), (n, p), (n, α)), fission, etc., inelastic scattering (n, n'), and
elastic scattering with capture of the neutron into the nucleus (elastic resonance scattering).
The relative role of each process is determined by the corresponding cross sections. In
substances for which the role of elastic scattering is relatively high, a fast neutron loses energy
in a series of successive elastic collisions with the nuclei of the material (neutron moderation).
The moderation process continues until the kinetic energy of the neutron equals the energy of
the thermal motion of the atoms in the moderating material (the moderator). Such neutrons
are called thermal neutrons. Further collisions of thermal neutrons with the atoms of the
moderator practically do not change the neutron energy and only lead to their displacement in
the material (the diffusion of thermal neutrons), which continues until the neutron is absorbed
by a nucleus.
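The moderation process described above is naturally modeled by the Monte-Carlo method. Below is a minimal sketch using the standard elastic-scattering kinematics (for scattering isotropic in the centre-of-mass frame off a nucleus of mass number A, the neutron energy after a collision is uniformly distributed in [αE, E] with α = ((A-1)/(A+1))²); the initial energy and the moderators are chosen only as examples:

import random

def collisions_to_thermalize(e0_mev, a_mass, e_thermal_ev=0.025):
    """Count the elastic collisions needed to slow a neutron from E0
    down to thermal energy in a moderator of mass number A."""
    alpha = ((a_mass - 1.0) / (a_mass + 1.0)) ** 2
    e = e0_mev * 1.0e6          # energy in eV
    n = 0
    while e > e_thermal_ev:
        e *= random.uniform(alpha, 1.0)   # isotropic CM scattering
        n += 1
    return n

# Average over many histories for a 2 MeV neutron in hydrogen (A=1)
# and carbon (A=12) moderators
for a in (1, 12):
    mean = sum(collisions_to_thermalize(2.0, a) for _ in range(10000)) / 10000.0
    print("A=%2d: %.1f collisions on average" % (a, mean))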
3. Mathematical description of the theory
The main characteristics of the nuclear reaction
a + A → b + B ,   (1)
are the differential dσ/dΩ(θ, E), integral σ_int(E) and total σ_tot cross sections.
Fig. 2. The result of the interaction of different colliding particles and the emission of
secondary particles into a given solid angle under the condition of axial symmetry (the
symmetry axis is the beam axis).
The differential cross section dσ/dΩ(θ, E) characterizes the yield of the nuclear
reaction at a certain angle to the beam of incident particles (at a fixed beam energy E).
The integral cross section
σ_int(E) = ∫_0^π (dσ/dΩ)(θ, E) · 2π sin θ dθ ,   (2)
characterizes the total number of particles emitted from the target in this reaction at a fixed
energy of the incident particles. The total cross section
σ_tot = Σ_i σ_int,i ,   (3)
is the sum of the integral cross sections over all open output channels. To obtain the absolute
differential cross section it is necessary to measure the following core values:
N [cm^-2 s^-1] - the flux of incident particles (for neutrons),
E_0 - the energy of the incident particles (assuming a monoenergetic beam) [MeV].
After entering all the data, the number of particles falling on the surface of the target
material is calculated as
N_a = I_a / (Z_a · e) ,   (4)
where e = 1.6·10^-19 C is the elementary charge.
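For illustration, formulas (2) and (4) can be evaluated numerically (a sketch: the angular distribution model and the beam parameters below are arbitrary examples, not data from the laboratory work):

import math

def sigma_int(dsigma_domega, n=1000):
    """Integral cross section, eq. (2): trapezoid-rule integration of
    dsigma/dOmega(theta) with the 2*pi*sin(theta) solid-angle weight."""
    total = 0.0
    for i in range(n):
        th0 = math.pi * i / n
        th1 = math.pi * (i + 1) / n
        f0 = dsigma_domega(th0) * 2.0 * math.pi * math.sin(th0)
        f1 = dsigma_domega(th1) * 2.0 * math.pi * math.sin(th1)
        total += 0.5 * (f0 + f1) * (th1 - th0)
    return total

def model(theta):
    """Arbitrary forward-peaked angular distribution, mb/sr."""
    return 10.0 * (1.0 + math.cos(theta)) ** 2

print("sigma_int = %.1f mb" % sigma_int(model))      # ~167.6 mb

# Eq. (4): number of beam particles per second from the beam current
E_CHARGE = 1.6e-19            # C, elementary charge

def particles_per_second(current_a, z_charge):
    return current_a / (z_charge * E_CHARGE)

print("N_a = %.2e 1/s" % particles_per_second(1.0e-6, 2))  # 1 uA of alphas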
Target parameters
Similarly, the input parameters for the target are set:
Fig. 6. Parameters of the target
A_A - the atomic mass number of the target material,
Z_A - the charge of the target material,
d_sub - the thickness of the target [µm],
AM ∈ [1, 2] - the model of the target (1 - a thin target, d_sub << R; 2 - a half-thick target,
d_sub ≈ 0.3·R, where R is the mean free path of the particles in matter).
The number of atoms n_A in 1 cm^3 of the target material is taken from reference data.
The mass is calculated as M_target = 1.660531·10^-24 · A_A, the density as
ρ_target = 1.660531·10^-24 · A_A · n_A. Accordingly, the target thickness is recalculated
into mg/cm^2 as
d_target = 1.660531·10^-24 · A_A · n_A · d_sub / 10 .
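The target-parameter arithmetic above can be checked with a short script (a sketch following the relations as reconstructed here; the aluminium numbers are an arbitrary example, and n_A denotes the tabulated atomic concentration):

AMU_G = 1.660531e-24   # atomic mass unit, g

def target_mass_g(a_mass):
    """Mass of one target atom, g."""
    return AMU_G * a_mass

def density_g_cm3(a_mass, atoms_per_cm3):
    """Target density from the tabulated atomic concentration."""
    return AMU_G * a_mass * atoms_per_cm3

def thickness_mg_cm2(a_mass, atoms_per_cm3, d_um):
    """Thickness in mg/cm^2 from a geometric thickness in micrometres:
    rho [g/cm^3] * d [um] * 1e-4 [cm/um] * 1e3 [mg/g] = rho * d / 10."""
    return density_g_cm3(a_mass, atoms_per_cm3) * d_um / 10.0

# Example: aluminium, A = 27, n_A = 6.02e22 atoms/cm^3, 10 um foil
print("rho = %.2f g/cm^3" % density_g_cm3(27, 6.02e22))          # ~2.70
print("d   = %.2f mg/cm^2" % thickness_mg_cm2(27, 6.02e22, 10))  # ~2.70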
4.2. Output results
While the differential cross sections are being calculated and the statistics accumulated,
the angular distribution of the differential cross section, both theoretical and "experimental",
can be viewed in Cartesian and polar coordinates. Choosing "The differential cross section
(Cartesian coordinates)" or "The differential cross section (polar coordinates)" displays the
corresponding dependence.
Fig. 7. 3D-plot of the angular distributions of differential cross sections
depending on the energy beam of alpha particles
After the laboratory work is performed, the program generates a complete report, which
reflects all the input parameters of the experiment and all values that were measured in the
virtual experiments. If necessary, the report can be saved on the hard disk as a text file and
opened in, for example, a spreadsheet application such as Microsoft Excel for further
calculations or for plotting differential cross sections.
Big Computing Facilities for Physics Analysis: What Physicists
Want
F. Ratnikov
Karlsruhe Institute of Technology, Germany
Producing reliable physics results is a challenging task in modern High Energy Physics. It requires
close cooperation of many collaborators and involves significant use of the different computing resources
available to the collaboration.
This paper is based on our experience as a national support group for CMS users. A multi-tier
computing model is discussed from the perspective of physicists involved in final data analysis. We discuss
what physicists can expect from the different computing services.
Introduction
Modern High Energy Physics experiments are organized as huge factories for producing
physics results. The vital components of such a factory are modern accelerators delivering an
outstanding number of collisions at outstanding energies, giant detectors built using cutting
edge technologies, high performance trigger and data acquisition systems capable of processing
5 THz of input information and selecting potentially interesting events, computing farms
processing data with sophisticated algorithms and producing about 50 PBytes of data every
year, and storage systems handling all these data. However, all these dramatic efforts make
sense only if physicists can convert the data collected with all this machinery into new physics
results. Creating comfortable conditions for effective physics analysis is therefore necessary
for the success of the entire experiment. We discuss requirements for effective
physics analysis based on the experience obtained while running the CMS [1] experiment at
CERN for several years.
Computing Model
The CMS offline computing is arranged in four tiers. The corresponding data flow,
starting with the detector data and ending with the final physics results, is presented in Fig. 1. A single
Tier-0 center at CERN accepts data from the CMS data acquisition system, archives the RAW
data and performs prompt reconstruction of selected data streams. 7 Tier-1 centers are
distributed over the collaborating countries and are responsible for data archiving, reconstruction,
skimming, and other data-intensive tasks. 53 Tier-2 centers take the load of producing Monte-
Carlo simulation data and of serving both simulated and detector data to the physics analysis
groups. Tier-2 centers provide most of the computing resources for processing data specific to
individual analyses. Tier-3 centers provide interactive resources for the local groups. Data are
mostly moved between computing centers of the same tier, or from a more central to a more
local facility. RAW data from Tier-0 are distributed to Tier-1, reconstructed and
archived on site, and transferred to other Tier-1 centers for redundancy. Primary data streams
are skimmed at Tier-1 into many smaller secondary streams, and the resulting skimmed datasets
are transferred to Tier-2 centers for use by the physics groups. Finally, after applying the
analysis-specific selections, data are further moved to Tier-3 for interactive analysis. The only
exception to this one-directional data flow is Monte-Carlo data: events are generated and
reconstructed in Tier-2 centers and then transferred to Tier-1 facilities for archiving.
Data Analysis Patterns
Most analyses start with skimmed datasets stored at Tier-2 centers in either the full
reconstruction (RECO) or the Analysis Object Data (AOD) format. The data relevant for signal
studies, as well as the data for both data-driven and MC-driven background studies, are processed.
The obtained condensed results are stored in a format convenient for interactive analysis. These
data are transferred to the corresponding Tier-3 center, where they are analyzed.
When the analysis converges, the obtained results are analyzed using appropriate
statistical methods. Note that in this analysis pattern Tier-2 and Tier-3 are the computing
facilities mostly used for the physics analysis. The level of service of the affiliated Tier-1
affects the performance of the physics analysis only marginally.
Physics analyses are targeted at the conferences. With the continuously increasing
amount of collected experimental data, a typical analysis has a half-year cycle: results are
prepared for the winter and for the summer conferences. This puts a peak load on the computing
systems during the preparation for the major conferences.
Resources
Fig. 1. The characteristic data flow and data processing chain, starting from the detector
readout and ending with the publication of the obtained new physics result.
Three kinds of resources are used at every computing facility: CPU power, storage
space, and network bandwidth. From the user perspective the requirement is that the amount
of these resources be large enough. In practice this means that the waiting time for a result
should be small compared with the turnaround time for this result. For example, the primary
processing of the necessary skimmed data is done only a few times during the analysis cycle
and is thus expected to take no more than a week. However, the routine scanning of analysis
ntuples to produce working histograms is repeated many times a day and is thus expected to
take no more than half an hour. People expect no significant limitation in storing analysis
ntuples on disks. It is hard to provide exact numbers for the resource requirements; they vary
significantly from one analysis to another. As a reference, Table 1 presents numbers for the
CMS German computing resources
The free-electron maser RF wave centering and power density
measuring subsystem for biological applications
G.S. Sedykh, S.I. Tutunnikov
Joint Institute for Nuclear Research, Dubna, Russia
Main ideas of biological experiment
The cooperative influence of microwaves and conducting micro- or nano-particles provides a
local and selective action of microwaves on cancer cells. The application of high power nanosecond
microwave radiation for cancer cell treatment is studied by scientific teams at the Laboratory of
High Energy Physics and the Laboratory of Radiation Biology of JINR, Dubna, Russia.
Fig. 1. Main ideas of the biological experiment [1,2]
The microwave source is a free electron maser (FEM) based on the linear induction accelerator LIU-3000.
Fig. 2. The general view of the experimental facility for exposure of biological objects
The subsystem for RF wave power density measuring and centering of the biological object
To expose biological objects in a well-defined mode, it is necessary to build a subsystem
for RF wave centering and power density measurement. Metallic filings are glued onto the
pedestal on which the irradiated object is placed, and these filings glow in a powerful pulsed
RF wave. By pointing a camera at the glowing filings and using specialized pattern recognition
software, we obtain the glowing circle area and its deviation from the center. Then, by means
of a specialized kinematic device, the system tunes the height and angle of the lens, which
focuses the RF wave on the irradiated object.
Fig. 3. The pilot scheme of the subsystem for RF wave centering and power
density measuring
The position of the lens is regulated using four electromagnets arranged at the
corners of the base. They are controlled by means of the master controller.
Fig. 4. The block diagram of the controller for the RF wave centering and power
density measuring subsystem
Fig. 5. The controller for the RF wave centering and power density measuring
Fig. 6. The block diagram of the subsystem for the RF wave centering and power
density measuring
The next figure shows the real and ideal images of the glowing rings for subsequent
pattern recognition.
Fig.7. Pattern recognition for RF wave centering and power density measuring
For the pattern recognition the authors have developed specialized software based on
the DirectShow technology. The program is a graph consisting of a sequence of video filters, such
as a video capture filter, processing filters, and a filter for rendering the output video. For the
video processing the authors have developed filters for binarization, removal of noise, and
calculation of the required parameters of the glowing spot. The structure of the DirectShow Filter
Graph developed for the pattern recognition is shown in the picture below.
Fig. 8. The structure of DirectShow Filter Graph developed for Pattern recognition
Fig. 9. Original image. Fig. 10. Binarized image. Fig. 11. The borders found using the
Roberts method.
The original frame is binarized to obtain only two kinds of points: the background
points and the points of interest. The video cleaning filter is necessary to remove the video
noise, which turned out to be a result of the exposure of the camera matrix to X-rays. The
area of the ring is determined by the number of pixels. In addition to the area, the distance
from the center is determined. For the subsequent selection of the rings the program searches
for the boundaries using the Roberts method with an aperture of 2×2 pixels.
Summary:
- the subsystem for RF wave centering and power density measurement for the exposure of
biological objects has been developed,
- the kinematic scheme of the lens control has been proposed,
- the scheme of the controller for the subsystem has been developed,
- the printed circuit board has been made; the assembly and debugging of the controller have been
completed,
- the software for the controller has been developed,
- the software for the image analysis to control the RF wave power density and its
centering has been developed.
Plans:
- Debugging of software for pattern recognition,
- Installation of the system, its debugging and commissioning.
References
[1] N.I. Lebedev, A.V. Pilyar, N.V. Pilyar, T.V. Rukoyatkina, S.N. Sedykh. Data acquisition system for
lifetime investigation of CLIC accelerating structure.
[2] D.E. Donets, N.I. Lebedev, T.V. Rukoyatkina, S.N. Sedykh. Distributed control systems for
the modernization of JINR linear accelerators and of HV power supplies for the polarized
deuteron source POLARIS 2.
Emittance measurement wizard at PITZ, release 2
A. Shapovalov
DESY, Zeuthen, Germany
NRNU MEPhI, Moscow, Russia
The Photo Injector Test Facility at DESY, Zeuthen site (PITZ) develops electron sources of high
brightness electron beams, required for linac based Free Electron Lasers (FELs) like FLASH or the European
XFEL. The goal of electron source optimization at PITZ is to minimize the transverse projected emittance. The
facility is upgraded continuously to reach this goal. Recent updates of the PITZ control system resulted in
significant improvements of emittance measurement algorithms. The standard method to measure transverse
projected emittance at PITZ is a single slit scan technique. The local beam divergence is measured downstream
of a narrow slit, which converts a space charge dominated beam into an emittance dominated beamlet. The
program tool Emittance Measurement Wizard (EMWiz) is used by PITZ for automated emittance
measurements. The second release of the EMWiz was developed from scratch and now consists of separated
sub-programs which communicate via shared memory. This tool provides the possibility to execute complicated
emittance measurements in an automatic mode and to analyze the measured transverse phase space. A new
modification of the method, called "fast scan", was introduced, named for its short measurement time while
keeping excellent precision. The new release makes the emittance measurement procedure at PITZ significantly
faster. It has a friendly user interface which simplifies the tasks of the operators. The new program architecture
yields more flexibility in operation and provides a wide variety of options.
Introduction
At the PITZ facility, the electron source optimization process is continuously ongoing.
The goal is to reach the XFEL specifications for the beam quality: a projected transverse
emittance of less than 0.9 mm mrad at a bunch charge of 1 nC. The speed of an individual
emittance measurement, its reliability and its reproducibility are the key issues for the electron
source optimization. That is why the automation of the emittance measurement at PITZ is of
great importance. The nominal method to measure the transverse projected emittance is a slit
mask technique. Many machine parameters have to be tuned simultaneously in order to
achieve high performance of the photo injector. This task is organized at PITZ through the
emittance measurement wizard (EMWiz) software [1]. This advanced high-level software
application interacts through a Qt [2] graphical user interface with the DOOCS [3] and TINE
[4] systems for machine control and ROOT [5] for data analysis, visualization and reports.
For communication with the video system and acquiring images from cameras at several
screen stations, a set of video kernel libraries have been created [6]. During the past years
some measurement hardware was replaced with faster and more precise devices. Accordingly,
new methods and algorithms have been implemented in the emittance measurement
procedures. This increases the measurement accuracy and reduces the measurement time. In this paper,
details about the new Emittance Measurement System for both hardware and software parts
are described.
Emittance measurement hardware
The transverse emittance and phase space distribution are measured at PITZ using the
single slit scan technique [7, 8]. The Emittance Measurement SYstem (EMSY) contains
horizontal and vertical actuators with 10 and 50 µm slit masks and a YAG screen for the beam
size measurement. The slit mask angle can be precisely adjusted for the optimum angular
acceptance of the system (Fig. 2). Three EMSY stations are located in the current setup as
shown in Fig. 1. The first EMSY station (EMSY1) behind the exit of the booster cavity is
used in the standard emittance measurement procedure. It is located 5.74 m downstream of the
photocathode corresponding to the expected minimum emittance location. For this single slit
scan technique, the local divergence is estimated by transversely cutting the electron beam
into thin slices. Then, the size of the beamlets created by the slits is measured at the YAG
screen at some distance downstream of the EMSY station. The 10 µm slit and a distance
between the slit mask and the beamlet observation screen of 2.64 m are used in the standard
emittance measurement. Stepper motors are used to move each of the four axes
separately. They provide precise spatial positioning and orientation of the components.
Fig. 1. Layout of the Photo Injector Test facility at DESY, Zeuthen site (PITZ)
Every EMSY has four stepper motors, which are controlled by the new XPS-C8
(Newport) controllers mounted at the beginning of 2011. This new controller type
gives the possibility to read all hardware values during movement. The average value read
time is about 5 ms, which is a big improvement compared to the old controller with a
read time of about 50 ms. With this new possibility, the EMSY software was redeveloped,
which in turn opened new horizons for improving the quality and speed of the emittance
measurement process. On each of the actuators, besides the slit masks in both the x- and
y-planes, a YAG screen is mounted to observe the beam distribution.
Fig. 2. Layout of Emittance Measurement System
A CCD camera is used to observe the images on the screens (Fig. 2). During the past
years the PITZ video system was also under continuous improvement. The hardware and
software parts were updated with the third release [9]. Most importantly for the emittance
measurement, the problem of missed and unsynchronized frames is now solved thanks to new
hardware and improved software. Earlier, a considerable part of the beam and beamlet
measurements was rejected, because many frames were missed or the frame grabbing was not
synchronized with the actuator movement. Sometimes operators lost up to 50% of the operating
time because of these problems.
Emittance measurement analysis
A schematic representation of the single slit technique is shown in Fig. 3. For this
technique the local divergence is estimated by transversely cutting the electron beam into thin
slices and measuring their size on a screen after propagation in a drift space. The so-called
2D-scaled emittance is then calculated using the following definition [1]:
ε_n = βγ · (σ_x/√⟨x²⟩) · √(⟨x²⟩⟨x'²⟩ - ⟨xx'⟩²) .   (1)
Here ⟨x²⟩ and ⟨x'²⟩ are the second central moments of the electron beam distribution
in the trace phase space obtained from the slit scan, where x' = p_x/p_z represents the angle of
a single electron trajectory with respect to the whole beam trajectory. The Lorentz factor βγ
is measured using a dispersive section downstream of EMSY.
Fig. 3. Schematic representation of the single slit scan technique
The factor σ_x/√⟨x²⟩ is applied to correct for a possible sensitivity limitation for low-
intensity beamlets, where σ_x is the rms whole-beam size measured at the slit location. In the
emittance measurement setup and procedure, intrinsic cuts have been minimized by, e.g., using
highly sensitive screens, a CCD camera with 12-bit signal resolution and a large area of interest
covering the whole beam distribution. Therefore, the emittance value is called the 100% rms
emittance. The measurement system was optimized to measure emittances lower than 1 mm mrad
for a 1 nC charge per bunch with a precision of about 10% [1].
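For illustration, definition (1) can be evaluated from slit-scan data as follows (a sketch: the arrays of slit positions, beamlet angles and intensity weights are placeholders to be filled from a measurement):

import numpy as np

def scaled_emittance(x, xp, w, sigma_x, beta_gamma):
    """2D-scaled emittance per eq. (1): second central moments of the
    (x, x') trace-space distribution from the slit scan, corrected by
    sigma_x/sqrt(<x^2>) for the limited sensitivity to faint beamlets."""
    w = np.asarray(w, float) / np.sum(w)
    x = np.asarray(x, float)
    xp = np.asarray(xp, float)
    x = x - np.sum(w * x)                  # center the distributions
    xp = xp - np.sum(w * xp)
    x2 = np.sum(w * x ** 2)
    xp2 = np.sum(w * xp ** 2)
    xxp = np.sum(w * x * xp)
    return beta_gamma * (sigma_x / np.sqrt(x2)) * np.sqrt(x2 * xp2 - xxp ** 2)

# x - slit positions [mm]; xp - beamlet angles [mrad], i.e. centroid
# shifts on the screen divided by the 2.64 m drift; w - beamlet
# intensities; sigma_x - rms whole-beam size at the slit [mm]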
Emittance measurement wizard
Since December 2010 a new release of EmWiz is available at PITZ. This second
release has completely replaced the previous version [1]. All modern features of the Qt
framework and the new hardware possibilities of the PITZ control system were
implemented in this new wizard. It has a flexible design, being developed as a set of modules
where each module has a specific task. A basic idea for this version was to simplify the EmWiz
GUI by decreasing the number of buttons, the amount of user-readable information, and the
number of windows and operator actions needed for a measurement. A further goal was
to provide an interface for easy further development and for easy addition of new tools to the
current working version. It is written for 64-bit Scientific Linux CERN 5.0, but can be
recompiled for other platforms.
Emittance measurement procedure using EMWiz
The first unit is named Fast emittance scanner (FES). This program (Fig. 4) provides
the measurement processes and the hardware control. It can be started only by shift operators
in the control room, because only one program instance can be in online mode with the
measurement hardware.
Fig. 4. Fast emittance scanner, options (FES)
The upper part of the GUI frame has a list box which contains two types of messages:
[REPORT] and [ERROR] (Fig. 4). All actions of the operator on the program and the machine,
the system status, error events and alarms are stored with time-stamps in a separate log-file
(a "black box"). Using this file, an expert can remotely support the shift crew in fixing a problem,
and it is useful for explaining unusual results. In the list box, error messages are marked in red
and contain information about the error and an instruction how to fix it. This
feature is common for all programs of EmWiz. Before using the program for a measurement,
the machine parameters have to be adjusted. Some necessary values, e.g. the gun and booster
energy and the laser beam rms size, are measured by other tools and entered in the
corresponding fields (Fig. 5). An operator can set the actuator speed. The measurement
precision improves with lower speed, but the measurement time increases.
The typical emittance measurement time for a selected current value with the default
speed (0.5 mm/s) is about 3 minutes (selected scan region = 4 mm). The time is about
5 minutes if the speed equals 0.2 mm/s.
Fig. 5. Set values dialog (FES)
The operator's next step is to set an EMSY device. Six EMSY devices are available for
the emittance measurement: 3 EMSY stations × 2 axes. The video system at PITZ has more than
20 digital cameras with 8 and 12 effective bits per pixel and more than 7 video servers [6]. The
video part, which can control the cameras, change settings and connect to a video server, is
excluded from EmWiz. Currently it is realized via a set of programs written in Java, yet FES
makes it possible to get the video image, to apply filters, to grab and save video images and to
read camera properties. An operator has to set a proper video server, check the camera status
and set a scan region. All last-used values are stored in EmWiz. If necessary, an operator can
set a custom file name, or use a predefined unique file name and set a path addition for a
predefined path name. This is done for flexibility, because this program is also used for other
(not only emittance) measurements.
The scan frame of FES shows the appointed hardware parameters and the hardware status,
and provides a set of measurement and control buttons when a measurement is possible for the
current hardware status (Fig. 6).
Fig. 6. Scan panel, bottom part of FES
For checking the quality of an emittance measurement two report diagrams are available
(Fig. 7). They appear during the measurement procedure. The absence of a dramatic saturation
level is one important criterion for obtaining good data quality. The major part of the signal
image pixels should have an intensity between 50% and 70% of the maximum intensity. For
example, the signal rate should be less than 3000 units (X-axis) for a 12 bpp camera. This is
a criterion for a measurement without saturation.
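This check is easy to automate (a sketch of one possible reading of the criterion; only the 3000-unit limit and the 50-70% window come from the text above):

import numpy as np

def saturation_ok(frame, limit=3000):
    """One possible automated reading of the criterion above for a
    12 bpp camera frame (2D array of pixel values)."""
    signal = frame[frame > 0].astype(float)
    if signal.size == 0:
        return False                      # no signal at all
    peak = signal.max()
    # the bulk of the signal pixels should sit at 50-70% of the maximum
    in_window = np.mean((signal >= 0.5 * peak) & (signal <= 0.7 * peak))
    # and the signal rate should stay below ~3000 units (no saturation)
    return peak < limit and in_window > 0.5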
(a) Spectrum plot; (b) Qualitative plot
Fig. 7. Spectrum and qualitative plots (FES)
(a) X-axis - signal rate of a video matrix pixel; Y-axis - number of pixels with that signal rate;
the red line in the right part of the plot indicates saturated pixels.
(b) X-axis - position of the beamlet, mm; Y-axis - reference units; the colors mean: green -
good frame, red - missed frame, sky-blue - inaccurate frame position, violet - late frame,
white - early frame, blue - signal sum of the beamlet; white numbers - number of saturated points.
Each frame corresponds to a certain actuator position. Both the video frame and the actuator
position recording times are controlled. If a frame comes too late or too early, this frame is
bad for the emittance measurement. The qualitative plot gives information about missed and
bad frames, local saturation, actuator movement and the signal level. With the help of the report
plots an operator can reliably judge whether the measurement is successful and can
interrupt an unsatisfactory measurement procedure without saving the data.
The transverse beam images and the background frames at the slit location (EMSY) and at
the beamlet screen (MOI) are recorded via the Fast Scan, EMSY and MOI buttons (Fig. 6).
These images are required for the emittance calculation using formula (1). Then the beamlet
scan procedure Fast Scan can be started via the same button. At first the background
statistics is collected; in the next step, the slit is moved continuously with a constant speed in
the selected scan region. At the same time, the CCD camera attached to the selected video
server grabs the image frames from the beamlet observation screen at a fixed rate (10 Hz);
the measurement times and the actuator positions for each image frame are recorded in parallel.
The operator repeats the scan for all main solenoid current values of interest.
Fig. 8. Emittance calculator, panel Options (EC)
The measurement time for one solenoid setting is about 3 minutes, including all
necessary procedures for the measurement. During data collection, FES stores all machine
parameters continuously and informs the operator about critical fluctuations of the controlled
parameters which could affect the measurement reliability. The data is recorded with a
cross-reference to the number of the grabbed frame, so the actuator position, RF power and
gun temperature are known for each video frame. This makes it possible to process and
explore the data with more precision.
Fig. 9. Dialog boxes for emittance calculation, manual mode (EC)
The last step in the measurement is to calculate the emittance using the Emittance
Calculator (EC) tool (Fig. 8). FES sends the measurement data to EC at the operator's request;
if EC is not running yet, FES starts it. EC performs the calculations and sends the plot data
to the program Root Plotter (RP), whose task is plotting diagrams and reports; RP combines
the ROOT plotting system with Qt. The operator can also select a folder with saved data by
hand (Fig. 9), and then EC processes the data and plots the results.
A lot of data processing takes place during the calculation. Different filters, formulas etc.
can be applied to the data, and if necessary some parameters can be customized via the options
(Fig. 8). A user can select plotting options (Fig. 10) to plot all intermediate results after each
processing step. At present, up to 27 different plots are available.
Fig. 10. Bottom part of emittance calculator, panel Plot (EC)
At the end of the calculation the emittance plot is shown by default (Fig. 11). The plot
data can be exported to CSV/TXT formats.
Fig. 11. Phase Space plot, emittance report (RP)
Currently, the wizard consists of separate programs for each logical task.
Communication between the program components is realized through shared memory. This
approach makes it possible to use EmWiz components from different user stations operating
on one host; it increases graphics throughput and decreases the CPU load. The disadvantage
of using shared memory is that the operating system cannot release the used shared memory
without special actions.
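To make the pitfall concrete, here is a minimal sketch using Python's standard library (EmWiz itself is built on Qt/ROOT, so this is an analogy, not the actual implementation): a shared-memory segment survives the process that created it until someone explicitly unlinks it.

from multiprocessing import shared_memory

# A producer creates a named segment and writes data into it.
seg = shared_memory.SharedMemory(create=True, size=4096, name="emwiz_demo")
seg.buf[:5] = b"hello"   # e.g. measurement data shared with another component
seg.close()              # detaches this process only; the OS keeps the segment

# Without the explicit step below the segment would remain "lost" shared
# memory, which is exactly what a watchdog such as Memory Watcher reclaims:
cleanup = shared_memory.SharedMemory(name="emwiz_demo")
cleanup.close()
cleanup.unlink()         # now the operating system can release the memory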
An instance of the Memory Watcher (MW) tool (Fig. 12) is always started together with an
instance of EmWiz. The tool is hidden from the user; only some useful information can be read.
MW closes unused programs automatically after a predefined time, cleans up possibly lost shared
memory, kills hanging components and reports user conflicts which block the start of EmWiz
components. This is very useful at PITZ, because the computer system comprises many
computers and users and has to be continuously in operation (24/7).
Fig. 12. Memory watcher (MW)
Conclusions
The Emittance Measurement Wizard (EmWiz), which consists of a set of applications,
is one of the main measurement tools at PITZ. The new version of EmWiz significantly
decreased the measurement time while the accuracy was improved. The wizard strongly
interfaces with the machine control and video systems. Using this program, the transverse
phase space and the emittance value can be measured much faster and more reliably. A
friendly GUI and a wide variety of options make the operator's job more effective. The
majority of the wizard components are universal and can be used for other tasks; with the help
of this tool operators can solve a wide spectrum of tasks. The PITZ facility is upgraded
continuously and the development of EmWiz is also ongoing, in the directions of complete
automation of the measurement process, ease of use, and improvement of the quality of the
experimental data and the calculation algorithms.
References
[1] A. Shapovalov, L. Staykov. Emittance measurement wizard at PITZ. BIW2010, May 2010.
[2] http://qt.nokia.com
[3] http://tesla.desy.de/doocs/doocs.html
[4] http://adweb.desy.de/mcs/tine/
[5] http://root.cern.ch/drupal/
[6] S. Weisse et al. TINE video system: proceedings on redesign. Proceedings of ICALEPCS,
Kobe, Japan, 2009.
[7] L. Staykov et al. Proceeding of the 29th International FEL Conference, Novosibirsk, Russia,
MOPPH055, 2007, p. 138.
[8] F. Stephan, C.H. Boulware, M. Krasilnikov, J. Bähr et al. Detailed characterization of electron
sources at PITZ yielding first demonstration of European X-ray Free-Electron Laser beam
quality. Phys. Rev. ST Accel. Beams 13, 020704, 2010.
[9] S. Weisse, D. Melkumyan. Status, applicability and perspective of TINE-powered video
system, release 3. PCaPAC2010, October 2010.
Modernization of the monitoring and control system of
actuators and the object communication system of the experimental
installation DN2 at the 6a channel of the IBR-2M reactor
A.P. Sirotin, V.K. Shirokov, A.S. Kirilov, T.B. Petukhova
Frank Laboratory of Neutron Physics, Joint Institute for Nuclear Research, Dubna, Russia
Introduction
Determination of the composition and development of a unified architecture for actuator
control systems and for data acquisition from a complex of sensors is a topical task
connected with spectrometer modernization at the IBR-2M reactor. Since the IBR-2 reactor
was put into operation, the object communication systems (OCS), i.e. the systems for
communication with the experimental installation and the sample unit, consisted first
of digital and analog input/output units in the CAMAC standard, and later in the VME
standard.
However, processors built into VME were later displaced by PCs as control computers
[1], which in turn led to abandoning the VME bus as the OCS base.
The present work considers the modernization of the monitoring and control system of the
installation DN2. The gained experience will also be used in the modernization of other
spectrometers of the IBR-2M reactor. The same is true for the main approaches and criteria
in the design of control and monitoring systems of experimental installations at the IBR-2M
reactor, which are formulated in the present paper.
1. Existing control and monitoring system of experimental installation DN2
Fig. 1 represents the block diagram of the control and monitoring system of the DN2
installation before modernization.
Fig. 1. Block diagram of the monitoring and control system of the DN2 installation before
modernization (VME bus with I/O register, step motor controller and CONV-3 unit; connected
devices: angle sensor, asynchronous motor, beam controller, background chopper, temperature
controllers DRC and Eurotherm, step motor drivers and step motors 1-5, manual controller)
Both the control of the rotating platform with the detector and the sensor readout were
executed through the input/output register located on the VME bus.
The step motors were controlled by a controller located on the VME bus through a
corresponding driver for each step motor (1-5).
The step motor controller is located in the VME crate and executes the simplest operation:
a given number of steps in a given direction at a given velocity, under the control of limit
switches.
Fig. 2. Goniometer GKS-100 with sample cassette
The goniometer GKS-100 (Fig. 2) provides sample orientation along 3 rotation axes: a
vertical axis and two orthogonally related horizontal axes.
Depending on the experimental conditions, the GKS-100 was replaced in a number of cases
by a Huber rotation platform with a vertical rotation axis. The rotation was also
limited by 2 control points.
The gate valve and the interrupter phase are also controlled through input/output units in
the VME standard.
Communication with the temperature regulators DRC and Eurotherm was also executed
through the CONV-3 unit in the VME standard.
2. Structural scheme of control and monitoring system of experimental
installation DN2 after modernization
A simplified structural scheme (Fig. 3) of the step motor control and sensor data
acquisition system is proposed. It can be realized on the basis of either the CAN interface
or RS485 [2, 3].
In the modernization of actuator control systems, it seems reasonable to retain
the separation of the controller/driver and the step motor, which simplifies the task of
replacing the motor or controller type. In the future, step motors of frame sizes 42, 57 or 86,
possibly in combination with reducers (ratio 3-150), are supposed to be used.
In modern actuator control systems the controller is more often integrated into the step
motor driver, with almost no increase in its cost. CAN or RS485 can be used as the step motor
controller interface. The use of USB-CAN and USB-RS485 adapters with galvanic separation
provides reliable high-speed communication with a PC and the possibility of operation at a
distance of up to 1000 m.
CAN step motor controllers/drivers of the types KSMC-1, KSMC-8 and KUMB203 [4],
with currents up to 1, 8 and 40 A respectively, are proposed to be used. These
controllers have compatible software and cover the whole range of motors used at the
FLNP spectrometers. However, this does not exclude the usage, within one
spectrometer, of controllers with a different interface, for example RS485.
Fig. 3. Structural scheme of motor control and sensor data acquisition system
In some cases the task of reducing reactor time losses during checks of the actuator
position becomes very urgent. That task is reliably solved by the application of absolute
actuator position sensors. Most suitable are multi-turn angle sensors, consisting of a
single-turn sensor (12-16 bits) and a revolution counter (12-16 bits). They can be
used for the control of both angular and linear movements.
Absolute multi-turn angle sensors with a synchronous SSI interface are proposed to
be used; all selected sensors are compatible with this hardware interface.
The SSI-RS485 converter is used to connect a multi-turn angle sensor to the RS485 bus,
and the USB/RS485 converter connects it to the PC, where it is emulated as a COM port.
RS485/SSI converters of one type are proposed, as this approach allows connecting all
sensors to the PC via a single RS485 line through the USB-RS485 converter.
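As a hedged illustration of how such a multi-turn SSI word can be interpreted (the bit widths below are assumptions within the 12-16 bit ranges quoted above; the real sensor protocol may differ):

SINGLE_TURN_BITS = 13   # assumed single-turn resolution
TURN_BITS = 12          # assumed width of the revolution counter

def decode_ssi(word: int) -> tuple[int, float]:
    """Split a raw SSI word into (turn count, angle within the turn, degrees)."""
    position = word & ((1 << SINGLE_TURN_BITS) - 1)
    turns = (word >> SINGLE_TURN_BITS) & ((1 << TURN_BITS) - 1)
    angle = 360.0 * position / (1 << SINGLE_TURN_BITS)
    return turns, angle

# Example word (made up): 7 full revolutions plus half a turn
turns, angle = decode_ssi((7 << SINGLE_TURN_BITS) | 4096)
print(turns, angle)   # 7 turns, 180.0 degrees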
Thus, the structural scheme of the monitoring and control system of the DN2 spectrometer at
the 6a channel of the IBR-2M reactor takes the form of Fig. 4.
The spectrometer comprises two types of step motor controllers: those based on the CAN
bus (2 pcs.) and on RS485 (10 pcs.).
3. Main elements of the control and monitoring system of spectrometer DN2 at the
channel 6a of the IBR-2M reactor
The system deploys absolute multi-turn angle sensors OCD-SL00-B-1212 and
step motors FL86 and FL86 with reductions of 1/1, 1/5 and 1/25. The following adapters
are used as converters: the LIRA-916 converter (SSI/RS485), the UPort 1450-I concentrator
(USB/RS485) and the USB-CAN2 adapter (USB/CAN).
All KSMC-8 controllers are mounted in the 19" rack, in a 19" 3U frame at a
distance of 20 m from the PC. The distance to the motors and sensors is up to 10 m.
All KSMC-8 controllers are connected to one CAN line and further to a PC through the
USB/CAN2 adapter. Each controller has its own address on the CAN line.
USB-USB extenders make it possible to locate the UPort 1450-I concentrators directly at
the experimental installation. The concentrator is mounted in the 19" rack at a distance of
20 m from the PC, and from there the connection cables diverge, in a star topology, to the
control and monitoring devices.
Two USB/RS485 converters UPort 1450-I are included in the control system, each of
which has 4 isolated connection channels to external devices. Each channel can
function as RS232, RS422 or RS485. One RS485 channel is used for the connection of the
multi-turn angle sensors of the OCD type.
The RS485/SSI adapter of the LIR-916 type couples the sensor output with its SSI
connection line to the common RS485 line. The total number of bits of the single-turn
sensor and the turn counter may reach 32. Each RS485/SSI adapter has its own address in
the range 1-256 on the RS485 line. Access to the RS485 line is emulated as a COM port of
the PC.
The input/output module (110-220 V, 4x4) provides monitoring and control of
up to 4 relay input channels and 4 relay output channels.
Conclusions
The approaches to the design of the actuator monitoring and control system and of the
object communication system (OCS) of the DN2 spectrometer made it possible to formulate
the main criteria for the modernization of other spectrometers at the IBR-2M reactor.
Actuator control systems should comply with the following requirements:
Separation of motor control channels and sensor data acquisition channels,
Design-stage separation of the controller/driver and the step motor,
Step motors with 2 or 4 coils and coil currents up to 1 A and 8 A are recommended,
Design-stage separation of the communication interface and the sensor,
Sensors: absolute multi-turn angle sensors with SSI interface and up to 32 bits are
recommended,
One-type SSI/RS485 interfaces are recommended for sensor connection to the RS485 line.
The issue of standardization is solved by the usage of one-type controllers/drivers
and the SSI/RS485 adapter.
The use of a ready-made integrated solution is advisable for simpler control systems.
Object communication systems (OCS), i.e. communication systems with the
experimental installation and the sample unit, should comply with the following requirements:
Spectrometer OCS systems are connected to a PC through the USB interface,
Separation of control channels for experimental installation parameters which are
not connected functionally, i.e. which do not belong to one task in the spectrometer software,
It is recommended to use new OCS equipment with standard USB or RS485
interfaces, through USB/RS converters emulated as COM ports,
Equipment with the standard interfaces RS232 and RS422 is connected to a PC through
USB/RS converters emulated as COM ports.
The issue of standardization is solved by the application of one-type digital and
analog input/output units operating on the RS485 line.
In conclusion, the authors would like to express their gratitude to Dr. V.I. Prikhodko
for useful discussions and consultations.
Fig. 4. Structural scheme of the control and monitoring system of spectrometer DN2 at the 6a
channel of the IBR-2M reactor (PC with USB connections; USB-USB extender (20 m);
2 x UPort 1450I 4-port RS-232/422/485 USB-to-serial converters; I/O register (110-220 V);
step motor controllers 1-10 OSMC-17RA-BL with step motors 1-10; temperature controllers
DRC and Eurotherm; beam shutter controller; background chopper; SSI/RS485 converters
LIR-916 for the angle sensors of the detector platform and of the Huber goniometer;
USB-CAN2 converter with CAN bus, step motor controllers KSMC8 and step motors 1-2;
power supplies +48 V, +24 V, +12 V)
References
[1] A.V. Belushkin et al. 2D position-sensitive detector for thermal neutrons.
NEC'2007, XXI International Symposium, Varna, Bulgaria, September 10-17,
2007, pp. 116-120.
[2] V.V. Zhuravlev, A.S. Kirillov, T.B. Petukhova and A.P. Sirotin. Actuator control
system of a spectrometer at the IBR-2 reactor as a modern local controller network
- CAN. 13-2007-170, Dubna, JINR, 2007.
[3] N.F. Miron, A.P. Sirotin, T.B. Petukhova, A.S. Kirillov et al. Modernization and
creation of new measurement modes at the MOND installation. XX Workshop
on the use of neutron scattering in solid state investigations (WNCSSI-2008),
13-19 October 2008, Abstracts, Gatchina, 2008, p. 150.
[4] Electronics, electromechanics. JSC Kaskod, St. Petersburg, www.kaskod.ru
VME based data acquisition system for ACCULINNA
fragment separator
R.S. Slepnev¹, A.V. Daniel¹, M.S. Golovkov¹, V. Chudoba¹,², A.S. Fomichev¹,
A.V. Gorshkov¹, V.A. Gorshkov¹, S.A. Krupko¹, G. Kaminski¹,³,
A.S. Martianov¹, S.I. Sidorchuk¹ and A.A. Bezbakh¹
¹ Flerov Laboratory of Nuclear Reactions, Joint Institute for Nuclear Research, Dubna, Russia
² Institute of Physics, Silesian University in Opava, Czech Republic
³ Institute of Nuclear Physics PAN, Krakow, Poland
A VME based data acquisition system for the experiments with radioactive ion beams (RIBs) at the
ACCULINNA facility at the U-400M cyclotron (Dubna, Russia, http://aculina.jinr.ru/) is described. The DAQ system
includes a RIO-3 processor connected with a CAMAC crate via GTB resources, a TRIVA-5 master trigger, standard
VME units ADC, TDC, QDC (about 250 parameters in total) and various software: the Multi Branch System (MBS)
version 5.0, http://www-win.gsi.de/daq/; Go4 version 4.4.3, based on CERN ROOT (http://root.cern.ch/drupal/),
http://www-win.gsi.de/go4/; and the real-time OS LynxOS version 3.3.1. The new system provides the flexibility to
use new VME modules (registers, digitizers, etc.) and the possibility to process a trigger rate (~5000 s⁻¹) higher
than that of the old system.
1. Introduction
In order to study light proton and neutron rich nuclei close to drip line, the physics
programs for the ACCULINNA [1] facility and the future ACCULINNA-2 [2,3] facility
(Fig. 1) require a relevant data acquisition system (DAQ). Such a system should satisfy
several conditions: it should have a low price per channel, ability to process a high trigger rate
with a low 'dead time' (time when DAQ is insensitive). In addition, it should be scalable and
flexible.
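For a rough feeling of why the dead time matters at the quoted trigger rates, the following back-of-envelope sketch (ours, not from the paper) applies the usual non-paralyzable dead-time model; the 100 us per-event readout time is an assumed figure for illustration.

def accepted_rate(trigger_rate_hz: float, dead_time_s: float) -> float:
    """Rate actually recorded when each accepted event blocks the DAQ
    for dead_time_s (non-paralyzable dead-time model)."""
    return trigger_rate_hz / (1.0 + trigger_rate_hz * dead_time_s)

r = 5000.0          # trigger rate the new system should sustain, s^-1
tau = 100e-6        # hypothetical per-event readout/conversion time

print(accepted_rate(r, tau))          # ~3333 events/s actually recorded
print(1 - accepted_rate(r, tau) / r)  # ~33% of triggers lost to dead time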
Ukrainian Grid Infrastructure. Current state
S. Svistunov
Bogolyubov Institute of Theoretical Physics, National Academy of Sciences of Ukraine
This report presents the current state of the hierarchical organizational model of the Ukrainian
Grid infrastructure. The main topics are: the three-level organizational management of computer resources and
grid services; information on the current state of cluster resources, high-speed fiber optic networks and
middleware; the concept of the State scientific-technical program of implementation and usage of grid
technology for 2010-2013, accepted by the Cabinet of Ministers of Ukraine in 2009; the main steps in the State
program realization and the main results of its implementation in 2010; the most interesting results
of solving scientific tasks using grid technology; and information on the integration of Ukrainian institutes and
universities into international Grid projects.
1. Introduction
Today supercomputer technology is considered the most important factor for the
competitiveness of an economy. Therefore, the advanced countries are moving to a new,
more progressive infrastructure: a grid infrastructure with powerful supercomputer centers
connected by ultrafast communication channels.
The basis of the grid infrastructure in Ukraine has been built by two programs,
"Implementation of grid technologies and the creation of clusters in the National Academy of
Sciences of Ukraine" and "Information and Communication Technologies in Education and
Science", the main performers of which were the National Academy of Sciences of Ukraine
and the Ministry of Education and Science of Ukraine [1], [2].
The Ukrainian National Grid (UNG) is a grid infrastructure which shares the computer
resources of the institutes of the National Academy of Sciences and universities. The principal
task of the UNG is to develop distributed computing and grid technologies to advance
computational work in fundamental and applied science. Besides, the UNG has to ensure the
participation of Ukrainian scientists in major international Grid projects.
Currently the grid infrastructure shares the resources of 30 institutes and universities
operating under the ARC middleware, and three clusters included in the EGI structure under
the gLite middleware. This is not much, but it is currently enough to support the research
activity of Ukrainian institutions.
The Ukrainian Grid infrastructure is a geographically distributed computing complex which
currently provides solutions to complex scientific problems in different application areas.
These tasks are quite diverse; here are a few examples:
LHC (CERN) experimental data processing, analysis and comparison with theoretical
results and phenomenological models, aiming at full-scale participation of the Ukrainian
institutes in the ALICE and CMS experiments,
Dynamical computing of the evolution of star concentrations in the Galaxy's external
field; hydrodynamic modeling of the collision and fragmentation of molecular clouds;
analysis of N-body algorithms and parallel computing on the GRAPE clusters,
Theoretical analysis, observation and processing of primary, X-ray and gamma radiation
data obtained from the satellite telescopes INTEGRAL, SWIFT and others,
Computing of thermodynamic characteristics, infrared and electron spectra of sputtered
DNA fragments; study of bionanohybrid system structures composed of DNA and RNA of
different sequences,
Molecular dynamics computing of Fts-Z-protein systems with low-molecular
associations,
Computer simulation of the spatial structure and molecular dynamics of
cytokine-tyrosine-RNA synthetase.
Ukrainian institutes do not have enough financial resources for the development of grid
technologies. The State scientific-technical Program of implementation and usage of grid
technology for 2010-2013, prepared by the Bogolyubov Institute for Theoretical Physics of
NASU, was presented to the Cabinet of Ministers of Ukraine and accepted at the end of 2009.
Currently, financing of grid technologies is provided by the State Budget of Ukraine.
2. State scientific-technical program
The Ukrainian National Grid is a targeted state scientific and technological program
on the development and implementation of grid technologies for 2009-2013, adopted by the
Cabinet of Ministers of Ukraine in 2009. The goal of the program is to build a national grid
infrastructure and to introduce grid technologies into all areas of scientific, social and
economic activity in Ukraine, as well as to train specialists in grid technologies.
The objectives of the project are:
Introduction and application of grid-technologies in scientific research,
Creation of conditions for implementation of grid-technologies in economy, industry,
financial and social spheres,
Creation of multilevel interdepartmental grid-infrastructure with elements of
centralized control that takes into account the peculiarities of grid-technologies usage
in various fields,
Creation of a specialist training system for work with grid-technologies.
The following management bodies are created for project control:
Interdepartmental Coordinating Council, which defines the general principles of
development, grid infrastructure program and operational plans,
Project Coordination Committee, which is the executive body and has the authority to
represent the national grid-infrastructure at the national and international levels,
Basic Coordination Centre, which is responsible for operation of the national grid-
infrastructure.
The main project activities are:
Creation of new clusters and upgrade of existing ones; raising the bandwidth of internet
channels,
Middleware and technical support,
Security of the Grid environment,
Implementation of Grid technologies and Grid applications in scientific research,
economics, industry and financial activity,
Implementation of Grid technologies and Grid applications in medicine,
Development of facilities for storing, processing and providing open access to scientific
and educational information resources (databases, archives, electronic libraries) using
Grid applications,
Organizational and methodical support of specialist training for work with Grid
technologies.
The total financing for four years should be 30.0 million. Two-thirds of this sum is
planned to be spent during the first two years, mainly on the creation of new clusters, the
upgrade of existing ones and raising the bandwidth of internet channels. The second direction
of work according to the financing plan is related to the usage of grid technologies. The main
performers of the project are the National Academy of Sciences, the Ministry of Education and
Science and the Ministry of Health. Note that the Bogolyubov Institute for Theoretical Physics
of NAS of Ukraine is the leading organization for the implementation of the state program.
The program has been in progress for two years already. In 2010 only 0.616 million
were allocated instead of the planned 11.0 million. Nevertheless, in 2010, 29 projects in various
areas of the program related to the usage of grid technologies in scientific research were
implemented. In 2011 the funding totaled 0.87 million instead of the planned 9.0 million,
which allowed starting 43 scientific projects. As can be seen, this is far from the planned
amount. Such investment cannot cover all the tasks of the State program, but grid technology
is one of the five priority research areas in Ukraine, so it should be financed even at a reduced
level.
The additional information about state program implementation can be found on the
site http://grid.nas.gov.ua.
3. Grid-infrastructure
It is known that a grid system is based on three principal elements: computer resources
(clusters), high-speed and reliable access of the resources to the Internet, and middleware
which unifies these resources into one computing system.
Though the middleware was available even at the beginning of the grid infrastructure
creation, the clusters and the fiber-optic network of NASU were built simultaneously with
the development of the grid facilities.
The well-known Beowulf idea (www.beowulf.org) was selected and adapted as the
conceptual model for cluster construction in the UNG. This concept is based on servers with
standard PC architecture, distributed main storage and Gigabit Ethernet interconnects. So, all
grid clusters in the UNG are built with x86/x86_64 architecture, two- or four-processor servers
with 1-4 GB of main storage and 36-500 GB HDD. 1 Gb/s switches provide inter-server
exchange, and InfiniBand is used only in some clusters.
At present the UNG shares the resources of 30 institutes and universities (more than
2700 CPUs and 200 TB of disk storage).
It should be emphasized that the disk space of the computing nodes is used for the
operating system (loaded from local disks), program packages and temporary files, but is
inaccessible for user file storage. Each cluster has its own disk array to store programs, user
data and information of common use. Freely distributed Linux operating systems of various
modifications (Scientific Linux 2.6.9, Fedora 2.6, CentOS-4.6) are installed on the clusters,
and the task management system OpenPBS is used to start tasks and allocate cluster
utilization.
To enlarge the number of grid users, the idea of so-called grid platforms has been
implemented; its core is to install grid clusters with a minimal configuration. Such a cluster
includes a control server with installed middleware, two working nodes and network
equipment. Under the conditions of permanently limited funding, such grid platforms have
given grid access to specialists of institutes without an operable cluster. When financing is
available, any mini-cluster can easily be extended to a full-scale cluster. This strategy also
makes it possible to train system administrators and users to work with the grid.
A high-speed and reliable access channel to the Internet is one of the necessary
conditions for building a grid infrastructure. Fiber optic channels in the UNG are owned by
two providers: UARNet (which works mainly with academic institutes) and URAN (which
works mainly with educational structures).
At present, UARNet is one of the biggest Internet providers in Ukraine with its own
data transmission network and external channels to the global Internet. The total capacity of
the UARNet external channels amounts to over 100 Gb/s. Access to non-Ukrainian Internet
resources is provided via the Tier-1 providers Level3, Cogent and Global Crossing and the
Russian provider ReTN.net. UARNet is a member of the European Internet traffic exchanges
DE-CIX, AMS-IX and PL-IX. UARNet has its sites (POPs) in all regional centers of Ukraine
as well as in Frankfurt (Germany) and Warsaw (Poland). In 2009 a cable from Lvov to the
Polish border was built and network equipment was installed for connection to the PIONIER
network; in the same year Ukraine was connected to GEANT.
All sites are interconnected in a ring topology formed by multiple 10 Gb/s data
channels. UARNet is also connected to the Ukrainian Internet Traffic Exchange Network UA-IX
through four 10 Gb/s data channels. The KIPT cluster (Kharkov), which takes part in the CMS
experiment and has the status of a CERN Tier-2 site, is connected with a guaranteed capacity of
300 Mbit/s; the BITP cluster (Kiev), which takes part in the ALICE experiment at CERN, with
a guaranteed capacity of 1 Gbit/s.
The main efforts this year were aimed at harmonizing Ukraine's grid
infrastructure with the requirements of EGI. Following the WLCG scheme, the UNG
infrastructure has been built as a three-level system with respect to organization and management:
First level: the Basic Coordinating Centre (BCC), which is responsible for core services
and controls the UNG,
Second level: Regional Operating Centers, which coordinate the activity of grid
sites in the regions,
Third level: grid sites (institutes) or minimal grid network access platforms which
belong to a virtual organization (VO). A VO temporarily joins institutes (not
necessarily from the same region) with common scientific interests to solve a problem.
The Basic Coordinating Centre is a non-structural subdivision of the Bogolyubov Institute
for Theoretical Physics of the NAS of Ukraine, which is the base organization for the
implementation of the project. The BCC provides management and coordination of work on
supporting the operation of the national grid infrastructure and of the resource centres
(grid sites) that provide grid services (high-performance distributed computing, access to
distributed databases, access to software) to users.
At the technological and operational level, the BCC coordinates the work of the grid sites
and of the entire UNG grid infrastructure according to the requirements of the European and
international grid infrastructures, and serves as the National Operations Centre (Resource
Infrastructure Provider) of UNG grid resources in relations with the international grid
communities.
The BCC has the right to conclude international agreements in the field of cooperation at
the operational level, which become valid after ratification by the Program Coordinating
Committee.
The Basic Coordination Centre includes:
Centre for monitoring of grid-infrastructure and registration of grid-sites,
Centre for registration of virtual organizations and members of virtual
organizations,
Certification authority,
International relations group,
Scientific and analysis group,
Technical assistance and middleware maintenance group,
User support group,
Study and training centre.
The principal BCC teams and their functions are the following:
International relations group. The key task of this group is to create conditions
for UNG participants to cooperate with international organizations and international
grid collaborations.
Scientific and analysis group. The task of this group is to analyze the applicability of
grid technologies in different fields and promising research trends in grid technologies
and their realization, and to provide expert evaluation of new proposals and projects for
the development and application of grid technologies.
Technical assistance and middleware maintenance group. The group assists
administrators of computer clusters in the installation of system-wide software and provides
assistance on computer security topics. The team coordinator maintains permanent contact
with the system administrators and security administrators of each local grid site. The
software support team assists grid site administrators with middleware installation and
maintenance. Moreover, the team experts are responsible for task analysis of the whole grid
system, installation of new middleware, and compatibility of software with the middleware.
User support group. The main task of the group is support of the GGUS system, which
tracks grid users' questions. Usually this is a web portal which answers practical questions:
how to get involved in grid activity, how to obtain a grid certificate, how to join a virtual
organization. The web site contains information about the structure of the national grid,
references to grid activity documentation and to international grid projects and virtual
organizations, and announcements of grid seminars and conferences.
The study and training centre is in charge of organizing the training process on
theoretical and practical implementation of grid technologies. Individual education
programs for system administrators and grid users are to be created.
The Coordinating Committee of the Ukrainian State Program has accepted and
approved the basic documents defining the operation of the grid sites in the UNG. They are:
- Ukrainian National Grid. Operation architecture,
- Agreement for the use of grid resources in the UNG,
- Procedure of registration of grid sites in the UNG,
- Procedure of registration of virtual organizations in the UNG,
- Basic Coordination Center. Operation structure.
All these documents were developed according to EGI requirements.
Negotiations on a Memorandum of Understanding between EGI.eu and BCC are
ongoing; the Resource Infrastructure Provider MoU document is in preparation. The purpose
of this Memorandum of Understanding is to define a framework of collaboration between
EGI.eu and BCC for the access of Ukraine to the operation level in EGI. Additional
information about this activity can be found on the site http://infrastructure.kiev.ua.
4. Grid-applications
I would like to add a few words on the applications which use the grid
infrastructure. This is research performed in Ukrainian institutes which requires a large
amount of computation.
Due to the efforts of NASU, several well-known and widely used scientific software
packages were obtained and installed on UNG resources (Gaussian, Turbomole,
FlexX, Wolfram Research gridMathematica 2.1, Amber 9, the Molpro Quantum Chemistry
Package, Gromacs). The main problem is that all this software has only a command-line
user interface, which restricts its usage by scientists who are not familiar with the Linux
operating system and its console interface. Things are even more complicated when software
is used from remote resources, which can be done only through the grid and requires the user
to know grid principles and the grid middleware interface.
A possible solution to this problem is the development of web-based science gateways [3].
A web-based science gateway integrates a group of scientific applications
and data collections into a portal so that scientists can easily access the resources of the grid
infrastructure to submit their tasks and manage their data. The main part of a science gateway is
the grid portal, which gives grid users the necessary tools with a simple and friendly
interface for access to grid resources.
The SDGrid (System Development by Grid) portal was developed at the Educational
Scientific Complex 'Institute for Applied System Analysis' (ESC IASA) of the National
Technical University of Ukraine 'Kyiv Polytechnic Institute'. The portal is built on the basis of
CMS Gridsphere 2.1.5 and now works with Globus Toolkit 4.0.7 and NorduGrid 0.6.1. The
portal contains the following portlets: GridPortlets (user authentication, task submission, FTP
operations, viewing of task status), Gpir (information about cluster loading) and Queue
prediction (task scheduling).
The BITP portal was developed at the Bogolyubov Institute for Theoretical Physics of NAS
of Ukraine. The portal is built on the basis of CMS Gridsphere 3.1 and Vine Toolkit 1.1.1 and is
used as a test platform for the development of grid applications. The first part of the portal is
intended for the development of web applications that provide access to engineering packages
installed on the BITP cluster. The second part is intended for the development of scripts for
grid usage. In collaboration with the Institute of Cell Biology and Genetic Engineering of NAS of
Ukraine, an interface and scripts for the software package GROMACS were developed; they work
both on the local cluster and in grid mode. A calculation using Gromacs
consists of several consecutive steps: the initial steps prepare the files for the
calculation, and in the last stage these files are used for the direct calculation of molecular
dynamics. The developed portal allows carrying out the initial actions on the local cluster and
using the grid infrastructure for the simulations. The second part of the BITP portal uses the
Vine Toolkit 1.1.1 framework for access to grid resources. At the moment the portal works
with the gLite middleware.
The main goal of the MolDynGrid Virtual Laboratory is to develop an
effective infrastructure for calculations of the molecular dynamics of protein
complexes with low-molecular compounds. The grid portal (http://moldyngrid.org) was
developed by specialists of the Institute of Molecular Biology and Genetics of NAS of Ukraine
and the Computer Center of Taras Shevchenko National University of Kiev. The portal allows
submitting tasks for execution in the grid and saving the obtained results via a user-friendly
interface. For portal development, tools such as POSIX Shell, PHP, JavaScript and a
MySQL database were used.
Another portal was developed by specialists of the Glushkov Institute of Cybernetics of NAS
of Ukraine, the Verkin Institute of Low Temperature Physics and Engineering of NAS of
Ukraine, and the MELKOM Company. Based on the Supercomputer Management System
SCMS 4.0, a Web Portal for a Grid-Cluster under the ARC middleware was developed. Along
with the standard features of cluster management, the portal provides the full cycle of grid
resource usage, from submitting tasks to receiving the results. A demo version of the portal is
available at devel.melkon.com.ua.
One of the main problems of the Ukrainian Grid community is the lack of specialists who
know and are able to use grid technology for scientific research. The State program proposes to
build a full system of training for grid users, starting with educational courses in institutes and
followed by training of the grid users of academic institutes. In 2011, on the initiative of the ESC
IASA, a new specialty, "Systems engineering", was introduced for the purpose of training
specialists in the field of distributed intelligent environments, in particular grid technologies
in science and education. For the master's education program, the course "Distributed
Computing and Grid technology" was introduced, which summarizes the current understanding
of grid technology and the problems which occur in the process of its design and
implementation. The Kyiv Polytechnic Institute has organized a computer class for training
grid administrators and the first advanced training courses.
Conclusion
Despite all the difficulties and problems in the development of grid technologies in Ukraine,
the background for the widest application of grid technologies has been provided. There is good
reason to believe that the grid will exist and operate in Ukraine. The collaboration with the
international grid community is intensifying, and the Ukrainian National Grid will be built by
joint efforts and will fit into the world's grid infrastructure.
References
[1] E. Martynov, G. Zinovjev, S. Svistunov. Academic segment of Ukrainian Grid
infrastructure. System Research and Information Technologies, N. 3, 2009, pp. 31-42.
[2] E. Martynov. Ukrainian Academic Grid: State and Prospects. Organization and
Development of Digital Lexical Resources. Proceedings of the MONDILEX Second Open
Workshop, Kyiv, Ukraine, 2-4 February 2009, pp. 9-17.
[3] O. Romaniuk, D. Karpenko, O. Marchenko, S. Svistunov. Complex Science Gateway: use
of different grid infrastructures to perform scientific applications. Proceedings of the 4th
International Conference ASCN-2009 (Advanced Computer Systems and Networks: Design
and Application), November 9-11, Lviv, Ukraine, 2009, pp. 81-82.
GRID-MAS conception: applications in bioinformatics and
telemedicine
A. Tomskova, R. Davronov
Institute of Mathematics & ICT, Academy of Sciences, Uzbekistan
Keywords: Multi-Agent Systems, decision-making agent, clusterization, gene expression, diagnostics
1. Introduction
The GRID and MAS (Multi-Agent Systems) communities believe in the potential of
GRID and MAS to enhance each other, because they have developed significant
complementarities. Thus, both communities agree on the "what to do": promote an integration
of GRID and MAS models. However, even if the "why to do it" has been stated and assessed,
the "how to do it" is still a research problem.
The adoption of agent technologies constitutes an emerging area in bioinformatics.
The avalanche of data that has been generated, particularly in biological sequences and more
recently also in transcriptional and structural data, interactions and genetics, led to the
early adoption of tools for unsupervised automated analysis of biological data in the
mid-1990s [1,2]. Computational analysis of such data has become increasingly important;
however, some tools require training and improvement. The use of agents in bioinformatics
suggests the design of agent-based systems, tools and languages for the above-mentioned
problems.
These kinds of resources available in the bioinformatics domain, with numerous
databases and analysis tools independently administered in geographically distinct locations,
lend themselves almost ideally to the adoption of a multi-agent approach. There are likely to
be large numbers of interactions between entities for various purposes, and the need for
automation is substantial and pressing. The myGrid [3] e-Science project
(http://www.mygrid.org.uk) may also merit the application of the agent paradigm [4].
Another project is the Italian Grid (http://www.grid.it), which aims to provide platforms for
high-performance computational grids oriented at scalable virtual organizations. Promising
experimental studies on the integration of Grid and agent technology are also being carried
out in the framework of a new project, LITBIO (Interactive Laboratory of Bioinformatics
Technologies; http://www.litbio.org). The applicability of agents to genome analysis is
demonstrated by the GeneWeaver project in the UK [5] and by work using DECAF in the US [6,7].
The agent paradigm in telemedicine involves the analysis and design steps of a
system project; this is achieved by means of agent development tools or agent frameworks,
where the system designer's work is naturally driven by the agent concept, exactly as
object-oriented tools help in analyzing, designing and implementing object-oriented systems. The
agent must also have some intelligent capabilities, e.g. clusterization as a basic practical tool in
computer diagnostics and prediction of disease outcomes.
This paper addresses the problem of designing agents for decision making by means of
a clusterization-oriented approach. We present a comparative analysis of two clusterization
methods applied to problems of gene expression and on-line diagnostics of acute
myocardial infarction (AMI), since it is well known that clusterization is one of the popular
tools for understanding the relationships among various conditions and the features of various
objects.
In [8] a new clustering method was proposed, applicable to either weighted or
unweighted graphs, in which each cluster consists of a highly dense core region surrounded by
a region with lower density. The nodes belonging to the dense cores of clusters are then divided
into groups, each of which is the representative of one cluster. These groups are finally expanded
into complete clusters covering all the nodes of the graph.
The support vector machine (SVM) method [9,10,2,3] is now one of the most
popular classification tools in informatics. The main idea of SVM is that when the points of the
two classes cannot be separated by a hyperplane in the original space, they may be
transformed to a higher-dimensional space where they can be separated by a hyperplane. In
SVM, the kernel is introduced so that computing the separating hyperplane becomes very fast.
The saddle point search algorithm requires finding projections onto the intersection of a cube
and a plane.
The goal was to compare these two approaches, improving them with the modifications
described below, and to test both on diagnostics and prediction problems.
2. Coring clusterization and SVM problems
Let us consider an undirected proximity graph $G = (V, E, W)$, where $V$ is a set of
nodes, $E$ is a set of edges, and $W$ is a matrix with entry $w_{ij}$ being the weight of the edge
between nodes $i$ and $j$. In proximity graphs, $V$ represents a set of data objects and
$w_{ij} \ge 0$ represents the similarity of the objects $i$ and $j$. A higher value of $w_{ij}$
reflects a higher degree of similarity. Thus, applying a graph clustering method to a proximity
graph will produce a set of subgraphs, such that each subgraph corresponds to a group of
similar objects, which are dissimilar to the objects of groups corresponding to other subgraphs.
We assume that every cluster of the input graph has a region of high density called a cluster
core, surrounded by sparser regions (non-core). The nodes in cluster cores are denoted as
core nodes, the set of core nodes as the core set, and the subgraph consisting of core nodes
as the core graph.
For each node $i$ of $H \subseteq V$, the local density at $i$ is defined as
$d(i, H) = \big(\sum_{j \in H} w_{ij}\big)/|H|$. The node with the minimum local density in $H$
is referred to as the weakest node of $H$: $\arg\min_{i \in H} d(i, H)$. We define the minimum
density of $H$ as $D(H) = \min_{i \in H} d(i, H)$ to measure the local density of the weakest
node of $H$. By analyzing the variation of the minimum density value $D$, we identify core
nodes located in the dense cores of clusters. The method clusters a proximity graph in several
steps. Our contribution to this method is to replace the function
$d(i, H) = \big(\sum_{j \in H} w_{ij}\big)/|H|$ by
$d(i, H) = \big(\max_{j \in H} w_{ij}\big)/|H|$. This is correct, because the monotonicity
property of the function remains valid.
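A minimal sketch of these density quantities (not the authors' full coring algorithm) may help; the toy proximity matrix below is invented for illustration.

import numpy as np

def local_density(W: np.ndarray, i: int, H: list[int]) -> float:
    """d(i, H) = (sum of w_ij over j in H) / |H| for the original method."""
    return sum(W[i, j] for j in H) / len(H)

def local_density_max(W: np.ndarray, i: int, H: list[int]) -> float:
    """Modified d(i, H) = (max of w_ij over j in H) / |H| as proposed above."""
    return max(W[i, j] for j in H) / len(H)

def weakest_node(W: np.ndarray, H: list[int], density=local_density) -> int:
    """argmin over i in H of d(i, H); its density is the minimum density D(H)."""
    return min(H, key=lambda i: density(W, i, H))

# Toy proximity matrix: nodes 0-2 form a dense core, node 3 is peripheral.
W = np.array([[0, 9, 8, 1],
              [9, 0, 9, 1],
              [8, 9, 0, 2],
              [1, 1, 2, 0]], dtype=float)
H = [0, 1, 2, 3]
print(weakest_node(W, H))                     # 3: lowest local density
print(weakest_node(W, H, local_density_max))  # 3 under the modified density too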
We recall the standard SVM problem in learning classification. We denote by
$\langle x_1, x_2 \rangle$ the inner product of the vectors $x_1$ and $x_2$. Suppose that we
have a learning sample $\{x_i, y_i\}$, $x_i \in R^n$, $y_i \in \{-1; 1\}$, $i = 1, \dots, l$.
The standard formulation of the SVM problem is

$$\min_{\omega,\, b,\, \xi} \Big( \frac{1}{2}\,\|\omega\|^2 + C \sum_{i=1}^{l} \xi_i \Big),
\qquad y_i\,(\langle \omega, x_i \rangle + b) \ge 1 - \xi_i,
\qquad \xi_i \ge 0, \quad C > 0, \quad i = 1, \dots, l.$$

The solution $(\omega^*, b^*, \xi^*)$ gives the optimal hyperplane
$\langle \omega^*, x \rangle + b^* = 0$. Our contribution to the SVM method is that we
preliminarily calculate the significance of all variables based on the Kullback-Leibler
divergence [11] and use this ranking in the simulation runs.
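The following sketch illustrates this kind of modification on synthetic data; the Gaussian form of the Kullback-Leibler significance and all numbers are our assumptions, not formulas from the paper.

import numpy as np
from sklearn.svm import SVC

def kl_significance(X: np.ndarray, y: np.ndarray) -> np.ndarray:
    """Symmetrized KL divergence between the two class distributions of each
    feature, assuming (as an illustration) that each class is Gaussian."""
    a, b = X[y == 1], X[y == -1]
    m1, m2 = a.mean(axis=0), b.mean(axis=0)
    v1 = a.var(axis=0) + 1e-9
    v2 = b.var(axis=0) + 1e-9
    kl12 = 0.5 * (v1 / v2 + (m2 - m1) ** 2 / v2 + np.log(v2 / v1) - 1)
    kl21 = 0.5 * (v2 / v1 + (m1 - m2) ** 2 / v1 + np.log(v1 / v2) - 1)
    return kl12 + kl21

rng = np.random.default_rng(0)              # toy stand-in for the real dataset
X = rng.normal(size=(62, 2000))             # 62 samples x 2000 "genes"
y = np.where(rng.random(62) < 0.65, 1, -1)  # roughly 40 "tumor" vs 22 "normal"
X[y == 1, :15] += 1.0                       # make a few features informative

w = kl_significance(X, y)                    # per-column significance vector
clf = SVC(kernel="linear", C=1.0).fit(X * w, y)  # SVM on the weighted matrix
print(clf.score(X * w, y))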
3. Computing experiments
Having coded the methods described above, we used the test problem from [8] in order to
compare the results of coring clusterization and the modification of SVM. Clustering
applications to gene expression analysis were already demonstrated in [12]. The problem of
tissue clustering aims to find connections between gene expressions and tissue statuses,
i.e. whether we can predict the status of a tissue (cancer or not) based on its gene expressions.
The dataset used in the experiment is available at http://microarray.princeton.edu/oncology/affydata/index.html.
The data contains 62 samples, including 40 tumor and 22 normal colon tissues. Each sample
consists of a vector of 2000 gene expressions. We set aside the sample labels
(tumor/normal) and cluster the samples based on the similarities between their gene
expressions. Ideally, the task is to partition the sample set into two clusters such that one
contains only tumor tissues and the other only normal tissues.
The next task was the problem of predicting acute myocardial infarction (AMI)
outcomes [14]. The problem was formulated as forecasting the outcomes of acute myocardial
infarction on the basis of data from the initial stage of the disease. The total number of
patients from three different clinics was 1224. The number of features used was 39
parameters, but after processing by the Kullback-Leibler method the 15 most
informative features were selected.
4. Results and discussion
A. Cancer diagnostics results
A1. Coring clusterization
The proximity graph constructed from the gene expression vectors is a complete graph
of 62 nodes. Edge weights that reflect the pairwise similarities of samples are computed
based on the Pearson correlation coefficient. The coring method identified 12 core nodes. The
dendrogram of these core nodes exposes two well-separated groups: one contains 10 nodes
and the other 2 nodes. Expanding these cluster cores yields two clusters. One has
40 samples consisting of 37 tumor and 3 normal tissues; the other contains 22 samples
consisting of 3 tumor and 19 normal tissues.
Fig. 1. Comparison of clustering results by the coring method, [12] and [13]
Fig. 1 shows the comparison of clustering results by the coring method, [12] and [13]. The
result of [13] consists of 6 clusters, but joining clusters 1, 4 and 5 into one group of normal
tissues and 2, 3 and 6 into another group of tumor tissues yields a clustering similar to the
result of [12]. Our exchange of the basic function d(i,H) did not change the number of errors
(6 errors in total).
A2. SVM method
The matrix was centered at the expectation and normalized, then processed using the
standard SVM method [3]. On the basis of the training set (32 samples), a separating margin
hyperplane was constructed. This computing experiment resulted in six errors: 2 samples from
the first class and 4 samples from the second class. Therefore the quality of partition can be
estimated as 93.75% on the training sample and 86.67% on the testing sample.
Using the Kullback-Leibler divergence formula, we obtain a significance vector in the
space R^m, where m is the number of columns of the initial matrix. Then the j-th matrix
column was multiplied by the j-th component of the significance vector. The new matrix with
weighted variables was again processed with the standard SVM, i.e. we repeated the computing
experiment described above. The results were different: only four errors, 2 from the first class
and 2 from the second. Accordingly, the accuracy of partition became 93.75% on the training
set and 93.33% on the testing samples.
Fig. 2. Comparison of clustering results by SVM standard and modified procedures
B. AMI prediction results
The dimension of the matrix is 1224 (objects) x 39 (features); the matrix was centered at
the expectation and normalized. The computing experiment was carried out on two sets of
features: the set of initial features (39) and the set of the most informative features (15)
calculated by the Kullback-Leibler divergence formula.
B1. Coring clusterization
We previously defined two classes of patients: with and without complications of
the disease. It was found that the first class consists of 420 patients and the second of
804 patients. Then the data was processed by the coring clusterization algorithm. The results
are shown in the table below.
Number of objects in class 1 / class 2: 420/804
Quality of clusterization (% of accuracy, class 1 / class 2) at different sets of features:
39 features: 54 / 92
15 features: 67.8 / 89.3
B2. SVM method
The clusterization procedure based on the SVM method is the following. First we
determine the support vectors: their total number is 771. Then from the set of support vectors
the learning sample is chosen; the remaining support vectors form the test sample. Note
that the sizes of the learning and test samples are varied. The results of SVM clusterization
are shown below.
Number of objects in the learning sample (class I/class II) and quality of clusterization
(% of accuracy) based on the standard SVM method with 39 and 15 features:

Objects      39 features               15 features
(I/II)       Learning    Test          Learning    Test
             sample      sample        sample      sample
150/150      64          52.87         63.44       52.87
200/200      60          53.13         62          61.66
250/250      56.20       61.17         60.75       62.71
300/300      58.50       88.64         62.12       90.25
350/350      59.71       94.87         64.32       98.56
It is obvious that the accuracy on the learning sample is considerably worse than on the test
sample. Since the support vectors contain the entire information about the given matrix, one can
conclude that the support vectors are highly scattered, and therefore the separating plane is not
adequate. At the same time we get good results on the test sample, because the information
about the test sample is already embedded in the support vectors of the learning sample. The
table implies that the refinement of clusterization is driven by the growth of the learning sample
volume and by the use of informative features. Both reasons are quite clear.
Conclusion
We have tested the graph clustering method and the SVM method, in standard and
modified form, in two computing experiments. Experiments with proximity graphs built from
gene expressions have shown good clustering results. Indeed, the method is simple and fast,
but determining good values of the two proposed parameters needs further research. Core
nodes can represent informative data objects and also make the method robust to noise. The
standard SVM method gives the same results as coring clusterization, but after the
transformation of the initial feature space based on the Kullback-Leibler divergence, the
accuracy of partition improved by 7.68%. Thus we can conclude that coring clusterization
gives more possibilities for interpretation and is more robust to noise, but SVM used in the
transformed space of initial variables is more accurate.
Experiments on the AMI data confirmed that the set of 15 informative features (in both
methods) and an increased training sample (SVM method) are optimal conditions for correct
clusterization.
References
[1] T. Gaasterland and C. Sensen. Fully automated genome analysis that reflects user needs and
preferences: a detailed introduction to the magpie system architecture. Biochimie, 78:302-310, 1996.
[2] W. Fleischmann, S. Müller, A. Gateau, and R. Apweiler. A novel method for automatic
functional annotation of proteins. Bioinformatics, 15:228-233, 1999.
[3] R.D. Stevens, A.J. Robinson, and C.A. Goble. myGrid: personalised bioinformatics on the
information grid. Bioinformatics, 19(suppl. 1):i302-i304, 2003.
[4] L. Moreau, S. Miles, C. Goble et al. On the use of agents in a bioinformatics grid. In
NETTAB Agents in Bioinformatics, 2002.
[5] J.M. Bradshaw. An introduction to software agents. In J.M. Bradshaw, editor, Software
Agents, chapter 1, pp. 3-46. AAAI Press/The MIT Press, 1997.
[6] J.R. Graham, K. Decker, and M. Mersic. DECAF - a flexible multi agent system architecture.
Autonomous Agents and Multi-Agent Systems, 7(1-2):7-27, 2003.
[7] K. Decker, S. Khan, C. Schmidt et al. BioMAS: A Multi-Agent System for Genomic
Annotation. Int. J. Cooperative Inf. Systems, V. 11, pp. 265-292, 2002.
[8] Thang V. Le, Casimir A. Kulikowski, Ilya B. Muchnik. Coring Method for Clustering a
Graph. DIMACS Technical Report, 2008.
[9] V.N. Vapnik. The Nature of Statistical Learning Theory. New York, 1995.
[10] B. Schoelkopf and A.J. Smola. Learning with Kernels: Support Vector Machines,
Regularization, Optimization, and Beyond. MIT Press, 2002.
[11] S. Kullback, R.A. Leibler. On Information and Sufficiency. Annals of Mathematical
Statistics 22 (1), pp. 79-86, 1951.
[12] U. Alon, N. Barkai, D. Notterman, K. Gish, S. Ybarra, D. Mack and A. Levine. Broad patterns of
gene expression revealed by clustering analysis of tumor and normal colon tissues probed by
oligonucleotide array. Proc. Natl. Acad. Sci. USA, 96(12):6745-6750, 1999.
[13] A. Ben-Dor, R. Shamir and Z. Yakhini. Clustering gene expression patterns. Journal of
Computational Biology, 1999.
[14] L.B. Shtein. Trial of forecasting in medicine based on computers. Leningrad University,
Leningrad, p. 145, 1987.
Techniques for parameter monitoring at a Datacenter
M.R.C. Trusca, F. Farcas, C.G. Floare, S. Albert, I. Szabo
National Institute for Research and Development of Isotopic and Molecular Technology,
Cluj Napoca, Romania
A datacenter is a facility used to house computer systems and associated components (telecommunications, storage systems, etc.). The goal of datacenter monitoring is to provide an IT view into the datacenter facility, giving an accurate real-time picture of the current state of the critical infrastructure. There is a need to determine the source of performance problems as well as to tune the systems for better operation. The main tools for our datacenter monitoring are the open-source Ganglia (http://ganglia.sourceforge.net) and NAGIOS (http://www.nagios.org) packages. While Ganglia allows remote viewing of live or historical statistics (such as CPU load averages or network utilization) for all machines that are being monitored, NAGIOS offers complete monitoring and alerting for servers, computing nodes, switches, applications, and services.
Keywords: datacenter, Grid, cluster, Ganglia, Nagios.
Introduction
A datacenter is a facility used to house computer systems and associated components
(telecommunications, storage systems, etc.). Today Grid systems are applied in many research areas, namely:
(i) Fundamental research in particle physics uses advanced methods in computing and data analysis. Particle physics is the theory of the fundamental constituents of matter and the forces by which they interact, and it asks some of the most basic questions about Nature. Some of these questions have far-reaching implications for our understanding of the origin of the Universe [1].
In 2009 the Large Hadron Collider (LHC) [2], with the ALICE, ATLAS, CMS and LHCb experiments, started taking data. The LHC collides protons at the highest energy (√s = 14 TeV) and luminosity (L = 10^34 cm^-2 s^-1) among all accelerators; owing to these performances, high precision measurements become possible and the results may reveal new physics. To fulfil these requirements a high performance distributed computing system is needed.
(ii) Computational simulations based on the atomic description of biological molecules have resulted in significant advances in the comprehension of biological processes. A molecular system has a great number of conformations due to the number of degrees of freedom in the rotations around chemical bonds, leading to several local minima in the molecular energy hyper-surface. It has been proposed that proteins, among the great number of possible conformations, express their biological function when their structure is close to the conformation of global minimum energy [3]. This type of research involves a large amount of computing power and proves to be a very suitable application for grid technology.
(iii) A sizable body of experimental data on charge transport in nanoscopic structures has been accumulated. We face the birth of a whole new technological area: molecule-based and molecule-controlled electronic device research, often termed molecular electronics (ME). The simplest molecular electronic device that can be imagined consists of a molecule connected to two metallic nanoelectrodes. There is now a variety of approaches to forming such nanojunctions (NJ) [4], which differ in the type of electrodes, the materials employed, the way in which the molecule-electrode contacts are established and the number of molecules contacted. Recently the realization of the first molecular memory was reported [5].
The computation of the current-voltage characteristics of a realistic nanodevice needs about 6-10 GB of memory and one week of computing time, which leads to the idea of using a Grid environment with high processing power and MPI support.
In this contribution we present an overview of our Datacenter and the techniques that
we use to monitor its parameters.
NIRDIMT Datacenter
At our site we are hosting a High Performance Computation Cluster used for internal
computational needs and the RO-14-ITIM Grid site. The goal of the data center monitoring is
to provide an IT view into the data center facility to give an accurate real-time picture of the
current state of the critical infrastructure. There is a need to determine the source of performance problems as well as to tune the systems for their better operation.
Fig. 1. Datacenter view - GRID and Cluster hardware
Datacenter overview (Fig. 1):
- 2 x Hewlett Packard Blade C7000 enclosures with 16 ProLiant BL280c G6 servers (2 Intel quad-core Xeon X5570 @ 2.93 GHz, 16 GB RAM, 500 GB HDD), configured from scratch with Scientific Linux 5.5 (Boron) and the open-source TORQUE, MAUI, GANGLIA and NAGIOS packages;
- different Intel compilers, mathematical and MPI libraries;
- different quantum chemistry codes, like AMBER, GROMACS, NAMD, LAMMPS, CPMD, CP2K, Gaussian, GAMESS, MOLPRO, DFTB+, Siesta, VASP, Accelrys Materials Studio.
We also host the RO-14-ITIM Grid site (http://grid.itim-cj.ro).
RO-14-ITIM grid site
RO-14-ITIM Grid Site is of EGEE/WLCG type, running gLite 3.2 as middleware for the public-network computing elements and for the private worker nodes, on which the operating system is Scientific Linux 5.5 x86-64.
The RO-14-ITIM Grid site is certified for production and is registered at the Grid Operations Center (GOC).
The public network consists of the following public-address systems: a CREAM computing element, a user interface (UI), a site-BDII server, a storage element (SE) and a monitoring element (APEL). The storage element has a 120 TB capacity.
The private network contains 60 dual-processor quad-core servers with 16 GB RAM, comprising two HP Blade systems and one IBM Blade system.
The wide area connection works at 10 Gbps. The speed between the Grid elements and the network router is 10 Gbps; it is 20 Gbps between the public networks and 40 Gbps inside the private network and toward the public Grid network. The INCDTIM local and wide area network now operates at 1 Gbps.
System management and monitoring tools
Fig. 1 presents a datacenter that comprises a Grid site and an MPI cluster dedicated to parallel computing solutions.
The everyday activities are in Grid computing, the MPI cluster and networking. Networking sustains the whole Institute's activities: the Internet connection, e-mail, website and databases. Grid computing is centred on an active site named RO-14-ITIM, which dedicates 90% of its time to processing jobs from ATLAS. The other 10% goes into testing the site for reliable functionality and storing or processing data on special request from inside or outside the Institute through the Virtual Organization voitim. In parallel with the Grid site there is an MPI cluster aimed at numerical simulations of computational physics with direct application in biophysics and nanostructures, in which the Institute is involved.
The Grid site and MPI cluster are installed with Scientific Linux 5.5. For future compatibility between them we installed on the Grid site gLite 3.2, based upon Torque and Maui for job management, as well as the latest version of Torque and Maui on the MPI cluster [6].
The 3 Blade systems give us the advantage of monitoring and operating them remotely, so our IT team does not have to enter the datacenter when problems occur. In Fig. 2 we show the Blade system interface, on which one is able to determine, through an advanced system of colors, which system is at fault. We are able to install the systems from outside. We manage and monitor the whole datacenter through the advanced APC system installed in it.
The monitored datacenter is shown in Fig. 3.
Fig. 2. Blade management module Fig. 3. Datacenter monitoring tools
The main tools for our datacenter monitoring are the open-source Ganglia (Fig. 4) (http://ganglia.sourceforge.net) and NAGIOS (Fig. 5) (http://www.nagios.org) packages, which we installed and configured. While Ganglia allows remote viewing of the live or historical statistics (such as CPU load averages or network utilization) for all machines that are being monitored, NAGIOS offers complete monitoring and alerting for servers, computing nodes, switches, applications, and services. For more than one year we have tested them and used them principally to monitor our High Performance Computing cluster. While Ganglia works flawlessly, for the NAGIOS monitoring system we still have to perform various optimizations and to implement and configure more alerts.
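For completeness, the monitored metrics can also be read programmatically: gmond, the Ganglia monitoring daemon, publishes the full cluster state as XML over TCP (port 8649 by default). The following Python sketch fetches that XML and prints the one-minute CPU load per node; the host name here is an assumption for illustration.

import socket
import xml.etree.ElementTree as ET

def fetch_gmond_xml(host="hpc.itim-cj.ro", port=8649):
    # Read the XML dump that gmond serves on its TCP channel.
    chunks = []
    with socket.create_connection((host, port), timeout=10) as sock:
        while True:
            chunk = sock.recv(4096)
            if not chunk:
                break
            chunks.append(chunk)
    return b"".join(chunks)

root = ET.fromstring(fetch_gmond_xml())
for node in root.iter("HOST"):
    for metric in node.iter("METRIC"):
        if metric.get("NAME") == "load_one":   # one-minute CPU load average
            print(node.get("NAME"), metric.get("VAL"))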
Fig. 4. Ganglia Cluster monitoring system http://hpc.itim-cj.ro/ganglia
Fig. 5. NAGIOS IT Infrastructure Monitoring
Conclusions
The state-of-the-art open-source monitoring systems Ganglia and NAGIOS have been installed and configured. Ganglia allows remote viewing of the live or historical statistics (such as CPU load averages or network utilization) for all machines that are being monitored, while NAGIOS offers complete monitoring and alerting for servers, computing nodes, switches, applications, and services.
Various optimizations and the implementation of new alerts in NAGIOS are still required. We plan to extend the monitoring to the entire datacenter infrastructure.
Acknowledgement
The financial support of the Romanian Research and Development Agency through the EU12/2010, EU15/2010 and POS-CEE InGrid/2009 projects enabled the acquisition of the three Blade systems and an MSA storage array.
References
[1] ATLAS Collaboration. Exploring the Mystery of Matter. Papadakis Publisher, UK, 2008.
[2] LHC Homepage, http://cern.ch/lhc-new-homepage/
[3] S.P. Brown, S.W. Muchmore. J. Chem. Inf. Model., 46, 2006, p. 999.
[4] A. Salomon et al. Comparison of Electronic Transport Measurements on Organic
Molecules. Adv. Mater. 15(22), 2003, pp. 1881-1890.
[5] J.E. Green et al. A 160-kilobit molecular electronic memory patterned at 10^11 bits per square centimetre. Nature 445(7126), 2007, p. 414.
[6] White paper, TIA-942 Data Centre Standards Overview ADC Krone.
Solar panels as possible optical detectors for cosmic rays
L. Tsankov (1), G. Mitev (2), M. Mitev (3)
(1) University of Sofia, Bulgaria
(2) Institute for Nuclear Research and Nuclear Energy, Bulgarian Academy of Sciences, Bulgaria
(3) Technical University, Sofia, Bulgaria
Photovoltaic cells have relatively high sensitivity to visible light and are available as large-area panels at a reasonable price. Their potential use as air Cherenkov detectors for the extensive air showers caused by high energy cosmic rays is therefore very attractive.
In this paper we make an evaluation of different types of photovoltaic (PV) cells. Assemblies of several cells are studied, both connected in series and in parallel, aiming at increasing the sensitive area, improving the performance, etc. We propose a schematic for optimal separation of the fast component of the detector system signal. The threshold sensitivity of the different configurations is estimated and their ability to detect very high energy cosmic rays is discussed.
Introduction
In 1936 the Austrian physicist V.F. Hess (1883-1964) received the Nobel Prize in Physics for the discovery of cosmic rays. Ever since, the largest and most expensive research complexes that have been built have been dedicated to the registration and measurement of the parameters of the primary and secondary cosmic radiation. For that purpose many observation tests have been conducted with blimps and artificial Earth satellites. Many underground and surface observatories (at sea level and high in the mountains), with detector areas sometimes reaching 100 km^2, have been built [1, 2].
Most frequently, during observation of extensive air showers (EAS) at surface stations, the muon component is registered, using groups of organic plastic scintillation detectors (PSDs) whose output signals are passed to fast coincidence circuits [1, 3]. These allow a very precise definition of the moment of the event occurrence but offer more limited capabilities for defining the energy parameters of the registered event. More rarely, liquid (most often water) Cherenkov detectors are used.
Another popular method is the detection of the Cherenkov radiation caused in the atmosphere by the primary high-energy charged particles (at 30-60 km altitude), or by the secondary products of the EAS. Photomultiplier tubes (PMTs), placed in the focus of an optical system, are used as detectors.
The rapid evolution of photovoltaic (PV) cells in the last decade has made them accessible at low prices. Their high efficiency and the possibility of constructing systems with significant area allow for their use in air Cherenkov detectors [5, 6].
The purpose of this work is to evaluate the possibilities of PV-cell based detector systems to distinguish the short light pulses of the Cherenkov radiation from the slow component arising from background light.
This imposes the need to look into new schematic solutions for the signal acquisition and shaping, and to evaluate their usability as components of air Cherenkov detectors of EAS. It is necessary to define whether the sensor is capable of reacting to a short light flux (the duration of the Cherenkov radiation is under 1 µs) and what is the minimum light impulse value (represented as the number of photons in that interval) that can be registered with the corresponding detector.
Review
1. Particularities and limitations
The use of PVs in such a nontrivial mode leads to the following characteristic particularities and problems, which should be considered during the development of the signal acquisition and shaping circuit:
1. The detector's output signal comes from the charge generated in the detector volume, not from the output voltage or current;
2. The mean output current arising from the background light (twilight, full moon, urban area light etc.) can be some orders of magnitude higher than the signal level;
3. PVs have significant capacitance (10-50 nF/cm^2), exceeding manyfold the capacitance of the semiconductor detectors used for ionizing radiation detection.
In order to achieve a large detector surface with PV cells, two approaches exist: parallel or series connection of the cells. Both methods are equivalent with respect to the amount of generated charge. When n elements are connected in parallel, the equivalent capacitance of the assembly is n times greater, while connected in series it is respectively n times smaller. Obviously, connecting the PV cells in series is preferred for obtaining a larger signal from the same generated charge.
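This scaling argument can be checked with a short Python sketch (the per-cell capacitance and the charge are assumed illustrative values): for the same collected charge Q, the voltage step U = Q/C_eq grows by a factor n^2 when going from parallel to series connection of n cells.

n = 36                  # cells per panel, as in the panels tested below
c_cell = 100e-9         # assumed single-cell capacitance, F
q = 1e-12               # assumed collected charge, C

c_parallel = n * c_cell     # parallel connection: capacitances add
c_series = c_cell / n       # series connection: capacitance divides by n

print(f"U parallel: {q / c_parallel:.3e} V")   # smaller signal
print(f"U series:   {q / c_series:.3e} V")     # n^2 times larger signal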
2. Signal acquisition with PV cells
In [7] it is shown that if the capacitance of the semiconductor detector doesn't change over a wide range, signal acquisition with a transimpedance or a charge-sensitive amplifier gives equivalent results. The presence of a significant offset current from the PVs makes a galvanic connection to the amplifier impossible. In [5] the options offered by capacitive separation of the input, or by the use of an isolation transformer, are reviewed.
Another possible solution is the compensation of the offset current. The PV cell output can be presented as a sum of two components (Fig. 1): a slow one, due to the background light (I_BL), and a second one, a fast-changing short pulse, due to the Cherenkov light of the EAS (I_CL). Our idea is to connect opposite the cell a current generator I_C, whose value is equal to the slow component and adaptively follows it. For the short pulse, the high output resistance of the current generator I_C takes the role of a load, which guarantees the full collection of the charge generated in the detector volume by the Cherenkov light.
Fig. 1. Compensation principle: the PV cell currents I_BL (background light) and I_CL (Cherenkov light), the compensating current generator I_C, the low-frequency filter (LFF) and the charge-sensitive amplifier (CSA)
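The compensation idea can be illustrated numerically with the following Python sketch (all waveforms and parameter values are assumptions, not measured data): a one-pole low-pass filter tracks the slow background current and steers the compensating generator, so only the fast pulse survives in the difference.

import numpy as np

fs = 10e6                        # sampling rate, Hz (assumed)
t = np.arange(0, 2e-3, 1 / fs)   # 2 ms observation window
i_bl = 200e-6 * (1 + 0.1 * np.sin(2 * np.pi * 100 * t))        # slow background, ~200 uA
i_cl = np.where((t > 1e-3) & (t < 1e-3 + 1e-6), 10e-6, 0.0)    # 1 us, 10 uA Cherenkov-like pulse
i_pv = i_bl + i_cl                                             # total PV cell current

fc = 1e3                                  # tracking filter cutoff, Hz (assumed)
alpha = 1 - np.exp(-2 * np.pi * fc / fs)  # one-pole IIR coefficient
i_c = np.empty_like(i_pv)
acc = i_pv[0]
for k, x in enumerate(i_pv):
    acc += alpha * (x - acc)              # low-pass output follows only the slow part
    i_c[k] = acc                          # compensating generator setting

residual = i_pv - i_c                     # what the charge-sensitive amplifier sees
print(f"max residual pulse: {residual.max() * 1e6:.1f} uA")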
3. Front-end-electronics schematic
The PV signal preamplifier is realized using an AD8033 operational amplifier (Fig. 2). Depending on the Rfb/Cfb ratio, it works either as a transimpedance or as a charge-sensitive preamplifier. That is illustrated by the form of the output signal in Fig. 3. That ratio influences the amplitude-frequency characteristic of the amplifier (Fig. 4). An overcompensation is seen for Cfb < 5 pF, so the amplifier can lose stability. That can be seen both in the amplitude-frequency and the output signal shape diagrams.
An OTA CA3080 is used to compensate the background and the slow component of the PV current. Fig. 5 shows the device's performance for input pulses with duration 1 µs and amplitude 10 µA, while the slow component changes from 0 to 500 µA. It can be seen that the disturbance is successfully compensated up to 450 µA. We get an output signal with considerable amplitude without any significant offset. That guarantees the schematic would work flawlessly in high background lighting conditions, e.g. twilight, full moon or urban areas.
A first-order low-pass filter is implemented based on Rf and Cf, followed by a buffer amplifier CA3140. The signals with frequency lower than the cut-off frequency of the filter are fed to the input of the adjustable current generator and change its output value, compensating the low-frequency background and noise. The high-frequency signals (in this case caused by the short light pulses) can't get through the filter, so the current generator keeps its value, corresponding to the mean value of the offset current of the PV cell. The high output resistance of the current generator guarantees the full collection of the charge induced by the short light pulse.
Fig. 2. Front-end electronics schematic: PV cell, AD8033 preamplifier (Rfb, Cfb), CA3080 OTA compensation stage and CA3140 buffer with the Rf/Cf low-pass filter
Fig. 3. Output signal shape vs Cfb
Fig. 4. Bandwidth of the amplifier vs Cfb
Fig. 5. Compensation of the slow component
The influence of the filter's cut-off frequency on the output is illustrated in Fig. 6. It can be seen that the compensation circuit does not deteriorate the rise time or the output amplitude of the pulse, i.e. its operation doesn't change the signal.
The output signal vs. the detector's capacitance is shown in Fig. 7. It is seen that the amplifier might become unstable at high capacitance. This suggests the general rule when connecting PV cells into batteries: they should be connected in series in order to decrease the total capacitance. In our experiments this value was decreased to 7.2 nF and 3.6 nF for the different configurations.
4. Test setup
Our experimental setup (Fig. 8) uses a light pulse generator [8] with adjustable amplitude and duration of the signal, and interchangeable LEDs (red, green and blue). It is possible to set 15 different levels of the LED drive current, either in continuous or in pulse mode. The light pulse length is step-adjustable in the range between 50 ns and 250 µs.
The PV output current is measured in continuous-mode lighting, using a highly sensitive digital microammeter (6-digit multimeter Hameg 8112-3). That makes possible, by factoring in the light pulse length, the calculation of the charge generated in the volume of the PV cell. The amplitude and the shape of the output pulses are monitored with a digital oscilloscope (TDS 2022).

Fig. 8. Block diagram of the test setup: pulse generator driving the LED, PV cell with CSA and OTA stages, the Hameg 8112-3 multimeter measuring I_LED and I_PV, and the TDS 2022 oscilloscope
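The charge calibration just described amounts to a few lines of arithmetic; the Python sketch below (with assumed example numbers) converts the measured DC photocurrent and the pulse length into the charge and the photoelectron surface density used in the results.

E_CHARGE = 1.602e-19          # elementary charge, C

i_dc = 12e-6                  # measured DC photocurrent, A (assumed example)
t_pulse = 1e-6                # light pulse length, s
area = 36 * 0.060 * 0.015     # panel area, m^2 (36 cells of 60x15 mm)

q_pulse = i_dc * t_pulse      # charge generated per pulse, C
n_pe = q_pulse / E_CHARGE     # generated carriers per pulse
print(f"{q_pulse:.2e} C per pulse -> {n_pe / area:.2e} pe/m^2")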
Results
Series of measurements were conducted using commercially available PV panels, consisting of either 36 cells sized 60x15 mm (nominal output 5 W at 12 V), or 36 cells of 60x30 mm each (nominal output 10 W at 12 V). The PV cells are internally connected in series. Two of the smaller panels were also used connected in series as an aggregate panel. The shape of the output pulse is presented on the oscillogram (Fig. 9), together with the pulse
lighting the LED.

Fig. 6. Output signal shape as a function of Cf (filter's cut-off frequency)
Fig. 7. Output signal shape vs detector's capacitance

The experiments, carried out using slowly changing background light (e.g.
reflected luminescence light), have proven that the schematic successfully compensates these
disturbances.
Fig. 10 shows the signal-to-noise (S/N) ratio for the three panel assemblies as a function of the signal (the number of photoelectrons generated by the 1 µs LED pulse per m^2). An S/N ratio of 3 is reached at about 10^8 pe/m^2, while the night sky is estimated to give about 10^12 pe/m^2 per second (i.e. 10^6 pe/m^2 for the pulse duration).
Conclusion
The experimental results show that both the performance and the sensitivity of PV cells are sufficient to register the Cherenkov component of very high energy EAS. The compensation circuit for the slow component (due to the background light) allows the observation period to be increased significantly, to the whole night, even at full moon, as well as performing observations at sites having a poor astroclimate.
Acknowledgement
The present research is supported by the Technical University - Sofia under Contract
112051-3.
References
[1] P.K.F. Grieder. Cosmic Rays at Earth: Researcher's Reference Manual and Data Book. Elsevier, Amsterdam, 2001, p. 1093.
[2] V.F. Sokurov. Physics of cosmic rays: cosmic radiation. Rostov/D.: Feniks, 2005 (in Russian).
[3] http://livni.jinr.ru
[4] V.S. Murzin. Introduction to Physics of Cosmic Rays. Moscow: Atomizdat, 1979 (in Russian).
[5] S. Cecchini et al. Solar panels as air Cherenkov detectors for extremely high energy cosmic
rays. arXiv:hep-ex/0002023v1 (7 Feb 2000).
[6] D.B. Kieda. A new technique for the observation of EeV and ZeV cosmic rays. Astroparticle
Physics 4, 1995, pp. 133-150.
[7] H. Spieler. Semiconductor detector systems. Oxford Press, 2005.
[8] G. Mitev, L. Tsankov, M. Mitev. Light Pulse Generator for Photon Sensor Analysis. Annual
Journal of Electronics, ISSN 1313-1842, Vol. 4, N. 2, 2010, pp. 111-114.
Fig. 9. Oscillogram of the output pulse together with the pulse lighting the LED
Fig. 10. Signal-to-noise ratio vs signal [pe/m^2] for the three panel configurations: 2x5 W in series, 1x5 W, 1x10 W
Managing Distributed Computing Resources with DIRAC
A. Tsaregorodtsev
Centre de Physique des Particules de Marseille, France
Many modern applications need large amounts of computing resources both for calculations and data
storage. These resources are typically found in the computing grids but also in commercial clouds and computing
clusters. Various user communities have access to different types of resources. The DIRAC project provides a
solution for the easy aggregation of heterogeneous computing resources for a given user community. It also helps to organize the work of the users by applying policies regulating the usage of common resources. DIRAC was initially developed for the LHCb Collaboration, a large High Energy Physics experiment at the LHC accelerator
at CERN, Geneva. The project now offers a generic platform for building distributed computing systems. The
design principles, architecture and main characteristics of the DIRAC software will be described using the LHCb
case as the main example.
1. Introduction
The High Energy Physics (HEP) experiments and, first of all, the LHC experiments at CERN have dramatically increased the need for computing resources to digest the tremendous amounts of accumulated experimental and simulation data. Most of the computing resources needed by the LHC HEP experiments, as well as by some other
communities, are provided by computing grids. The grids provide uniform access to the computing and storage resources, which greatly simplifies their usage. The grid middleware stack also offers the means to manage the workload and data for the users. The success of the grid concept resulted in the emergence of several grid middlewares that are incompatible with each other. Multiple efforts to make different grid middlewares work with each other were not successful, leaving one to seek the solution for interoperability elsewhere.
The grids are not the only way to provide resources to the user communities. There are still many sites (universities, research laboratories, etc.) which hold computing clusters of considerable size but are not part of any grid infrastructure. These resources are mostly used by local users and cannot easily be contributed to the pool of common resources of a wider user community, even if the site belongs to its scientific domain. Installing the grid middleware needed to include such computing clusters in a grid infrastructure is prohibitively complicated, especially if there are no local experts to do that. There are also emerging new sources of computing power, which are now commonly called computing clouds. Commercial companies provide most of these resources now, but open-source cloud solutions of production quality are also appearing.
The grid users are organized in communities with common scientific interests. These communities also share common resources provided by the community members. Apparently, the contributed resources can come from any source listed above and a priori they are not uniform in their nature. Including all these resources in a coherent system as seen by the users is still a challenging task. In addition, the common resources assume common policies for their usage. Formulating and imposing such policies, while acknowledging the possible requirements of the resource providers (sites), is yet another challenging task.
The variety of requirements of different grid user communities is very large and it is difficult to meet everybody's needs with just one set of middleware components. Therefore, many communities, and most notably the LHC experiments, have started to develop their own sets of tools, which are evolving towards complete grid middleware solutions. Examples are numerous, ranging from subsystem solutions (the PANDA workload management system [1] or the PhEDEx data management system [2]) to close-to-complete grid solutions (the AliEn system [3]).
The DIRAC project provides a complete solution for both workload and data management tasks in a distributed computing environment [4]. It also provides a software framework for building distributed computing systems. This allows easy extension of the already available functionality for the needs of a particular user community. The paradigm of grid workload management with pilot jobs introduced by the DIRAC project brings an elegant solution to the computing resource heterogeneity problem outlined above. Although developed for the LHCb experiment, the DIRAC project is designed to be a generic system with LHCb-specific features well isolated as plugin modules [5].
In this paper we describe the main characteristics of the DIRAC workload
management system in Section 2. Its application to managing various computing resources is
discussed in Section 3. Section 4 presents the way in which community policies are applied in DIRAC.
2. DIRAC Workload Management
The workload management system ensures dispatching of the user payloads to the worker nodes for execution. One can distinguish two workload management paradigms. The first consists in using a broker service, which chooses an appropriate computing resource with capabilities meeting the user job requirements and pushes the job to this resource. In this case the user job is not kept by the broker until the execution but is directly transmitted to the target computing cluster to enter the local waiting queue. This paradigm is used by the standard gLite grid middleware, for example [6]. The alternative paradigm uses a special kind of jobs called pilot jobs. The pilot jobs are dispatched to the worker nodes and pull the actual user jobs, which are kept in a Task Queue service central to a given community of users. The DIRAC project uses the pull paradigm with pilot jobs; its properties and advantages are described in this section [7].
2.1. Pilot jobs scheduling paradigm
In the pilot job scheduling paradigm (Fig. 1), the user jobs are submitted to the central Task Queue service. At the same
time the pilot jobs are submitted to the
computing resources by specialized
components called Directors. Directors use
the job scheduling mechanism suitable for
their respective computing infrastructure:
grid brokers, batch system schedulers,
virtual machine dispatchers for clouds, etc.
The pilot jobs start execution on the worker
nodes, check the execution environment,
collect the worker node characteristics and
present them to the Matcher service. The
Matcher service chooses the most
appropriate user job waiting in the Task
Queue and hands it over to the pilot for
execution. Once the user job is executed and
its outputs are delivered to the user, the pilot
job can take another user job if the
remaining time of the worker node
reservation is sufficient.
Fig. 1. Workload Management with pilot jobs
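The pilot logic can be summarized in a short Python sketch (purely illustrative; object interfaces such as request_job or environment_is_sane are assumed names, not the DIRAC API):

import time

def run_pilot(matcher, worker_node, reservation_ends):
    # Illustrative pilot-job loop: check the slot, then pull and execute
    # user jobs from the central Task Queue while reserved time remains.
    if not worker_node.environment_is_sane():
        return                                         # never fetch a job into a bad slot
    capabilities = worker_node.collect_capabilities()  # CPU, memory, disk, software
    while time.time() < reservation_ends:
        job = matcher.request_job(capabilities)        # Matcher picks the best eligible job
        if job is None:
            break                                      # nothing suitable in the Task Queue
        job.execute()
        job.upload_outputs()                           # outputs delivered to the user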
There are several obvious advantages of this scheduling paradigm. The pilot jobs check the sanity of the execution environment (the available memory, disk space, installed software, etc.) before taking the user job. This dramatically reduces the failure rate of the user jobs compared with the case when the user jobs are dispatched directly to the worker nodes. The ability of the pilot jobs to execute multiple user jobs considerably reduces the load on grid infrastructures, because fewer jobs are managed by the grid brokers. On the sites there is no longer a need to configure multiple queues of different lengths to better accommodate short and long user jobs, because pilot jobs can fully exploit time slots in the long queues. However, there are also other important advantages of this job scheduling method that are vital for large user communities and which will be discussed in subsequent sections.
2.2. Security aspects of the pilot job scheduling
The pilot job scheduling paradigm has several security properties different from the standard grid middleware. Let's look into the details of the delegation of the identity of the owners of the user and pilot jobs. The user jobs are submitted to the central Task Queue service with the credentials (grid proxies) of the users who own the jobs. The pilot jobs are submitted when there are user jobs waiting in the Task Queue. There are two possible cases. In the first case, the pilot jobs are submitted to the grid resources with the same credentials as the corresponding user jobs. Pilot jobs submitted in this way are called private. The private pilot jobs can only pick up the jobs of the user with the same identity as that of the pilot job itself. In this case the owner of the executed user payload is the same as the owner of the pilot job as seen by the site computing cluster. Site managers have full control of the job submission and can easily apply site policies, like blacklisting misbehaving users by banning the submission of their pilot jobs.
In the second case, the pilot jobs are submitted with special credentials, the same for the whole user community, e.g. the LHCb Collaboration. These pilot jobs are not submitted for the jobs of a particular user, and their credentials allow them to pick up the jobs of any user of this community. These are the so-called generic or multiuser pilot jobs. In this case the site managers see the identities of the pilot jobs, which are not the same as those of the actual payload owners. The managers can either delegate to the user community the traceability of the executed payloads or impose the requirement for the pilot jobs to interrogate the site security services to verify the rights of the payload owner before executing it on the site. The latter is achieved by using the gLExec [8] tool, recently developed for this purpose and widely deployed on WLCG sites.
The advantage of the first case is that the security properties of the job scheduling system are the same as for the base workload management of the grid infrastructure, for example the gLite WMS in the case of the WLCG grid. However, in this case there is no possibility to manage the community policies in one central place, as described in Section 4. Since the possibility to take community policies into account during job scheduling is extremely important, multiuser pilot job based scheduling became very popular and is used by all four LHC experiments, in particular.
The use of pilot jobs, and especially of generic pilot jobs, requires advanced management of the user proxies. During execution, the user payloads need secure access to various services, with proper authentication based on the payload owner's proxy. The pilot jobs can be submitted with credentials different from those of the owners of the user jobs. Therefore, the pilots are enabled to initiate delegation of the user proxies to the worker nodes after a user job is selected for execution. The DIRAC Project provides a complete proxy management system to support these operations. The ProxyManager service contains a secure proxy repository where users deposit long-lived proxies. These proxies are used for
the delegation of short-lived limited proxies to the worker nodes at the pilots' requests. The same mechanism is used for the user proxy renewal if it expires before the end of the user job execution.
3. Using Heterogeneous Computing Resources
The pilot job scheduling paradigm brings a natural and elegant solution to the problem of aggregating heterogeneous resources in one coherent system from the user perspective. In one phrase, this can be explained by the fact that all the computing infrastructures are different but all the worker nodes are basically the same. Running the pilot jobs on the worker nodes hides the differences of the computing infrastructures and presents the resources to the Matcher service (Fig. 1) in a uniform way. No interoperability between different infrastructures is required. The DIRAC users see resources belonging to different grids as just a common list of logical site names in the same workload management system.
Let's take a closer look at how various types of resources are used with the DIRAC WMS by the LHCb Collaboration.
3.1. Using WLCG resources
Originally, the WLCG grid was used by the LHCb/DIRAC job scheduling system by submitting pilot jobs through the native gLite middleware workload management system. This was the simplest way to build the corresponding pilot Directors; however, this method quickly showed important difficulties. By design the gLite resource brokers are supposed to be central components. They take the site status information from the central information system (BDII) and then decide which site is the most appropriate for the given job. Once the decision is taken, the job is submitted to the chosen site and enters the local task queue there. The capacity of a single resource broker is not sufficient to accommodate the load of a large community like the LHCb experiment. Therefore, LHCb is obliged to use multiple independent resource brokers. In this case the job submission is obviously not optimal. Indeed, all the brokers use the same site state information and choose the same site as the most attractive one (see the right branch in Fig. 2). They start to submit jobs to this site without knowing about each other. The gLite information system is not reactive enough to propagate the changed site status information to the brokers. As a result the site often gets too many submitted jobs, which wait in the local task queue, whereas other sites remain underloaded.
With the recent development and wide deployment of the CREAM Computing Element service, it became easy to submit pilot jobs directly to the sites (left branch in Fig. 2). The CREAM interface allows obtaining more up-to-date Computing Element status information directly from the service rather than from the BDII system. This information is used by the CREAM Director to decide whether the load of the site permits submitting pilot jobs, provided that there are suitable user jobs in the DIRAC Task Queue. In this case, there are individual independent Directors for each site, and the number of pilot jobs submitted is chosen so as to avoid unnecessary site overloading. A typical strategy consists in maintaining a given number of pilot jobs in the site local queue.

Fig. 2. WMS versus direct pilot job submission to the Computing Element

With the direct pilot job submission, the sites are competing with each other for the user jobs, making their turnaround
faster. This is the ultimate goal of any workload management system. In 2011 more than half of the LHCb jobs were executed using direct submission to the CREAM Computing Elements. There are plans to migrate the pilot job submission completely to the direct mode and effectively stop using the gLite WMS service.
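The direct-submission strategy can be sketched as follows (an illustration only: the target queue depth and the method names ce_status, submit_pilot and count_matching_jobs are assumptions, not DIRAC code):

TARGET_WAITING_PILOTS = 20   # assumed per-site target depth of the local queue

def run_director_cycle(site, task_queue):
    # One cycle of a per-site CREAM Director: top up the site's local queue
    # with pilots only while matching user jobs are waiting centrally.
    if task_queue.count_matching_jobs(site.capabilities) == 0:
        return                                # no eligible work: submit nothing
    status = site.ce_status()                 # fresh status from the CE itself, not from BDII
    missing = TARGET_WAITING_PILOTS - status.waiting_pilots
    for _ in range(max(0, missing)):
        site.submit_pilot()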
3.2. DIRAC sites
The DIRAC WMS can also provide access to resources on sites which are not part of any grid infrastructure. It is quite common to encounter sites owning considerable computing power in their local clusters but not willing to be integrated into a grid infrastructure because of lacking expertise or other constraints. The DIRAC solution to this problem is similar to the case of the direct pilot job submission to the CREAM Computing Element service. It consists in providing a dedicated Director component. Two variations of DIRAC site Directors are available.
In the first case, the Director is placed on the gateway node of the site, which is accessible from outside the site and, at the same time, has access to the local computing cluster (Fig. 3). The gateway host must have a grid host certificate, which is used to contact the DIRAC services. The Director interrogates the DIRAC Task Queue to find out if there are waiting user jobs suitable for running on the site. If there are user jobs and the site load is sufficiently low, the Director gets the pilot credentials (proxy) from the DIRAC ProxyManager service, prepares a self-extracting archive containing a pilot job bundled with the pilot proxy and submits it as a job to the local batch system. Once the pilot job starts running on the worker node, it behaves in exactly the same way as any other pilot job, for example one submitted through the gLite resource broker as described above.
In the second case, the Director runs as part of the central DIRAC service. A special dirac user account is created on the site gateway, which is capable of submitting jobs to the local computing cluster. This account is used by the Director to interact with the site batch system through an ssh tunnel, using the dirac user account credentials (either ssh keys or a password).
Fig. 3. DIRAC site with on-site Pilot Director
Fig. 4. DIRAC site with off-site Pilot Director
Otherwise, the behaviour is similar to the previous case. The self-extracting archive containing the pilot job and proxy is transmitted to the site gateway through the ssh tunnel and submitted to the batch system. After the pilot job is executed, its output is retrieved via the same ssh tunnel mechanism.
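The ssh mechanism can be sketched in a few lines of Python (the gateway host, the dirac account and the Torque-style qsub command are assumptions for illustration):

import subprocess

GATEWAY = "dirac@gateway.example.org"   # assumed site gateway and account

def submit_pilot_over_ssh(pilot_archive):
    # Transfer the self-extracting pilot bundle (pilot job + limited proxy).
    subprocess.run(["scp", pilot_archive, f"{GATEWAY}:pilot.run"], check=True)
    # Submit it to the site batch system through the ssh tunnel.
    result = subprocess.run(["ssh", GATEWAY, "qsub", "pilot.run"],
                            check=True, capture_output=True, text=True)
    return result.stdout.strip()        # local batch job identifier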
The first method is used in case the site managers want to have full control of the pilot submission procedure, for example providing their own algorithms for evaluating site availability. The second method requires minimal intervention on the site: only the creation of the dedicated user account and, possibly, setting up a dedicated queue available for this account. This makes the incorporation of new sites extremely easy. The second method is the most widely used by LHCb and other DIRAC user communities. The batch systems that can be used in this way include PBS/Torque, Grid Engine and BQS. Access to other batch systems can easily be provided by writing the corresponding plug-ins.
In LHCb the most notable DIRAC site is the one provided by the Yandex commercial company. As shown in Fig. 5, the Yandex site provides the second largest contribution to the LHCb MC simulation capacity, second only to CERN.
3.3. Other computing infrastructures
Other computing infrastructures accessible through the DIRAC middleware include the OSG and NorduGrid grids, the AMAZON EC2 Cloud and other cloud systems. A DIRAC gateway to BOINC-based volunteer grids is under development as well. These infrastructures are not used by the LHCb Collaboration and, therefore, are not detailed in this paper.
4. Applying community policies
As mentioned in Section 2, the workload management system with pilot jobs offers the possibility of easy application of the community policies. In large user communities, or Virtual Organizations (VOs), sharing large amounts of computing resources, managing the priorities of different activities, as well as the resource quotas for various users and user groups, becomes a very important task. One way to approach this problem is to define the VO-specific policies on sites by setting up special queues for each VO group and assigning priorities to them. However, with the high number of sites serving a given VO (about 200 for the LHC experiments) and the number of VO policy rules rapidly increasing with the number of user groups within the VO, this approach becomes extremely heavy, especially for keeping the rules up to date.

Fig. 5. The LHCb site usage for MC simulation, Sep. 2011
In the DIRAC workload management system with pilot jobs, all the community payloads pass through the central Task Queue. This is where the policies can be applied efficiently and precisely in one single place, instead of scattering them over multiple sites. When the pilots query the Matcher service for user jobs, the Matcher chooses not only the jobs whose requirements correspond to the worker node capabilities, but it also selects the current highest priority job among the eligible ones. It is important to note that the user jobs selected in this way start execution immediately, without entering a site local batch queue with yet another loosely defined delay before the execution. This increases the precision of this method of priority application.
The application of the VO policies in the central Task Queue is of course limited if the pilot jobs are private (see 2.2). Indeed, it is likely that the highest priority job, at the moment when a pilot queries the Matcher service, does not belong to the same user as the pilot job. In this case, only the highest priority job of the same user is taken. This, of course, dramatically limits the usefulness of the policy application, basically only allowing users to prioritize their own payloads. In the case of generic pilot jobs there is no such limitation and the VO policies can be fully applied. In practice, there are sites that allow generic pilot jobs, but there are also sites that do not. In the case of LHCb a mixture of generic and private pilot jobs is used. This somewhat complicates the application of the community policies but is mandatory in order to respect the site local rules.
4.1. Static versus dynamic shares
In DIRAC the priorities can be applied at the level of individual users and user groups.
Users can assign priorities to their jobs as an arbitrary non-negative integer assigned to the Priority JDL job parameter. These priorities are used when selecting among the jobs of the same user.
Each DIRAC user group can have a definition of its job share. These shares are used by the Matcher service to recalculate the job priorities such that the average number of currently executed jobs per group is proportional to the group shares. This allows user groups to be defined per distinct activity (for example, MC simulation, data reprocessing, analysis for conference X, etc.) and priorities to be assigned to each activity. It is important to mention that the production jobs and user jobs are all managed in the same way, using the same computing resources. Avoiding separate resources for production and user activities helps to increase the overall efficiency of the policy application system.
The described priorities and shares are defined statically in the DIRAC Configuration Service (CS). However, it is important that the actual job priorities, which are taken into account during the pilot matching operation, also depend on the history of resource usage by the users and groups. If a given user or group has been working intensively for some time, their priorities are lowered to give more resources to others. The recalculation of the priorities is performed in such a way that the average shares of consumed resources over long periods of time are preserved as defined in the DIRAC CS.
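The effect of such a recalculation can be illustrated with the following Python sketch (the correction model and all numbers are our assumptions; DIRAC's actual algorithm may differ):

def recalculate_priorities(groups):
    # Illustrative fair-share: boost groups running below their static share
    # and penalize groups above it, so long-term averages match the CS shares.
    total_share = sum(g["share"] for g in groups)
    total_running = sum(g["running_jobs"] for g in groups) or 1
    for g in groups:
        target = g["share"] / total_share            # fraction the group should get
        actual = g["running_jobs"] / total_running   # fraction it is getting now
        g["priority"] = target / max(actual, 1e-6)   # >1 boosts, <1 penalizes

groups = [
    {"name": "mc_simulation", "share": 60, "running_jobs": 900, "priority": 0.0},
    {"name": "reprocessing",  "share": 30, "running_jobs": 200, "priority": 0.0},
    {"name": "user_analysis", "share": 10, "running_jobs": 100, "priority": 0.0},
]
recalculate_priorities(groups)
for g in groups:
    print(g["name"], round(g["priority"], 2))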
The DIRAC central Task Queue and the pilot jobs dynamically form a workload management system similar in its properties to a classical batch system. This allows, in principle, applying the same schedulers as used with standard batch systems. In particular, the use of the MAUI batch scheduler in conjunction with the DIRAC Task Queue was demonstrated [9]. In practice, the statically defined user and group shares have been sufficient for all the purposes of the LHCb experiment so far.
4.2. Site policies
It is possible that for some sites extra requirements have to be imposed. For example, a site may not allow more than a certain number of MC production jobs if it is dedicated mostly to user analysis or real data processing. In this case, one can define for each site limits for jobs of some special kind. In this way it is possible to ban certain users or groups from executing their jobs on a site, even if generic pilots are used and the site cannot selectively reject user jobs. Another use case for this facility is to limit the number of simultaneously running jobs requiring a certain resource. For example, one can limit the number of data reprocessing jobs to lower the needed I/O bandwidth of the local storage system.
5. Conclusion
The DIRAC Project, started for the LHCb experiment in 2003, has now evolved into a general purpose grid middleware framework, which allows building distributed computing systems of arbitrary complexity. The innovative workload scheduling paradigm with pilot jobs introduced by the project opened many new opportunities for a more efficient usage of the distributed computing resources. It allows combining in a single system heterogeneous computing resources coming from different middleware infrastructures and from different administrative domains. This aggregation does not require any level of interoperability between the different infrastructures. It is also transparent for the actual resource providers and does not require any DIRAC-specific services running locally on the sites. For the LHCb experiment, the DIRAC workload management system ensures full access to the WLCG grid resources, either via the gLite WMS brokers or with direct access to the CREAM Computing Elements. DIRAC has also provided access to several non-grid sites by fully including them into the LHCb production, monitoring and accounting systems. Other user communities use DIRAC for accessing other resources, like the ILC Collaboration using the OSG grid or the Belle Collaboration using the AMAZON EC2 Cloud. Access to other types of computing resources can be provided with new plug-ins being developed now.
The performance of the DIRAC workload management system is sufficient for a large community like the LHCb experiment, as illustrated in Fig. 6. It is shown that the system can sustain up to 40K simultaneously running jobs at more than one hundred sites.
A very important ingredient of the DIRAC workload management is the system for
definition and application of the community policies for joint use of the common computing
resources. This facility is fully exploited by the LHCb Collaboration to control simultaneous
activities of data production managers and user groups.
Fig. 6. Simultaneously running jobs in the DIRAC WMS, 2011
On the whole, the DIRAC workload management system has proven to be versatile enough to meet all the current and future requirements of the LHCb Collaboration. Therefore, it is now starting to be widely used by other user communities in HEP and other application domains.
References
[1] T. Maeno. PanDA: distributed production and distributed analysis system for ATLAS. J. Phys.: Conf. Ser. 119, 2008, p. 062036.
[2] R. Egeland et al. PhEDEx Data Service. J. Phys.: Conf. Ser. 219, 2010, p. 062010.
[3] S. Bagnasco et al. AliEn: ALICE environment on the GRID. J. Phys.: Conf. Ser. 119, 2008, p. 062012.
[4] http://diracgrid.org
[5] A. Tsaregorodtsev et al. DIRAC3: the new generation of the LHCb grid software. J. Phys.: Conf. Ser. 219, 2010, p. 062029.
[6] Lightweight middleware for grid computing, http://glite.web.cern.ch/glite
[7] A. Casajus, R. Graciani, S. Paterson, A. Tsaregorodtsev. DIRAC pilot framework and the DIRAC Workload Management System. J. Phys.: Conf. Ser. 219, 2010, p. 062049.
[8] D. Groep et al. gLExec: gluing grid computing to the Unix world. J. Phys.: Conf. Ser. 119, 2008, p. 062032.
[9] G. Castellani and R. Santinelli. Job prioritization and fair share in the LHCb experiment. J. Phys.: Conf. Ser. 119, 2008, p. 072009.
On some specific parameters of PIPS detector
Yu. Tsyganov
Flerov Laboratory of Nuclear Reactions, Joint Institute for Nuclear Research, Dubna, Russia
When applying a PIPS detector to the detection of heavy ions and/or their decay products, one deals not only with the standard operating parameters of silicon radiation detectors, such as energy and position resolution, but also with a set of specific parameters, like the surface recombination velocity, the mean value of the recombination charge losses within the whole PHD (pulse height defect), a parameter which estimates to some extent the probability of charge multiplication phenomena, and so on. Some of these parameters are used for simulations of the EVR registered energy spectra in a PIPS detector.
Key words: PIPS detector, computer simulation, registered energy, rare decays.
1. Surface Recombination Concept (SRC)
Charge losses, which cause the PHD (pulse-height defect) in silicon nuclear-radiation detectors irradiated by heavy ions, are determined by the recombination of nonequilibrium current carriers in heavy-ion tracks at the plasma stage.
The value of the relative charge losses is defined by the expression [1-3]:

P = sT/R,

where s is the surface-recombination velocity and R is the particle path in the silicon. Typically s is ~10^3-10^4 cm/s and n·10^2-10^3 cm/s for n-Si(Au) and PIPS detectors, respectively.
Of course, the total value of the PHD is composed of three components: the recombination component discussed above, the stopping component (calculated e.g. by the Wilkins formula) and the direct loss in the metal (or p+ implantation) contact of the top electrode.
The most important factor which influences the detector resolution (for HI, EVRs, FF) at the registration of strongly ionizing particles [4], creating tracks with a high density of non-equilibrium current carriers, is the fluctuation of the charge collected on the detector electrodes. It is necessary to distinguish between two components:
a) the first is caused by non-homogeneities of the detector parameters responsible for the collected charge value [...]
[...] (Eα = 8.79 MeV, T1/2 = 9.7 s).
Summary
The described method of the energy and position calibrations (for the alpha and spontaneous-fission scales) of the measuring system, by the products of the complete fusion reactions 206Pb(48Ca,2n)252No and natYb(48Ca,3-5n)215-220Th and their descendant nuclei, allows us to have a fail-safe and effective registering system. This detection system, in combination with the efficient setup DGFRS and the heavy-ion cyclotron U-400 (FLNR JINR, Dubna), allowed us to synthesize six new superheavy elements with Z = 112-118 (Fig. 7) and to investigate their decay properties during the last decade [1, 12].
For further research into the domain of the SHEs in the vicinity of the closed shells Z = 114 and N = 184, we need to develop the detecting and measuring system of the DGFRS with the aim of increasing the reliability of the detection of nuclei (even in the case of a single event) and of obtaining better energy and position resolutions. It is planned to employ modern measuring modules with digital signal processors, for example PIXIE-16 [14].
Acknowledgements
This work has been performed with the support of the Russian Foundation for Basic
Research under grants Nos 11-02-1250 and 11-02-12066.
Fig.7. The top part of the chart of nuclides
References
[1] Yu.Ts. Oganessian et al. J. Phys. G 34, R165 (2007).
[2] Yu.A. Lazarev et al. JINR Report P13-97-238, Dubna, 1997.
[3] Yu.S. Tsyganov et al. Nucl. Instr. and Meth. in Phys. Res. A 392, 1997, p. 197.
[4] Yu.S. Tsyganov et al. Nucl. Instr. and Meth. in Phys. Res. A 525, 2004, pp. 213-216.
[5] V.G. Subbotin et al. (to be published).
[6] V.G. Subbotin, A.N. Kuznetsov. JINR Report 13-83-67, Dubna, 1983.
[7] V.G. Subbotin et al. Proceedings of the XXI International Symposium on Nuclear Electronics and Computing NEC'2007, Varna, Bulgaria, 2007, pp. 401-404.
[8] N.I. Zhuravlev et al. JINR Report 10-8754, Dubna, 1975.
[9] N.I. Zhuravlev et al. JINR Report P10-88-937, Dubna, 1988.
[10] A.M. Sukhov et al. JINR Report P13-96-371, Dubna, 1996.
[11] Yu.S. Tsyganov, A.N. Polyakov. Nucl. Instr. and Meth. in Phys. Res. A 513, 2003,
p. 413; A 558, 2006, pp. 329-332; A 573, 2007, p. 161.
[12] Yu.Ts. Oganessian et al. PRL 104, 2010, p. 142502.
[13] Evaluated Nuclear Structure Data File (ENSDF), Experimental Unevaluated Nuclear
Data List (XUNDL), http://www.nndc.bnl.gov/ensdf.
[14] http://www.xia.com/DGF_Pixie-16.html.
High performance TDC module with Ethernet interface
V. Zager, A. Krylov
Joint Institute for Nuclear Research, Dubna, Russia.
Fast measurement with high precision is one of the main requirements for automation hardware. There is a number of hardware standards applied at the Flerov Laboratory of Nuclear Reactions: CAMAC, VME, ORTEC, etc. The CAMAC system is the most popular and at the same time the most outdated one. The specialists of the Automation group of the Accelerator Division have developed a high performance system that exceeds CAMAC controllers in all aspects. At present, unit testing of the ADC and TDC modules with an Ethernet interface is under way. This paper describes the operating principle of the TDC module and its algorithms; fundamentally new ideas were also used when designing and writing the software.
Introduction
A facility for testing electronic products was established on the basis of the heavy-ion accelerator MC-400 at FLNR JINR. Tests have been performed in accordance with a method based on international standards. According to the requirements, the following quantities should be measured: the ion beam density, the fluence, the homogeneity of the beam over the irradiated product and the ion energy. Scintillation detectors were used to measure the ion energy when testing electronic devices. These detectors are small in comparison with the beam cross-section and are mounted on the periphery of the ion transport channel, so as not to shade each other and the field of exposure. The delay time is measured between the far and near detectors located on the ion transport channel. The signal cables from the detectors are strictly of the same length, so the delay times of the signals in both measurement channels have the same value. The time-to-digital converter SmartTDC-01 (Fig. 1) was designed to measure this time delay.
General description
The SmartTDC-01 is a universal 2-channel multihit time-to-digital converter. This complete, multi-functional and wide-range device is well suited for industrial applications and for research. The module is based on the TDC-GP1 chip (acam messelectronic GmbH, Germany) and can operate in several measurement modes, which are selected using the software.
Fig. 1. SmartTDC-01 unit
SmartTDC-01 has two measurement channels, "Stop 1" and "Stop 2", with a 15-bit resolution.
The measuring unit for both channels is started by the sensitive edge of the "Start" pulse. Every channel can receive four independent stops. The various stop pulses can be measured not only against the start pulse, but also against each other. It makes no difference whether the stops arrive over the same or different channels. All time difference combinations between the 8 possible results can be calculated. If one compares events which arrive on different channels, it is possible to measure time differences down to zero. When comparing events that arrive on one channel, the double pulse resolution of the specific channel limits the precision. Figure 2 illustrates the timings. The double pulse resolution is typically in the range of 15 ns, i.e. if two stops arrive on the same channel within less than 15 ns, the second stop will be ignored, since it arrives during the recovery time of the measurement unit.
Key features:
2 channels with 250 ps resolution or 1 channel with 125 ps resolution;
4-fold multihit capability per channel, queuing for up to 8-fold multihit;
resolution on both channels absolutely identical;
double-pulse resolution approx. 15 ns;
retriggerability;
2 measurement ranges: 3 ns - 7.6 µs, and 60 ns - 200 ms (with predivider, only 1 channel);
the 8 events on both channels can be measured against one another arbitrarily, with no minimum time difference; negative time differences are possible;
edge sensitivities of the measurement inputs are adjustable;
efficient internal 16-bit ALU; the measured result can be calibrated and multiplied by a 24-bit integer;
Ethernet interface, TCP/IP protocol.
Hardware description:
The TDC module is assembled on a separate board compatible with the ACPU-01 processor module; the boards are connected using mezzanine technology. The ACPU-01 processor module has the following characteristics:
Ethernet chip WIZnet W5300:
supports hardwired TCP/IP protocols: TCP, UDP, ICMP, IPv4, ARP, Ethernet;
supports 8 independent sockets simultaneously;
high network performance: up to 80 Mbps (DMA);
10BaseT/100BaseTX Ethernet PHY;
internal 128 Kbytes of memory for data communication (internal TX/RX memory).
Main CPU ATmega128:
up to 16 MIPS throughput at 16 MHz;
128 Kbytes of in-system self-programmable flash program memory;
4 Kbytes EEPROM;
4 Kbytes internal SRAM;
programmable watchdog timer with on-chip oscillator.
There is a USB interface for configuring and debugging the local software.
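As an illustration of how a host application might talk to the module over its TCP/IP interface, the following minimal Python sketch opens a socket to the unit and reads back one measurement. The IP address, port and command byte are hypothetical placeholders, not the module's documented protocol.

import socket
import struct

MODULE_ADDR = ("192.168.0.100", 5000)  # hypothetical IP/port of the SmartTDC-01
READ_RESULT = b"\x01"                  # hypothetical "read last result" command

def read_time_delay() -> float:
    """Request one TDC result and return the time delay in seconds."""
    with socket.create_connection(MODULE_ADDR, timeout=2.0) as sock:
        sock.sendall(READ_RESULT)
        raw = sock.recv(4)                   # assume a 4-byte big-endian reply
        (ticks,) = struct.unpack(">I", raw)  # calibrated TDC ticks
        return ticks * 250e-12               # 250 ps per tick in 2-channel mode

if __name__ == "__main__":
    print("delay = %.3f ns" % (read_time_delay() * 1e9))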
Theory of operation:
The measurement range from 3 ns to 7.6 µs was chosen for the energy measurement. In this mode the triggering input "start" of the TDC is connected to the photomultiplier located closer to the accelerator, and the stop input "stop one" to the photomultiplier located farther along the ion beam channel.
Fig. 2. Timing measurements
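The measured delay converts into an ion energy through the time of flight between the two photomultipliers. The sketch below shows the relativistic arithmetic, assuming a hypothetical flight base of 1.5 m; the actual detector separation in the MC-400 channel is not given in this paper.

import math

C = 299_792_458.0          # speed of light, m/s
AMU_MEV = 931.494          # atomic mass unit, MeV/c^2
BASE_M = 1.5               # hypothetical detector separation, m

def energy_per_nucleon(delay_s: float) -> float:
    """Kinetic energy per nucleon (MeV/nucleon) from the measured flight time."""
    beta = BASE_M / delay_s / C
    gamma = 1.0 / math.sqrt(1.0 - beta * beta)
    return (gamma - 1.0) * AMU_MEV

# Example: a 50 ns delay over 1.5 m (beta = 0.1) gives about 4.7 MeV/nucleon.
print("%.1f MeV/nucleon" % energy_per_nucleon(50e-9))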
Software description:
The special software was developed by Labview for working with Smart TDC-01
module (Fig. 3).
Fig. 3. Software for the ion beam energy measurement
Conclusions
At present, the SmartTDC-01 is in operation at the MC-400 accelerator. The beam energy is calculated twice per second; this is sufficient to update the bar chart on the operator screen. The ion energy is measured by the SmartTDC-01 module with an accuracy of about 1% (MeV/nucleon). The measurements of the ion energy were carried out jointly with the Russian Federal Space Agency, and the measurement results proved very accurate.
References
[1] acam messelectronic gmbh: TDC-GP1, http://www.acam.de
[2] WIZnet: Innovative Embedded Networking, http://www.wiznet.co.kr/
[3] Atmel AVR ATmega128, http://www.atmel.com/dyn/resources/prod_documents/doc2467.pdf
[4] Russian Federal Space Agency Roscosmos, http://www.federalspace.ru/
Front End Electronics for TPC MPD/NICA
Yu. Zanevsky, A. Bazhazhin, S. Bazylev, S. Chernenko, G. Cheremukhina,
V. Chepurnov, O. Fateev, S. Razin, V. Slepnev, A. Shutov, S. Vereschagin and
V. Zryuev
Laboratory of High Energy Physics, Joint Institute for Nuclear Research, Dubna, Russia
1. Introduction
A new scientific program on heavy-ion physics launched recently at JINR (Dubna) is devoted to the study of in-medium properties of hadrons and the equation of state of nuclear matter. The program will be realized at the future accelerator facility NICA. It will provide a luminosity of up to L = 10^27 cm^-2 s^-1 for Au^79+ over the energy range 4 < √s_NN < 11 GeV. Two interaction points are foreseen at the collider. One of the two detectors is the Multi-Purpose Detector (MPD), optimized for the study of the properties of hot and dense matter in heavy-ion collisions [1, 2]. At the design luminosity, the event rate capability of the MPD is about 5 kHz; the total charged-particle multiplicity exceeds 1000 in the most central Au+Au collisions at √s_NN = 11 GeV.
2. Requirements to TPC readout electronics
The Time-Projection Chamber (TPC) is the main tracking detector of the MPD (Fig. 1). The TPC readout system will be based on Multi-Wire Proportional Chambers (MWPC) with cathode readout pads. The TPC will provide efficient tracking at pseudorapidities up to |η| = 1.2, high momentum resolution for charged particles, good two-track resolution and efficient hadron and lepton identification by dE/dx measurements. To achieve this performance together with the main TPC features (Table 1), the parameters of the Front End Electronics (FEE) have to satisfy several strong requirements (Table 2).
Fig. 1. Common view of TPC/MPD
Table 1. TPC parameters

Required performance of the TPC
  Parameter              Value
  Spatial resolution     σ_x ~ 0.6 mm, σ_y ~ 1.5 mm, σ_z ~ 2 mm
  Two-track resolution   < 1 cm
  Momentum resolution    < 3% (0.2 < p_t < 1 GeV/c)
  dE/dx resolution       < 8%

TPC main features
  Size                          3.4 m (length) x 2.2 m (diameter)
  Drift length                  150 cm
  Data readout                  2 x 12 sectors (MWPC, cathode pad readout)
  Total number of time samples  350
  Total number of pads          ~80,000
  Gas gain (Ar + 20% CO2)       10^4
Data from the 24 readout chambers are collected by the TPC FEE. The FEE has to provide reliable operation, low noise, optimal shaping, complex signal processing, small power consumption, etc. (Table 2).
The electronics has to take several samples of each ionization cluster that reaches the pads, after which a fit can be used to localize the hit. Estimates show that the contribution of the electronics noise to the space resolution is comparable to the chamber resolution when the signal-to-noise ratio (S/N) is about 30:1 for the mean of a MIP (ENC < 1000 electrons).
The dynamic range is determined by the energy loss dE/dx of the produced particles. Taking into account that the maximum ionization of a 200 MeV/c proton is 10 times the ionization of a MIP, that the path length is longer at non-zero dip angle, that the signal-to-noise ratio is ~30, and allowing for Landau fluctuations, the required dynamic range is about 1000. Therefore a 10-bit sampling ADC is required.
The drift velocity, drift length and diffusion of the primary electrons determine the timing constants of the FEE. The average longitudinal diffusion determines the peaking time, and the electronics is best matched to the cluster signal if the shaping time is comparable to the width of this signal (about 160-180 ns, FWHM).
Power consumption should be no more than 40 mW per channel to keep the temperature of the TPC gas volume stable (±0.1 °C) with an appropriate cooling system.
Table 2. Main parameters of the FEE

  Parameter                   Value
  Total number of channels    80,000
  Signal-to-noise ratio, S/N  > 20:1 @ MIP (ENC < 1000 e-)
  Dynamic range               1000 (10-bit sampling ADC)
  Shaping time                ~170 ns
  Sampling                    12 MHz
  Tail cancellation           < 1% (after 1 µs)
  P_cons                      ~40 mW/ch
3. 64-channel Front End Boards (FEB-64)
A version of the FEE based on the PASA (analogue) and ALTRO (digital) chips has been chosen [3]. The FEB-64 card contains 64 channels, the most flexible solution for our case (Fig. 2).
Fig. 2. Block scheme of the 64-channel FEE for TPC/MPD
A single readout channel consists of three basic units: a charge-sensitive amplifier/shaper; a 10-bit low-power sampling ADC; and a digital circuit that contains a shortening digital filter for tail cancellation, baseline subtraction, zero-suppression circuits and a multi-event buffer. The charge induced on the pad is amplified by the PASA. It is based on a charge-sensitive amplifier followed by a semi-Gaussian pulse shaper of fourth order. It produces a pulse with a rise time of 120 ns and a shaping time of about 190 ns (i.e. near the optimal value). The ENC of the PASA is less than 1000 e-.
The output of the differential amplifier/shaper chip is fed to the input of the ALTRO. The ALTRO contains 16 channels which digitize and process the input signals. After digitization, a baseline correction unit removes systematic perturbations of the baseline by subtracting a pattern stored in a memory. The tail cancellation filter removes the long complex tail of the detector signal, thus narrowing the clusters (by up to a factor of 2) to improve identification. It can also remove the undershoot that typically distorts the amplitude of the clusters when pile-up occurs. A second baseline correction is performed based on the moving average of the samples that fall within a certain acceptance window; this procedure removes non-systematic perturbations of the baseline. The zero-suppression scheme removes all data below a certain threshold.
After digital processing the data flow into a buffer memory several events deep, to avoid data loss due to DAQ dead time.
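As a rough illustration of this processing chain (not the actual ALTRO firmware), the following NumPy sketch applies a fixed-pattern baseline subtraction, a moving-average baseline correction and a zero-suppression threshold to one sampled channel; the pattern, window and threshold values are invented, and the tail-cancellation filter is omitted for brevity.

import numpy as np

def process_channel(samples, pedestal, window=8, threshold=5.0):
    """Simplified ALTRO-like chain: pattern subtraction, moving-average
    baseline correction, zero suppression. All constants are illustrative."""
    # 1. First baseline correction: subtract the stored pedestal pattern.
    corrected = samples.astype(float) - pedestal
    # 2. Second baseline correction: moving average of below-threshold samples.
    quiet = np.where(np.abs(corrected) < threshold, corrected, 0.0)
    baseline = np.convolve(quiet, np.ones(window) / window, mode="same")
    corrected -= baseline
    # 3. Zero suppression: keep only samples above threshold.
    corrected[corrected < threshold] = 0.0
    return corrected

adc = np.random.poisson(3, 350)                 # fake 350 time samples of noise
adc[100:105] += np.array([20, 60, 90, 50, 15])  # one injected cluster
print(np.nonzero(process_channel(adc, pedestal=3.0))[0])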
[Fig. 2 shows the readout chain: ~80,000 pads feed the PASA (4 chips x 16 ch/chip; shaping FWHM = 190 ns, gain = 12 mV/fC), then a 10-bit ADC and the ALTRO digital circuit (baseline correction, tail cancellation, zero suppression; 4 chips x 16 ch/chip), then a multi-event buffer memory read out by the readout controller; an FPGA provides control; the L1 trigger rate is 5 kHz.]
The designed FEB-64 contains chips on both sides of the PCB. Control functions are performed by an FPGA. Several housekeeping parameters will also be read out, namely voltage, current and temperature. Each card will be put into a Cu-plated envelope with tubes for cooling water. The maximum data readout rate of the FEB-64 is > 200 MB/s.
The large granularity of the TPC (~80,000 pads x 350 time bins) leads to an event size of ~0.5 MB after zero suppression. At a trigger rate of ~5 kHz the maximum data flow is ~2.5 GB/s for the whole TPC, which will be further compressed by several times.
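These throughput figures follow directly from the granularity; a minimal sketch of the arithmetic, using the paper's numbers (the raw size before zero suppression is added for comparison):

pads, time_bins, adc_bits = 80_000, 350, 10
trigger_rate_hz = 5_000

raw_event_bytes = pads * time_bins * adc_bits / 8  # 35 MB before suppression
zs_event_bytes = 0.5e6                             # ~0.5 MB after zero suppression
occupancy = zs_event_bytes / raw_event_bytes       # ~1.4% of the samples survive

data_flow = zs_event_bytes * trigger_rate_hz       # ~2.5 GB/s for the whole TPC
print("raw %.0f MB, occupancy %.1f%%, flow %.1f GB/s"
      % (raw_event_bytes / 1e6, occupancy * 100, data_flow / 1e9))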
Conclusion
Prototyping of the FEE based on the PASA and ALTRO has been started. The first test of the 64-channel card (FEB-64 prototype) is expected to be performed at the beginning of 2012.
References
[1] The Multipurpose Detector (MPD), Conceptual Design Report, 2009, JINR Dubna.
[2] Kh.U. Abraamyan et al. The MPD detector at the NICA heavy-ion collider at JINR.
NIM A628, 2011, pp. 99-102.
[3] L. Musa. The ALICE Time Projection Chamber for the ALICE Experiment. 16th International Conference on Ultrarelativistic Nucleus-Nucleus Collisions: Quark Matter 2002, Nantes, France, July 2002. Nuclear Physics A715, 2003, pp. 843-848.
Mathematical model for the coherent scattering of a particle beam
on a partially ordered structure
V.B. Zlokazov
Joint Institute for Nuclear Research, Dubna, Russia
Neutron diffraction is one of the most promising means for the investigation of solid materials with a partially ordered structure, such as lipid membranes, but the problems which the physicist faces while analyzing the data of such diffraction are mathematically among the most ill-posed ones. The report describes an approach to the regularization of such problems which guarantees obtaining a stable solution and estimating the accuracy of these solutions.
Introduction
Let us consider a scheme of a part of a multi-layer lipid membrane.
Fig. 1. A multi-layer structure
Fig. 1 depicts a scheme of a 3-bilayer membrane; the circle denotes the hydrophilic component of the bilayer, and the vector the hydrophobic one.
It is assumed that the small-angle diffraction on this structure is defined essentially only by the periodicity along the z-axis (perpendicular to the membrane plane), so its unit cell is one-dimensional. The symmetries of this cell can be considered as one-dimensional cubic. If we place the zero point at a point C, the cell will be centrosymmetric and consist of 3 scattering items: two negative at the edges and a positive one in the middle.
The theoretical structure factor for a centrosymmetric cell looks as follows:

F(h) = Σ_{j=1..n} ρ(z_j) cos(h z_j),   (1)

where ρ(z_j) is the measure of the scatterer density, proportional to the coherent scattering length and to the occupation of the position in the cell.
F(h) is a characteristic of the quality and quantity of the coherent scattering of a particle beam on a lipid membrane, and it is defined in the reciprocal space as a function of the Miller indices h.
The latter make up the nodes d(h) of the coordinate net of the reciprocal space, the d-spacings. The nodes d(h) are determined by the formula d(h) = 1/√(a*² h²), where a* is the reciprocal-space parameter and h runs over all integer values.
In our case d(h) = D/h, where D is the mean periodicity of the lipid membrane and a direct parameter of the unit cell.
The expression (1) describes the interference part of the particle beam scattered on the lipid membrane for an index h, and in an ideal case it would look like a narrow peak (delta-function) at the node d(h). However, the crystal structure of the membrane is not ideally periodic, and the particle beam is not ideally collimated; therefore a more realistic model of the structure factor than (1) is the following expression:

F(d, d(h)) = F(h) = Σ_{i=1..n} ∫ ρ(z, z_i) cos(h z) dz,   (2)

where n is the number of scattering elements with centers z_i, and each integral is taken over the neighbourhood of z_i.
It has the following meaning: the (Fourier) operator (2) maps a set of items ρ(z, z_i) of the direct space of the crystal onto an element F(d, d(h)) (or F(h)) of the reciprocal space for a given Miller index h.
It describes the results of the interferential scattering as a sum of several continuous functions of a continuous variable d, which have a peak-like appearance with a certain width near the nodes d(h).
Fig. 2 illustrates a typical graph of a real diffraction spectrum of a neutron beam scattered on such a structure.
Fig. 2. An experimental distribution F(d, d(h))
This distribution has an additive (background) component, and its interferential part is essentially broadened.
The standard problem of the crystallographic analysis of particle scattering data from such structures is the determination of the unit cell parameter (in our case D) and the scatterer (further, atomic) coordinates (in our case the density distribution ρ(x)).
Before proceeding to the analysis of concrete models (2), let us see how the Fourier operator works for the most typical functions ρ(x).
Fourier transformation of the scattering density
Let us consider the different types of the scattering density.
(i) ρ(x) = Σ_{i=1..n} A_i δ(x - C_i).
Then from (2) it follows that

F(h) = Σ_{j=1..n} A_j exp(-i C_j h).

This case corresponds to a point-like atomic scatterer, when all the scattering mass is concentrated in the atomic centers. F(h) is the classical one-dimensional structure factor for the Miller indices h.
In the case of central symmetry (ρ(x) = ρ(-x)) we have

F(h) = Σ_{i=1..n} 2 A_i cos(C_i h).

The Fourier operator F transforms the delta-functions of the coordinate space into the delta-functions of the frequency space, the imaginary exponentials:

F: δ(x - x_0) → exp(-i x_0 h).

In the case of central symmetry they are the usual cos(x_0 h) functions. The analogy follows from the fact that the conditioned integrals along the whole axis (x and h) exist for both delta-functions and are the characteristic functions of the point x_0.
(ii) The point model is, as a rule, not realistic; in general there is some continuous distribution of the scattering mass. The simplest kind of such a scatterer is the uniform distribution

ρ(x) = A if |x - c| ≤ a, and 0 otherwise.

On the basis of (2) we get F(h) = (2A sin(ah)/h) exp(ich). The function F(h) is a continuous distribution.
(iii) Next, ρ(x) = A exp(-a|x - c|); in this case F(h) = (2Aa/(h² + a²)) exp(ich). It is another example of a non-point-like scatterer.
(iv) And, finally,

ρ(x) = A exp(-((x - c)/w)²).

In this case F(h) = A w √π exp(-(hw)²/4) exp(ich); it is an instance of a non-point-like scatterer which, however, is concentrated around the center and diminishes fast as the distance from it grows; in a certain sense it is an intermediate case between (i) and (iii).
While analyzing the data of coherent scattering of a beam from lipid membranes, we normally use the models of a continuous distribution of the scatterers: (ii) or (iv).
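A quick numerical check of these Fourier pairs (a sketch for illustration only) compares the closed forms above with direct numerical integration; here c = 0, so the transforms are real:

import numpy as np
from scipy.integrate import quad

A, a, w, h = 1.0, 0.7, 0.5, 3.0

# (ii) uniform:      F(h) = 2A sin(ah)/h
box = quad(lambda x: A * np.cos(h * x), -a, a)[0]
print(box, 2 * A * np.sin(a * h) / h)

# (iii) exponential: F(h) = 2Aa/(h^2 + a^2)
expo = quad(lambda x: A * np.exp(-a * abs(x)) * np.cos(h * x), -50, 50)[0]
print(expo, 2 * A * a / (h**2 + a**2))

# (iv) Gaussian:     F(h) = A w sqrt(pi) exp(-(hw)^2/4)
gauss = quad(lambda x: A * np.exp(-(x / w) ** 2) * np.cos(h * x), -50, 50)[0]
print(gauss, A * w * np.sqrt(np.pi) * np.exp(-(h * w) ** 2 / 4))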
The algorithm for the problem solution
A mathematically correct method was described in [3], but it can be used only in a
particular case.
In the general case, starting from papers [1], [2] and similar ones, the method for solving the problem of determining the scatterer density parameters is the following. From the experimental diffraction spectrum the background is subtracted; the residue is a sum of intensities which, after various corrections, represent the squared moduli of the structure factors F(h). With their help an expression is built:

G(x) = Σ_{h=0..n} F(h) cos(2π h x),   (3)

where x = -D/2 + iD/m belongs to the interval [-D/2, D/2]; m is an arbitrary number, and i = 0, 1, ..., m/2.
Here the hope that (3) will be a correct estimate of the density ρ(x) rests on the fact that it looks similar to an inverse Fourier transform.
Next, G(x) is fitted by a sum of functions modelling ρ(x) and depending on the sought parameters. Normally Gaussians are taken as such functions, each of which depends on 3 parameters: the amplitude A, the center C and the halfwidth W.
At best the method is rough and inexact; as a rule it is erroneous. The approach itself is mathematically deficient: using a sample of n numbers (the F(h)) an attempt is made to build estimates of 3n parameters A_i, C_i, W_i.
G(x) will be a somewhat reliable estimate of the density ρ(z) in a unique case only: when this density is a delta-function, i.e. in an ideal rather than a real case. The practical use of the method abounds in mathematical curiosities: non-unique and unstable solutions, immense errors of the parameters, etc.
Apparently, a mathematically correct method of the problem solution would be some
adaptation of the Rietveld method to the given case.
Models of the structure factor
Let us consider the function ρ(z) from the case (ii). For some range H, a set of Miller indices h, let a diffraction spectrum s(d) be registered which is a sum of Fourier transformants of the type (ii):

F(d, h) = Σ_{h=0..n} 2 A_h sin(a_h d)/d · exp(i c_h d).
We don't know the "natural" centers in s(d); therefore we make these centers fitted parameters, so that we can rewrite the parametric representation as follows:

F(d, h) = Σ_{h=0..n} A_h sin((d - U_h)/W_h)/(d - U_h) · exp(i (d - U_h)/C_h),

where the parameter C_h is the center of the scatterer density function, W_h and A_h are its halfwidth and amplitude, and U_h is the center (reflection order) of the interval which corresponds to the Miller index h.
We subtract the background from the spectrum s(d), make corrections for the Lorentz factor, for absorption, etc., and extract the square root of the residue, so that further on we will mean by the spectrum s(d) just this square root. The variance (in the Poisson case) is then a constant equal to 0.25; accordingly, the error at each channel d is 0.5, and with account of the square root of s(d) it will be equal to 1.
If the intervals of Miller indices for the different density components, characterized by the quantities C_h, do not overlap, we can analyze the scattering intensities at these intervals, with the centers at U_h, separately.
The estimates of the parameters A_h, C_h, W_h, U_h for each interval can be obtained with the help of the least-squares estimator (LSE), i.e. by the minimization with respect to the parameters of the expression

Σ_d (s(d) - |F(d, A_h, C_h, W_h, U_h)|)²,   (4)

where the sum runs over the m channels d of the interval.
The function |F(d, h)| is differentiable everywhere in the parameter space except at the points where it is equal to zero; accordingly, care should be taken that such a situation does not occur. Besides, the minimization of (4) should be made under the restrictions |A_h| > 0, W_h > 0 and U_h > 0 (to avoid the degeneration of the LSE matrix).
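A minimal sketch of such a constrained least-squares fit for a single interval, using SciPy (illustrative only: the model below is the magnitude of a (ii)-type structure factor for one reflection, the starting values are invented, and the noise is set to 1 as in the text above):

import numpy as np
from scipy.optimize import least_squares

def model(d, A, C, W, U):
    """|F(d)| for one (ii)-type reflection; eps avoids the zero at d = U."""
    x = d - U
    x = np.where(np.abs(x) < 1e-9, 1e-9, x)
    return np.abs(A * np.sin(x / W) / x * np.cos(x / C))

d = np.arange(1.0, 129.0)        # channel numbers of the interval
s = model(d, 40.0, 25.0, 12.0, 64.0) + np.random.normal(0.0, 1.0, d.size)

def residuals(p):
    return s - model(d, *p)

# The bounds implement the restrictions A > 0, W > 0, U > 0.
fit = least_squares(residuals, x0=[30.0, 20.0, 10.0, 60.0],
                    bounds=([1e-6, 1e-6, 1e-6, 1e-6], np.inf))
print(fit.x)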
Crystallographic restrictions should be imposed on the parameters C_h. In the reciprocal space of the one-dimensional membrane structure the reflection planes are families of points with the coordinates C_h + i/D, where h are Miller indices and {i} are all integer numbers; and C_1 = 1/D, C_h = C_1/h (the formula of the interplanar distance for the one-dimensional cubic symmetries).
From this we get the restrictions on the parameters C_h: C_h = C_1/h, h = 1, 2, 3, ...
Thus, we have an interval of Miller indices h ∈ H with the center U_h where the spectrum s(d) was measured. For simplicity we can assume that in each interval the quantities d vary between 1 and some m; the difference between such quantities from different intervals is accounted for by the correction D_h, i.e. d of the h-th interval is shifted to the 1st interval by D_h; in other words, its center is U_h = U_1 + D_h.
This follows from the fact that if in (1) we make the replacement h → h + D, this multiplies the integral in (1) by exp(ihD) and so shifts the parameter U: U → U + D.
Let us write the function F(d, h) for the LSE in the following form:

F(d, h) = A sin(q(d - U)/W)/(d - U) · exp(i q(d - U)/C/2),

here q = 2π/(m + 1).
structure. To make the consideration comlete, let us take also the centrosymmetrical case.
Let the graph of the function (x) look as follows
Fig. 3. The centrosymmetrical case
It is the case (Uu). We have
F(h) =A
}
+
a
w a ) (
exp(-ihx)dx + A
}
+w a
a
exp(-ihx)dx \ =
-A/ (ih)(exp(iha))-exp(ih(a+w))+ exp(-ih(a+w))- exp(-iha)).
Let us make 4 groups of exponentials in the following way
305
exp(ih(a+w/2))(exp(-ih(w/2)-exp(ih(w/2)) + exp(-ih(a+w/2))(exp(-ih(w/2)-exp(ih(w/2))).
Making use of the formulae 2sin(t)=(exp(-it)-exp(it))/i, 2cos(t)=exp(-it)+exp(it) and
the introduced parametrization, we get finally
F(h) = 2Asin(q(h-U)W/2)/(h-U)cos(qhC/2),
here q turns to q=2t /(2m+1).
Similarly, for the case (iv) we have (the non-centrosymmetric variant)

F(h) = A exp(-((h - U)/W)²) exp(i(h - U)/C),

where the parameters A, C, W, U have the same meaning as in the case (ii). For the fitting in the centrosymmetric case we write this expression as follows:

F(h) = 2A exp(-((h - U)/W)²) cos(q(h - U)/C/2),

and q = 2π/(2m + 1).
If the fitting is successful, the expressions A_h sin((x - C_h)/W_h)/(x - C_h) and A exp(-((x - C_h)/W_h)²) will be the sought densities of the scattering masses.
Figs. 4 and 5 show graphs of a diffraction spectrum with 5 peaks and the results of its fitting by the models (ii) and (iv).
Comparing the results of the peak analysis by the different models, we can conclude that the hypothesis that the scatterer density is described by the model (ii) is more plausible than its alternative. Meanwhile the standard method (fitting the cosine sum) uses the Gaussian model, which is obviously inadequate to the data, and this increases the inaccuracy and unreliability of its estimates.
Fig. 4. The diffraction spectrum of a multi-layer lipid membrane with 5 peaks and its fitting by the Gaussian (iv) model of the scatterer density. The dotted line is the initial spectrum, and the thick line is its fitting.

Fig. 5. The same spectrum and its fitting by the (ii) density model. The χ² per degree of freedom is significantly greater than in the former case.
Conclusion
The described approach represents a regularization of an ill-conditioned problem, whose solution will be stable with respect both to the statistical errors and to small variations of the initial data. Within its framework the anecdotal situation in which one tries to estimate 15 parameters on the basis of 5 numbers becomes impossible.
References
[1] G. Zaccai, J.K. Blasie & B.P. Schoenborn. Proc. Natl. Acad. Sci. USA 72, 1975, pp. 376-380.
[2] G. Zaccai, G. Büldt, A. Seelig & J. Seelig. J. Mol. Biol. 134, 1979, pp. 693-706.
[3] V.I. Gordeliy and N.I. Chernov. Acta Cryst. D53, 1997, pp. 377-384.
The distributed subsystem to control parameters of the Nuclotron extracted beam
E.V. Gorbachev, N.I. Lebedev, N.V. Pilyar, S.V. Romanov, T.V. Rukoyatkina, V.I. Volkov
JINR, Dubna, Russia
The distributed control subsystem is intended to solve 3 tasks: to measure the extracted beam spatial characteristics at 4 points of the transport channel by means of proportional chambers and to view the results remotely; to measure the beam intensity using an ionization chamber; and to control the gains of the proportional chambers via their HV power supplies.
The equipment for the preliminary registration of the signals from the detectors is located on the beam transport channel. The server computer and the data acquisition modules are placed in the accelerator control room, at ~400 m from the detectors. The results are shown on the users' computers, one of which is in the central accelerator control room.
The subsystem is based on the multifunctional modules NI USB-6259 BNC, NI PCI-6703 and SCB-68 produced by National Instruments, HV power supplies N1130-4 produced by WENZEL, and a DDPCA-300 I/U converter with transimpedance from 10^4 to 10^13 V/A.
The client-server software was written in LabVIEW by National Instruments for Windows. The subsystem was tested on the beam in the March 2011 run of the Nuclotron.
The problem definition and the basics of its solution
A project to construct the new experimental complex NICA/MPD on the basis of the modernized Nuclotron-M is now under active development at JINR. While the Nuclotron equipment is being modernized, there is a task to develop and test up-to-date data acquisition and control systems during accelerator runs without damaging the functionality of the operating subsystems. The usual consumer properties must be preserved while improving the graphical user interface, the operation speed and the stability.
The fundamental principles of the control system organization are as follows:
the client-server distributed model of data interchange;
the use of TCP transport protocol sockets;
the stability of the server/equipment interoperation;
the stability of client operation over a long accelerator run, which is achieved by reconnecting every socket connection.
The wide range of electronics and software produced by National Instruments is quite sufficient to solve the above tasks. The software is based on the .NET Framework, the NI-DAQmx universal driver set and powerful graphics. The injection subsystem [1] was made on the same principles, and the experience of using it in the December and March 2011 runs of the Nuclotron has confirmed the correctness of this approach.
The subsystem structure
The extracted beam subsystem is intended to solve 3 tasks (Fig. 1):
to measure the extracted beam spatial characteristics at 4 points of the transport channel by means of proportional chambers (NI USB-6259);
to control the gains of the proportional chambers by means of their HV power supply control:
  o voltage management: NI PCI-6703, SCB-68;
  o power supplies: WENZEL N1130-4 with 4 HV channels of 0.1-6 kV;
  o HV voltage measurement: NI USB-6259;
to measure and sum the beam intensity using the ionization chamber and the DDPCA-300 I/U converter (with control of its gain and voltage-to-frequency conversion of its output to transfer the data into the integrating counter of the NI USB-6259).
The subsystem makes it possible to control and measure the HV power supplies and to control the beam intensity parameters either locally on the Server or remotely on a Client. Only one Client may control the HV; the write access permission on the Server computer must be fixed for this IP address only. That is why we have additionally used the specialized server DSS (DataSocket Server) from the LabVIEW base packet. It starts on the server computer and, besides fixing the server access permissions, implements the low-level DSTP transport protocol operations, which simplifies the task of organizing simultaneous interchange over 9 socket connections in one cycle of the Server program. The publishing and subscribing of the data, located in the DSS URL-named memory, are performed via a simple low-level interface of the DataSocket Read and Write functions; these functions handle the low-level TCP/IP programming for us. Due to the DSS the Clients become light Clients: their executables contain only the NI Runtime Engine and the NI DataSocket API for the DSS. The Clients have time to present each new portion of data with a measurement time stamp and to selectively store and browse the data according to the operator's wishes. In addition, we have used an independent XControl for each of the 3 above tasks [2, 3]; this further increases the Client speed, since every XControl has its own event-processing structure.
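The reconnect-per-socket idea can be illustrated with a small sketch (plain Python rather than LabVIEW, with an invented host and port; the real subsystem uses the NI DataSocket API): each subscriber loop re-opens its connection whenever a read fails, so a long run survives transient network errors.

import socket
import time

DSS_ADDR = ("dss-server.local", 3015)   # hypothetical DSS host and port

def subscriber_loop(handle_packet):
    """Read packets forever, re-opening the connection after any failure."""
    sock = None
    while True:
        try:
            if sock is None:
                sock = socket.create_connection(DSS_ADDR, timeout=20.0)
                sock.settimeout(0.05)       # 50 ms read timeout, as in the Client
            data = sock.recv(4096)
            if not data:
                raise ConnectionError("server closed the connection")
            handle_packet(data)
        except socket.timeout:
            continue                         # no new data in this pass
        except OSError:
            if sock is not None:
                sock.close()
            sock = None                      # force a reconnect on the next pass
            time.sleep(0.4)                  # the Client cycle period, ~400 ms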
Fig. 1. The Client-Server subsystem structure
The Server program structure
The Server program is intended to control, measure and send the results to the Clients. It takes the session-specific settings (socket URL names, chamber and module channel parameters) from an XML file and starts to operate. After that it opens 9 socket connections on the DSS server; every socket connection uses its own unique URL name to address its data memory field on the DSS. The Server then signals over the network that it is in operation and sends to all Clients the Server Remote-enable signal flag and the HV values set on the power supplies in the previous control session.
The application consists of 2 continuously working While loops and a Timed Loop. In the first While loop, permanent measurements of the beam spatial characteristics by means of the proportional chambers (MWD profile-meters) are performed and transferred to the Clients, as well as measurements of the voltage supply sources of the profile-meters, using the NI USB-6259 module. The second loop reads the remote control signal recordings, on the basis of which the following question is answered: who is in control, the Server or the controlling Client. If there is a controlling Client in the network, the Client controls; if not, the Server does. The Timed Loop is intended to start the beam intensity measurement at a precisely given moment of time: a 32-bit counter of the NI USB-6259 module counts the number of pulses of the voltage-to-frequency converter between the signals of the beginning and the end of the slow extraction plateau. At the moment when the count is over, the counter value is read out, summed and sent to the Clients together with the other parameters of the beam intensity measurements and calculations. If there is a remote reset command, this loop resets the summed intensity to 0.
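A sketch of this Timed Loop bookkeeping (pure Python, hardware calls omitted): the counter value accumulated during the slow-extraction plateau is converted back into ionization-chamber charge using the V/F factor and the DDPCA-300 transimpedance. Both constants below are placeholders, and translating the charge into a particle number requires a chamber calibration that the paper does not give.

VF_HZ_PER_V = 1.0e5       # hypothetical V/F converter factor, Hz per volt
TRANSIMPEDANCE = 1.0e8    # DDPCA-300 gain selected in the run, V/A

total_charge = 0.0        # summed ionization-chamber charge, C

def on_plateau_end(counts: int, gate_s: float) -> float:
    """Convert the gated pulse count into collected charge and add to the sum."""
    global total_charge
    mean_voltage = counts / gate_s / VF_HZ_PER_V      # average converter input, V
    charge = mean_voltage / TRANSIMPEDANCE * gate_s   # integrated current, C
    total_charge += charge
    return charge

def on_remote_reset() -> None:
    """Handle the remote reset command: zero the summed intensity."""
    global total_charge
    total_charge = 0.0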
Fig. 2. The Client-Server subsystem block diagram
The Client program structure
The Client application is intended for on-line and off-line processing and browsing of the remote measurement results and for control. If the Remote flag enabled by the Server permits this Client to do so, the Client application may control.
The Client application consists of two program loops: an Event structure and a continuously working While loop. The first loop treats, every 300 ms, the events from all the buttons, sliders and drop-down lists of the application. Joining the DSS over the transport protocol, the working loop obtains 9 client connections; all 9 connections open simultaneously (TimeOut = 20 s). Inside the working loop the data are treated consecutively in 3 stages (a, b, c):
a. Reconstruction of each connection, if it is lost, is performed with a preliminary closing of the lost connection. The read-out of new data from the socket is performed if the connection is normal.
b. The choice of the new data array from the profile-meters (Work regime) or of a previous one (FileOpen regime) for calculation and display, or its saving into a file (FileSave/FileSave as regimes).
c. Processing and display of the data array in the Work and FileOpen regimes.
First, for the 32 measurements, the program calculates 32 beam profile shots by the method of least squares, taking into account the chosen distance between the wires in each profile-meter. Then the averaged integral beam profile is calculated for each profile-meter. After that, for each chosen profile-meter, the 32 beam profile shots and the integral beam profile are displayed on the Profiles screen tab for the X and Y axes. The Integral profiles tab shows the integral profiles of all 4 profile-meters simultaneously.
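A sketch of the profile arithmetic on the Client side (NumPy instead of LabVIEW; a 30-wire chamber with a placeholder pitch is assumed, and simple moment estimates stand in for the program's least-squares fit):

import numpy as np

PITCH_MM = 2.0                      # hypothetical distance between the wires

def integral_profile(shots: np.ndarray):
    """shots: (32, n_wires) array of wire charges, one row per profile shot.
    Returns the averaged integral profile and its center/width estimates."""
    profile = shots.mean(axis=0)               # averaged integral beam profile
    x = np.arange(profile.size) * PITCH_MM     # wire coordinates, mm
    weights = np.clip(profile, 0.0, None)
    center = np.average(x, weights=weights)    # first moment: beam center
    width = np.sqrt(np.average((x - center) ** 2, weights=weights))
    return profile, center, width

shots = np.random.poisson(20, size=(32, 30)).astype(float)
shots += 200.0 * np.exp(-0.5 * ((np.arange(30) - 14.0) / 3.0) ** 2)  # fake beam
profile, center, width = integral_profile(shots)
print("center = %.1f mm, rms width = %.1f mm" % (center, width))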
Fig. 3. The beam tests of the Client-Server subsystem to control extracted beam parameters (21.03.2011 16:55). The Client window tabs: a) Integral profiles, b) Profiles, c) Intensity, d) HV supply control
Every 400 ms the working cycle (stage a) looks through the 9 socket connections (TimeOut = 50 ms). The number of the current Client cycle informs the Server that the Client is in the network; in its turn, the Client monitors the renewal of the Server working cycle. When the Server is not accessible in the network, only off-line browsing of the saved data is available to the Client; the Client works on-line if the Server is available. At the moment when the NI USB-6259 module (TimeOut = 1000 ms) has completed the measurements, the Client reads out a two-dimensional data array of the profile-meters with the time stamp from the socket, saves it, shows everything on the screen tabs and signals the readiness of the data. Then the actions of the Client application are determined by the control over the beam intensity parameters and the high-voltage supplies:
If the Client is allowed to control remotely, the parameters chosen in this Client are established as the Server's parameters, recorded into the modules and sent to the other Clients.
If not, the variable values are established by the Server and sent to all the Clients.
When the measurements of the beam intensity are completed by the Server, the Clients read out the beam intensity value together with the parameters of the intensity measurements and calculations.
The distributed subsystem beam tests
In the March 2011 run the subsystem was tested on the extracted beam (Fig. 2) with 3 profile-meters and two Clients. Fig. 3 shows the screen tabs of the Client window. Fig. 3a illustrates the integral profiles of the 3 profile-meters: PIK-1, PIK-2 and PIK-3. Fig. 3b gives 32 measurements of the shot profiles in PIK-2 and their integral profile. Each profile-meter has its own color used to present its data on both screen tabs. Fig. 3c shows the Intensity screen tab. Remote control is forbidden in this case (the lamp is not lit), so the measurements are performed with the values established by the Server. The amplification coefficient of the DDPCA-300 is equal to 1E+8; 1584 measurement cycles were performed. Since the screen tab with the integral profile values must be watched permanently, the intensity values are also given below all the screen tabs. Fig. 3d, the HV supply control screen tab, shows that the control sliders are not accessible to the user because remote control is forbidden (the lamp is not lit).
Conclusion
The distributed subsystem to control the parameters of the Nuclotron extracted beam was made with client-server technology in LabVIEW by National Instruments. The subsystem test on the beam in the March 2011 run of the Nuclotron confirmed the correct choice of the hardware and software and showed stable Server interaction with several Clients during 24 hours. The successfully tested DSS server and XControl approach is planned to be used in the Nuclotron injection subsystem, developed by us earlier and already operated in 2 runs of the accelerator.
Literature
1. E.V. Gorbachev et al. Dubna: JINR, 2010, pp. 144-149.
2. http://www.ni.com
3. http://zone.ni.com/devzone/
INDEX of REPORTERS
Akishina Valentina JINR 10
Antchev Gueorgui INRNE-BAS,Switzerland Gueorgui.Antchev@cern.ch 29
Atkin Eduard MEPhI, Moscow, Russia atkin@eldep.mephi.ru 77
Balabanski Dimiter INRNE-BAS, Bulgaria balabanski@inrne.bas.bg 42
Barberis Dario University of Genova, Italy Dario.Barberis@cern.ch 52
Bazarov Rustam IMIT, AS Uzbekistan rustam.bazarov@gmail.com 64
Belov Sergey JINR belov@jinr.ru 68,74
Bezbakh Andrey JINR Delphin.silence@gmail.com 242
Chernenko Sergey JINR chernenko@jinr.ru 296
Dannheim Dominik CERN dominik.dannheim@cern.ch 100
Derenovskaya Olga JINR odenisova@jinr.ru 107
Dimitrov Lubomir INRNE-BAS, Bulgaria ludim@inrne.bas.bg 112,191
Dimitrov Vladimir University of Sofia, Bulgaria cht@fmi.uni-sofia.bg 115
Dolbilov Andrey JINR dolbilov@jinr.ru 20
Elizbarashvili Archil SU, Tbilisi, Georgia archil.elizbarashvili@tsu.ge 122
Farcas Felix NIRDIMT, Romania felix@itim-cj.ro 259
Filozova Irina JINR fia@jinr.ru 132
Garelli Nicoletta CERN nicoletta.garelli@cern.ch 138
Golunov Alexander JINR agolunov@mail.ru 154
Gorbunov Ilia JINR ingorbunov@gmail.com 145
Gorbunov Nikolay JINR gorbunov@jinr.ru 154
Grebenyuk Victor JINR greben@jinr.ru 60,86
Isadov Victor JINR brahman63@mail.ru 16
Ismayilov Ali IP, Azerbaijan alismayilov@gmail.com 8
Ivanoaica Teodor NIPNE (IFIN-HH), Romania iteodor@nipne.ro 163
Ivanov Victor JINR ivanov@jinr.ru 20
Kalinin Anatoly JINR kalinin@nusun.jinr.ru 90
Kirilov Andrey JINR akirilov@nf.jinr.ru 169,174,236
Korenkov Vladimir JINR korenkov@cv.jinr.ru 20,68,145,148,154
Kouba Tomas IP, AS Czech Republic koubat@fzu.cz 94
Kreuzer Peter RWTH Aachen, Germany/CERN Peter.Kreuzer@cern.ch 179
Lebedev Nikolay JINR nilebedev@gmail.com 158
Lyublev Y. ITEP, Russia lublev@itep.ru 186
Mitev Georgi INRNE-BAS, Bulgaria gmmitev@gmail.com 191,264
Mitev Mityo TU, Sofia, Bulgaria mitev@ecad.tu-sofia.bg 264
Murashkevich Svetlana JINR svetlana@nf.jinr.ru 174
Nikiforov Alexander JINR, MSU nikif@inbox.ru 60
Osipov Dmitry MEPhI, Moscow, Russia DLOsipov@MEPHI.RU 77
Petrova Petia ISER-BAS, Bulgaria Petia.Petrova@cern.ch 196
Polyakov Aleksandr JINR polyakov@sungns.jinr.ru 281,286
Prmantayeva Bekzat ENU, Kazakhstan Prmantayeva_BA@enu.kz 200
Ratnikov Fedor IT, Karlsruhe, Germany fedor.ratnikov@kit.edu 206
Ratnikova Natalia IT, Karlsruhe, Germany ratnik@ekp.uni-karlsruhe.de 212
Rukoyatkina Tatiana JINR rukoyt@susne.jinr.ru 158
Schovancova Jaroslava IP, ASCR, Czech Republic jschovan@cern.ch 219
Sedykh George JINR eg0r@bk.ru 223
Shapovalov Andrey MEPhI, Moscow, Russia andrey.shapovalov@desy.de 227
Sidorchuk Sergey JINR sid@nrmail.jinr.ru 242
Sirotin Alexander JINR sirotin@nf.jinr.ru 236
Slepnev Roman JINR roman@nrmail.jinr.ru 242
Strizh Tatyana JINR strizh@jinr.ru 68
Svistunov Sergiy ITP, Ukraine svistunov@bitp.kiev.ua 246
Tarasov Vladimir JINR vtarasov51@mail.ru 158
Tikhonenko Elena JINR eat@cv.jinr.ru 68,148
Tleulessova Indira ENU, Kazakhstan indira.t.nph@gmail.com 200
Tomskova Anna IM & ICT Uzbekistan tomskovaanna@gmail.com 253
Trusca M.R.C. NIRD of IMT, Romania Radu.Trusca@itim-cj.ro 259
Tsaregorodtsev Andrei CPP, Marseille, France atsareg@in2p3.fr 269
Tsyganov Yury JINR tyura@sungns.jinr.ru 278,281,286
Tutunnikov Sergey JINR tsi@sunse.jinr.ru 223
Voinov Alexey JINR voinov@sungns.jinr.ru 281,286
Zager Valery JINR valery@jinr.ru 292
Zhiltsov Victor JINR zhiltsov@jinr.ru 68,148
Zlokazov Victor JINR zlokazov@jinr.ru 281,300