References: ................................................................ 54
References: ............................................................... 106
MODULE 6 – RISK ANALYSIS IN SECURED SYSTEMS ............................... 181
2. Tutorial on Java Smart-Card Electronic Wallet Application .............. 259
2.1 Introduction .......................................................... 259
2.2 Complete applications for Java smart cards ............................ 260
2.3 Elements involved in the development and life cycle of a Java Card applet ... 265
2.4 Practical sample of developing a Java Card applet ..................... 270
2.5 Conclusions ........................................................... 277
References: ............................................................... 277
1.4 Architectures for ensuring database security .......................... 348
References: ............................................................... 349
INDEX ..................................................................... 409
The Ministry of Education and Research
Foreword

The Academy of Economic Studies Foreword
The Rector of
the Academy of Economic Studies, Bucharest

The Economic Informatics Department Foreword
The Head of
the Economic Informatics Department

The Informatics Security Master Foreword
This handbook is the result of one year's experience with the
Informatics Security Master Program. The topics presented in this
handbook follow the master program's curricula and come from various
informatics security subject areas, such as: cryptography, computer
networks, mobile and smart card applications, programming, databases,
intelligent agents and operating systems security. The productive
collaboration between the professors taking part in the Informatics
Security Master Program is essential to its performance and contributed
in a decisive manner to the appearance of this handbook.
The Head of
the Informatics Security Master Program
Cryptography Basis
Module 1 – Cryptography Basis
1. Introduction to Cryptography
Ion IVAN, Cristian TOMA
Abstract: This paper briefly presents the basic notions and concepts
used in cryptography. The paper is structured in two main parts. The
first part explains the difference between cryptography, cryptanalysis
and cryptology. The second part is a brief synthesis of random number
generators, the ancestors of cryptographic algorithms, hash functions,
and symmetric and asymmetric cryptosystems. The conclusions announce
that each important part of this paper will be detailed in a separate
paper included in this module.
Cryptography is the science of secret writing. Cryptanalysis is the
art or science of "breaking" cipher texts without knowing the key used
for decryption. Those who practice cryptography are called
cryptographers and those who practice cryptanalysis are called
cryptanalysts. Nowadays cryptography is used for securing messages, for
certification, and for the services and mechanisms used in electronic
equipment networks. Cryptology is the branch of mathematics that
studies the mathematical foundations of cryptographic methods.
A cipher is defined as the transformation of a clear array of bytes
(the plaintext) into a ciphered array of bytes, or cryptogram.
There are two types of cryptographic systems: symmetric and
asymmetric. Symmetric cryptographic systems (with secret key) use the
same key both for encrypting and for decrypting messages. Asymmetric
cryptographic systems (with public key) use different keys for
encrypting and decrypting (but related one to the other). One of the
keys is kept secret and known only by its owner; the second key (its
pair) is made public. The cryptographic algorithms (ciphers) used in
symmetric cryptographic systems are divided into stream ciphers and
block ciphers. Stream ciphers encrypt a single plaintext bit at a time,
while block ciphers encrypt several bits (64 or 128 bits) at a time.
Modern cryptography operates with an entire series of theoretical
elements such as: random number generators, hash functions, symmetric
and asymmetric encryption algorithms, multiple encryption systems,
encryption models, digital signatures and more, as presented in the
following paragraphs.
In the best case, random numbers are generated by "physical sources
that start events at random", events that cannot be anticipated. Such
sources include the noise of semiconductor devices, the least
significant bits of an audio channel, and the intervals between
hardware interrupts or between the keystrokes typed by a user over a
period of time. The noise so obtained from a physical source is then
"diluted" through a hash function in order to make each output bit
depend on each input bit. Often a large number of bits (thousands of
bits) contains the random event, and keys are created from these bits.
When a physical random event is not available, pseudo-random numbers
are used. This is not a desirable situation. The ideal is to obtain
random numbers from environment noise – statistics concerning the use
of resources, statistics concerning network usage.
Pseudo-random number generators for cryptographic applications use a
large number of bits (the "seed value") that contain the random event.
Although it is not very hard to design random and pseudo-random number
generators by cryptographic methods, often such generators are not
created properly. The importance of a random number generator is
underlined when it does not generate random numbers but numbers
belonging to statistical series, the generator thus becoming the weak
link of the cryptographic system.
The best known random number generator is Yarrow, designed at the
Counterpane labs.
Below are presented the principles of pseudo-random number sequences,
an element often used in designing pseudo-random number generators.
A string of real numbers {Un} = U0, U1, …, Un with 0 <= Un <= 1 is
called a sequence of random numbers if its elements are chosen at
random. The objection raised against such generated number strings is
that each number is completely determined by its predecessor. Strings
generated in this deterministic way are called in the literature
pseudo-random sequences, meaning that, in fact, they are random numbers
only apparently. Generating long random sequences proved to be an
extremely difficult process. The danger is that the string degenerates
and tends to become trapped in cycles around some elements.
In practice, some of the most used generation methods are: the linear
congruential method, the additive congruential method, the
multiplicative congruential method, the generation method with shift
registers, block feedback generators and counter generators.
According to the linear congruential method, a string {Xn} is obtained
using the recurrence relation: Xn+1 = (a*Xn + c) mod m. The numbers m,
a, c, X0 (also called magic numbers) are: m, the modulus, m > 0; a, the
multiplier, 0 <= a < m; c, the increment, 0 <= c < m; X0, the initial
term, 0 <= X0 < m. The real number string {Un} has a uniform
distribution on a bounded set if all the numbers are equally probable.
A linear congruential string will enter a loop: there is a final cycle
of numbers that repeats infinitely. This characteristic is common to
all sequences of type Xn+1 = f(Xn). The cycle that repeats is called
the period. A sufficiently random string will have a relatively long
period.
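The linear congruential recurrence can be sketched in a few lines of Python; the magic numbers m = 16, a = 5, c = 3, X0 = 7 are illustrative values chosen here, not taken from the text.

```python
from itertools import islice

def lcg(m, a, c, x0):
    # Generate the sequence X_{n+1} = (a*X_n + c) mod m.
    x = x0
    while True:
        x = (a * x + c) % m
        yield x

# Illustrative magic numbers (not from the text): m=16, a=5, c=3, X0=7.
seq = list(islice(lcg(16, 5, 3, 7), 8))
```

With these particular parameters the generator attains the maximal period m = 16 before looping; poorly chosen magic numbers give much shorter periods, illustrating why the period matters.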
Generators based on linear feedback shift registers presume the
existence of a register R = (rn, rn-1, …, r1) and a tap sequence
T = (tn, tn-1, …, t1), where the ti and ri are bits. At each step the
bit r1 is added to the key stream, the bits rn, …, r2 are shifted one
position to the right, and a new bit, derived from T and R, is inserted
in position rn. If R' = (r'n, …, r'1) is the new state of the register
R, then: r'i = ri+1, for i = 1, …, n-1; and
r'n = T*R = t1*r1 + … + tn*rn, where * means bit-by-bit multiplication
(AND) and + means bit-by-bit addition modulo 2 (xor, or exclusive-or).
[Figure: a shift register R with tap bits tn, tn-1, …, t1.]
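A minimal sketch of such a linear feedback shift register in Python; the 4-bit register and the tap sequence (feedback r1 xor r4, a maximal-length choice) are illustrative, not taken from the text.

```python
def lfsr_step(r, taps):
    # r and taps are bit lists; index 0 holds r1 (the output end).
    out = r[0]                    # r1 joins the key stream
    fb = 0
    for t, b in zip(taps, r):     # feedback = t1*r1 + ... + tn*rn (mod 2)
        fb ^= t & b
    return r[1:] + [fb], out      # shift and insert the feedback at rn

# Illustrative 4-bit register with taps on r1 and r4 (maximal length).
state = [1, 0, 0, 0]
taps = [1, 0, 0, 1]
r, period = state[:], 0
while True:
    r, _ = lfsr_step(r, taps)
    period += 1
    if r == state:
        break
```

With well-chosen taps an n-bit register cycles through all 2^n - 1 nonzero states before repeating; here the state returns after 15 steps.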
The OTP cipher (one-time pad) is the only one that has been proved
secure, and so far, in practice, it has never been broken. It is also
proved that the best ciphers are of OTP type.
The Vernam cipher (invented by G. Vernam in 1917) is a good example of
an OTP. The cipher is very simple: take the bits of the clear message
(the plaintext) and a string of secret random bits as long as the
message (the cipher key), then execute an EXCLUSIVE-OR (XOR) between
the plain message and the key; the result is the encrypted message.
Applying the same key to the encrypted message, again using the bitwise
XOR operation, recovers the original message. The problem with this
cipher is that generating keys as long as the message is difficult and
costly. The security problem is passed to the exchange process
(transmission between partners) for these extremely large keys (as
large as the plaintext, meaning the security problem no longer concerns
the message but the key, which is itself as large as a message).
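The Vernam scheme can be sketched directly with Python byte strings; os.urandom stands in here for the "truly random" pad, and the message is an invented example.

```python
import os

def vernam(data: bytes, pad: bytes) -> bytes:
    # Bitwise XOR of each message byte with the corresponding pad byte.
    assert len(pad) == len(data), "the pad must be as long as the message"
    return bytes(m ^ k for m, k in zip(data, pad))

message = b"ATTACK AT DAWN"        # example plaintext
pad = os.urandom(len(message))     # secret random key, used only once
cipher = vernam(message, pad)      # encryption
plain = vernam(cipher, pad)        # the same operation decrypts
```

Because XOR is its own inverse, one function serves both directions.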
The Fish cipher was used by the German army during the Second World
War for encoding the communications between high commands. Fish was the
name given by the British cryptanalysts; it was a stream cipher
produced by the Lorenz machine. The British built a machine named
Colossus to break the cipher, a machine that was one of the first
digital computers.
The Enigma cipher was another cipher used by the German army during
the Second World War. The machine producing the cipher had several key
wheels, and it looked like a typewriter. This cipher was used for the
communications between German submarines. Some Unix machines use
variants of this cipher.
The Vigenere cipher uses multi-alphabetic substitution to combine the
key and the message. The difference between an OTP and Vigenere is that
the latter reuses a short key several times to encrypt a message. The
Kasiski test is used to attack Vigenere ciphers. The same principle is
the basis of the Hill cipher.
Until the '70s, all ciphers were symmetric, with secret keys.
A cryptographic hash function H must satisfy, among others, the
following properties:
given M, it is hard (even impossible) to find another message M' such
that H(M) = H(M');
it is infeasible to find two random messages such that H(M) = H(M'), a
property called collision resistance.
The most used hash algorithms are MD5 and SHA-1. The MD5 algorithm is
presented in more detail in an article of this module.
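Both digests are available in Python's standard hashlib module; the input b"abc" used below is the classic test-vector message.

```python
import hashlib

# MD5 produces a 128-bit digest, SHA-1 a 160-bit digest,
# both printable as hexadecimal strings.
md5_digest = hashlib.md5(b"abc").hexdigest()    # 32 hex chars = 128 bits
sha1_digest = hashlib.sha1(b"abc").hexdigest()  # 40 hex chars = 160 bits
```

Any change of a single input bit produces a completely different digest, which is what makes finding collisions hard.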
[Figure: symmetric encryption – sender A encrypts the message M with
the key K into the cryptogram C; receiver B decrypts C with the same
key K. M: message; C: cryptogram; K: key.]
The following algorithms, the best known of the "64-bit era", are:
DES – Data Encryption Standard (with its versions DESX, GDES, 3DES);
IDEA – International Data Encryption Algorithm (initially called PES –
Proposed Encryption Standard);
FEAL – the Japanese Fast Data Encryption Algorithm (FEAL-1, FEAL-4,
FEAL-8);
LOKI – an Australian symmetric cipher (LOKI89, LOKI91);
RC2 – Rivest Cipher 2.
The following algorithms, the best known of the "128-bit era", are:
Rijndael – finalist and winner of the AES competition;
Twofish – finalist in the AES competition;
Serpent – finalist in the AES competition;
RC6 – Rivest Cipher 6 – finalist in the AES competition;
others: Blowfish, DEAL, SAFER+, FROG, CAST-256, Magenta, Skipjack.
Asymmetric ciphers (with public keys) use distinct keys for encryption
and decryption (but dependent one on another). Because it is
computationally infeasible to deduce one key from the other, one of the
keys is made public, available to anyone who wants to send an encrypted
message. Only the receiver, who has the second key, may decrypt and use
the message. The technique of public keys is also used for the digital
(electronic) signature, which has made public-key cryptographic systems
even more popular. The use pattern of an asymmetric crypto-system is
reproduced in figure 1.3.
[Figure content: Sample 1 – Confidentiality: anyone can encrypt the
message M for B with B's public key, producing the cryptogram C; only B
can decrypt it with B's private key. Sample 2 – Authentication (digital
signature): A "signs" – authenticates – the message M with A's private
key; B verifies it with A's public key. M: message; C: cryptogram;
K: key.]
Fig. 1.3. Public-key encryption system used for the confidentiality or
certification of a message M between sender A and receiver B
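Both usage patterns of figure 1.3 can be illustrated with a toy RSA key pair; the small primes 61 and 53 and the exponent 17 are a standard textbook example, far too small for real use.

```python
# Toy RSA key generation (illustration only; real moduli have
# hundreds of digits).
p, q = 61, 53
n = p * q                    # public modulus, 3233
phi = (p - 1) * (q - 1)      # 3120
e = 17                       # public exponent, coprime with phi
d = pow(e, -1, phi)          # private exponent: e*d = 1 (mod phi)

m = 65                       # the message, encoded as a number < n

# Confidentiality: anyone encrypts for B, only B decrypts.
c = pow(m, e, n)
recovered = pow(c, d, n)

# Authentication: A signs with the private key, anyone verifies.
signature = pow(m, d, n)
verified = pow(signature, e, n)
```

The same pair of operations serves both samples; only the order in which the public and private exponents are applied changes.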
In connection with asymmetric ciphers, the following terms are often
encountered:
Computational complexity is the complexity of an algorithm and may be
polynomial (for instance O(n^2)) or exponential. If a guessed solution
for a problem may be verified in polynomial time (a bounded number of
steps), then the problem is NP (non-deterministic polynomial time).
Prime numbers. A prime number is a number that has no divisors except
1 and itself. Examples of prime integers are: 2, 3, 5, 7, 11, and so
on. A number a is relatively prime with a number b if the greatest
common divisor of a and b is 1.
Factorization. By this procedure it is shown that any integer may be
expressed as a product of unique prime numbers (powers of a prime
number are accepted, so that it should be a product of unique prime
numbers). For instance 10 = 2*5, and 12 = (2^2)*3.
Discrete logarithms are the same as those defined for the Rijndael
algorithm with symmetric keys (previous paragraphs) for defining the
multiplication in the discrete (finite) field GF(2^8), programmed as
the addition (bitwise XOR, or exclusive-or) of logarithms in the
discrete (finite) field.
The knapsack problem. With the help of this problem different types of
asymmetric ciphers are designed. Given a small set of integers and an
integer A, determine a subset of the given set such that A is the sum
of that subset. For instance, given A = 10 and the set (2, 3, 5, 7),
the subset (2, 3, 5) is found, so that A = 10 = 2 + 3 + 5.
Lattice calculation. Given basis vectors Wi = <w1, …, wn>, with
integer (or real, or even object) elements wj, where n is the vector
dimension, 1 <= j <= n, and m the number of vectors that determine the
lattice, 1 <= i <= m. The elements of the lattice are of the form
t1*W1 + t2*W2 + … + tm*Wm, where the ti are integers, the Wi vectors,
* the multiplication of a scalar with a vector and + the vector
addition.
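The knapsack example from the list above can be checked by brute force; note that the subset need not be unique (for A = 10 both (2, 3, 5) and (3, 7) work).

```python
from itertools import combinations

def knapsack_subset(numbers, target):
    # Exhaustive search for any subset whose sum equals the target.
    for r in range(1, len(numbers) + 1):
        for combo in combinations(numbers, r):
            if sum(combo) == target:
                return combo
    return None

subset = knapsack_subset((2, 3, 5, 7), 10)
```

Exhaustive search is exponential in the size of the set, which is precisely what makes the problem usable as the basis of an asymmetric cipher.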
Ciphers based on the knapsack problem include Rivest-Chor, MH
(Merkle-Hellman, which has an iterative version and versions with an
additive or a multiplicative trapdoor), GS (Graham-Shamir) and SH
(Shamir).
1.3 Conclusions
This paper is a synthesis of the basic concepts present in
cryptography. The other papers of this handbook module present in
detail issues such as: algorithms before the '70s, hash functions,
symmetric and asymmetric encryption algorithms, multiple encryption
systems and encryption models. The digital signature is treated in a
separate module-chapter of this handbook.
With this synthesis the authors describe the main topics that should
be studied in cryptography basics in order to better understand all the
following technologies, such as operating systems security, networking
security, intelligent agents and database security.
References:
2. Symmetric crypto-algorithms’ ancestor issues
Victor Valeriu PATRICIU, Cristian TOMA
The transposition ciphers produce a transposition (permutation,
reversal) of the characters in the plaintext. The encryption key is the
pair K = (d, f), where d represents the length of the successive
character blocks that will be encrypted according to the permutation f,
f: Zd -> Zd, Zd = {1, 2, …, d}, written as
( 1    2    …  d   )
( f(1) f(2) … f(d) )
with f(i) != f(j) for all i != j; the number of functions so defined is
d!. This way the clear text M = m1 m2 … md md+1 … md+d … (a succession
of blocks) is encrypted as: Ek(M) = mf(1) … mf(d) md+f(1) … md+f(d) ….
Decryption is obtained with the inverse permutation.
Encryption by transposition is a transformation of the plaintext in
which the positions of the characters in the message are modified. The
transpositions may be applied to the whole message or to blocks of
length d obtained by splitting the entire message. In the transposition
encryption method the alphabet of the cleartext remains unchanged. An
often used method for implementing this type of transformation is
writing the message in a certain matrix, the ciphertext then being
obtained by reading the characters by line, by column or along a
certain route in the matrix. The transposition ciphers are classified,
by the number of applications, into monophase transpositions, when they
are applied once, and multiphase transpositions, when they are applied
several times. As well, if in the transformation process the unit
element is the letter, the transposition is called monographic, and if
groups of characters (symbols) are transposed, the transposition is
called multigraphic. [PATR94]
The simplest monographic transpositions are obtained by splitting the
clear text into halves written one under the other, after which the
columns are read from left to right. For instance, the word
"calculator" is encrypted like this:
C A L C U
L A T O R    => CLAALTCOUR
or, writing the word column by column (the letters in odd positions
form the first row and those in even positions the second) and reading
by rows:
C L U A O
A C L T R    => CLUAOACLTR
A longer text, such as "CALCULATOR UNIVERSAL FELIX" completed with X,
is written in a matrix in the same way:
C A L C U
L A T O R
U N I V E
R S A L F
E L I X X
In encryption practice, the decision about the reading routes is often
made with the help of a key word. The key has a number of letters equal
to the number of columns in the matrix. The letters of the key,
numbered alphabetically, are written above the matrix; the columns of
the matrix, read in the order decided by the key, provide the encrypted
text.
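A sketch of this keyed columnar transposition; the key word ZEBRA is an invented example, not one from the text.

```python
def column_order(keyword):
    # Number the key letters alphabetically (ties resolved left to
    # right); the result lists the column indices in reading order.
    return sorted(range(len(keyword)), key=lambda i: (keyword[i], i))

def columnar_encrypt(plaintext, keyword):
    width = len(keyword)
    # Write the message row by row under the key word ...
    rows = [plaintext[i:i + width]
            for i in range(0, len(plaintext), width)]
    # ... and read whole columns in the order decided by the key.
    return "".join(
        "".join(row[c] for row in rows if c < len(row))
        for c in column_order(keyword)
    )

cipher = columnar_encrypt("CALCULATOR", "ZEBRA")  # hypothetical key word
```

For ZEBRA the alphabetical numbering reads the columns under A, B, E, R, Z in that order.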
An ingenious transposition of the letters in a clear text is realized
by rotating a square-shaped grille (a bit mask) by 90 degrees. Other
grille shapes (triangles, pentagons) have been imagined for transposing
the letters of the plaintext. More transposition types have also been
realized, such as those operating on letter groups – polygraphic
transpositions.
In the case of computational ciphers (those that use informatics
systems), the transposition is made by P boxes (fig. 1.7).
[Fig. 1.7: a P box permutes the input bits m1, m2, …, mn into the
output bits c1, c2, …, cn.]
The substitution ciphers replace each character of the message
alphabet A with a character of the cryptogram alphabet C (the
cryptogram alphabet C may or may not coincide with the message alphabet
A; for the transposition ciphers that was not an issue, because symbols
were only transposed, never substituted). If A = {a1, a2, …, an} then
C = {f(a1), f(a2), …, f(an)}, where f: A -> C is the substitution
function, representing the cipher key.
The encryption of a message M = m1 m2 … mn is made as:
Ek(M) = f(m1) f(m2) … f(mn).
So the substitutions are transformations by which the characters
(symbols) or groups of symbols of the basic alphabet are replaced by
characters or groups of characters of a secondary alphabet. In
practice, the substitution that is frequently applied may be described
by a linear transformation of the form: C = a*M + b (mod N).
For this purpose a one-to-one correspondence is established between
the primary alphabet letters and the integers 0, 1, …, N-1, which form
a ring, ZN, under addition modulo N and multiplication modulo N. In the
relation, a is called the amplification factor and b the shift
coefficient. By particularizing the a and b coefficients, particular
cases of linear transformations are obtained. In the simplest case a
correspondence is established between the letters mi ∈ M of the primary
alphabet and the elements ci ∈ C of the secondary alphabet (possibly an
extended alphabet) of the cryptogram.
A B C … X Y Z
D E F … A B C
Primary letters: A B C … X Y Z
Equivalent numbers, ei: 0 1 2 … 23 24 25
3*ei (mod 26): 0 3 6 … 17 20 23
Cipher: A D G … R U X
The letters of the cipher are obtained from the primary alphabet also
by the following selection process: A is chosen as the first letter and
then, in cyclic order, every third letter: D, G, …, Y. After Y the
cipher string continues with B, because, in cyclic order, the third
letter after Y in the primary alphabet is B; for this reason the
amplification factor a = 3 is also called the selection factor.
A substitution alphabet is thus obtained by composing the shift
operation with that of selection. So, for instance, given b = 4 and
a = 3, the cipher C(ei) = 3*(ei + 4) (mod 26) is obtained, which is
equivalent to a general linear transformation of the form
C(ei) = 3*ei + 12 (mod 26). The cipher, or permutation P of the letters
in the primary alphabet, where
P = ( A B C … X Y Z )
    ( M P S … D G J )
is characterized in a unique way by the pair of numbers (3, 4), in
which 3 represents the selection factor and 4 the shift coefficient.
Generally, the pair (a, b) that uniquely defines a linear
transformation is called the substitution key.
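This linear substitution with the selection factor a = 3 and the shift coefficient b = 4 can be sketched directly:

```python
def linear_encrypt(text, a, b, n=26):
    # C(e) = a*(e + b) mod n, equivalent to a*e + a*b mod n.
    return "".join(
        chr(ord('A') + (a * (ord(ch) - ord('A') + b)) % n)
        for ch in text
    )

head = linear_encrypt("ABC", a=3, b=4)   # first letters of permutation P
tail = linear_encrypt("XYZ", a=3, b=4)   # last letters of permutation P
```

The outputs reproduce the edges of the permutation P given in the text, A B C -> M P S and X Y Z -> D G J.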
These types of ciphers are weak against cryptanalytic attacks, because
it is enough for a cryptanalyst to establish a simple key, based on
which the substitution of all the letters is obtained. A cipher
stronger against attacks is the random substitution cipher, in which
the letters of the substitution alphabet are obtained by a
randomization process.
The random substitution cipher, along with the advantage of more
difficult decryption, since the letters in the substitution alphabet
are statistically independent, presents a disadvantage concerning the
generation, transmission and keeping of the key. The key contains, in
this case, 26 pairs of equivalent numbers of type (a, b), where
a, b ∈ {0, 1, 2, …, 25}.
An encryption system based on substitution is also obtained by using a
mnemonic key. For instance, a permutation of the primary alphabet can
be specified with the help of a mnemonic key.
Take as an example the literal key CERNEALA, under which is written
the number key obtained by numbering the letters of the key word after
ordering them alphabetically, as follows:
where the first A in the key word receives the order number 1, the
second A the number 2 and so on. Afterwards the letters of the primary
alphabet are written under the number key in the form:
C E R N E A L A
3 4 8 7 5 1 6 2
---------------
A B C D E F G H
I J K L M N O P
Q R S T U V W X
Y Z
Reading the columns in the order given by the number key produces the
secondary alphabet:
C E R N E A L A
3 4 8 7 5 1 6 2
---------------
F N V H P X A I
Q Y B J R Z E M
U G O W D L T C
K S
so that
P(CERNEALA) = ( A B C … X Y Z )
              ( F N V … C K S )
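The derivation of the secondary alphabet from the mnemonic key can be sketched as follows; the procedure (number the key letters alphabetically, write the alphabet in rows under the key, read the columns in number order) is the one described above.

```python
import string

def substitution_alphabet(key):
    # Number the key letters alphabetically, ties resolved left to right.
    order = sorted(range(len(key)), key=lambda i: (key[i], i))
    # Write the primary alphabet row by row under the key word.
    width = len(key)
    rows = [string.ascii_uppercase[i:i + width]
            for i in range(0, 26, width)]
    # Read the columns in the order given by the number key.
    return "".join(
        "".join(row[c] for row in rows if c < len(row))
        for c in order
    )

secondary = substitution_alphabet("CERNEALA")
```

For the key CERNEALA this reproduces the secondary alphabet of the table above, beginning F N V H P X A I ….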
Another substitution cipher may be obtained with the help of a
stair-shaped table. For this, the alphabet letters are written in
alphabetical order under the letters of the key, with the condition
that line i should be completed starting with the column numbered i,
for i = 1, 2, …. Then the fixed permutation, or the encrypted alphabet,
is obtained by writing under the letters of the primary alphabet the
letters of the table columns, taken in ascending order. So, for
instance, for the mnemonic key PRACTICA the following stair-shaped
table is obtained:
P R A C T I C A
6 7 1 3 8 5 4 2
---------------
1  A B C D E F
2  G
3  H I J K L
4  M N
5  O P Q
6  R S T U V W X Y
7  Z
P = ( A B C D E … W X Y Z )
    ( R S Z A T … L N Q Y )
A periodic substitution cipher uses d cipher alphabets C1, C2, …, Cd
and d functions fi that realize the substitution, of the form
fi: A -> Ci, 1 <= i <= d.
A clear message M = m1 m2 … md md+1 … m2d … will be encrypted by
repeating the sequence of functions f1, …, fd every d characters:
Ek(M) = f1(m1) f2(m2) … fd(md) f1(md+1) ….
The use of a periodic sequence of substitution alphabets considerably
increases the security of the cryptogram by leveling the statistical
characteristics of the language. The same letter in the encrypted text
may represent several letters in the plaintext, with different
frequencies of apparition. In this case the number of possible keys
increases from 26! (mono-alphabetic substitution) to (26!)^n.
In the n-alphabetic substitution, the character m1 of the clear
message is replaced by a character of the alphabet A1, m2 by a
character of the alphabet A2, …, mn by a character of the alphabet An,
mn+1 again by a character of the alphabet A1 and so on.
A known version of poly-alphabetic substitution is the Vigenere
cipher, where the key K is a sequence of letters of the form
K = k1 k2 … kd. The substitution functions fi are defined as follows:
fi(a) = (a + ki) (mod ni), where ni is the length of the alphabet. As
an example, consider the eight-letter key ACADEMIE, which is used
repeatedly to encrypt the message SUBSTITUTIE POLIALFABETICA.
Using a one-to-one correspondence between the letters of the alphabet
and the elements of the ring of residue classes modulo 26 (A = 0,
B = 1, …, Z = 25), the 8-alphabetic substitution leads to the following
encrypted text:
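The 8-alphabetic substitution with the key ACADEMIE can be reproduced with a short sketch (spaces are dropped from the message, as is usual for this cipher):

```python
def vigenere_encrypt(plaintext, key):
    # f_i(a) = (a + k_i) mod 26, with the correspondence A=0, ..., Z=25.
    out = []
    for i, ch in enumerate(plaintext):
        k = ord(key[i % len(key)]) - ord('A')
        out.append(chr(ord('A') + (ord(ch) - ord('A') + k) % 26))
    return "".join(out)

cipher = vigenere_encrypt("SUBSTITUTIEPOLIALFABETICA", "ACADEMIE")
```

Each block of eight message letters is shifted by the numeric values of A, C, A, D, E, M, I, E in turn.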
In the binary version, each message character mi is combined by
bitwise xor with the key character ki, i = 1, 2, …, n (each bit is
taken from the binary representation of mi and xor-ed with the
corresponding bit of ki).
[Table: a digram substitution – each pair of letters (row letter,
column letter) is replaced by the two-letter group at their
intersection:]
     A   B   C   D   E
A | QX  FN  LB  YE  HJ
B | AS  EZ  BN  RD  CO
C | PD  RA  MG  LU  OP
…
A classical example of digram substitution is PLAYFAIR's cipher. The
method consists of arranging the Latin alphabet of 25 letters into a
square with five lines and five columns, as follows:
V U L P E
A B C D F
G H I K M
N O Q R S
T W X Y Z
Usually, in the first line of the square a key word is written, and
then the other lines are completed with the letters of the alphabet,
without repeating any letter. The encryption is executed respecting the
following rules:
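Building the Playfair square from a key word can be sketched as below; the 25-letter alphabet omits J, an assumption consistent with the square shown above, whose key word is VULPE.

```python
def playfair_square(keyword):
    # Key-word letters first, then the remaining letters of the
    # 25-letter alphabet (J omitted), without repeating any letter.
    alphabet = "ABCDEFGHIKLMNOPQRSTUVWXYZ"   # no J
    seen, letters = set(), []
    for ch in keyword + alphabet:
        if ch not in seen:
            seen.add(ch)
            letters.append(ch)
    return [letters[i:i + 5] for i in range(0, 25, 5)]

square = playfair_square("VULPE")
```

With the key word VULPE this reproduces exactly the 5x5 square printed above.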
[Figure: binary-to-decimal and decimal-to-binary conversion blocks
around a P box that permutes the input bits m1, m2, …, mn into the
output bits c1, c2, …, cn.]
A product algorithm (also called a product cipher) represents a
composition of t functions (ciphers) f1, f2, …, ft, in which each fi
may be a substitution or a permutation. Shannon proposed the
composition of different kinds of functions for the creation of mixing
transformations that uniformly distribute the set of messages M over
the set of all cryptograms C. These categories of product ciphers are
based on networks of S-P boxes, in which the cryptogram
C = Ek(M) = StPt-1…S2P1S1(M) is obtained, where each Si depends on a
key k that is part of the cipher key K.
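A toy S-P network on 8-bit blocks, assuming an invented P box and reusing the first row of DES's S1 table as a 4-bit S-box; it only illustrates the composition C = StPt-1…S2P1S1(M), not any real cipher.

```python
SBOX = [14, 4, 13, 1, 2, 15, 11, 8,
        3, 10, 6, 12, 5, 9, 0, 7]    # a 4-bit S-box (DES S1, row 0)
PERM = [1, 5, 2, 0, 3, 7, 4, 6]      # hypothetical 8-bit P box

def s_layer(block):
    # Substitute each 4-bit half of the 8-bit block through the S-box.
    return (SBOX[block >> 4] << 4) | SBOX[block & 0xF]

def p_layer(block):
    # Permute the 8 bits: output bit i comes from input bit PERM[i].
    return sum(((block >> src) & 1) << i for i, src in enumerate(PERM))

def sp_network(block, rounds=3):
    # Alternate S and P layers, ending with a final S layer.
    for _ in range(rounds - 1):
        block = p_layer(s_layer(block))
    return s_layer(block)

outputs = {sp_network(b) for b in range(256)}
```

Because the S-box and the P box are both bijections, the whole network is a bijection on the 256 possible blocks, a necessary condition for decryption to exist.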
Modern symmetric ciphers use versions of such Feistel networks. A
Feistel network is reproduced in figure 1.9.
From this point forward almost all symmetric key algorithms are based
on S and P boxes, key scheduling and bit-slice operations.
2.4 Conclusions
The importance of P and S boxes in modern symmetric-key crypto systems
has been shown. The review of the earlier methods that generated the P
and S boxes helps those who study the basics of cryptography to achieve
the knowledge needed for understanding cryptographic algorithms with
symmetric key.
It is also important to have minimal knowledge about Feistel networks,
bit-slice operations and key scheduling in order to eventually design
new cryptographic algorithms that create confusion and diffusion over
the input byte arrays.
The algorithms presented in this article are for study purposes,
because they are no longer used in present computational cryptography.
The most important cryptographic algorithms with symmetric and
asymmetric key used in modern computational cryptography are analyzed
in the next papers.
References:
3. Issues of MD5 – hash algorithm and Data
Encryption Standard – symmetric key algorithm
Cristian TOMA, Marius POPA, Catalin BOJA
The MD5 algorithm was proposed by Ronald Rivest from MIT
(Massachusetts Institute of Technology) and was developed by the
company RSA (Rivest, Shamir, Adleman) Data Security. It is an algorithm
that receives as input a message of arbitrary length and produces as
output a digest of 128 bits.
The calculation of the digest of a message M is made in 5 steps:
1. The message is padded (extended) so that its length in bits
is congruent to 448 modulo 512;
2. The length of the original message, represented on 64 bits,
is appended to the padded message;
3. An MD register of length 128 bits (4 words of 32 bits) is
used to calculate the digest;
4. The message M is processed in successive blocks Mj of 16
words of 32 bits, the processing of each block being made
in 4 rounds, each round consisting of 16 steps;
5. At the end of the processing, the MD register contains the
output, meaning the digest value of 128 bits.
[Figure: one MD5 step – the 128-bit register MDj (four 32-bit words A,
B, C, D) is combined with the message block word Mj and the constant ti
through a nonlinear function F, additions modulo 2^32 and a left
rotation by k bits.]
It begins with an initial constant value MD0, and at the end MDn
represents the digest. The value MD0 is formed by the concatenation of
the fixed values A = 0x67452301, B = 0xEFCDAB89, C = 0x98BADCFE and
D = 0x10325476. Each step updates the register by an operation of the
form A = B + ((A + F(B,C,D) + Mj + ti) <<< k), where Mj is the current
message word, ti a step constant and <<< k a left rotation by k bits.
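One such step can be sketched using the standard round-1 nonlinear function F(B,C,D) = (B AND C) OR (NOT B AND D); the message word fed to it in the usage line is an arbitrary example.

```python
MASK = 0xFFFFFFFF            # all arithmetic is modulo 2**32

def rotl32(x, k):
    # Left rotation of a 32-bit word by k positions (the <<< operation).
    return ((x << k) | (x >> (32 - k))) & MASK

def F(b, c, d):
    # Round-1 nonlinear function of MD5.
    return ((b & c) | (~b & d)) & MASK

def md5_step(a, b, c, d, m_j, t_i, k):
    # A = B + ((A + F(B,C,D) + Mj + ti) <<< k), modulo 2**32.
    return (b + rotl32((a + F(b, c, d) + m_j + t_i) & MASK, k)) & MASK

# Example: the initial register values with an arbitrary message word.
new_a = md5_step(0x67452301, 0xEFCDAB89, 0x98BADCFE, 0x10325476,
                 m_j=0, t_i=0xD76AA478, k=7)
```

F acts as a bitwise multiplexer: where a bit of B is 1 the result takes the bit of C, otherwise the bit of D.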
The DES cipher (Data Encryption Standard) is the first standard
dedicated to the cryptographic protection of computer data. It was
studied by IBM starting with 1970 for the NBS (National Bureau of
Standards). After a few modifications made by NBS and NSA (National
Security Agency), it was published as FIPS PUB 46 (Federal Information
Processing Standards Publications) in 1977 and called DES. Afterwards,
it was adopted by ANSI (American National Standards Institute) as
standard ANSI X3.92 and called DEA (Data Encryption Algorithm).
DES is a symmetric block cipher, with a processed data block length of
64 bits, the block being processed in conjunction with a key of 64
bits. The key has 56 bits randomly generated (or derived from a
password) and 8 bits used for detecting transmission errors (each such
bit represents the odd parity of one of the 8 octets of the key).
As a whole, this algorithm is nothing else but a combination of two
encryption techniques: "confusion" and "diffusion". The fundamental DES
design is a combination of these two techniques (a substitution
followed by a permutation) applied to the message, based on the key.
This design is called a round (in fact a round is a Feistel network
that permutes the 2 halves (each of 32 bits) of the initial message
block and applies a substitution through the function f, which with the
help of the Feistel network becomes reversible). DES is composed of 16
rounds, and at each round a different key of 48 bits is used, extracted
from the initial 56-bit key.
The DES encryption design is presented in figure 1.12.
The steps of the algorithm are as follows:
a) the data block is submitted to an initial permutation IP that is
the following:
IP
58 50 42 34 26 18 10 2
60 52 44 36 28 20 12 4
62 54 46 38 30 22 14 6
64 56 48 40 32 24 16 8
57 49 41 33 25 17 9 1
59 51 43 35 27 19 11 3
61 53 45 37 29 21 13 5
63 55 47 39 31 23 15 7
where bit 58 of the initial block of the clear message becomes the
first bit, bit 50 becomes bit 2 and so on, up to bit 7, which becomes
bit 64.
b) Each round is executed as follows: the block resulting from the
previous step is divided into two blocks of 32 bits (block L and block
R), after which the following processing is executed: Li = Ri-1,
Ri = Li-1 ⊕ f(Ri-1, Ki). The counter i represents the number of the
round and the operation ⊕ means XOR (exclusive-or on bits = sum modulo
2, bit by bit).
So L0 and R0 represent the first 4 and the last 4 lines of IP. The
following transformations take place:
L1 = R0, R1 = L0 ⊕ f(R0, K1); …; L15 = R14, R15 = L14 ⊕ f(R14, K15).
Kn is the key of each round (1 <= n <= 16). Kn is obtained with the
formula Kn = KS(n, KEY), where KS is the programming function for the
keys of each round (key scheduling). The function f and KS will be
presented in the next paragraphs.
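The round transformations Li = Ri-1, Ri = Li-1 ⊕ f(Ri-1, Ki) can be sketched with a toy round function standing in for DES's real f (which is presented in the following paragraphs); the round keys here are arbitrary.

```python
def toy_f(r, k):
    # Hypothetical stand-in for DES's round function f.
    return (r * 0x9E3779B1 + k) & 0xFFFFFFFF

def feistel_encrypt(left, right, round_keys):
    # Li = R(i-1);  Ri = L(i-1) xor f(R(i-1), Ki)
    for k in round_keys:
        left, right = right, left ^ toy_f(right, k)
    return left, right

def feistel_decrypt(left, right, round_keys):
    # The same network, run with the round keys in reverse order,
    # inverts the cipher even though f itself is not invertible.
    for k in reversed(round_keys):
        left, right = right ^ toy_f(left, k), left
    return left, right

keys = [0x0F, 0x8D, 0x3B, 0x1A]              # arbitrary round keys
c = feistel_encrypt(0x01234567, 0x89ABCDEF, keys)
p = feistel_decrypt(*c, keys)
```

This is exactly the property the text describes: the XOR structure of the network makes the overall transformation reversible regardless of f.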
Function f is reproduced in figure 1.13:
By E is denoted a function that takes a block of 32 bits and "throws"
48 bits at its output. In fact E implements an expansion permutation,
as in the following table:
E BIT-SELECTION TABLE
32 1 2 3 4 5
4 5 6 7 8 9
8 9 10 11 12 13
12 13 14 15 16 17
16 17 18 19 20 21
20 21 22 23 24 25
24 25 26 27 28 29
28 29 30 31 32 1
For instance, bits 32, 1 and 2 of the block R become bits 1, 2 and 3
of the block E(R). The matrix so obtained is interpreted as 8 blocks of
6 bits each. Each block will undergo an S transformation (S-box). Each
of the 8 S-boxes receives 6 bits and outputs 4 bits according to a
substitution table. The substitution of each S-box is reproduced in the
following tables:
S1
14 4 13 1 2 15 11 8 3 10 6 12 5 9 0 7
0 15 7 4 14 2 13 1 10 6 12 11 9 5 3 8
4 1 14 8 13 6 2 11 15 12 9 7 3 10 5 0
15 12 8 2 4 9 1 7 5 11 3 14 10 O 6 13
S2
15 1 8 14 6 11 3 4 9 7 2 13 12 0 5 10
3 13 4 7 15 2 8 14 12 0 1 10 6 9 11 5
0 14 7 11 10 4 13 1 5 8 12 6 9 3 2 15
13 8 10 1 3 15 4 2 11 6 7 12 0 5 14 9
S3
10 0 9 14 6 3 15 5 1 13 12 7 11 4 2 8
13 7 0 9 3 4 6 10 2 8 5 14 12 11 15 1
13 6 4 9 8 15 3 0 11 1 2 12 5 10 14 7
1 10 13 0 6 9 8 7 4 15 14 3 11 5 2 12
S4
7 13 14 3 0 6 9 10 1 2 8 5 11 12 4 15
13 8 11 5 6 15 0 3 4 7 2 12 1 10 14 9
10 6 9 0 12 11 7 13 15 1 3 14 5 2 8 4
3 15 0 6 10 1 13 8 9 4 5 11 12 7 2 14
S5
2 12 4 1 7 10 11 6 8 5 3 15 13 0 14 9
14 11 2 12 4 7 13 1 5 0 15 10 3 9 8 6
4 2 1 11 10 13 7 8 15 9 12 5 6 3 0 14
11 8 12 7 1 14 2 13 6 15 0 9 10 4 5 3
S6
12 1 10 15 9 2 6 8 0 13 3 4 14 7 5 11
10 15 4 2 7 12 9 5 6 1 13 14 0 11 3 8
9 14 15 5 2 8 12 3 7 0 4 10 1 13 11 6
4 3 2 12 9 5 15 10 11 14 1 7 6 0 8 13
S7
4 11 2 14 15 0 8 13 3 12 9 7 5 10 6 1
13 0 11 7 4 9 1 10 14 3 5 12 2 15 8 6
1 4 11 13 12 3 7 14 10 15 6 8 0 5 9 2
6 11 13 8 1 4 10 7 9 5 0 15 14 2 3 12
S8
13 2 8 4 6 15 11 1 10 9 3 14 5 0 12 7
1 15 13 8 10 3 7 4 12 5 6 11 0 14 9 2
7 11 4 1 9 12 14 2 0 6 10 13 15 3 5 8
2 1 14 7 4 10 8 13 15 12 9 0 3 5 6 11
The outputs of the 8 S-boxes are concatenated into a 32-bit block L
(each S-box producing 4 bits of L), which is then passed through a
permutation P. If P(L) denotes the output of this P-box for the input L,
then bits 16, 7 and 20 of L become bits 1, 2 and 3 of P(L), and so on.
In order to compute KS, the bit table PC-1 (Permuted Choice 1) is
defined as follows:
PC-1
57 49 41 33 25 17 9
1 58 50 42 34 26 18
10 2 59 51 43 35 27
19 11 3 60 52 44 36
63 55 47 39 31 23 15
7 62 54 46 38 30 22
14 6 61 53 45 37 29
21 13 5 28 20 12 4
The table selects 56 of the 64 bits of the initial key (the key bits
are numbered from 1 to 64) and divides them into two halves, C0 and D0:
bits 57, 49, 41, …, 36 of the initial key become bits 1, 2, 3, …, 28 of C0,
and bits 63, 55, 47, …, 4 of the initial key become bits 1, 2, 3, …, 28 of
D0. With C0 and D0 so defined, the blocks Cn and Dn are obtained from the
blocks Cn-1 and Dn-1, for n = 1, 2, …, 16, by cyclic left shifts. The shift
schedule is reproduced in figure 1.14:
Iteration number    Number of bits of cyclic left shift
1 1
2 1
3 2
4 2
5 2
6 2
7 2
8 2
9 1
10 2
11 2
12 2
13 2
14 2
15 2
16 1
After these steps, the encrypted message block is obtained from the
clear message block. The initial message has, in general, several blocks of
64 bits, and each block passes through the cipher presented above in order
to obtain the corresponding encrypted block.
The reverse cipher (decryption) uses the same algorithm but with the
keys Ki applied in reverse order, from K16 to K1. The first step in
decryption is applying the permutation IP, which undoes the last step of
the encryption operation, IP-1. Then the halves are generated in reverse:
Ri-1 = Li, Li-1 = Ri ⊕ f(Li, Ki); starting from R16 and L16, in the end R0
and L0 are obtained. Finally, the 64-bit block is submitted to the inverse
permutation IP-1.
This was the standard used by the majority of symmetric
cryptographic systems until Rijndael was officially selected as AES, the
replacement for DES, on 2 October 2000.
3.3 Conclusions
This paper has presented the MD5 (Message Digest 5) hash function
algorithm and the DES (Data Encryption Standard) algorithm. The main
information was disseminated for MD5 from RFC 1321 [RFC132] – the
MD5 Request for Comments – and for DES from FIPS 46-2 [FIPS46] –
Federal Information Processing Standards Publication 46-2. There are
many other implementations, but this article has shown the standard
presentations of the algorithms. In practical applications and systems MD5
tends to be replaced by SHA-1 and DES by AES Rijndael, but there are
still many systems that use MD5 and DES.
References:
4. Practical topics on Rijndael - Advanced Encryption
Standard - AES, symmetric key algorithm
Cristian TOMA
Some operations in the Rijndael algorithm are defined at byte (octet)
level, and the bytes are represented in the finite field GF(2^8). For a
better understanding of GF, the operations of addition and
multiplication in the algebraic ring over the numbers of the finite
field GF(2^8), Galois Field (256), are briefly presented.
In GF(2^8) the numbers from 0 to 255 are represented, i.e. 256
numbers. The maximum value that may be represented on an unsigned byte
(octet) is 255 (all 8 bits with value 1 =>
2^0+2^1+2^2+2^3+2^4+2^5+2^6+2^7 = 2^8-1 = 255). On one hand one operates
with bits, and on the other hand with the mathematical polynomial
representation.
For instance, the value 7 is represented in binary as 0000 0111 on a byte,
or in polynomial form as:
b(x) = 0*x^7+0*x^6+0*x^5+0*x^4+0*x^3+1*x^2+1*x^1+1*x^0.
a) The addition of two bytes is the bitwise XOR of their
representations. For example, '57' + '83' = 'D4' in hexadecimal
(each hexadecimal digit occupies 4 bits): '01010111' +
'10000011' = '11010100', or in polynomial representation
(x^6+x^4+x^2+x+1) + (x^7+x+1) = x^7+x^6+x^4+x^2. This
operation is implemented very easily at byte level in ASM,
C/C++, Java or C#, using bitwise XOR (the operator ^).
The set {0…255} together with the XOR operation
forms an abelian group (the operation is internal,
associative and commutative, there exists the neutral element
'00', and every element has an inverse – each element is
its own inverse);
b) The multiplication has no equivalent bit operation
in present processors. In polynomial
representation, multiplication in GF(2^8) corresponds to
the multiplication of two polynomials modulo an
irreducible polynomial of degree 8. An irreducible polynomial
is a polynomial that has no divisors other than 1 and
itself. In Rijndael, the degree-8 irreducible
polynomial is called m(x) and is: m(x) =
x^8+x^4+x^3+x+1, i.e. '11B' in hexadecimal representation.
Example: '57' * '83' = 'C1' in hexadecimal, or in polynomial form:
(x^6+x^4+x^2+x+1) * (x^7+x+1)
= (x^13+x^11+x^9+x^8+x^7) + (x^7+x^5+x^3+x^2+x) + (x^6+x^4+x^2+x+1)
= x^13+x^11+x^9+x^8+x^6+x^5+x^4+x^3+1,
and (x^13+x^11+x^9+x^8+x^6+x^5+x^4+x^3+1) modulo m(x) = x^7+x^6+1, i.e. 'C1'.
c) Multiplication can also be computed using logarithms. The
element '03' (the polynomial x+1) is a generator of the
multiplicative group of GF(2^8), so every non-zero element can be
written as a power of it. Then c = a*b = 3^(log3(a)+log3(b)), i.e.
c = (x+1)*(x+1)*…*(x+1), with (x+1) multiplied log3(a)+log3(b)
times (the exponent being taken modulo 255).
#include <stdio.h>

class inelGF {
public:
    unsigned char val;  // 1 unsigned octet, i.e. 8 bits
    int alog[256];      // exponential function f(y); for example
                        // f(4) = (x+1)*(x+1)*(x+1)*(x+1) = x^4+4x^3+6x^2+4x+1
    int log[256];       // logarithm, the inverse of the exponential; for
                        // example g(x^4+4x^3+6x^2+4x+1) = 4, i.e. how many
                        // times (x+1) must be multiplied to give that polynomial
    // constructors
    inelGF(int b = 0);
    inelGF(unsigned char b);
    // methods
    void generareALOGSiLog();
    void setVal(unsigned char b);
    unsigned char getVal();
    // addition
    inelGF operator+ (inelGF &);
    // multiplication
    inelGF operator* (inelGF &);
    // assignment
    inelGF operator= (inelGF &);
};

inelGF::inelGF(int b) {
    this->val = (unsigned char)b;
    this->generareALOGSiLog();
}

inelGF::inelGF(unsigned char b) {
    this->val = b;
    this->generareALOGSiLog();
}

// builds the table of powers of '03' (alog) and its inverse (log)
void inelGF::generareALOGSiLog() {
    alog[0] = 1;
    int i = 0, j = 0;
    int ROOT = 0x11B;   // m(x) = x^8+x^4+x^3+x+1
    for (i = 1; i < 256; i++) {
        j = (alog[i-1] << 1) ^ alog[i-1];    // multiply the previous power by '03'
        if ((j & 0x100) != 0) j = j ^ ROOT;  // reduce modulo m(x)
        alog[i] = j;
    }
    for (i = 1; i < 256; i++) log[alog[i]] = i;
}

void inelGF::setVal(unsigned char b) {
    this->val = b;
}

unsigned char inelGF::getVal() {
    return this->val;
}

inelGF inelGF::operator+(inelGF &igf2) {
    inelGF temp;
    temp.val = this->val ^ igf2.val;   // addition in GF(2^8) is XOR
    return temp;
}

inelGF inelGF::operator*(inelGF &igf2) {
    inelGF temp;
    int t1 = 0;
    int t2 = (int)this->val;
    int t3 = (int)igf2.val;
    if (t2 != 0 && t3 != 0)
        t1 = this->alog[(log[t2 & 0xFF] + log[t3 & 0xFF]) % 255];
    // i.e. 7*5 = alog[log[7]+log[5]]; the logarithm keeps its usual
    // properties in this algebraic ring, just as over the real numbers
    temp.val = (unsigned char)t1;
    return temp;
}

inelGF inelGF::operator=(inelGF &igf2) {
    this->val = igf2.val;
    return *this;
}

int main()
{
    inelGF a(87);    // '57'
    inelGF b(131);   // '83'
    inelGF rez1, rez2;
    rez1 = (a + b);
    rez2 = (a * b);
    printf("Addition result: %d\n", rez1.val);        // 212 = 'D4'
    printf("Multiplication result: %d\n", rez2.val);  // 193 = 'C1'
    return 0;
}
This example is a C++ class (data structure) in which the two
operations are implemented; together with the Galois field they form an
algebraic ring.
Taking into account that the operations are made with registers or
data blocks of 32 bits (4 bytes), polynomials with coefficients in GF(2^8)
are defined as an abstraction of the mathematical polynomial operations.
In this way, a vector of 4 bytes corresponds to a polynomial of degree
less than 4 whose coefficients are bytes. The addition of such polynomials
is made by simply adding the coefficients (each coefficient, having 8
bits – a byte – is itself seen as a polynomial), meaning XOR (exclusive
or) between coefficients. The multiplication is more complicated. Take two
polynomials with coefficients in GF(2^8):
a(x) = a3x^3 + a2x^2 + a1x + a0 and b(x) = b3x^3 + b2x^2 + b1x + b0,
and let c(x) = a(x)*b(x) = c6x^6 + … + c0 be their product, where each ci
is the XOR of the products aj*bk with j+k = i.
Obviously, the resulting c(x) can have a degree higher than 3, so it no
longer represents a 4-byte vector. Reducing modulo a polynomial of
degree 4 brings the result back to a degree less than 4. In Rijndael the
modulus polynomial is M(x) = x^4+1, and since x^j mod (x^4+1) = x^(j mod 4),
the product is defined as: a(x) ⊗ b(x) = (a(x)*b(x)) mod M(x) = c(x) mod M(x)
= d(x).
The form of d(x) is d(x) = d3x^3 + d2x^2 + d1x + d0, where each di is the
XOR of the products aj*bk with j+k = i (mod 4).
Equivalently, the product d(x) = a(x) ⊗ b(x) can be written as a
multiplication by a circulant matrix, where the elements of the matrix are
polynomials (each element is a byte), i.e. elements of GF(2^8).
Three criteria were taken into account when designing the cipher: to
be resistant against all known attacks; to be implementable on a
whole series of platforms with high computational speed; and for the
design and implementation to be as simple as possible. Unlike the
majority of algorithms, the round function is NOT implemented by
a Feistel network (DES, Twofish and Serpent use a Feistel structure at
each round). In fact, the round function (round transformation) is
composed of three different transformations, uniform (uniform meaning
that each bit of the State – the bit array taken into the algorithm, or
the bit array that is an intermediate result of the processing – is
treated similarly) and invertible, called layers.
A round is reproduced in figure 1.15. (Copyright [SAVA00])
Each layer must fulfill the following objectives:
The linear mixing layer: ensures a great diffusion of the bits along
the multiple rounds of the algorithm. This layer is realized by the
functions ShiftRow and MixColumn.
The non-linear layer: represented by several parallel S-boxes that
combine the bits in a non-linear way. This layer is
realized by the function ByteSub.
The key addition layer: a bitwise XOR is executed between the round
key (generated from the original key of the user) and the bits of the
State. This layer is realized by the function AddRoundKey.
All layers repeat at each round (there are 10, 12 or
14 rounds, according to the length of the key).
The next paragraphs of this article are adapted from the AES
Rijndael block cipher standard (Copyright [AESR99]). The
cipher consists of an initial application of the user's key (Round
Key Addition), Nr-1 rounds and a final round.
The pseudo code in C is the following:
Rijndael(State,CipherKey) {
KeyExpansion(CipherKey,ExpandedKey) ;
AddRoundKey(State,ExpandedKey);
for( i=1 ; i<Nr ; i++ )
Round(State,ExpandedKey + Nb*i);
FinalRound(State,ExpandedKey + Nb*Nr);
}
FinalRound(State,RoundKey) {
ByteSub(State) ;
ShiftRow(State) ;
AddRoundKey(State,RoundKey);
}
AddRoundKey(State,ExpandedKey) {
State = (State ^ ExpandedKey);
}
Fig. 1.16. Example of a State of 192 bits (Nb=6; 4 rows * Nb columns * 8
bits = 192 bits) and of a 128-bit key (Nk=4; 4 * Nk * 8 bits = 128 bits)
, where j is calculated by truncating the result of the division. The
number Nr of rounds of the algorithm (how many times the Round function is
applied) is determined, according to the block and key lengths, by a table
in the standard:
representation B(x)*X(x) = 1 (by convention, the inverse of '00' – the
hexadecimal representation of the zero byte '00h' – is '00'), and 2. an
"affine" transformation is applied, given by:
Fig. 1.18. The action of the ByteSub function on each byte in the State
Fig. 1.20. The ShiftRow function acting on a State
The inverse function InvMixColumn supposes that each column of
the State is multiplied by the inverse polynomial of c(x), given by d(x):
('03'x^3 + '01'x^2 + '01'x + '02') ⊗ d(x) = '01' => d(x) = '0B'x^3 + '0D'x^2 +
'09'x + '0E'.
In this function, SubByte(W) is a transformation that receives a
4-byte word as input and returns a 4-byte word, taking each byte of the
input word through a Rijndael S-box. The operation RotByte(W) returns a
word whose bytes are cyclically rotated, so that if the input word is
composed of the bytes (a,b,c,d) the result is (b,c,d,a). For Nk>6 the
pseudo code of the function is described in the pseudo code presented for
the cipher. In both functions appear the constants Rcon and RC, which are
independent of Nk and are defined as: Rcon[i] = (RC[i],'00','00','00'),
with RC[i] representing an element of GF(2^8) with the value x^(i-1), so:
RC[1] = 1; RC[2] = x; RC[3] = x^2; in general RC[i] = x*RC[i-1] = x^(i-1).
The reverse cipher means the reverse application of the cipher used
for encryption. The pseudo code in C is the following (each function
has been defined in the previous paragraphs):
InvRijndael(State,CipherKey) {
KeyExpansion(CipherKey,ExpandedKey) ;
InvFinalRound(State,ExpandedKey + Nb*Nr);
for(i=1;i<Nr;i++)
InvRound(State,ExpandedKey + Nb*i);
AddRoundKey(State,ExpandedKey);
}
KeyExpansion(byte Key[4*Nk] word W[Nb*(Nr+1)]) {
for(i = 0; i < Nk; i++)
W[i] = (Key[4*i],Key[4*i+1],Key[4*i+2],Key[4*i+3]);
for(i = Nk; i < Nb * (Nr + 1); i++)
{
temp = W[i - 1];
if (i % Nk == 0)
temp = SubByte(RotByte(temp)) ^ Rcon[i / Nk];
W[i] = W[i - Nk] ^ temp;
}//end for
}
// function used only when Nk > 6
KeyExpansion(byte Key[4*Nk] word W[Nb*(Nr+1)]) {
for(i = 0; i < Nk; i++)
W[i] = (Key[4*i],Key[4*i+1],Key[4*i+2],Key[4*i+3]);
for(i = Nk; i < Nb * (Nr + 1); i++)
{
temp = W[i - 1];
if (i % Nk == 0)
temp = SubByte(RotByte(temp)) ^ Rcon[i / Nk];
else
if (i % Nk == 4) temp = SubByte(temp);
W[i] = W[i - Nk] ^ temp;
}//end for
}
4.5 Conclusions
This article has presented technical software issues that optimize
the Rijndael implementation. Rijndael is a cipher of simple design, and
comments were requested on its suitability for ATM, HDTV, B-ISDN, voice
and satellite applications. The only thing that is relevant in such
systems is the processor on which the cipher is implemented, and Rijndael
can be implemented efficiently in software on a wide range of processors:
it makes use of a limited set of instructions and has sufficient
parallelism to fully exploit modern pipelined processors with multiple
arithmetic-logic units.
It has been demonstrated that, for applications requiring rates
higher than 1 Gigabit/second, Rijndael can be implemented in dedicated
hardware.
References:
5. Simple Approach on MH, RSA, El Gamal and DSS,
asymmetric key algorithms
Victor Valeriu PATRICIU, Ion IVAN, Cristian TOMA
The MH public-key encryption method is based on the well-known
knapsack problem, which consists of determining, in a set of
integer numbers, a subset with a given sum. Merkle and
Hellman proposed a method whose security depends on the difficulty of
solving the following problem:
Given a positive integer C and a vector A=(a1, a2,…,an) of positive integers,
find a subset of A whose sum is C. In other words, it is
necessary to determine a binary vector M=(m1, m2,…,mn) (the elements have
as values only 0 and 1) such that A*M = C:
RucsacSimplu(C, A, M) {
for(i=N; i>=1; i--) {
if(C >= a[i]) m[i] = 1; else m[i] = 0;
C = C - a[i]*m[i];
}
if(C == 0) "The solution is in M"
else "There is no solution; another M will be tried"
}
In designing the MH algorithm with additive trapdoor, the "simple
knapsack" was converted into a trapdoor knapsack that is more difficult to
solve. First, a "simple knapsack" vector A'=(a'1, a'2,…, a'm) is selected;
this knapsack allows a simple solving of the problem, C'=A'*M. Then a
modulus n and a multiplier w relatively prime with n are chosen, and the
public trapdoor vector A is computed from A' with them.
With the simple knapsack A' = (1, 3, 5, 10), the modulus n = 20 and
the multiplier w = 7 (whose inverse is w^-1 = 3 mod 20), the trapdoor
vector is A = w*A' mod 20 = (7, 1, 15, 10). The cryptogram C is obtained
with the help of the trapdoor vector A (the vector A is the public key):
for the clear message M = (1, 1, 0, 1), i.e. 13 read as a binary number,
C = EA(M) = A*M (scalar product) = 7+1+10 = 18 (the clear value 13 becomes 18).
At decryption, the clear message is obtained with the simple vector
A', which is secret (the secret key is A' together with w and n):
M = DA'(C) = DA'(18) = RucsacSimplu(3*18 mod 20, A',
M) = RucsacSimplu(14, A', M) = (1,1,0,1) = 13 (1*1 + 1*3 + 0*5 + 1*10 = 14,
and the bit vector read as a binary number gives 13; M "reveals" itself as
a parameter of the procedure-function, and at the end of the execution the
decrypted message is found in M). Other algorithms are used for the
digital signature and will be presented in the following paragraph.
This cryptographic system with asymmetric (public) keys, created by
three researchers at MIT (Massachusetts Institute of Technology),
represents the "de facto" standard in the field of digital signatures
and of public-key encryption. Under different implementation forms,
as special programs or hardware devices, RSA is recognized today as the
most secure encryption and certification method commercially available.
RSA is based on the present practical impossibility of factoring
very large integers; the encryption/decryption functions are
of exponential type, where the exponent is the key, and the calculation is
made in the ring of residue classes modulo n.
Unlike algorithms such as DSA and El Gamal, which can be used only
for digital signatures, RSA can be used both for the electronic
signature and for encryption/decryption. The digital signing of M is
made as follows: S = S(H(M)) = (H(M))^PRIV mod n, and the verification
checks that H(M) = S^PUB mod n. The encryption of M is made as C = M^PUB mod n
and the decryption as M = C^PRIV mod n.
The strength of the algorithm consists in the difficulty of factoring n
into p and q. The RSA laboratories suggest using very big prime numbers,
of 128 or 1024 bits, whose factorization takes several years.
The numeric example below covers just the digital signature; the
electronic signing process is also depicted in figure 1.24:
given p=53 and q=61, the 2 secret prime numbers of A;
given nA=53*61 = 3233, the product of these two numbers
(if n is very big, the possibility of finding p and q – meaning
of factoring n – is very small);
the Euler indicator is calculated:
o Φ(n) = (p-1)*(q-1) = 52*60 = 3120.
A secret private key PRIVA = 71 is chosen (meaning a
number relatively prime with 3120, in the interval
[max(p,q)+1, n-1]);
The public key PUBA = 791 is calculated as the multiplicative
inverse mod 3120 of PRIVA:
o (71*791) mod 3120 = 1, i.e. 56161 mod 3120 = 1;
A document is considered whose digest H(M)=h is
13021426. As this value exceeds the length of the
modulus nA = 3233, the digest will be broken into two blocks
(1302 and 1426), which A will sign separately, using the key
PRIVA = 71:
o (1302^71) mod 3233 = 1984;
o (1426^71) mod 3233 = 2927;
the electronic signature obtained is S = 1984 2927;
the S obtained is transmitted together with M (in clear – in
this example there is no encryption).
When receiving the package S+M, B will calculate H(M) in two
ways and will compare the results:
he will recover H(M) from S with the public key of A:
o (1984^791) mod 3233 = 1302;
o (2927^791) mod 3233 = 1426;
o so H1 = 1302 1426;
and he will calculate H(M) on the received message:
H2 = H(M) = 1302 1426;
[Figure: user A computes the digest Hash(M) of the document with MD5 or
SHA-1 and signs it with RSA using PRIVA, obtaining S; the document M and
the signature S are sent to user B; B recomputes the digest H2 = Hash(M),
applies RSA with PUBA to S obtaining H1, and if (H1 == H2) the
authentication is correct.]
Fig. 1.24. RSA applied for electronic signature
5.3 El Gamal
The El Gamal algorithm proposes a method of signature derived from
the key distribution scheme of Diffie and Hellman (the Diffie-Hellman
system is described in the next paragraph, which treats different types
of cryptanalytic attacks – the man-in-the-middle attack). The EG
cryptographic system, unlike RSA, can be used only in the
certification process (digital signing) and not for encryption.
The EG asymmetric cryptographic system bases its cryptographic strength
on the difficulty of calculating logarithms in large Galois fields.
Signing a document M is made with the following algorithm:
1. the digest of the message, H(M), is calculated;
2. K is randomly generated in [0, n-1], so that
gcd(K, n-1) = 1;
3. r = a^K (mod n) is calculated;
4. then, using the secret key of the sender, the value
of s is calculated from the equation:
H(M) = (PRIVA*r + K*s) (mod (n-1));
DSA – the Digital Signature Algorithm – is the algorithm for digital
signature of the DSS standard, issued by NIST in August 1991. It is
a highly controversial standard in the field literature because it is
meant to replace the "de facto" standard, RSA. It is based on a
mathematical apparatus derived from the EG method, basing its
cryptographic strength also on the difficulty of calculating
logarithms in finite fields.
The parameters of the system are:
Global parameters (the same for everybody):
o p is a prime number, p in (2^511, 2^512), i.e. p has 512 bits;
o q, a prime divisor of (p-1), of 160 bits, q in (2^159, 2^160);
o g, an integer with the property g = h^((p-1)/q) mod p, where
h is an integer relatively prime with p, h in (0,p), such that:
h^((p-1)/q) mod p > 1;
o H, the hash function used to calculate the digest of a message.
The user's parameters (different from one user to another):
o secret key: PRIV, an integer in (0,q);
o public key: PUB, an integer calculated as:
PUB = g^PRIV mod p.
Signature parameters (different from one signature to another):
o M the message that will be signed;
o k a random integer, k in (0,q), chosen different for each
signature.
The signature of the message M, the pair S=(94,97), is received by B,
who checks it with the help of A's public key:
w = s^-1 mod q = 97^-1 mod 101 = 25; r' = (g^(H(M)*w) * (PUBA)^(r*w) mod p) mod q.
The following are calculated:
H(M)*w mod q = 1234*25 mod 101 = 45; r*w mod q = 94*25 mod 101
= 27; and the verification: r' = (170^45 * 4567^27 mod 7879) mod 101 = 2518
mod 101 = 94.
Because r = r', the signature is valid.
5.5 Conclusions
T
here is presented in this article a comparision study about the most
known cryptographic algorithm with symmetric key. Each algorithm
has simple numerical sample. In computer systems the “de facto”
standard is RSA because it is a simple algorithm and because is very
strong at different cryptanalitic attacks.
The RSA advantage is that it can be use for digital signature but
also for encrypting in order to provide confidentiality. The other
cryptographic algorithms with asymmetric key can be used only for digital
signature and do not present strength of RSA.
References:
6. Encryption Modes and Multiple Ciphers used in
Cryptography
Victor Valeriu PATRICIU, Ion IVAN, Cristian TOMA
Abstract: The title of the article suggests that two major topics are
discussed. One is about encryption modes, and the second about how
to combine 2 or 3 cipher algorithms in order to obtain a greater security
level. The encryption modes topic includes both block and stream
ciphering, and both are analyzed. Most of the techniques presented here
are very often implemented with symmetric-key ciphers, but they are
sometimes implemented with asymmetric-key ciphers too.
As for the mode of use of the symmetric algorithms (no matter the
algorithm used), in practice there are two types of encryption: block
ciphering and stream ciphering.
Block ciphering operates with clear and encrypted data blocks
– usually of 64 and 128 bits but, sometimes, even larger. The best-known
modes of this type are: ECB, CBC, PCBC, OFBNLF.
Stream ciphering operates with clear and encrypted data sequences
of one bit or one byte but, sometimes, with data of 32 bits. The
best-known modes of this type are: sequential stream ciphering, self-
synchronizing sequential stream ciphering, reaction ciphering,
synchronous sequential stream ciphering, output-reaction sequential
stream ciphering, counter ciphering.
In block ciphering, the same clear data block will be ciphered every
time into the same encrypted data block when the same key is used. In
stream ciphering, identical clear data sequences will be encrypted
differently if repeatedly encrypted.
The encryption modes are combinations of the two basic types,
some of them using feedback methods, others producing simple operations.
These operations are simple because the security is an attribute of the
encryption algorithm and not of the mode in which the encryption is applied.
Moreover, the mode of realizing the encryption must not lead to
compromising the security given by the basic algorithm.
6.2 Block ciphers
6.2.1 ECB encryption - Electronic Code Book
[Figure: clear files, data structures and messages pass block by block
through the symmetric crypto system, producing the encrypted files, data
structures and messages.]
The problem with ECB is that a cryptanalyst who has the clear data
block and the equivalent encrypted data block for some messages can
build a code book without knowing the key. In ordinary usage there are
parts of messages that tend to repeat: messages may have redundant
structures or long runs of spaces and zeros. If the cryptanalyst sees
that the clear message '5ffa6ba1' is encrypted as the message
'778e342b', he is able to immediately decrypt that message wherever he
finds it.
6.2.2 CBC encryption - Cipher Block Chaining
[Figure: for each 128-bit clear block Bi read from the clear file, message
or data structure, the reaction register Ri is combined with it (Bi xor Ri);
the crypto system produces Ci = Crypt(Bi xor Ri), which is written to the
encrypted file and becomes the next register value, Ri+1 = Ci; then i = i+1.]
Briefly, the steps are the following: the reaction register is initialized
with the digest of a password, produced by the MD5 hash function. Then,
for i (a counter) from 0 to the number of blocks of the file or data
structure, XOR (exclusive or) is executed between the block read from the
file and the data block in the reaction register; the encrypted block is
written to the file, and then the encrypted block of bits is assigned to
the reaction register; i is incremented and the process is repeated.
This is a composed data structure that involves two data structures of file
type and a structure of array type or, if required, a dynamic one
(a list of lists of bytes).
CBC makes the same data block be transformed into different
data blocks, because for different runs the initialization value of the
reaction register may differ. If the initial value of the reaction
register stays unchanged between runs, then two identical messages
encrypted with the same key will be transformed into the same encrypted
message.
The initialization vector (the initial value of the reaction register)
need not necessarily be secret (it may be generated by a hash
function from a password, so that it does not have to be transmitted
over the network to the receiver).
Even if this seems a wrong approach (not keeping the initial
value secret), it is not, because on the channel (network) circulate only
encrypted blocks and not the key; someone who would like to break the
cipher would have to know what data structure and what algorithm were
used and, moreover, the data transmission protocol, and only then to break
the algorithm. A possible description in C/C++ of this type of structure is:
struct CBC {
FILE *foriginal;
FILE *fcriptat;
unsigned char registruReactie[16]; //16 bytes = 128 bits
unsigned char buffer[16];
AlgoritmCriptare* ob; //the object that performs the encryption,
// receiving as parameters the clear data block and the
// password, and producing the encrypted block
};
[Figure: at decryption, the 128-bit reaction register (initialized with the
same value as at encryption) holds Ri-1; each encrypted block
Ci-1 = Crypto(Bi-1 xor Ri-1) read from the encrypted file is decrypted and
XORed with Ri-1 to recover the clear block Bi-1, and the ciphertext block
becomes the next register value, Ri = Ci-1; then i = i+1.]
In this mode, the stream mode, the clear data are converted bit by bit
into encrypted text. The general model (the data structure on which the
model is founded) is presented in figure 1.28.
[Figure 1.28: on each side a key generator produces the keystream Ki; the
sender combines the clear data Pi with Ki to obtain the encrypted data Ci,
and the receiver applies the same Ki to recover the clear data; both sides
read from and write to files.]
6.4 Multiple ciphering systems
There are several ways to combine block algorithms in order to obtain
new algorithms. The purpose is to try to improve security by means
other than writing a new algorithm. Multiple encryption is one of
the combination techniques: it uses one algorithm to encrypt the same
clear message several times with several keys. Cascading uses the same
principle, but with several algorithms instead of a single one.
In practice, 2 types of multiple or cascade encryption are used:
double ciphering and triple ciphering, as follows:
Double ciphering
o Encryption with two keys
A simple way of improving the security of a symmetric block
algorithm is to encrypt a block with two different keys
according to the formula:
C = EK1(EK2(P)); P = DK2(DK1(C)); where P is the data block
of the clear message, C is the encrypted block, EKi means
applying the symmetric encryption algorithm E with the
key Ki, and DKi means applying the symmetric decryption
algorithm D with the key Ki. The encrypted message block
is harder to break by exhaustive search: for a key
length of n bits, compared to the 2^n possible
versions for a single encryption, there are now 2^2n possible
versions which, for an algorithm using a 128-bit key,
means 2^256 possible keys.
o Encryption with the Davies-Price method
This type of encryption is a version of CBC and is made
according to the formula:
Ci = EK1(Pi ⊕ EK2(Ci-1)); Pi = DK1(Ci) ⊕ EK2(Ci-1); where the
notations are the same, and i designates which block of the
message is encrypted or decrypted. In practice the
Double OFB/Counter method is also used.
Triple ciphering
o Triple ciphering with two keys
The idea proposed by Tuchman is to process a block three
times with two keys, as follows: the sender encrypts with the
first key, then decrypts with the second key and at the end
encrypts again with the first key:
C = EK1(DK2(EK1(P))); P = DK1(EK2(DK1(C))); this method is
also called EDE – encrypt-decrypt-encrypt.
o Triple ciphering with three keys
In this case the model is: C = EK3(DK2(EK1(P))); P =
DK1(EK2(DK3(C)));
Also, in practice are used: triple encryption Inner CBC and triple
encryption Outer CBC.
6.5 Conclusions
The security of symmetric encryption depends on the protection of
the key; key management is a very important element of data
security and includes the following aspects:
Key generating. For the master keys (used to encrypt session
keys) are used manual procedures (dice throwing) and
automatic procedures (statistics on the network packages at a
certain time). For the session keys are used automatic
procedures, for (pseudo)random generating, which are based
on noise amplifications, mathematic functions, and different
parameters (the current number of the system calls, date, hour
etc.);
Key distribution. Concerning the transport of the secret key,
the problem is generally solved by using another key, called the
terminal key, to encrypt it. The session keys – generated only
for one communication – are transported under the terminal
keys which, in turn, are encrypted (when stored) with another
key, called the master key. Another option is to use hybrid
cryptographic systems (asymmetric and symmetric) as follows:
the asymmetric system (which provides certification,
non-repudiation and selective application of services – it even
serves as a digital signature) encrypts the symmetric keys, and
the symmetric keys encrypt only the clear message, thus
ensuring the confidentiality and integrity of the data.
Key storage. Using symmetric algorithms among N entities that
want to communicate requires N(N-1)/2 keys to be stored in a
secure manner. In reality, not all bidirectional links are
established at the same time; this is the reason why session
keys are used. The terminal keys, which encrypt only very
short data (session keys), are very difficult to attack. When
hybrid cryptographic systems are used, it is preferable to rely
on a CA – Certificate Authority – and CA hierarchies.
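The quadratic growth of the key count is easy to check; a small helper (the function name is illustrative):

```python
def symmetric_keys_needed(n: int) -> int:
    # Every unordered pair of the N entities needs its own shared
    # secret, hence N*(N-1)/2 keys in total.
    return n * (n - 1) // 2


# 10 entities already need 45 keys; 1000 entities need 499500,
# which motivates session keys and certificate hierarchies.
print(symmetric_keys_needed(10), symmetric_keys_needed(1000))
```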
References:
Module 2 – Electronic Signatures
Information Security is an integral part of the overall Information
Technology (IT) worker shortage. In all countries, at a time when
demand is highest, the ability to protect the nation's critical
information infrastructures is impeded by the inability to produce the
quantity and quality of information technology professionals required to
operate, maintain and design our cyber systems. Between 1996 and 2006
more than 1.3 million new highly skilled IT workers will be needed in the
EU and the USA to fill new jobs as well as to replace current employees
exiting the career field. This is a growth rate of nearly 90%.
The responsibility for educating that new generation of the work force
will fall squarely on the shoulders of Higher Education programs. In the
cyber-world of networks and sub-networks, eventually nearly all
participating systems are connected into the Global Information
Infrastructures. That
connectivity, while moving forward communications, is inexorably
intertwined with increased vulnerability. Those vulnerabilities grow in
numbers and types on a daily basis, and deserve earnest and robust study
in order to comprehend their impact on systems and the infrastructures
dependent on the viability and survivability of those systems. In this
cyber-world, there are no borders – nothing stands between elements of
information assets in the critical infrastructure and those who would bring
it down. This threat calls for new directions and new pedagogic models in
our educational systems. Educational systems at all levels must be able to
respond to system intrusions, misuse and abuse by providing both initial
and refresher education and training in all areas of Information Security.
Information Security encompasses many disciplines that ensure
the availability, integrity, confidentiality and non-repudiation of
information while it is transmitted, stored or processed. One of the new
paradigms, very important for creating trust in the cyber-world, is the
electronic (digital) signature. This course focuses on the domain of
electronic signatures and their infrastructures (PKI – Public Key
Infrastructures).
The major concern in e-business transactions is the need to replace
the hand-written signature with an 'online' signature, generally
called an electronic signature or, when crypto technology is used
(the only technology accepted today), a digital signature. The traditional
e-mail system, which has problems of message integrity and
non-repudiation, does not fulfill the basic requirements for an online
signature. Further, since the Internet communication system is prone to
various types of security breaches, the discussion of robust and
authenticated e-business transactions is incomplete without consideration
of 'security' as a prominent aspect of 'online signatures'. One may
consider an e-signature
as a type of electronic authentication. Such authentication can be
achieved by means of different types of technologies. A Digital Signature
(DS) can be considered as a type of e-signature, which uses a particular
kind of technology that is crypto DS technology. DS technology involves
encrypting messages in such a way that only legitimate parties are able to
decrypt the message. Two separate but interrelated ‘keys’ carry out this
process of encryption and decryption. One party in the transactions holds
the secret key, or the private key, and the other party holds the public
key or the key with wide access. The selection and use of an encryption
technique plays a crucial role in the design and development of keys. In
short, a DS satisfies all the functions, such as authenticity, non-
repudiation, and security, of a hand-written signature. Such a ‘signature’
can be viewed as a means of authentication and can be owned by an
individual. While using this technology, there must be third-party
involvement in order to handle the liability issues that may be raised by
bilateral transactions. This led to the concept of a Certifying
Authority (CA), which acts as a trusted third party. Many international
and national technologically neutral acts were established to promote
e-business applications in all its modes, such as business-to-business
(B2B), business-to-consumer (B2C) and business-to-government (B2G). With
this existing legal infrastructure and the rapid emergence of software
security products, it is important to understand the role of emerging
technologies like DS in e-business.
Encryption
Encryption is the process of transforming the contents of a
message using a secret key so that the message cannot be read.
Decryption is the process of transforming the message back into a
readable form. Message encryption and decryption is the foundation upon
which a secure messaging system is built. The problems with establishing
and managing a secure messaging system are to ensure that:
Encryption techniques and secret keys are sufficiently complex
so that unauthorized people cannot decrypt messages
Keys are accessible to people who are authorized to use them,
and kept away from people who are not authorized to use them
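The symmetry between encryption and decryption under a single secret key can be sketched with a toy keystream cipher (a counter-mode pad derived with SHA-256; not a production construction, since the key is reused without a nonce, and all names here are illustrative):

```python
import hashlib


def keystream(key: bytes, length: int) -> bytes:
    # Derive a pseudo-random pad from the secret key by hashing
    # the key together with a running counter.
    out, ctr = b"", 0
    while len(out) < length:
        out += hashlib.sha256(key + ctr.to_bytes(8, "big")).digest()
        ctr += 1
    return out[:length]


def encrypt(key: bytes, message: bytes) -> bytes:
    # XOR the message with the key-derived pad.
    return bytes(m ^ k for m, k in zip(message, keystream(key, len(message))))


decrypt = encrypt  # XORing with the same keystream restores the message
```

Anyone holding the same secret key can both encrypt and decrypt, which is precisely the key-distribution problem discussed next.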
Public Key Cryptography
One assumes that encryption techniques have been used for as
long as written languages have existed. Traditionally (until about 30 years
ago), the secret key used to encrypt a message was the same key used to
decrypt a message. This technique is known as symmetrical key or secret
key cryptography. This technology is thought to be sufficiently strong that
it would be almost impossible to decrypt a message without the secret
key.
The problem with symmetrical key encryption is key distribution:
ensuring that the keys to the message senders and recipients do not get
into the hands of unauthorized persons. As the number of users of the
secure messaging system increases, the problem of generating,
distributing, safeguarding, and accounting for the secret keys increases at
a geometric rate. In the 1970s, cryptographers introduced the concept of
asymmetrical key or public key cryptography. Public key cryptography
uses two keys that are mathematically linked; one key can be used only
to encrypt a message, and the other key can be used only to decrypt the
message. The key that is used to encrypt a message can be freely
distributed (or placed in an accessible directory), and the recipient keeps
the key used to decrypt the message.
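A toy numeric sketch of such a key pair, using textbook-sized RSA primes (real keys are 2048 bits or more; the variable and function names are mine):

```python
# Toy RSA key pair (textbook primes; illustration only).
p, q = 61, 53
n = p * q                      # public modulus, 3233
phi = (p - 1) * (q - 1)        # 3120
e = 17                         # public exponent, coprime with phi
d = pow(e, -1, phi)            # private exponent: modular inverse of e


def encrypt_with_public(m: int) -> int:
    # Anyone may do this with the freely distributed key (e, n).
    return pow(m, e, n)


def decrypt_with_private(c: int) -> int:
    # Only the holder of d can undo the encryption.
    return pow(c, d, n)
```

Applying the exponents in the opposite order (private first, public second) turns the same mathematically linked pair into a signature scheme.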
Digital Signatures
Generally speaking, electronic signatures are data attached to
other data for authentication purposes. The term not only refers to digital
signatures (see below), but also to PINs and faxed signatures. Digital
signatures are electronic signatures linked to the signed data in a way
that tampering is noticed and that the sender can be identified
unequivocally. Other forms of electronic signatures, such as PINs, do not
protect the data integrity.
Policies
A Certificate Policy is a set of rules that indicates the applicability
of a certificate.
A Certification Practice Statement (CPS) is a statement of the
practices that a PKI uses to manage the certificates that it issues. The
Operating Authority (usually an individual within the IT unit) is responsible
for preparing and maintaining the CPS. The CPS describes how the
Certificate Policy is interpreted in the context of the system architecture
and operating procedures of the organization.
While a Certificate Policy is defined independently of the specific
details of the operating environment of the PKI, the corresponding CPS is
tailored to the organizational structure, operating procedures, facilities,
and computing environment of the Operating Authority. The use of a
standard structure for Certificate Policy and CPS documents is
recommended to ensure completeness and simplify users’ and other
Certificate Authorities’ assessment of the corresponding degree of
assurance.
Key Management—PKI
The use of PKI enables a secure exchange of digital signatures,
encrypted documents, authentication and authorization, and other
functions in open networks where many communication partners are
involved. PKI has four parts:
Certificate Authority (CA)
Registry Authority (RA) or Local Registry Authorities (LRA)
Directory Service
Time Stamping (as an additional service)
Certificate Authority
The Certificate Authority (CA) is the entity responsible for issuing
and administering the digital certificates. The CA acts as the agent of trust
in the PKI. A CA performs the following main functions:
Issues users with keys/Personal Security Environments (PSEs)
(though sometimes users may generate their own key pair)
Certifies users’ public keys
Publishes users’ certificates
Issues certificate revocation lists (CRLs)
The foundation upon which a PKI is built is trust—in other words
the user community must trust the CA to distribute, revoke, and manage
keys and certificates in such a way as to prevent any security breaches.
As long as users trust the CA and its business processes, they can
trust certificates the CA issues. The CA’s signature in a certificate ensures
that any changes to its contents will be detected. Such certificates can be
distributed publicly, and users retrieving a public key from a certificate
can be assured that the key:
Belongs to the entity specified in the certificate
Can be used safely in the manner for which the CA certified it
Users need to be able to determine the degree of assurance or
trust that can be placed in the authenticity and integrity of the public keys
contained in certificates issued by the CA. The information upon which
such determinations can be made is documented in the Certificate Policy
and the Certification Practice Statement of the CA.
A CA has the following tasks:
Generates the certificate based on a public key; typically a
Trust Center generates the pair of keys on a smart card or a
USB token
Guarantees the uniqueness of the pair of keys and links the
certificate to a particular user
Manages published certificates
Takes part in cross-certification with other CAs
Registration Authority (RA)
The Registration Authority (RA) is responsible for recording and
verifying all information the CA needs. In particular, the RA must check
the user's identity to initiate issuing the certificate at the CA. The RA
is neither a network entity nor does it act online. The RAs are where
users must go to apply for a certificate; verification of the user's
identity is done, for example, by checking the user's identity card.
A RA has two main functions:
Verify the identity and the statements of the claimant
Issue and handle the certificate for the claimant
Directory Service
The directory service has two main functions:
Publish certificates
Publish a Certificate Revocation List or make certificate
status available online via the Online Certificate Status
Protocol (OCSP)
Timestamp Service
Time stamping is a special service. Time stamping confirms the
receipt of digital documents at a specific point in time. The service is used
for contracts or other important documents for which a receipt needs to
be confirmed.
The application of DS requires technical infrastructure in the form of
public/private keys, software that integrates user applications with
the encryption process, and the necessary hardware required for the
operation. DS infrastructures are usually developed for a specific area of
technology. Some companies have developed Public Key Infrastructure
technology. Some companies have developed Public Key Infrastructure
(PKI), which provides solutions for the DS application and security. The
Certifying Authority (CA) who issues a Digital Certificate (DC) to a
customer also works as a service provider for different DS technologies.
The signatures are verified with the help of licensed and audited
test centers all over the world. A leading CA service provider is Verisign
Inc., which primarily works in the area of retail and enterprise services.
Its services include providing PKI infrastructure depending on the
requirements of an enterprise, and also include consulting and training
services. Most of the security services offered by Verisign Inc. are based
on a 128-bit encryption process. Another service provider, GlobalSign Inc.,
creates and manages DCs for signed and sealed e-mail messaging for
secure and confidential e-commerce and m-commerce applications.
In Romania, Transsped and E-Sign are two operators of
Certificate Authorities that issue and sell qualified certificates and
associated applications under the brand of TC TrustCenter (Germany)
for Transsped, or Adacom (Greece) for E-Sign.
The availability of features of online authentication will have a
long-term impact on e-business. The DC issued by a service provider can
be categorized according to internationally recognized classes of
certificates. This categorization depends on the confidence one may have
in the identity given by the verification process. The growth in the number
of applications of DS technology has resulted in the emergence of product
and service differentiation. Some CA service providers focus on high value
transactions such as finance, health care and government transactions,
which require a very high level of security. Other firms provide strong
support to network developers as well as Web server security. With the
rapid development of m-commerce, the applications of DS are being
extended to wireless solutions and m-commerce applications. Products
and services can also differ in the product features and services realized
by differences in the DS technology used. DS vendors provide products
necessary to manage the DC and public key infrastructure. These vendors
develop “keys” and other technical infrastructure such as encryption
software. ‘Smartcard Solutions,’ developed by RSA Securities, Inc., are
equipped with many important features that specify the accountability of
the user in electronic transactions, authenticate the user and secure the
storage of important credentials. Since the implementation of DS
technology requires a highly secure environment, Internet security is also
one of the emerging fields for DS technology. Some companies have
focused on the niche market for the security measures required for the
implementation of digital signatures. Every DS technology should be
flexible enough to allow for modifications as business needs of an
organization change. Another area in DS technology is the development of
plug-ins or add-ins to existing applications. Adobe Systems and Microsoft
are the leading companies in this area, developing add-ins and plug-ins for
their popular software.
In order to understand the use of DS technology in the future, it is
important to study trends in the use of various DS technologies. We found
that most of these technologies fall under the categories of biometrics,
secure data transactions, secure e-messaging, wireless security, secure
data access, and other tailor-made or standardized technologies. Industry
trends show that in most of the cases, secure e-business suites utilize
combinations of various technologies. Such solutions possess a high level
of scalability and customizability. As shown in Table 2.1, these solutions
constituted 62.4% of the total number of applications developed. Further,
most of these applications were developed for the B2B mode of e-business.
However, there are many other applications that utilize certain specific DS
technology. For example, e-messaging technologies and secure data
access technologies each constituted 10.4 % of the total number of
applications developed (Table 2.1). Secure wireless technologies (5.6%)
were yet to gather this pace of development. The distribution of DS
technologies differed significantly across e-business modes. For example,
e-messaging and data security constituted the majority of solutions
developed for the B2C mode of e-business, whereas the greatest number
of applications developed for the B2G and B2B modes were tailor-made
DS technologies. Even though the majority of present application
developments in DS technology are in the B2B mode, B2C applications are
likely to grow at a faster pace in the future. The rapid development of
DS technology applications for m-commerce, health care, and banking
imply potential growth opportunities in the B2C mode of e-business.
Another major impact of DS products on the B2C mode of e-business may
be realized in terms of increasing perceptions of trust in secure online
business transactions. In the B2C mode of e-business, consumers
perceive the term ‘Thawte Certified’ or the appearance of a Verisign logo
on a Web site as a symbol of trust and security. This change in
consumers’ perceptions of online transactions will improve e-business in
the long-term. We found that application developments in DS technology
responded more to certain sectors of the economy compared to other
sectors. The primary reason behind this may be varied needs for secure
business processes in different industries. For example, a security breach
in the health care industry poses different issues and risks compared to a
security breach in the banking and financial services industries. In the
same way, the security needs of a government defense department are
different from the security needs of the wireless infrastructure for
eCommerce. Irrespective of the type of e-business, we found that the
purpose of most applications of DS technologies is to secure business
transactions and processes. Seventy-two applications (58%) dealt
exclusively with the security of transactions, whereas forty applications
(32%) were intended to solve the problem of security as well as
authentication. Only thirteen applications (10%) were developed mainly to
ensure the authenticity of entities involved in transactions. Depending
upon these varied needs across industries, the market responds to the
demand by developing solutions to build a robust and secure online
infrastructure.
These developments will certainly lead to necessary modifications
in the existing legal infrastructure. Further developments in DS
applications and the legal infrastructure in the international arena can
increase the efficiency of global businesses.
TABLE 2.1 – Trends in Use of Various Digital Signature Technologies

Digital Signature Technology | B2G | B2C | B2B | Total Applications Developed
Biometrics technology        |  2  |  1  |  2  | 5 (4.0%)
1.4 European Electronic Signature Initiative
The European Commission has proposed to the European Parliament
and to the Council a Directive (Directive 1999/93/EC of the
European Parliament and of the Council of 13 December 1999 on a
Community Framework for Electronic Signatures) to provide a
common framework for electronic signatures. The Directive covers
electronic signatures used for authentication in general as well as a
particular type of “qualified” electronic signatures, which have legal
equivalence to hand-written signatures. The Directive also identifies
requirements that have to be met by service providers supporting
electronic signatures and requirements for signers and verifiers. These
requirements need to be supported by detailed standards and open
specifications which also meet the requirements of European business, so
that products and services supporting electronic signatures can be known
to provide legally valid signatures – thus furthering the competitiveness of
European business in an international market.
Under the auspices of the ICTSB, European industry and
standardization bodies have launched the European Electronic
Signature Standardization Initiative (EESSI).
EESSI has the objective of analyzing the future needs for
standardization activities in support of the European Directive on
electronic signatures in a coherent manner, particularly in the business
environment. The specifications have been drawn up through
consensus-based activities carried out in CEN/ISSS (the
Information Society Standardization System of the European Committee
for Standardization) and ETSI (the European Telecommunication
Standards Institute). The groups working in each organization have been
open to the participation of all interested parties and their results
reviewed in public workshops during their development. The initial set of
the 10 consensus-based specifications, in the form of CEN Workshop
Agreements (CWAs) and ETSI Technical Specifications (TSs) is now being
published. The initial set of EESSI deliverables is submitted to the
European Commission for consideration as relevant specifications in
relation to the Directive. The EESSI is continuing to work on
enhancements, and on aspects relating to their implementation and
conformity assessment. The main standards issued by EESSI are the
following:
Part 2: Cryptographic Module for CSP Signing Operations – Protection
Profile
CWA 14170: Security Requirements for Signature Creation Systems,
CWA 14171: Procedures for Electronic Signature Verification
CWA 14172: EESSI Conformity Assessment Guidance
Part 1: General
Part 2: Certification Authority Services and Processes
Part 3: Trustworthy Systems Managing Certificates for Electronic
Signatures
Part 4: Signature Creation Applications & Procedures for Signature
Verification
Part 5: Secure Signature Creation Devices (SSCD)
CWA 14168: Secure Signature-Creation Devices, version 'EAL 4',
CWA 14169: Secure Signature-Creation Devices, version 'EAL 4+'
Requirements for CSP
ETSI TR 102 030 Provision of harmonized Trust Service Provider status
information
ETSI TR 102 040 International Harmonization of Policy Requirements for
CAs issuing Certificates
ETSI TS 102 042 Policy requirements for certification authorities issuing
public key certificates
ETSI TS 101 456 Policy requirements for certification authorities issuing
qualified certificates
Qualified Certificate Format (Profile) and Policy
ETSI TS 101 862 Qualified certificate profile
ETSI TR 102 041 Signature Policies Report
ETSI TR XML Format for Signature Policies
Electronic Signature Format
ETSI TS 101 733 Electronic Signature Formats
ETSI TS 101 903 XML Advanced Electronic Signatures (XAdES)
Time-stamping Protocol
ETSI TS 101 861 Time stamping profile
ETSI TS 102 023 Policy requirements for time-stamping authorities
From the point of view of terminology, with great impact on legislation and
regulations, the EU uses the following definitions for the different kinds
of electronic signatures:
ELECTRONIC SIGNATURE means data in electronic form which are
attached to or logically associated with other electronic data and which
serve as a method of authentication;
ADVANCED ELECTRONIC SIGNATURE means an electronic signature
which meets the following requirements:
the transaction through the telecommunication provider), it will not be
possible to categorize such a signature as an advanced one. However,
it is possible to have a link to the signatory using a pseudonym in the
certificate, if the CSP holds the personal data identifying the signatory.
c) "Created using means that the signatory can maintain under his
sole control": This is a requirement for the access control to the Signature
Creation Device (SCDev) containing the signature-creation data (SCD).
The access control has to be implemented in such a way that the
signatory is able, using a certain procedure, to be sure that his/her SCD
and/or SCDev can be utilized only by himself/herself in order to sign data.
This means that the signatory may have to be somehow "active" in
protecting his/her secret data. (Note: with an SSCD, as specified in Annex
III and required for Qualified Electronic Signatures, no activity is
required of the signatory in order to keep his/her SCD secret; he only
has to refrain from disclosing the activation data of the SSCD.)
It should be noted that the requirement for sole control precludes the
use of symmetric cryptography, where the secret key is available both to
the signer and verifier.
ii) Security features:
1) The signing algorithm has to be adequate to the
required security (an algorithm with sufficient strength);
2) The key data has to be adequate to the required
security; in particular, the key length must be secure
against brute-force and other attacks.
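The brute-force point can be made concrete with a rough calculation (the attack rate below is an arbitrary assumption, and the function name is mine):

```python
def years_to_exhaust(key_bits: int, keys_per_second: float) -> float:
    # Worst-case exhaustive search tries all 2**key_bits keys;
    # each additional key bit doubles the attacker's work.
    seconds = 2 ** key_bits / keys_per_second
    return seconds / (365.25 * 24 * 3600)


# Even at a trillion trial decryptions per second, a 128-bit key
# space takes on the order of 10**19 years to exhaust.
print(years_to_exhaust(128, 1e12))
```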
A signature is not part of the substance of a transaction, but rather of
its representation or form. Signing writings serve the following
general purposes:
Evidence: A signature authenticates a writing by identifying
the signer with the signed document. When the signer makes
a mark in a distinctive manner, the writing becomes
attributable to the signer.
Ceremony: The act of signing a document calls to the signer's
attention the legal significance of the signer's act, and thereby
helps prevent inconsiderate engagements;
Approval: In certain contexts defined by law or custom, a
signature expresses the signer’s approval or authorization of
the writing, or the signer’s intention that it has legal effect.
Efficiency and logistics: A signature on a written document
often imparts a sense of clarity and finality to the transaction
and may lessen the subsequent need to inquire beyond the
face of a document. Negotiable instruments, for example, rely
upon formal requirements, including a signature, for their
ability to change hands with ease, rapidity, and minimal
interruption.
Romania, like other European countries, has adopted its
electronic signature legislation (in July 2001).
1.6 Conclusions
In our high-tech world the security of communications has become
increasingly important. Information technology now plays a
fundamental role in our personal lives. This means that all levels of
government want to seek out ways to better deliver services using the
new technologies. In the process of doing this it is becoming evident that
people want easier electronic access to government services. When
citizens come to access a service, it does not matter to them who owns
the medium on which they are seeking it. Whether it is a web site, a
kiosk, a smart card, a terminal at a library or a community access center
offering access to online services, what matters to the average citizen is
that they can have the service. Research has found that the level of
government providing the service does not matter; people want to be able
to access all levels of government seamlessly. A municipality might
operate a kiosk terminal, but the citizen wants to be able to retrieve
services from any level of government from that kiosk.
These recent trends point out the need for digital signatures, a
public key infrastructure and verification and authentication methods. This
course will define and analyze the basic issues and compare the
mathematical, technical and legal approach to digital signatures and
public key infrastructure and the attendant issues.
References:
2. Practical Approaches on Electronic Signatures
Implementations
Cristian TOMA
Abstract: This article depicts practical patterns for digital
signatures. In real applications the RSA algorithm is time-consuming and
requires considerable computational power. In order to save computational
power, the RSA algorithm is used only when strictly necessary; otherwise
it is used in conjunction with the session-key concept. Certificates are
the next items discussed, followed by the new technology of XML
signatures, which is used in a growing number of back-end systems and
protocols.
Using the RSA public-key cryptographic algorithm, it is possible
to obtain authentication – a digital signature – confidentiality,
or both. In practical applications RSA is used mainly for digital
signatures; because RSA is time-consuming, it is used for confidentiality
only on small amounts of bytes such as session keys and challenge keys.
This topic is covered in the following paragraphs.
(Figure – RSA usage patterns; M: message, C: cryptogram, K: key.
Sample 1, Confidentiality: anyone can encrypt message M for B with B's
public key; only B can decrypt it with B's private key.
Sample 2, Authentication (digital signature): A "signs" – authenticates –
message M with A's private key; B verifies with A's public key.
Combined: A signs message M with A's private key, then encrypts the
result with B's public key. B decrypts C2 with B's private key – only B
can transform C2 into C1, because only B knows his private key – then
transforms C1 into M with A's public key, and is thus sure that only A
could have transmitted C2, hence C1 and finally M.)
public key, applies it to message S and obtains back M. It is clear that
message S could have been sent only by A, because only A has his/her
private key, and only A's public key applied to S yields message M. If
the message S can be taken from the transmission channel – corporate
networks or the Internet – then anyone can apply A's public key in order
to obtain message M, and this is perfectly possible because anyone has
access to A's public key. To avoid this inconvenience, practical systems
implement the digital signature as in picture 2.3. Another inconvenience
of RSA is its computational cost, so it is natural to find a way to
encrypt or sign just a small amount of bytes instead of the entire
message.
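The "sign the digest, not the document" idea can be sketched with toy RSA numbers (tiny textbook primes, SHA-256 in place of MD5/SHA-1, and a crude reduction of the digest standing in for real PKCS#1 padding; all names are illustrative):

```python
import hashlib

# Toy RSA parameters (textbook primes; illustration only).
p, q = 61, 53
n, e = p * q, 17
d = pow(e, -1, (p - 1) * (q - 1))


def digest(message: bytes) -> int:
    # Hash first, then reduce into the toy modulus. A real scheme
    # pads the full digest (e.g. PKCS#1) rather than truncating.
    return int.from_bytes(hashlib.sha256(message).digest(), "big") % n


def sign(message: bytes) -> int:
    # RSA is applied only to the short digest, however long M is.
    return pow(digest(message), d, n)


def verify(message: bytes, signature: int) -> bool:
    return pow(signature, e, n) == digest(message)
```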
(Figure 2.3 – Digital signature with hash: the sender passes document M
through a hash function (MD5, SHA-1) to obtain the digest, signs the
digest by RSA with the private key PRIVA stored on A's smart card, and
sends the signature S together with M over the network. The receiver
recomputes the hash of M, obtains A's public key PUBA from the
Certificate Authority, applies RSA to S to recover the signed digest H2,
and compares it with the computed digest – AUTHENTICATION.)
H2. If H1 is the same as H2, that means authentication: user A is the
one who signed the document M. Suppose that someone, an intruder
named X with malicious intentions, gets the package S+M from
the network. Of course X can see the original document M, because
it is sent in clear over the network; X can also compute the digest,
because anyone has access to A's public key. The question is whether
the intruder X can falsify the signature. If X obtains the clear message
M, X can modify it. In order not to raise any suspicion, X has to obtain
the hash. This is an easy task, but X does not have A's private key in
order to sign instead of A. The existing signature cannot simply be
attached to another document, because the signature is on hash H2, and
when the intruder's document is put through the hash function by user B,
a hash H1 different from the hash H2 in the signature will be obtained.
This scheme is very useful for authentication but not for confidentiality.
5. B applies his secret key to MC: MD = (MC^PRIVB) mod nB,
where MD is the decrypted message.
6. The same hash function h as in step 1 is applied to MD:
H1 = h(MD).
7. From the received S, the hash is recovered as follows:
H2 = (S^PUBA) mod nA.
8. If H1 is equal to H2 then h = H1 = H2 and MD is the
same as M; if H1 != H2, something happened on the
communication channel or A tries to deceive B.
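The steps above can be traced end to end with toy numbers (the tiny primes and helper names are mine; real systems use large keys and padded digests):

```python
import hashlib


def toy_keys(p: int, q: int, e: int):
    n = p * q
    d = pow(e, -1, (p - 1) * (q - 1))
    return (e, n), (d, n)                  # (public, private)


A_pub, A_priv = toy_keys(61, 53, 17)       # sender A,   nA = 3233
B_pub, B_priv = toy_keys(89, 97, 5)        # receiver B, nB = 8633


def rsa(key, m: int) -> int:
    exp, mod = key
    return pow(m, exp, mod)


def h(m: int) -> int:
    # Toy digest, reduced below nA so A can sign it.
    return int.from_bytes(hashlib.sha256(str(m).encode()).digest(), "big") % 3233


# Sender side: sign the digest with A's private key, encrypt M for B.
M = 1234                                   # must be < nB
S = rsa(A_priv, h(M))
MC = rsa(B_pub, M)

# Receiver side, steps 5-8:
MD = rsa(B_priv, MC)                       # 5: MD = MC^PRIVB mod nB
H1 = h(MD)                                 # 6: recompute the digest
H2 = rsa(A_pub, S)                         # 7: H2 = S^PUBA mod nA
assert H1 == H2 and MD == M                # 8: authentic and intact
```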
(Figure 2.4 – Authentication + confidentiality: the sender passes
document M through the hash function (MD5, SHA-1) and signs the digest
by RSA with the private key PRIVA from sender A's smart card, obtaining
S; M itself is RSA-encrypted with B's public key PUBB, obtained from the
Certificate Authority, producing MC. Both S and MC travel over the
network. The receiver decrypts MC with the private key PRIVB from
receiver B's smart card, recomputes the digest H1 from the recovered M,
recovers H2 from S with A's public key PUBA, and if H1 == H2 the
authentication is correct, with confidentiality ensured because only B
can decrypt MC.)
111
This scheme (fig. 2.4) is perfect for authentication and confidentiality but
it is time consuming. The best approach for authentication and
confidentiality is to use session keys, and this model is depicted in figure
2.5:
[Figure: Step 1 – Session Key Establishment. A random key generator produces keys (128, 192, 256 bits) for symmetric algorithms (Rijndael). The session key K1 is signed by A with PRIVA (becoming K2) and encrypted with B's public key PUBB, taken from a Certification Authority (CA) certificate (becoming K3); K3 goes from A over the network to B, who checks the signature with PUBA and recovers K1. Step 2: M is encrypted with K1 into C, sent over the network, and decrypted by B with the same K1.]
Fig. 2.5. Electronic signing and encryption of the session key and the
confidentiality of an electronic document using a hybrid cryptographic
system
Man-in-the-middle attacks cannot be performed on this design. The
principle of the design in figure 2.5 is simple: if A wishes to transmit a
message M to B, then they have to agree on a common secret key.
Establishing the common key is done by A, possibly using a hash function,
to generate a key K1 of 128 bits length. This session key K1 is very small
compared with the document M, so it is no problem for A to sign the
session key K1 (authenticity – A applies his private key and obtains K2)
and encrypt it (confidentiality – A applies B's public key and obtains K3).
B receives K3 over the network and applies B's private key and then A's
public key in order to obtain K1. It is clear that the session key K1 was
sent by A and only A, and can be decrypted by B and only B. Note the
necessity of establishing a protocol between A and B for exchanging
information concerning: the length of the session key, the names of the
symmetric and asymmetric cryptographic systems used, etc.
After users A and B both have the session key K1, A can use a
symmetric cryptosystem and encrypt the message with key K1, using the
Rijndael algorithm for instance.
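A minimal sketch of this hybrid scheme, under the same toy assumptions as before (tiny RSA key pairs; the SHA-256 keystream below stands in for Rijndael and is not a real cipher):

```python
import hashlib

# Toy RSA pairs (tiny, insecure; for illustration only)
nA, PUBA, PRIVA = 3233, 17, 2753
nB, PUBB, PRIVB = 4189, 13, 937

# --- A: establish and protect the session key K1 ---
K1 = 200                      # toy "128-bit" session key (a small int here)
K2 = pow(K1, PRIVA, nA)       # signed by A (authenticity)
K3 = pow(K2, PUBB, nB)        # encrypted for B (confidentiality); works since nA < nB

# --- B: recover K1 ---
K2_rx = pow(K3, PRIVB, nB)    # B's private key undoes the encryption
K1_rx = pow(K2_rx, PUBA, nA)  # A's public key undoes the signature
assert K1_rx == K1

# --- Bulk encryption of M with the shared key ---
# Stand-in for Rijndael/AES: a keystream derived from K1 (toy construction)
def keystream_xor(key: int, data: bytes) -> bytes:
    out = bytearray()
    for i, b in enumerate(data):
        pad = hashlib.sha256(f"{key}:{i}".encode()).digest()[0]
        out.append(b ^ pad)
    return bytes(out)

M = b"confidential document"
C = keystream_xor(K1, M)              # A -> NET -> B
assert keystream_xor(K1_rx, C) == M   # B decrypts with the same K1
```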
It is observed that in all the designs previously presented (figure
2.3, 2.4 and 2.5), the digital signature fulfills the following conditions:
AUTHENTICITY, because the signature is verified only with the
public key of the message sender;
the signature is NON-FAKEABLE, because only the sender
knows his own secret key;
the signature is NON-REUSABLE, because it depends on the
content of the document that is encrypted (or on the generated
session key);
the signature is NON-ALTERABLE, because any alteration of the
content of the message (document) makes the signature non-
verifiable with the public key of the sender;
the signature is NON-REPUDIABLE, because the receiver of the
message (document) does not need the help of the sender to
verify the signature;
The infrastructures based on cryptography with asymmetric
(public) keys are essential for the viability of message transactions and
communications, especially in networks and generally on the Internet.
The public key infrastructure (PKI – Public Key Infrastructure) consists of
the multitude of services required when public-key encryption
technologies are used on a large scale. These services are of a
technological as well as a legal nature, and their existence is necessary in
order to permit the exploitation of public key technologies at their full
capacity.
The main elements of the public keys infrastructures are:
Digital certificates;
Certification Authorities (CA);
Management facilities (protocols) for certificates.
[Figure: structure of a digital certificate – Version, Serial Number, Crypto Algorithm for signature, the name of the CA that has issued the certificate, Validity Period, Subject Name, Subject Public Key, unique ID of the issuing CA (optional), unique ID of the subject (optional), Extensions (optional); the Private Key of the CA is used to generate the Digital Signature over these fields.]
While the previous versions ensured support only for the X.500
naming system, X.509 v3 accepts a large variety of name forms,
including e-mail addresses and URLs.
A system based on public key certificates presumes the existence
of a Certification Authority that issues certificates for a certain group of
owners of key pairs (public and private). Each certificate includes the
value of the public key and information that uniquely identifies the
certificate's subject (which may be a private person or a company, an
application, a device or another entity that holds the secret key
corresponding to the public key included in the certificate). The certificate
represents a liaison impossible to falsify between a public key and a
certain attribute of its owner. The certificate is digitally signed by a
Certification Authority (certified by the government), which thus confirms
the subject's identity. Once the set of certificates is established, a user of
the public key infrastructure (PKI) may obtain the public key of any user
certified by that Certification Authority, simply by getting that user's
certificate and extracting the desired public key from it.
The systems for obtaining public keys based on certificates are
simple and cheap to implement, due to an important characteristic of
digital certificates: the certificates may be distributed without requiring
protection through the usual security services (authentication, integrity
and confidentiality). This is because the public key need not be kept
secret; thus, the digital certificate that includes it is not secret. There are
no requirements for authentication or integrity either, because the
certificate protects itself (the digital signature of the CA in the certificate
ensures its authenticity, as well as its integrity).
Consequently, digital certificates may be distributed and moved
over unsecured communication links: unsecured file servers, unsecured
directory systems and/or communication protocols that do not ensure
security.
o request for cross-certificates – when a certification
authority certifies another certification authority;
o updating some cross-certificates.
Publishing of a certificate or list of revoked certificates:
involves storing a certificate or a list of revoked certificates
where everybody may have access (for instance, such a
protocol is LDAP).
Restoration of a pair of keys: when an end entity loses its
private key and wishes to restore it, provided RA or
CA previously saved this key.
Revoking a certificate: when an end entity wishes to revoke
(cancel) a certificate, an operation that involves a revocation
request and implicitly the update of the Certificate
Revocation List (CRL).
Certification means
[Figure: a certification chain. Digital Certificate 1: Subject=AC2, public key of AC2, Issuer=Certification Authority AC1. Digital Certificate 2: Subject=AC3, public key of AC3, Issuer=Certification Authority AC2. Digital Certificate 3: Subject=User Y, public key of Y, Issuer=Certification Authority AC3. User X obtains the public key of Y using this certification chain: X knows where to find Digital Certificate 1 and, starting with it, finds out the public key of Y.]
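The chain walk in the figure can be sketched as follows (the certificates, the key strings, and the hash-based toy_sig placeholder standing in for the CA's RSA signature are all illustrative assumptions):

```python
import hashlib

def toy_sig(issuer_key: str, subject: str, subject_key: str) -> str:
    # Placeholder for the CA's RSA signature: in reality this is created with
    # the issuer's PRIVATE key and checked with the issuer's public key.
    return hashlib.sha256(f"{issuer_key}|{subject}|{subject_key}".encode()).hexdigest()

def cert(issuer, issuer_key, subject, subject_key):
    return {"issuer": issuer, "subject": subject, "subject_key": subject_key,
            "sig": toy_sig(issuer_key, subject, subject_key)}

KEYS = {"AC1": "pub-AC1", "AC2": "pub-AC2", "AC3": "pub-AC3", "Y": "pub-Y"}
chain = [cert("AC1", KEYS["AC1"], "AC2", KEYS["AC2"]),   # Digital Certificate 1
         cert("AC2", KEYS["AC2"], "AC3", KEYS["AC3"]),   # Digital Certificate 2
         cert("AC3", KEYS["AC3"], "Y",   KEYS["Y"])]     # Digital Certificate 3

def walk_chain(chain, trusted_key):
    """X trusts AC1's public key and follows the chain down to user Y's key."""
    key = trusted_key
    for c in chain:
        if c["sig"] != toy_sig(key, c["subject"], c["subject_key"]):
            raise ValueError(f"bad signature on certificate for {c['subject']}")
        key = c["subject_key"]          # the next link is verified with this key
    return key

assert walk_chain(chain, KEYS["AC1"]) == "pub-Y"
```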
Security technologies are in theory sufficient for securing business
transactions on the Web, but common deployments of security
technologies in web services are still insufficient. For instance, Secure
Sockets Layer (SSL) provides for the secure interchange of sensitive data
between a client and a server, but once received, the data is all too
frequently left unprotected on the server. In fact, SSL protects the data
at one point in its travels, but not at the back-end. If the danger of
sniffing IP packets in transit in order to obtain a single user's credit card
number is now minimized through SSL, there is another problem: what
about the security of the data in the back-end database? The database
contains thousands of credit card numbers and we should be able to
ensure security at this level too. This problem is aggravated in the case
where a message is routed from server to server. If the data itself were
encrypted, as opposed to just its transport, it would help reduce the
incidents of unencrypted data left vulnerable on public servers. A sample
XML file with digital signatures is depicted in table 2.2.
<Signature xmlns="http://www.w3.org/2000/09/xmldsig#">
<SignedInfo Id="foo">
<CanonicalizationMethod
Algorithm="http://www.w3.org/TR/2001/REC-xml-c14n-20010315"/>
<SignatureMethod Algorithm="http://www.w3.org/2000/09/xmldsig#dsa-sha1" />
<Reference URI="http://www.s.com/index.html">
<DigestMethod Algorithm="http://www.w3.org/2000/09/xmldsig#sha1" />
<DigestValue>b6lwx3rvABC0vFt32p4NbeVu8nk=</DigestValue>
</Reference>
<Reference URI="http://www.s.com/logo.gif">
<DigestMethod Algorithm="http://www.w3.org/2000/09/xmldsig#sha1"/>
<DigestValue>CrDSaq2Ita6heik5/A8Q38GEq32=</DigestValue>
</Reference>
</SignedInfo>
<SignatureValue>FC0D…AE=</SignatureValue>
<KeyInfo>
<X509Data>
<X509SubjectName>CN=ASE,O=Software Development Research
Lab.,ST=BUCHAREST,C=RO</X509SubjectName>
<X509Certificate>
MIID5jCCA0+gA...lVN
</X509Certificate>
</X509Data>
</KeyInfo>
</Signature>
The steps for creating an XML digital signature like the one in table 2.2 are:
Determine which resources are to be signed. Each resource is
identified in URI (Uniform Resource Identifier) form. For
instance, we will prepare to sign two resources:
o "http://www.s.com/index.html" - reference to an HTML
page on the Web
o "http://www.s.com/logo.gif" - reference to a GIF image on
the Web
Calculate the digest of each resource. In XML signatures, each
referenced resource is specified through a <Reference> element
and its digest (calculated on the identified resource and not the
<Reference> element itself) is placed in a <DigestValue> child
element like in table 2.2. The <DigestMethod> element represents
the algorithm used to calculate the digest.
Enclose the Reference elements. Collect the <Reference> elements
(with their associated digests) within a <SignedInfo> element. The
<CanonicalizationMethod> element indicates the algorithm that was
used to canonicalize the <SignedInfo> element. Different data with
the same XML information set may have different textual
representations, differing as to white-space or line breaks. To
obtain accurate verification results, XML information sets must first
be canonicalized before extracting their bit representation for
signature processing. The <SignatureMethod> element identifies
the algorithm used to produce the signature value.
Signing. Calculate the digest of the <SignedInfo> element, sign
that digest and put the signature value in a <SignatureValue>
element.
Adding key information. If key information is to be included, it will
be placed in a <KeyInfo> element. Here the key information
contains the X.509 certificate of the sender, which includes
the public key needed for signature verification.
Enclose in a Signature element. Place the <SignedInfo>,
<SignatureValue>, and <KeyInfo> elements into a <Signature>
element. The <Signature> element comprises the XML signature.
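The digest step can be reproduced directly: for the xmldsig#sha1 <DigestMethod>, the <DigestValue> text is the base64-encoded SHA-1 digest of the referenced resource (the page content below is a hypothetical stand-in for the real index.html):

```python
import base64, hashlib

def digest_value(resource_bytes: bytes) -> str:
    """Compute the <DigestValue> text for one <Reference>: the SHA-1 digest
    of the referenced resource, base64-encoded."""
    return base64.b64encode(hashlib.sha1(resource_bytes).digest()).decode()

# Hypothetical content standing in for http://www.s.com/index.html
page = b"<html><body>Hello</body></html>"
dv = digest_value(page)
# A 20-byte SHA-1 digest always encodes to 28 base64 characters,
# matching the <DigestValue> strings in table 2.2.
assert len(dv) == 28
```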
2.4 Conclusions
The article has apparently three separate parts: electronic signature
– authentication and confidentiality, certificates and public key
infrastructures, and XML signatures. But these three parts are
strongly interconnected. For instance, as a Chief Security Officer in a
corporation, it is easy to foresee that if you need to implement an
electronic signature policy over document management, then you need
a public key infrastructure based on digital certificates and back-end
solutions such as XML signatures.
The solutions required nowadays involve many intelligent
combinations of new software technologies with proper hardware
architecture support. Even though the hardware architectures are not
discussed in this paper, a person who knows the practical issues, even
without being a Chief Security Officer, should raise questions about
concurrency and multiprocessing in certificate route access, the hardware
security of stored certificates, the speed and cost of database access, and
so on. These goals can be achieved only through solid theoretical and
practical work on electronic signatures and related technologies.
References:
[IVAN02] Ion Ivan, Paul Pocatilu, Marius Popa, Cristian Toma, "The
Electronic Signature and Data Security in the Electronic
Commerce", Informatica Economică, vol. VI, no. 3,
Bucharest, 2002, pp. 105–110.
[PATR05] V. Patriciu, I. Bica, M. Pietrosanu, I. Priescu, "Semnături
electronice și securitate informatică", All Publishing House, 2005.
[PATR01] V. Patriciu, I. Bica, M. Pietrosanu, "Securitatea comerțului
electronic", All Publishing House, 2001.
[PATR99] V. Patriciu, S. Patriciu, I. Vasiu, "Internet-ul și dreptul", All
Publishing House, 1999.
[PATR98] Victor Valeriu Patriciu, Ion Bica, Monica Ene-Pietroseanu,
"Securitatea Informatică în UNIX și Internet", Tehnică
Publishing House, Bucharest, 1998.
[PATR94] Victor Valeriu Patriciu, "Criptografia și securitatea rețelelor de
calculatoare cu aplicații în C și Pascal", Tehnică Publishing
House, Bucharest, 1994.
[RHOU00] R. Housley, "Planning for PKI", John Wiley & Sons, 2000.
[SCHN96] Bruce Schneier, "Applied Cryptography, 2nd Edition: Protocols,
Algorithms, and Source Code in C", John Wiley & Sons,
New York, 1996.
[STIN02] Douglas Stinson, "Cryptography – Theory and Practice", 2nd
Edition, Chapman & Hall/CRC, New York, 2002.
Security Standards and Protocols
Module 3 – Security Standards and
Protocols
The International Telecommunication Union, ITU-T (formerly known
as CCITT), is a multinational union that provides standards for
telecommunication equipment and systems. ITU-T is responsible for the
standardization of elements such as the X.500 directory, X.509
certificates and Distinguished Names.
Distinguished names are the standard form of naming. A distinguished
name is comprised of one or more relative distinguished names, and each
relative distinguished name is comprised of one or more attribute-value
assertions. Each attribute-value assertion consists of an attribute identifier
and its corresponding value information.
Distinguished names were intended to identify entities in the X.500
directory tree. A relative distinguished name is the path from one node to
a subordinate node. The entire distinguished name traverses a path from
the root of the tree to an end node that represents a particular entity, for
example, “CN=John Doe, O=ABC, C=US”. A goal of the directory was to
provide an infrastructure to uniquely name every communications entity
everywhere (hence the ‘‘distinguished’’ in ‘‘distinguished name’’). As a
result of the directory’s goals, names in X.509 certificates are perhaps
more complex than one might like (for example, compared to an e-mail
address). Nevertheless, for business applications, distinguished names are
worth the complexity, as they are closely coupled with legal name
registration procedures; this is something simple names, such as e-mail
addresses, do not offer.
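The decomposition of a distinguished name into relative distinguished names and attribute-value assertions can be sketched as follows (a deliberately simplified parser; real DNs also allow escaped commas, multi-valued RDNs, and encoding rules that this ignores):

```python
def parse_dn(dn: str):
    """Split a distinguished name into (attribute, value) assertions,
    one per relative distinguished name (simplified)."""
    avas = []
    for rdn in dn.split(","):
        attr, _, value = rdn.strip().partition("=")
        avas.append((attr, value))
    return avas

# The example DN from the text:
assert parse_dn("CN=John Doe, O=ABC, C=US") == [
    ("CN", "John Doe"), ("O", "ABC"), ("C", "US")]
```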
ITU-T Recommendation X.400, also known as the Message
Handling System (MHS), is one of the two standard e-mail architectures
used for providing e-mail services and interconnecting proprietary e-mail
systems. The other is the Simple Mail Transfer Protocol (SMTP) used by
the Internet. MHS allows e-mail and other store-and-forward message
transferring such as Electronic business Data Interchange (EDI) and voice
messaging. The MHS and Internet mail protocols are different but based
on similar underlying architectural models. The noteworthy fact of MHS is
that it has supported secure messaging since 1988 (though it has not
been widely deployed in practice). The MHS message structure is similar
to the MIME message structure; it has both a header and a body. The
body can be broken up into multiple parts, with each part being encoded
differently. For example, one part of the body may be text, the next part
a picture, and a third part encrypted information.
ITU-T Recommendation X.509 specifies the authentication service
for X.500 directories, as well as the widely adopted X.509 certificate
syntax. The initial version of X.509 was published in 1988, version 2 was
published in 1993, and version 3 was proposed in 1994 and published in
1995. Version 3 addresses some of the security concerns and limited
flexibility that were issues in versions 1 and 2. Directory authentication in
X.509 can be carried out using either secret-key techniques or public-key
techniques. The latter is based on public-key certificates. The standard
does not specify a particular cryptographic algorithm, although an
informative annex of the standard describes the RSA algorithm.
An X.509 certificate consists of the following fields:
version
serial number
signature algorithm ID
issuer name
validity period
subject (user) name
subject public key information
issuer unique identifier (version 2 and 3 only)
subject unique identifier (version 2 and 3 only)
extensions (version 3 only)
signature on the above fields
The version 3 extensions field can convey additional information beyond
just the key and name binding. Standard extensions include subject and
issuer attributes, certification policy information, and key usage
restrictions, among others.
X.509 also defines syntax for certificate revocation lists (CRLs).
The X.509 standard is supported by a number of protocols, including PKCS
and SSL.
IEEE P1363
ISO standards
algorithms. Registering a cryptographic algorithm results in a unique
identifier being assigned to it. The registration is achieved via a single
organization called the registration authority. The registration authority
does not evaluate or make any judgment on the quality of the protection
provided.
For more information on ISO, consult their official web site
http://www.iso.ch.
ANSI X9 standards
The data keys are used for bulk encryption and are changed on a
per-session or per-day basis. New data keys are encrypted with the key-
encrypting keys and distributed to the users. The key-encrypting keys are
changed periodically and encrypted with the master key. The master keys
are changed less often but are always distributed manually in a very
secure manner.
ANSI X9.17 defines a format for messages to establish new keys
and replace old ones called CSM (Cryptographic Service Messages). ANSI
X9.17 also defines two-key triple-DES encryption as a method by which
keys can be distributed. ANSI X9.17 is gradually being supplemented by
public-key techniques such as Diffie-Hellman encryption.
One of the major limitations of ANSI X9.17 is the inefficiency of
communicating in a large system since each pair of terminal systems that
need to communicate with each other will need to have a common master
key. To resolve this problem, ANSI X9.28 was developed to support the
distribution of keys between terminal systems that do not share a
common key center. The protocol defines a multiple-center group as two
or more key centers that implement this standard. Any member of the
multiple-center group is able to exchange keys with any other member.
ANSI X9.30 is the United States financial industry standard for
digital signatures based on the federal Digital Signature Algorithm (DSA),
and ANSI X9.31 is the counterpart standard for digital signatures based
on the RSA algorithm. ANSI X9.30 requires the SHA-1 hash algorithm;
ANSI X9.31 requires the MDC-2 hash algorithm. A related
document, X9.57, covers certificate management.
ANSI X9.42 is a draft standard for key agreement based on the
Diffie-Hellman algorithm, and ANSI X9.44 is a draft standard for key
transport based on the RSA algorithm. The former is intended to specify
techniques for deriving a shared secret key; techniques currently being
considered include basic Diffie-Hellman encryption, authenticated Diffie-
Hellman encryption, and the MQV protocols. Some work to unify the
various approaches is currently in progress. ANSI X9.44 will specify
techniques for transporting a secret key with the RSA algorithm. It is
currently based on IBM’s Optimal Asymmetric Encryption Padding, a
‘‘provably secure’’ padding technique related to work by Bellare and
Rogaway.
ANSI X9.42 was previously part of ANSI X9.30, and ANSI X9.44
was previously part of ANSI X9.31.
PKCS
The published standards are PKCS #1, #3, #5, #7, #8, #9, #10, #11,
#12, and #15; PKCS #13 and #14 are currently being developed.
PKCS includes both algorithm-specific and algorithm-independent
implementation standards. Many algorithms are supported, including RSA
and Diffie-Hellman key exchange; however, only the latter two are
specifically detailed. PKCS also defines an algorithm-independent syntax
for digital signatures, digital envelopes, and extended certificates; this
enables someone implementing any cryptographic algorithm whatsoever
to conform to a standard syntax, and thus achieve interoperability.
The following are the Public-Key Cryptography Standards (PKCS):
PKCS #1 defines mechanisms for encrypting and signing data
using the RSA public-key cryptosystem.
PKCS #3 defines a Diffie-Hellman key agreement protocol.
PKCS #5 describes a method for encrypting a string with a
secret key derived from a password.
PKCS #6 is being phased out in favor of version 3 of X.509.
PKCS #7 defines a general syntax for messages that include
cryptographic enhancements such as digital signatures and
encryption.
PKCS #8 describes a format for private key information. This
information includes a private key for some public-key
algorithm, and optionally a set of attributes.
PKCS #9 defines selected attribute types for use in the other
PKCS standards.
PKCS #10 describes syntax for certification requests.
PKCS #11 defines a technology-independent programming
interface, called Cryptoki, for cryptographic devices such as
smart cards and PCMCIA cards.
PKCS #12 specifies a portable format for storing or
transporting a user’s private keys, certificates, miscellaneous
secrets, etc.
PKCS #13 is intended to define mechanisms for encrypting and
signing data using Elliptic Curve Cryptography.
PKCS #14 is currently in development and covers pseudo-
random number generation.
PKCS #15 is a complement to PKCS #11 giving a standard for
the format of cryptographic credentials stored on cryptographic
tokens.
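As a concrete instance of the PKCS #5 item above, the password-based key derivation it standardizes (PBKDF2, in PKCS #5 v2.0) is available directly in Python's standard library; the password, salt size, and iteration count below are arbitrary example choices:

```python
import hashlib, os

# PKCS #5 v2.0 defines PBKDF2; hashlib exposes it directly.
password = b"correct horse battery staple"
salt = os.urandom(16)          # random per-password salt
key = hashlib.pbkdf2_hmac("sha256", password, salt, 100_000, dklen=32)
assert len(key) == 32          # a 256-bit key derived from the password

# The same password + salt + iteration count always yields the same key:
key2 = hashlib.pbkdf2_hmac("sha256", password, salt, 100_000, dklen=32)
assert key == key2
```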
IETF standards
1.2 Authentication protocols
Authentication is the verification of a claimed identity of a user,
process, or device. Other security measures depend upon verifying
the identity of the sender and receiver of information. Authorization
grants privileges based upon identity. Audit trails would not provide
accountability without authentication. Confidentiality and integrity are
broken if you can’t reliably differentiate an authorized entity from an
unauthorized entity.
The level of authentication required for a system is determined by
the security needs that an organization has placed on it. Public Web
servers may allow anonymous or guest access to information. Financial
transactions could require strong authentication. Strong authentication
requires at least two factors of identity. Authentication factors are:
What a person knows. Passwords and personal identification
numbers (PINs) are examples of what a person knows.
Passwords may be reusable or one-time use. S/Key is an
example of a one-time password system.
What a person has. Hardware or software tokens are examples
of what a person has. Smart cards, SecureID, CRYPTOCard,
and SafeWord are examples of tokens.
What a person is. Biometric authentication is an example of
what a person is, because identification is based upon some
physical attributes of a person. Biometric systems include palm
scan, hand geometry, iris scan, retina pattern, fingerprint,
voiceprint, facial recognition, and signature dynamics systems.
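The S/Key one-time password system mentioned above can be sketched as a hash chain (the seed value and chain length are arbitrary; real S/Key per RFC 1760 folds an MD4 hash to 64 bits, so SHA-1 here is a simplified stand-in):

```python
import hashlib

def h(x: bytes) -> bytes:
    return hashlib.sha1(x).digest()

# The client keeps a secret seed; the server stores only h^n(seed).
# Each login reveals the previous link of the chain, which is then consumed.
def make_chain(seed: bytes, n: int):
    out = [seed]
    for _ in range(n):
        out.append(h(out[-1]))
    return out                       # out[i] == h^i(seed)

chain = make_chain(b"secret-seed", 100)
server_state = chain[100]            # server initialized with h^100(seed)

def login(otp: bytes, server_state: bytes):
    """Accept if hashing the offered one-time password yields the stored value."""
    if h(otp) == server_state:
        return True, otp             # server now stores the revealed link
    return False, server_state

ok, server_state = login(chain[99], server_state)
assert ok
ok, _ = login(chain[99], server_state)   # replaying a used OTP fails
assert not ok
```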
Kerberos is designed to address the problem of authentication in a
network of slightly trusted client systems. Kerberos uses dedicated
authentication servers which can be hosted on machines physically
distinct from any other network services, such as file or print servers. The
authentication servers possess secret keys for every user and server in
the network. Kerberos is not a public-key system; its primary
cryptosystem is DES.
When a user logs in, the client transmits the username to the
authentication server, along with the identity of the service the user
desires to connect to, for example a fileserver. The authentication server
constructs a ticket, which contains a randomly generated session key,
encrypted with the fileserver's secret key, and sends it to the client as
part of its credentials, which includes the session key encrypted with the
client's secret key. If the user typed the right password, then the client
can decrypt the session key, present the ticket to the fileserver, and use
the shared secret session key to communicate between them. Tickets are
timestamped, and typically have an expiration time on the order of a few
hours.
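The ticket construction can be sketched like this (a toy model: the XOR keystream stands in for DES, the keys and names are invented, and real tickets also carry the client identity, addresses, and timestamps):

```python
import hashlib, os

def toy_encrypt(key: bytes, data: bytes) -> bytes:
    # Stand-in for DES: XOR with a SHA-256 keystream (illustration only).
    stream = b""
    counter = 0
    while len(stream) < len(data):
        stream += hashlib.sha256(key + counter.to_bytes(4, "big")).digest()
        counter += 1
    return bytes(a ^ b for a, b in zip(data, stream))

toy_decrypt = toy_encrypt     # XOR with a keystream is its own inverse

# Secret keys the authentication server shares with each principal
user_key = hashlib.sha256(b"password-of-alice").digest()   # derived from password
fileserver_key = os.urandom(32)

# Authentication server builds the credentials
session_key = os.urandom(32)
ticket = toy_encrypt(fileserver_key, session_key)          # only fileserver can read
credentials = toy_encrypt(user_key, session_key)           # only the user can read

# Client: typing the right password yields user_key, revealing the session key
client_session = toy_decrypt(hashlib.sha256(b"password-of-alice").digest(), credentials)
# Fileserver: decrypts the presented ticket with its own secret key
server_session = toy_decrypt(fileserver_key, ticket)
assert client_session == server_session == session_key
```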
In practice, the load on the authentication server is further reduced
by using a ticket-granting server (TGS). The first service requested by the
user is typically the TGS, which then grants additional tickets for
additional servers. Thus, the passwords are localized on the
authentication server, while the trust relationships are maintained by the
TGS.
Kerberos also supports realms, a management domain roughly
analogous to a Windows NT domain. Cross-realm authorizations can be
maintained by establishing an inter-realm key between two TGSs,
allowing each one to issue tickets valid on the other realm's TGS.
Network layer security can be applied to secure traffic for all
applications or transport protocols in the layers above. Applications
do not need to be modified since they communicate with the
transport layer above. IPSec is a network layer security protocol primarily
used for implementing Virtual Private Networks (VPN).
IPSec
IPSec protocols can supply access control, authentication, data integrity,
and confidentiality for each IP packet between two participating network
nodes. IPSec can be used between two hosts (including clients), a
gateway and a host, or two gateways. No modification of network
hardware or software is required to route IPSec. Applications and upper
level protocols can be used unchanged.
IPSec adds two security protocols to IP, Authentication Header (AH)
and Encapsulating Security Payload (ESP). AH provides connectionless
integrity, data origin authentication, and antireplay service for the IP
packet. AH does not encrypt the data, but any modification of the data
would be detected. ESP provides confidentiality through the encryption of
the payload. Access control is provided through the use and management
of keys to control participation in traffic flows.
IPSec was designed to be flexible, so different security needs could
be accommodated. The security services can be tailored to the particular
needs of each connection by using AH or ESP separately for their
individual functions, or combining the protocols to provide the full range
of protection offered by IPSec. Multiple cryptographic algorithms are
supported.
A Security Association (SA) forms an agreement between two
systems participating in an IPSec connection. An SA represents a simplex
connection to provide a security service using a selected policy and keys,
between two nodes. A Security Parameter Index (SPI), an IP destination
address, and a protocol identifier are used to identify a particular SA. The
SPI is an arbitrary 32-bit value selected by the destination system that
uniquely identifies a particular Security Association among several
associations that may exist on a particular node. The protocol identifier
can indicate either AH or ESP, but not both. Separate SAs are created for
each protocol, and for each direction between systems. If two systems
were using AH and ESP in both directions, they would form four SAs.
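The SA identification triple, and the four-SA count for bidirectional AH+ESP, can be written down directly (the addresses and SPI values below are invented):

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class SecurityAssociation:
    spi: int            # arbitrary 32-bit value chosen by the destination system
    dst_addr: str       # IP destination address
    protocol: str       # "AH" or "ESP" -- one protocol per SA, never both

# Two hosts using AH and ESP in both directions form four SAs:
sas = {
    SecurityAssociation(0x10000001, "10.0.0.2", "AH"),
    SecurityAssociation(0x10000002, "10.0.0.2", "ESP"),
    SecurityAssociation(0x20000001, "10.0.0.1", "AH"),
    SecurityAssociation(0x20000002, "10.0.0.1", "ESP"),
}
assert len(sas) == 4
```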
Each protocol supports a transport mode and a tunnel mode of
operation. The transport mode is between two hosts. These hosts are the
endpoints for the cryptographic functions being used. Tunnel mode is an
IP tunnel, and is used whenever either end of the SA is a security gateway.
A security gateway is an intermediate system, such as a router or firewall,
that implements IPSec protocols. A Security Association between a host
and a security gateway must use tunnel mode. If the connection traffic is
destined for the gateway itself, such as management traffic, then the
gateway is treated as a host, because it is the endpoint of the
communication.
In transport mode, the AH or ESP header is inserted after the IP
header, but before any upper layer protocol headers. AH authenticates the
original IP header. AH does not protect the fields that are modified in the
course of routing IP packets. ESP protects only what comes after the ESP
header. If the security policy between two nodes requires a combination
of security services, the AH header appears first after the IP header,
followed by the ESP header. This combination of Security Associations is
called an SA bundle.
In tunnel mode, the original IP header and payload are
encapsulated by the IPSec protocols. A new IP header that specifies the
IPSec tunnel destination is prepended to the packet. The original IP
header and its payload are protected by the AH or ESP headers. AH offers
some protection for the entire packet. AH does not protect the fields that
are modified in the course of routing IP packets between the IPSec tunnel
endpoints, but it does completely protect the original IP header.
Key management is another major component of IPSec. Manual
techniques are allowed in the IPSec standard, and might be acceptable for
configuring one or two gateways, but typing in keys and data is not
practical in most environments. The Internet Key Exchange (IKE) provides
automated, bidirectional SA management, key generation, and key
management. IKE negotiates in two phases. Phase 1 negotiates a secure,
authenticated channel over which the two systems can communicate for
further negotiations. They agree on the encryption algorithm, hash
algorithm, authentication method, and Diffie-Hellman group to exchange
keys and information. A single phase 1 association can be used for
multiple phase 2 negotiations. Phase 2 negotiates the services that define
the SAs used by IPSec. They agree on IPSec protocol, hash algorithm, and
encryption algorithm. Multiple SAs will result from phase 2 negotiations.
An SA is created for the inbound and the outbound direction of each protocol used.
A common use of IPSec is the construction of a Virtual Private Network
(VPN), where multiple segments of a private network are linked over a
public network using encrypted tunnels. This allows applications on the
private network to communicate securely without any local cryptographic
support, since the VPN routers perform the encryption and decryption.
Transport layer security is directed at providing process-to-process
security between hosts. Most schemes are designed for TCP to
provide reliable, connection-oriented communication. Many transport
layer security mechanisms require changes in applications to access the
security benefits. The secure applications are replacements for standard
unsecure applications and use different ports. SSL/TLS is the most
common type of transport layer security protocol.
SSL/TLS
SSL is a transport-level protocol that provides reliable end-to-end
security. SSL can secure a session from the point of origin to its final
destination. SSL addresses the security between two communicating
entities. This could include communication between a Web browser and a
Web server, an e-mail application and a mail server, or even
server-to-server communication channels.
SSL is a connection-oriented protocol that requires both the
application and server to be SSL-aware. If SSL is required on a server,
applications that are not SSL-capable will not be able to communicate
with that server.
SSL provides security services including privacy, authentication,
and message integrity. SSL provides message integrity through the use of
a security check known as a message authentication code (MAC). The
MAC ensures that encrypted sessions are not tampered with in transit.
SSL provides server authentication using public key encryption
technology, and is optionally capable of authenticating clients by
requesting client-side digital certificates. In practice, client certificates are
not widely deployed because they are not easily portable between
machines, they are easily lost or destroyed, and they have been generally
problematic to deploy in the real world. Many Web sites have found that
the combination of SSL used with a username and password has provided
adequate security for most purposes.
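The MAC mechanism described above can be illustrated with Python's standard hmac module. This is a minimal sketch: the key and message are invented for illustration, and it is not the actual TLS record-layer MAC construction.

```python
import hmac
import hashlib

def mac_tag(key: bytes, message: bytes) -> bytes:
    """Compute an HMAC-SHA256 tag over a message."""
    return hmac.new(key, message, hashlib.sha256).digest()

def verify(key: bytes, message: bytes, tag: bytes) -> bool:
    """Constant-time comparison guards against timing attacks."""
    return hmac.compare_digest(mac_tag(key, message), tag)

key = b"shared-session-key"            # illustrative value
msg = b"GET /account HTTP/1.1"
tag = mac_tag(key, msg)
assert verify(key, msg, tag)            # untampered message verifies
assert not verify(key, msg + b"x", tag) # any in-transit change is detected
```

The constant-time comparison matters: a naive `==` on tags can leak how many leading bytes matched.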
Transport Layer Security (TLS) is an open, IETF-proposed standard
based on SSL 3.0. RFCs 2246, 2712, 2817, and 2818 define TLS. The two
protocols are not interoperable, but TLS has the capability to drop down
into SSL 3.0 mode for backwards compatibility. SSL and TLS provide
security for a single TCP session.
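As a minimal sketch of transport layer security in practice, Python's standard ssl module can upgrade an ordinary TCP socket to SSL/TLS. The host name in the commented-out connection code is only a placeholder.

```python
import ssl

# A client-side context with the library defaults: certificate
# verification and hostname checking are both enabled.
ctx = ssl.create_default_context()
assert ctx.verify_mode == ssl.CERT_REQUIRED
assert ctx.check_hostname is True

# Wrapping a TCP socket would secure the single TCP session:
#   with socket.create_connection(("example.org", 443)) as sock:
#       with ctx.wrap_socket(sock, server_hostname="example.org") as tls:
#           print(tls.version())
```

Note that the default context refuses unauthenticated connections, matching the server-authentication behavior described above.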
Application layer security provides end-to-end security from an
application running on one host through the network to the
application on another host. It does not care about the underlying
transport mechanism. Complete coverage of security requirements,
integrity, confidentiality, and nonrepudiation can be provided at this layer.
Applications have a fine granularity of control over the nature and content
of the transactions. However, application layer security is not a general
solution, because each application and client must be adapted to provide
the security services.
SSH
SSH (Secure Shell) is a secure replacement for traditional unencrypted
remote-access utilities such as rsh, rlogin, rcp, telnet, rexec, and ftp.
The security of SSH is largely
dependent on end-to-end encryption of a session between a client and
server. SSH also has the ability to strongly authenticate machines before
sending login information over the wire.
SSH is generally used to log in remotely to other computer
systems and execute commands. SSH also allows for the secure transfer
of files from one machine to another through the use of secure file copy
(SCP) and secure ftp (SFTP). SSH can help secure X11 traffic by sending it
through an encrypted tunnel. SSH has even been used to set up primitive
virtual private networks (VPNs) between hosts.
SSH components include the server (SSHD), the client (SSH),
secure file copy (SCP), and ssh-keygen. Ssh-keygen is an application used
to create the public and private keys that are used for machine
authentication.
Using strong authentication options SSH can protect against IP
spoofing attacks, IP source routing, Domain Name System (DNS) spoofing,
sniffing attacks, man-in-the-middle attacks, and attacks on the X-Window
system.
SSH consists of three layers, the transport layer protocol, the
authentication protocol, and the connection protocol. The SSH transport
layer protocol is responsible for handling encryption key negotiation,
handling key regeneration requests, handling service request messages,
and handling service disconnect messages. The SSH authentication
protocol is responsible for negotiating the authentication type, checking
for secured channels before passing authentication information, and
supporting password change requests. The SSH connection protocol
controls the opening and closing of channels and also controls port
forwarding.
Currently, there are two versions of SSH, v1 and v2. The original
SSH, version 1, is generally distributed free for non-commercial use in
source code format. It is available (at least in client form) on almost every
computing platform ranging from UNIX to PalmOS.
SSH1 comes in three major variants: versions 1.2, 1.3, and 1.5.
Although many security problems have been discovered with SSH1, it
is still considered secure provided that attention is paid to the
authentication method and the ciphers being used. For example, SSH1 is
vulnerable to a data insertion attack because it employs CRC for data
integrity checking.
Using the Triple-DES encryption algorithm solves this problem.
SSH1 actually supports a wider variety of authentication methods than
version 2, including AFS (based on Carnegie-Mellon’s Andrew File System)
and Kerberos. SSH1 is still quite popular and extensively used.
SSH2 is a complete rewrite of SSH1 that also adds new features, including
support for FTP and the TLS protocol. Because of differences in the
protocol implementation, the two versions are not fully compatible. SSH2
provides improvements to security, performance, and portability.
SSH2 requires less code to run with root privileges. This means
that an exploit such as a buffer overflow in the SSH server program will
be less likely to leave an attacker with root privileges on the server.
SSH2 does not use the same networking implementation as SSH1,
because it encrypts different parts of the packets. SSH2 does not support
weak authentication using .rhosts files. In SSH2, the Digital Signature
Algorithm (DSA) and the Diffie-Hellman key exchange replace the RSA
algorithm, but since the RSA patents have now expired, expect support
for this algorithm to return in future versions. SSH2 supports Triple-DES,
Blowfish, CAST-128, and Arcfour.
Because of the differences between SSH1 and SSH2, and because
of licensing restrictions, both versions will continue to be in use for some
time. New development is happening primarily with SSH2, as it is in the
process of becoming an IETF standard. For this reason, SSH2 should be
preferred over SSH1. A free implementation of SSH2 has been developed
by the OpenBSD community and is available from www.openssh.com.
PGP
Pretty Good Privacy (PGP) is a widely used program for encrypting and
signing e-mail and files. The GNU Project has released a command line
program called GNU
Privacy Guard (GnuPG) based on the OpenPGP standard. This program is
available as freeware from www.gnupg.org. GNU Privacy Guard is not as
elegant as PGP Desktop and, as a command-line application, provides no
integration with the operating system.
PGP addresses the public key authentication issue by using a
model based on people trusting other people. This trust is expressed by
signing someone’s PGP key. In effect, any PGP user becomes a
Certification Authority (CA) by signing other users’ keys. In the PGP trust
model, there is no difference between signing a key as a user or as a CA.
This differs significantly from the public key infrastructure scenario where
only a CA can express trust of a public key. As other users sign your key,
and you sign their keys in return, a “web of trust” is built. This trust is
based on whether or not you trust the public key as being genuine, and
whether you trust the other people who have signed the key.
Version 7.0 of PGP Desktop introduces support for X.509v3 digital
certificates, allowing PGP to use both web of trust and public key
infrastructures for key management.
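The web of trust can be sketched as a small graph of key signatures. The names and the acceptance rule below (trust a key if a signer we already trust vouches for it) are simplified assumptions, not PGP's actual scoring of marginal and complete trust.

```python
# signatures[key] = set of keys that have signed it (hypothetical data)
signatures = {
    "alice": {"bob", "carol"},
    "bob": {"carol"},
    "carol": set(),
}

def trusted(key: str, trusted_signers: set) -> bool:
    """A key is accepted if at least one signer we already trust
    vouches for it by having signed the key."""
    return bool(signatures.get(key, set()) & trusted_signers)

# We personally trust carol's key, so keys she signed become trustworthy.
assert trusted("alice", {"carol"})
assert trusted("bob", {"carol"})
assert not trusted("carol", {"dave"})   # nobody we trust signed carol's key
```

In real PGP, trust accumulates transitively and with weights; this sketch only shows the structural idea that every user can act as a certifier.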
S/MIME
Secure/Multipurpose Internet Mail Extensions (S/MIME) adds security
services to the MIME standard. MIME encodes files using various methods,
and then decodes them
back to their original format at the receiving end. A MIME header is added
to the file, which includes the type of data contained and the encoding
method used.
MIME is a rich and mature specification for sending a variety of
content encoded over the Internet. It makes sense to add security
features to this existing standard rather than creating a new and
completely different standard. By extending MIME in the form of S/MIME,
the standard is given a rich and capable foundation on which to add
security features.
In order to send an S/MIME secured message, both the sender and
recipient must have an S/MIME-capable client such as Outlook, Outlook
Express, or Netscape Communicator. Indeed, one of the advantages of
S/MIME is that the sender and receiver of an e-mail do not need to run
the same mail package. In addition, each user must obtain a digital
certificate with a corresponding private key.
S/MIME is a hybrid encryption system that uses both public and
private key algorithms. Private key algorithms are used for encrypting
data whereas public key algorithms are used for key exchange and for
digital signatures.
S/MIME requires the use of X.509 digital certificates. The S/MIME
specification recommends the use of three encryption algorithms: DES,
Triple-DES, and RC2. The security of an S/MIME encrypted message
largely depends upon the key size of the encryption algorithm. An
interesting aspect of S/MIME is that the receiver, not the sender, of a
message determines the encryption method used based on information
provided in the digital certificate.
Sending an S/MIME message involves several steps. First,
someone wishes to send an encrypted e-mail that will be safe from
eavesdroppers. The message is encrypted with a randomly generated
symmetric session key. Next, this session key is encrypted using the
recipient’s public key. This key was either previously exchanged or it was
pulled from a directory such as an LDAP server. Next, the encrypted
message, the session key, algorithm identifiers and other data are all
packaged into a PKCS #7-formatted binary object. This object is then
encoded into a MIME object using the application/pkcs7-mime content
type. The message is then sent. When the message is received, the digital
envelope is opened and the recipient’s private key decrypts the session
key. The session key is then used to decrypt the message. The clear-text
message can now be read.
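The hybrid envelope steps above can be sketched as follows. Since the Python standard library ships neither Triple-DES nor RSA, a deliberately toy stream cipher stands in for the symmetric algorithm, and the public-key step is left as a comment; this illustrates the structure only, not real S/MIME cryptography.

```python
import hashlib
import secrets

def keystream_xor(key: bytes, data: bytes) -> bytes:
    """TOY stream cipher (SHA-256 in counter mode). Real S/MIME uses
    algorithms such as Triple-DES; this only shows the structure."""
    out = bytearray()
    counter = 0
    while len(out) < len(data):
        out += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return bytes(a ^ b for a, b in zip(data, out))

# 1. A randomly generated symmetric session key encrypts the message.
session_key = secrets.token_bytes(32)
ciphertext = keystream_xor(session_key, b"Meet at noon.")

# 2. The session key itself would then be encrypted with the recipient's
#    public key and packaged, together with the ciphertext and algorithm
#    identifiers, into a PKCS #7 object (public-key step omitted here).

# 3. The recipient recovers the session key, then decrypts the message.
assert keystream_xor(session_key, ciphertext) == b"Meet at noon."
```

The key point is the division of labor: the symmetric key does the bulk encryption, while public-key operations are confined to protecting that one small key.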
S/MIME and PGP both provide reliable and secure methods for
encrypting e-mail. PGP’s trust model, until version 7.0, has relied on the
web of trust security model. S/MIME, on the other hand, can take
advantage of PKI and digital certificates, helping it to scale to much larger
environments. S/MIME is also integrated into many e-mail clients,
whereas PGP requires the user to download an application and install e-
mail application plug-ins.
1.6 Conclusions
This paper combines and synthesizes information about
cryptographic standards, authentication protocols, and network layer,
transport layer, and application layer security protocols. This
knowledge is important because, when you want to create a security
policy for a corporation, it is not enough to implement a single network
security protocol; instead, you have to design a combination of hardware
and software security technologies that together form a proper package
fulfilling the corporation's security requirements. To this end, anyone
who wants to implement a proper security policy should study the
security standards and protocols.
References:
Security in Operating Systems
Module 4 – Security in operating
systems
1.1 Introduction
In general, the concern of security in operating systems is with the
problem of controlling access to computer systems and the
information stored in them. Four types of overall protection
policies, in increasing order of difficulty, have been identified [1]:
1. No sharing: processes are completely isolated from each other,
and each process has exclusive control over the resources
statically or dynamically assigned to it. In this case, processes
often share a program or data file by making a copy of it and
transferring the copy into their own virtual memory.
2. Sharing originals of program or data files: with the use of reentrant
code, a single physical realization of a program can appear in
multiple virtual address spaces, as can read-only data files. To
prevent simultaneous users from interfering with each other,
special locking mechanisms are required for the sharing of writable
data files.
3. Confined, or memory-less, subsystems: In this case, processes are
grouped into subsystems to enforce a particular protection policy.
For example, a client process calls a server process to perform
some task on data. The server is to be protected against the client
discovering the algorithm by which it performs the task, while the
client is to be protected against the server's retaining any
information about the task being performed.
4. Controlled information dissemination: In some systems, security
classes are defined to enforce a particular dissemination policy.
Users and applications are given security clearances of a certain
level, while data and other resources are given security
classifications. The security policy enforces restrictions concerning
which users have access to which classifications. This model is
useful not only in the military context but in commercial
applications as well.
1.2 Computer System Assets
The assets of a computer system can be categorized as hardware,
software, and data. We will consider each of these in turn, in
reasonable detail.
Hardware
Software
Data
A statistical database provides summary or aggregate information. At
first glance, the existence of
aggregate information does not threaten the privacy of the individuals
involved, but as the use of statistical databases grows, there is an
increasing potential for disclosure of personal information. In essence,
characteristics of constituent individuals may be identified through careful
analysis. To take a simple example, if one table records the aggregate of
the incomes of respondents X, Y, Z, and W and another records the
aggregate of the incomes of X, Y, Z, W, and K, the difference between the
two aggregates would be the income of K. Finally, data integrity is a
major concern in most installations. Modifications to data files can have
consequences ranging from minor to disastrous.
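The inference attack on aggregates can be reproduced in a few lines; the income figures below are invented for illustration.

```python
# Hypothetical individual incomes (never published directly).
incomes = {"X": 40_000, "Y": 55_000, "Z": 62_000, "W": 48_000, "K": 75_000}

# Two published aggregates over overlapping groups of respondents.
agg_without_k = sum(incomes[p] for p in ("X", "Y", "Z", "W"))
agg_with_k    = sum(incomes[p] for p in ("X", "Y", "Z", "W", "K"))

# The difference of the two "anonymous" aggregates reveals K's income.
assert agg_with_k - agg_without_k == incomes["K"]
```

This is why statistical databases need controls such as minimum query-set sizes or noise added to results, not just suppression of individual records.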
A number of principles have been identified for the design of security
measures against the various threats to computer systems [SALT75].
These include:
1. Least privilege: every program and every user of the system
should operate using the least set of privileges necessary to
complete the job. Access rights should be acquired by explicit
permission only; the default should be "no access."
2. Economy of mechanisms: security mechanisms should be as
small and simple as possible, aiding in their verification. This
usually means that they must be an integral part of the design
rather than add-on mechanisms to existing designs.
3. Acceptability: security mechanisms should not interfere in
improper ways with the work of users, while at the same time
should meet the needs of those who authorize access. If the
mechanisms are not easy to use, they are likely to be unused
or incorrectly used.
4. Complete mediation: every access must be checked against the
access-control information, including those accesses occurring
outside normal operation, as in recovery or maintenance.
5. Open design: the security of the system should not depend on
keeping the design of its mechanisms secret. Thus, the
mechanisms can be reviewed by many experts, and users can
have high confidence in them.
1.4 Protection mechanism
The introduction of multiprogramming brought about the ability to
share resources among users. This involves not just the processor
but also the following:
1. Memory
2. I/O devices, such as disks and printers
3. Programs
4. Data
The ability to share these resources introduced the need for protection. An
operating system may offer protection along the following spectrum [3]:
1. No protection: appropriate when sensitive procedures are being
run at separate times.
2. Isolation: implies that each process operates separately from other
processes, with no sharing or communication. Each process has its
own address space, files, and other objects.
3. Share all or share nothing: the owner of an object declares it to be
public or private. In the former case, any process may access the
object; in the latter, only the owner's processes may access the
object.
4. Share via access limitation: the operating system checks the
permissibility of each access by a specific user to a specific object.
The operating system therefore acts as a guard or gatekeeper,
between users and objects, ensuring that only authorized accesses
occur.
5. Share via dynamic capabilities: this extends the concept of access
control to allow dynamic creation of sharing rights for objects.
6. Limit use of an object: this form of protection limits not just access
to an object but the use to which that object may be put. For
example, a user may be allowed to view a sensitive document but
not print it. Another example is that a user may be allowed access
to a database to derive statistical summaries but not to determine
specific data values.
1.4.1 Memory Protection
The first goal is to check every access: even if we have
previously authorized the user to access the object, we do not necessarily
intend that the user should retain indefinite access to the object. In fact,
in some situations, we may want to prevent further access immediately
after we revoke authorization. For this reason, every access by a user to
an object should be checked.
The second goal is concerned with the principle of least privilege
which states that a subject should have access to the smallest number of
objects necessary to perform some task. Even if extra information would
be useless or harmless if the subject were to have access, the subject
should not have that additional access. Not allowing access to
unnecessary objects guards against security weaknesses if a part of the
protection mechanism should fail.
The last goal is to verify acceptable usage. Ability to access is a
yes-or-no decision. But it is equally important to check that the activity to
be performed on an object is appropriate. For example, a data structure
such as a stack has certain acceptable operations, including push, pop,
clear, and so on. We may want not only to control who or what has access
to a stack but also to be assured that the accesses performed are
legitimate stack accesses.
Regarding object protection, we will present four ways of implementing it:
directories, access control lists, the access control matrix, and
procedure-oriented access control.
Directory
[Figure: per-subject file directories, each entry listing Files, Rights, and a Pointer to the object]
may have two distinct sets of access rights to F, one under the name Q
and one under the name F. In this way, allowing pseudonyms leads to
multiple permissions that are not necessarily consistent. Thus, the
directory approach is probably too simple for most object protection
situations. [PFLE03]
[Figure: access control list for object Doc, granting USER A the rights RW and USER C the rights RWX]
without specific permission. The compartment was also a way to collect
objects that were related, such as all files for a single project.
To show how this type of protection might work, suppose every
user who initiates access to the system identifies a group and a
compartment with which to work. If UserA logs in as user UserA in group
Grp and compartment Comp2, only objects having UserA-Grp-Comp2 in
the access control list are accessible in the session. This kind of
mechanism would be too restrictive to be usable. UserA cannot create
general files to be used in any session. Worse yet, shared objects would
have not only to list UserA as a legitimate subject but also to list UserA
under all acceptable groups and all acceptable compartments for each
group. The solution is the use of wild cards, meaning placeholders that
designate "any user" or "any group" or "any compartment". An access
control list might specify access by UserA-Grp-Comp1, giving specific
rights to UserA if working in group Grp on compartment Comp1. The list
might also specify UserA-*-Comp1, meaning that UserA can access the
object from any group in compartment Comp1. Likewise, a notation of
*-Grp-* would mean that any user in group Grp, in any compartment, may
access the object.
Different placements of the wildcard notation * have the obvious
interpretations. The access control list can be maintained in sorted order,
with * sorted as coming after all specific names. For example, UserA-Grp-
* would come after all specific compartment designations for UserA. The
search for access permission continues just until the first match. In the
protocol, all explicit designations will be checked before wild cards in any
position, so a specific access right would take precedence over a wildcard
right. The last entry on an access list could be *-*-*, specifying rights
allowable to any user not explicitly on the access list. By using this
wildcard device, a shared public object can have a very short access list,
explicitly naming the few subjects that should have access rights different
from the default. [PFLE03]
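A minimal sketch of this wildcard matching, assuming hyphen-separated user-group-compartment triples and a pre-sorted list in which specific entries precede wildcard entries:

```python
def matches(entry: str, subject: str) -> bool:
    """An ACL entry like 'UserA-*-Comp1' matches a subject triple
    user-group-compartment; '*' stands for any value in that position."""
    return all(e in ("*", s)
               for e, s in zip(entry.split("-"), subject.split("-")))

def lookup(acl, subject: str):
    """Entries are kept sorted so specific names come before wildcards;
    the search stops at the first match."""
    for entry, rights in acl:
        if matches(entry, subject):
            return rights
    return None

acl = [
    ("UserA-Grp-Comp1", "rw"),   # a specific entry takes precedence
    ("UserA-*-Comp1",   "r"),
    ("*-Grp-*",         "r"),
    ("*-*-*",           ""),     # default rights for everyone else
]
assert lookup(acl, "UserA-Grp-Comp1") == "rw"
assert lookup(acl, "UserA-Other-Comp1") == "r"
assert lookup(acl, "UserB-Grp-Comp2") == "r"
assert lookup(acl, "UserC-Ext-Comp9") == ""
```

Because the list is ordered and the search is first-match, UserA in group Grp on Comp1 gets the specific rights "rw" even though two wildcard entries would also match.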
We examined protection schemes in which the operating system
must keep track of all the protection objects and rights. There are other
approaches that put some of the burden on the user. For example, a user
may be required to have a ticket or pass that enables access. Formally, a
capability is an unforgeable token that gives the possessor certain rights
to an object. A subject can create new objects and can specify the
operations allowed on those objects. For example, users can create
objects such as files, data segments, or subprocesses and can also specify
the acceptable kinds of operations, such as read, write, and execute. But
a user can also create completely new objects, such as new data
structures, and define types of accesses previously unknown to the
system.
A capability is a ticket giving permission to a subject to have a
certain type of access to an object. For the capability to offer solid
protection, the ticket must be unforgeable. One way to make it
unforgeable is to not give the ticket directly to the user. Instead, the
operating system holds all tickets on behalf of the users. The operating
system returns to the user a pointer to an operating system data
structure, which also links to the user. A capability can be created only by
a specific request from a user to the operating system. Each capability
also identifies the allowable accesses. Alternatively, capabilities can be
encrypted under a key available only to the access control mechanism. If
the encrypted capability contains the identity of its rightful owner, user A
cannot copy the capability and give it to user B. One possible access right
to an object is transfer or propagate. A subject having this right can pass
copies of capabilities to other subjects. In turn, each of these capabilities
also has a list of permitted types of accesses, one of which might also be
transfer. In this instance, process A can pass a copy of a capability to B,
who can then pass a copy to C. B can prevent further distribution of the
capability (and therefore prevent further dissemination of the access right)
by omitting the transfer right from the rights passed in the capability to C.
B might still pass certain access rights to C, but not the right to propagate
access rights to other subjects.
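The encrypted-capability idea can be sketched by sealing the ticket with an HMAC under a key held only by the access control mechanism. The key, user names, and rights strings are illustrative; a real system would also handle the transfer right and revocation.

```python
import hmac
import hashlib

# Held only by the access control mechanism, never given to users.
KERNEL_KEY = b"key-held-only-by-the-kernel"

def make_capability(owner: str, obj: str, rights: str):
    """A capability names its rightful owner and allowed accesses and is
    sealed with a MAC, so users cannot forge or alter the ticket."""
    body = f"{owner}:{obj}:{rights}".encode()
    return body, hmac.new(KERNEL_KEY, body, hashlib.sha256).digest()

def check(cap, user: str, obj: str, access: str) -> bool:
    body, seal = cap
    if not hmac.compare_digest(
            hmac.new(KERNEL_KEY, body, hashlib.sha256).digest(), seal):
        return False                       # forged or tampered ticket
    owner, target, rights = body.decode().split(":")
    return owner == user and target == obj and access in rights

cap = make_capability("A", "file1", "rw")
assert check(cap, "A", "file1", "r")
assert not check(cap, "B", "file1", "r")   # B cannot reuse A's ticket
forged = (b"B:file1:rw", cap[1])           # B pastes A's seal on a new body
assert not check(forged, "B", "file1", "r")
```

Because the seal covers the owner's identity, copying a capability to another user is useless, which is exactly the unforgeability property described above.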
As a process executes, it operates in a domain or local name space.
The domain is the collection of objects to which the process has access. A
domain for a user at a given time might include some programs, files,
data segments, and I/O devices such as a printer and a terminal. An
example of a domain is shown in Figure 4.3.
[Figure 4.3: a process domain comprising files, data storage, devices, and processes]
One goal of access control is restricting not just what subjects have
access to an object, but also what they can do to that object. Read versus
write access can be controlled rather readily by most operating systems,
but more complex control is not so easy to achieve. Procedure-
oriented protection implies the existence of a procedure that controls
access to objects. The procedure forms a capsule around the object,
permitting only certain specified accesses. Procedures can ensure that
accesses to an object be made through a trusted interface. For example,
neither users nor general operating system routines might be allowed
direct access to the table of valid users. Instead, the only accesses
allowed might be through three procedures: one to add a user, one to
delete a user, and one to check whether a particular name corresponds to
a valid user. These procedures, especially add and delete, could use their
own checks to make sure that calls to them are legitimate.
Procedure-oriented protection implements the principle of
information hiding, because the means of implementing an object are
known only to the object's control procedure. Of course, this degree of
protection carries a penalty of inefficiency. With procedure-oriented
protection, there can be no simple, fast access, even if the object is
frequently used.
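The user-table example above can be sketched as a class whose only entry points are the three procedures; the caller-authority check is reduced to a comment, since it would depend on the surrounding system.

```python
class UserTable:
    """Only these three procedures can touch the underlying table; the
    representation (here a set) is hidden from all callers."""

    def __init__(self):
        self._users = set()

    def add(self, name: str) -> None:
        # A real implementation would verify the caller's authority here.
        self._users.add(name)

    def delete(self, name: str) -> None:
        # ...and here, before allowing the deletion.
        self._users.discard(name)

    def is_valid(self, name: str) -> bool:
        return name in self._users

table = UserTable()
table.add("alice")
assert table.is_valid("alice")
table.delete("alice")
assert not table.is_valid("alice")
```

The capsule is the class boundary: callers can add, delete, and query, but never iterate, dump, or rewrite the table wholesale, which is the information hiding the text describes.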
As the mechanisms have provided greater flexibility, they have
done so with a price of increased overhead. For example, implementing
capabilities that must be checked on each access is far more difficult than
implementing a simple directory structure that is checked only on a
subject's first access to an object. This complexity is apparent both to the
user and to the implementer. The user is aware of additional protection
features, but the naive user may be frustrated or intimidated at having to
select protection options with little understanding of their usefulness. The
implementation complexity becomes apparent in slow response to users.
Kernel-mode functions typically include process management, memory
management, I/O management, and support functions. In the process
management category
we can consider: process creation and termination, process scheduling
and dispatching, process switching, process synchronization and support
for interprocess communication and management of process control
blocks. Regarding memory management we can refer to the allocation of
address space to processes, swapping and page and segment
management. I/O management includes buffer management and
allocation of I/O channels and devices to processes. As for support
functions we can list: interrupt handling, accounting and monitoring. The
reason for using two modes is clear. It is necessary to protect the
operating system and key operating system tables, such as process
control blocks, from interference by user programs. In the kernel mode,
the software has complete control of the processor and all its instructions,
registers, and memory. This level of control is not necessary and for
safety is not desirable for user programs. [TIPT02]
One may ask how the processor knows in which mode it is
executing, and how the mode is changed. Regarding the first
problem, there is a bit in the program status word that indicates the mode
of execution. This bit is changed in response to certain events. When a
user makes a call to an operating system service, the mode is set to the
kernel mode. This is done by executing an instruction that changes the
mode. When the user makes a system service call or when an interrupt
transfers control to a system routine, the routine executes the change-
mode instruction to enter a more privileged mode and executes it again to
enter a less privileged mode before returning control to the user process.
If a user program attempts to execute a change-mode instruction, it will
simply result in a call to the operating system, which will return an error
unless the mode change is to be allowed.
More sophisticated mechanisms can be provided. One scheme
is to use a ring-protection structure. In this scheme, lower-numbered, or
inner, rings enjoy greater privilege than higher-numbered, or outer, rings.
Typically, ring 0 is reserved for kernel functions of the operating
system, with applications at a higher level. Some utilities or operating
system services may occupy an intermediate ring. Basic principles of the
ring system are:
A program may access only those data that reside on the same
ring or a less privileged ring.
A program may call services residing on the same or a more
privileged ring.
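The two ring-system principles translate directly into comparisons on ring numbers; a minimal sketch, where lower numbers mean greater privilege:

```python
def may_access_data(program_ring: int, data_ring: int) -> bool:
    """Data may be accessed on the same ring or a less privileged
    (higher-numbered) ring."""
    return data_ring >= program_ring

def may_call(program_ring: int, service_ring: int) -> bool:
    """Services may be called on the same or a more privileged
    (lower-numbered) ring."""
    return service_ring <= program_ring

assert may_access_data(1, 3)       # ring 1 may read ring-3 data
assert not may_access_data(3, 0)   # ring 3 cannot read kernel data
assert may_call(3, 0)              # an application invokes a kernel service
assert not may_call(0, 3)          # the kernel does not call out to ring 3
```

Note the asymmetry: data flows are restricted outward (toward less privileged rings), while calls flow inward (toward more privileged rings), typically through controlled gate entry points.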
1.5 File sharing
Multiuser systems almost always require that files can be shared
among a number of users. Two issues arise: access rights and the
management of simultaneous access.
The file system should provide a flexible tool for allowing extensive file
sharing among users. The file system should provide a number of options
so that the way in which a particular file is accessed can be controlled.
Typically, users or groups of users are granted certain access rights to a
file. A wide range of access rights have been used.
The following list indicates access rights that can be assigned to a
particular user for a particular file:
None: the user may not even learn of the existence of the file,
much less access it. To enforce this restriction, the user would
not be allowed to read the user directory that includes this file.
Knowledge: the user can determine that the file exists and who
its owner is. The user is then able to petition the owner for
additional access rights.
Execution: the user can load and execute a program but cannot
copy it. Proprietary programs often are made accessible with
this restriction.
Reading: the user can read the file for any purpose, including
copying and execution. Some systems are able to enforce a
distinction between viewing and copying. In the former case,
the contents of the file can be displayed to the user, but the
user has no means for making a copy.
Appending: the user can add data to the file, often only at the
end, but cannot modify or delete any of the file's contents. This
right is useful in collecting data from a number of sources.
Updating: the user can modify, delete, and add to the file's
data. This normally includes writing the file initially, rewriting it
completely or in part, and removing all or a portion of the data.
Some systems distinguish among different degrees of updating.
Changing protection: the user can change the access rights
granted to other users. Typically, only the owner of the file
holds this right. In some systems, the owner can extend this
right to others. To prevent abuse of this mechanism, the file
owner typically is able to specify which rights can be changed
by their holder.
Deletion: the user can delete the file from the file system.
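A minimal sketch of such per-user access rights as explicit sets; the user names and grants are invented, and the "none" right is modeled simply as absence from the table:

```python
# Each user's explicitly granted rights on one particular file.
grants = {
    "owner": {"knowledge", "execution", "reading", "appending",
              "updating", "changing protection", "deletion"},
    "guest": {"knowledge", "execution"},
}

def allowed(user: str, right: str) -> bool:
    """A user absent from the table has the 'none' right: they may not
    even learn of the file's existence."""
    return right in grants.get(user, set())

assert allowed("owner", "deletion")
assert allowed("guest", "execution")
assert not allowed("guest", "reading")      # guest may run but not copy
assert not allowed("stranger", "knowledge") # the 'none' case
```

Many systems treat these rights as cumulative (reading implies knowledge, and so on); here they are kept as independent set members to keep the sketch explicit.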
One user is designated as owner of a given file, usually the person
who initially created a file. The owner has all of the access rights listed
previously and may grant rights to others. Access can be provided to
different classes of users:
Specific user: individual users who are designated by user ID.
User groups: a set of users who are not individually defined.
The system must have some way of keeping track of the
membership of user groups.
All: all users who have access to this system; these are public
files.
We discussed issues regarding the protection of a given message or
item from passive or active attack by a given user. Another
widely applicable requirement is to protect data or resources on
the basis of levels of security. This is found in the military, where
information is categorized as unclassified (U), confidential (C), secret (S),
top secret (TS), or beyond.
This concept is equally applicable in other areas, where information
can be organized into gross categories and users can be granted
clearances to access certain categories of data. For example, the highest
level of security might be for strategic corporate planning documents and
data, accessible by only corporate officers and their staff; next might
come sensitive financial and personnel data, accessible only by
administration personnel, corporate officers, and so on.
When multiple categories or levels of data are defined, the
requirement is referred to as multilevel security. The general statement of
the requirement for multilevel security is that a subject at a high level
may not convey information to a subject at a lower or noncomparable
level unless that flow accurately reflects the will of an authorized user. For
implementation purposes, this requirement is in two parts and is simply
stated.
A multilevel secure system must enforce:
No read up: A subject can only read an object of less or equal
security level. This is referred to as the simple security property.
No write down: A subject can only write into an object of
greater or equal security level. This is referred to as the
*-property (star property).
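The two properties translate directly into comparisons on security levels; a minimal sketch using the military classifications named earlier:

```python
# Unclassified < Confidential < Secret < Top Secret
LEVELS = {"U": 0, "C": 1, "S": 2, "TS": 3}

def may_read(subject: str, obj: str) -> bool:
    """No read up (simple security property): the subject's clearance
    must be at least the object's classification."""
    return LEVELS[subject] >= LEVELS[obj]

def may_write(subject: str, obj: str) -> bool:
    """No write down (*-property): the object's classification must be
    at least the subject's clearance."""
    return LEVELS[obj] >= LEVELS[subject]

assert may_read("S", "C") and not may_read("C", "S")
assert may_write("C", "S") and not may_write("S", "C")
```

The write rule can look counterintuitive, but it is what prevents a cleared subject (or a program acting on its behalf) from leaking secret information into a lower-level object.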
[Figure: reference monitor elements, including the security kernel database and the audit file]
The requirement for complete mediation means that every access
to data within main memory and on disk and tape must be mediated. Pure
software implementations impose too high a performance penalty to be
practical; the solution must be at least partly in hardware. The
requirement for isolation means that it must not be possible for an
attacker to change the logic of the reference monitor or the contents of
the security kernel database. Finally, the requirement for mathematical
proof is formidable for something as complex as a general-purpose
computer. A system that can provide such verification is referred to as a
trusted system. A final element is an audit file. Important security events,
such as detected security violations and authorized changes to the
security kernel database, are stored in the audit file.
Windows 2000 provides a uniform access control facility that
applies to processes, threads, files, semaphores, windows, and
other objects. Access control is governed by two entities: an
access token associated with each process and a security descriptor
associated with each object for which interprocess access is possible.
When it needs to perform a privileged operation, the process may enable the
appropriate privilege and attempt access. It would be undesirable to keep all
of the security information for a user in one system-wide place, because in
that case enabling a privilege for one process would enable it for all of them.
A security descriptor is associated with each object for which
interprocess access is possible. The most important component of the
security descriptor is an access control list that specifies access rights for
various users and user groups for this object. When a process attempts to
access this object, the SID of the process is matched against the access
control list of the object to determine if access will be allowed.
When an application opens a reference to a securable object, Windows
2000 verifies that the object's security descriptor grants the application's
user access. If the check succeeds, the system caches the resulting
granted access rights.
An important aspect of Windows 2000 security is the concept of
impersonation, which simplifies the use of security in a client-server
environment. If client and server talk through a Remote Procedure Call
connection, the server can temporarily assume the identity of the client so
that it can evaluate a request for access relative to that client's rights.
After the access, the server reverts to its own identity.
(Figure: the access token.)
the Access Control List for any object that it owns or that one
of its groups owns.
(Figure: the security descriptor.)
The least significant 16 bits of the access mask specify access rights that apply to a particular type of
object. For example, bit 0 for a file object is File_Read_Data access, and
bit 0 for an event object is Event_Query_Status access. The most
significant 16 bits of the mask contain bits that apply to all types of
objects. Five of these are referred to as standard access types:
1. Synchronize: gives permission to synchronize execution with
some event associated with this object; this object can be used
in a wait function.
2. Write_owner: allows a program to modify the owner of the
object. This is useful because the owner of an object can
always change the protection on the object.
3. Write_DAC: Allows the application to modify the DACL and
hence the protection on this object.
4. Read_control: Allows the application to query the owner and
DACL fields of the security descriptor of this object.
5. Delete: Allows the application to delete this object.
The high-order half of the access mask also contains the four
generic access types. These bits provide a convenient way to set specific
access types in a number of different object types. For example, suppose
an application wishes to create several types of objects and ensure that
users have read access to the objects, even though read has a somewhat
different meaning for each object type. To protect each object of each
type without the generic access bits, the application would have to
construct a different ACE for each type of object and be careful to pass
the correct ACE when creating each object. It is more convenient to create
a single ACE that expresses the generic concept allow read, simply apply
this ACE to each object that is created, and have the right thing happen.
That is the purpose of the generic access bits, which are:
Generic_all: allow all access
Generic_execute: allow execution if executable
Generic_write: allow write access
Generic_read: allow read only access
The generic bits also affect the standard access types. For example,
for a file object, the Generic_Read bit maps to the standard bits
Read_Control and Synchronize and to the object-specific bits
File_Read_Data, File_Read_Attributes, and File_Read_EA. Placing an ACE
on a file object that grants some SID Generic_Read grants those five
access rights as if they had been specified individually in the access mask.
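As an illustration, the expansion of Generic_Read for a file object can be sketched with bit constants in the style of the Windows headers; treat the exact values below as illustrative rather than authoritative.

```python
# Bit values in the style of the Windows headers (illustrative).
SYNCHRONIZE          = 0x00100000  # standard right
READ_CONTROL         = 0x00020000  # standard right
FILE_READ_DATA       = 0x00000001  # object-specific right
FILE_READ_EA         = 0x00000008  # object-specific right
FILE_READ_ATTRIBUTES = 0x00000080  # object-specific right
GENERIC_READ         = 0x80000000  # generic right

# The five rights that Generic_Read maps to on a file object.
FILE_GENERIC_READ = (READ_CONTROL | SYNCHRONIZE | FILE_READ_DATA |
                     FILE_READ_EA | FILE_READ_ATTRIBUTES)

def expand_generic_read(mask):
    """Replace the generic-read bit in a file access mask with the
    five specific rights it stands for."""
    if mask & GENERIC_READ:
        mask = (mask & ~GENERIC_READ) | FILE_GENERIC_READ
    return mask
```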
The remaining two bits in the access mask have special meanings.
The Access_System_Security bit allows modifying audit and alarm control
for this object. However, not only must this bit be set in the ACE for a SID,
but the access token for the process with that SID must have the
corresponding privilege enabled. Finally, the Maximum_Allowed bit is not
really an access bit but a bit that modifies Windows 2000's algorithm for
scanning the DACL for this SID. Normally, Windows 2000 will scan
through the DACL until it reaches an ACE that specifically grants (bit set)
or denies (bit not set) the access requested by the requesting process or
until it reaches the end of the DACL, in which latter case access is denied.
The Maximum_Allowed bit allows the object's owner to define a set of
access rights that is the maximum that will be allowed to a given user.
With this in mind, suppose that an application does not know all of the
operations that it is going to be asked to perform on an object during a
session.
There are three options for requesting access:
1. Attempt to open the object for all possible accesses. The
disadvantage of this approach is that the access may be denied
even though the application may have all of the access rights
actually required for this session.
2. Only open the object when a specific access is requested, and
open a new handle to the object for each different type of
request. This is generally the preferred method because it will
not unnecessarily deny access, nor will it allow more access
than necessary.
3. Attempt to open the object for as much access as the object
will allow this SID. The advantage is that the user will not be
artificially denied access, but the application may have more
access than it needs.
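The DACL scan described above can be sketched as follows. The ACE representation here is deliberately simplified to a (type, SID, mask) triple; the real Windows structures are richer.

```python
# Simplified DACL evaluation sketch: ACEs are (type, sid, mask) triples.
def check_access(dacl, sid, desired):
    """Scan ACEs in order: a matching deny on any requested bit stops the
    scan, accumulated allows may grant it, reaching the end denies."""
    granted = 0
    for ace_type, ace_sid, mask in dacl:
        if ace_sid != sid:
            continue
        if ace_type == "deny" and (mask & desired):
            return False          # a matching deny ends the scan
        if ace_type == "allow":
            granted |= mask & desired
            if granted == desired:
                return True       # every requested bit has been granted
    return False                  # end of DACL: access denied
```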
References:
Module 5 – Informatics Project
Management
1.1 Introduction
The terms eBusiness and eCommerce have many definitions in IT fields.
One of them is that eBusiness is the integration of a company's
business, including products, procedures, and services, over the
Internet [ANIT00]. Usually, in practice, a company turns its business
into an eBusiness when it integrates marketing, sales, accounting,
manufacturing, and operations with its web site activities. An eBusiness
uses the Internet as a resource for all business activities.
This paper concerns eCommerce, which we can understand as a
component, a part, of an eBusiness.
The term electronic commerce, or "eCommerce", covers a wide
variety of online business activities for products and services, both
business-to-business (B2B) and business-to-consumer (B2C), conducted
through the Internet or even through intranets – private networks,
including mobile ones.
In the opinion of various specialists, eCommerce is divided into two
components:
Online Shopping - the activities that provide the customer or
the business partner with information about the products or
services traded. This information helps them stay informed and
take the proper decision in the buying process.
Online Purchasing - "ePayment" - the activities through which a
customer or a company actually purchases a product or a
service over the Internet or private networks. Another use of
the term online purchasing is described in [Anne00] as "a
metaphor used in business-to-business eCommerce for
providing customers with an online method of placing an order,
submitting a purchase order, or requesting a quote".
The companies and corporations that are involved in "normal"
business-to-business (B2B) and business-to-client (B2C) activities
become eBusinesses when the organization succeeds in integrating its
standard activities with its electronic information system. This electronic
information system could be an outsourced solution for web and mobile
sites or portals, or an "in house" developed system. For example, someone
who works in the sales department could consider the web and WAP
(Wireless Application Protocol) site a sales tool. When he or she speaks
with a customer, he or she shows the customer the product and service
presentations on the company's web or even WAP site and takes the
customer on virtual tours of the newest products and services. The
Marketing Department releases products and services on the web and WAP
site first, providing eLearning courses, online presentations and brochures.
Another department, such as Customer Support, hosts FAQs (Frequently
Asked Questions), support chat lines, and moderated newsgroups on the
web site. Purchasing uses the web to obtain prices on necessary
components and place orders, and Shipping uses the web to schedule
deliveries and notify customers of product arrival. So if a company does
business in the eBusiness way, then for each department the web site,
WAP site and distributed open information systems are important tools it
can use to be "number 1" in business. E-Commerce in this case has an
important role in the sales and accounting departments, but it is bound to
all departments through the input/output information it can provide. The
customer or the company's client uses the eCommerce "channel" to obtain
services and products faster, more securely and at lower cost than before.
Also, someone who is involved in eCommerce project management
has to take care of, and stay connected with, the following fields and
departments: package selection, business intelligence, knowledge
management, customer relationship management, project portfolio
management, services-application-solutions development and research,
process improvement, audit management and human resources
management; especially if these tasks take place in a large enterprise
environment.
Package selection – some software companies have already
focused on several real business problems. They have created some
remarkable solutions in markets like Supply Chain Management (SCM),
Customer Relationship Management (CRM), Enterprise Resource Planning
(ERP), e-Procurement, e-Commerce and m-Commerce (mobile commerce).
It is difficult to choose the proper software/hardware solutions from the
given number of vendors and choices. The challenge for a company is
to select the vendor and package solution from hundreds of products,
philosophies and solutions, in order to make the right choice when
implementing them. The solution is designed to provide the company with
resources on how to confront the challenges – processes and
methodologies, tools (templates and software/hardware), and an
opportunity to learn from what others have done: articles, books and
discussions [PACK04].
Business Intelligence – means the procedures and techniques used
in order to obtain information about a company's customers, competitors
and internal business processes. Many companies are now in transition
from traditional business models towards eBusiness models, and they
are forced to interact with customers and competitors in digital
competition. By using business intelligence, organizations can analyze
client and customer trends, evaluate the effectiveness of internal
processes and study competitor patterns. To enable effective business
intelligence practices, organizations need to use a wide variety of tools
and techniques and integrate them with existing business processes
[BUSI04].
Knowledge Management – is the ability of a company's staff and
employees to search their company's electronically stored documents and
find the information they need. Technologies used for knowledge
management include: search engines with artificial intelligence features,
databases (relational or object-oriented), meta tags such as XML, and
classification, possibly using neural networks [KNOW04].
Customer Relationship Management – this is the interactive,
knowledge-based age, where the success of a company could depend on
its ability to learn how to treat each customer as an important individual.
Project portfolio management – this one has a great impact on
the C-level (CEO, CIO, CTO) decision makers. The important thing here is
how managers can analyse projects both individually and in aggregate,
against metrics and corporate strategy. Also important is knowing how to
use tools and technologies to choose the best project for improving
company skills and competitiveness, even if that project does not make
much profit for the company.
Services-application-solutions development and research – this is
about how to deliver software and hardware solutions to the market on
time. It means the company has to handle the Time to Market (TTM)
problem. Managers' usual answer to the problem is to shorten the
development schedule and place even more pressure on developers, but
this is not the proper approach to the TTM problem. This department must
explain strategies and technologies that will help department managers
optimize TTM. The managers involved in this field have to find the best
practices that will help them meet challenging schedules, and
technologies, such as Web services and distributed systems, that enable
software on computers and mobile devices to connect and interoperate
across platforms, programming languages and applications [SOFT04].
Process improvement – stakeholders are concerned about the
company's rank in the market and about the amount of profit the
company can make. If it is done correctly, process improvement can align
the organization's operations with its strategic objectives. Its goal is to
improve products and services continuously in order to achieve the
desired outcomes.
Audit and risk evaluation management – this kind of management
is used to estimate and establish the future economic and social
implications of the company's decisions, based on observations made
while examining business activities.
Human resource management – this means people management.
The people in this field have to take care to hire the best people for the
company's interests, in order to obtain commitment without any problems
from production teams, stakeholders and upper management while
working on a project. An important thing is to avoid conflict and project
obstacles by discovering and working with the company's personality.
So, even though eCommerce is only a part of eBusiness, we can
now see how many implications eCommerce projects have in a company's
life and, of course, how important it is to have the best management for
such projects.
1.3 The dimensions of management in eCommerce projects
Even in our days, many IT projects, and of course many eCommerce
projects, are not successful. It can also be observed that the need for
IT and eCommerce projects is increasing all the time. Any project is
a temporary effort undertaken in order to accomplish a unique purpose.
The management goal is to distribute resources over time in order to
accomplish this task.
Some specialists consider that a project has the following
attributes: it has a unique purpose, it is temporary, it requires resources,
often from different fields, it should have a primary sponsor and/or
customer, and it involves uncertainty.
In real life every eCommerce project, even every project, is
restricted in different ways by its:
Scope objectives
Time objectives
Cost objectives
Quality objectives – client satisfaction
(Picture: bar chart of the triple constraint – the scope, time and cost objectives.)
Project management is "the application of knowledge, skills, tools,
and techniques to project activities in order to meet or exceed stakeholder
needs and expectations from a project" [PMBK96]. So now we have an
idea of what eCommerce project management means, at least conceptually.
The framework of eCommerce project management briefly contains
the following knowledge areas: Scope Management, Time
Management, Cost Management, Quality Management, Human
Resources Management, Communication Management, Risk Management,
and Procurement Management. These knowledge management areas are
bound together through Project Integration Management, using
appropriate tools and techniques. The project managers of such projects
have to use different tools, such as: the Project Charter and the Work
Breakdown Structure for Scope Management; Gantt charts, PERT charts
and critical path analysis for Time Management; cost estimates and
Earned Value Analysis for Cost Management. Another important thing in
eCommerce project management is that the stakeholders – people
involved in or affected by project activities – know the state of the project
at all times. Sometimes eCommerce project management can be tougher
than general project management. A project manager must have
experience and knowledge in general project management, in general
management and in the application area of the project, and usually has to
know eCommerce model patterns very well, along with the technologies
with which the model can be developed. Last but not least, a project
manager has to obey the project management code of ethics developed
by the Project Management Institute. Any eCommerce project that is
developed with adequate project management has many benefits:
improved communication among participants, mechanisms for
performance measurement, identification of problem areas, clarification
of project goals, a clear understanding of project scope and quantification
of project risk.
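The Earned Value Analysis mentioned among the Cost Management tools reduces to a few standard formulas, sketched below; the helper function and the figures used in it are illustrative.

```python
# Standard earned-value formulas (illustrative helper).
def earned_value_metrics(pv, ev, ac):
    """pv: planned value, ev: earned value, ac: actual cost."""
    return {
        "CV":  ev - ac,   # cost variance (negative means over budget)
        "SV":  ev - pv,   # schedule variance (negative means behind schedule)
        "CPI": ev / ac,   # cost performance index
        "SPI": ev / pv,   # schedule performance index
    }
```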
Every knowledge area has to pass through all the phases: Concept,
Development, Implementation and Close-Out; together, these phases form
a common project life cycle. Many new project managers have trouble
looking at the "big picture" and want to focus on too many details. In each
knowledge area, Project Plan Development is important because it
must take the results of the other planning processes and put them into
a consistent, coherent document named the project plan. Project Plan
Execution must carry out the project plan, and Overall Change Control
must coordinate changes across the entire project.
The eCommerce Project Plan is a document used to coordinate all
project planning documents. The main purpose of this plan is to guide
project execution. The eCommerce Project Management Plan Manual
assists the project manager in leading the project team and assessing
project status. These activities – and the project plan – are part of
integration management, the binder between all the other knowledge areas.
The content of the Project Management Plan – as a result of
Integration Management, an essential part and the coordinating manual in
eCommerce Project Management – is highlighted in picture 5.2:
These are the most important chapters of the main document of an
eCommerce project. If the implications within the knowledge areas are
large, then a separate project plan can be created for every knowledge
area. In this document and in the other documents it is recommended to
use the different management procedures described in this chapter. Also,
we strongly recommend using a software application – project
management software – that supports the manager's work. As an annex
to this document, or to the Scope Management Document, it is common to
attach a Stakeholder Analysis Document. A stakeholder analysis documents
important (often sensitive) information about stakeholders, such as:
stakeholders' names and organizations, roles on the project, unique facts
about stakeholders, level of influence and interest in the project,
suggestions for managing relationships, and a resume of each participant.
Another important question is whether the eCommerce project is made
by "outsourcing" or "in house". If it is developed by you for a particular
company – even if it is "developed in house" – it is recommended that the
"Technical Chapter" of the Project Plan Management contain links to
technical papers such as: "User Specifications", "Design Manual",
"Development Manual" (containing the software libraries and packages as
well as the class hierarchy descriptions used in the eCommerce project),
"Acceptance Tests Manual" and "Maintenance Manual".
Project Plan Execution involves managing and performing the
work described in the Project Plan Manual. The majority of time and
money is usually spent on development and deployment. The application
area of the project directly affects project execution, because the services
offered by the project are produced during execution.
Overall change control involves identifying, evaluating, and managing
changes throughout the project life cycle. The three main objectives of
change control are: to influence the factors that create changes, ensuring
they are beneficial; to determine that a change has occurred; and to
manage actual changes when they occur.
Of course, the management process depends on technological
constraints and on the experience and knowledge of the project manager.
A project manager has to create as many projects as possible and
to pay attention to all the knowledge area managements. This helps him or
her collect funds and support, because he or she has very solid financial
arguments highlighted in Scope Management through financial analysis
methods: NPV (Net Present Value) analysis, ROI (Return on Investment)
analysis and payback analysis.
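The three financial analysis methods named above can be sketched as follows; the cash-flow figures used to exercise them are hypothetical.

```python
# Sketches of the three financial analysis methods (hypothetical figures).
def npv(rate, cash_flows):
    """Net Present Value; cash_flows[0] is the (negative) initial outlay."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cash_flows))

def roi(total_benefits, total_costs):
    """Return on Investment, as a fraction of the costs."""
    return (total_benefits - total_costs) / total_costs

def payback_period(cash_flows):
    """Index of the first period in which the cumulative cash flow turns
    non-negative, or None if it never does."""
    total = 0.0
    for t, cf in enumerate(cash_flows):
        total += cf
        if total >= 0:
            return t
    return None
```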
There is no software product nowadays which dominates the market.
A common thing in practice is to analyse which model or framework
to choose. Any eCommerce project/solution has to provide at
least two parts: "online shopping" and "online purchasing". So, while for
"online shopping" it is enough to have a well organized web application or
portal solution, "online purchasing" – ePayment – is more complicated.
Usually, if eCommerce solutions have to work with banks, they
must include the credit card schemes of macro-payment models. These
are constraints, and the manager is faced with serious problems because
he cannot freely choose the technological solution to his problem.
All eCommerce architectures have particular frameworks and
provide different security technologies to achieve a safe ePayment
process. The most important part regarding the security and reliability of
an eCommerce solution is the eCommerce architecture and its features.
That is why a project manager involved in eCommerce project
management must have solid knowledge of eCommerce architectures and
technologies. That means he or she is connected with the computer
science field and comprehends the facts of the eCommerce world very
well. If he or she has also comprehended the eBusiness world in an
adequate manner, this is an advantage.
For example, if a constraint appears in his eCommerce solution
saying that the ePayment module has to use the SET payment scheme,
because the bank or the card processing company accepts only this
scheme, then during project management all the estimations regarding
time, cost, risk and quality have to be recalculated.
1.5 Conclusions
All the time, the team leader and the project manager have to realize
that it is better to use standard technologies and solutions as much as
possible. There are special requirements regarding the communication
skills, knowledge background and flexibility of the project manager in
order to ensure a good development environment. This paper presented
only hints and general ideas; a person who is involved in management
should explore management practice in a special field like eCommerce
solutions.
It is recommended that in project management activities the
project managers, team leaders, general and department managers and
C-staff use professional, specialized tools such as: Microsoft Project Server
2003, Replicon Web Time Sheet, SmartDraw Management, Change
Management System 2.0.1, ConceptDraw Project 1.1 and Intellisys Project
Desktop 1.22, together with specialized techniques and technologies, in
order to carry out complete, genuine management covering the most
important knowledge management areas: scope, time, cost, quality,
human resources, communication, risk, procurement and integration
management.
Proper project management of an eCommerce solution has many
advantages, and for this kind of project it is worth doing highly
professional project management rather than an "empirical" development
and deployment of such solutions. Some advantages of proper project
management are:
Good project management provides assurance and reduces risk;
PM provides the tools and environment to plan, monitor, track,
and manage schedules, resources, costs, and quality;
PM provides a history or metrics base for future planning as
well as good documentation;
Project members learn and grow by working in a cross-
functional team environment.
All the time we have to keep in mind that the company staff,
customers, and other stakeholders do not like failed projects, that project
management in general is not a simple task, and that in particular the
eCommerce field requires special knowledge and skills.
References:
Module 6 – Risk Analysis in Secured
Systems
1.1 Introduction
There are daily attacks on all types of servers from the Internet. And
daily, specialized companies make huge efforts and spend huge
amounts of money to cover the security gaps discovered in time or
after some attacks, sometimes with disastrous consequences.
The big software-producing companies have departments specialized in
security. Any program produced also has safety and security elements
included.
But what should a small company with low capital do? How will it
handle some possible attacks when it is connected to the Internet network?
Is it necessary and enough to purchase the latest software?
There are certain situations when, after some unpleasant events, there is
nothing that can be done, the company's data having been compromised.
In this case we can talk about a disaster.
But before a possible disaster happens, there are two things we
can do to avoid such limit situations:
Risk analysis in data security, in particular, and of security in
general;
Cost planning for disaster control measures and to cover the
losses.
The first case is the worst. Exceptions are the cases when the
company has recently been created or does not yet own computing
equipment. The lack of security measures is unacceptable for a company,
no matter how small it is. This case is the easiest to deal with from the
implementation point of view.
The second case is more delicate and suggests first an evaluation
of the existing security and then establishing some alignment measures
with the imposed requirements.
In quite many cases the implementation of security measures
within the company creates problems. Some of these problems are
technical ones, others are human problems.
The security program is the process by which security is provided to
the company. This process is important, and one cannot follow only
a few of its steps. It presupposes going through the following 5
steps:
1. Establishing the staff responsible for ensuring security.
2. Establishing the main stages for ensuring security.
3. Defining the requirements for security improvement.
4. Informing the personnel about the imposed security measures.
5. Audit and security monitoring.
each company should follow them in order to implement a correct security
program.
There are four stages that are essential and that should be defined
from the beginning. They are:
Risk evaluation and documents or data classification within the
company;
Establishing the access rights;
Defining the security policy;
Technical planning, design and implementation of the security
measures.
(The original defines here the key terms: security plan, vulnerability, threat and risk.)
Risk analysis intends to make this process work on solid theoretical
and practical bases. There are several methods of approaching risk.
The best known methods are:
Quantitative analysis;
Qualitative analysis;
Employment place analysis.
Impact
(confidentiality, integrity, availability and nonrepudiation). The impact has
in this case a financial quantification in the volume of the losses happened
as result of an undesired event.
ALE will be divided into ALE for each threat and ALE for each asset.
ALE for each threat will quantify the financial value of the disasters that
can be caused by a threat against all damaged assets.
ALE for each asset will quantify the financial value of the disasters that
can be caused to an asset by all possible threats.
Total Annual Loss Expectancy (ALE)
Total ALE = total annual loss expectancy for the pairs asset/threat.
In both cases of ALE calculation, in threat categories and asset categories,
the result must be identical.
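The two ALE breakdowns, and the check that both sum to the same total, can be sketched as follows; the asset/threat pairs and their loss figures are hypothetical.

```python
# Hypothetical annual loss expectancy per (asset, threat) pair.
pairs = {
    ("server", "fire"):  5000,
    ("server", "theft"): 2000,
    ("data",   "fire"):  3000,
    ("data",   "theft"): 8000,
}

def ale_by_threat(pairs):
    """ALE for each threat: losses a threat can cause across all assets."""
    out = {}
    for (asset, threat), loss in pairs.items():
        out[threat] = out.get(threat, 0) + loss
    return out

def ale_by_asset(pairs):
    """ALE for each asset: losses all threats can cause to one asset."""
    out = {}
    for (asset, threat), loss in pairs.items():
        out[asset] = out.get(asset, 0) + loss
    return out

# Both breakdowns must sum to the same Total ALE.
total_ale = sum(pairs.values())
```

The same breakdown also identifies the threat with the highest ALE, e.g. `max(ale_by_threat(pairs), key=ale_by_threat(pairs).get)`, which is useful when prioritizing control measures.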
small disasters in terms of value (operation mistakes), in both
cases the financial consequences are the same;
The choice of the used numbers can be considered subjective,
the laborious work requires time and consumption of resources.
This method is more often used than the previous one, being more
suitable for small size companies.
This method does not use statistical data. Instead, the potential
loss is used as input data.
The method uses terms as:
Frequent/high, medium, rare/reduced - referring to the
occurrence probability of risks and their impact.
Vital, critical, important, general and informational-referring to
the type and classification of information.
Numbers, 1, 2, 3.
(Charts: level of losses, costs of disasters, and occurrence probabilities of disasters.)
The qualitative risk analysis matrix is built with the help of data from
the last two charts. Based on it, we can determine the risk levels and
the ways of acting in order to reduce or eliminate risk. The resulting risk
levels are:
E – Extreme Risk. It is imperative to act immediately in order to
minimize it. It is also imperative to detail the assets and the
management plans in order to minimize the risk. We must impose
strategies in order to do this.
I – High Risk. They must be immediately taken into consideration
by the manager. In this case the management strategies will be
identified. Just as in the previous case, the risks must be
minimized.
M – Moderate Risk. They must be taken into consideration by the
manager.
R – Reduced Risk. Actions specified in the routine procedures.
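The lookup in such a matrix can be sketched as follows. The cell assignments below are illustrative only; each organization calibrates its own matrix of occurrence probability against impact.

```python
# Illustrative qualitative risk matrix: probability x impact -> risk level.
IMPACTS = ["informational", "general", "important", "critical", "vital"]
MATRIX = {                       # E extreme, I high, M moderate, R reduced
    "rare":     ["R", "R", "M", "I", "I"],
    "medium":   ["R", "M", "M", "I", "E"],
    "frequent": ["M", "M", "I", "E", "E"],
}

def risk_level(probability, impact):
    """Return the risk level for an occurrence probability and an impact."""
    return MATRIX[probability][IMPACTS.index(impact)]
```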
The identification of goods and threats against them is similar to
the previous methods.
The estimation of probability and impact on vulnerability is much
simpler, using a probability matrix/exposure level.
For each asset or employment place we localize the exposure level
and the occurrence probability of an event. The major advantage is that
statistical data are no longer used. For each asset or employment place
analyzed we identify the values which offer the highest possible exposures
to undesired events and we establish the control measures for these
situations.
But all the risk analysis methods have shortcomings. Some of them
have been pointed out for each method.
The most important ones are:
The values used are imprecise;
The frequency of estimated losses is imprecise;
The calculations are based on statistical and probabilistic
theories;
Data must be updated annually.
The latter measure will require reducing the expenses necessary to
ensure the controls that cover all the possible threats. This could be
reflected in the modification and configuration of the control measures.
We will identify the threat that causes the highest corresponding ALE
value. We will then identify the measures that can lead to the reduction of
the vulnerabilities. Some measures can be applied to several types of
threats or to several categories of assets (goods).
1.3 Conclusions
In quite many cases, companies must deal with hired personnel who do
not have enough knowledge for the respective workplace.
References:
Module 7 – Informatics Ethic Codes
Abstract: This paper presents ethics codes and an ethical code
system for different job positions. Current practices of ethics codes
draw major criticism because ethical principles are treated as
already understood and are not written explicitly, which leads to a
tendency to diminish the importance of the ethical dimension. By using
ethical principles, analyzing the existing implications and identifying
the critical points, it is possible to assure the solution of the ethical
issues which appear in the development process of informatics applications.
There are many professions in informatics, each having its place
and role in the life cycle of informatics applications. In what follows
we will review some professions in informatics, the list of existing
professions being in continuous change depending on the very dynamic
evolution of informatics.
The software developers represent a category of highly qualified
persons who write source code. They have extensive knowledge of
programming languages; they define operands, data structures, file
structures and data operations, define program structures, and build
procedures, modules and join operations. The software developers know
advanced development techniques and are able to use the resources of
visual development environments.
The software engineers represent the category of persons involved
in the whole life cycle of an application: the analysis, the
specifications, the design, the development, the testing, the
certification and the maintenance. They must have a general vision of the
application in order to manage all the teams involved and to integrate
their work within the application development process.
The web designer is a person involved in the development of web
pages, the user interfaces of the Internet. In web page development,
web designers must demonstrate strong knowledge of web development
techniques and interface design methods.
The informatics security analyst is the security specialist who
has the following duties: the assessment and specification of
informatics security systems, damage prevention, user training
and the design of security equipment. He thus contributes to the
creation, assessment and implementation of security policies at the
organization level. For the good conduct of this professional activity,
technical knowledge and both communication and business abilities
are necessary.
The applications tester is the person who assures application
quality by testing its functionality, starting with the development
phase and ending with the sale phase. Application complexity has grown
so much that the testing process is carried out by teams of testers
using automatically generated test data. Application quality assurance
is achieved both through functionality testing and through source code
analysis, in order to find mistakes which are hard to find using test
data.
The database administrator represents the category of persons
responsible for the good working of databases. For this, he must
establish and follow backup procedures, restore the database in case of
necessity, assure controlled user access to the database, assure
database and application tuning, and verify the integrity of the data.
The informatics project manager is, according to [APMR02], the
person who has the authority and the responsibility to manage the
project toward a specific performance goal, so that the informatics
project can be judged a success. [BODE00] defines the knowledge
necessary for project managers, which includes specific project
management knowledge (the coordination of project elements, scope
management, the management of project resources: financial, human and
time, quality management, communication, risk and acquisitions),
general management knowledge and informatics knowledge. [LUCA03]
likewise specifies that the development of high-quality informatics
applications must be the main goal of the informatics project manager,
and that informatics applications must observe ethical principles.
Informatics project management must involve an ethical view. The project
manager must be guided by the principle of justice and of sharing
benefits and responsibility equally. In this way the informatics project
manager shall properly approach and treat the ethical implications.
The network supervisor is a person with strong computer
networking knowledge whose main duty is the good functioning of the
network. For this, he must be able to make changes in the network
configuration, install new applications necessary for the organization,
detect and fix heavy network use by applications, train the users,
evaluate the informatics applications used from the point of view of
the networking resources they consume, and implement security policies
at the network level.
The informatics system supervisor is a person whose duties are
checking the functioning and keeping up the operation of an informatics
system. The supervisor assures the security of the existing resources
by establishing the access rights to them.
The technical support specialist is a person with strong
technical knowledge and special communication skills. His main duty is
to analyze the situations occurring while the application is in use
and to find solutions for them.
The informatics applications consultants represent the
category of persons occupied with the analysis of organizations'
activity, finding the sensitive points in the companies' activity:
recommending already existing informatics applications which can
answer the organization's needs, the application implementation, the
users' training and the application maintenance. The consultants must
have strong knowledge of the area in which they are active, technical
knowledge about the application which they will implement, and the
communication abilities necessary in the relationship with the persons
within the organization. The consultants who carry out the application
implementation must establish the working parameters, select which
modules are activated, and define the usual inputs and outputs for users.
The applications and systems auditors have procedures for
establishing the degree of concordance between the specifications and
the different stages of informatics application development. When this
concordance is established, the auditor, through his recognized
authority, achieves a transfer of credibility to the application. An
informatics system or application which has successfully passed an audit
process offers the guarantee that the results offered are complete and
accurate and that reliability is at a high level. The audit activity, as
specified in [IVAN01], consists of two fundamental processes:
verification and validation. Verification is the process of determining
whether, at the end of each software development stage, the objectives
suggested at the beginning have been reached and to what extent.
Validation is the process of determining whether the application which
results after a development stage fulfills the objectives established
in the requirements.
1.2 Professions in informatics products utilization
An informatics product must be used in a proper way in order to
obtain correct, complete and timely results.
The email users are persons without a qualification in the
informatics area who have the knowledge needed to operate an email
product. They write texts, attach files and send letters. They know how
to create folders and to manage messages.
It is important that the users of the email application's resources
sign documents in which they agree to respect a series of rules which
allow sustaining a high-level message exchange between persons from
the same organization or from different organizations.
The forum users are persons who take part in meetings as part
of a group based on professional principles or which gravitates around
the same problem or solution. The open character of forums creates a
series of risks concerning the content of the messages. The message
authors propose points of view and even considerations to all group
members for debate.
The database users are persons who have at least one database
and a software product which enables them to access the database and
to obtain reports with a required structure or reports with variable
structure and content. They have all the resources to obtain new
reports, to calculate new indicators and to obtain new graphical
representations. The interface is complex enough that its users must
know some aspects concerning the database fields, the way selection
expressions are built, and the way new report forms are structured.
The application users are persons trained on the basic aspects
concerning:
the definition of the problem which must be solved;
the determination of the necessary input information;
the structuring of the input information according to the
informatics application's requirements;
the data input;
the choice of the parameters which determine the processing
sequence in concordance with the form of the results which must be
obtained.
The application users must know the conditions required for the
input data and the correlations between them, in such a way as to
assure that the needed results are obtained. There are situations in
which the application contains rarely used processing functions which
must nevertheless be used by this type of user.
The application operators are persons who want to solve
personal problems or to fulfill requirements according to the job
description of the organization where they work. If the applications are
built so that they only select information and launch actions of
acquisition, of temporary resource allocation, of requesting
information, of drawing up official documentation, or of making invoices
or payments, the operators have limited possibilities of not respecting
a specially defined ethics code.
These applications presume a basic stage called authentication.
The registration of an on-line application user, with his permission,
presumes the supply of a complete set of identification data. It is very
important that the on-line user not permit other persons to run
selections and activate options after the login operation which results
from authentication.
Good training of the operators shall permit them to make selections
and to activate options related to the available resources and
especially to the objective which must be reached by this type of user.
The degree to which users respect the ethical code depends on:
the way in which the interfaces were built, that is, the ability
of these interfaces to identify whether input data belong to the
ranges which assure the quality of the results;
the correlations defined by whoever writes the specifications, in
order to show all the situations in which input data are complete
and correct and to show exactly which problem must be solved;
the homogeneity with which the interface is defined, which leaves
restricted room for operator intervention on input values and
input strings with undesirable effects; the values and the
strings are subjected to a complex validation process, and only
absolutely correct inputs are accepted, without automatic
corrections being made;
the establishment of rules regarding the way in which the
correctness of large volumes of data is assured; this concerns
input verification at many points or simultaneous data input;
the registration of the causes which lead to results that are
wrong with respect to the initial requirements of the beneficiary,
making changes in the application in order to change the cause-
effect relationship and move the error or mistake typologies onto
another plane.
errors in defining the specifications, which are passed off as
unforeseen problems or are analyzed as residual resources offered by
the software product, as a finished product which can be used by the
users.
The objective of each informatics application is to take over
input data, perform processing and obtain results useful to the
beneficiaries.
The input data, the processing and the results must be complete,
correct and on time in order to bring the benefits awaited by the
users, who invest in informatics techniques and code development
(resources which require the appearance of the ethics code).
The unauthorized utilization risk consists in the utilization
of an informatics or communication resource by a person who knows he
does not have the right to use it. This risk is partly removed through
the implementation of a security policy regarding resource access.
Any security policy has certain limits, which can be found out by users
and circumvented. An existing ethical code assures the general frame of
user conduct, so that even upon finding out such security gaps the
users' behavior shall be guided by the principles included in this code.
This risk also includes the risk of intellectual property
violation, which concerns the use of informatics applications without
the owners' agreement, even if these are not protected through a
licensing system, and the use of the ideas and intellectual work of
other persons.
The unauthorized modification risk consists in the modification
of a resource's features, options, settings or content at the
informatics network level, affecting the good operation of resources
within the system framework. This risk also concerns source code
modification, a process which assumes a version control system for the
application, whose versions must pass a test set before market launch.
Unauthorized source code modification leads to the perturbation of the
control of the modifications made in the version and to a decrease in
the quality of the new application version. There are enough cases in
which a new application version is of lower quality than the previous
version, a part of these cases being due to the unauthorized
modification risk.
The risk of using old technologies consists in the use of
technologies which no longer answer the requirements existing in the
informatics area at the beginning of the informatics application's
development. This leads to a decrease in product quality, a decrease
which is removed through good functionality testing. This risk exists
when the persons taking part in the application development are not
involved in a continuous process of improving their technical knowledge,
and in this way they do not know the latest technologies used in the
informatics area.
The over-appreciation of professional qualification consists
in granting confidence to a person in the informatics area for a project
development on the strength of analogies with other projects from
different areas. The person considers this project as being innovative
for his career, but he needs a learning process after which he will
reach an inferior level compared to other persons specialized in this
area. This leads to a decrease in technical performance, which is
balanced by the partners' confidence in the good flow of the project.
This must be known from the beginning by all involved parties in order
to avoid the appearance of conflict situations.
The risk of breaching the legislation and the organization's
policies and procedures is owed especially to the ignorance of these
by the persons concerned in the informatics area. Such a breach has
serious consequences for these persons, who must assume whole
responsibility for their actions.
The risk of confidential information disclosure appears
during the project, when the consultants come close to such information
related to the employer or customer. This risk is decreased through the
use of strong cryptographic methods and the archiving of the information
after project completion. The risk also covers the situations in which
the information is used in personal interest.
The risk of conflicts of interest during activities in the
informatics area is owed to those conflicts which appear and which, if
not avoided, lead to the loss of professional credibility of the
involved persons and employers. This risk and the risk of confidential
information disclosure are detailed in the ethical principles contained
in [NEDE05].
The risk of wrong resource allocation appears in the sense of
software applications and systems being developed using techniques,
methods and instruments with low performance compared to other
techniques, methods and instruments. Managing this risk is about
choosing those methods which assure the minimization of transaction
duration, minimum implementation cost and maximum flexibility.
The risk of discrepancy between specifications and product,
between the planned level and the level really measured when the
informatics product is used, appears when the specifications were not
detailed enough to be understood and used in the application development
stage. During application development, the degree to which the
specifications are respected must be traced, and the modifications due
to unforeseen conditions occurring during development must be reflected
as modifications in the specifications.
Another risk category is the risks at the users' level. These risks
stay at a high level even if the application puts a special accent on
decreasing them, the quality of the informatics application's output
depending in an overwhelming proportion on the data entered by users.
The risk of entering correct data but for another problem
is a risk due to the user's misunderstanding of the application context.
Thus, data which are correct but belong to another context prejudice the
application's working and alter the results awaited from the processing.
The risk of selecting options for which there are no paid
funds appears in the case of applications organized in modules, in
which each module or option enabled carries a financial responsibility
which must be paid and for which there are no funds. Therefore the users
must know the standard modules and options included in the initial
contract and the other modules and options which can be used against
additional costs.
The risk of changing database data without necessity and
without saving appears when the informatics applications allow users
to make changes which are not necessary. In the end these changes are
either not saved or saved only for a while, with subsequent processing
leading to their loss.
The risk of giving resource access to persons who also
perform operations other than the one for which they have access
occurs when the application's policies concerning access rights are not
refined enough for the access rights to be set at the operation level.
This is sometimes unrealizable, and therefore the risk exists
permanently. Thus, in order to grant rights for a certain option to a
user, an entire rights category which includes the desired operation
must be granted, leaving it to the attitude of the user to use the
rights just for this operation and not for the whole category.
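The granularity problem above can be made concrete with a small sketch. The rights model, category name and operations below are hypothetical illustrations, not from the text:

```python
# A hypothetical category-level rights model: rights can only be
# granted as whole categories, not as individual operations.
RIGHT_CATEGORIES = {
    "invoicing": {"create_invoice", "cancel_invoice", "view_invoice"},
}

user_rights: set = set()

# To allow only "view_invoice", the whole category must be granted,
# which includes operations the user is not supposed to perform.
user_rights |= RIGHT_CATEGORIES["invoicing"]

def allowed(operation: str) -> bool:
    """Check whether the user holds the right for an operation."""
    return operation in user_rights

print(allowed("view_invoice"))    # True - the intended operation
print(allowed("cancel_invoice"))  # True - unintended; restraint is left
                                  # to the user's conduct, i.e. the
                                  # ethics code rather than the software
```

This is exactly the situation the paragraph describes: the access policy cannot express operation-level restrictions, so the residual control is behavioral, not technical.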
The risk of copying the application and giving it for
unauthorized utilization to other persons is a risk frequently
encountered in the utilization of all informatics applications, because
regardless of the protection method used, some way of unauthorized
utilization will exist. This risk can be decreased through the
improvement of the application's protection methods, but also through
the preventive act of adopting ethical codes.
The risk of qualifying operators without registration and of
several operators working with the same access password enters
the risk category of unauthorized utilization and occurs due to the
tendency to use the number of user licenses to the maximum, without
caring that the number of users defined in the application must be the
same as the number of persons who use the application. This leads to
granting large access rights to each user defined in the application,
in order to cover the needs of all persons who use the same application
user, thus bypassing the access policies in the application. A 1-to-1
correspondence between the number of users and the number of persons is
also necessary because in most applications the input data and the
processing performed bear the mark of the user who makes the operation,
and not of the person, so the application user carries the
responsibility for the operations made.
The risk of delivering incomplete results to beneficiaries,
although the results were obtained completely and correctly but some
were lost through handling, is due to the wrong handling of the
application's results. This risk can be reduced by decreasing the
handling done by intermediate users of the results and by delivering
the results in a final form directly to the beneficiaries through the
application. The results needed by the beneficiaries can change
depending on context, and thus the necessity of handling the results,
and this risk, occur.
The risk of repeatedly applying only certain processing
sequences, claiming that only these exist, although for solving a
bigger variety of problems there are also other options which can be
enabled, occurs due to the conservatism demonstrated by users who use
certain processing sequences already known for a long time, without
trying to use all the processing in the application and thus to obtain
complete and correct results.
To every user type can be associated risk types with multiple
entrainment effects which affect the whole context. The ethics codes
must start from the realities registered in each organization in which
informatics applications are used, and must include those measures
which prevent the rise of negative effects, which as often as not are
fated to compromise the whole process, showing that the quality of the
decisions and of the manual processing was better before the
application's implementation.
The ethics code represents an ensemble of rules and prescriptions
related to persons' conduct. The ethics codes must be completed by
a guide which puts at the disposal of the persons addressed
explanations for a better understanding of the rules, which will lead
to the abidance of these rules and prescriptions. The rules represent
mandatory norms which must be respected and whose breach attracts
professional sanctions, sometimes even legal ones. The prescriptions
represent desirable norms whose respect is desired, although their
violation attracts only sanctions on the moral level.
In [COCU05] it is specified that the ethics code comes as a
completion of the legal dispositions, establishing moral standards and
protecting the profession's public image. This leads to the development
of a reliable relationship between the profession and society.
In [COMA05] it is presented that the necessity of ethics codes is
given by the role they have in preventing contestation between the
interested parties, and that they create the frame for activities to
develop in good conditions, based on the abidance of the same ethical
principles by all parties. Due to their formal character, the ethics
codes have the role of encouraging ethical practices.
The embracement of an ethics code includes the following stages:
the reading and understanding of the ethics code by the
profession's members; the ethical principles must be generally
valid at the level of the profession, easy to understand, and
contain detailed explanations for a better understanding;
the acceptance of the ethics code, which must be done without any
compulsion, by profession members who have previously understood
the principles contained in the code and the utility of abiding
by them;
the signing of the abidance commitment, which represents the
assurance that the ethics code will be respected and which can
stand as a basis for analyzing the professional activities of the
profession's members.
conditions of the passage from the information society to the
knowledge-based society.
The ethics codes don’t contain just pure theoretical principles but
establish rules and practical useful prescriptions in the specific situations
of professional life. This don’t mean as the embracement of an ethics code
assures automatic an ethical conduct or that covers all the situations. The
ethics codes have an evolution because this code must cover almost all
practical situations. Thus the conflicting situations appeared are analyzed
based on these codes starting from general principles and searching
specific elements for this conflicting situation. If can not be find specific
elements then can be done the analysis of situation starting from general
principles and can be taken a conclusion related to the ethical principles
which were violated in this situation. As well the decision of completing
the ethics code with these new elements is taken to prevent explicit the
occurrence for the future of conflicting situations of same type.
The ethics code [ACM92] was adopted by the Council of the
Association for Computing Machinery (ACM) on 16 October 1992. This code
is addressed to all types of organization members: honorable
members, voting members, members, associate members, and collective
members without juridical personality. The ACM members come from the
whole informatics area, having the different professions framed within
this large area. This is the reason for which this code is considered a
general ethical code for the informatics area, which provides the
general principles for this area, principles which present differences
for each profession.
This Code consists of 24 imperatives formulated as statements
of personal responsibility. It contains many, but not all, issues
professionals are likely to face. Section 1 outlines fundamental ethical
considerations, while Section 2 addresses additional, more specific
considerations of professional conduct. Statements in Section 3 pertain
more specifically to individuals who have a leadership role, whether in the
workplace or in a volunteer capacity such as with organizations like ACM.
Principles involving compliance with this Code are given in Section 4.
The Code is supplemented by a set of Guidelines, which provide
explanation to assist members in dealing with the various issues
contained in the Code. It is expected that the Guidelines will be changed
more frequently than the Code.
The Code and its supplemented Guidelines are intended to serve as
a basis for ethical decision making in the conduct of professional work.
Secondarily, they may serve as a basis for judging the merit of a formal
complaint pertaining to violation of professional ethical standards.
It should be noted that although computing is not mentioned in the
imperatives of Section 1, the Code is concerned with how these
fundamental imperatives apply to one's conduct as a computing
professional. These imperatives are expressed in a general form to
emphasize that ethical principles which apply to computer ethics are
derived from more general ethical principles.
This code contains the following sections: General Moral
Imperatives, More Specific Professional Responsibilities, Organizational
Leadership Imperatives and Compliance with the Code.
The General Moral Imperatives concern the fact that each ACM
member must:
contribute to society and human well-being; this principle
concerning the quality of life of all people affirms an obligation
to protect fundamental human rights and to respect the
diversity of all cultures. An essential aim of computing
professionals is to minimize negative consequences of
computing systems, including threats to health and safety.
When designing or implementing systems, computing
professionals must attempt to ensure that the products of their
efforts will be used in socially responsible ways, will meet social
needs, and will avoid harmful effects to health and welfare;
avoid harm to others: users, the general public,
employers and employees; harm means injury or negative
consequences, such as undesirable loss of information, loss of
property, property damage, unwanted environmental impacts,
intentional destruction or modification of files and programs
leading to serious loss of resources or unnecessary expenditure
of human resources such as the time and effort required to
purge systems of computer viruses; well-intended actions,
including those that accomplish assigned duties, may lead to
harm unexpectedly; in such an event the responsible person or
persons are obligated to undo or mitigate the negative
consequences as much as possible; one way to avoid
unintentional harm is to carefully consider potential impacts on
all those affected by decisions made during design and
implementation; to minimize the possibility of indirectly
harming others, computing professionals must minimize
malfunctions by following generally accepted standards for
system design and testing; furthermore, it is often necessary to
assess the social consequences of systems to project the
likelihood of any serious harm to others; if system features are
misrepresented to users, coworkers, or supervisors, the
individual computing professional is responsible for any
resulting injury; in the work environment the computing
professional has the additional obligation to report any signs of
system dangers that might result in serious personal or social
damage;
be honest and trustworthy; the honest computing
professional will not make deliberately false or deceptive claims
about a system or system design, but will instead provide full
disclosure of all pertinent system limitations and problems; a
computer professional has a duty to be honest about his or her
own qualifications, and about any circumstances that might
lead to conflicts of interest; membership in volunteer
organizations such as ACM may at times place individuals in
situations where their statements or actions could be
interpreted as carrying the weight of a larger group of
professionals; an ACM member will exercise care to not
misrepresent ACM or positions and policies of ACM or any ACM
units;
be fair and take action not to discriminate; the values of
equality, tolerance, respect for others, and the principles of
equal justice govern this imperative; discrimination on the
basis of race, sex, religion, age, disability, national origin, or
other such factors is an explicit violation of ACM policy and will
not be tolerated; inequities between different groups of people
may result from the use or misuse of information and
technology; in a fair society, all individuals would have equal
opportunity to participate in, or benefit from, the use of
computer resources regardless of race, sex, religion, age,
disability, national origin or other such similar factors; however,
these ideals do not justify unauthorized use of computer
resources nor do they provide an adequate basis for violation of
any other ethical imperatives of this code;
honor property rights including copyrights and patents;
violation of copyrights, patents, trade secrets and the terms of
license agreements is prohibited by law in most circumstances;
even when software is not so protected, such violations are
contrary to professional behavior; copies of software should be
made only with proper authorization; unauthorized duplication
of materials must not be condoned;
give proper credit for intellectual property; computing
professionals are obligated to protect the integrity of
intellectual property; specifically, one must not take credit for
others' ideas or work, even in cases where the work has not
been explicitly protected by copyright or patent;
respect the privacy of others; computing and
communication technology enables the collection and exchange
of personal information on a scale unprecedented in the history
of civilization; thus there is increased potential for violating the
privacy of individuals and groups; it is the responsibility of
professionals to maintain the privacy and integrity of data
describing individuals; this includes taking precautions to
ensure the accuracy of data, as well as protecting it from
unauthorized access or accidental disclosure to inappropriate
individuals;
honor confidentiality; the principle of honesty extends to
issues of confidentiality of information whenever one has made
an explicit promise to honor confidentiality or, implicitly, when
private information not directly related to the performance of
one's duties becomes available; the ethical concern is to
respect all obligations of confidentiality to employers, clients,
and users unless discharged from such obligations by
requirements of the law or other principles of this Code.
profession, depends on professional reviewing and critiquing;
whenever appropriate, individual members should seek and
utilize peer review as well as provide critical review of the work
of others;
give comprehensive and thorough evaluations of
computer systems and their impacts, including analysis
of possible risks; computer professionals must strive to be
perceptive, thorough, and objective when evaluating,
recommending, and presenting system descriptions and
alternatives; computer professionals are in a position of special
trust, and therefore have a special responsibility to provide
objective, credible evaluations to employers, clients, users, and
the public; when providing evaluations the professional must
also identify any relevant conflicts of interest, as stated in the
imperative on being honest and trustworthy; as noted in the
discussion of the principle on avoiding harm to others, any
signs of danger from systems must be reported to those who
have the opportunity and/or responsibility to resolve them;
honor contracts, agreements, and assigned
responsibilities; honoring one's commitments is a matter of
integrity and honesty; for the computer professional this
includes ensuring that system elements perform as intended;
also, when one contracts for work with another party, one has
an obligation to keep that party properly informed about
progress toward completing that work; a computing
professional has a responsibility to request a change in any
assignment that he or she feels cannot be completed as defined;
only after serious consideration and with full disclosure of risks
and concerns to the employer or client, should one accept the
assignment; the major underlying principle here is the
obligation to accept personal accountability for professional
work; on some occasions other ethical principles may take
greater priority; a judgment that a specific assignment should
not be performed may not be accepted; having clearly
identified one's concerns and reasons for that judgment, but
failing to procure a change in that assignment, one may yet be
obligated, by contract or by law, to proceed as directed; the
computing professional's ethical judgment should be the final
guide in deciding whether or not to proceed; regardless of the
decision, one must accept the responsibility for the
consequences.
improve public understanding of computing and its
consequences; computing professionals have a responsibility
to share technical knowledge with the public by encouraging
understanding of computing, including the impacts of computer
systems and their limitations; this imperative implies an
obligation to counter any false views related to computing;
access computing and communication resources only
when authorized to do so; theft or destruction of tangible
and electronic property is prohibited by the principle on
avoiding harm to others; trespassing and unauthorized use of a
computer or communication system is addressed by this
imperative; trespassing includes accessing communication
networks and computer systems, or accounts and/or files
associated with those systems, without explicit authorization to
do so; individuals and organizations have the right to restrict
access to their systems so long as they do not violate the
discrimination principle; no one should enter or use another's
computer system, software, or data files without permission;
one must always have appropriate approval before using
system resources, including communication ports, file space,
other system peripherals, and computer time.
as to benefit an organization, the leadership has the
responsibility to clearly define appropriate and inappropriate
uses of organizational computing resources; while the number
and scope of such rules should be minimal, they should be fully
enforced when established;
ensure that users and those who will be affected by a system
have their needs clearly articulated during the assessment and
design of requirements; later the system must be validated to
meet requirements; current system users, potential users and
other persons whose lives may be affected by a system must
have their needs assessed and incorporated in the statement of
requirements; system validation should ensure compliance with
those requirements;
articulate and support policies that protect the dignity of users
and others affected by a computing system; designing or
implementing systems that deliberately or inadvertently
demean individuals or groups is ethically unacceptable;
computer professionals who are in decision making positions
should verify that systems are designed and implemented to
protect personal privacy and enhance personal dignity;
create opportunities for members of the organization to learn
the principles and limitations of computer systems; this
complements the imperative on public understanding;
educational opportunities are essential to facilitate optimal
participation of all organizational members; opportunities must
be available to all members to help them improve their
knowledge and skills in computing, including courses that
familiarize them with the consequences and limitations of
particular types of systems; in particular, professionals must be
made aware of the dangers of building systems around
oversimplified models, the improbability of anticipating and
designing for every possible operating condition, and other
issues related to the complexity of this profession;
1.6 The ethic codes system
For each profession in the informatics area, an ethics code is created,
starting from the ACM code of ethics and adding profession-specific
elements. In this way, an ethics code system for the informatics area
results.
The ethics codes in the system must be:
consistent, containing no contradictions;
convergent;
orthogonal: where there are differences between professions,
there are corresponding differences between their codes.
Table 7.1. Ethics codes system
1.7 Conclusions
The ethics codes must be public, known by the members of the
professional organization and by the members of society. In this way the
ethics codes contribute to the growth of the general public's confidence
in the members of the profession.
The ethics codes must be integrated at all levels of the
organization; all organization members must respect the ethical
principles, which leads to a better reception and assimilation of these
principles in all professional activities.
The ethics codes are in continuous evolution, so they must be
improved whenever violations appear, adding whatever is absent from the
code, thus contributing to the prevention of such situations in the future.
The efficiency of a code is assessed by periodically analyzing the
proportion, among all subscribers, of those who committed to respect the
ethics code and did not do so.
The effect of introducing an ethics code is analyzed by comparing
the number of violations occurring before its introduction with the
number after the whole staff of the organization has subscribed to it.
Furthermore, including in the ethics code the consequences of
non-observance creates the conditions for taking measures which, on the
managerial level, further restrict the number of persons who do not
respect the organization's operational ethics code.
References:
[COMA05] George COMANESCU – Codul de etica al proiectantului HMI
(Human-Machine Interface), project presented at the
course Ethics Codes in Informatics, within the master
program Informatics Security, The Academy of Economic
Studies, Bucharest, November 2005
[IVAN01] Ion IVAN, Laurentiu TEODORESCU – Managementul calitatii
software, Editura Inforec, Bucharest, 2001
[LUCA03] Gheorghe-Iulian LUCACI, coord. Ion IVAN – Principii ale
eticii profesionale in dezvoltarea proiectelor informatice,
final project for the master program Informatized
Management of Projects, The Academy of Economic
Studies, Bucharest, March 2003
[MIRO01] Mihaela MIROIU, Gabriela BLEBEA NICOLAE – Introducere in
etica profesionala, Editura Trei, Bucharest, 2001
[NEDE05] Alexandru Stefan NEDELCU – Codul de etica al
consultantului de securitate, project presented at the
course Ethics Codes in Informatics, within the master
program Informatics Security, The Academy of Economic
Studies, Bucharest, November 2005
[SECE99] Software Engineering Code of Ethics and Professional
Practice, http://www.acm.org/serving/se/code.htm
[STOI05] Dragos Mihai STOIAN – Codul de etica al managerului de
proiecte IT, project presented at the course Ethics Codes
in Informatics, within the master program Informatics
Security, The Academy of Economic Studies, Bucharest,
November 2005
Module 8 – Practical issues in the
development of secure distributed
systems
A parallel system has several computers that can be used
simultaneously in solving a task. It is mainly used in scientific and
engineering applications.
A distributed system is a collection of autonomous computers that are
interconnected with each other and cooperate, thereby sharing resources.
It is mainly used in commercial and data-processing applications.
Common characteristics
1. Multiple processors are used
2. The processors are interconnected by some network
3. Multiple processes (program execution units) are in progress at the
same time and cooperate with each other.
Table 8.1. Differences between parallel and distributed computing
Convergence
1. The area of parallel computing and that of distributed computing have a
significant overlap.
2. The areas increasingly use the same architectures. On one hand, the
invention of fast network technologies enables the use of clusters in
parallel computing. On the other hand, parallel machines are used as
servers in distributed computing.
3. The issues of parallelism and distribution are often intertwined and
consequently researched together
common data) and synchronization (execute actions in a specific order or
at specific moments in time). In a parallel program, these processes are
executed in parallel on multiple processors. In a distributed program,
processes are executed on different computers that communicate through
a network. Thus, concurrent programs encompass parallel programs and
distributed programs.
During the evolution of operating systems and networks, designers
synthesized some general features that a parallel and distributed
system should implement. Designers have several goals when
building a PDS:
Transparency – this is equivalent to “appear as a single
coherent system”. Transparency has several aspects (see DS
Figure 1-2, page 5). The most important aspects that
characterize PDSs are replication (hide that resources are
replicated; this means that the result must not depend on the
particular replica used by a process), concurrency (hide that
resources may be shared by several competing users; in other
words, the user must not be forced to program specific actions
related to sharing), and failure (hide the failure and recovery
of resources; in other words, the system must continue to
operate in the presence of failures of some components,
possibly with degraded performance; the system re-enters a
normal state after the failed component has been repaired,
without user intervention).
Openness – means respecting standard rules. This permits the
interoperation with other components and systems that respect
the same rules and allows system extension with new services.
Also, applications can move from one system to another
without modifications if both systems respect the same
interface rules (implement the same interface).
Scalability – is the ability of the system to extend itself
without dramatic performance changes. The extension can be
in terms of the number of users or the size of the application,
the geographic dispersion of users and resources, or the
number of administrative organizations involved.
Software Concepts
other input/output devices) and to provide user programs with a simpler
interface to them. Since managed devices have different characteristics
and are interconnected in different ways, we distinguish several operating
system categories. The uniprocessor operating system is the traditional
model, built to manage resources in computers with a single processor.
The best-known version is the OS for time-sharing systems, in
which several processes compete for the use of computer resources.
In tightly coupled systems the operating system tries to maintain a
single view of resources it manages. It is named a distributed operating
system (DOS) and is used for multiprocessors and homogeneous
multicomputers.
The middleware
and information exchange over the network. These services are available
to the applications through Application Programming Interfaces (APIs).
They are presented in the following Figure that has been proposed by
Amjad Umar in his book Object-Oriented Client/Server Internet
Environments, Prentice Hall, 1997.
top of network programming services and may be found in some network
operating systems such as NOS – Network Operating System, Novell,
Windows NT, and DCE – Distributed Computing Environment of the Open
Software Foundation. The latter represents the foundation of the
“client-server revolution” of the early 1990s. Many other middleware
services use the concepts developed in the Basic Client-Server model.
World Wide Web services are used in applications that use the Web
for accessing Internet resources. They include browsers, Web servers,
search engines, Hypertext Transfer Protocol (HTTP), HyperText Markup
Language (HTML), Common Gateway Interface (CGI), Java for developing
Web application servers, gateways for accessing legacy applications,
intranets that use Internet technologies at the level of an organization.
Distributed Data Management services allow transparent access to
distributed data (no matter what their locations in the network are). Two
transparency levels are included here. One is the transparency of reading
from several sites: the user can read and gather data from several sites
without knowing the locations where the data is stored. The other is the
transparency of the data producer: the user may read and join data from
databases of different types (Informix, Sybase, Oracle, etc.). Services
from this category are included in SQL Gateways.
Distributed Transaction Processing (DTP) permits the atomic
execution of distributed transactions. This means that a transaction is
executed completely or is not executed at all. If some errors occur during
the performance of a transaction, DTP must roll back all the modifications
performed from the beginning until the error has been detected. DTP must
offer several levels of transparency. First, data updates at one site must
be synchronized with the updates of all other copies to keep consistency
(update transparency). Second, a transaction can be decomposed in many
sub-transactions that are executed at different sites to use distributed
data (execution transparency). Third, the user must be isolated from
network and site failures by using alternate routes and sites (failure
transparency). Known DTP protocols include the Two-Phase Commit
Protocol (2PC) and XA. DTP implementation involves many issues, such
as synchronization algorithms, deadlock detection, and fault tolerance.
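The all-or-nothing rule behind 2PC can be sketched with a toy coordinator loop. The class and method names below are hypothetical illustrations only; a real implementation would add logging, timeouts, and recovery over a network.

```java
import java.util.List;

public class TwoPhaseCommitDemo {
    // A toy participant: votes in phase 1, then commits or rolls back in phase 2.
    static class Participant {
        private final boolean canCommit;
        String state = "pending";
        Participant(boolean canCommit) { this.canCommit = canCommit; }
        boolean prepare() { return canCommit; }    // phase 1: vote yes/no
        void commit()   { state = "committed"; }   // phase 2a
        void rollback() { state = "rolled-back"; } // phase 2b
    }

    /** Phase 1 collects votes; phase 2 commits everywhere only on a
     *  unanimous "yes", otherwise rolls back everywhere: all or nothing. */
    static boolean run(List<Participant> participants) {
        boolean allYes = participants.stream().allMatch(Participant::prepare);
        for (Participant p : participants) {
            if (allYes) p.commit(); else p.rollback();
        }
        return allYes;
    }

    public static void main(String[] args) {
        System.out.println(run(List.of(new Participant(true), new Participant(true))));  // true
        System.out.println(run(List.of(new Participant(true), new Participant(false)))); // false
    }
}
```

A single "no" vote in phase one forces every participant to roll back, which is exactly the atomicity property the text describes.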
Distributed Object Services such as Common Object Request
Broker Architecture (CORBA), Java Remote Method Invocation (Java RMI),
and Microsoft Distributed Component Object Model (DCOM). These services are based
on remote method invocations, the use of object request brokers (ORBs)
that transfer the requests and the answers between client and server
objects, interface repositories (that keep the description of remote
operations) and implementation repositories (that keep the server objects
implementations). Together they provide frameworks for the development
and support of object-oriented distributed applications.
Special-purpose middleware for emerging applications (Wireless and
mobile computing middleware, Multimedia, groupware middleware to
support group activities on a network, gateways for the integration of
legacy applications and the coexistence with new applications, etc.)
Middleware is in a continuous evolution. Along with the
development of new technologies, new interfaces are added to it. Also,
some common and frequently used services evolve from APIs to
Operating System components.
1.3 RPC and RMI - Remote Procedure Call and Remote Object
Invocation
We stated that distributed applications are composed of
cooperating processes running on several different processors. In
order to cooperate, processes can use Message Passing. Remote
Procedure Call (RPC) is another method for inter-process communication.
In this method, a process on a machine can call a procedure on
another machine. When the procedure is called, the calling process is
suspended and the execution of the called procedure takes place on the
second machine. When the procedure terminates, the calling process is
resumed. Information is transferred from the process to the procedure as
parameters, and from the procedure to the calling process as result. No
message passing is visible to the programmer (despite the fact that the
transfer of parameters and result takes place through message passing
between the two machines hosting the calling process and the called
procedure). The protocol follows this scheme: the client issues the call
and is suspended, the server executes the procedure, and the result is
returned to the resumed client.
RPC is a two-way communication between the client and server. The call
is termed remote since the calling process and the called procedure may
be situated on two different machines.
the user cannot set directly the contrast to 80% on a scale from 0 to 100,
but he can progressively increase or decrease the contrast by the
corresponding “methods”.
Another example is a Customer object. The attributes can be: Name,
Address, Account_number, and Balance_due. The methods associated
with the Customer object are: Add-Customer, Update-Customer,
Get-Balance-Due, and Send-Invoice. The following Figure (from the book
“Object Oriented Client/Server Internet Environments” by Amjad Umar)
suggests how the object methods protect the object attributes, since they
are the only means to access or modify the attributes.
Since we can have several different objects of the same kind (for
example, several Customer objects), we may use a single template,
named class, for describing them. The purpose of a class is to define a
particular type of objects. The objects that belong to that class are known
as instances of the class. Object oriented programming languages provide
statements to create objects that belong to a defined class.
Classes can inherit common attributes from other classes. For
example, we can define a StudentCustomer as a special category of
Customer. It will have all the attributes of the Customer (Name, Address,
etc.) but also additional attributes (such as Study_Year, Host_University)
and additional methods (such as Update-University or Change-Study-Year)
that a simple Customer does not have. We say that StudentCustomer is a
subclass of Customer. It inherits all the attributes and methods from the
Customer and has additional ones. Inheritance simplifies program
development. When defining the StudentCustomer, the programmer has
to specify only that it is a subclass of Customer and to describe the
additional attributes and methods.
Different objects intercommunicate through messages. A message
invokes a method and includes additional information (arguments) needed
to carry out the method. The target object executes the method and
possibly transmits a result back to the invoking object. The description of the
methods that an object can execute is presented in an interface. An
interface presents the definitions of the methods (names, types of
arguments, type of the result, etc.) not the description of their
implementation. It describes the outside view of an object and not the
internal details.
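The Customer example above, together with the notion of an interface, can be sketched in Java. The class, attribute, and method names below simply mirror the text's illustration; they are not part of any real library.

```java
public class CustomerDemo {

    // The interface: the outside view of an object, definitions of methods
    // (names, argument types, result type) without their implementation.
    interface CustomerOperations {
        double getBalanceDue();
        void sendInvoice();
    }

    static class Customer implements CustomerOperations {
        // Attributes are private: the methods are the only way to reach them.
        private final String name;
        private final String address;
        protected double balanceDue;

        Customer(String name, String address) {
            this.name = name;
            this.address = address;
        }

        public double getBalanceDue() { return balanceDue; }

        public void sendInvoice() {
            System.out.println("Invoice for " + name + ", " + address
                               + ": " + balanceDue);
        }
    }

    // StudentCustomer is a subclass: it inherits Name, Address and the
    // methods of Customer, and adds its own attribute and method.
    static class StudentCustomer extends Customer {
        private int studyYear;

        StudentCustomer(String name, String address, int studyYear) {
            super(name, address);
            this.studyYear = studyYear;
        }

        public void changeStudyYear(int year) { studyYear = year; }
        public int getStudyYear() { return studyYear; }
    }

    public static void main(String[] args) {
        StudentCustomer s = new StudentCustomer("Ana", "Bucharest", 2);
        s.changeStudyYear(3);           // its own method
        s.sendInvoice();                // method inherited from Customer
        System.out.println(s.getStudyYear());
    }
}
```

Note how the programmer only declares what StudentCustomer adds; everything else comes from Customer, which is the simplification that inheritance provides.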
The first aspect relates to the target object identification. For local
invocations, the invoking object must know the name of the target object.
Since both the invoker and the target are in the same process, this name
is enough to distinguish among different objects inside the process. For
remote invocations, the object reference is more complicated since it must
uniquely identify the object in the distributed system that can span
several processors or even sub-networks. Also, a mechanism is needed to
direct a remote invocation to the corresponding target object, more
specifically to find the network communication path to this object. This role is
fulfilled by a component known as an Object Request Broker, or ORB. The
broker has the function to identify the target object based on its reference
and to transmit the invocation to it. Also, it must transmit the response (if
any) back to the invoking object.
with another object by accessing its data, but this is not the usual
way of interacting.) In a distributed system, objects in different processes
may communicate with one another by means of remote method
invocation (RMI). It represents an extension of the local method
invocation, which applies to objects that belong to the same process.
A television set and its remote control obey the same model.
When we press a button on the remote control, an invocation is sent from
it to the television indicating an action that must be executed (change the
channel, change the brightness of the image, etc.). The requested action is
performed and the user can see the effect (the response), usually in the
form of a display on the television screen. It is important to note that the
client (in this case the remote control) doesn’t “know” how the operation
is executed internally by the television set, but knows how to invoke it.
Java RMI
Fig 8.4. RMI – Client and server structure (Coulouris et al.)
object B. (To do this, it uses the Remote reference module to translate
between the remote reference of B and its local reference in the server
process.) The dispatcher selects the appropriate method in the skeleton
and passes the message to it. The skeleton unmarshals the arguments in
the message and invokes the corresponding method in B. Then, it waits
for the invocation to complete and marshals the result. It then transmits
the result through the communication modules to the proxy object in the
client process. When it receives the result, the proxy unmarshals it and
transmits the result, in the appropriate form, to A.
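The proxy/dispatcher flow described above can be mimicked inside a single JVM with Java's standard java.lang.reflect.Proxy. This is a sketch of the idea only, not the real RMI transport: there is no network and no byte-level marshalling, and the interface and class names are invented for the example.

```java
import java.lang.reflect.InvocationHandler;
import java.lang.reflect.Method;
import java.lang.reflect.Proxy;

public class ProxyDemo {
    // The "remote" interface: the outside view shared by proxy and target.
    interface Hello {
        String sayHello(String who);
    }

    // The server object B that really implements the method.
    static class HelloServer implements Hello {
        public String sayHello(String who) { return "Hello, " + who; }
    }

    /** Build a client-side proxy: every call is intercepted and "dispatched"
     *  to the target object, and the result is handed back to the caller,
     *  just as the RMI proxy/skeleton pair forwards invocations and replies. */
    public static Hello makeProxy(Hello target) {
        InvocationHandler dispatcher = (Object p, Method m, Object[] args) -> {
            // Real RMI would marshal the arguments and send them over the
            // network here; we simply invoke the method on the local target.
            return m.invoke(target, args);
        };
        return (Hello) Proxy.newProxyInstance(
                Hello.class.getClassLoader(),
                new Class<?>[] { Hello.class },
                dispatcher);
    }

    public static void main(String[] args) {
        Hello proxy = makeProxy(new HelloServer());
        System.out.println(proxy.sayHello("Alice")); // prints "Hello, Alice"
    }
}
```

The caller only sees the Hello interface; whether the implementation is local or behind a dispatcher is invisible, which is the transparency RMI aims for.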
1.4 Secure RPC
Secure RPC is an authentication method that authenticates both the
host and the user who is making a request for a service. Secure RPC
uses the Diffie-Hellman authentication mechanism based on DES
encryption. Applications that use Secure RPC include NFS and the NIS+
name service.
The protocol works as follows. When a user presents his userid and
password to a client, the client may proceed with the local login
mechanism that has the following steps:
With every additional transaction, the client returns the index ID to
the server and sends another encrypted time stamp. The server sends
back the client's time stamp minus 1, which is encrypted by the
conversation key.
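The Diffie-Hellman exchange underlying Secure RPC can be sketched with the standard javax.crypto API. This shows only the key-agreement step, not the DES-encrypted timestamp protocol itself, and the class name is illustrative.

```java
import java.security.KeyPair;
import java.security.KeyPairGenerator;
import java.util.Arrays;
import javax.crypto.KeyAgreement;
import javax.crypto.interfaces.DHPublicKey;
import javax.crypto.spec.DHParameterSpec;

public class DhDemo {
    /** Each party combines its own private key with the peer's public key;
     *  both arrive at the same shared secret without ever sending it. */
    public static boolean sharedSecretsMatch() throws Exception {
        KeyPairGenerator aliceGen = KeyPairGenerator.getInstance("DH");
        aliceGen.initialize(2048);
        KeyPair alice = aliceGen.generateKeyPair();

        // Bob must use the same group parameters (p, g) as Alice.
        DHParameterSpec params = ((DHPublicKey) alice.getPublic()).getParams();
        KeyPairGenerator bobGen = KeyPairGenerator.getInstance("DH");
        bobGen.initialize(params);
        KeyPair bob = bobGen.generateKeyPair();

        KeyAgreement kaAlice = KeyAgreement.getInstance("DH");
        kaAlice.init(alice.getPrivate());
        kaAlice.doPhase(bob.getPublic(), true);
        byte[] secretAlice = kaAlice.generateSecret();

        KeyAgreement kaBob = KeyAgreement.getInstance("DH");
        kaBob.init(bob.getPrivate());
        kaBob.doPhase(alice.getPublic(), true);
        byte[] secretBob = kaBob.generateSecret();

        return Arrays.equals(secretAlice, secretBob);
    }

    public static void main(String[] args) throws Exception {
        System.out.println(sharedSecretsMatch());
    }
}
```

In Secure RPC the shared secret derived this way is used as the common key from which the conversation key is protected.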
Middleware security is concerned with ensuring that our computer
systems are safe for authorized use and safe from unauthorized
use. Distributed systems are particularly vulnerable due to the
following factors:
They are not under centralized control
They provide interfaces specifically for remote access
Society is placing more responsibility on distributed systems
for inherently distributed tasks, like banking, which increases
the focus of criminals on them.
The following principles are used in CORBA security:
Application level objects should be unaware of the security
services that are used
If a client has specific security requirements, they should be
supported and used when an object is invoked
Selected at binding of the client to a server object
Determined by security policies specified by policy objects
o Specify the type of message protection, objects that
have a list of trusted parties, etc.
o Default policies are automatically associated with client
objects
Available as standard interfaces (that are replaceable)
Implemented in combination with two security interceptors
DNS records are grouped into RRsets (Resource Record Sets)
New record types are added
o The KEY record is public key of a zone, user, host, etc.
o The SIG record is the signed (encrypted) hash of the A
and KEY records to verify their authenticity.
Figure 8.6 below depicts the steps followed by a client that wants
to use a service. The operations are presented in more detail in the
sequel. The client Alice (A) wants to use the service provided by Bob (B).
4. The workstation asks Alice to introduce her password.
5. The workstation receives the password and uses it to generate
the shared key KA,AS. After that, the password is deleted from the
workstation's memory. The password is never transmitted across
the network. From this moment, Alice is completely authenticated
and can consider herself logged into the system.
6. The workstation decrypts message 3 and extracts the
session key shared with the TGS, KA,TGS, and the ticket
KAS,TGS(A, KA,TGS). Alice sends a request to the TGS to obtain a
ticket for accessing the service B. The request includes: the ticket
KAS,TGS(A, KA,TGS) obtained from AS, the identity of Bob, and a
timestamp t encrypted with KA,TGS. This timestamp is used for
avoiding a replay attack.
7. The TGS decrypts the ticket and finds the session key shared
with Alice, KA,TGS, and then uses this session key to decrypt the
timestamp. If the timestamp shows that the request was issued
recently, and if Alice has the right to use the service B, then the
TGS accepts the request. It then transmits a message containing:
the session key to be shared by Alice and Bob, KA,B, encrypted
with KA,TGS, and a ticket for B that specifies the identity of A and
the session key KA,B, encrypted with the secret key shared by B
and the TGS, in the form KB,TGS(A, KA,B).
For setting up a connection with Bob, Alice sends the ticket KB,TGS
(A, KA,B) and a timestamp encrypted with KA,B (step 1 in figure 8.7).
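The encrypted-timestamp idea used in these steps can be sketched with javax.crypto. This is an assumption-laden toy: it uses AES in ECB mode for brevity (classic Kerberos used DES, and no real deployment should use ECB), and the class name is invented.

```java
import java.nio.charset.StandardCharsets;
import javax.crypto.Cipher;
import javax.crypto.KeyGenerator;
import javax.crypto.SecretKey;

public class TimestampDemo {
    /** Encrypt a timestamp under a shared session key, as Alice does, then
     *  decrypt it, as the TGS does, and check that it is recent. */
    public static boolean freshAfterRoundTrip(long maxSkewMillis) throws Exception {
        KeyGenerator kg = KeyGenerator.getInstance("AES");
        kg.init(128);
        SecretKey sessionKey = kg.generateKey(); // stands in for KA,TGS

        // Client side: encrypt the current time with the shared key.
        Cipher enc = Cipher.getInstance("AES/ECB/PKCS5Padding"); // toy mode only
        enc.init(Cipher.ENCRYPT_MODE, sessionKey);
        byte[] token = enc.doFinal(Long.toString(System.currentTimeMillis())
                                       .getBytes(StandardCharsets.UTF_8));

        // Server side: decrypt and reject stale (possibly replayed) timestamps.
        Cipher dec = Cipher.getInstance("AES/ECB/PKCS5Padding");
        dec.init(Cipher.DECRYPT_MODE, sessionKey);
        long t = Long.parseLong(new String(dec.doFinal(token), StandardCharsets.UTF_8));
        return Math.abs(System.currentTimeMillis() - t) <= maxSkewMillis;
    }

    public static void main(String[] args) throws Exception {
        System.out.println(freshAfterRoundTrip(5000));
    }
}
```

Because only a holder of the session key could have produced a valid, recent timestamp, the server gains evidence that the request is fresh and comes from Alice.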
1.6 Session-level security, group security, secure replicated
services and design issues
In this chapter we will summarize issues about session-level security,
group security, secure replicated services and design issues.
Most session-level security protocols use some variation of the following steps:
1. Decide on security parameters
2. Establish shared secret to protect further communications
3. Authenticate the previous exchange
In each IPsec implementation, there is a minimal database, the SA
(Security Association) Database, that defines the parameters associated
with each SA, such as:
AH information: authentication algorithm, keys, key
lifetime, …
ESP information: encryption and authentication algorithm,
keys, initialization values, key lifetimes.
Sequence number counter: used to generate the sequence
number field in AH and ESP headers
Anti-replay window: used to determine whether an inbound
AH or ESP packet is a replay
Lifetime of the SA
Sequence counter overflow flag: indicates what to do when
a counter overflow occurs
IPsec protocol mode: tunnel or transport mode
Path MTU: any observed path maximum transmission unit (to
avoid fragmentation).
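The SA Database fields listed above can be modeled as a plain record. The field names below paraphrase the list and do not come from any real IPsec implementation's API; the anti-replay check is a deliberately simplified sketch of the sliding-window idea.

```java
public class SaDemo {
    // One Security Association entry, mirroring the fields listed above.
    static class SecurityAssociation {
        String espEncryptionAlgorithm = "AES-CBC";
        String espAuthAlgorithm = "HMAC-SHA1";
        long sequenceNumberCounter = 0;   // generates outbound sequence numbers
        int antiReplayWindow = 64;        // width of the inbound replay window
        long lifetimeSeconds = 3600;      // lifetime of the SA
        boolean tunnelMode = true;        // IPsec protocol mode
        int pathMtu = 1500;               // observed path MTU

        /** Next outbound sequence number; a real SA would also act on the
         *  overflow flag when the counter wraps. */
        long nextSequenceNumber() { return ++sequenceNumberCounter; }

        /** Toy anti-replay check: accept only numbers inside the window
         *  relative to the highest sequence number seen so far. */
        boolean acceptInbound(long seq, long highestSeen) {
            return seq > highestSeen - antiReplayWindow && seq <= highestSeen + 1;
        }
    }

    public static void main(String[] args) {
        SecurityAssociation sa = new SecurityAssociation();
        System.out.println(sa.nextSequenceNumber());    // 1
        System.out.println(sa.acceptInbound(100, 100)); // true: inside the window
        System.out.println(sa.acceptInbound(10, 100));  // false: too old, replay suspect
    }
}
```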
Protection against unauthorized users (Roles)
Java has several security levels. Access control in Java is based on
protection domains (a notion from [SS75]), which group together
the set of objects that are currently accessible by a principal. The
main characteristics are summarized here:
Language – level security:
o No use of pointers
o No uninitialized variables
o runtime checks on array bounds
o Garbage Collection
Virtual Machine – level security:
o secure ‘playing field’
o controls access to operating system calls
o Security manager
API – level security
Web Browser Security:
o Security manager
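One of the language-level checks listed above can be observed directly: an out-of-bounds array access raises an exception at runtime instead of reading arbitrary memory, which is part of why Java has no raw pointers. The class name is illustrative.

```java
public class BoundsDemo {
    /** Returns true when the JVM stops an out-of-bounds access at runtime. */
    public static boolean outOfBoundsIsCaught() {
        int[] a = new int[3];
        try {
            int x = a[10];   // no pointer arithmetic: every index is checked
            return false;    // never reached for an out-of-bounds index
        } catch (ArrayIndexOutOfBoundsException e) {
            return true;     // the runtime refused the access
        }
    }

    public static void main(String[] args) {
        System.out.println(outOfBoundsIsCaught()); // prints true
    }
}
```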
1.8 Conclusions
Security in distributed systems is by far one of the most important
issues, with multiple facets, namely: security mechanisms, protocols,
policies, management, etc. In a distributed system, security applies
to channels, hosts, services, resources, users. A design issue is "if" and
"when" to use symmetric cryptosystems and when to use a combination
of symmetric and asymmetric techniques. The current practice is to use
public-key cryptography for symmetric key distribution and then use
symmetric cryptography for data transfers in short sessions.
One important problem is to have secure channels. This is related
to the problem of authentication, for which several solutions are used,
starting with simpler ones (e.g., using a shared key) and ending with
public key cryptography. Then, message integrity and confidentiality are
also important. In this context, the digital signature and session keys
solutions are used. A digital signature may be attached to plain-text
messages to ensure their integrity. (The message can be read by a
third party but cannot be modified during transmission without the
receiver observing the modification.) A secure channel may assure
the confidentiality. In this case, the message is encrypted so that nobody,
except the receiver, can understand the content. An important topic is the
confidential group communication and secure replicated servers.
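The digital-signature mechanism just described is available directly in the standard java.security API. A minimal sketch with a freshly generated RSA key pair (the message text is, of course, invented):

```java
import java.nio.charset.StandardCharsets;
import java.security.KeyPair;
import java.security.KeyPairGenerator;
import java.security.Signature;

public class SignatureDemo {
    /** Sign a plain-text message; verification fails if a single byte changes. */
    public static boolean tamperingIsDetected() throws Exception {
        KeyPairGenerator gen = KeyPairGenerator.getInstance("RSA");
        gen.initialize(2048);
        KeyPair keys = gen.generateKeyPair();

        byte[] message = "transfer 100 EUR".getBytes(StandardCharsets.UTF_8);

        // Sender: sign with the private key.
        Signature signer = Signature.getInstance("SHA256withRSA");
        signer.initSign(keys.getPrivate());
        signer.update(message);
        byte[] sig = signer.sign();

        // Receiver: verify with the sender's public key.
        Signature verifier = Signature.getInstance("SHA256withRSA");
        verifier.initVerify(keys.getPublic());
        verifier.update(message);
        boolean okOriginal = verifier.verify(sig);

        // A modified message no longer matches the signature.
        verifier.initVerify(keys.getPublic());
        verifier.update("transfer 900 EUR".getBytes(StandardCharsets.UTF_8));
        boolean okTampered = verifier.verify(sig);

        return okOriginal && !okTampered;
    }

    public static void main(String[] args) throws Exception {
        System.out.println(tamperingIsDetected());
    }
}
```

The message itself travels in the clear, exactly as the text says: anyone can read it, but no one can alter it without the verification failing.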
Another issue is the access control or authorization. It protects the
resources so that only processes (or users) that have the proper rights
can access and use the resources in some pre-established way (read,
write, modify, or execute). The access control can be made by using an
access control list, which lists the rights of each user to access the resource. A
second method is the use of certificates. A certificate specifies, exactly,
what are the rights of the owner to access a particular set of resources.
Another issue is security management, which includes key
management and authorization management. Key management refers to
the distribution of cryptographic keys, while authorization management
means the handling of access rights, attribute certificates and delegation.
References:
[ORFA97] R. Orfali, D. Harkey, "Client/Server Programming with Java
and CORBA", John Wiley, 1997
[PATR01] Victor Valeriu Patriciu, Ion Bica, Monica Ene-Pietroseanu,
“Securitatea Comerțului Electronic”, Publishing House
ALL, Bucharest 2001
[PATR98] Victor Valeriu Patriciu, Monica Ene Pietroseanu, Ion Bica,
C. Cristea, "Securitatea informatica in UNIX si INTERNET",
Publishing House Tehnica, Bucharest 1998
[SCHN96] Bruce Schneier, “Applied Cryptography 2nd Edition:
protocols, algorithms, and source code in C”, John Wiley &
Sons, Inc. Publishing House, New York 1996
[SIEG00] Jon Siegel (ed) "CORBA 3. Fundamentals and
Programming", OMG Press, John Wiley & Sons, 2000
[STAL03] William Stallings, “Cryptography and Network Security,
3/E”, Prentice Hall, 2003
[TANE02] A.S. Tanenbaum, M. van Steen "Distributed Systems.
Principles and paradigms", Prentice Hall 2002
[ZAHA00] Ron Zahavi "Enterprise Application Integration with
CORBA", OMG Press, John Wiley & Sons, 2000
Module 9 – E-Commerce and E-
Payment Security
Emergence of new e-commerce models, combined with mobility and
Internet-based technologies, makes the challenge of the industry and
its "Architects" even more difficult. The need for a more
comprehensive approach to understanding and gathering all the existing
profiles of architectures, frameworks and models for e-commerce has
become ever more tangible. The use of electronic commerce - whether it
is Business to Business, Business to Consumers or Business to
Government - in an open environment is very much dependent on the
correct application of common rules and standards.
Information security is an integral part of overall e-commerce
activities. In all countries, the ability to protect e-commerce
infrastructures is limited by the capacity to produce the quantity and
quality of information technology professionals required to operate,
maintain and design our cyber systems. The responsibility for educating
that new generation of the workforce falls squarely on the shoulders of
Higher Education programs.
E-commerce security issues are frequently aired in the press, and they
are certainly important. Customers are concerned that the item ordered
won't materialize, or won't be as described. And (much worse) they worry
about their social security numbers and credit card details being
misappropriated. However rare, these things do happen, and customers
need to be assured that all e-commerce security issues have been
covered. Your guarantees and returns policies must be stated on the
website, and they must be adhered to.
This course focuses generally on the domain of electronic commerce
security and, in particular, on the security of payment systems and their
infrastructures.
Most e-commerce merchants leave the mechanics to their hosting
company or IT staff, but it helps to understand the basic principles.
Any system has to meet four requirements:
privacy: information must be kept from unauthorized parties;
integrity: the message must not be altered or tampered with;
authentication: sender and recipient must prove their identities
to each other;
non-repudiation: proof is needed that the message was indeed
sent and received.
Digital Signatures and Certificates
Information sent over the Internet commonly uses the set of rules
called TCP/IP (Transmission Control Protocol / Internet Protocol). The
information is broken into packets, numbered sequentially, with error-
control information attached. Individual packets are sent by different
routes; TCP/IP reassembles them in order and requests retransmission of
any packet showing errors. SSL uses PKI and digital certificates to ensure
privacy and authentication. The procedure is something like this: the
client sends a message to the server, which replies with a digital
certificate. Using PKI, server and client negotiate to create session keys,
which are symmetric secret keys specially created for that particular
transmission. Once the session keys are agreed, communication continues
with these session keys and the digital certificates.
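The signing and verification that underlie digital certificates can be sketched in plain Java SE. This is a minimal illustration, not the SSL handshake itself; the message text and the 2048-bit key size are arbitrary choices for the example:

```java
import java.nio.charset.StandardCharsets;
import java.security.KeyPair;
import java.security.KeyPairGenerator;
import java.security.Signature;

public class SignDemo {
    public static void main(String[] args) throws Exception {
        // Generate an RSA key pair. In a real PKI the public key would be
        // bound to an identity by a certificate issued by a CA.
        KeyPairGenerator gen = KeyPairGenerator.getInstance("RSA");
        gen.initialize(2048);
        KeyPair pair = gen.generateKeyPair();

        byte[] msg = "order #1234, total $25".getBytes(StandardCharsets.UTF_8);

        // Sign with the private key...
        Signature signer = Signature.getInstance("SHA256withRSA");
        signer.initSign(pair.getPrivate());
        signer.update(msg);
        byte[] sig = signer.sign();

        // ...and verify with the matching public key.
        Signature verifier = Signature.getInstance("SHA256withRSA");
        verifier.initVerify(pair.getPublic());
        verifier.update(msg);
        System.out.println("valid = " + verifier.verify(sig));
    }
}
```

In practice the public key travels inside an X.509 certificate signed by a certification authority, rather than being generated on the spot as here.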
Credit card details can be safely sent with SSL, but once stored on
the server they are vulnerable to outsiders hacking into the server and
its accompanying network. For this reason a PCI (peripheral component
interconnect: hardware) card is often added for protection, or another
approach altogether is adopted: SET (Secure Electronic Transaction).
Developed by Visa and MasterCard, SET uses PKI for privacy and digital
certificates to authenticate the three parties: merchant, customer and
bank. More importantly, sensitive information is not seen by the merchant
and is not kept on the merchant's server.
Firewalls (software or hardware) protect a server, a network and
an individual PC from attack by viruses and hackers. Equally important is
protection from malice or carelessness within the system, and many
companies use the Kerberos protocol, which uses symmetric secret-key
cryptography to restrict access to authorized employees.
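Kerberos itself is a multi-step ticketing protocol, but the symmetric secret-key cryptography it relies on can be illustrated with a short sketch using the standard javax.crypto API. The key, message and plain "AES" transformation below are illustrative only; production code would use an authenticated mode such as AES/GCM rather than the provider default:

```java
import java.nio.charset.StandardCharsets;
import java.util.Arrays;
import javax.crypto.Cipher;
import javax.crypto.KeyGenerator;
import javax.crypto.SecretKey;

public class SymmetricDemo {
    public static void main(String[] args) throws Exception {
        // One shared secret key: whoever holds it can both encrypt and decrypt.
        KeyGenerator kg = KeyGenerator.getInstance("AES");
        kg.init(128);
        SecretKey key = kg.generateKey();

        byte[] plain = "payroll record".getBytes(StandardCharsets.UTF_8);

        // Encrypt with the shared key.
        Cipher enc = Cipher.getInstance("AES");
        enc.init(Cipher.ENCRYPT_MODE, key);
        byte[] cipherText = enc.doFinal(plain);

        // Decrypt with the same key and check the round trip.
        Cipher dec = Cipher.getInstance("AES");
        dec.init(Cipher.DECRYPT_MODE, key);
        byte[] recovered = dec.doFinal(cipherText);

        System.out.println("round trip ok = " + Arrays.equals(plain, recovered));
    }
}
```

The essential point for Kerberos is the first comment: anyone holding the shared key can decrypt, so the protocol's work goes into distributing such keys safely.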
Transactions
Transsped or E-sign service providers). Check out the
hosting company, and enter into a dialogue with the
certification authority: they will certainly probe your
credentials.
4. You possess a merchant account, and run the business from
your own server. You need trained IT staff to maintain all
aspects of security — firewalls, Kerberos, SSL, and a digital
certificate for the server (costing thousands or tens of
thousands of dollars).
Paying for goods and services electronically is not a new idea. Since
the late 1970s and early 1980s, a variety of schemes have been
proposed to allow payment to be effected across a computer network.
Few of these schemes got beyond the design stage, since they were of
little use to those who were not connected to a network. The arrival of
the Internet has removed this obstacle to progress. This network of
networks has grown dramatically from its inception in the late 1970s to
today's truly global medium. By July 2000, after a period of exponential
growth, the number of machines hooked up to the network had grown to
over 93 million. In the early stages of the Internet's evolution, it was
common to assume that each of these machines was used by around 10
people, which would mean that some 930 million people had Internet
access worldwide. Most commentators would agree that this figure is
much too high, and have used a variety of other estimating techniques to
arrive at a better answer. A 2001 Internet survey takes an average of
such estimates and concludes that just over 400 million people were
on-line by January 2001. Much of this growth has been driven by the
availability of World Wide Web (WWW) technology, which allows
information located on machines around the world to be accessed as a
single multimedia-linked document with simple point-and-click
interactions.
Surveys of Internet users suggest that the profile is changing from
the original university-centered user base to a more broadly based
residential population with a high spending power. These facts are not lost
on commercial organizations wishing to offer goods and services for sale
to a global consumer audience. Initially the focus of electronic commerce
(e-commerce) was on selling goods to consumers. The most popular
categories included computer goods and software, books, travel, and
music CDs. This so-called business-to-consumer (B2C) e-commerce grew
spectacularly. In the United States, such spending was estimated at $7.7
billion in 1998, $17.3 billion in 1999, and $28 billion in 2000.
Around 1999, the industry focus began to shift to the trade that
companies do with each other. By building on-line electronic marketplaces,
it became possible to bring together businesses such as car manufacturers
and their component suppliers, or fruit wholesalers with primary
producers. This business-to-business (B2B) e-commerce is thought to
have the potential to become considerably larger than the B2C sector, and
indeed some early estimates suggest that B2B e-commerce reached $226
billion worldwide in 2000 and is projected to reach $2.7 trillion by 2004.
In both the B2C and B2B sectors, the Web was first used simply as
a means of discovering products and services, with payment being
carried out off-line by some conventional payment method. In the case of
B2C consumer purchases, merchants found they could capture credit card
details from Web forms, allowing the completion of the transaction
off-line, albeit with a complete absence of security measures.
This course attempts to present the technology involved in the more
important payment systems currently available to network users. Since
the field is undergoing a major upheaval, this account will necessarily be a
kind of snapshot of the current state of play.
The course will look at the ways in which the world's population currently
pays for goods and services, in order to gain a good appreciation of the
context in which the new systems are being introduced. Since most of the
new schemes rely on cryptographic techniques for their security, the
course provides the necessary background information on cryptography
required for a thorough understanding of how the new schemes operate.
It surveys the principal schemes used to effect payment electronically in
the manner most similar to credit cards, checks, and cash, respectively;
it also looks at micropayments, a new form of payment that has no
counterpart in conventional commerce.
Payment in its most primitive form involves barter: the direct
exchange of goods and services for other goods and services. Although
still used in primitive economies and on the fringes of developed ones,
this form of payment suffers from the need to establish what is known as
a double coincidence of wants. This means, for example, that a person
wishing to exchange food for a bicycle must first find another person who
is both hungry and has a spare bicycle! Consequently, over the centuries,
barter arrangements have been replaced with various forms of money.
The earliest money was called commodity money, where physical
commodities (such as corn, salt, or gold) whose values were well known
were used to effect payment. In order to acquire a number of desirable
properties including portability and divisibility, gold and silver coins
became the most commonly used commodity money, particularly after the
industrial revolution in the 1800s.
The next step in the progression of money was the use of tokens
such as paper notes, which were backed by deposits of gold and silver
held by the note issuer. This is referred to as adopting a commodity
standard. As an economy becomes highly stable and governments (in the
form of central banks) come to be trusted, it becomes unnecessary to
have commodity backing for the notes that are issued. The result is
referred to as fiat money, since the tokens have value only by virtue of
the fact that the government declares it to be so and this assertion is
widely accepted.
Cash payment is the most popular form of money transfer used
today, but as amounts get larger and security becomes an issue, people
are less inclined to hold their wealth in the form of cash and start to avail
of the services of a financial institution such as a bank. If both parties to a
payment hold accounts with the same bank, then a payment can be
effected by making a transfer of funds from one account to another. This
essential mechanism is at the root of a wide variety of payment schemes
facilitated by the financial services industry today. The following sections
will look at some of these and how they compare with traditional cash
payment.
Cash payments
banking industry, which acts as the distributor of cash in the economy,
has been attempting for many years to wean consumers off cash and into
electronic bank mediated payments and in recent years has begun to have
some success.
Where both parties have lodged their cash with a bank for
safekeeping, it becomes unnecessary for one party to withdraw notes in
order to make a payment to another. Instead, they can write a check,
which is an order to their bank to pay a specified amount to the named
payee. The payee can collect the funds by going to the payer's bank and
cashing the check. Alternatively, the payee can lodge the check so that
the funds are transferred from the account of the payer to that of the
payee.
Payment by check
If the parties hold accounts with separate banks, then the process
gets more complicated. The cycle begins when A presents a check in
payment to B. Party B lodges the check with his bank (referred to as the
collecting bank), which will collect the funds on his behalf. In most cases,
a credit is made to B's account as soon as the check is lodged, but this
immediate funds availability is not always the case. All checks lodged
with bank B over the course of a day will be sent to the clearing
department, where they are sorted in order of the banks on which they
are drawn. The following day, they are brought to a clearing house,
where a group of banks meet to exchange checks. The check in question
will be given to bank A, and (usually) one day later bank A will verify
that funds are available to meet the check and debit A's account for the
sum involved.
If funds are not available, the signature on the check does not
match the samples, or any other problem occurs, then the check must be
returned to the collecting bank together with some indication of why it
could not be processed. Bank A must attend to this promptly, usually
within one working day. These so-called returned items are the major
problem with the check as a payment instrument: their existence
introduces uncertainty, and the fact that they need individual attention
from banking staff means that they are very expensive to process. The
principal loser in this situation is B, who finds himself in possession of a
dishonored check with hefty bank charges to pay. In general, however,
the bank's charges are seldom high enough to cover its processing
expenses.
If funds are available to meet the check, then the following day the
banks that are part of the clearing arrangement will calculate how much
they owe to, or are owed by, the group of clearing banks as a whole.
This amount is then settled by making a credit or debit to a special
account usually maintained at the central bank.
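The netting computation the clearing banks perform can be sketched as follows; the bank names and amounts are invented for illustration. Each pairwise obligation is folded into a single net position per bank, and the positions always sum to zero, so the central bank only has to move each bank's net amount:

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class ClearingDemo {
    // Record that 'payer' owes 'payee' the given amount (e.g. the total of
    // checks drawn on 'payer' and lodged with 'payee' during the day).
    static void owe(Map<String, Long> net, String payer, String payee, long amount) {
        net.merge(payer, -amount, Long::sum);
        net.merge(payee, amount, Long::sum);
    }

    public static void main(String[] args) {
        Map<String, Long> net = new LinkedHashMap<>();
        owe(net, "BankA", "BankB", 500);
        owe(net, "BankB", "BankC", 300);
        owe(net, "BankC", "BankA", 200);

        // Each bank settles a single net position at the central bank:
        // BankA -300, BankB +200, BankC +100 (summing to zero).
        net.forEach((bank, pos) -> System.out.println(bank + " settles " + pos));
    }
}
```

The design point is that three pairwise transfers collapse into three net movements against one central account, which is exactly why settlement through a central bank scales.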
The system is now used extensively by employers to pay wages
directly into workers' bank accounts, to implement standing orders, direct
debits, and direct credits. In the United Kingdom in 2000, BACS processed
3.2 billion transactions to the value of £1.8 trillion. In the United States,
usage of ACH has been growing at between 9% and 22% per year and in
1999 processed 6.2 billion transactions with a value of $19.4 trillion. More
than half of the recipients of Social Security use it for direct deposit, and
nearly half of the private sector receive their wages by ACH.
On a more global level, a consortium of global banking players
referred to as the Worldwide Automated Transaction Clearing House
(WATCH) came together in late 2000 to plan a global system that would
bridge national ACH systems with a target of achieving live operation by
July 2002. This would initially provide only credit transfers in six to eight
currencies with more functions being added over time.
The idea of payment using cards first arose in 1915, when a small
number of U.S. hotels and department stores began to issue what were
then referred to as "shoppers' plates" [11]. It was not until 1947 that the
Flatbush National Bank issued cards to its local customers. This was
followed in 1950 by the Diners Club, which was the first "travel &
entertainment" or charge card, and eight years later the American
Express card was born. Over the years, many card companies have
started up and failed, but two major card companies, made up of large
numbers of member banks, have come to dominate this worldwide
business. These are Visa International and MasterCard.
Credit cards are designed to cater for payments in the retail
situation. This means that payments can only be made from a cardholder
to a merchant who has pre-registered to accept payments using the card.
The card companies themselves do not deal with cardholders or
merchants, but rather license member organizations (usually banks) to do
this for them.
A bank that issues cards to its customers is called a card-issuing
bank. This means that it registers the cardholder, produces a card
incorporating the card association's logo, and operates a card account to
which payments can be charged.
Merchants who wish to accept payments must also register with a
bank. In this case, the bank is referred to as the acquiring bank, or simply
the acquirer. In a paper-based credit card payment, a merchant prepares
a sales voucher containing the payer's card number, the amount of the
payment, the date, and a goods description. Depending on policy, the
transaction may need to be authorized. This will involve contacting an
authorization center operated by or on behalf of the acquiring bank to see
if the payment can go ahead. This may simply involve verifying that the
card does not appear in a blacklist of cards, or it may involve a reference
to the card-issuing bank to ensure that funds are available to meet the
payment. Assuming it can be authorized, the payment completes.
At the end of the day, the merchant will bring the sales vouchers
to the acquiring bank, which will clear them using a clearing system not
unlike that used for paper checks and giros but operated by or on behalf
of the card associations. The merchant's account is credited, the
cardholder's is debited, and the transaction details will appear on the next
monthly statement.
All the costs associated with a credit card transaction are borne by
the merchant involved. The cardholder will see only the amount of the
transaction on his or her statement, but the merchant typically pays over
a small percentage of the transaction value with some associated
minimum charge that is divided between the acquiring bank and the card
association. For this reason, credit cards are not worthwhile for
transactions in which the amount is below a certain threshold (typically
around $2).
The reason why a credit card is so named is that the balance owing
on a cardholder's account need not necessarily be paid at the end of the
monthly period. The cardholder can pay interest on the outstanding
balance and use the card for credit. Other arrangements are possible; for
example, if the balance must be paid in full at the end of the period, it is
called a charge card.
Another possibility is to link the card to a normal bank account,
and to process the transaction in real time. This means that at the time
the transaction takes place, the amount is transferred from the customer
to the merchant bank account. This arrangement is called a debit card.
One final way to use a payment card is to incorporate a storage facility
into the card that can be loaded with cash from the cardholder's bank
account. This electronic purse facility will be discussed more fully later.
Bankers often classify payment cards into three types: pay before
(electronic purse), pay now (debit cards), and pay later (credit cards).
In conclusion, the most important e-payment systems in use today
fall into the following classes:
Credit card-based systems (iKP, SET)
Electronic checks and account transfers
Electronic cash payment systems (Ecash, Emoney, Epurse)
Micropayment systems.
1.4 Conclusions
As e-commerce security becomes of paramount importance, the
E-Payments and E-Commerce Security course provides an in-depth
understanding of basic security problems and relevant e-commerce
and e-payment solutions, while helping industry professionals implement
the most advanced security technologies.
It provides a thorough overview of e-commerce and the Internet
as an enabling technology for business operating within a regulatory
framework. It highlights the risks posed by insecure e-commerce and
e-payment systems and identifies strategies which help to mitigate these
risks.
2. Tutorial on Java Smart-Card Electronic Wallet
Application
Cristian TOMA
Abstract: This paper highlights concepts such as: the complete Java card
application, the life cycle of an applet, and a practical electronic wallet
sample implemented in Java Card technology. As a practical matter, it
would be interesting to build applets for ID, driving license and
health-insurance smart cards, for encrypting and digitally signing
documents, for e-commerce, and for accessing critical resources in the
government and military fields. At the end of this article a Java card
electronic wallet application is presented.
2.1 Introduction
The cards are classified into magnetic stripe cards and smart cards.
The smart ones are further divided according to many features. For
instance, considering the way they communicate with the card reader
device, they are contact-less – the communication between card reader
and smart card takes place through radio waves; with contact – the smart
card makes physical contact with the card reader; or combined.
Concerning the type of integrated circuit that a smart card could
have, smart cards are classified into:
Cards with a microprocessor chip, chip cards for short, which
contain a microprocessor used for computations. Besides this
microprocessor with 8-, 16- or 32-bit registers, the card may
contain one or more memory chips used for read-only memory
– ROM – and for random access memory – RAM. These features
give a card almost the power of a desktop computer. This type
of card is used in different information systems such as banking
credit cards, cards for access control in institutions, SIMs –
Subscriber Identity Modules – for mobile phones and cards for
accessing digital TV networks.
Cards with a memory chip, which contain different data but
cannot process the stored data because the card does not have
a microprocessor. They are fully dependent on the host
application.
Nowadays most specialists agree that a card is smart only if it can
compute – only if it has a microprocessor or a microcontroller. Following
this approach, the difference between a smart card and a card with only
memory chips or magnetic stripes is that the latter can only store data
and cannot process it. Information systems that interact with smart cards
have an advantage because the access to different databases and the
time of transactions can be considerably reduced. More than that, some
smart cards contain non-volatile memories, which provide a great
advantage for the development of secure systems and applications,
because those memories can store sensitive information such as digital
certificates and symmetric and asymmetric private keys. In order to
improve the speed of computations, this kind of card also has specialized
cryptographic coprocessors. The coprocessors execute complex
cryptographic algorithms such as RSA, AES-Rijndael, 3DES or algorithms
based on elliptic curves. In the following sections an electronic wallet will
be implemented step by step on a smart card as a Java card applet.
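Before turning to the Java Card specifics, the core logic of such an electronic wallet – a balance that can be credited and debited within limits – can be sketched in plain Java. The class name and the balance limit below are illustrative; the on-card version in the following sections expresses the same logic as an applet:

```java
public class WalletSketch {
    // Illustrative ceiling, loosely modeled on classic wallet examples.
    static final short MAX_BALANCE = 10000;

    private short balance = 0;

    // Credit the wallet, refusing amounts that would exceed the limit.
    public boolean credit(short amount) {
        if (amount <= 0 || balance + amount > MAX_BALANCE) return false;
        balance += amount;
        return true;
    }

    // Debit the wallet, refusing overdrafts.
    public boolean debit(short amount) {
        if (amount <= 0 || amount > balance) return false;
        balance -= amount;
        return true;
    }

    public short getBalance() { return balance; }

    public static void main(String[] args) {
        WalletSketch w = new WalletSketch();
        w.credit((short) 500);
        w.debit((short) 120);
        System.out.println("balance = " + w.getBalance()); // prints: balance = 380
    }
}
```

Note the use of short: the Java Card VM lacks support for long (and for float and double, as discussed below), so wallet applets conventionally keep balances in short or int fields and must guard against overflow explicitly.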
2.2 Complete applications for Java smart cards

A Java card application is an applet running in a smart card. But
often the applet needs to interact with different systems and
applications. That is why, in the specialty literature, a complete
application for Java smart cards is composed of the Java applet running
on the smart card, a host application, and back-end application systems,
which together provide a service to the end user.
A complete Java card application is depicted in figure 9.1: the
back-end applications for business logic and the host application (running
on a desktop, laptop or server) communicate over TCP/IP or IPC with the
CAD – the electrical device for interaction with cards – which exchanges
APDU Commands and APDU Responses with the applets on the intelligent
card; on the card, the applets run on top of the JCRE and JCAPI (with
vendor extensions), the JCVM and the card OS.
In more detail, information systems that use smart cards comprise
the following elements, from the point of view of a complete Java card
application:
Back-end Applications – the ones that implement the business
logic and connect to the databases and web services;
Host or off-card Applications – the ones that communicate with
the card reader; they are the interface between the back-end
applications and the card reader. These applications run on the
desktop connected to the card reader. They can also run on a
specialized terminal such as an ATM – Automatic Teller Machine
– or on a mobile phone used as a smart card reader;
Card Reader Applications – these run in the card reader and are
responsible for carrying out and coordinating the interaction
with the card's applications. The physical equipment – the card
reader – together with the applications running on it is called
the CAD – Card Acceptance Device. The CAD determines how
the physical connection with the card is realized, through
electrical contacts or radio waves, and it also provides energy
to the card. The CAD takes APDU – Application Protocol Data
Unit – commands (standard strings of bytes) from the host
applications and forwards them to the smart card;
Smart card Applications – on the Java Card platform many
applications (applets) can coexist at the same time. The applets
run in the JCRE – Java Card Runtime Environment.
There are three models which can be used to realize the
communication between the host application and the Java applet. The
first model is quite simple and involves sending and receiving raw byte
strings in a standard format – the Message-Passing Model. The second
model is Java Card Remote Method Invocation – JCRMI, a set of classes
and procedures similar to those of RMI from J2SE – Java 2 Standard
Edition; basically, this model uses the first model underneath. The third
model for the communication between the host and the applet on the
card is SATSA – Security and Trust Services API. SATSA, defined in JSR
177, allows developers to use either model as a base – the Message-
Passing Model or JCRMI – but is a more abstract API based on the GCF –
Generic Connection Framework API. In practice, most developers use the
first two models to develop complete smart card applications.
The Message-Passing Model is the reference model and represents
the base for the other two models, JCRMI and SATSA. The communication
between the host application and the applets on the smart card consists
of transmitting APDUs – Application Protocol Data Units – from the host
to the CAD – Card Acceptance Device; the same byte strings are then
sent from the CAD to the card applet. The applet receives those byte
strings, parses the bytes and then sends a response back along the
reverse path: Applet–CAD–Host. An APDU is composed of standard byte
blocks conforming to ISO/IEC 7816-3 and 7816-4. Following the
standards, the applet receives APDU Commands directly from the CAD
and sends APDU Responses back to the CAD. The communication between
the card reader and the card is physically realized through a data link
protocol, similar to the data link layer of the ISO/OSI protocol stack. The
link protocol, defined in ISO/IEC 7816-3, has four alternatives: T=0 –
byte oriented; T=1 – byte-array oriented; T=USB – Universal Serial Bus
oriented; and T=RF – radio wave oriented (Radio Frequencies). The
classes from the Java Card API and the JCRE specifications hide the
physical details of the APDU communication protocol.
APDU Command

CLA | INS | P1 | P2 | Lc | Data Field | Le

The header (CLA, INS, P1, P2) is mandatory; the body (Lc, Data Field,
Le) is optional.
There are four other specific structures for an APDU Command, but
those structures are used only in the data link protocol T=0.
The explanation of the fields in the APDU Command is the
following:
CLA – one byte (2 hexadecimal digits), with different predefined
values according to the ISO 7816 standard. For instance, values
between 0x00 and 0x19 are used for accessing the file system
and security operations, values from 0x20 to 0x7F are reserved
for future use, and values from 0x80 to 0x99 can be used for
applet-specific instructions implemented by developers, while
values between 0xB0 and 0xCF are instructions common to all
applets rather than a particular one. In practice, the most
frequently used value for this field is 0x80;
INS – one byte; together with CLA, it identifies a specific
instruction. For instance, when CLA has a value between 0x00
and 0x09 and INS has the value 0xDC, the command means an
update of the card's records. In custom applications installed on
the card, the INS field can take values chosen by the
developers, provided they comply with the standard. For
example, the developer may choose the value 0x20 for checking
the amount stored in the card if and only if the CLA field is
0x80;
P1 – one byte; the first parameter of an instruction. This field is
used when developers want to send parameters to the applet or
want to qualify the INS field;
P2 – one byte; the second parameter of an instruction, used for
the same purpose as P1;
Lc – one byte, optional; the length in bytes of the Data Field;
Data Field – of variable length, equal to the value of the Lc
field. This field carries the data and parameters sent from the
host application to the applet;
Le – the maximum number of bytes that the Data Field of the
APDU Response should have (the number of bytes in the
response can be anywhere between 0 and the value of this
field).
In practice, a host application sends APDU commands to the CAD,
and the CAD forwards to the applet the same APDU commands, with
structures and values that respect the standards.
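The layout above can be assembled byte by byte. The sketch below builds a command with a Data Field (header, Lc and data, no Le); the CLA/INS values are hypothetical applet-specific choices in the 0x80 range, as discussed above:

```java
public class ApduCommandDemo {
    // Build an APDU Command: header (CLA, INS, P1, P2) + Lc + Data Field.
    public static byte[] command(int cla, int ins, int p1, int p2, byte[] data) {
        byte[] apdu = new byte[4 + 1 + data.length];
        apdu[0] = (byte) cla;
        apdu[1] = (byte) ins;
        apdu[2] = (byte) p1;
        apdu[3] = (byte) p2;
        apdu[4] = (byte) data.length;              // Lc = length of Data Field
        System.arraycopy(data, 0, apdu, 5, data.length);
        return apdu;
    }

    public static void main(String[] args) {
        // Hypothetical wallet instruction: CLA=0x80, INS=0x30,
        // with a two-byte amount (0x01F4 = 500) in the Data Field.
        byte[] apdu = command(0x80, 0x30, 0x00, 0x00,
                              new byte[] { 0x01, (byte) 0xF4 });
        for (byte b : apdu) System.out.printf("%02X ", b & 0xFF);
        System.out.println();
    }
}
```

Running this prints the seven bytes 80 30 00 00 02 01 F4, i.e. the header, the Lc byte and the two data bytes laid out exactly as in the table above.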
APDU Response

Data Field | SW1 | SW2

The body (Data Field) is optional; the trailer (SW1, SW2) is mandatory.
The fields SW1 and SW2 are parsed and interpreted together. A
communication process is called complete if there were no problems
(SW1=0x90 and SW2=0x00, or SW1=0x61 and SW2=0xnn, where 0xnn
is the number of response bytes still available) or if there were only
warnings (SW1=0x62 or SW1=0x63, with SW2 containing the warning
code). A communication process is called failed if there were execution
errors (SW1=0x64 or SW1=0x65, with SW2 containing the execution
error code) or checking errors (SW1 from 0x67 to 0x6F, with SW2
containing the checking error code).
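These interpretation rules can be expressed as a small helper. This is a sketch of the broad categories only; real ISO 7816-4 status words carry more detail than the strings returned here:

```java
public class ApduResponseDemo {
    // Interpret the trailer (SW1, SW2) of an APDU Response,
    // following the categories described above.
    public static String status(int sw1, int sw2) {
        if (sw1 == 0x90 && sw2 == 0x00) return "complete";
        if (sw1 == 0x61) return "complete, " + sw2 + " more bytes available";
        if (sw1 == 0x62 || sw1 == 0x63) return "warning 0x" + Integer.toHexString(sw2);
        if (sw1 == 0x64 || sw1 == 0x65) return "execution error";
        if (sw1 >= 0x67 && sw1 <= 0x6F) return "checking error";
        return "unknown";
    }

    public static void main(String[] args) {
        System.out.println(status(0x90, 0x00)); // complete
        System.out.println(status(0x61, 0x10)); // complete, 16 more bytes available
        System.out.println(status(0x6A, 0x82)); // checking error
    }
}
```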
2.3 Elements involved in the development and life cycle of a
Java card applet
For a Java card applet application, a series of elements and
concepts are involved, of which the most important are: the Java
Card Virtual Machine – JCVM, the Java Card Application
Programming Interface – JCAPI, the Java Card Runtime Environment –
JCRE, the life cycle of the JCVM and of the Java card applet, Java Card
sessions, logical channels, the isolation and sharing of applet objects,
memory and object management, and persistent transactions.
For the Java Card platform, the JCVM is divided into two parts. One
part is external to the physical card and is used as a development tool: it
converts, uploads and verifies the Java classes that have been compiled
with a normal Java compiler. The result of this external part is that, from
normally compiled Java byte code, a binary executable CAP – Converted
Applet – file is produced, which will be executed by the JCVM on the card.
The other part of the JCVM resides on the card and is used to interpret
the binary code produced by the first part and to manage objects and
classes. The on-card part of the JCVM serves, by analogy, approximately
the same purpose as a JVM – Java Virtual Machine – on a desktop
computer. Of course, the JCVM has a series of limitations, both of
language syntax and of program structure. As language-syntax
limitations, one can mention the lack of support for some keywords
(native, synchronized, transient, volatile, strictfp), for some types
(double, float, long) and for some classes, interfaces and exceptions (the
majority of the classes, interfaces and exceptions from the packages
java.io, java.lang, java.util). As structural limitations, a package cannot
contain more than 255 classes and a class cannot directly or indirectly
implement more than 15 interfaces. More details are presented in the
specification of the virtual machine for the Java Card platform [JCVM03].
Briefly it is mentioned as follows:
From the package java.io there is kept only the IOException
class to complete the hierarchy of classes concerning the
exceptions from Remote Method Invocation;
From the package java.lang there is kept the simplified version
of the classes Exception, Object şi Throwable and it is
introduced the class CardException;
From the package java.rmi there is kept the Remote interface
and the RemoteException class;
The package javacard.framework is introduced; it contains
interfaces (ISO7816 – contains constants used by the standard,
MultiSelectable – implemented by applets that accept concurrent
selection, PIN – represents a Personal Identification Number,
Shareable – used for objects that can be shared among the
applets on the card), classes (AID – represents the unique
Application Identifier of an applet according to ISO 7816-5,
APDU – encapsulates the Application Protocol Data Unit
presented in the paragraphs above, as in ISO 7816-4,
Applet – the abstract class that defines the applet application
residing on the card, JCSystem – contains specific methods for
controlling the life cycle of an applet, OwnerPIN – an
implementation of the PIN interface, Util – contains methods
such as arrayCompare() and arrayCopy() for handling the byte
arrays in the smart card memory) and exceptions
(APDUException, ISOException, SystemException,
TransactionException, CardException) intensively used when
developing applets for the Java Card platform;
The package javacard.framework.service is introduced; it
contains interfaces (Service – the basic interface for a service
used by the applet to process APDU commands and responses
through methods such as processCommand(), processDataIn()
and processDataOut(), RemoteService – interface for remote
access to the card services through RMI, SecurityService –
extends the Service interface and provides methods such as
isAuthenticated(), isChannelSecure() or isCommandSecure()
for verifying the current security state), classes (BasicService –
a default implementation of the Service interface that provides
helper methods for collaborating with different services and for
editing APDUs, Dispatcher – used when the same APDU command
is intended to be processed by several services) and the
exception ServiceException for the management of the
different services;
The package javacard.security is introduced; it contains
interfaces (Key, PrivateKey, PublicKey, SecretKey and sub-
interfaces specialized per algorithm, such as AESKey, DESKey,
DSAKey, DSAPrivateKey, DSAPublicKey, ECKey, ECPrivateKey,
ECPublicKey, RSAPrivateCrtKey, RSAPrivateKey, RSAPublicKey),
classes (Checksum – abstract class for cyclic redundancy check
algorithms, KeyAgreement – base class for key exchange
algorithms of Diffie-Hellman type, KeyBuilder – provides the
means of key creation, KeyPair – a container that holds a
private/public key pair, MessageDigest – base class for hash
algorithms, RandomData – base class for random number
generation, Signature – abstract class for digital signatures)
and the exception CryptoException for the different
cryptographic algorithms with public and private keys, digital
signatures, hash functions and cyclic redundancy checks (CRC);
Two extension packages, javacardx.crypto and javacardx.rmi,
are also introduced.
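The byte-array helpers in javacard.framework.Util are worth a closer look. The following Python sketch is only a conceptual model of their semantics, not Java Card code; the return-value convention of array_copy (destination offset plus length) follows the Util.arrayCopy documentation, and the -1/0/1 convention of array_compare is how Util.arrayCompare reports ordering.

```python
def array_copy(src, src_off, dest, dest_off, length):
    """Model of Util.arrayCopy: copies length bytes and returns destOff + length."""
    dest[dest_off:dest_off + length] = src[src_off:src_off + length]
    return dest_off + length

def array_compare(a, a_off, b, b_off, length):
    """Model of Util.arrayCompare: 0 when the two slices are equal,
    -1 or 1 depending on the first differing byte."""
    for i in range(length):
        if a[a_off + i] != b[b_off + i]:
            return -1 if a[a_off + i] < b[b_off + i] else 1
    return 0
```

On the card these methods operate on byte arrays in smart card memory; the applet in annex 9.1 relies on the same offset/length calling convention.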
[Figure: life cycle of a Java Card applet – the JCRE, on top of the card
operating system, calls the applet methods install(), register(),
select(), process() and deselect().]
deselecting the applet each time. In its turn, an applet may receive
several APDU commands pseudo-simultaneously, because it may be designed
to be multi-selectable, meaning that it implements the methods of the
interface javacard.framework.MultiSelectable. Practically, this means
that if there are two different APDU commands, each one on a different
logical channel, one applet may interpret both of them or two different
applets may each process one of the two commands.
An important concept available on the Java Card platform is that of
persistent transactions. As in databases, the card operating system
should apply atomic modification of certain memory zones, meaning that
the fields of an object in non-volatile memory are either all modified
at the same time or not modified at all. This is achieved through
JCSystem.beginTransaction(), JCSystem.commitTransaction() and
JCSystem.abortTransaction(). The specifications mention that the JCRE
does not support nested transactions.
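The atomicity guaranteed by these three methods can be illustrated with a small Python model. This is a conceptual sketch only – the class name and the snapshot mechanism are illustrative, not the JCRE implementation:

```python
class TransactionalMemory:
    """Toy model of Java Card persistent-transaction semantics:
    JCSystem.beginTransaction / commitTransaction / abortTransaction."""

    def __init__(self):
        self._fields = {}      # stands in for object fields in non-volatile memory
        self._snapshot = None  # pre-transaction copy, used for rollback

    def begin_transaction(self):
        if self._snapshot is not None:
            # the JCRE does not support nested transactions
            raise RuntimeError("nested transactions are not supported")
        self._snapshot = dict(self._fields)

    def commit_transaction(self):
        self._snapshot = None  # keep all the modifications

    def abort_transaction(self):
        self._fields = self._snapshot  # roll back to the pre-transaction state
        self._snapshot = None

    def write(self, field, value):
        self._fields[field] = value

    def read(self, field):
        return self._fields.get(field)
```

Either every write between begin and commit survives, or (on abort) none of them does – exactly the all-or-nothing guarantee described above.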
This practical example will be deployed following the Message-Passing
model. As for the JCRMI and SATSA models, because of space limits they
will be analyzed in future work. In order to develop a Java Card
application one needs a Java 2 Standard Edition compiler, preferably
version 1.4.1 [J2SE04], the Sun Java Card Development Toolkit – SJCDT
[JCDT04], [JCDT04a] and, optionally, an integrated development
environment – IDE, such as Borland JBuilder, NetBeans or IntelliJ IDEA.
The source code, with explanations in English, is taken from the
examples provided with the SJCDT and is presented entirely in the
annexes. The source code, although it contains several syntax
modifications, remains the licensed property of Sun Microsystems. This
paragraph is split in two parts: one which explains how to compile,
convert and upload the example on the card, and one which explains
significant parts of the code and interprets the results.
2.4.1 Steps taken for compiling and uploading an applet on the card
and implement JCRE according to the specifications. The most “real” way,
for which the procedure is similar to that used in practice, is
thoroughly described in this chapter and is the way C-JCRE is used.
The steps taken for compiling, uploading and simulating a card
applet are:
Saving the source code from the annex into the file Wallet.java
in the directory structure
‘Wallet1\com\sun\javacard\samples\wallet’;
Step 1 – Compilation – change into the directory
‘Wallet1\com\sun\javacard\samples\wallet’ and then issue the
commands:
o SET _CLASSES=.;%JC_HOME%\lib\apduio.jar;%JC_HOME%\lib\apdutool.jar;%JC_HOME%\lib\jcwde.jar;%JC_HOME%\lib\converter.jar;%JC_HOME%\lib\scriptgen.jar;%JC_HOME%\lib\offcardverifier.jar;%JC_HOME%\lib\api.jar;%JC_HOME%\lib\installer.jar;%JC_HOME%\lib\capdump.jar;%JC_HOME%\lib\javacardframework.jar;%JC_HOME%\samples\classes;%CLASSPATH%;
o %JAVA_HOME%\bin\javac.exe -g -classpath %_CLASSES% com\sun\javacard\samples\wallet\Wallet.java
Step 2 – Editing the configuration file for card uploading
– in the directory Wallet1 from the directory structure mentioned
at step 1, a text configuration file named ’wallet.app’ is
created, containing the following:
o // applet AID
com.sun.javacard.installer.InstallerApplet
0xa0:0x0:0x0:0x0:0x62:0x3:0x1:0x8:0x1
com.sun.javacard.samples.wallet.Wallet
0xa0:0x0:0x0:0x0:0x62:0x3:0x1:0xc:0x6:0x1
Step 3 – Conversion of the Java bytecode class into a
binary file that can be interpreted by the on-card JCVM
– with the configuration file from step 2, the following
instruction is called at the command prompt, also from the
directory Wallet1:
%JC_HOME%\bin\converter.bat -config com\sun\javacard\samples\wallet\Wallet.opt
that contains three files. The file with the extension CAP –
Converted Applet – is the binary form which will be understood
by the on-card JCVM. The file with the extension JCA – Java Card
Assembly – is the textual assembler representation of the
binary compressed CAP file that will be uploaded on the card.
The file with the extension EXP – export – unlike the CAP file
and like the JCA file, is not uploaded on the card. EXP files
are used by the conversion tool (converter.bat) to also convert
the necessary elements from the classes or packages imported by
the bytecode class (Wallet.class).
Step 4 – Verifying the CAP, JCA and EXP files – this step is
OPTIONAL. It is executed separately for each of the EXP and CAP
files resulting from step 3, but not before copying the whole
structure of EXP files, including the directories, from the
directory api_export_files of the SJCDT distribution [JCDT04],
[JCDT04a] into the directory Wallet1. Of course, it is also
possible to generate from the “assembler source code” of the
JCA file an equivalent CAP file with the command:
%JC_HOME%\bin\capgen.bat com\sun\javacard\samples\wallet\javacard\wallet.jca
The commands for verifying the EXP and CAP files are:
o %JC_HOME%\bin\verifyexp.bat com\sun\javacard\samples\wallet\javacard\wallet.exp
o %JC_HOME%\bin\verifycap.bat com\sun\javacard\samples\wallet\javacard\wallet.exp %JC_HOME%\api_export_files\java\lang\javacard\lang.exp %JC_HOME%\api_export_files\javacard\framework\javacard\framework.exp com\sun\javacard\samples\wallet\javacard\wallet.cap
Step 5 – Uploading the binary executable file into the
permanent memory of the card when the card is
manufactured – this step is EXCLUSIVE, meaning that either
step 5 is executed and the process stops here, or step 5 is
skipped and we pass directly to step 6. In step 5, a mask file
is generated from the JCA file (resulting from step 3); the
mask is burned into the non-volatile memory of the card when
the card is manufactured and will disappear from the memory
only with the physical destruction of the card. The command for
generating the mask file is (the file maskgen.bat is available
only in some distributions, directly to collaborating card
producers):
%JC_HOME%\bin\maskgen.bat cref com\sun\javacard\samples\wallet\javacard\wallet.jca
Step 6 – Uploading the binary executable file into the volatile
memory of the card – executed only if step 5 was skipped. The
so-called off-card installer is run. The first command is:
o %JC_HOME%\bin\scriptgen.bat -o Wallet.scr com\sun\javacard\samples\wallet\javacard\wallet.cap
This command creates the APDU command script file Wallet.scr
from the binary CAP file (wallet.cap). The resulting file –
Wallet.scr – must be adjusted; it contains the APDU commands
necessary to upload the applet into the card memory. The
content of the file Wallet.scr will be briefly explained in
this chapter. The adjustment is made by editing Wallet.scr in a
text editor: the line ’powerup;’ is added at the beginning of
the file and ’powerdown;’ at the end of the file.
Then, in a new command prompt window, the program which
emulates the JCRE must be started. There are two programs that
can emulate a JCRE: one implemented in Java (jcwde.bat) and one
implemented in C (cref.exe). The one in C, called C-JCRE, is
the most important and at the same time represents the
reference implementation of the JCRE. It is executed from the
file cref.exe in the ’bin’ directory of the Java Card
development kit distribution [JCDT04a]. So, in the new command
prompt window the following command is written:
o cref.exe -o eeprom1
The command saves the EEPROM memory of the card after it has
been modified by APDU commands. It launches the JCRE emulation,
and the JCRE listens on the pre-defined TCP/IP port 9025 in
order to receive APDU commands. The JCRE emulation stops when
it receives a powerdown; from the APDU command script file
(extension .scr).
Now, in the old command prompt window, the following command is run:
o %JC_HOME%\bin\apdutool.bat Wallet.scr > Wallet.scr.out
Through this command, with the help of the program launched by
apdutool.bat, the APDU commands are transmitted from the host
application to the JCRE. Meanwhile the JCRE runs in its own
window and, after the received APDU commands modify the EEPROM
memory, the content is saved in the file eeprom1. The file
Wallet.scr.out includes the APDU responses from the applet that
runs on top of C-JCRE.
Step 7 – Simulating the interaction between the host
application and the card applet through the CAD – executed
after step 6. At step 6 the JCRE simulation ended; now it must
be relaunched with the command:
o cref.exe -i eeprom1 -o eeprom1
The command takes over the memory image from the eeprom1 file,
modifies it according to the received APDU commands and saves
it again, with the modifications, in the same eeprom1 file. On
the same port the JCRE simulates receiving APDU commands, and
through the command
o %JC_HOME%\bin\apdutool.bat demoWallet.scr > demoWallet.scr.cjcre.out
the APDU commands for testing and simulating the applet are
sent, while the APDU responses are collected in the file
demoWallet.scr.cjcre.out.
As can be seen in the source code presented in the first annex, the
applet has to import the package javacard.framework.*, to extend the
Applet class and to implement the methods mentioned for the life cycle
of an applet. Table 9.1 presents the draft of the application:
structure of commands depends mostly upon the service that the applet
application provides. For instance, an applet providing an electronic
wallet service should offer sub-services such as debit and credit
transactions for the amount of money on the card, verifying the card
balance, and securing the access to the applet by a PIN. If it is a
loyalty applet for a gym or a medical insurance applet, then it should
offer services such as: personal identification information, the number
of accesses to a location, the person legally responsible for the card
owner, diseases.
Practically, for the Wallet electronic wallet application, several
models of APDU commands are given, designed by those who wrote the
source code of the applet:
PIN verification
Java private method | CLA (1 byte) | INS (1 byte) | P1 (1 byte) | P2 (1 byte) | Lc (1 byte) | Data Field (5 bytes) | Le
verify() | 0x80 | 0x20 | 0x00 | 0x00 | 0x05 | 0x01 0x02 0x03 0x04 0x05 | 0x02
CLA with the value 80 hex means that we intend to access the
electronic wallet application; INS with the value 20 hex means
that the method verify() is to be executed; P1 and P2 are not
defined; Lc has the value 5, i.e. the Data Field will have 5
bytes; the Data Field contains in clear the PIN 12345, although
in real applications this should be encrypted; and Le has the
value 2, meaning that at most 2 bytes are expected as an
answer;
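The row above can be assembled byte by byte on the host side. The following Python sketch (the helper name is illustrative) reproduces exactly the table: CLA=0x80, INS=0x20, P1=P2=0x00, Lc prefixing the PIN digits, and Le=0x02:

```python
def build_verify_apdu(pin_digits, le=0x02):
    """Build the PIN-verification APDU command from the table above:
    CLA 0x80 addresses the wallet application, INS 0x20 asks for verify(),
    Lc announces the number of PIN bytes that follow."""
    data = bytes(pin_digits)
    header = bytes([0x80, 0x20, 0x00, 0x00, len(data)])
    return header + data + bytes([le])
```

For the PIN 12345, the resulting 11-byte sequence matches the verify() line of the script in annex 9.2.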
APDU command for debit – e.g. spending money stored on the
card
Java private method | CLA (1 byte) | INS (1 byte) | P1 (1 byte) | P2 (1 byte) | Lc (1 byte) | Data Field (1 byte) | Le
debit() | 0x80 | 0x40 | 0x00 | 0x00 | 0x01 | 0x64 | 0x7F
It can be noticed in annex 9.1 that the constructor allocates the
objects needed for the entire life cycle of the applet. The static
method install() should then directly or indirectly call the static
method register(), because after the JCRE installs the applet, the
applet must in its turn register itself with the JCRE. In this case,
the register() method is indirectly called from the install() method
through the constructor. Each time the JCRE receives an APDU command
from the host application, it calls the process() method. The source
code in the annex is readable and “self-explanatory” for a Java
programmer thanks to its comments.
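The install()/register()/process() contract described above can be modelled in a few lines of Python. This is only a toy model with hypothetical names – the real applet is the Java source in annex 9.1; the INS value 0x50 for reading the balance is taken from the script in annex 9.2:

```python
class ToyJCRE:
    """Minimal stand-in for the JCRE applet registry."""
    def __init__(self):
        self.registry = []

    def register(self, applet):
        self.registry.append(applet)

class ToyWallet:
    @staticmethod
    def install(jcre):
        # the JCRE calls install(); the constructor must end up calling register()
        ToyWallet(jcre)

    def __init__(self, jcre):
        self.balance = 0
        jcre.register(self)   # register() is called indirectly, via the constructor

    def process(self, apdu):
        # every APDU command received by the JCRE is routed here; dispatch on INS
        ins = apdu[1]
        if ins == 0x50:       # "get balance", as in annex 9.2
            return self.balance
        raise ValueError("SW_INS_NOT_SUPPORTED")
```

The flow mirrors the figure earlier in the chapter: installation registers the instance, and afterwards every command reaches the applet only through process().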
Annex 9.3 presents the APDU commands sent by the host to the JCRE;
immediately after the separator ’,’ follows the APDU response.
Consider the following lines from Table 9.2:
Table 9.2. Interpretation of APDU commands and responses from the log
file in Annex 9.3.
CLA: 80, INS: 40, P1: 00, P2: 00, Lc: 01, 64, Le: 00,
SW1: 6a, SW2: 85
CLA: 80, INS: 30, P1: 00, P2: 00, Lc: 01, 64, Le: 00,
SW1: 90, SW2: 00
„final static short SW_NEGATIVE_BALANCE = 0x6A85;” as in the source
code in annex 9.1. This is the way all the results and the source code
in the annexes of this presentation can be observed and interpreted.
The chosen example is quite simple and does not use specific
cryptography elements or atomic transactions, as a practical
application normally would.
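Reading such logs is easier with a small decoder. In this Python sketch, only SW_NO_ERROR (standard ISO 7816) and the wallet's SW_NEGATIVE_BALANCE are confirmed by the text above; the remaining labels are the usual names from Sun's Wallet sample and are given here as assumptions:

```python
STATUS_WORDS = {
    0x9000: "SW_NO_ERROR",                    # ISO 7816: command completed normally
    0x6A85: "SW_NEGATIVE_BALANCE",            # from annex 9.1
    0x6A83: "SW_INVALID_TRANSACTION_AMOUNT",  # assumed label (Sun Wallet sample)
    0x6300: "SW_VERIFICATION_FAILED",         # assumed label (Sun Wallet sample)
    0x6700: "SW_WRONG_LENGTH",                # ISO 7816
}

def decode_sw(sw1, sw2):
    """Turn the SW1/SW2 pair from the log into a readable label."""
    sw = (sw1 << 8) | sw2
    return STATUS_WORDS.get(sw, "unknown status word 0x%04X" % sw)
```

Applied to the two log lines of Table 9.2, the decoder labels the first response as a rejected debit (negative balance) and the second as a successful credit.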
2.5 Conclusions
Before Java Card appeared, smart card software depended on the
manufacturers. Most smart card development kits were card- and
reader-specific. Some externalized the card and reader descriptions so
that the buyer of the kit could adapt the software to new cards and
readers. Also, most smart card systems had been closed systems,
consisting of a specific card from a card manufacturer working with a
specific terminal from a terminal manufacturer; sometimes the same
company manufactured both the card and the reader. As a result, the
interoperability specified on paper in the standards had rarely been
proved. Now, developers no longer need to worry about the diversity of
smart card manufacturers and operating systems, as long as they are
using the Java Card platform.
References:
http://java.sun.com/products/javacard/specs.html
[JCRE03] Runtime Environment Specification 2.2.1, October 2003, http://java.sun.com/products/javacard/specs.html
[JCAP03] Application Programming Specification 2.2.1, October 2003, http://java.sun.com/products/javacard/specs.html
[MANU03] Programming Manual, October 2003, Application Programming Notes 2.2.1, included in [JCDT04a]
[MANU03a] User Manual, October 2003, Development Kit User Guide 2.2.1, included in [JCDT04a]
[POCA03] Paul Pocatilu, Cristian Toma, Mobile Applications Quality, International Conference “Science and economic education system role in development from Republic of Moldavia”, Chişinău, September 2003, pp. 474-478
[SCTI98] Scott Guthery, Tim Jurgensen, „Smart Card Developer’s Kit”, Macmillan Computer Publishing, ISBN 1578700272, USA, 1998: http://unix.be.eu.org/docs/smart-card-developer-kit/ewtoc.html
[TOMA05] Cristian Toma, Secure Protocol in Identity Management using Smart Cards, Revista “Informatica Economica”, vol. 9, no. 2, Bucharest, 2005, pp. 135-140
ANNEX 9.1: Source code of electronic wallet applet for Java Card platform
package com.sun.javacard.samples.wallet;
import javacard.framework.*;
//import javacardx.framework.*;
byte aLen = bArray[bOffset]; // applet data length
// the installation parameters contain the PIN initialization value
pin.update(bArray, (short)(bOffset+1), aLen);
register();
} // end of the constructor
buffer[ISO7816.OFFSET_CLA] = (byte)(buffer[ISO7816.OFFSET_CLA] & (byte)0xFC);
if ((buffer[ISO7816.OFFSET_CLA] == 0) &&
    (buffer[ISO7816.OFFSET_INS] == (byte)(0xA4))) return;
// verify that the rest of the commands have the
// correct CLA byte, which specifies the command structure
if (buffer[ISO7816.OFFSET_CLA] != Wallet_CLA)
    ISOException.throwIt(ISO7816.SW_CLA_NOT_SUPPORTED);
switch (buffer[ISO7816.OFFSET_INS]) {
case GET_BALANCE: getBalance(apdu); return;
case DEBIT: debit(apdu); return;
case CREDIT: credit(apdu); return;
case VERIFY: verify(apdu); return;
default:
ISOException.throwIt(ISO7816.SW_INS_NOT_SUPPORTED);
}
} // end of process method
ISOException.throwIt(SW_INVALID_TRANSACTION_AMOUNT);
byte byteRead = (byte)(apdu.setIncomingAndReceive());
if (( numBytes != 1 ) || (byteRead != 1))
ISOException.throwIt(ISO7816.SW_WRONG_LENGTH);
// get debit amount
byte debitAmount = buffer[ISO7816.OFFSET_CDATA];
// check debit amount
if ((debitAmount > MAX_TRANSACTION_AMOUNT)||
( debitAmount < 0 ))
ISOException.throwIt(SW_INVALID_TRANSACTION_AMOUNT);
// check the new balance
if ((short)( balance - debitAmount ) < (short)0)
ISOException.throwIt(SW_NEGATIVE_BALANCE);
balance = (short) (balance - debitAmount);
} // end of debit method
// check pin; the PIN data is read into the APDU buffer
// at the offset ISO7816.OFFSET_CDATA, the PIN data length = byteRead
if (pin.check(buffer, ISO7816.OFFSET_CDATA, byteRead) == false)
    ISOException.throwIt(SW_VERIFICATION_FAILED);
ANNEX 9.2: APDU commands and expected responses from the test-simulation file demoWallet1.scr
//////////////////////////////////////////////////////////////
// Select all installed Applets
//////////////////////////////////////////////////////////////
powerup;
//////////////////////////////////////////////////////////////
// Initialize Wallet
//////////////////////////////////////////////////////////////
// Select Wallet
0x00 0xA4 0x04 0x00 0x0a 0xa0 0x0 0x0 0x0 0x62 0x3 0x1 0xc 0x6 0x1 0x7F;
// 90 00 = SW_NO_ERROR
//Get Balance
0x80 0x50 0x00 0x00 0x00 0x02;
//0x00 0x64 0x9000 = Balance = 100 and SW_NO_ERROR
//Get Balance
0x80 0x50 0x00 0x00 0x00 0x02;
//0x00 0x32 0x9000 = Balance = 50 and SW_NO_ERROR
//Get Balance
0x80 0x50 0x00 0x00 0x00 0x02;
//0x00 0x32 0x9000 = Balance = 50 and SW_NO_ERROR
//Get Balance
0x80 0x50 0x00 0x00 0x00 0x02;
//0x00 0x32 0x9000 = Balance = 50 and SW_NO_ERROR
//Get Balance
0x80 0x50 0x00 0x00 0x00 0x02;
//0x00 0x32 0x9000 = Balance = 50 and SW_NO_ERROR
0x80 0x20 0x00 0x00 0x05 0x01 0x02 0x03 0x04 0x05 0x7F;
//0x9000 = SW_NO_ERROR
//Get balance
0x80 0x50 0x00 0x00 0x00 0x02;
//0x00 0x32 0x9000 = Balance = 50 and SW_NO_ERROR
ANNEX 9.3: APDU responses received from the C-JCRE when it was queried
with APDU commands – file demoWallet1.scr.cjcre.out
Java Card 2.2.1 APDU Tool, Version 1.3
Copyright 2003 Sun Microsystems, Inc. All rights reserved. Use
is subject to license terms.
Opening connection to localhost on port 9025.
Connected.
Received ATR = 0x3b 0xf0 0x11 0x00 0xff 0x00
CLA: 00, INS: a4, P1: 04, P2: 00, Lc: 09, a0, 00, 00, 00, 62,
03, 01, 08, 01, Le: 00, SW1: 90, SW2: 00
CLA: 80, INS: b8, P1: 00, P2: 00, Lc: 14, 0a, a0, 00, 00, 00,
62, 03, 01, 0c, 06, 01, 08, 00, 00, 05, 01, 02, 03, 04, 05, Le:
0a, a0, 00, 00, 00, 62, 03, 01, 0c, 06, 01, SW1: 90, SW2: 00
CLA: 00, INS: a4, P1: 04, P2: 00, Lc: 0a, a0, 00, 00, 00, 62,
03, 01, 0c, 06, 01, Le: 00, SW1: 90, SW2: 00
CLA: 80, INS: 20, P1: 00, P2: 00, Lc: 05, 01, 02, 03, 04, 05,
Le: 00, SW1: 90, SW2: 00
CLA: 80, INS: 50, P1: 00, P2: 00, Lc: 00, Le: 02, 00, 00, SW1:
90, SW2: 00
CLA: 80, INS: 40, P1: 00, P2: 00, Lc: 01, 64, Le: 00, SW1: 6a,
SW2: 85
CLA: 80, INS: 30, P1: 00, P2: 00, Lc: 01, 64, Le: 00, SW1: 90,
SW2: 00
CLA: 80, INS: 50, P1: 00, P2: 00, Lc: 00, Le: 02, 00, 64, SW1:
90, SW2: 00
CLA: 80, INS: 40, P1: 00, P2: 00, Lc: 01, 32, Le: 00, SW1: 90,
SW2: 00
CLA: 80, INS: 50, P1: 00, P2: 00, Lc: 00, Le: 02, 00, 32, SW1:
90, SW2: 00
CLA: 80, INS: 30, P1: 00, P2: 00, Lc: 01, 80, Le: 00, SW1: 6a,
SW2: 83
CLA: 80, INS: 50, P1: 00, P2: 00, Lc: 00, Le: 02, 00, 32, SW1:
90, SW2: 00
CLA: 80, INS: 40, P1: 00, P2: 00, Lc: 01, 33, Le: 00, SW1: 6a,
SW2: 85
CLA: 80, INS: 50, P1: 00, P2: 00, Lc: 00, Le: 02, 00, 32, SW1:
90, SW2: 00
CLA: 80, INS: 40, P1: 00, P2: 00, Lc: 01, 80, Le: 00, SW1: 6a,
SW2: 83
CLA: 80, INS: 50, P1: 00, P2: 00, Lc: 00, Le: 02, 00, 32, SW1:
90, SW2: 00
CLA: 00, INS: a4, P1: 04, P2: 00, Lc: 0a, a0, 00, 00, 00, 62,
03, 01, 0c, 06, 01, Le: 00, SW1: 90, SW2: 00
CLA: 80, INS: 30, P1: 00, P2: 00, Lc: 01, 7f, Le: 00, SW1: 63,
SW2: 01
CLA: 80, INS: 20, P1: 00, P2: 00, Lc: 04, 01, 03, 02, 66, Le:
00, SW1: 63, SW2: 00
CLA: 80, INS: 20, P1: 00, P2: 00, Lc: 05, 01, 02, 03, 04, 05,
Le: 00, SW1: 90, SW2: 00
CLA: 80, INS: 50, P1: 00, P2: 00, Lc: 00, Le: 00, SW1: 67, SW2:
00
CLA: 80, INS: 50, P1: 00, P2: 00, Lc: 00, Le: 02, 00, 32, SW1:
90, SW2: 00
3. Secure Patterns and Smart-card Technologies used
in e-Commerce, e-Payment and e-Government
Cristian TOMA
Abstract: Most systems require a high level of security. From the
point of view of security, smart card technologies are very important,
because the same card can be used to access a network resource or to
encrypt and sign a confidential electronic message. This paper
highlights the diversity of smart card software development kits and
operating systems. The last part presents a new technology used for
developing applications which run on smart cards. Some security
patterns which can be used in information systems involving mobile
applications and smart cards are also highlighted.
3.1 Introduction
Nowadays most specialists agree that a card is smart only if it can
compute, only if it has a microprocessor or a microcontroller.
Following this approach, the difference between a smart card and a
card with only memory chips or magnetic stripes is that the latter can
only store data and cannot process it.
The terms E-Business and E-Commerce have many definitions in the IT
field. One definition is that E-Business is the integration of a
company's business, including products, procedures and services, over
the Internet [ANIT00]. Usually, in practice, a company turns its
business into an E-Business when it integrates its marketing, sales,
accounting, manufacturing and operations with its web site activities.
An E-Business uses the Internet as a resource for all business
activities.
The concern of this paper is also the security patterns used in
E-Commerce and E-Government. E-Commerce can be understood as a
component, a part of an E-Business.
The term electronic commerce or "E-Commerce" covers a wide variety of
on-line business activities for products and services, of
business-to-business – B2B – and business-to-consumer – B2C – type,
carried out through the Internet or even through intranets – private
networks, including mobile ones.
In the opinion of different specialists, eCommerce is divided in two
components:
Online Shopping – the activities that provide to the customer or
to the business partner information about the products or
services traded. This information helps them to be informed
and to take the proper decision regarding the buying process;
Online Purchasing – “E-Payment” – the activities through which a
customer or a company actually purchases a product or a
service over the Internet or over private networks. Another
use of on-line purchasing is described in [Anne00] as “a
metaphor used in business-to-business eCommerce for
providing customers with an online method of placing an order,
submitting a purchase order, or requesting a quote”.
Most smart card programming consists of writing programs on a host
computer that send commands to and receive results from
application-specific smart cards. These applications read data from
and write data to the smart card and make use of the computing
features of the processor on the smart card.
There are different providers of software and hardware platforms for
cards and for host applications. Sometimes the vendors of the
operating system provide tools in order to extend the capabilities of
the smart card. Examples of such situations include a closed-system
application where cost or a particularly high level of security is a
critical factor, or where a particular encryption algorithm is needed
to connect the smart card to an existing host system. In these
situations, smart card programmers write new operating system software
for the smart cards, partially in different programming languages or
completely in the assembly language of the processor on the smart
card.
A growing number of smart card software development kits – SDKs – and
application programming interfaces – APIs – make this an easy task
(table 9.3). Some of these are card- or card-reader-specific, but the
opening of the smart card application development marketplace is
beginning to force interoperability standards on the makers of smart
card system components, so that this is becoming less rather than more
of a problem.
Table 9.3. SDK and API technologies for smart cards
Product | Company | WWW
CryptOS | Litronic | www.litronic.com
JavaCard SDK | Sun Microsystems | java.sun.com/products/javacard/index.jsp
PC/SC | Microsoft | www.pcscworkgroup.com
IC-XCard | HealthData Resources | www.hdata.com
EZ Component | Strategic Analysis | www.sainc.com
IBM Smart Card | IBM | www.chipcard.ibm.com
Kapsch Card Development Tools | Kapsch | www.kapsch.co.at
3.3 Java technologies for smart-cards
The Java technologies for smart cards are compatible with all existing
standards. From the vendor's point of view, Java technologies
represent a reliable, secure and interoperable platform which can
store and run many applications on the same card. This is very
important, because the same card could be used at the same time in a
health insurance information system, but also as a passport or driving
license. A new application can be installed on the card without
modifications on the card vendor's side. From the point of view of
developers using the Java Card platform, the applications, called
applets, become easier and easier to debug, test and install. These
things increase the scalability and decrease the cost of development
and maintenance for applications.
Most important smart card vendors and producers implement and have
licenses for the Java Card technology platform. The vendors are
encouraged to validate their own Java Card implementations through the
Java Card Technology Compatibility Kit (TCK). Sun Microsystems
periodically provides a reference implementation of the Java Card
technology platform, plus development tools, grouped in the Java Card
Toolkit – JCT and the Java Card Protection Profile – JCPP.
Practically, the Java Card Toolkit is the simulator and the debugger,
while the Java Card Protection Profile provides a set of dedicated
security tools.
Sun Microsystems publishes on its web site the Java Card Platform
Specification – the specifications which the producers of smart cards
must respect and implement. The company also publishes the Java Card
Development Kit, which represents the reference implementation of
those specifications.
For the moment, the specifications are divided in three big parts:
The specifications for the Java Card Virtual Machine – JCVM –
define what a Java virtual machine implemented in a smart card
must look like, plus the structure of the "executable" applet
files and the Java language subset used;
The specifications for the Java Card Runtime Environment – JCRE –
define how the applets should behave when running in the JCVM.
The JCRE consists of the JCVM, the Java Card Framework and the
libraries with class hierarchies which can be used by applets;
The specifications for the Java Card API – Application
Programming Interface – define which syntax can be used and
with which classes and packages the developers should work.
A special category of smart cards is „Java Card S”. These cards are
identical with the Java ones; the only difference is that once the
cards are on the market, it is impossible to add new applet
applications or delete existing ones.
The community of vendors and producers tries to implement this
platform because it presents a lot of advantages:
Interoperability – once an applet is ready to run on a Java Card
platform, it can run on any card that carries the Java Card
platform. This means that if a company wants to modify the
information system it uses, it is not necessary to buy
another brand of cards;
Security – inherited from the structure of the object-oriented
Java language. In addition, there are standard cryptography
libraries which can be used in order to offer a high degree of
security;
Scalability – many applets can stay at the same time on the
same card in a secure manner. If the developed information
system requires a new application, the new applet is simply
loaded on the smart card;
Dynamicity – according to the needs of the end users, new
applets can be uploaded on the card even after it is on the
market;
Compatibility with existing standards – the Java platform does
not eliminate any of the existing standards for smart cards,
like ISO 7816 or EMV – Europay-MasterCard-Visa. The Java smart
card does not depend on the physical link with the card reader,
whether it is made through ISO/IEC 14443-4:2001 or not, and
does not depend on the microprocessor or on the memory chip of
the smart card;
Transparency – all specifications are made public and are free
of charge, as are the development tools on the web site.
3.4 E-commerce and security features
Before taking into account eCommerce models and, implicitly,
E-Payment models, it is better to have a general view of
eCommerce:
Conceptual Frameworks: REA Meta model, UMM;
General Frameworks: Biztalk Framework, Building Blocks,
ebXML Technical Architecture, FIPA, eCo Framework
Specification, IMPRIMATUR Business Model, STEP, Java EC
Framework, J2EE Framework, MPEG-21, OMG eCommerce
Domain Specifications, Open-EDI Reference Model (ISO 14662),
SPIRIT, TOGAF;
Trading Models: Ad Hoc Functional and Process Models,
Global Commerce Initiative & Protocol, cXL, Internet Open
Trading Protocol (IOTP), Open Applications Group Integration
Specification, Open Buying on the Internet (OBI), OBI Express,
RosettaNet, Secure Electronic Market Place for Europe
(SEMPER);
Payment models:
o Macro-payment electronic schemes:
3D Secure – VISA last technology – credit
card based solution;
SET – Secure Electronic Transaction –
Mastercard / VISA credit card based solution;
iKP – IBM, credit based solution;
CyberCash – CyberCash Inc., credit based
solution;
DigiCash – DigiCash Inc., eCash – cash
solution;
NetBill – CMU, e-payment transfer over
Internet – direct fund transfer;
FSTC E-check – Financial Services Technology
Consortium, eCheque – cheque solution;
Other – credit card, cash, cheque, direct fund
transfers schemes.
o Micro-payment electronic schemes:
Millicent – DEC-Digital Equipment Corporation,
eCash;
PayWord – Rivest and Shamir, eCash;
MicroMint – Rivest and Shamir, eCash;
NetCard – Anderson 1995, eCash;
NetBill – CMU, e-payment transfer over
Internet;
Mobile commerce models: OMA – MeT.
SET is a security specification designed to protect credit card
transactions on the Internet. It is not a payment protocol but rather a
set of security protocols allowing users to carry out credit card
transactions over an insecure network such as the Internet. It is supported
by a wide range of companies, including Visa, MasterCard, Microsoft and
Netscape. Figure 9.5 depicts the SET network architecture:
[Figure 9.5 – SET network architecture: cardholder, merchant, certificate
authority, issuer, acquirer and payment gateway connected through the
payment network]
The SET participants, as described in the original specification
document, are:
Cardholder – an authorized holder of the credit card issued by
the issuer;
Merchant – a person who has goods/services to sell;
Issuer – a financial institution that issues the credit card;
Acquirer – a financial institution that establishes an account
with the merchant and processes payment card authorizations
and payments. It provides the interface between multiple
issuers and a merchant, so that the merchant does not need to
deal with multiple issuers;
Payment gateway – connected to the acquirer, the payment
gateway interfaces between SET and the existing payment
networks in order to carry out payment functions;
Certification authority – a trusted authority which issues
X.509v3 certificates.
Before discussing the details of the SET protocol, it is important for a
developer to know and understand the dual signature method introduced
in SET.
The dual signature is an innovative method for resolving the following
problem:
The customer needs to send the order information (OI) to the
merchant and the payment information (PI) to the bank;
Ideally, the customer does not want the bank to know the OI,
nor the merchant to know the PI;
However, PI and OI must be linked in order to resolve disputes
if necessary (e.g., so that the customer can prove that the
order has been paid);
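The construction and both verifications can be sketched as follows. This is a minimal illustration with toy RSA parameters; the real SET specification uses full-size RSA keys with PKCS#1 padding and SHA-1 digests:

```python
import hashlib

def md(data: bytes) -> bytes:
    """Message digest (SHA-256 here; the original SET spec uses SHA-1)."""
    return hashlib.sha256(data).digest()

# Toy RSA key for illustration only.
p, q = 61, 53
n = p * q                            # modulus
e = 17                               # public exponent
d = pow(e, -1, (p - 1) * (q - 1))    # private exponent

def sign(digest: bytes) -> int:
    # Sign the digest reduced mod n (toy scheme, no PKCS#1 padding).
    return pow(int.from_bytes(digest, "big") % n, d, n)

def verify(digest: bytes, signature: int) -> bool:
    return pow(signature, e, n) == int.from_bytes(digest, "big") % n

OI = b"order: 1 book, 30 EUR"        # order information
PI = b"card 1234..., amount 30 EUR"  # payment information

OIMD, PIMD = md(OI), md(PI)
POMD = md(PIMD + OIMD)               # payment order message digest
dual_signature = sign(POMD)          # signed with the customer's private key

# Merchant side: receives OI, PIMD and the dual signature, never PI.
assert verify(md(PIMD + md(OI)), dual_signature)

# Payment gateway side: receives PI, OIMD and the dual signature, never OI.
assert verify(md(md(PI) + OIMD), dual_signature)
```

Both parties recompute the same POMD from the digest they were given plus the data they hold, so the link between OI and PI is provable without either party seeing the other's data.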
The steps are depicted in figure 9.6:
[Figure 9.6 – Dual signature construction: PI and OI are hashed into PIMD
and OIMD, the two digests are concatenated and hashed again into POMD,
which is encrypted with the customer's private key to produce the dual
signature]
Figures 9.7 and 9.8 depict how the payment request is created by the
cardholder and verified by the merchant.
[Figure 9.7 – Cardholder payment request: the PI and the dual signature
are encrypted with a temporary symmetric key Ks (DES); Ks is encrypted
with the bank's public key-exchange key KUb (RSA) into a digital envelope;
the encrypted data, OIMD and the digital envelope are sent to the payment
gateway via the merchant, while OI, the dual signature, PIMD and the
cardholder certificate are sent to the merchant]
[Figure 9.8 – Merchant verification: the merchant decrypts the dual
signature with the customer's public signature key KUc to recover POMD,
recomputes the digest from OI and PIMD, and compares the two]
Legend:
PI = Payment Information
OI = Order Information
OIMD = OI message digest
PIMD = PI message digest
POMD = Payment order message digest
MD = Message digest
E(DES) = Encryption by DES
E(RSA) = Encryption by RSA
Ks = Temporary symmetric key
KUb = Bank's public key-exchange key
KUc = Customer's public signature key
Figure 9.9 highlights the entire SET protocol:
[Figure 9.9 – SET message flow: payment request, authorization response,
inquiry request, inquiry response, capture request, capture response]
The detailed steps of the SET protocol, briefly depicted in figure 9.9, are:
Payment initialization:
o Request:
Having decided to buy something, the
cardholder sends a purchase initiation request
to the merchant;
o Response:
Merchant returns a response to the
cardholder. The response is digitally signed
using the merchant's private key;
Merchant also sends its certificate and the
payment gateway's certificate to the
cardholder;
Payment:
o Purchase Request:
After receiving the initiation response, the
cardholder verifies the certificates and obtains
the corresponding public keys;
The cardholder can then verify the merchant's
response (how?);
(By using merchant’s public key, decrypt
merchant’s signature to get the message
digest. Compare the extracted message
digest with the message digest of the
response);
Cardholder prepares order info. (OI) and
payment instruction (PI);
Cardholder generates a dual signature for OI
& PI;
Cardholder encrypts PI with a randomly
generated symmetric key A. Key A and
cardholder information is then encrypted with
the payment gateway public key;
Cardholder transmits to the merchant – figure
9.6:
• OI + dual signature + PIMD,
• PI + dual signature + OIMD (all
encrypted by using key A),
• key A + cardholder information (all
encrypted by using payment gateway
public key): refer to as PI’s digital
envelope,
• cardholder certificate;
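The digital-envelope step (symmetric encryption of the payload, asymmetric encryption of the symmetric key) can be sketched as follows. A hash-based XOR stream cipher and toy RSA parameters stand in for the DES and full-size RSA used by the real protocol:

```python
import hashlib
import os

# Toy RSA key pair of the payment gateway (illustration only).
p, q = 61, 53
n, e = p * q, 17
d = pow(e, -1, (p - 1) * (q - 1))

def stream(key: bytes, length: int) -> bytes:
    """Keystream derived from the key by hashing a counter (stands in for DES)."""
    out = b""
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key + counter.to_bytes(4, "big")).digest()
        counter += 1
    return out[:length]

def sym_encrypt(key: bytes, data: bytes) -> bytes:
    return bytes(a ^ b for a, b in zip(data, stream(key, len(data))))

sym_decrypt = sym_encrypt  # an XOR stream cipher is its own inverse

# Cardholder side: encrypt PI with a random key A, then seal key A
# with the gateway's public key -- the "digital envelope".
PI = b"card 1234..., amount 30 EUR"
key_A = os.urandom(1)  # toy size: must encode to an integer < n
envelope = pow(int.from_bytes(key_A, "big"), e, n)
encrypted_PI = sym_encrypt(key_A, PI)

# Payment gateway side: open the envelope, then decrypt PI.
recovered_A = pow(envelope, d, n).to_bytes(1, "big")
assert sym_decrypt(recovered_A, encrypted_PI) == PI
```

The merchant forwards the envelope and the encrypted PI untouched: only the gateway, holding the private exponent, can recover key A and read the payment instructions.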
o Purchase Response:
After receiving the purchase request, the
merchant verifies the cardholder certificate
and the cardholder's dual signature on OI;
(The merchant decrypts the dual signature by
using the cardholder's public key. The
merchant is provided with OI and PIMD, so
the dual signature can be verified by
comparing the previous result with
MD(MD(OI)+PIMD) – figure 9.6);
Merchant processes the request and forwards
PI to the payment gateway later;
Merchant creates a digitally signed purchase
response and forwards it to the cardholder
together with its certificates;
After receiving the purchase response, the
cardholder verifies the certificate and the
digital signature and stores the purchase
response;
Authorization:
o Request:
During the processing of the order, the
merchant will authorize the transaction by
creating an authorization request.
The authorization request includes the
amount to be authorized, the transaction
identity and other information about the
transaction.
The authorization request is encrypted by a
newly generated symmetric key B. Key B is
then encrypted by using the public key of the
payment gateway.
Note: only the payment gateway can get key
B and use it to decrypt the authorization
request.
The merchant sends to the payment gateway:
• the encrypted authorization request
and the encrypted key B.
• the encrypted payment instructions as
received from the cardholder.
• cardholder’s and merchant’s
certificates.
o Response:
After receiving the authorization request, the
payment gateway obtains key B by means of
decryption and uses it to decrypt the
authorization request;
It also verifies the validity of the merchant’s
certificates and the digital signature of the
request;
The payment gateway also obtains key A and
cardholder information by means of
decryption and uses key A to decrypt the
payment instructions, dual signature and
OIMD;
It then verifies the dual signature (how?);
The payment gateway is provided with PI and
OIMD so the dual signature can be verified by
computing MD(MD(PI)+OIMD);
Also, the payment gateway verifies that the
transaction identifier received from the
merchant matches with the one in the
cardholder PI;
Upon all successful verifications, the payment
gateway then sends an authorization request
to the issuer via existing payment system;
After receiving the authorization response
from the issuer, the payment gateway
generates an authorization response message
to the merchant. The message includes:
• Issuer’s response,
• Payment gateway certificate,
• optional capture token (to be
explained later);
The authorization response is signed using
the payment gateway's private key and is
then encrypted by a random symmetric
key C;
Key C is in turn encrypted using the
merchant's public key;
A capture token is generated, signed and
encrypted by using a random symmetric key
D;
Key D is encrypted by using payment
gateway’s public key;
Payment gateway sends to merchant:
• Signed authorization response
(encrypted by key C),
• Key C (encrypted by merchant’s public
key),
• Signed capture token (encrypted by
key D),
• Key D (encrypted by payment
gateway’s public key);
After receiving the message from the
payment gateway, the merchant obtains key
C by decryption and uses it to decrypt the
authorization response;
The merchant verifies the payment gateway
certificate and the digital signature of the
authorization response;
The merchant stores the authorization
response and the capture token to be used
for the later capture request;
The merchant also completes the merchant
order by shipping the goods or performing the
services;
Capture:
o Request:
Eventually the merchant will request payment,
i.e., perform the payment capture stage;
The merchant generates a capture request
including: the final amount of the transaction
and the transaction identifier from the OI;
The capture request is then encrypted by a
newly generated symmetric key E. Key E is
then encrypted by using the public key of the
payment gateway;
The merchant sends to the payment gateway:
• Signed capture request (encrypted by
using key E),
• Key E (encrypted by using payment
gateway’s public key),
• Signed capture token (encrypted by
using key D)
• Key D (encrypted by using payment
gateway’s public key) ,
• Merchant’s digital certificates;
o Response:
After receiving the capture request, the
payment gateway obtains key E by decryption
and uses it to decrypt the capture request;
The payment gateway also verifies the digital
signature of the capture request by using
merchant’s public key;
If there are capture tokens, the payment
gateway also decrypts them;
After all verifications succeed, the payment
gateway sends a clearing request to the
issuer via the existing system;
The payment gateway creates a capture
response. The response is encrypted by a
random symmetric key F. Key F is encrypted
by using merchant’s public key;
Payment gateway sends to the merchant:
encrypted capture response, encrypted key F
and its digital certificate;
Acceptability – whether the payment can be accepted in
different environments, e.g., not only by the issuer;
Transferability – the ability to transfer a payment without the
need of a third party, e.g., a bank;
Divisibility – the ability to divide a value V into an arbitrary
number of smaller values – "banknotes" – with a total value of V.
3.6 Conclusions
The informatics systems which interact with smart cards have an
advantage because the access to different databases and the time of
transactions can be considerably reduced. Moreover, some smart cards
contain non-volatile memories which provide a great advantage for the
development of secure systems and applications, because those memories
can store sensitive information such as digital certificates and symmetric
and asymmetric private keys. In order to improve the speed of
computations, this kind of card also has specialized cryptographic
coprocessors. The coprocessors execute complicated cryptographic
algorithms such as RSA, AES-Rijndael, 3DES or algorithms based on
elliptic curves. The interesting approaches for information systems are the
ones which involve m-applications and smart card applications in
e-business and e-government.
Considering these aspects, developers can take advantage of Java
Card technology when developing smart card applications. As a practical
approach it would be interesting to build applets for ID, driving license
and health-insurance smart cards, for encrypting and digitally signing
documents, for e-commerce, and for accessing critical resources in the
government and military fields.
References:
[IVAN02] Ion Ivan, Paul Pocatilu, Marius Popa, Cristian Toma, “The
Digital Signature and Data Security in e-commerce”, The
Economic Informatics Review Nr. 3/2002, Bucharest 2002.
[JCDT04] Tools Sun Java Card Development Toolkit 2.2.1:
http://java.sun.com/products/javacard/index.jsp
[JCDT04a] Sun Java Card Development Toolkit 2.2.1:
http://java.sun.com/products/javacard/dev_kit.html
[ISOI04] http://www.ttfn.net/techno/smartcards/iso7816_4.html
[J2SE04] Java 2 Standard Edition Software Development Kit:
http://java.sun.com/products/archive/j2se/1.4.1_07/
[JCVM03] Virtual Machine Specification 2.2.1, October 2003:
http://java.sun.com/products/javacard/specs.html
[JCRE03] Runtime Environment Specification 2.2.1, October 2003:
http://java.sun.com/products/javacard/specs.html
[JCAP03] Application Programming Specification 2.2.1, October
2003: http://java.sun.com/products/javacard/specs.html
[MANU03] Programming Manual, October 2003, Application
Programming Notes 2.2.1 included in [JCDT04a]
[MANU03a] User Manual, October 2003, Development Kit User
Guide 2.2.1 included in [JCDT04a]
[POCA04a] Paul Pocatilu, Cristian Toma, “Securing Mobile Commerce
Applications”, communication in – “The Central and East
European Conference in Business Information Systems”,
“Babeş-Bolyai” University, Cluj-Napoca, May 2004.
[TOMA05] Cristian TOMA, „Secure Protocol in Identity Management
using Smart Cards”, Revista “Informatica Economica”, vol.
9, Nr. 2, Bucuresti, 2005, p. 135 – 140.
[TOMA05a] Cristian Toma, "Smart Card Technologies in military
information systems", The 36-th International Scientific
Symposium of METRA, Editura METRA, Bucuresti, Mai
2005, p. 500-506.
[TOMA05b] Cristian Toma, "Secure architecture used in systems of
distributed applications", The 7-th International
Conference on Informatics in Economy, Academia of
Economic Studies Bucharest, Editura Economica-INFOREC,
Bucharest, May 2005, p. 1132-1138.
4. On-line Payment System e-Cash
Adrian Calugaru, Marius POPA, Cristian TOMA
4.1 Introduction
Digital economy development determined the wide enhancement of
electronic commerce. The traditional payment methods are still
found on electronic commerce web sites, especially in countries
that do not have a well-developed electronic payment system.
Cash is represented by banknotes and coins, being the most
widespread payment method in retail trade. Using cash supposes the
simultaneous physical presence of the two transaction partners, which
makes it impossible to use for Internet transactions.
The cheque represents a document used by a person to give an
order to a bank to pay a sum of money to a beneficiary. It is one of the
most insecure payment methods, especially because the Romanian law
does not specify a very simple method to retrieve the money in case the
buyer issues an uncovered cheque.
The payment order represents a document issued by the payer and
addressed to the bank that holds its account. Through this document the
bank has to pay a fixed amount to a beneficiary.
The promissory note represents a commitment of the issuer to pay
the beneficiary himself a sum of money at an established date.
The letter of credit aims to replace the credit given to a buyer with
the credit and the reputation of a bank that takes over, in place of this
buyer, the obligation to pay the seller the merchandise price. The payment
is conditioned by bringing a proof of delivery of the merchandise to the
buyer.
The traditional payment methods boil down to money transfer as
cash or through documents: cheques, payment orders etc. The payment
supposes opening an account and going to the bank to deposit and/or to
initiate the transfer into the trader's account. The payment confirmation
may or may not be requested by fax. The last and longest stage is
delivery, through the trader's distribution network or a specialized postal
service.
In electronic commerce, it is not necessary to go to the bank to pay the
suppliers of goods and services. The role of the bank is to transform
the cash into bits. Cash cannot be completely eliminated, but it will
be transformed more and more into electronic format.
At present, there are many payment systems. The most important
problem is security. Most messages sent by e-mail are not encrypted, so
anyone can intercept them. The current electronic payment standards use
encryption and digital signatures. Using the electronic signature, the
identity of a person who accesses a bank deposit or a credit card can be
proved.
The electronic money can be divided into two classes:
with identity;
anonymous.
As in traditional systems, the biggest problem consists in the
assurance that nobody can copy the digital money or steal the credit
card information. Electronic financial transactions between banks were
made even before the Internet, through SWIFT (Society for Worldwide
Interbank Financial Telecommunication).
The electronic payment systems must accomplish the following
requirements:
secure – it must permit safe financial transactions in open
networks such as the Internet. Unfortunately, electronic
money comes down to a simple file that can be copied. Copying
or "double spending" of the same sum of money must be
prevented by electronic payment systems;
anonymous – the identity of the clients and of the transactions
made must be protected;
convertible – the system users work with different banks, so it
is necessary that a currency issued by one bank be accepted by
another one;
usable – the payment system must be easy to use and widely
accepted. The traders who want to sell products on-line
have no chance if the clients do not agree with the idea of
doing business on the web;
scalable – a system is scalable if it can support new users and
resources without performance failures. The payment
system must permit clients and traders to integrate
themselves into the system without altering its infrastructure;
transferable – it refers to the capacity of an electronic bill to
start the money transfer from one account to another
without direct contact with a bank by the supplier or the client;
flexible – the system must accept alternative payment forms
depending on the guarantees asked by the parties involved in
the transaction, the time needed to make the payment,
performance requirements and the transaction value. The
infrastructure must support different payment methods,
including credit cards, personal cheques and anonymous
electronic money. These tools and payment methods must be
integrated into a common framework;
efficient – the term efficiency refers to the cost necessary to
make a transaction. An efficient electronic payment system
must be capable of assuring small costs in comparison with the
benefits;
integrable – the system has to support the existing
applications and to offer the means for integration with
other applications regardless of hardware platform or network;
reliable – the payment system must be permanently available
and must prevent possible errors.
The presented characteristics must be assured in order to obtain a
high-quality payment system.
The electronic commerce will evolve beyond a certain level only when
ordinary consumers perceive an electronic payment mechanism as
being as secure as the traditional one.
Internet payment – when an on-line selling system is set up, the
trader sells 24 hours per day, 7 days per week, everywhere the Internet
has reached. The potential buyers and clients will have access to the
latest information regarding the products, services, prices and their
availability. The trader will have to ensure that the informatics system is
always available, and he will operate the order management, invoicing,
payment processing and delivery.
Real-time payment solutions – with the exception of the off-line
cases, collecting the money resulting from an on-line sale supposes a
succession of interaction processes with banks and other financial
institutions. At present, the invoice payment is made with credit cards,
electronic money (e-cash), electronic cheques or smart cards, which are
the most important payment methods used in electronic commerce. The
payment methods are integrated at the trader's level into its informatics
system, or they are offered as an outsourced service by a commerce
services provider, which manages or intermediates the payments from
the third parties.
Credit card – it represents the most used payment form on the
Internet. Its use is simple: the clients who browse a web site and
decide to buy a product or service must introduce the credit card
information through an HTML form. The completed content, such as card
type, card number, owner's name and card expiry date, is sent to the
web site, where the information is collected and sent to the bank. If the
trader's site has a direct connection with the bank, then real-time
payment is possible, provided the credit covers the ordered goods' value.
The on-line transactions that use payment with cards are
cryptographically protected, and the encryption ensures that only the
bank and the credit card services provider will access the credit card
information.
A first phase implies some agreements with financial institutions,
using advanced cryptographic and authentication technologies for
securing the messages sent through the Internet. The trader must open a
bank account, offering on-line transaction services based on cards. The
cryptographic technology currently used, SSL (Secure Socket Layer),
removes the possibility that an intruder gets the card number, supposing
that he intercepts the encrypted data. The disadvantage is that SSL does
not allow the trader to verify that a person who uses the card in a
transaction is the card owner.
Also, SSL does not offer any way for the client to know whether the
trader's web site is authorized to accept credit card payments and
whether the site is not a pirate one, designed in order to collect data
about cards.
The problem was resolved through the appearance of a new technology
called SET (Secure Electronic Transaction), developed by MasterCard and
Visa. SET resolves the authentication problem through digital certificates
assigned to the client and the trader. SET offers bigger security than the
traditional one. To forbid the trader's access to the client's card number,
SET encrypts it in a way that assures access only for the client and the
authorized financial institutions.
Each of the actors involved in a transaction, such as trader, client or
financial institution, uses private SET certificates that have the role of
authentication, in addition to public keys associated to the certificates
that identify the other actors. In practice, a third-party company
(VeriSign) offers the digital certificate providing service to its clients, that
is, the credit card owners. Regarding the seller, the process is similar: at
the moment of the on-line purchase, before the data interchange for
starting the transaction, the software that includes the SET technology
validates the identity of the trader and of the credit card owner. The
validation process consists of verifying the certificates issued by
authorized providers of this kind of services.
E-invoice – the credit cards represent the most common solution in
B2C and B2B models. In the B2B sector, the transaction volume is bigger
than the transaction volume made through credit cards. Another reason is
that most companies have already used this tool in its classical form, and
changing the payment method would need a reorganization of the
economic process, which implies big costs. The payment procedure
through e-invoice is the following: the transaction value is automatically
sent to the suppliers through an informatics system. These respond with
an invoice that will be paid by different instruments. Secured methods are
needed in order to filter the access to the internal databases of the
company. The EDI (Electronic Data Interchange) standard offers an
infrastructure for this aim. The major problem consists of the commercial
law of each country, which should recognize the electronic invoice's
validity.
Electronic cheques (Internet cheques, NetCheque) – it is a
system developed at the Information Sciences Institute of the University
of Southern California. The buyer and the seller must have an account
opened on the NetCheque site. To assure security, identification through
the Kerberos protocol and a password is used. To pay through cheque,
special software must be installed at the client. The software works as a
cheque book. A client can send an encrypted cheque through this
software. The trader can cash it at the bank, or he can use the digital
cheque for another transaction with a supplier. A special account from the
network verifies the cheque's validity and sends an acceptance message
to the trader, who will deliver the goods. PayNow
Debit cards – they need the introduction of a personal
identification number (PIN) and the use of a hardware device that reads
the information on the cards. It is possible that they will be replaced by
the electronic chips used in smart cards, which will also replace the credit
cards.
E-cash – it uses a software application to save on disk the cash
equivalent in a digital form. The advantage of this system is given by the
money transfer cost, which is almost zero. To receive money it is
necessary to access a virtual pay office available on the web, or an ATM
machine where the money is cashed. The difficulty of this system is
represented by the security implementation that guarantees that the
money cannot be altered. The use of cryptographic technologies, digital
signatures and electronic signatures helps to reduce the fraud possibilities.
Another condition is that the e-cash must not reveal the identity of the
person who paid. The payment system does not need to have the bank as
intermediary. Some examples are offered in [www3].
E-Cash was implemented by the company DigiCash and represents
a payment system on the Internet based on the real money principle.
It was invented by David Chaum in Holland and uses public-key
cryptography, which assures both digital signatures and blind signatures.
The system is focused on assuring electronic money anonymity, and the
buyer and the seller must have an account opened at the same bank.
The electronic payment systems have some essential requirements
that must be accomplished. Among these are the following:
security – this means that two payments cannot be made at
the same time without falsification, and also the protocol
atomicity;
offline operability – if the system has offline operability, then
the transactions are executed by only two parties: the buyer
and the seller;
transferability – if the system has transferability, the users can
use the coins without needing to access the coin issuer in
order to verify them. Transferability implies anonymity in
most cases;
anonymity – it is a very important element for some users;
hardware independence – some working systems use
equipment to prevent intrusions and double payment, or to
protect the master key of the system;
scalability – if the system is scalable then it supports a bigger
number of users. The facility of adding and removing users
to or from the system is also important;
efficiency – the efficiency of each process, payment or money
withdrawal is an important factor for all the parties;
easy to use – the interface with the user is not a cryptographic
problem, but it is important for the payment system to be
practical.
The E-Cash system asks the following participants:
participant – any participant in the system;
issuer – participant who issues the e-coins;
user – participant who uses the e-coins to buy or sell
merchandise;
payer – participant who uses the e-coins to buy merchandise;
payment beneficiary - participant who receives the e-coins in
order to sell merchandise;
certification authority – participant who certifies the public
keys of the participants.
[Figure – E-Cash system: the bank validates coins; users deposit cash or
withdraw coins, and merchants deposit the coins they receive]
The serial number a must be different for each of the 50 coins. The
identification number is a combination of two parts that are generated
using a secret-splitting protocol; the identification information is
separated into two parts.
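The secret splitting of the identification information can be sketched with a simple XOR scheme (an illustrative assumption about the splitting protocol): neither half alone reveals anything about the identity, but combining the two halves recovers it.

```python
import os

def split_secret(secret: bytes) -> tuple[bytes, bytes]:
    """Split a secret into two XOR shares; each share alone looks random."""
    share1 = os.urandom(len(secret))
    share2 = bytes(a ^ b for a, b in zip(secret, share1))
    return share1, share2

def combine(share1: bytes, share2: bytes) -> bytes:
    """XOR the two shares back together to recover the secret."""
    return bytes(a ^ b for a, b in zip(share1, share2))

identity = b"customer #4711"
s1, s2 = split_secret(identity)
assert combine(s1, s2) == identity   # both halves together reveal the payer
assert s1 != identity and s2 != identity
```

In a double-spending scenario the bank obtains both halves from the two spendings and can thus identify the cheating payer, while a single honest spending reveals only one half.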
When the bank receives these prepared money orders, it uses the
cut-and-choose protocol: it opens 40 of the 50 coins and verifies that
the sum is the same, the serial numbers are different and the
identification numbers are valid.
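The cut-and-choose verification can be sketched as follows (a simplified model with an illustrative coin structure; the real protocol also checks the split identification information inside each opened coin):

```python
import secrets

def make_coins(amount: int, count: int = 50) -> list[dict]:
    """Prepare `count` candidate coins of the same amount,
    each with a distinct random serial number."""
    return [{"amount": amount, "serial": secrets.randbits(64)}
            for _ in range(count)]

def cut_and_choose(coins: list[dict], reveal: int = 40) -> bool:
    """Bank opens `reveal` randomly chosen coins and checks that
    the amounts match and the serial numbers are all different."""
    opened = secrets.SystemRandom().sample(coins, reveal)
    amounts_ok = all(c["amount"] == opened[0]["amount"] for c in opened)
    serials_ok = len({c["serial"] for c in opened}) == len(opened)
    return amounts_ok and serials_ok

coins = make_coins(amount=50)
assert cut_and_choose(coins)   # bank accepts and signs an unopened coin
```

Since the customer cannot predict which 40 coins will be opened, cheating on any coin is detected with high probability, so the bank can safely blind-sign one of the remaining unopened coins.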
Signing process using the blind signature – it is done because the
bank must not be able to associate the serial number with the person who
wants to have the coin signed. A blinding factor r is introduced: a random
integer that is multiplied into the coin before the bank signs it. The
person can eliminate it after the signing.
The process takes place as follows: the person sends to the bank
f(a) * r instead of f(a). When the bank signs it, only the person knows
what the value will be after r is eliminated. Any identification trace is
thus eliminated once the coin is spent.
Example: person A has some money, and A wants person B to sign
it using a blind signature. Person B has the public key e, the private
key d and a public modulus n. Person A selects a random number k
between 1 and n. After that, A blinds the value a, computing
t = a*k^e (mod n). Person B signs t with his private key d:
t^d = (a*k^e)^d (mod n). Person A can reveal the money by dividing
the previous result by k.
After this process, person A has the money signed by person B
without person B knowing what he or she signed.
t^d / k = (a*k^e)^d / k (mod n) = (a^d * k) / k (mod n) = a^d (mod n)
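A minimal sketch of this blind-signature exchange, using toy RSA parameters chosen purely for illustration (real systems use full-size keys):

```python
import math
import secrets

# Person B's toy RSA key (the bank).
p, q = 61, 53
n = p * q
e = 17
d = pow(e, -1, (p - 1) * (q - 1))

a = 123  # the coin value to be signed blindly

# Person A: pick a random blinding factor k invertible mod n.
while True:
    k = secrets.randbelow(n - 2) + 2
    if math.gcd(k, n) == 1:
        break

t = (a * pow(k, e, n)) % n        # blinded coin sent to the bank

# Person B (the bank): signs t without learning a.
t_signed = pow(t, d, n)           # t^d = a^d * k (mod n)

# Person A: unblind by dividing by k (multiplying by k^-1 mod n).
signature = (t_signed * pow(k, -1, n)) % n

# Anyone holding the bank's public key can now verify the signature on a.
assert pow(signature, e, n) == a % n
```

The bank only ever sees t, which is uniformly distributed for a random k, so it cannot later link the unblinded signature back to this withdrawal.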
[Figure – the client spends a signed coin (serial number 12345, value
50$, bank signature) and the bank records the spent coin in its database]
Fig. 9.11. One E-Cash coin spending
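The bank's double-spending check pictured in figure 9.11 can be sketched as follows (a simplified model; the real system also verifies the bank's signature on the coin before accepting it):

```python
spent_serials: set[int] = set()   # the bank's database of spent coins

def deposit_coin(serial: int, value: int) -> bool:
    """Accept a coin only if its serial number was never seen before."""
    if serial in spent_serials:
        return False              # double spending detected: reject
    spent_serials.add(serial)
    return True

assert deposit_coin(12345, 50) is True    # first spend is accepted
assert deposit_coin(12345, 50) is False   # same coin again: rejected
```

This database of spent serial numbers is exactly the part that grows without bound, which is the main scalability drawback noted in the conclusions below.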
[Figure – the client's coin signature is checked against the bank's
signature database]
Fig. 9.12. E-Cash secured algorithm
4.5 Conclusions
At present, a wide use of electronic commerce can be ascertained,
especially the use of on-line payment. Among the first on-line
payment systems was the E-Cash system developed by DigiCash.
The system was focused on assuring electronic money anonymity, and the
buyer and the seller had to have an account at the same bank.
The system was not used for a long time, the main disadvantage
being the very large databases needed for the signatures. But the E-Cash
model can be revisited in order to build a more reliable system.
References:
Internet and Grid Computing Security
Module 10 – Internet and Grid
Computing Security
Abstract: Web services are essentially software services that are available
for consumption over the Internet. However, they extend basic Internet
person-to-program interactions, such as individuals accessing programs
on Web servers via browsers, to support program-to-program interactions.
The use of Web services on the World Wide Web is expanding rapidly as
the need for application-to-application communication and interoperability
grows. These services provide a standard means of communication among
different software applications involved in presenting dynamic context-
driven information to the user.
1.1 Introduction
There is a strong trend for companies to integrate existing systems to
implement IT support for business processes that cover the entire
business cycle. Today, interactions already exist using a variety of
schemes that range from very rigid point-to-point electronic data
schemes that range from very rigid point-to-point electronic data
interchange (EDI) interactions to open Web auctions. Many companies
have already made some of their IT systems available to all of their
divisions and departments, or even their customers or partners on the
Web. However, techniques for collaboration vary from one case to another
and are thus proprietary solutions; systems often collaborate without any
vision or architecture.
Thus, there is an increasing demand for technologies that support
the connecting or sharing of resources and data in a very flexible and
standardized manner. Because technologies and implementations vary
across companies and even within divisions or departments, unified
business processes could not be smoothly supported by technology.
Integration has been developed only between units that are already aware
of each other and that use the same static applications. Furthermore,
there is a need to further structure large applications into building blocks
in order to use well-defined components within different business
processes.
In order to promote interoperability and extensibility among
different applications, as well as to allow them to be combined in order to
perform more complex operations, a standard reference architecture is
needed. The Web Services Architecture Working Group at W3C is tasked
with producing this reference architecture.
A shift towards a service-oriented approach will not only
standardize interaction, but also allow for more flexibility in the process.
The complete value chain within a company is divided into small modular
functional units, or services. A service-oriented architecture thus has to
focus on how services are described and organized to support their
dynamic, automated discovery and use.
Companies and their sub-units should be able to provide services
easily. Other business units can use these services in order to
implement their business processes. Ideally, this integration can be
performed at the runtime of the system, not just at design time.
Web Services promise tremendous benefits in terms of productivity,
efficiency, and accuracy. Indeed, corporate IT organizations are only just
beginning to understand the full potential of Web Services. But, while they
offer attractive advantages, Web Services also present daunting
challenges relating to privacy and security. In exposing critical business
functions to the Internet, Web Services can expose valuable corporate
data, applications, and systems to a variety of external threats. These
threats are not imaginary. They range from random acts of Net vandalism
to sophisticated, targeted acts of information theft, fraud, or sabotage.
Either way, the consequences can be catastrophic to the organization.
Web services security is one of the most important Web services subjects.
A Web service is an abstract notion that must be implemented by a
concrete agent. The agent is the concrete piece of software or
hardware that sends and receives messages, while the service is the
resource characterized by the abstract set of functionality that is provided.
There are many things that might be called "Web services" in the
world at large. The W3C Web Services Architecture Working Group uses
the following two definitions:
A Web service is a software system designed to support
interoperable machine-to-machine interaction over a network. It has an
interface described in a machine-processable format (specifically WSDL).
Other systems interact with the Web service in a manner prescribed by its
description using SOAP messages, typically conveyed using HTTP with an
XML serialization in conjunction with other Web-related standards.
A Web service is a software system identified by a URI, whose
public interfaces and bindings are defined and described using XML. Its
definition can be discovered by other software systems. These systems
may then interact with the Web service in a manner prescribed by its
definition, using XML based messages conveyed by Internet protocols.
Web services provide a standard means of interoperating between
different software applications, running on a variety of platforms and/or
frameworks. The Web Services Architecture (WSA), produced by the W3C
Web Services Architecture Working Group, is intended to provide a
common definition of a Web service, and define its place within a larger
Web services framework to guide the community. The WSA provides a
conceptual model and a context for understanding Web services and the
relationships between the components of this model.
The WSA describes both the minimal characteristics that are
common to all Web services, and a number of characteristics that are
needed by many, but not all, Web services.
The Web services architecture is an interoperability architecture: it
identifies the global elements of the Web services network that are
required in order to ensure interoperability between Web services.
The WSA is organized around four models: the Message Oriented Model, the
Service Oriented Model, the Resource Oriented Model, and the Policy Model.
A distributed system consists of diverse, discrete software agents
that must work together to perform some tasks. Furthermore, the
agents in a distributed system do not operate in the same
processing environment, so they must communicate by
hardware/software protocol stacks over a network. This means that
communications with a distributed system are intrinsically less fast and
reliable than those using direct code invocation and shared memory. This
has important architectural implications because distributed systems
require that developers (of infrastructure and applications) consider the
unpredictable latency of remote access, and take into account issues of
concurrency and the possibility of partial failure.
Distributed object systems are distributed systems in which the
semantics of object initialization and method invocation are exposed to
remote systems by means of a proprietary or standardized mechanism to
broker requests across system boundaries, marshal and unmarshal
method argument data, etc. Distributed object systems are typically
characterized by objects maintaining a fairly complex internal state
required to support their methods, by fine-grained interaction between an
object and the program using it, and by a focus on a shared implementation
type system and interface hierarchy between the object and the program
that uses it.
A Service Oriented Architecture (SOA) is a form of distributed
systems architecture that is typically characterized by the following
properties:
Logical view: The service is an abstracted, logical view of actual
programs, databases, business processes, etc., defined in terms of what it
does, typically carrying out a business-level operation.
Message orientation: The service is formally defined in terms of the
messages exchanged between provider agents and requester agents, and
not the properties of the agents themselves.
Description orientation: A service is described by machine-processable
metadata. The description supports the public nature of the
SOA: only those details that are exposed to the public and important for
the use of the service should be included in the description. The semantics
of a service should be documented, either directly or indirectly, by its
description.
Granularity: Services tend to use a small number of operations
with relatively large and complex messages.
Network orientation: Services tend to be oriented toward use over
a network, though this is not an absolute requirement.
Platform neutral: Messages are sent in a platform-neutral,
standardized format delivered through the interfaces. XML is the most
obvious format that meets this constraint.
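The message-orientation and platform-neutrality properties can be illustrated with a small sketch (Python; all operation and element names are invented for the example): the service is defined only by the messages it exchanges, serialized in a platform-neutral XML format, so the two agents share the message but none of each other's internal types.

```python
import xml.etree.ElementTree as ET

def build_request(operation: str, params: dict) -> str:
    """Serialize a service request as platform-neutral XML.
    The operation and parameter names are illustrative only."""
    root = ET.Element(operation)
    for name, value in params.items():
        child = ET.SubElement(root, name)
        child.text = str(value)
    return ET.tostring(root, encoding="unicode")

def parse_request(xml_text: str):
    """The receiving agent sees only the message, never the sender's types."""
    root = ET.fromstring(xml_text)
    return root.tag, {child.tag: child.text for child in root}

msg = build_request("GetQuote", {"symbol": "IBM", "currency": "USD"})
op, params = parse_request(msg)
print(op, params)  # GetQuote {'symbol': 'IBM', 'currency': 'USD'}
```

Any platform able to produce and consume this XML can play either role, which is exactly the point of the platform-neutral constraint.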
A service-oriented architecture consists of three basic components:
service provider, service broker and service requestor, which
perform the operations shown in figure 10.2.
The service requestor locates entries in the broker registry using
various find operations and then binds to the service provider in order to
invoke one of its Web services. One important issue for users of services
is the degree to which services are statically chosen by designers
compared to those dynamically chosen at runtime. Even if most initial
usage is largely static, any dynamic choice opens up the issues of how to
choose the best service provider and how to assess quality of service.
Another issue is how the user of services can assess the risk of exposure
to failures of service suppliers.
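The publish, find and bind operations can be sketched with an in-memory toy broker (Python; the registry structure and names are illustrative and much simpler than a real UDDI registry):

```python
# Toy service broker: providers publish service descriptions, requestors
# find matching entries and "bind" (obtain something invocable).
registry = {}  # service name -> (description, endpoint callable)

def publish(name, description, endpoint):
    """Provider side: register a service with the broker."""
    registry[name] = (description, endpoint)

def find(keyword):
    """Requestor side: locate services whose description matches."""
    return [name for name, (desc, _) in registry.items() if keyword in desc]

def bind(name):
    """Requestor side: obtain the provider's endpoint for invocation."""
    return registry[name][1]

# Provider publishes a service.
publish("ConvertTemp", "temperature conversion service",
        lambda celsius: celsius * 9 / 5 + 32)

# Requestor finds, binds, and invokes.
matches = find("temperature")
service = bind(matches[0])
print(service(100))  # 212.0
```

The static-versus-dynamic question in the text corresponds to whether `find` runs once at design time or on every invocation at runtime.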
Web services technology is an ideal technology choice for
implementing a service oriented architecture:
Web services are standards based. Interoperability is a key
business advantage within the enterprise and is crucial in B2B scenarios.
Web services are widely supported across the industry. For the very first
time, all major vendors are recognizing and providing support for Web
services.
Web services are platform and language agnostic—there is
no bias for or against a particular hardware or software platform. Web
services can be implemented in any programming language or toolset.
This is important because there will be continued industry support for the
development of standards and interoperability between vendor
implementations.
This technology provides a migration path: existing business
functions can be gradually enabled as Web services, as they are needed.
It supports synchronous and asynchronous, RPC-based, and
complex message-oriented exchange patterns.
Today, there is an abundance of Web services standards available,
and it is not always easy to recognize how these standards are grouped
and how they relate to each other. Web service architecture involves
many layered and interrelated technologies. There are many ways to
visualize these technologies, just as there are many ways to build and use
Web services.
Core standards
SOAP (Simple Object Access Protocol, or Service-Oriented
Architecture Protocol) is a network, transport, and programming
language-neutral protocol that allows a client to call a remote service. The
message format is XML.
WSDL (Web Services Description Language) is an XML-based
interface and implementation description language. The service provider
uses a WSDL document in order to specify the operations a Web service
provides, as well as the parameters and data types of these operations. A
WSDL document also contains the service access information.
UDDI (Universal Description, Discovery, and Integration) is both a
client-side API and a SOAP-based server implementation that can be used
to store and retrieve information on service providers and Web services.
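As an illustration of the SOAP message format, the following Python sketch builds a minimal SOAP 1.1 envelope around a hypothetical operation (the operation and element names are invented for the example; a real message would also carry encoding and WSDL-derived namespaces):

```python
import xml.etree.ElementTree as ET

SOAP_NS = "http://schemas.xmlsoap.org/soap/envelope/"  # SOAP 1.1 namespace

def soap_envelope(body_element: ET.Element) -> str:
    """Wrap a payload element in a minimal SOAP 1.1 envelope."""
    ET.register_namespace("soap", SOAP_NS)
    env = ET.Element(f"{{{SOAP_NS}}}Envelope")
    body = ET.SubElement(env, f"{{{SOAP_NS}}}Body")
    body.append(body_element)
    return ET.tostring(env, encoding="unicode")

payload = ET.Element("GetLastTradePrice")   # hypothetical operation
ET.SubElement(payload, "symbol").text = "DIS"
message = soap_envelope(payload)
print(message)
```

The envelope/body structure is what every SOAP intermediary and ultimate receiver agrees on; everything inside the body is application-specific and would be described by the service's WSDL document.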
Description and discovery
The standards and specifications in this category are related to describing
and locating Web services either over the Internet or through means of
local resources.
WS-Inspection (Web Services Inspection Language - WSIL)
describes how to locate Web service descriptions on some server and how
this information needs to be structured. As such, WSIL can be viewed as a
lightweight UDDI.
WS-Discovery (Web Services Dynamic Discovery) defines a
multicast discovery protocol to locate Web services. By default, probes are
sent to a multicast group, and target services that match return a
response directly to the requester. To scale to a large number of
endpoints, the protocol defines the multicast suppression behavior if a
discovery proxy is available in the network. To minimize the need for
polling, target services that want to be discovered send an announcement
when they join and leave the network.
WS-MetadataExchange: Web services use metadata to describe
what other endpoints need to know in order to interact with them. Specifically,
WS-Policy describes the capabilities, requirements, and general
characteristics of Web services; WSDL describes abstract message
operations, concrete network protocols, and endpoint addresses used by
Web services; XML Schema describes the structure and contents of XML-
based messages received and sent by Web services. To bootstrap
communication with a Web service, the WS-MetadataExchange
specification defines three request-response message pairs to retrieve
these three types of metadata.
WS-Policy provides a general purpose model and syntax to
describe and communicate the policies of a Web service. WS-Policy
defines a policy to be a collection of one or more policy assertions. Some
assertions specify traditional requirements and capabilities that will
ultimately manifest on the wire (for example, authentication scheme,
transport protocol selection). Some assertions specify requirements and
capabilities that have no wire manifestation yet are critical to proper
service selection and usage (for example, privacy policy, QoS
characteristics). WS-Policy provides a single policy grammar to allow both
kinds of assertions to be reasoned about in a consistent manner.
A general security framework should address the following
requirements:
Identification — The party accessing the resource is able to identify
itself to the system.
Authentication — Authentication is the process of validating the user:
whether a client is valid in a particular context. A client can be an
end user, a machine, or an application.
Authorization — Authorization is the process of checking whether the
authenticated user has access to the requested resource.
Integrity — Integrity ensures that information is not changed, altered,
or lost in an unauthorized or accidental manner.
Confidentiality — No unauthorized party or process can access or disclose
the information.
Auditing — All transactions are recorded so that problems can be analyzed
after the fact.
Non-repudiation — Both parties are able to provide legal proof to a third
party that the sender did send the information and that the receiver
received the identical information. Neither involved party is able to deny this.
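The first of these requirements can be sketched as ordered checks (Python; the user, resource and policy tables are mock data, and a real system would use a salted password-hashing scheme rather than a bare hash):

```python
import hashlib

# Mock authentication and authorization data (illustrative only).
USERS = {"alice": hashlib.sha256(b"s3cret").hexdigest()}
ACL = {("alice", "report"): {"read"}}
AUDIT_LOG = []

def access(user, password, resource, action):
    """Identification/authentication, then authorization, with auditing."""
    # Identification + authentication: validate the claimed identity.
    if USERS.get(user) != hashlib.sha256(password.encode()).hexdigest():
        AUDIT_LOG.append((user, resource, action, "auth-failed"))
        return False
    # Authorization: is this action permitted on this resource?
    if action not in ACL.get((user, resource), set()):
        AUDIT_LOG.append((user, resource, action, "denied"))
        return False
    AUDIT_LOG.append((user, resource, action, "granted"))  # auditing
    return True

print(access("alice", "s3cret", "report", "read"))   # True
print(access("alice", "s3cret", "report", "write"))  # False
```

Integrity, confidentiality and non-repudiation require cryptographic mechanisms (signatures, encryption) rather than table lookups, which is why the later standards in this module exist.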
Fig. 10.4. Main building blocks of a security framework
WS-Security provides an end-to-end security framework that supports
intermediary security processing. Message integrity is provided by using XML Signature in
conjunction with security tokens to ensure that messages are transmitted
without modifications. The integrity mechanisms can support multiple
signatures, possibly by multiple actors. The techniques are extensible
such that they can support additional signature formats. Message
confidentiality is granted by using XML Encryption in conjunction with
security tokens to keep portions of SOAP messages confidential. The
encryption mechanisms can support operations by multiple actors.
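The principle of token-based message integrity can be sketched with a keyed MAC (Python). This is only an illustration of the idea: real WS-Security uses XML Signature with canonicalization and security tokens, not a MAC over raw text.

```python
import hmac, hashlib

def sign(body: str, key: bytes) -> str:
    """Compute a keyed MAC over the message body (toy integrity token)."""
    return hmac.new(key, body.encode(), hashlib.sha256).hexdigest()

def verify(body: str, signature: str, key: bytes) -> bool:
    """Recompute and compare in constant time; any alteration is detected."""
    return hmac.compare_digest(sign(body, key), signature)

key = b"shared-token-key"     # stands in for a security token (assumption)
body = "<transfer><amount>100</amount></transfer>"
sig = sign(body, key)

print(verify(body, sig, key))                          # True: unmodified
print(verify(body.replace("100", "999"), sig, key))    # False: altered
```

Because the check travels with the message rather than with the connection, it survives passage through intermediaries, which is the end-to-end property the text describes.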
Web services implementations may require point-to-point and/or
end-to-end security mechanisms, depending upon the degree of threat or
risk. Traditional, connection-oriented, point-to-point security mechanisms
may not meet the end-to-end security requirements of Web services.
However, security is a balance of assessed risk and the cost of
countermeasures. Depending on the implementer's risk tolerance,
point-to-point transport-level security can provide sufficient
countermeasures.
Security policies
A security policy determines what evidence must be offered by which
agents before access is permitted.
A permission guard acts as a guard enabling or disabling access to
a resource or action. In the context of SOAP, for example, one important
role of SOAP intermediaries is that of permission guards: the intermediary
may not, in fact, forward a message if some security policy is violated.
Not all guards are active processes. For example, confidentiality of
information is encouraged by encryption of messages. As noted above, it
is potentially necessary to encrypt not only the content of SOAP messages
but also the identities of the sender and receiver agents. The guard here
is the encryption itself; although this may be further backed up by other
active guards that apply policy.
Message-level security is an approach that enables complex interactions
that can include the routing of messages between and across various trust
domains.
Web services face traditional security challenges. A message might
travel between various intermediaries before it reaches its destination.
Therefore, message-level security is important as opposed to
point-to-point transport-level security. In Figure 10.6, the requester agent is
communicating with the ultimate receiver through the use of one or more
intermediaries. The security context of the SOAP message is end-to-end.
However, there may be a need for the intermediary to have access to
some of the information in the message. This is illustrated as a security
context between the intermediary and the original requester agent, and
the intermediary and the ultimate receiver.
The major risk factors for message security are: message alteration,
loss of confidentiality, man-in-the-middle attacks, spoofing, denial of
service, and replay attacks.
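The standard countermeasure against the last of these, replay attacks, can be sketched as follows (Python; a production system would also bound the nonce cache with timestamps, as the WS-Security username token does with its nonce/created pair):

```python
# Replay protection sketch: each message carries a unique nonce;
# the receiver rejects any nonce it has already seen.
seen_nonces = set()

def accept(message: dict) -> bool:
    nonce = message["nonce"]
    if nonce in seen_nonces:
        return False          # replayed message: reject
    seen_nonces.add(nonce)
    return True               # first delivery: accept

m = {"nonce": "a1b2c3", "body": "debit 100"}
print(accept(m))   # True: first delivery
print(accept(m))   # False: replay detected
```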
The standards in the security category deal with the security enablement
of Web services.
XML-Encryption specifies a process for encrypting data and
representing the result in XML. The data can be arbitrary data
(including an XML document), an XML element, or an XML
element content. The result of encrypting data is an XML
encryption element that contains or references the cipher data.
XML-Signature: specifies a process for digitally signing data
and representing the signature in XML. The data can be
arbitrary data (including an XML document), an XML element,
or XML element content. The result is an XML Signature
element that contains or references the signed data together
with the signature value.
WS-Security: describes extensions to SOAP that allow for
quality of protection of SOAP messages. This includes, but is
not limited to, message authentication, message integrity, and
message confidentiality. The specified mechanisms can be used
to accommodate a wide variety of security models and
encryption technologies. It also provides a general-purpose
mechanism for associating security tokens with message
content.
WS-Secure Conversation Language: is built on top of the WS-
Security and WS-Policy/WS-Trust models to provide secure
communication between services. WS-Security focuses on the
message authentication model but not a security context, and
thus is subject to several forms of security attacks. This
specification defines mechanisms for establishing and sharing
security contexts, and deriving keys from security contexts, to
enable a secure conversation.
WS-Security Policy Language: defines a model and syntax to
describe and communicate security policy assertions within the
larger policy framework. It covers assertions for security tokens,
data integrity, confidentiality, visibility, security headers, and
the age of a message.
WS-Trust Language: uses the secure messaging mechanisms of
WS-Security to define additional primitives and extensions for
security token exchange to enable the issuance and
dissemination of credentials within different trust domains.
WS-Federation: defines mechanisms that are used to enable
identity, account, attribute, authentication, and authorization
federation across different trust realms.
SAML (Security Assertion Markup Language): is a suite of
specifications that define interoperability between different
security domains. This is a natural requirement for Web
services single sign-on, or distributed transactions.
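The general shape of a secured SOAP message, with a security header carrying a username token, can be sketched as follows (Python; the element names follow the WS-Security vocabulary, but namespaces and mandatory attributes are omitted for brevity, so this is not a conformant message):

```python
import xml.etree.ElementTree as ET

# Skeleton of a SOAP message with a WS-Security-style header.
env = ET.Element("Envelope")
header = ET.SubElement(env, "Header")
security = ET.SubElement(header, "Security")        # security header block
token = ET.SubElement(security, "UsernameToken")    # a security token
ET.SubElement(token, "Username").text = "alice"
ET.SubElement(token, "Nonce").text = "a1b2c3"       # replay protection
body = ET.SubElement(env, "Body")
ET.SubElement(body, "GetQuote").text = "IBM"        # hypothetical operation

print(ET.tostring(env, encoding="unicode"))
```

The key design point is that the credentials live in the message header, not in the transport connection, so each intermediary can inspect or add its own security blocks while the body travels end-to-end.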
Performance – Security mechanisms and functions also impact
the application's response time. When defining the
response-time requirements of the Web service system, keep in
mind that the response time will be affected when security is applied.
.NET Web applications typically implement one or more of the
logical services by using the following technologies:
Figure 10.7 illustrates the security framework provided by .NET Web
applications.
1.8 Conclusions
Networks must be designed to provide a high level of security for
information that travels across the Internet or across privately managed
intranets and extranets. Algorithms such as third-party
authentication, public-key encryption, and digital signatures can provide a
sufficient level of security. However, security does not depend only on
algorithms, standards, and products: companies are also required to follow
security best-practice recommendations.
Module 11 – Databases,
Datawarehouses and Datamining
Security
Abstract: This paper briefly presents the basic notions and
concepts used in the database field. The paper is structured in two main
parts. The first part explains the main issues involved, especially in
relational databases. The second part is a brief synthesis of database
security and of the architectures used to ensure a reasonable security level.
1.1 Introduction
We can assert that a database is used to store data in a more
optimal way than doing the same thing using files. The indexing
mode of the files, the access privileges, the language for
querying the data in the physical files, and the language for describing the
data in the files are the responsibility of the database software.
A database has one or more definitions; its most important
features are:
One or more collections of interdependent data, together
with the description of the data and of the relationships between them.
A collection of data.
Rules describing how the data are related.
A collection of data stored on external addressable memories,
used by a large number of users.
A program used to interact with a database in order to create tables
and views and to maintain database objects is called a DBMS (Database
Management System). A DBMS is, in essence, a software system that
facilitates the creation, maintenance and use of an electronic database.
The database software proper ensures the completion
of the following activities:
defining the structure of the database
loading data into the database
access to data (examination, update)
maintaining the database (collecting and reusing the blank
spaces, restoring the database in case of an incident)
reorganizing the database (restructuring and changing the
access strategy)
data security.
Database Integrity
o Damage to the entire database.
o Damage to individual database items.
Element Integrity
o The DBMS maintains element integrity in three ways:
o Field checks
o Access control
o Change log
Auditability
o An audit trail is desirable in order to:
o Determine who did what.
o Prevent incremental access.
o An audit trail of all accesses is impractical:
o Slow
o Large
o Possible over-reporting.
o Pass-through problem – a field may be accessed during a select
operation although its values are never reported to the user.
Access Control
o Recall: the DBMS enforces the DBA's policy.
o Operating systems vs. databases
o Access control for operating systems:
o Deals with unrelated data.
o Deals with entire files.
o Access control for databases:
o Deals with records and fields.
o Is concerned with the inference of one field from another.
o An access control list for several hundred files is easier to
implement than an access control list for a database!
Authentication and Availability
o User authentication
o The DBMS runs on top of the operating system.
o There is no trusted path.
o It must be suspicious of the information received (the principle
of mutual suspicion).
o Availability
o Arbitration of two users' requests for the same record.
o Withholding some non-protected data to avoid
revealing protected data.
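The contrast between file-level (operating system) and field-level (database) access control can be sketched as follows (Python; the policy table, user names and field names are illustrative):

```python
# Field-level access control: the policy names individual fields of a
# record, not whole files as an operating system would.
policy = {
    "clerk":   {"name", "department"},
    "manager": {"name", "department", "salary"},
}

def select(user, record: dict) -> dict:
    """Return only the fields this user is allowed to see."""
    allowed = policy.get(user, set())
    return {field: value for field, value in record.items() if field in allowed}

row = {"name": "Popescu", "department": "IT", "salary": 4200}
print(select("clerk", row))    # {'name': 'Popescu', 'department': 'IT'}
print(select("manager", row))  # full record, including salary
```

Note that this sketch does not address the inference problem mentioned above: a field withheld here may still be deducible from the fields that are returned, which is what makes database access control harder than file access control.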
1.2 Database integrity
Database integrity concerns whether the database as a whole is
protected from damage. Element integrity concerns whether the
value of a specific element is written or changed only by the actions of
authorized users. Element accuracy concerns whether only correct
values are written into the elements of a database.
Two-Phase Update
If the system fails during the second phase, the database may contain
incomplete data, but this can be repaired by performing again all the
activities of the second phase.
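A minimal sketch of the two-phase update (Python; the in-memory intent log stands in for a durable log on stable storage):

```python
# Two-phase update: phase one records the intended changes without
# touching the database; phase two applies them. Because phase two is
# idempotent, replaying it after a crash repairs an incomplete update.
db = {"A": 100, "B": 50}
intent_log = []

def phase_one(changes: dict):
    """Record all intended writes durably before touching the database."""
    intent_log.clear()
    intent_log.extend(changes.items())

def phase_two():
    """Apply the logged writes; safe to run again after a failure."""
    for key, value in intent_log:
        db[key] = value

phase_one({"A": 70, "B": 80})
phase_two()
# A crash in the middle of phase two would leave db partially updated;
# simply running phase_two() again completes the update (same final state).
phase_two()
print(db)  # {'A': 70, 'B': 80}
```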
Several database integrity assurance tools are briefly described below:
Error Detection and Correction Code
o Parity checks.
o Cyclic redundancy checks (CRC).
o Hamming codes.
Shadow Fields
o Copy of entire attributes or records.
o Second copy can provide replacement.
Recovery
o Backup
o Change log
Concurrency/Consistency
o Simultaneous read is not a problem.
o Modification requires one to be locked out.
o Query-update cycle treated as a single uninterrupted
operation.
Monitors
o Range Comparison
o Tests each new value to ensure value is within
acceptable range.
o Can be used to ensure internal consistency of database.
State Constraints
o Describes the condition of the entire database.
Transition Constraints
o Describes conditions necessary before changes can be
applied to database.
Sensitive data
o Data that should not be made public (private data).
Factors that make data sensitive:
o Inherently sensitive.
o From a sensitive source.
o Declared sensitive.
o Of a sensitive attribute or record.
o Sensitive in relation to previously disclosed data.
Data warehouse
o A collection of databases, data tables, and mechanisms
to access the data on a single subject.
Data mining
o Selection of knowledge from data.
o Overuse of data in order to deduce invalid inferences.
o Discovery of concise information about the data.
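The error-detection codes listed above can be sketched with a CRC (Python; note that a CRC detects accidental corruption only and offers no protection against a deliberate attacker, who could simply recompute it):

```python
import zlib

def store(value: str):
    """Store a record together with its CRC-32 check value."""
    return (value, zlib.crc32(value.encode()))

def is_intact(record) -> bool:
    """Recompute the CRC on read and compare to detect corruption."""
    value, crc = record
    return zlib.crc32(value.encode()) == crc

rec = store("Popescu,IT,4200")
print(is_intact(rec))                        # True: record unchanged
print(is_intact(("Popescu,IT,9999", rec[1])))  # False: corruption detected
```

Parity bits and Hamming codes follow the same store-then-recheck pattern; Hamming codes additionally allow single-bit errors to be corrected, not just detected.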
The task of ensuring database security is divided between the
database administration system and the operating system through
methods called security techniques. Each of these can fully ensure a
part of the security functions, or they can be complementary in
achieving them.
The tasks for ensuring database security are exemplified as follows:

Function                        Task
Identification, Authentication  Operating system (OS) or, in some cases, the DBAS too
Authorization, Access Control   DBAS (security use modules)
Integrity, Consistency          DBAS (transaction administration model)
Audit                           Operating system (OS) or, in some cases, the DBAS too
of the same attribute. This implies the implementation of the
security for each element.
2. Several security perimeters are necessary; they represent
access areas to certain data, and these areas can sometimes
overlap.
3. The security of the whole can differ from the security of a
single element; it can be higher or lower.
Database partition
Encryption
Integrity blocking
The classification must be as follows:
unforgeable – a malevolent user must not be able to create a
new sensitivity label for an element;
unique – a malevolent user must not be able to copy a
sensitivity level from another element;
secret – a malevolent user must not be able to determine the
sensitivity of an arbitrary object.
Sensitivity blocking
We must not allow the discovery of two elements which have the
same security level just by searching in the security section (area) of
integrity blocking. As a result of the encryption, the content of the blocking,
especially the security level, is hidden.
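The three label properties can be approximated with a keyed MAC computed by the trusted DBMS (Python sketch; the key and the identifiers are illustrative). Binding the element identifier, its value and its level into the MAC makes a label unforgeable without the key, unusable on any other element, and ensures that equal levels do not produce equal labels:

```python
import hmac, hashlib

KEY = b"dbms-internal-key"  # known only to the trusted DBMS (assumption)

def label(element_id: str, value: str, level: str) -> str:
    """Sensitivity label: keyed MAC over (element, value, level)."""
    data = f"{element_id}|{value}|{level}".encode()
    return hmac.new(KEY, data, hashlib.sha256).hexdigest()

l1 = label("emp1.salary", "4200", "confidential")
l2 = label("emp2.salary", "3100", "confidential")
print(l1 != l2)   # True: same level, different labels (no equality leak)
print(label("emp1.salary", "4200", "confidential") == l1)  # True: verifiable
```

Copying `l1` onto another element fails verification because the element identifier is part of the MAC input, which is exactly the uniqueness requirement above.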
the front-end mechanism will format the data for the user;
the data are transmitted towards the user.
Switch filter
The switch filter interacts both with the user and with the database
administration system (DBAS).
The switch filter reformulates the requests as follows:
the DBAS performs as many tasks as possible, rejecting as
many unacceptable requests (those which would disclose
sensitive data) as possible;
only the data to which the user may have access are selected.
The switch filter can be used for records as well as for attributes or
elements. At the record level, the filter requests the wanted data and
the cryptographic checksum; if these confirm the accuracy and the
accessibility of the data, the data can be transmitted to the user.
At the attribute level, the filter verifies whether all the attributes
in the user's request are accessible to him and, if so, it transmits the
request to the database manager. On return, it deletes all the items to
which the user may not have access.
At the element level, the system requests the asked-for data and the
cryptographic checksum. When these are returned, it verifies, for each
element, that it belongs to an accessible security level.
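The attribute-level behaviour of the switch filter can be sketched as follows (Python; the rights table and the DBAS query are mocked for the example):

```python
# Attribute-level switch filter: requests whose attributes are all
# accessible pass through to the DBAS; otherwise the request is rejected
# before the DBAS is ever reached.
ACCESSIBLE = {"user1": {"name", "department"}}  # illustrative rights table

def switch_filter(user, requested_attrs, dbas_query):
    if not set(requested_attrs) <= ACCESSIBLE.get(user, set()):
        return None                  # rejected up front
    result = dbas_query(requested_attrs)
    # On return, keep only the requested (and permitted) attributes.
    return [{a: row[a] for a in requested_attrs} for row in result]

table = [{"name": "Popescu", "department": "IT", "salary": 4200}]

def query(attrs):
    """Mock DBAS: returns full rows regardless of the request."""
    return table

print(switch_filter("user1", ["name"], query))    # [{'name': 'Popescu'}]
print(switch_filter("user1", ["salary"], query))  # None: rejected
```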
Views
Advantages
Request simplification: multiple queries applied to several databases
can be applied to a single view.
Structural simplicity: views can give the user a personal vision of the
databases, making the results easier to interpret.
Security: data access restriction for users.
Disadvantages
Performance: queries on views must be changed into queries on the base
tables, and this is reflected in performance.
Restrictions on updates: many views are read-only; in this case updates
are not possible.
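The security use of views can be sketched with SQLite from the Python standard library (the table and column names are illustrative): restricted users query a view that exposes only the non-sensitive columns, and since such a view is read-only the update restriction noted above also holds.

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE employee (name TEXT, department TEXT, salary INT)")
con.execute("INSERT INTO employee VALUES ('Popescu', 'IT', 4200)")

# The view hides the sensitive salary column from restricted users.
con.execute("CREATE VIEW employee_public AS "
            "SELECT name, department FROM employee")

rows = con.execute("SELECT * FROM employee_public").fetchall()
print(rows)  # [('Popescu', 'IT')]
```

In a real deployment the restricted account would be granted privileges only on the view, never on the base table, so the salary column is unreachable through any query the user can issue.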
1.4 Architectures for ensuring database security
Both the users with full rights (superusers) and the users with data
access restrictions benefit from the security ensured by the DBAS and the OS.
Architectures with unsafe subjects assume that the OS ensures a high
level of protection and the DBAS a medium level. In this case the
front-end security mechanisms must ensure high security.
We can notice that the front-end security mechanism has a
strong encryption unit, which ensures access to data both for the user
with full rights and for the user who has access restrictions, but each
of them only to the data for which they have access rights.
In the case of the integrated architecture, access is allowed through
two different types of DBAS. The operating system, which is considered
safe, plays an important part in ensuring access to data which are not
arranged on a disk.
From the point of view of data access we can notice a similarity
with the replicated/distributed architecture.
The replicated/distributed architecture uses a safe replication mechanism
which directs the requests. The user with full rights (the superuser) will have
access, with the help of this mechanism, to the data of the user with
restrictions too (if this is mentioned in the access rights).
Module 12 – Multiagent Security
Systems
The mobile devices that can be used for m-applications and wireless
applications include mobile phones, smart cards, PDAs (Personal Digital
Assistants), etc. An m-application runs in an infrastructure
formed by mobile devices, standards and communication protocols, and
processes. The most used devices in m-applications, which are also wireless,
are mobile phones.
The standards and protocols that are used, or will be used, in
wireless mobile phones are: GSM – Global System for Mobile
Communications, GPRS – General Packet Radio Service, PDC – Personal
Digital Cellular, CDMA – Code Division Multiple Access, W-CDMA –
Wideband CDMA, EDGE – Enhanced Data rates for Global Evolution, and
UMTS – Universal Mobile Telecommunications System.
The basic concept for wireless and mobile applications based on
mobile agents is to use the Internet and mobile networks as support for
data and voice transmission. The vendors in the mobile phone industry
offer the possibility to develop reliable m-applications for new mobile
devices (2G-3G), whether the devices use GSM (CSD, GPRS, EDGE, UMTS),
CDMA or W-CDMA. At this time there are three principal methods to develop
mobile-wireless applications:
WAP m-applications (using the phone's WAP browser – an
example of a WAP m-application is presented in e3com
[IVAN01a]). The phone has a client WAP browser that
parses and interprets information from markup
languages such as WML, XHTML or HDML, in the same way that
Netscape Communicator or Internet Explorer "understand"
HTML. The capacity for m-computing (running complex
algorithms such as cryptographic ones) is very small. Some
WAP browsers support WMLScript (something like JavaScript for
HTML browsers – there are functions for processing strings,
arrays, and even some simple cryptographic functions,
using WTLS from the WAP protocol stack), but this is
not sufficient for reasonable security.
Some phones implement a KVM ("the KVM's big brother is the JVM"),
so the procedure for building programs (MIDP or PJAE
m-applications) is much like building programs for PCs. The
application eBroker was developed in this way [IVAN03a].
The m-applications run on the device's microprocessor and
use a special area of the device's memory. Sun Microsystems
provides J2ME APIs for developing MIDP m-applications
(midlets – for Motorola, Nokia 7650, Nokia 6060, Siemens
SL 45i, etc.) and PJAE m-applications (for the Nokia
Communicator 9300 and 9210 – using the Java Standard JDK
1.1 API). Some companies, such as Nokia, offer special Java APIs
for SMS, UI (user interface), camera and HTTP connections,
so an m-application can communicate with a server-side
script (JSP, PHP, Perl or ASP), a CGI program, a Java
servlet, or an EJB component for distributed m-computing.
Moreover, the computational power of mobile devices is
increasing with this kind of technology. In this way it is
possible to create end-to-end security, very useful in
e-commerce. In m-application developer communities, MIDP 1.0
is used for Java-enabled mobile phones, and MIDP 2.0 is
expected (Nokia 7700).
For mobile phones and devices that have their own operating
system (Symbian OS), the client side of the m-applications is
built using SDKs provided by the device manufacturers. For
devices that run Microsoft Windows CE and implement the
Microsoft .NET Compact Framework, the m-applications are
written in C# .NET. The m-applications that run on Symbian
use APIs from special SDKs for Java, C++ or OPL.
For devices in CDMA networks (Kyocera, some Nokia phones;
SymbianOS 7 has its own CDMA module), m-applications are
developed using the BREW technology. SymbianOS 7.0 is implemented by
the Sony Ericsson P800/P802, the Nokia 7700 and the Nokia N-Gage.
SymbianOS 7.0 provides very strong C++ APIs for cryptography and
authentication of voice and data, SMS, TCP/IP networking, WAP,
Bluetooth and infrared communications. Once a SymbianOS device
uses data services from the mobile network, it becomes an ordinary
Internet host, because it receives an IP address.
Mobile software agents are a new concept used in distributed
systems; the concept is based on the idea of human agents, such
as real estate agents or travel agents.
Fig. 12.1. Concepts from computer science that cooperate in the
development of the agent concept: distributed systems, information
retrieval, data structures and mobile code, database and knowledge
base technology, security (cryptographic) technology, machine
learning, AI and cognitive science. Evolution: structured
programming (1975) -> objects (1982) -> agents (1998).
systems, since multiple cooperating agents can be used to solve very
formidable problems. Figure 12.1 shows the concepts from
computer science which cooperate in the development of the agent concept.
Some examples of agent applications are: a) user-interface agents
(Microsoft Office Assistant, Microsoft Agents); b) personal (expert)
assistants, such as calendar managers and investment assistants; c) e-
commerce agents, such as travel and shopping agents; d) network
management agents; e) business process agents, for data-driven workflow
management; f) information management agents, for e-mail filtering,
web browsing, notification and resource discovery.
An agent is a computer system, situated in some environment,
that is capable of flexible autonomous action in order to meet its
design objectives (Jennings, Sycara and Wooldridge, 1998).
Summarizing its characteristics, an agent is:
Autonomous: proactive, goal-directed, long-lived;
Adaptive: it adapts to its environment and users, and learns
from its users, from other agents and from its own experience;
Cooperative: it cooperates with human agents and other
software agents, uses various agent communication
languages, advertises its capabilities and understands the
capabilities of other agents.
Agent typology in the vision of Hyacinth Nwana, 1996:
collaborative agents, interface agents, autonomous agents.
unstructured information available on the networks; and provide a new,
more powerful methodology to develop complex software systems.
In real heterogeneous distributed applications, two types of
software agents are used:
stationary agents, which execute only on the system where they
begin execution; if such an agent needs information from another
system, or needs to interact with an agent on another system, it
uses a client-server communication mechanism such as socket
programming, RPC, RMI, DCOM or CORBA;
mobile agents, which are not bound to the system where they begin
execution, can move from one system to another within the
network, and transport both their state and their code with them.
In agent theory, strong mobility means the migration of the
agent's code, data and execution state, while weak mobility means
the migration of only the agent's code and data. Strong mobility is
obviously difficult to accomplish: first, if the agent code is
interpreted, access to the execution state is difficult to obtain;
second, if the agent code is compiled before execution, the
execution state is represented by the program stack.
Transporting the stack and rebuilding it on a different host, which
may have an entirely different architecture, is not an easy task.
To fix the dedicated vocabulary of mobile agent technology, the
most used terms and their meanings are listed below [BOBT00]:
Agent Mobility - the ability to transport agents between
computers;
Agent Naming - the ability to assign globally unique names to
agents to distinguish one agent from another;
Agent Authentication - the ability to authenticate the identity of
the owner (authority) of an agent;
Agent Permissions - the ability to assign permissions to agents
that restrict access to data and unintended consumption of
computer resources. Selected agents may have the ability to
grant permissions to other agents or re-negotiate their own set
of permissions;
Agent Collaboration - the ability to request and respond to
requests for establishing a meeting with another agent. Agents
should also have the ability to begin and end meetings with
other agents and enforce rules for the meetings;
Agent Creation - the ability for agents to create other agents
locally and remotely. New agents may have the authority of the
existing agent and either the same permissions or a subset of
them;
Agent Life Cycle - the ability to control the life-span of agents
by age and resource consumption;
Agent Termination - the ability to terminate agents gracefully,
thereby allowing them to notify other agents they are
collaborating with;
Agent Staging - the ability to write to disk agents that must
wait for long periods of time for events to occur;
Agent Persistence - the ability to checkpoint agents to disk so
that they survive crashes on their host computers;
Agent Interaction - the ability for related agents to interact.
The means of interaction might depend upon whether the
agents occupy the same or different computers;
Agent Management - the ability to manage a collection of
agents in the system;
Agent Tracking - the ability to track and locate agents that
have migrated to other computers;
Agent Debugging - the ability to monitor and log agent activities
and exceptions.
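A few of the abilities above (naming, mobility, life cycle) can be sketched as a small Java interface with a toy implementation; all names here are illustrative assumptions, not part of any real agent platform API.

```java
public class AgentSketch {

    interface MobileAgent {
        String globalName();             // Agent Naming: globally unique identifier
        void moveTo(String host);        // Agent Mobility: migrate to another computer
        boolean expired(long nowMillis); // Agent Life Cycle: life-span bounded by age
    }

    static class SimpleAgent implements MobileAgent {
        private final String name;
        private final long bornMillis;
        private final long maxAgeMillis;
        private String currentHost;

        SimpleAgent(String name, String homeHost, long bornMillis, long maxAgeMillis) {
            this.name = name;
            this.currentHost = homeHost;
            this.bornMillis = bornMillis;
            this.maxAgeMillis = maxAgeMillis;
        }

        public String globalName() { return name; }

        // Stand-in for real migration: an actual platform would also
        // transfer the agent's code and data to the new host.
        public void moveTo(String host) { currentHost = host; }

        public boolean expired(long nowMillis) {
            return nowMillis - bornMillis > maxAgeMillis;
        }

        String host() { return currentHost; }
    }

    public static void main(String[] args) {
        SimpleAgent a = new SimpleAgent("agent://home-host/42", "home-host", 0, 1_000);
        a.moveTo("broker-host");  // Agent Mobility
        System.out.println(a.globalName() + " on " + a.host()
                + ", expired=" + a.expired(2_000));  // past its life-span bound
    }
}
```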
According to Danny Lange, there are seven good reasons to use mobile
agents: they reduce network load, overcome network latency, encapsulate
protocols, execute asynchronously and autonomously, adapt dynamically,
are naturally heterogeneous, and are robust and fault-tolerant. An
important issue is that there is still no killer application for mobile agents.
It is also important to secure mobile agents, using new cryptographic
techniques, so that they can perform their tasks. The security issues
in mobile agent systems are:
Masquerading
o Agent poses as another agent to gain access to services or
data at a host.
o Host assumes false identity in order to lure agents.
Denial of Service
o Agents may attempt to consume or corrupt a host's
resources to prevent other agents from accessing the
host's services.
o Hosts can ignore an agent’s request for services or access
to resources.
Unauthorized Access
o Agents can obtain access to sensitive data by exploiting
security weaknesses.
o Agent interferes with another agent to gain access to data.
Eavesdropping
o With agents that are interpreted, the host can inspect their
internal algorithms and data, such as the maximum price
the agent’s owner is willing to pay for item X.
Alteration
o Hosts can change an agent’s internal data or results from
previous processing to influence the agent.
Repudiation
o After agreeing to some contract, an agent can subsequently
deny that any agreement ever existed or modify the
conditions of the contract.
these messages will be HTTP requests. If the mobile phone implements
MIDP 2.0, has SymbianOS or runs applications in a Java smart card, it
is quite possible to call remote procedures using socket
programming (ensuring the migration of computation) and also to
send and receive messages (ensuring data migration).
The privacy and security of wireless and mobile applications are very
important, but design and construction differ between two
approaches: using a defined security protocol, or
using mobile agents that encapsulate the protocol. For a better
understanding, this part of the paper studies the mobile application
eBroker under both approaches. At this moment the m-application is at
version 1 [IVAN03a], eBroker v1.0. The m-application is intended for the
financial field and is composed of several modules. The entire
architecture can be seen in figure 12.4. The application performs
transactions with capital stocks depending on the stock quotas. In this
version, communication protocols between the application's modules are
used instead of mobile agents. The protocols use the XML format
(eXtensible Markup Language), validated using a DTD (Document Type
Definition) or an XML Schema; the communications between the
application's modules deployed in banks, the application server and the
stock markets are all in XML format.
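The XML messages exchanged between modules can be processed with the standard Java XML APIs. The message layout below (`<order>`, `<symbol>`, `<quantity>`) is an invented illustration, not the actual eBroker message format.

```java
import java.io.ByteArrayInputStream;
import java.nio.charset.StandardCharsets;
import javax.xml.parsers.DocumentBuilderFactory;
import org.w3c.dom.Document;

public class XmlMessageSketch {

    // Extract the text content of the first element with the given tag.
    static String field(String xml, String tag) {
        try {
            Document doc = DocumentBuilderFactory.newInstance()
                    .newDocumentBuilder()
                    .parse(new ByteArrayInputStream(xml.getBytes(StandardCharsets.UTF_8)));
            return doc.getElementsByTagName(tag).item(0).getTextContent();
        } catch (Exception e) {
            throw new IllegalArgumentException("malformed message", e);
        }
    }

    public static void main(String[] args) {
        String msg = "<?xml version=\"1.0\"?>"
                + "<order><symbol>SNP</symbol><quantity>100</quantity></order>";
        System.out.println(field(msg, "symbol") + " x " + field(msg, "quantity"));
        // prints SNP x 100
    }
}
```

In a full deployment the parser would be configured to validate against the DTD or XML Schema mentioned above before any field is trusted.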
There are a few conditions that have to be fulfilled by the
architecture in figure 12.4:
The mobile device must have a WAP connection and a WAP browser,
and must implement MIDP 1.0 or better.
The link between the mobile network and the local network (LAN) or
the Internet passes through a WAP Gateway (a program that
"translates" data packets between the TCP/IP protocol stack
and the WAP protocol stack).
In front of the WAP Gateway there is a GSM modem, linked to the
gateway through a serial or parallel port; on the same computer
as the WAP Gateway runs a RAS (Remote Access Server)
that takes the phone calls for WAP connections.
Through the RAS, the WAP data packets arrive at the WAP Gateway,
where they are transformed into TCP/IP data packets.
Fig. 12.4. eBroker architecture: mobile devices connect through the
WAP Gateway to the application server (eBroker Java servlet, CORBA,
JSP, ASP, PHP, .NET Web Services), which communicates over the
Internet or LAN with bank servers (databases, data warehouses),
distributed computing resources and software agents, malicious or not.
Fig. 12.5. Security protocol used by eBroker between midlet and server.
Step 1, authentication and session key establishment: the server
generates a secret key Ks and sends it, together with the server name,
protected under the key Ka. Step 2, confidentiality: the message M2
travels as ciphertext C2 under the session key Ks.
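The two steps of the protocol in figure 12.5 can be sketched in Java, assuming Ka is a symmetric AES key already shared between midlet and server; the figure does not specify the algorithms, so AES (here in its short default ECB-with-padding form, where a real deployment would prefer an authenticated mode) is an assumption.

```java
import java.nio.charset.StandardCharsets;
import java.security.GeneralSecurityException;
import javax.crypto.Cipher;
import javax.crypto.KeyGenerator;
import javax.crypto.SecretKey;
import javax.crypto.spec.SecretKeySpec;

public class SessionKeySketch {

    static SecretKey newKey() {
        try {
            KeyGenerator kg = KeyGenerator.getInstance("AES");
            kg.init(128);
            return kg.generateKey();
        } catch (GeneralSecurityException e) {
            throw new IllegalStateException(e);
        }
    }

    static byte[] encrypt(SecretKey key, byte[] data) {
        return crypt(Cipher.ENCRYPT_MODE, key, data);
    }

    static byte[] decrypt(SecretKey key, byte[] data) {
        return crypt(Cipher.DECRYPT_MODE, key, data);
    }

    private static byte[] crypt(int mode, SecretKey key, byte[] data) {
        try {
            Cipher c = Cipher.getInstance("AES");
            c.init(mode, key);
            return c.doFinal(data);
        } catch (GeneralSecurityException e) {
            throw new IllegalStateException(e);
        }
    }

    public static void main(String[] args) {
        SecretKey ka = newKey();  // pre-shared key Ka

        // STEP 1: the server generates the session key Ks and sends it
        // to the midlet protected under Ka.
        SecretKey ks = newKey();
        byte[] wrappedKs = encrypt(ka, ks.getEncoded());

        // The midlet recovers Ks using Ka.
        SecretKey ksClient = new SecretKeySpec(decrypt(ka, wrappedKs), "AES");

        // STEP 2: the midlet sends the message M2 as ciphertext C2 under Ks.
        byte[] c2 = encrypt(ksClient, "order M2".getBytes(StandardCharsets.UTF_8));
        byte[] m2 = decrypt(ks, c2);  // the server recovers M2
        System.out.println(new String(m2, StandardCharsets.UTF_8));
    }
}
```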
1.4 Conclusions
Wireless technologies can face inherent technical vulnerabilities
when improperly deployed or administered. However, these
technologies do not represent a specific threat to privacy any
more than other technologies do, and have not yet been specifically
addressed in existing legislation and regulation. Privacy represents a
higher-level information flow discipline that extends to the roles,
definitions, collection and disclosure of specifically defined, often
industry-specific, personal information. Wireless networks will simply
serve as another avenue of access to this information. Understanding
the information flow across these infrastructures, combined with
established restrictions on access, transmission and dissemination,
should drive any potential deployment of these new technologies.
References:
Legislation of Informatics
Security Systems
Module 13 – Legislation of
Informatics Security Systems
Software products are in fact programs written in different
programming languages, using special analysis, design,
programming and testing techniques, and used to solve a
problem from a certain class of problems.
A program is defined as a logical succession of instructions. In fact,
what must be defined first is another related entity: the procedure.
A procedure is an intellectual construct created to perform a
certain operation on a set of data. There are for example procedures for:
data introduction, data validation, result viewing, performing
miscellaneous calculations, file sorting, homogenous array manipulation,
data searching, data compression, data encryption, matrix operations,
polynomial operations, linear optimization, discrete optimization, sound,
video and image manipulation etc.
Procedures are organized into libraries, which are reusable due to
their high degree of generality and to the correctness guaranteed by
an ample testing process.
A procedure is a well-delimited sequence of instructions used to
obtain a rigorously specified data manipulation operation. There are
procedures for sorting arrays, inverting matrices, solving an equation,
creating a file, generating a structure diagram etc. A procedure consists
of defining the input data, the sequence of instructions for manipulating
the data, and establishing the final results.
The input data, the final results and the variables describing how
the processing went are contained in the formal parameter list of the
procedure.
With the above concepts we can define a program as a construct of
procedures which call or are called by other procedures. The diversity
of ways in which procedures can be employed and called leads to the
following program structures:
linear structure, in which the procedures call one another in
sequence;
tree structure, in which procedures are located on different
levels and are called from the upper level to the lower level;
network structure, in which procedures can be called from any
point, no matter their level.
results, without the necessity to resume the execution from an
intermediary checkpoint.
Equally, the object code resulting from the compilation of a
program, as well as the executable programs themselves, is represented
in binary form on the storage media, whatever that may be.
Executable programs are launched and perform processing on
information from databases, obtaining results which are either shown on a
computer screen, printed or stored in databases.
Programs in text form are the result of the activity of programmers
and pass through different stages, beginning with the initial forms, which
include errors, and continuing with more elaborate forms from which the
errors have been eliminated. After compilation, the programs take forms
closer to their final executable forms.
Procedures form collections called subprogram libraries meant to
provide solutions to distinct problems. There are function libraries for
mathematical calculations, graphical function libraries, compression
subprogram libraries, sorting subprogram libraries etc.
Operating systems, database management systems, production
management systems, multimedia development tools, security system
implementations etc. are complex programs, with a very high degree of
generality and reliability, which function independently of the producer.
Huge sums of money have been spent on their development.
A less complex program is created by a single programmer. A system
of programs, however, calls for an important volume of work and is
realized by complex teams which include analysts, designers,
programmers, testers and implementation and usage assistants.
The first step in the realization of a software product is defining the
specifications, which include the complete description of the processing
demands, the input data, the output results, the models, equations,
inequalities and selection criteria.
Based on the specifications, the designers create the project,
which includes diagrams, resources and relations.
The project is the basis on which the program is written. The
software product is the result of the work of an entire team and is a
product of the intellectual activity of a group of people.
The collective author of the software product spans several
professions: experienced persons with different competences who
take part in different activities.
The software product includes:
instructions written by programmers
procedure calls from included libraries
sequences generated as a result of commands
The software product is the result of an intellectual creation activity
because:
data structures are chosen based on the algorithm
control structures and processing instructions are associated to
each step in the algorithm
the programmer attempts to optimize the source code
the right to demand that the integrity of the creation be
maintained and to oppose any change as well as any damage
to the creation if it affects his/her honor or reputation
the right to retract the creation, paying, if necessary, the
persons holding rights of use who have suffered damages
through the retraction
The property right over software products is the patrimonial component
of copyright or, more precisely, of the rights related to the quality of
author.
According to article 12 of the law, “The author of a creation has the
exclusive patrimonial right to decide if, in what way and when his/her
creation will be used, including the right to consent to the use of his
creation by others.”
The property of an intellectual creation gives the right to decide
upon a series of possible operations on the creation, of which the most
important is copying.
Among the rights the law explicitly mentions in the case of computer
programs are the exclusive rights to both perform and authorize:
permanent or temporary reproduction of a program, in its
integrity or partially, by any means and under any form,
including the case in which reproduction is determined by
installation, storage, running or execution, display or network
transfer
translation, adaptation, formatting or any other transformations
to a computer program, as well as reproduction of the result of
these operations, without causing damage to the rights of the
person who transforms the computer program
distribution or leasing of the original or the copies of a
computer program under any form
In the case of computer programs, “the patrimonial rights of
computer programs created by one or more employees during the
exercise of their work attributions or under the instructions of the
employer belong to the employer”. This provision is different from those
referring to other creations, for which the patrimonial rights belong to the
employee.
For the patrimonial component of copyright, the legal presumption
operates in favor of the employer, not of the actual creator.
Patrimonial rights have two special characteristics:
they have a temporary existence, only for the duration of the
author’s life and a fixed period afterwards
they are transmissible in their entirety or partially, unlike the
moral rights
The right to use a software product is transmitted by the holder of
the patrimonial rights by way of a license contract. According
to article 75 of the law, in the absence of a contrary contractual
provision, through the license contract:
the user is granted the non-exclusive right to use the computer
program
the user cannot transmit to another person the right to use the
software program
Through the license contract only the right of use is transmitted,
not the author's patrimonial rights and, more importantly, not the right
to reproduce the program. There is, of course, a notable exception: the
user's right to make a personal safety or archive copy, but this copy has
the role of increasing the safety of usage, not of distribution.
However, there are a series of operations which do not require
the authorization of the author. The right to perform these operations is
transmitted through the license contract, but only if they are necessary
for the use of the computer program according to its purpose, including
the correction of errors:
permanent or temporary reproduction of a program, in its
integrity or partially, by any means and under any form,
including the case in which reproduction is determined by
installation, storage, running or execution, display or network
transfer
translation, adaptation, formatting or any other transformations
to a computer program, as well as reproduction of the result of
these operations, without causing damage to the rights of the
person who transforms the computer program
This right has, however, a series of limitations, in the sense that the
obtained information:
cannot be used for other purposes than realizing
interoperability on the independently created computer
program
cannot be communicated to other persons, apart from the case
where communication is necessary for the interoperability on
the independently created computer program
cannot be used for the finalization, production or sale of a
computer program which is fundamentally similar, or for any
other act which prejudices the rights of the author
The user can also, without the authorization of the author, analyze,
study or test the operation of the program in order to determine the
ideas and principles underlying any of its elements, during the
installation, display, running, execution, transmission or storage of the
program, operations which he has the right to perform.
The informational society assumes rigorous management of all
resources, be they equipment, software or databases. Those who
pay for the development of software, the acquisition of computers and
Internet access have to be aware that they are making an investment.
The cost of this investment has to be recovered through the positive
effects generated by the use of the programs and by the extraction, from
the Internet and digital libraries, of information useful in the
decision-making process.
The tendency toward accelerated obsolescence, sometimes
through the simple lack of use of the assets that form the investment
(software and equipment), can make it impossible to recover the
costs of the investment. For this problem there are models in investment
theory that estimate the losses generated by lack of use and by
obsolescence, factors that act simultaneously.
Rigorous flows are defined to assure the efficiency of software
production and a specific legislation is employed. The most important
legislative initiatives have been made in the direction of software piracy
prevention, but are not limited to this scope.
It is extremely important that there be a constant preoccupation with
realizing informatics systems, computer applications, multimedia
products etc. within an adequate juridical framework.
In the event that a software product has a high degree of
generality, it is to the advantage of the beneficiaries to purchase the
right to use it for a limited period of time and for a limited number of
users.
If the software product is dedicated and its cost is significantly
larger, the owner has to ensure that the program will not be resold, with
small modifications, by the author. On sale, the author accepts a series
of restrictions in order not to cause any damage to the owners and users.
In the informational society there is a balance between authors,
owners and legal users.
When an organization has to address a problem solvable through
the implementation of a software product, the organization has to decide
whether it wishes to become the owner of the application or whether the
application should remain the property of the author (the software
development company).
If the organization chooses to become the owner of the application,
the advantage is the position of sole user and owner at the same time, a
position which will force anyone who wishes to use the application to pay
the organization a fee. On the other hand, the maintenance costs are no
longer distributed among many users and can reach exaggerated sums of
money, and the process of bug discovery through use in a real-life
production environment will take much more time.
The alternative is that the organization remains a simple user while
the ownership of the application remains the producer’s. In this case, the
organization pays less for the use of the product and the existence of a
large number of users leads to faster discovery and fixing of bugs, but for
modifications in the program the organization remains dependent on the
producer, which in turn leads to a rise in costs. The producer can even
decide that he is no longer willing to modify the application to adapt it to
the needs of the users.
In the context of the automation process, ways of increasing
software quality through protecting the producers have to be identified,
in order for them to:
stimulate their employees
buy hardware
buy process management instruments
increase the complexity of the applications
apply quality management procedures.
Software piracy means the infringement of the rights which form the
content of copyright in the case of computer programs. Of course,
this especially refers to the usage of computer programs. One has to
notice that software producers are interested in obtaining maximum
sales, and the most effective way to make the performance of their
products known, and thus increase the number of buyers, is to offer
storage media (CDs, DVDs etc.) with, or to allow downloads through the
Internet of, versions of their product with limited usage rights (be it
through limiting the functionality of the program or the period during
which it can be used). Going outside the boundaries of this special license
also constitutes copyright infringement, although this case is not explicitly
covered in Romanian legislation. For example, at the end of a 30-day trial
period the user loses the right to use the product; if, although he
decides not to buy the product, he continues to use it, he infringes upon
the copyright of the author.
Certainly, the producer's interest in convincing users to try
the product during a test period is more complex, yielding various
advantages:
a possibility to increase the client base
acquainting the user with the product
testing the product without supplementary costs, through the
fact that the temporary users send error messages to the
producer
Through Government Ordinance 25/2006, a national register is
created. According to article 17, both authorized natural persons and
legal persons have the obligation to register if they carry out on the
territory of Romania any of the following activities:
software development
software import
software distribution
software leasing
software sale
1.7 Conclusions
The regulation of copyright and its components in Romanian
legislation is for now incomplete, leaving out some problems with
which Romanian society has not yet been confronted, but which will
appear in time. In the context of the evolution towards a digital society, it
is imperative that the legislation that offers the framework for the
development of such a society adapt to its dynamic needs.
Equally, an ever-increasing involvement of the state is necessary,
both for the creation of a legal framework and for the mobilization of
resources in order to acquire and distribute licenses to its subordinate
institutions.
This article cannot exhaustively present the proposed problems,
and the authors hope that it will be followed by a series of articles which
will expand on the various aspects that should be treated in greater detail.
References:
IT&C Security Companies Section
Module 14 – IT&C Security
Companies Section
The latest technologies are used not only in business, considered the
most dynamic field of activity, but also in governmental strategies of
interaction with civil society and the business environment. The
governmental functional and technological requirements created new
concepts and approaches to electronic business (G2G, G2B and G2C).
Once the trend was set, many governments were keen to
adhere to it. From e-Procurement solutions to e-Administration and new
electronic services meant to enhance communication and interaction with
society, they began to define requirements, to interconnect systems, to
ask for standards and to get the industry interested in this process.
In Romania, the Ministry of Communications and Information
Technology was the trend setter. It developed a large number of proofs of
concept and pilot projects covering a wide area of services that needed to
evolve electronically. Together with the Ministry of Public Finance, it
defined the requirements and specifications for DECWEB, a G2B system
that allows companies to submit their fiscal statements via Internet.
Goals:
Transparency and efficiency for the activities of the Ministry of
Public Finance
Efficient and standardized work procedures
Decreasing public expenses and bureaucracy
Assurance of a very high security and trust environment for
performance of public funds administration activities
Automatic and faster submission, taking over, processing and
interaction between Companies and Fiscal Administration office
Elimination of fraud and subjectivism
Encouraging the introduction of IT and of integrated finance
and accounting applications at Company level, independent of
company size
One major threat to balance sheet submission operations is
the risk of processing malformed information, for which
there is no guarantee of authenticity and integrity. This is a
major problem from a legal point of view:
if the Company is not able to prove the information's authenticity
and integrity, it may face charges caused by the submission
of forged documents
if the MoPF does not process the correct data, it may begin
legal actions against Companies without sound pieces of
evidence; this may lead to important image and trust damages
for both the Ministry and the Company
DECWEB is an integrated system, designed with a built-in high
availability architecture and ensuring a high level of system security,
the system being a mixture of proprietary and open-source
technologies.
The project consisted of:
• Definition of a standard needed to set up secured communications
and information transmission between fiscal administration offices
and the companies/tax-payers
• Setting up a secured portal for the access of companies to the
Ministry of Public Finance application and for the Financial
Inspectors, to access the data submitted by the companies
• Implementation of a Certification Authority within the Ministry of
Public Finance to support authentication based on digital
certificates and advanced encryption
• Setting up assistance software for the generation, verification
and electronic signing of balance sheets
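The sign-and-verify step such an assistance tool performs can be sketched with the standard java.security API. The algorithm (SHA256withRSA) and key size below are plausible assumptions, not taken from the DECWEB specification.

```java
import java.nio.charset.StandardCharsets;
import java.security.GeneralSecurityException;
import java.security.KeyPair;
import java.security.KeyPairGenerator;
import java.security.PrivateKey;
import java.security.PublicKey;
import java.security.Signature;

public class BalanceSheetSigning {

    static KeyPair newKeyPair() {
        try {
            KeyPairGenerator g = KeyPairGenerator.getInstance("RSA");
            g.initialize(2048);
            return g.generateKeyPair();
        } catch (GeneralSecurityException e) {
            throw new IllegalStateException(e);
        }
    }

    // Sign the serialized balance sheet with the company's private key.
    static byte[] sign(PrivateKey key, byte[] document) {
        try {
            Signature s = Signature.getInstance("SHA256withRSA");
            s.initSign(key);
            s.update(document);
            return s.sign();
        } catch (GeneralSecurityException e) {
            throw new IllegalStateException(e);
        }
    }

    // The receiving side checks the signature with the certified public key.
    static boolean verify(PublicKey key, byte[] document, byte[] signature) {
        try {
            Signature s = Signature.getInstance("SHA256withRSA");
            s.initVerify(key);
            s.update(document);
            return s.verify(signature);
        } catch (GeneralSecurityException e) {
            throw new IllegalStateException(e);
        }
    }

    public static void main(String[] args) {
        KeyPair pair = newKeyPair();
        byte[] sheet = "balance sheet, Q4".getBytes(StandardCharsets.UTF_8);
        byte[] sig = sign(pair.getPrivate(), sheet);
        System.out.println(verify(pair.getPublic(), sheet, sig));  // prints true
        sheet[0] ^= 1;  // any tampering breaks integrity
        System.out.println(verify(pair.getPublic(), sheet, sig));  // prints false
    }
}
```

The signature simultaneously gives authenticity (only the holder of the private key could have produced it), integrity (any change invalidates it) and non-repudiation, which are exactly the guarantees the submission flow requires.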
(paper-based and electronically) and to provide a graphical interface for
the electronic forms identical with the image of the printed ones.
An intensive user awareness program was initiated to explain the
differences between the systems, the benefits of the electronic techniques
and the basic knowledge about electronic signatures and information
protection.
Technical headlines of the system:
High availability and scalability, to face heavy load periods
before submission deadlines
Advanced security characteristics which ensure confidentiality,
authenticity, integrity and non-repudiation
Open solution, flexible, scalable, and able to interface with
other systems.
System architecture
User authentication
Fig. 14.2. Balance sheet submission flow
signing certificate and its correspondence with the certificate of the
logged-in Company, and then stores the data into the database of the
Fiscal Administration with which the Company is affiliated. All the actions
are logged and, at the end of the submission process, the Company
receives a receipt as proof of successful operation or a notification of the
transmission errors.
Once a balance sheet is submitted, the Financial Inspector can
verify the documents (analyze them against other submitted documents,
the Company's financial history, etc.) and notify the Company if more
details or supplementary information are needed, or if the balance sheets
need to be submitted again.
The Companies can log in anytime into the portal and check the
status of their documents (submitted, reviewed or not reviewed by a
Financial Inspector) and the messages from the inspectors.
The project represents a turning point for the MoPF, as it replaces the
paper-based way of conducting activities with a state-of-the-art
system that allows the organization to decrease its manned front
office operations and to focus better on the core business of the
Ministry.
The main impact of DECWEB was on the human element of the
system. As everything within the balance sheet submission process
changed, a sound system was needed, able to gain the users' confidence
that no data is lost, operations are faster and more reliable, and the
integrity and confidentiality of information are preserved, all with less
effort and lower costs.
At a glance, the advantages of using the DECWEB system are:
The Ministry of Public Finance's work patterns are preserved.
The activity of companies is simplified.
The operation has the same legal value as the classic
paper-document system.
PKI architecture: the certSAFE Certification Authority, offering,
besides digital certificate management, the possibility to
further implement key recovery, time-stamping and online
certificate status and validation modules. It was provided with
dedicated hardware security modules (HSM) for key
management, and with policies, practices and procedures
tailored to DECWEB and the needs of the Ministry of Finance.
Authentication and authorization system: gateSAFE, a versatile
and feature-rich identity management product for user log-in
to the portal, covering the following functionalities: PKI-enabling
of legacy applications, secure connections over insecure lines
(Internet/Intranet), SSO, SSL acceleration, OSI layer 7 load
balancing, accounting and virtual web servers, web services
security, and identity control.
Mechanisms offering IT security controls for information
protection on the user side: an offline application to fill in,
check and digitally sign the balance sheets.
User training, documentation and security awareness programs.
User training and support was a key element for the success of the
project, as the users needed to be able to understand the
characteristics and functionalities of the new system. As they became
aware of the new mechanisms, they grew confident in using DECWEB.
While in the beginning the electronic signature was, for many users,
nothing more than a scanned hand signature pasted into a document,
after a short time the Companies began to ask the MoFP whether they
could sign e-mails and contracts using the digital certificates provided
for DECWEB.
DECWEB is a modern system, with an open architecture, able to support
the development of new functionalities and modules.
As a consequence of DECWEB's success, and upon the request of the
companies, the Ministry of Public Finance decided that the system should
be developed and extended year by year, and it became a de facto
standard when the Ministry set the requirements for new system
implementations.
1.4 Outlook
One of the first steps forward for DECWEB was the integration within
the National Electronic System, a unified access point to, and
electronic service provider for, G2B and G2C activities.
The project needs to be further developed to keep pace with
evolving technologies. There are two approaches to achieve this:
introducing new services within DECWEB, or creating a new and more
complex system that includes and extends DECWEB's functionality. The
evolution of DECWEB must be analyzed both from the point of view of the
technology and from the point of view of the services it offers to its
clients and requires from third parties.
The elements needed to keep DECWEB at the forefront of
Romanian governmental online services should include:
time stamp mechanisms to enhance the benefits of digital
signatures
certificate validation services for the MoPF Certification
Authority and open standards requirements for third parties to
be able to offer such services when issuing digital certificates to
DECWEB users
electronic series system to assign a unique serial number to
each submitted document, as required by the legal framework
electronic notifications mechanisms to offer a reliable and
secure way to exchange electronic messages between
Companies and MoPF
electronic forms workshop to allow MoPF to design any type of
forms and other documents that will be filled in and submitted
by taxpayers
electronic payment system, to allow Companies to pay their
fiscal debts on-line
enhanced receipts issuance system, using digital signature and
time stamp mechanisms
archiving system to store the submitted documents for the
period of time required by the legal framework
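Two of the items above — the electronic series (a unique serial number for each submitted document) and the enhanced receipt that binds a cryptographic digest of the document — can be illustrated with a small sketch. The class, receipt format and `DECWEB-` prefix are assumptions made for illustration, not the system's real API:

```java
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.util.concurrent.atomic.AtomicLong;

// Hypothetical sketch: assign a legally required unique serial number to
// each submission and compute the SHA-256 digest a receipt would reference.
public class ReceiptSketch {
    // Monotonic counter standing in for the electronic series service.
    private static final AtomicLong series = new AtomicLong(1);

    static String receiptFor(byte[] document) throws Exception {
        long serial = series.getAndIncrement();          // unique per document
        MessageDigest sha = MessageDigest.getInstance("SHA-256");
        byte[] digest = sha.digest(document);            // document fingerprint
        StringBuilder hex = new StringBuilder();
        for (byte b : digest) hex.append(String.format("%02x", b));
        return "DECWEB-" + serial + ":" + hex;           // serial + digest
    }

    public static void main(String[] args) throws Exception {
        System.out.println(receiptFor("balance sheet 2005".getBytes(StandardCharsets.UTF_8)));
        System.out.println(receiptFor("balance sheet 2006".getBytes(StandardCharsets.UTF_8)));
    }
}
```

In a production design the serial would come from a durable sequence (e.g. a database), and the receipt itself would additionally carry a digital signature and an RFC 3161-style time stamp, as the list above suggests.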
Fig. 14.3. Electronic notifications mechanism
References:
[IVAN02] Ion Ivan, Paul Pocatilu, Marius Popa, Cristian Toma, “The
Digital Signature and Data Security in e-commerce”, The
Economic Informatics Review Nr. 3/2002, Bucharest 2002.
[POCA04] Paul Pocatilu, Cristian Toma, “Securing Mobile Commerce
Applications”, communication in “The Central and East
European Conference in Business Information Systems”,
“Babeş-Bolyai” University, Cluj-Napoca, May 2004
[TOMA05] Cristian Toma, “Secure architecture used in systems of
distributed applications”, The 7th International
Conference on Informatics in Economy, Academy of
Economic Studies Bucharest, Editura Economica-INFOREC,
Bucharest, May 2005, pp. 1132-1138
Bibliography
[COCU05] George-Alexandru COCU – Codul de etica al
administratorului de baze de date, project presented
at the course Ethics Codes in Informatics, within the
master program Informatics Security, the Academy of
Economic Studies, Bucharest, November 2005
[COMA05] George COMANESCU – Codul de etica al proiectantului
HMI (Human-Machine Interface), project presented at
the course Ethics Codes in Informatics, within the
master program Informatics Security, the Academy of
Economic Studies, Bucharest, November 2005
[COUL01] George Coulouris, Jean Dollimore, Tim Kindberg,
"Distributed Systems: Concepts and Design (Third
Edition)", Addison-Wesley, 2001
[DONA01] Donal O’Mahony, Michael Peirce, Hitesh Tewari,
“Electronic Payment Systems for E-Commerce”, Artech
House, 2001
[EDDO98] Guy Eddon, Henry Eddon "Inside Distributed COM",
Microsoft Press, 1998
[ENRI03a] C. Enrique Ortiz, May 2003, on-line article:
http://developers.sun.com/techtopics/mobility/javacard/ar
ticles/javacard1/
[ENRI03b] C. Enrique Ortiz, September 2003, on-line article:
http://developers.sun.com/techtopics/mobility/javacard/ar
ticles/javacard2/
[FAQT00] “Frequently Asked Questions about Today’s
Cryptography”, RSA Labs, 2000.
[FIPS46] Federal Information Processing Standards Publication 46-
2: http://www.itl.nist.gov/fipspubs/fip46-2.htm
[FORD01] Ford W., Secure Electronic Commerce, Prentice Hall, 2001
[FLUH01] Scott Fluhrer, Itsik Mantin, and Adi Shamir, “Weaknesses
in the Key Scheduling Algorithm of RC4”, Eighth Annual
Workshop on Selected Areas in Cryptography,
August 2001
[HALE00] John Hale, “Research Advances in Database and
Information Systems Security”, Springer Publishing House
2000.
[HOUS01] Russ Housley, Tim Polk "Planning for PKI (Best Practices
Guide for Deploying Public Key Infrastructure)" Wiley,
2001
[IBMD05] IBM developerWorks:
http://www.ibm.com/developerworks
[ISOI04] http://www.ttfn.net/techno/smartcards/iso7816_4.html
[IVAN03] Ion Ivan, Cristian Toma, “Requirements for building
distributed informatics applications”, The Automatics and
Computer Science Romanian Magazine, vol. 13, No. 4,
November 2003
[IVAN03a] Ion Ivan, Paul Pocatilu, Marius Popa, Cristian Toma, “The
reliability of m-applications based on transactions”, The
Automatics and Computer Science Romanian Magazine,
vol. 13, No. 2, September 2003
[IVAN02] Ion Ivan, Paul Pocatilu, Marius Popa, Cristian Toma, “The
Digital Signature and Data Security in e-commerce”, The
Economic Informatics Review Nr. 3/2002, Bucharest 2002.
[IVAN01] Ion Ivan, Paul Pocatilu, Cristian Toma, Alexandru Leau,
“e3-com”, Informatică Economică Nr. 3(19)/2001,
Bucuresti 2001
[IVAN01a] Ion IVAN, Laurentiu TEODORESCU - Managementul
calitatii software, Editura Inforec, Bucharest 2001
[IVAN99a] Ion IVAN, Laurentiu TEODORESCU – Software Quality
Management, INFOREC Publishing House, Bucharest 1999
[IVAN99b] Ion IVAN, Mihai POPESCU, Panagiotis SINIOROS, Felix
SIMION – Software Metrics, INFOREC Publishing House,
Bucharest 1999
[JCDT04] Tools Sun Java Card Development Toolkit 2.2.1:
http://java.sun.com/products/javacard/index.jsp
[JCDT04a] Sun Java Card Development Toolkit 2.2.1:
http://java.sun.com/products/javacard/dev_kit.html
[J2SE04] Java 2 Standard Edition Software Development Kit:
http://java.sun.com/products/archive/j2se/1.4.1_07/
[JCVM03] Virtual Machine Specification 2.2.1, October 2003:
http://java.sun.com/products/javacard/specs.html
[JCRE03] Runtime Environment Specification 2.2.1, October 2003:
http://java.sun.com/products/javacard/specs.html
[JCAP03] Application Programming Specification 2.2.1, October
2003: http://java.sun.com/products/javacard/specs.html
[JINI03] Jini Technology Internet Resources
http://www.sun.com/jini/
http://www.jini.org/
[JINI99] Arnold, O’Sullivan, Scheifler, Waldo, Wollrath, “The Jini
Specification”, Addison Wesley, 1999
[KANT01] T. Kanter, “Adaptive Personal Mobile Communication,
Service Architecture and Protocols”, Doctoral Dissertation,
Department of Microelectronics and Information
Technology, Royal Institute of Technology (KTH),
November 2001
[KAUF02] Charlie Kaufman, “Network Security, 2/E”, Prentice Hall,
2002.
[KNOW04] http://www.gantthead.com/departments/departmentPage.
cfm?ID=3
[KNOX04] David Knox, “Effective Oracle Database 10g Security by
Design”, McGraw-Hill Professional Publishing House, 2004
[LAWC96] The Romanian law no. 8/1996 about copyrights and
collateral rights.
[LUCA03] Gheorghe-Iulian LUCACI, coord. Ion IVAN – Principii ale
eticii profesionale in dezvoltarea proiectelor informatice,
final project for the master program INFORMATIZED
MANAGEMENT OF PROJECTS, the Academy of Economic
Studies, Bucharest, March 2003
[MAHO01] O’Mahony D., “Electronic Payment Systems for E-
Commerce”, Artech House, 2001
[MANU03] Programming Manual, October 2003, Application
Programming Notes 2.2.1, included in [JCDT04a]
[MANU03a] User Manual, October 2003, Development Kit User Guide
2.2.1, included in [JCDT04a]
[MIRO01] Mihaela Miroiu, Gabriela Blebea Nicolae – Introducere in
etica profesionala, Editura Trei, Bucharest 2001
[MOWB97] T.J.Mowbray, W.A.Ruh "Inside CORBA. Distributed Object
Standards and Applications" Addison Wesley 1997
[MSDN06] Microsoft Developer Network: http://msdn.microsoft.com
[NEDE05] Alexandru Stefan NEDELCU – Codul de etica al
consultantului de securitate, project presented at the
course Ethics Codes in Informatics, within the master
program Informatics Security, the Academy of Economic
Studies, Bucharest, November 2005
[OASI02] OASIS. “Assertions and Protocol for the OASIS Security
Assertion Markup Language” http://www.oasis-open.org ,
31 May 2002
[ORDI06] The ordinance no. 25/2006 from 26/01/2006 for proper
administration of ORDA - Romanian Office for Author’s
rights
[ORFA97] R.Orfali, D.Harkey "Client/Server Programming with Java
and CORBA", John Wiley 1997
[PACK04] http://www.gantthead.com/departments/departmentPage.
cfm?ID=4
[PATR05] Victor Valeriu Patriciu, Ion Bica, Monica Ene-Pietroseanu,
Priescu I., "Semnatura electronica si securitatea
informatica", Publishing House All, Bucharest 2005.
[PATR01] Victor Valeriu Patriciu, Ion Bica, Monica Ene-Pietroseanu,
“Securitatea Comerțului Electronic”, Publishing House
ALL, Bucharest 2001
[PATR99] Patriciu V., Patriciu S., Vasiu I., "Internet-ul şi dreptul",
Publishing House All, Bucharest 1999.
[PATR98] Victor Valeriu Patriciu, Bica Ion, Monica Ene-Pietroseanu,
“Securitatea Informatica în UNIX şi Internet”, Publishing
House Tehnica, Bucharest 1998.
[PATR94] Victor Valeriu Patriciu, “Criptografia și securitatea rețelelor
de calculatoare cu aplicații în C și Pascal”, Publishing
House Tehnică, Bucharest 1994.
[PFLE03] Pfleeger, C. - Security in Computing, Ed. Prentice Hall,
2003
[PMBK96] Project Management Institute – www.pmi.org , “PMBOK
Guide – Project Management Body of Knowledge”, 1996.
[POCA03] Paul Pocatilu, Cristian Toma, Mobile Applications Quality,
International Conference “Science and economic education
system role in development from Republic of Moldavia”,
Chişinău, September 2003, pg. 474-478
[POCA04a] Paul Pocatilu, Cristian Toma, “Securing Mobile Commerce
Applications”, communication in – “The Central and East
European Conference in Business Information Systems”,
“Babeş-Bolyai” University, Cluj-Napoca, May 2004.
[RHOU00] Housley R., Planning for PKI, John Wiley, 2000.
[RFC132] Request for Comments 1321: http://rfc.net/rfc1321.html
[RMIJ03] RMI Technology
http://java.sun.com/products/jdk/rmi/
[SALT75] Saltzer, J. – The Protection of Information in Computer
Systems, Proceedings of IEEE, 1975
[SCHN96] Bruce Schneier, “Applied Cryptography 2nd Edition:
protocols, algorithms, and source code in C”, John Wiley &
Sons, Inc. Publishing House, New York 1996.
[SCTI98] Scott Guthery, Tim Jurgensen, “Smart Card Developer’s
Kit”, Macmillan Computer Publishing House, ISBN:
1578700272, USA 1998:
http://unix.be.eu.org/docs/smart-card-developer-kit/ewtoc.html
[SECE99] Software Engineering Code of Ethics and Professional
Practice, http://www.acm.org/serving/se/code.htm
[SIEG00] Jon Siegel (ed) "CORBA 3. Fundamentals and
Programming", OMG Press, John Wiley & Sons, 2000
[STAL99] William Stallings, “Cryptography and Network Security”,
Prentice Hall, 1999.
[STAL03] William Stallings, “Cryptography and Network Security,
3/E”, Prentice Hall, 2003.
[STIN02] Douglas Stinson, “Cryptography – Theory and Practice” 2nd
Edition, Chapman & Hall/Crc Publishing House, New York
2002.
[STOI05] Dragos Mihai STOIAN – Codul de etica al managerului de
proiecte IT, project presented at the course Ethics
Codes in Informatics, within the master program
Informatics Security, the Academy of Economic Studies,
Bucharest, November 2005
[TANE02] A.S. Tanenbaum, M. van Steen "Distributed Systems.
Principles and paradigms", Prentice Hall 2002
[TANE01] Tanenbaum, A. – Modern Operating Systems 2nd Edition,
Ed. Prentice Hall, 2001
ANNEX 1 – Informatics Master Curricula
ANNEX 2 – Surname Index of Authors
e-mail: ibica71@yahoo.com
2. Constanta BODEA has graduated from the Faculty of Cybernetics,
Statistics and Economic Informatics in 1982, and she is now a
Professor in the Economic Informatics Department. She teaches
project management and artificial intelligence courses and has
important results in research projects.
e-mail: bodea@ase.ro
3. Costin BURDUN is the Manager of the Information Systems Security
Department. He graduated in 2000 from the Computers Faculty of
the Military Technical Academy and he is a PhD student in the
information security field. He has large experience in implementing
public key infrastructures in large governmental and banking
projects.
e-mail: costin.burdun@uti.ro
4. Emil BURTESCU is a PhD Lecturer teaching at the “Constantin
Brancoveanu” University, Pitesti. He graduated from the University
Politehnica of Bucharest, Faculty of Airships, specialization
Equipment and Board Installations, in 1990. He teaches in the
Economic Informatics Department disciplines such as informatics
basics, informatics design and databases. Emil Burtescu has
published over 5 books and over 15 papers in the informatics area.
e-mail: emil_burtescu@yahoo.com
5. Catalin BOJA has graduated from the Faculty of Cybernetics,
Statistics and Economic Informatics, Economic Informatics
specialization, within the Academy of Economic Studies Bucharest,
in 2004. At present, he is a university assistant in the Economic
Informatics Department. He is interested in program optimization,
project management, multimedia systems, assembly viruses and
computational cryptography. He teaches assembly languages, data
structures, object oriented programming, advanced programming
languages and multimedia systems in the Economic Informatics
Department and has published over 2 books and over 30 papers in
conference proceedings.
e-mail: catalin.boja@ie.ase.ro
6. Valentin CRISTEA is a Professor in the Computer Science and
Engineering Department of the University Politehnica of Bucharest.
His main fields of expertise include Distributed Systems,
Communication Protocols, Parallel Algorithms, Computer Network
Software, Advanced Programming Techniques, and Distributed
Computing on the Internet. He has over 30 years of teaching
experience in these domains. Valentin Cristea has published more
than 25 books with central publishing houses, more than 100
specialist articles, and more than 80 technical reports. He is an IT
expert of the World Bank, coordinator of several national and
international IT projects, a member of the program committees of
several IT conferences (IWCC, ISDAS, ICT, etc.), a reviewer for
ACM, and a member of ACM and IEEE. Valentin Cristea is also
Director of the National Center for Information Technology of UPB.
e-mail: valentin@cs.pub.ro
7. Radu CONSTANTINESCU has graduated from the Faculty of
Cybernetics, Statistics and Economic Informatics, Economic
Informatics specialization, within the Academy of Economic Studies
Bucharest, in 2003. At present, he is a university assistant in the
Economic Informatics Department. He is interested in operating
systems, network security, web technologies, and computational
cryptography. He teaches operating systems and web technologies
in the Economic Informatics Department.
e-mail: radu.constantinescu@ie.ase.ro
Ionut FLOREA is an information security analyst at UTI Systems. In
2003 he graduated from the Faculty of Automatic Control and
Computers of the University Politehnica of Bucharest. Since then he
has become a Certified Information Systems Auditor (CISA) and
BS 7799 auditor, and he led the PKI (Public Key Infrastructure)
security applications implementation team before focusing on
presales activities.
e-mail: ionut.florea@uti.ro
8. Mihai IANCIU is the Manager of the IT&C Division in UTI Systems;
he graduated from the Faculty of Electronics and
Telecommunications of the University Politehnica of Bucharest
in 1989.
e-mail: mihai.ianciu@uti.ro
9. Ion IVAN has graduated from the Academy of Economic Studies in
1971. At present, he is a PhD Professor in the Economic Informatics
Department within the Academy of Economic Studies Bucharest. He
is interested in software quality and metrics, data structures and
object oriented programming, IT project management, economic
database models, virtual organizations, mobile applications and
computational cryptography, and he has published over 30 books,
60 papers and over 45 scientific communications.
e-mail: ionivan@ase.ro
ion.ivan@mec.edu.ro
URL: http://www.ionivan.ro
10. Mihaela MUNTEAN has graduated from the Faculty of Computer
Science, “Politehnica” University of Timisoara, in 1986. Currently,
professor Mihaela Muntean is the chair of the Business Information
Systems and Statistics Department at the West University of
Timisoara and an independent IT consultant. She is interested in
information technology and knowledge management; her research
results are published in over 70 papers in indexed reviews and
conference proceedings. She teaches database management
systems, decision support systems and expert systems, bringing
important contributions to the foundation of higher education in
Economic Informatics within the Faculty of Economic Sciences
Timisoara.
e-mail: mihaela.muntean@fse.uvt.ro
11. Floarea NASTASE has graduated from the Bucharest Polytechnical
University in 1976. At present, she is a PhD Professor in the
Economic Informatics Department within the Academy of Economic
Studies Bucharest. She is interested in e-business technologies,
operating systems, e-commerce and e-payment, smart-card
applications, translators and compilers, and computational
cryptography, and she has published over 10 books, 20 papers and
over 20 scientific communications.
e-mail: nastasef@ase.ro
12. Victor Valeriu PATRICIU has graduated from the Timisoara Technical
University in 1975. At present, he is a PhD Professor in the
Computer Department within the Technical Military Academy
Bucharest. He is interested in network security, electronic
signatures and public key infrastructures, e-commerce and
e-payment, operating systems, and computer networks, and he has
published over 10 books, 70 papers and over 80 scientific
communications.
e-mail: victorpatriciu@yahoo.com
13. Marius POPA has graduated from the Faculty of Cybernetics,
Statistics and Economic Informatics, Economic Informatics
specialization, within the Academy of Economic Studies Bucharest,
in 2002. At present, he is a PhD university lecturer in the Economic
Informatics Department. He is interested in program optimization,
project management, informatics audit and computational
cryptography. He teaches object oriented programming, data
structures and advanced programming languages in the Economic
Informatics Department and has published over 3 books and over
40 papers in indexed reviews and conference proceedings.
e-mail: marius.popa@ie.ase.ro
popam2@yahoo.com
14. Andrei TOMA has graduated from the Faculty of Cybernetics,
Statistics and Economic Informatics, Economic Informatics
specialization, within the Academy of Economic Studies Bucharest,
in 2005, and from the Faculty of Law within the University of
Bucharest in 2005. At present, he is taking his LLM in International
and European Law at the University of Amsterdam and conducting
doctoral research at the Academy of Economic Studies. He is
currently interested in IT law as well as computer science issues.
e-mail: hypothetical.andrei@gmail.com
15. Cristian TOMA has graduated from the Faculty of Cybernetics,
Statistics and Economic Informatics, Economic Informatics
specialization, within the Academy of Economic Studies Bucharest,
in 2003. At present, he is a university assistant in the Economic
Informatics Department. He is interested in distributed and parallel
computing, mobile applications, smart card programming,
e-business and e-payment, network security, computer viruses,
secure web technologies and computational cryptography. He
teaches assembly languages, object oriented programming, data
structures and advanced programming languages in the Economic
Informatics Department and has published over 2 books and over
30 papers in indexed reviews and conference proceedings.
e-mail: cristian.toma@ie.ase.ro
cristianvtoma@gmail.ro
ANNEX 3 – Informatics Security Master Contact
Contact Address
e-mail: securitateainformatica@gmail.com
securitateainformatica@ase.ro
URL: http://ism.ase.ro
INDEX

A
AES, 53
ANSI X9.17, 128
ANSI X9.9, 128
ANSI X91, 128
Asymmetric cryptographic systems, 26
authentication, 108, 110

B
bit slice operations, 40
blind signature, 312
blind signatures, 314
block cipher, 53
Block ciphering, 79
Business Intelligence, 171

C
CAT, 131
CBC, 81
certificates, 91, 107
Certification Authorities, 114
Cezar's cipher, 32
Cipher Block Chaining, 81
Community Framework for Electronic Signatures, 100
Computational complexity, 27
confidentiality, 107, 110
Convergence, 222
credentials, 133
Cryptanalysis, 19
Cryptographic Service Messages, 129
cryptographic system, 19
Cryptology, 19
cryptosystem, 19
CSM, 129
Customer Relationship Management, 171

D
Data Encryption Standard, 45
Data mining, 343
Data warehouse, 343
database, 340
DES, 43, 45
Digital Certificates, 114
digital signature, 107, 108
Digital Signature Algorithm, 76
Discreet logarithms, 27
distinguished name, 125
Distributed computing, 222
Distributed Object Model, 226
distributed system, 221
Distributed Transaction Processing, 226
Double ciphering, 85
DSS, 76
DTP, 226
Dual signature, 294

E
eBusiness, 169
E-cash, 310
E-Cash, 314
ECB, 80
eCommerce, 169
E-Commerce, 248
EDI, 126
EESSI, 100
EG cryptographic system, 75
El Gammal, 75
electronic business, 383
Electronic business Data Interchange, 126
Electronic Codebook Encryption, 80
Electronic payments, 251
electronic signature, 91, 108
electronic wallet, 259
encryption modes, 79
Enigma cipher, 23
ePayment, 170
ethics code, 203
ethics codes system, 213
European Electronic Signature Standardization Initiative, 100
eXtensible Markup Language, 326

F
Factorization, 27
Feistel networks, 39
Fish cipher, 23

G
Galois Field, 53
Generic Connection Framework API, 261
GFC, 261

H
homophonic, 35

I
IEEE P1363, 127
IEEE P1363a, 127
IETF, 131
Internet Engineering Task Force, 131
IPSec, 131, 133
ISO/IEC 9798, 127
ISO/IEC 9979, 127
ISO/IEC 7816-4, 262
ITU-T, 125
ITU-T standards, 125

J
Java Card Remote Method Invocation, 261
Java RMI, 233
JCRMI, 261
JSR 177, 261

K
KEKs, 128
Kerberos, 132
Key distribution, 86
Key generating, 86
Key memorization, 86
key scheduling, 40
KKM, 128
knapsack, 71
Knowledge Management, 171

M
master key, 128
MD2, 24
MD4, 24
MD5, 24, 43
Merkle and Hellman, 71
Message Handling System, 126
Message Oriented Middleware, 225
Message Oriented Model, 322
Message-Passing Model, 261
MH, 71
MHS, 126
middleware, 224
MIME, 140
Mobile Agents, 353
MOM, 225
monophase transpositions, 29
multiphase transpositions, 29
Multiple encryptions, 85

N
Network Operating System, 226
NOS, 226

O
Online Purchasing, 170
Online Shopping, 169
OPENPGP, 131

P
Package selection, 171
parallel and distributed systems, 222
Parallel computing, 222
PCBC, 82
PDS, 222
PGP, 138
PKCS, 129
PKI, 113
PKIX, 131
Policy Model, 322
poly alphabetic, 35
Pretty Good Privacy, 138
Prime numbers, 27
Propagation Cipher Block Chaining, 82
public key infrastructure, 91
Public Key Infrastructure, 113
Public-Key Cryptography Standards, 129

Q
QUALIFIED ELECTRONIC SIGNATURE, 104

R
Random number generators, 20
RDA, 225
registration authority, 91
Remote Data Access, 225
Remote Method Invocation, 226
Remote Procedure Call, 227
Remote Procedure Calls, 225
Resource Oriented Model, 322
Return on Investment, 187
RFC 1510, 132
Rijndael, 53
RIPEMD-160, 24
Risk, 184
Risk analysis, 184
Risk management, 184
RMI, 231
RPC, 225
RSA, 73, 107

S
S/MIME, 131, 139
SAML, 333
SATSA, 261
S-box, 39
SCP, 137
Secure Electronic Transaction, 293
secure file copy, 137
Security Assertion Markup Language, 333
Security plan, 183
Service Oriented Model, 322
session key, 113
SET, 293, 297
SHA-1, 24
Simple Mail Transfer Protocol, 126
Simple Object Access Protocol, 327
Simple Public-Key Infrastructure, 131
smart card, 260, 288
SMTP, 126
SOAP, 321, 327
SPKI, 131
SSH, 131, 136
SSL, 135
Stream ciphering, 79
stream ciphers, 20
substitution ciphers, 31
substitution key, 33
symmetric cryptographic systems, 25

T
TGS, 133
The knapsack problem, 27
ticket-granting server, 133
Tiger, 24
timestamp service, 91
TLS, 131
transposition ciphers, 29
Triple ciphering, 85

U
UDDI, 327
Universal Description, Discovery, and Integration, 327

V
Vernam cipher, 22
Vigenere cipher, 23
Virtual Private Networks, 133
VPN, 133
Secure RPC .............................................234
Secure Shell ...........................................136
Secure Sockets Layer .........................135
W
Secure/Multipurpose Internet Mail Web services ......................................... 325
Extensions..........................................139 Web Services Architecture ............... 321
Security and Trust Services API .....261
Web Services Description Language ............ 327
Web Services Dynamic Discovery ............... 327
Web Services Inspection Language ............. 327
WS-Discovery ................................... 327
WSDL ....................................... 320, 327
WS-Federation .................................. 333
WS-Inspection .................................. 327
WS-MetadataExchange ........................... 327
WS-Policy ...................................... 327
WS-Secure Conversation Language .............. 333
WS-Security .................................... 332
WS-Security Policy Language ................... 333
WS-Trust Language .............................. 333

X

X.400 .......................................... 126
X.500 directory ................................ 125
X.509 .......................................... 114
X.509 certificates ............................. 125
X.509 v3 ....................................... 114
XML Digital Signature ..................... 119, 131
XML signature .................................. 107
XMLDSIG ........................................ 131
XML-Encryption ................................. 332
XML-Signature .................................. 332